\section{Introduction} Large scale free energy deviations $F\equiv Nf$ of a quenched disordered system from its self-averaged mean $Nf_{\text{typ}}$ are extremely rare events with probability $\sim e^{N\,L(f)}$, with the system size $N \to\infty$, and $L(f)=O(1)<0$ for $f=O(1)\not =f_{\text{typ}}$. Only the case $f<f_{\text{typ}}$ is considered in this paper. As shown in Ref.\ \cite{Crisanti_et_al}, $L(f)$ can be computed by the replica method; namely it is the Legendre transform of the $n$-dependent free energy $\Phi$ of (\ref{Phi}): \[ \beta^{-1}L(f)=-n\Phi+nf\qquad\text{and}\qquad \frac{d}{dn}(n\Phi)=f. \] The replica number $n$ is considered a nonnegative real number corresponding to $f\le f_{\text{typ}}$, see also \cite{Parisi_Rizzo_1,Parisi_Rizzo_2}. In most applications, we have $\Phi-f_{\text{typ}}\sim -n^{a}$ for $n\to 0^{+}$ with some positive integer exponent $a$ characteristic of the model, and also of the region of control parameters (temperature, magnetic field, etc.) considered. Legendre transforming then provides $L(f)\sim -(f_{\text{typ}}-f)^{\frac{a+1}{a}}$. In the most common situations \cite{Crisanti_et_al,Parisi_Rizzo_1,Parisi_Rizzo_2}, $a=1$ and the large scale free energy fluctuations are Gaussian. A very atypical behaviour was found many years ago by Kondor \cite{Kondor} for the truncated version of the Sherrington-Kirkpatrick (SK) model \cite{SK}, i.e.\ for the mean field Ising spin glass, just below the transition temperature. The exponent $a=5$ was later shown to hold in the whole spin glass phase, including zero temperature \cite{Parisi_Rizzo_1,Parisi_Rizzo_2}. The original motivation of the present work was to find out how the large free energy deviations are influenced by including the geometry of a $d$-dimensional hypercubic lattice. For that purpose, we followed the usual steps to generate a field theory convenient for a perturbative calculation whose zeroth order is the mean field (SK) theory. These steps are: \begin{itemize} \item Transformation of the lattice system to a field theory, see Refs.\ \cite{BrMo79,rscikk}. The natural ultraviolet cutoff is then the boundary of the Brillouin zone, and it is $\Lambda=O(1)$ when length is measured in the units of the lattice spacing. \item The momentum space is shrunk such that $T_c-T\sim \tau\ll \Lambda\ll 1$. \item Momentum dependence is truncated up to the quadratic term. \end{itemize} We are then led to the replicated field theory presented in the next section, Eqs.\ (\ref{Z^n}) and (\ref{L}). (Zero external magnetic field is assumed throughout the present paper.) It turns out from the calculation of Sec.\ \ref{IV} that the leading behaviour of the large scale fluctuations changes to Gaussian when first order corrections to the low temperature SK model are taken into account (this has been noticed in Ref.\ \cite{As_Mr}), and the anomalous $n^5$ term is now subleading. It also becomes clear that this Gaussian behaviour follows for any structure of the order parameter as long as replica equivalence is assumed. The glassy phase of the SK model has the ultrametric structure with infinite step replica symmetry breaking \cite{MePaVi} (RSB)\footnote{The acronym RSB is used throughout the paper for the special type of replica symmetry breaking with an infinite number of levels in the ultrametric hierarchy.}. 
For $T<T_c$, this RSB state is reached by a phase transition \cite{Kondor} when $n$ is lowered from around $1$ --- where $\Phi(n)$ is the free energy of the stable replica symmetric (RS) phase with a nonzero order parameter --- at some $n_{\text{AT}}(T)$ having two characteristics: \begin{enumerate} \item The RS phase becomes unstable due to the Almeida-Thouless \cite{AT} instability, i.e.\ the so-called replicon mass $\Gamma_\text{R}$ is zero for $n=n_{\text{AT}}(T)$. \item $\Phi_{\text{RS}}=\Phi_{\text{RSB}}$ at the transition. \end{enumerate} Whether or not this feature of the SK model is perturbatively stable is studied in the second part of the paper, and rather nontrivial calculations lead us to a positive conclusion for $d>8$. The transitional domain of dimensions above the upper critical dimension ($6<d<8$) is, however, more complicated: it is argued that the question cannot be settled by this first order calculation. The outline of the paper is as follows: Sec.\ \ref{II} presents the replicated field theoretical model suitable for the perturbative calculation, whose basic formulae are provided in Sec.\ \ref{III}. The one-loop results for the $n$-dependent free energy are shown in Sec.\ \ref{IV} up to fourth order in the double series of $\tau\sim T_c-T$ and $n$: these are the leading contributions when $d>8$. Nonanalytic temperature dependences become more and more important as dimension 8 is approached, and even more so for $6<d<8$. They are computed for both the RSB and RS schemes in Sec.\ \ref{V}. The line of equal free energies of the two schemes is calculated for $d>8$ in Sec.\ \ref{VI}, whereas we analyze the situation in the transitional regime just above the upper critical dimension, i.e.\ $6<d<8$, in Sec.\ \ref{VII}. One-loop results for the RS replicon mass $\Gamma_\text{R}$ are presented and the AT instability line deduced in Sec.\ \ref{VIII}. Some concluding remarks are left to Sec.\ \ref{IX}. Three appendices contain some computational details. \section{Field theoretic model for the calculation of the free energy} \label{II} In this section, we will define the replicated field theory which is appropriate for the calculation of perturbative corrections to the $n$-dependent free energy $\Phi(n,\beta)$ defined as \begin{equation}\label{Phi} \beta \Phi(n,\beta)\equiv -\frac{1}{nN}\,\ln \overline{Z^n} \end{equation} where $Z$ is the partition function of an Ising spin glass defined on the $d$-dimensional hypercubic lattice consisting of $N$ Ising spins, and the bar denotes averaging over the quenched Gaussian disorder with zero mean and variance $J^2$. The thermodynamic limit $N\to \infty$ is always understood, whereas $n>0$ is small but kept finite. We follow the strategy suggested in Ref.\ \cite{AT2008}: by conveniently choosing the bare parameters, the field theory provides the SK results \cite{Kondor,Parisi_Rizzo_1,Parisi_Rizzo_2} in the tree approximation, i.e.\ when neglecting loops. In zero external magnetic field we can express $\overline{Z^n}$ in the following functional integral form (see Refs.\ \cite{rscikk,droplet}): \begin{equation}\label{Z^n} \overline{Z^n}=C\times \int[d\phi]\,e^{-\mathcal{L}}, \end{equation} with the normalization factor $C$ to be fixed later by choosing it to match the SK result. 
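For orientation, the origin of the replica coupled fields can be recalled schematically (suppressing prefactors and the $\alpha=\beta$ terms, which only shift constants): averaging the replicated Boltzmann factor of a single zero-mean Gaussian bond $J_{ij}$ gives \[ \overline{\exp\Big(\beta J_{ij}\sum_\alpha S_i^\alpha S_j^\alpha\Big)}= \exp\Big(\frac{\beta^2\,\overline{J_{ij}^2}}{2}\sum_{\alpha\beta} S_i^\alpha S_i^\beta\,S_j^\alpha S_j^\beta\Big), \] so that the natural collective variable is the local replica matrix $S_i^\alpha S_i^\beta$; it is decoupled by a Hubbard-Stratonovich transformation into the fields $\phi^{\alpha\beta}_{\mathbf p}$ of Refs.\ \cite{BrMo79,rscikk}, and the gradient expansion of the resulting lattice action, following the steps listed in the Introduction, leads to Eq.\ (\ref{L}).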
The Lagrangian $\mathcal{L}$ has the form \begin{multline}\label{L} \mathcal{L}= \frac{1}{2}\sum_{\mathbf p} \bigg(\frac{1}{2} p^2+m\bigg)\sum_{\alpha\beta} \phi^{\alpha\beta}_{\mathbf p}\phi^{\alpha\beta}_{-\mathbf p} -\frac{1}{6\sqrt{N}}\,\sideset{}{'}\sum_{\mathbf {p_1p_2p_3}} w\sum_{\alpha\beta\gamma}\phi^{\alpha\beta}_{\mathbf p_1} \phi^{\beta\gamma}_{\mathbf p_2}\phi^{\gamma\alpha}_{\mathbf p_3} -\frac{1}{24N}\,\sideset{}{'}\sum_{\mathbf {p_1p_2p_3p_4}}\\[2pt] \bigg(u_1\!\!\sum_{\alpha\beta\gamma\delta}\phi^{\alpha\beta}_{\mathbf p_1} \phi^{\beta\gamma}_{\mathbf p_2}\phi^{\gamma\delta}_{\mathbf p_3} \phi^{\delta\alpha}_{\mathbf p_4}+u_2\!\sum_{\alpha\beta} \phi^{\alpha\beta}_{\mathbf p_1}\phi^{\alpha\beta}_{\mathbf p_2} \phi^{\alpha\beta}_{\mathbf p_3}\phi^{\alpha\beta}_{\mathbf p_4} +u_3\!\sum_{\alpha\beta\gamma}\phi^{\alpha\gamma}_{\mathbf p_1} \phi^{\alpha\gamma}_{\mathbf p_2}\phi^{\beta\gamma}_{\mathbf p_3} \phi^{\beta\gamma}_{\mathbf p_4}+ u_4\!\!\sum_{\alpha\beta\gamma\delta} \phi^{\alpha\beta}_{\mathbf p_1}\phi^{\alpha\beta}_{\mathbf p_2} \phi^{\gamma\delta}_{\mathbf p_3}\phi^{\gamma\delta}_{\mathbf p_4} \bigg)+\dots \end{multline} where the ellipses are for higher order replica symmetric invariants which are consistent with the extra symmetry of the zero magnetic field subspace \cite{droplet,nucl}. In this $n(n-1)/2$ component field theory the fluctuating fields are symmetric in the replica indices with zero diagonals: $\phi^{\alpha\beta}_{\mathbf p}=\phi^{\beta\alpha}_{\mathbf p}$ and $\phi^{\alpha\alpha}_{\mathbf p}=0$, $\alpha$,$\beta=1,\dots,n$. (Momentum conservation is indicated by the primed summations.) The bare mass $m$ depends on temperature as $m=-\frac{k^2}{2J^2}({T_c^{\text{mf}}}^2-T^2)$ where $k$ is Boltzmann's constant, and the mean field critical temperature, $kT_c^{\text{mf}}=J$, differs from the {\em exact\/} one, $T_c$. Introducing the new parameter $\tau\equiv \frac{k^2}{2J^2}(T_c^2-T^2)$ measuring the distance from criticality, the bare mass can be written as $m=m_c-\tau$ with $m_c$ obviously of one-loop order. Besides $\tau$, the couplings $w$, $u_1$, $u_2$, $u_3$, $u_4$, \dots parametrize the Lagrangian. Around the critical point they can be considered as constants; a complete matching with the SK results is achieved by choosing $w=1$, $u_1=3$, $u_2=2$, $u_3=-6$, $u_4=0$ \cite{AT2008}. The spin glass phase below $T_c$ is characterized by the nonzero order parameter $\phi^{\alpha\beta}\equiv \frac{1}{\sqrt{N}} \langle \phi^{\alpha\beta}_{\mathbf p=0} \rangle$ where the average is now taken by the measure proportional to $e^{-\mathcal L}$. It is useful to redefine the fields by the shift $\phi^{\alpha\beta}_{\mathbf p} \longrightarrow \phi^{\alpha\beta}_{\mathbf p}-\sqrt{N}\,\phi^{\alpha\beta}\, \delta_{\mathbf p=0}^{\text{Kr}}$. By this transformation, the new fields continue fluctuating around zero; on the other hand, the Lagrangian has lost the higher symmetry of the paramagnetic phase, resulting in the following generic theory (equally convenient for an RS or RSB ansatz):\footnote{For the sake of simplifying the notations, we will keep the symbols $\phi^{\alpha\beta}_{\mathbf p}$ and $\mathcal L$ for the transformed quantities.} \[\mathcal L=\mathcal L^{(0)}+\mathcal L^{(1)}+\mathcal L^{(2)}+ \mathcal L^{(3)}+\mathcal L^{(4)}+\dots. 
\] The terms above can be worked out using the results of the Appendices B and D of Ref.\ \cite{nucl} providing \begin{align} \frac{1}{N}\mathcal L^{(0)}&=\frac{1}{2}m\sum_{\alpha\beta}{\phi^{\alpha\beta}} ^2 -\frac{1}{6}w\sum_{\alpha\beta\gamma}\phi^{\alpha\beta} \phi^{\beta\gamma}\phi^{\gamma\alpha} -\frac{1}{24}\bigg(u_1\sum_{\alpha\beta\gamma\delta}\phi^{\alpha\beta} \phi^{\beta\gamma}\phi^{\gamma\delta}\phi^{\delta\alpha}+ u_2\sum_{\alpha\beta}{\phi^{\alpha\beta}}^4\notag\\ \phantom{\frac{1}{N}\mathcal L^{(0)}}&\mathrel{\phantom{=}} +u_3\sum_{\alpha\beta\gamma}{\phi^{\alpha\gamma}}^2 {\phi^{\beta\gamma}}^2+u_4\sum_{\alpha\beta\gamma\delta} {\phi^{\alpha\beta}}^2 {\phi^{\gamma\delta}}^2\bigg)+\dots,\label{L0}\\ \frac{1}{\sqrt N}\mathcal L^{(1)}&=\sum_{\alpha\beta}\bigg[m\,\phi^{\alpha\beta} -\frac{1}{2}w\sum_\gamma \phi^{\alpha\gamma}\phi^{\gamma\beta} -\frac{1}{6}\Big(u_1\sum_{\gamma\delta}\phi^{\alpha\gamma}\phi^{\gamma\delta} \phi^{\delta\beta}+u_2\,{\phi^{\alpha\beta}}^3+u_3\,\phi^{\alpha \beta}\sum_\gamma {\phi^{\beta\gamma}}^2\notag\\ \phantom{\frac{1}{\sqrt N}\mathcal L^{(1)}}&\mathrel{\phantom{=}}+u_4\,\phi^{\alpha\beta} \sum_{\gamma\delta}{\phi^{\gamma\delta}}^2\Big)+\dots\bigg] \times\phi^{\alpha\beta}_{\mathbf p=0},\label{L1}\\ \mathcal L^{(2)}&=\frac{1}{2}\sum_{\mathbf p}\Bigg[\sum_{\alpha\beta} \bigg(\frac{1}{2} p^2+m-\frac{1}{2}u_2{\phi^{\alpha\beta}}^2 -\frac{1}{6}u_3\sum_\gamma {\phi^{\beta\gamma}}^2 -\frac{1}{6}u_4\sum_{\gamma\delta}{\phi^{\gamma\delta}}^2+\dots \bigg) \times\phi^{\alpha\beta}_{\mathbf p}\phi^{\alpha\beta}_{-\mathbf p}\notag\\ \phantom{\mathcal L^{(2)}}&\mathrel{\phantom{=}}+\sum_{\alpha\beta\gamma}\bigg( -w\phi^{\alpha\beta}-\frac{1}{3}u_1\sum_{\delta}\phi^{\alpha\delta} \phi^{\delta\beta}-\frac{1}{3}u_3\phi^{\alpha\gamma}\phi^{\gamma\beta} +\dots\bigg) \times\phi^{\alpha\gamma}_{\mathbf p}\phi^{\gamma\beta}_{-\mathbf p} \notag\\\phantom{\mathcal L^{(2)}}&\mathrel{\phantom{=}} +\sum_{\alpha\beta\gamma\delta}\bigg(-\frac{1}{6}u_1\phi^{\alpha\gamma} \phi^{\beta\delta} -\frac{1}{3}u_4\phi^{\alpha\beta}\phi^{\gamma\delta}+\dots \bigg)\times\phi^{\alpha\beta}_{\mathbf p}\phi^{\gamma\delta}_{-\mathbf p} \Bigg], \label{L2}\\ \sqrt N\mathcal L^{(3)}&=-\frac{1}{6} \sideset{}{'}\sum_{\mathbf {p_1p_2p_3}}\bigg[ \sum_{\alpha\beta\gamma}\Big(w+\dots \Big)\times \phi^{\alpha\beta}_{\mathbf p_1} \phi^{\beta\gamma}_{\mathbf p_2}\phi^{\gamma\alpha}_{\mathbf p_3} +\sum_{\alpha\beta}\Big(u_2\phi^{\alpha\beta}+\dots\Big)\times \phi^{\alpha\beta}_{\mathbf p_1}\phi^{\alpha\beta}_{\mathbf p_2} \phi^{\alpha\beta}_{\mathbf p_3} \notag\\ \phantom{\sqrt N\mathcal L^{(3)}}&\mathrel{\phantom{=}} +\sum_{\alpha\beta\gamma}\Big(u_3\phi^{\beta\gamma}+\dots\Big)\times \phi^{\alpha\beta}_{\mathbf p_1}\phi^{\alpha\beta}_{\mathbf p_2} \phi^{\beta\gamma}_{\mathbf p_3}+ \sum_{\alpha\beta\gamma\delta}\Big(u_4\phi^{\gamma\delta}+\dots\Big)\times \phi^{\alpha\beta}_{\mathbf p_1}\phi^{\alpha\beta}_{\mathbf p_2} \phi^{\gamma\delta}_{\mathbf p_3}\notag\\ \phantom{\sqrt N\mathcal L^{(3)}}&\mathrel{\phantom{=}}+ \sum_{\alpha\beta\gamma\delta}\Big(u_1\phi^{\gamma\delta}+\dots\Big)\times \phi^{\alpha\beta}_{\mathbf p_1}\phi^{\alpha\gamma}_{\mathbf p_2} \phi^{\beta\delta}_{\mathbf p_3}+\dots\bigg]. \label{L3} \end{align} $\mathcal L^{(4)}$ has been omitted here, as it keeps its form in Eq.\ (\ref{L}) up to this order, whereas the ellipsis dots indicate the terms higher than quartic. 
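To illustrate how the field shift generates these terms, one may keep only the quadratic and cubic parts of Eq.\ (\ref{L}) and write the original field as the new, zero-mean field plus $\sqrt{N}\,\phi^{\alpha\beta}\,\delta_{\mathbf p=0}^{\text{Kr}}$; momentum conservation then forces the single remaining fluctuation to sit at $\mathbf p=0$, and collecting the terms linear in it (a sketch which ignores the quartic couplings) gives \[ \mathcal L^{(1)}\Big|_{m,w}=\sqrt N\sum_{\alpha\beta}\Big(m\,\phi^{\alpha\beta} -\frac{1}{2}w\sum_\gamma \phi^{\alpha\gamma}\phi^{\gamma\beta}\Big) \,\phi^{\alpha\beta}_{\mathbf p=0}, \] in agreement with the first two terms of Eq.\ (\ref{L1}); the quartic couplings contribute in the same way at the next order in the order parameter.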
\section{Perturbative calculation of the free energy} \label{III} Two steps lead to a systematic perturbative treatment: \begin{itemize} \item Firstly, an interaction Lagrangian is detached, and $\mathcal L^{(2)}$ is identified as the non-interacting part: \begin{equation}\label{Lsep} \mathcal L=\mathcal L^{(0)}+ \frac{1}{2}\sum_{\mathbf p}\,\sum_{(\alpha\beta),(\gamma\delta)} \big(p^2\,\delta^{\text{Kr}}_{\alpha\beta,\gamma\delta}+ \bar{M}_{\alpha\beta,\gamma\delta}\big)\,\phi^{\alpha\beta}_{\mathbf p} \phi^{\gamma\delta}_{-{\mathbf p}}+\mathcal{L}^{\text{I}} \end{equation} where in the second term the bare mass is now represented by the mass operator $\bar{M}_{\alpha\beta,\gamma\delta}$, and the summation $\sum_{(\alpha\beta)}\equiv \sum_{\alpha<\beta}$ is over the $n(n-1)/2$ {\em pairs\/} of replica indices, i.e., over the independent field components. Comparing with Eq.\ (\ref{L2}), the three different types of mass components are simply derived: \begin{align} \bar{M}_{\alpha\beta,\alpha\beta}&=2m-\frac{1}{3}(u_1+3u_2+2u_3+ 4u_4)\,{\phi^{\alpha\beta}}^2-\frac{1}{3}(2u_1+u_3+nu_4)\, \sum_\gamma {\phi^{\beta\gamma}}^2+\dots,\notag\\ \bar{M}_{\alpha\gamma,\beta\gamma}&=-w\,\phi^{\alpha\beta}-\frac{1}{3} (u_1+u_3+4u_4)\,\phi^{\alpha\gamma}\phi^{\beta\gamma}-\frac{1}{3}u_1\, \sum_\delta \phi^{\alpha\delta}\phi^{\beta\delta}+\dots,\label{Mbar}\\ \bar{M}_{\alpha\beta,\gamma\delta}&=-\frac{1}{3}u_1\, (\phi^{\alpha\gamma}\phi^{\beta\delta}+\phi^{\beta\gamma}\phi^{\alpha\delta}) -\frac{4}{3}u_4\,\phi^{\alpha\beta}\phi^{\gamma\delta}+\dots.\notag \end{align} Replica equivalence is a must even when replica symmetry is broken \cite{Parisi04}, and it has been assumed in the above formulas; see the last term of the diagonal component which is in fact independent of $\beta$. The free propagator of this generic RSB field theory is the $n(n-1)/2\times n(n-1)/2$ matrix $\bar G\equiv (p^2+\bar{M})^{-1}$. \item The order parameter $\phi^{\alpha\beta}$ must satisfy the ``equation of state'' $\langle \phi^{\alpha\beta}_{\mathbf p=0} \rangle=0$; up to one-loop order it takes the form \begin{equation}\label{equation_of_state} \bar H_{\alpha\beta}+\frac{1}{2}\sum_{(\gamma\delta),(\mu\nu)} \bar W_{\alpha\beta,\gamma\delta,\mu\nu}\times \frac{1}{N}\sum_{\mathbf p} \bar{G}_{\gamma\delta,\mu\nu}(p)+\text{2-loop terms}=0. 
\end{equation} In perturbative computations, the capital letter representation of the couplings is more natural; the one- and three-point couplings can be deduced by comparing with Eqs.\ (\ref{L1}) and (\ref{L3}): \begin{equation}\label{H} \begin{aligned} -\bar H_{\alpha\beta}&=2\times \bigg[m\,\phi^{\alpha\beta}- \frac{1}{2}w\sum_\gamma \phi^{\alpha\gamma}\phi^{\gamma\beta} -\frac{1}{6}\Big(u_1\sum_{\gamma\delta}\phi^{\alpha\gamma}\phi^{\gamma\delta} \phi^{\delta\beta}+u_2\,{\phi^{\alpha\beta}}^3+u_3\,\phi^{\alpha \beta}\sum_\gamma {\phi^{\beta\gamma}}^2\\ \phantom{\bar H_{\alpha\beta}}&\mathrel{\phantom{=}}+u_4\,\phi^{\alpha\beta} \sum_{\gamma\delta}{\phi^{\gamma\delta}}^2\Big)+\dots\bigg], \end{aligned} \end{equation} and the eight different types of cubic couplings: \begin{align} \bar W_{\alpha\beta,\beta\gamma,\gamma\alpha}&=w+\dots,& \bar W_{\alpha\beta,\alpha\beta,\alpha\beta}&=2 (u_1+u_2+u_3+2u_4) \phi^{\alpha\beta}+\dots,\notag\\ \bar W_{\alpha\beta,\alpha\beta,\beta\gamma}&=\frac{1}{3}(2u_1+u_3+4u_4) \phi^{\beta\gamma}+\dots,& \bar W_{\alpha\beta,\alpha\beta,\gamma\delta}&= \frac{4}{3}u_4\phi^{\gamma\delta}+\dots,\label{Wbar}\\[3pt] \bar W_{\alpha\beta,\alpha\gamma,\beta\delta}&=\frac{1}{3}u_1 \phi^{\gamma\delta}+\dots\notag \end{align} where the dots are for the next --- $O(\phi^2)$ --- order, and the three missing cubic vertices --- $\bar W_{\alpha\beta,\alpha\gamma,\alpha\delta}$, $\bar W_{\alpha\gamma,\beta\gamma,\mu\nu}$, and $\bar W_{\alpha\beta,\gamma\delta,\mu\nu}$ --- enter only at this higher order. \end{itemize} The critical temperature of the field theory, or equivalently $m_c$, can be simply deduced from Eq.\ (\ref{equation_of_state}) by expanding it up to its leading $O(\phi)$ order after setting $\tau=0$. Making use of the large momentum expansion of the free propagator $\bar G$, together with Eqs.\ (\ref{Mbar}) and (\ref{Wbar}), it straightforwardly follows: \begin{equation}\label{mc} m_c=\frac{1}{2}(n-2)w^2\frac{1}{N}\sum_{\mathbf p}\frac{1}{p^4} +\frac{1}{6}\big[(2n-1)u_1+3u_2+(n+1)u_3+(n^2-n+4)u_4\big] \frac{1}{N}\sum_{\mathbf p}\frac{1}{p^2}+ \text{2-loop terms.} \end{equation} The free energy $\ln \overline{Z^n}$ has the following expansion when $\mathcal L^{\text{I}}$ is handled as a perturbation; see (\ref{Z^n}) and (\ref{Lsep}): \begin{equation}\label{lnZ^n} \ln \overline{Z^n}=\ln C-\mathcal L^{(0)}+\ln Z_G-\langle L^{\text{I}}\rangle_G +\frac{1}{2}\Big(\langle {L^{\text{I}}}^2\rangle_G-\langle L^{\text{I}}\rangle_G^2\Big) +\dots, \end{equation} and the Gaussian averages $\langle\dots\rangle_G$ are taken by the measure $e^{-\mathcal L^{(2)}}/Z_G$. As ``tadpole'' diagrams are missing now due to Eq.\ (\ref{equation_of_state}), the only one-loop term is the Gaussian free energy $\ln Z_G$, with $Z_G=\int [d\phi]\,e^{-\mathcal L^{(2)}}$, which has the familiar form in terms of the eigenvalues $\bar \lambda_j$ of the mass operator $\bar M$: \begin{equation}\label{Gaussian} \ln Z_G=-\frac{1}{2}\sum_{\mathbf p}\sum_{j=1}^{n(n-1)/2} \ln \frac{p^2+\bar \lambda_j}{\pi}. 
\end{equation} Using the following identities: \begin{align*} \sum_{\mathbf p}\ln\Big(1+\frac{\bar \lambda_j}{p^2}\Big)&= \ln\Big(1+\frac{\bar \lambda_j}{\Lambda^2}\Big)\times \Big(\sum_{\mathbf p}1\Big)+ \bar \lambda_j\,\frac{2}{d}\,\sum_{\mathbf p}\frac{1}{p^2+\bar \lambda_j}, \\[4pt] \frac{1}{p^2+\bar \lambda_j}&=\frac{1}{p^2}-\frac{\bar\lambda_j}{p^4} +\frac{\bar\lambda_j^2}{p^6} -\frac{\bar\lambda_j^3}{p^6(p^2+\bar \lambda_j)} \end{align*} where $\Lambda$ is the ultraviolet cutoff, and frequently applying the trivial relationship $\sum_{\mathbf p}1=\frac{d-k}{d}\,\Lambda^k\, \sum_{\mathbf p}\frac{1}{p^k}$ with $d>k$, the Gaussian free energy can be arranged into its final form for $d>8$ \begin{equation}\label{lnZG} \begin{split} \ln Z_G &=\frac{n(n-1)}{2}\,\Big(\frac{1}{d}-\frac{1}{2}\ln \frac{\Lambda^2}{\pi} \Big)\times \sum_{\mathbf p}1-\frac{1}{2}\Big(\sum_j \bar\lambda_j\Big) \times \sum_{\mathbf p}\frac{1}{p^2} +\frac{1}{4}\Big(\sum_j \bar\lambda_j^2\Big) \times \sum_{\mathbf p}\frac{1}{p^4}\\[4pt] &-\frac{1}{6}\Big(\sum_j \bar\lambda_j^3\Big) \times \sum_{\mathbf p}\frac{1}{p^6} +\frac{1}{8}\Big(\sum_j \bar\lambda_j^4\Big)\,\,\frac{d-8}{d} \times \sum_{\mathbf p}\frac{1}{p^8}+\frac{1}{d}\sum_j\sum_{\mathbf p} \frac{\bar\lambda_j^4}{p^6(p^2+\bar\lambda_j)}+ O(\bar\lambda_j^5)\times\! \sum_{\mathbf p}1. \end{split} \end{equation} Before displaying the result for the free energy, it must be noticed that the contribution $Nn\frac{1}{2}m_c(\phi^2)^{\alpha\alpha}$ from $\mathcal L^{(0)}$% \footnote{Matrix notations, like $(\phi^2)^{\alpha\alpha}$ here, are frequently used in the following part of the paper.} is exactly cancelled by corresponding terms of $\ln Z_G$; see Eqs.\ (\ref{L0}), (\ref{mc}) and the second terms of the right hand sides of (\ref{TrM1}), (\ref{TrM2}). Moreover, as $\mathcal L^{(0)}$ is stationary at the zero-loop order parameter, we do not need to compute the one-loop correction to $\phi^{\alpha\beta}$. Substituting the traces from App.\ \ref{App1} into Eq.\ (\ref{lnZG}), the replicated free energy in (\ref{lnZ^n}) takes the following form: \begin{multline}\label{main} \frac{1}{nN}\,\ln \overline{Z^n}=\frac{1}{nN}\, \big[\ln \overline{Z^n}\big]^{\text{para}}\\[5pt] +\frac{1}{2}\tau \times (\phi^2)^{\alpha\alpha}+ \frac{1}{6}w\Big\{1-\frac{1}{2}\big[u_1+3u_2+(n-1)u_3+(n^2-n-4)u_4\big] I_4-2(n-2)w^2I_6\Big\} \times (\phi^3)^{\alpha\alpha}\\[5pt] +\frac{1}{24}\Big\{u_1-\frac{1}{3}u_1\big[(2n-3)u_1+ 6u_2+2(n-1)u_3+2(n^2-n-8)u_4\big]I_4- 2w^2\big[(2n-7)u_1-3u_2-(n-1)u_3\\[5pt]-(n^2-n+4)u_4\big]I_6 +3(3n-4)w^4I_8 \Big\}\times (\phi^4)^{\alpha\alpha} +\frac{1}{24}\Big\{u_2+\frac{1}{3} \big[2u_1^2-4(n-2)u_1u_2+3u_2^2 -2(n-5)u_2u_3\\[5pt]-2(n^2-n-8)u_2u_4+2u_3^2\big]I_4- 4w^2\big[-2u_1+(n+1)u_2+2u_3\big]I_6+24w^4I_8 \Big\}\times \sum_{\beta}{\phi^{\alpha\beta}}^4\\[1pt] +\frac{1}{24}\Big\{(u_3+nu_4)+\frac{1}{3} \big[(3n-4)u_1^2+12u_1u_2+8u_1u_3-(n-3)u_3^2 -2(n^2-n-8)u_3u_4 -n(n^2-n-8)u_4^2\big]I_4\\[5pt] -2w^2\big[-(6n-19)u_1-3u_2+(n-5)u_3+(n^2-n+4)u_4\big]I_6+ 3(n-12)w^4I_8 \Big\}\times \left({(\phi^2)}^{\alpha\alpha}\right)^2. \end{multline} Some remarks are appropriate here: \begin{itemize} \item Only replica equivalence was used to derive the above formula which gives the first order --- i.e., one-loop --- correction to the mean field free energy. Any special RSB scheme is included in the ``invariants'' like $(\phi^3)^{\alpha\alpha}$. 
\item For $\phi\equiv 0$, the analytic continuation of the paramagnetic free energy is obtained: \[ \big[\ln \overline{Z^n}\big]^{\text{para}}=\ln C-\frac{n(n-1)}{4} \,\sum_{\mathbf p}^{\Lambda}\,\ln \frac{p^2-2\tau}{\pi}; \] an expansion of this formula gives the $\tau$ terms in $\ln Z_G$, see (\ref{lnZG}) and App.\ \ref{App1}. $n=1$ is the annealed model; the disappearance of the loop corrections expresses the triviality of this case: $\ln \overline{Z}=N\ln 2+\frac{N_B}{z}\,\frac{J^2}{2(kT)^2}$, where the number of interactions $N_B$ can be expressed by the coordination number $z$ as $N_B=N\frac{z}{2}$. This gives for $\ln C$: \[ \frac{1}{nN}\,\ln C=\ln 2+\frac{J^2}{4(kT)^2} \] which agrees with the correct value of the SK model for generic $n$. \item The notation $I_k=\frac{1}{N}\sum_{\mathbf p}^{\Lambda}\,\frac{1}{p^k}$ was used in (\ref{main}) which is a truncated formula in the following sense: \begin{enumerate} \item The nonanalytic term proportional to $\tau^{d/2}$, see the last but one loop integral in (\ref{lnZG}), was neglected, as it is subleading in the region of space dimensions studied here, i.e.\ $d>8$. \item Higher than quartic $\phi$ terms in (\ref{main}) were left out for the same reason. \item We restrict ourselves to the model where the quartic couplings are the highest order ones, i.e.\ we neglect the invariants in (\ref{L}) which are represented there by the dots. A fifth order coupling, for instance, would provide a contribution $\sim I_2\times (\phi^3)^{\alpha\alpha}$ in Eq.\ (\ref{main}). \end{enumerate} \end{itemize} \section{$n$-dependent free energy in special cases} \label{IV} Eq.\ (\ref{main}) is now used to compute $\beta\Delta\Phi\equiv -\Big\{\frac{1}{nN}\,\ln \overline{Z^n}- \frac{1}{nN}\, \big[\ln \overline{Z^n}\big]^{\text{para}}\Big\}$, i.e.\ the shift of the $n$-dependent free energy from the continuation of the paramagnetic one, in some special cases. \subsection{Neglecting loops: the tree approximation} Inserting the expansions from (\ref{invariantsRSB}) into (\ref{main}), and neglecting loop terms, the mean field free energy of the model with nonzero $\tau$, $w$, $u_1$, $u_2$, $u_3$, $u_4$, and assuming infinite step RSB takes the form: \begin{equation}\label{mf_Phi} \beta\Delta\Phi^{\text{mf}}=\frac{1}{6}w^{-2}\tau^3+\Big(\frac{1}{8}u_1 +\frac{1}{24}u_2-\frac{1}{24}u_3\Big)w^{-4}\tau^4+O(\tau^5)-\frac{1}{24}u_4 w^{-4}n\tau^4-\frac{9}{640}u_2^{-3}w^4n^5. \end{equation} Inserting $w=1$, $u_1=3$, $u_2=2$, $u_3=-6$, and $u_4=0$, this gives the free energy of the SK model \[ \beta\Phi^{\text{SK}}=-\Big[\ln 2+\frac{J^2}{4(kT)^2}\Big]+\frac{1}{6}\tau^3+ \frac{17}{24}\tau^4+O(\tau^5)-\frac{9}{5120}n^5, \qquad 2\tau=1-\left(\frac{kT}{J}\right)^2. \] This formula agrees with the results of Ref.\ \cite{Parisi_Rizzo_2} up to the order studied here.\footnote{Unfortunately the definition of $\tau$ differs from that of Refs.\ \cite{Kondor,Parisi_Rizzo_1,Parisi_Rizzo_2}, which is called here $\tau'$. The simple relation between them is $\tau=\tau'-\frac{1}{2}{\tau'}^2$.} From Eq.\ (\ref{mf_Phi}) it follows that infinite step RSB is only a {\em necessary\/} condition for the anomalous (non-Gaussian) free energy fluctuations: a nonzero $u_4$ produces a term linear in $n$. This may be a generic feature: interactions which are disconnected in replica space (like $ \sum_{\alpha\beta\gamma\delta} {\phi^{\alpha\beta}}^2 {\phi^{\gamma\delta}}^2$) generate Gaussian free energy fluctuations even at the level of the tree approximation. 
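The exponent counting of the Introduction can be made explicit at this point. Assuming the leading small-$n$ behaviour $\Phi\simeq f_{\text{typ}}-c\,n^{a}$ with a model-dependent constant $c>0$, the Legendre transform quoted there gives \[ f=\frac{d}{dn}(n\Phi)=f_{\text{typ}}-(a+1)\,c\,n^{a},\qquad \beta^{-1}L(f)=-n\Phi+nf=-a\,c\,n^{a+1}\propto -(f_{\text{typ}}-f)^{\frac{a+1}{a}}. \] A contribution to the free energy linear in $n$ (such as the $u_4$ term above) therefore corresponds to $a=1$, i.e.\ to Gaussian large deviations $L(f)\sim -(f_{\text{typ}}-f)^{2}$, whereas the $n^5$ term alone corresponds to $a=5$ and to the anomalous SK behaviour $L(f)\sim -(f_{\text{typ}}-f)^{6/5}$ \cite{Kondor,Parisi_Rizzo_1,Parisi_Rizzo_2}.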
\subsection{One-loop correction for the infinite step RSB case} When ${\bar u}_3\equiv u_3+nu_4$ is substituted for $u_3$, the $n$-dependent free energy shift takes the following simple form in the first order perturbative calculation [see Eqs.\ (\ref{main}), (\ref{invariantsRSB}), (\ref{mf_Phi})]: \[ \beta\Delta\Phi=\beta\Delta\Phi^{(0)}+n\,\beta\Delta\Phi^{(1)} +\beta\Delta\Phi^{\text{anom}}, \] with \begin{align} \beta\Delta\Phi^{(0)}&= \frac{1}{6}w^{-2}\tau^3\big[1+(u_1+3u_2-{\bar u}_3-4u_4)I_4 -8w^2I_6\big] +\frac{1}{24}w^{-4}\tau^4\Big[(3u_1+u_2-{\bar u}_3)\notag\\[3pt] \phantom{\beta\Delta\Phi^{(0)}} \mathrel{+}\frac{1}{3}(33u_1^2+38u_1u_2-26u_1{\bar u}_3-24u_1u_4+21u_2^2-14u_2{\bar u}_3 -8u_2u_4+5{\bar u}_3^2+8{\bar u}_3u_4)I_4\notag\\[3pt] \label{phi0} \phantom{\beta\Delta\Phi^{(0)}} \mathrel{+}8(5u_1-u_2-{\bar u}_3+4u_4)\,w^2I_6+24w^4I_8\Big] ,\\[6pt] \beta\Delta\Phi^{(1)}&=\frac{1}{6}w^{-2}\tau^3\big[{\bar u}_3I_4+4w^2I_6\big] +\frac{1}{24}w^{-4}\tau^4\Big[\frac{1}{3}(-9u_1^2-4u_1u_2+12u_1{\bar u}_3 +8u_1u_4+4u_2{\bar u}_3\notag\\[3pt] \phantom{\beta\Delta\Phi^{(1)}}&\mathrel{-}8u_2u_4-5{\bar u}_3^2+8u_4^2)I_4 +4(u_2+4u_4)\,w^2I_6+24w^4I_8\Big], \label{phi1}\\[6pt] \intertext{and} \beta\Delta\Phi^{\text{anom}}&=-\frac{9}{640}u_2^{-4}w^4\Big\{u_2+\big[-2u_1^2 +2(2n-5)u_1u_2-9u_2^2-8u_2u_3-8u_2u_4-2u_3^2\big]I_4\notag\\[3pt] \phantom{\beta\Delta\Phi^{\text{anom}}}&\mathrel{+} 4\big[-6u_1+(n+7)u_2+6{\bar u}_3-6nu_4\big]w^2I_6-72\,w^4I_8\Big\} \times n^5.\label{phi_anom} \end{align} \subsection{For comparison: the replica symmetric free energy} \label{RS} The RS mean field order parameter $q\equiv \phi^{\alpha\beta}$, for any $\alpha\not=\beta$, satisfies Eq.\ (\ref{phi}) which now takes the form \[ \tau=-\frac{1}{2}(n-2)\,wq-\frac{1}{6}\big[(n^2-3n+3)u_1+u_2+(n-1)\bar u_3\big] \,q^2. \] This equation can be used to express $q$ in terms of $\tau$ in the RS relations $\left(\phi^2\right)^{\alpha\alpha}=(n-1)\,q^2$, $\left(\phi^3\right)^{\alpha\alpha}=(n-1)(n-2)\,q^3$, $\left(\phi^4\right)^{\alpha\alpha}=(n-1)(n^2-3n+3)\,q^4$, $\sum_{\beta}{\phi^{\alpha\beta}}^4=(n-1)\,q^4$. It is now straightforward to derive from Eq.\ (\ref{main}) the free energy shift of the replica symmetric system with respect to the paramagnet: \[ \beta\Delta\Phi_{\text{RS}}=\beta\Delta\Phi^{(0)}_{\text{RS}} +n\,\beta\Delta\Phi^{(1)}_{\text{RS}} +n^2\,\beta\Delta\Phi^{(2)}_{\text{RS}}+\dots . \] We can make the following observations: \begin{itemize} \item The RS free energy is a regular power series in $n$, with no anomalous part, and the terms $\beta\Delta\Phi^{(k)}_{\text{RS}}$ all have the same character as power series in $\tau$ starting with $\tau^3$. \item $\beta\Delta\Phi^{(0)}_{\text{RS}}=\beta\Delta\Phi^{(0)}$ of Eq.\ (\ref{phi0}) up to and including $\tau^4$. \item The leading term proportional to $\tau^3$ of $\beta\Delta\Phi^{(1)}_{\text{RS}}$ is identical to the corresponding RSB contribution in Eq.\ (\ref{phi1}). We have \begin{align*} \beta\Delta\Phi^{(1)}_{\text{RS}}&= \frac{1}{6}w^{-2}\tau^3\big[{\bar u}_3I_4+4w^2I_6\big] +\frac{1}{24}w^{-4}\tau^4\Big[u_2+\frac{1}{3}( -7u_1^2+10u_1u_2+12u_1\bar u_3+8u_1u_4\\[3pt] \phantom{\beta\Delta\Phi^{(1)}_{\text{RS}}}&\mathrel{+}21u_2^2+8u_2\bar u_3 -16u_2u_4-3{\bar u_3}^2+8u_4^2)\,I_4 +2(4u_1-8u_2-4\bar u_3+8u_4)\,w^2I_6+48\,w^4I_8\Big]. \end{align*} \item From the previous observations it follows that the RS and RSB free energies differ only at the $O(5)$ order in the double series of $\tau$ and $n$. 
\end{itemize} The $O(n^2)$ term --- which is missing in the RSB scheme at one-loop level --- is given by \begin{equation}\label{n^2tau} \beta\Delta\Phi^{(2)}_{\text{RS}}= -\frac{1}{24}w^{-2}\tau^3\big[1+(u_1+3u_2-\bar u_3-4u_4)\,I_4 -8\,w^2I_6\big]+O\left(w^{-4}\tau^4\right). \end{equation} \section{The nonanalytic temperature dependence of the free energy} \label{V} As we have seen in the previous section, the RSB free energy starts to differ from the RS one only in the fifth analytic order $O(\tau^5)$, $O(\tau^4\,n)$, $O(\tau^3\,n^2)$, and --- last but most importantly --- in the anomalous term $O(n^5)$. In dimensions not much higher than 8, however, these terms are subdominant with respect to the nonanalytic one proportional to $\tau^{d/2}$. For retrieving this nonanalytic contribution from the Gaussian free energy in Eq.\ (\ref{Gaussian}), it is converted into the following equivalent form [Eq.\ (\ref{lnZG}) is not very useful for that purpose]: \[ \ln Z_G=-\frac{1}{2}\sum_j\ln \frac{\Lambda^2+\bar\lambda_j}{\pi}\, \sum_{\mathbf p}1+\frac{1}{d}\sum_j\sum_{\mathbf p}\frac{p^2}{p^2+\bar\lambda_j} , \] and the last term gives the nonanalytic contribution $\ln Z_G^{\text{na}}$ which can be generally written as \begin{equation}\label{nonanal_lnZG} \frac{1}{nN}\ln Z_G^{\text{na}}= \frac{1}{d}\, \frac{1}{N}\sum_{\mathbf p}p^2\, \frac{1}{n}\sum_{(\alpha\beta)}\bar G_{\alpha\beta,\alpha\beta}. \end{equation} \subsection{$\ln Z_G^{\text{na}}$ for the infinite step RSB scheme} At first sight, computing the right hand side of Eq.\ (\ref{nonanal_lnZG}) seems to be a formidable task due to the complicated structure of the Gaussian propagators; see Ref.\ \cite{beyond} for details. Nevertheless, as we are interested in the {\em leading\/} nonanalytic temperature dependence, some important simplifications follow: \begin{itemize} \item This leading term comes from the near infrared region $p^2\sim \tau$; the propagators there were all listed in Sec.\ 6 of Ref.\ \cite{beyond}. When computing these near infrared propagators, the quartic couplings can be neglected in the masses in Eq.\ (\ref{Mbar}); they enter only through the order parameter $\phi^{\alpha\beta}$. \item The $n=0$ propagators --- which were computed in \cite{beyond} --- may be used, as their $n$-dependence enters only at much higher orders. [This is just like the order parameter function $q(x)$.] \item Besides the replica summations in Eq.\ (\ref{nonanal_lnZG}), the origin of the relevant $n$-dependence is $x_0$; see (\ref{n_tau}). \end{itemize} The trace in (\ref{nonanal_lnZG}) can be written as an integral over the continuous overlap parameter $x$: $\frac{1}{n}\sum_{(\alpha\beta)}\bar G_{\alpha\beta,\alpha\beta}= -\frac{1}{2}\int_n^1 \bar G^{xx}_{11}\,dx$ and, by using Eq.\ (60) from Ref.\ \cite{beyond}, we can write \[ \int_n^1 p^2\,\bar G^{xx}_{11}\,dx=(1+2t+2t^2)(1-n) -2\,\frac{1+8t+8t^2}{(1+2t)^2}\,w^2\,\frac{1}{p^4}\int_n^1q(x)^2\,dx +\frac{8}{(1+2t)^2}\,w^4\,\frac{1}{p^8}\int_n^1q(x)^4\,dx \] where the dimensionless variable $t\equiv wq_1/p^2$ was introduced. Eq.\ (\ref{invariantsRSB}) can be used to compute $\int_n^1q(x)^2\,dx=-\sum_{\beta}{\phi^{\alpha\beta}}^2=-\left(\phi^2\right) ^{\alpha\alpha}$, and $\int_n^1q(x)^4\,dx=-\sum_{\beta}{\phi^{\alpha\beta}}^4$. Keeping only the relevant contributions,\footnote{Terms with an extra $x_1$ factor are subleading, and thus unimportant here. 
Their calculation using the near infrared propagators from Ref.\ \cite{beyond} would even be inconsistent, since corrections to these propagators give the same (subleading) order.} we arrive at \[ \int_n^1 p^2\,\bar G^{xx}_{11}\,dx= 1+2t-8t^3+32\,\frac{t^4(1+t)}{(1+2t)^2}-(1+2t+2t^2)\,n+\frac{81}{10}\, \frac{1}{p^4(p^2+2wq_1)^2}\,w^8u_2^{-4}\,n^5. \] We are now in a position to detach the leading nonanalytic temperature dependence in Eq.\ (\ref{nonanal_lnZG}); approximating $wq_1\approx \tau$, \begin{equation}\label{nonanal_RSB} \begin{gathered} \text{for}\quad d>8\quad \text{infinite step RSB} :\\[10pt] \frac{1}{nN}\ln Z_G^{\text{na}}=-\frac{16}{d}\int\limits^{\infty} \frac{d^dp}{(2\pi)^d}\, \frac{p^2+1}{p^6(p^2+2)^2}\,\times \tau^{d/2} -\frac{81}{20d}\int\limits^{\infty}\frac{d^dp}{(2\pi)^d}\, \frac{1}{p^4(p^2+2)^2}\,\times w^8u_2^{-4}\,\tau^{d/2-4}\,n^5. \end{gathered} \end{equation} [Dimensional regularization is to be understood here, allowing the ultraviolet cutoff $\Lambda$ to go to infinity. This same remark is applicable for the RS case, Eq.\ (\ref{nonanal_RS2}).] Note the lack of the $\tau^{d/2}\,n$ term in the RSB scheme. The $O(\tau^{d/2-4}\,n^5)$ contribution is negligible; see Eq.\ (\ref{phi_anom}). \subsection{$\ln Z_G^{\text{na}}$ for the RS case} Eq.\ (\ref{nonanal_lnZG}) is now simplified as \begin{equation}\label{nonanal_RS1} \frac{1}{nN}\ln Z_G^{\text{na}}= \frac{n-1}{2d}\, \frac{1}{N}\sum_{\mathbf p}p^2\, \bar G_1 \end{equation} with the diagonal propagator $\bar G_1$ satisfying \cite{PytteRudnick79} \[ \frac{1}{2}n(n-1)\,\bar G_1=\frac{1}{p^2+\bar\lambda_L}+ (n-1)\,\frac{1}{p^2+\bar\lambda_A}+ \frac{1}{2}n(n-3)\,\frac{1}{p^2+\bar\lambda_R} \] where the three masses are in leading order \begin{equation}\label{RSmasses} \bar\lambda_L=2\tau,\quad\bar\lambda_A=-\frac{4}{n-2}\tau,\quad\text{and}\quad \bar\lambda_R=-n\frac{2}{n-2}\tau. \end{equation} Extracting the terms providing nonanalytic temperature dependence, Eq.\ (\ref{nonanal_RS1}) takes the following form: \begin{equation}\label{nonanal_RS2} \begin{gathered} \text{for}\quad d>8\quad \text{ RS}:\\[10pt] \frac{1}{nN}\ln Z_G^{\text{na}}=-\frac{16}{d}\int\limits^{\infty} \frac{d^dp}{(2\pi)^d}\, \frac{p^2+1}{p^6(p^2+2)^2}\,\times \tau^{d/2} -\frac{8}{d}\int\limits^{\infty}\frac{d^dp}{(2\pi)^d}\, \frac{p^2+1}{p^4(p^2+2)^3}\,\times \tau^{d/2}n. \end{gathered} \end{equation} A comparison of Eqs.\ (\ref{nonanal_RSB}) and (\ref{nonanal_RS2}) shows that the leading nonanalytic free energy contributions of the RSB and RS phases coincide, and the free energy difference is $O(5)$ in the double series of $\tau$ and $n$ down to dimension 8. In the next section, the line in the $\tau$, $n$ plane where the free energy difference disappears is computed in first order perturbation theory taking into account the $O(5)$ analytical terms. \section{The line of equal free energies of the RSB and RS schemes for $d>8$} \label{VI} To find the fifth order (in the double series of $\tau$ and $n$) results, we must take two further steps: \begin{itemize} \item The one-replica quantities in Eq.\ (\ref{main}) must be extended to the appropriate order by using the formulae in Appendix \ref{App2} (for RSB) and in subsection \ref{RS} (for RS). \item Eq.\ (\ref{main}) must be supplemented by the fifth order terms. 
\end{itemize} As for the first step, the notation $\delta\dots$ is introduced to mean the difference of a one-replica quantity in the two schemes; $\delta (\phi^2)^{\alpha\alpha}\equiv (\phi^2)^{\alpha\alpha}_{\text{RSB}}- (\phi^2)^{\alpha\alpha}_{\text{RS}}$ for instance. Then we have \begin{equation}\label{deltas} \begin{aligned} \delta (\phi^2)^{\alpha\alpha}&= -\frac{1}{9}w^{-6}u_2^2\,\tau^4+\frac{1}{3}w^{-4}u_2\,n\tau^3 -\frac{1}{4}w^{-2}\,n^2\tau^2+O(5),\\ \delta (\phi^3)^{\alpha\alpha}&= \frac{2}{5}w^{-7}u_2^2\,\tau^5-w^{-5}u_2\,n\tau^4+\frac{1}{2}w^{-3}\,n^2\tau^3 +\frac{27}{80}w^{3}u_2^{-3}\,n^5+O(6),\\ \delta \sum_{\beta}{\phi^{\alpha\beta}}^4&= -\frac{8}{15}w^{-6}u_2\,\tau^5+w^{-4}\,n\tau^4 -\frac{81}{80}w^4u_2^{-4}\,n^5+O(6); \end{aligned} \end{equation} whereas $\delta(\phi^4)^{\alpha\alpha}$ and $\left({(\phi^2)}^{\alpha\alpha}\right)^2$ are of order $O(6)$. Inclusion of the three fifth order one-replica quantities $(\phi^5)^{\alpha\alpha}$, $\sum_{\beta}{\phi^{\alpha\beta}}^3\, (\phi^2)^{\alpha\beta}$, and $(\phi^2)^{\alpha\alpha}\,(\phi^3)^{\alpha\alpha}$ into Eq.\ (\ref{main}) is not an easy task. Fortunately, however, the $\delta$'s formed of them are at most of order $O(6)$, and thus negligible for the present purpose. We can now search for the line in the $\tau$-$n$ plane where the RS and infinite step RSB free energies coincide. Inserting $n=c\,w^{-2}u_2\tau$ into Eq.\ (\ref{deltas}), the following --- somewhat surprising --- result turns up: \begin{align*} \delta (\phi^2)^{\alpha\alpha}&=-\frac{1}{4}(c-2/3)^2\,w^{-6}u_2^2\,\tau^4,\\ \delta (\phi^3)^{\alpha\alpha}&=\frac{9}{80}(c-2/3)^2 \,(3c^3+4c^2+4c+8)\,w^{-7}u_2^2\,\tau^5,\\ \delta \sum_{\beta}{\phi^{\alpha\beta}}^4&=-\frac{3}{80}(c-2/3)^2 \,(27c^3+36c^2+36c+32)\,w^{-6}u_2\,\tau^5. \end{align*} We can write the free energy difference between the RSB and RS phases up to one-loop order, using Eqs.\ (\ref{Phi}) and (\ref{main}), in terms of $c$ as follows: \begin{equation}\label{Phi_vs_c} \beta(\Phi^{\text{RSB}}-\Phi^{\text{RS}})\cong -\frac{1}{16} \left(c-\frac{2}{3}\right)^2\, \left[c-\frac{2}{3}-2u_2^{-1}f_d(\Lambda)\right]\,w^{-6}u_2^2\,\tau^5, \qquad c\to \frac{2}{3} \end{equation} where the correction term is \begin{equation}\label{f} f_d(\Lambda)=\frac{1}{3}(2u_1^2+11u_1u_2+12u_2^2+7u_2u_3 +4u_2u_4+2u_3^2)\,I_4+4(2u_1-3u_2-2u_3)w^2I_6+24w^4I_8. \end{equation} It can be seen from Eq.\ (\ref{Phi_vs_c}) that for the cases $c\approx 2/3$, the RSB and RS free energies differ only at 3-loop order! Without any further assumption, we should go up to that order to gain the 1-loop correction to $c_{\text{eq}}$, i.e.\ to the $c$ value of the line where the two free energies are equal. We will, however, assume that for $d>8$ the mean field form of Eq.\ (\ref{Phi_vs_c}), i.e. \begin{equation}\label{condition} \beta(\Phi^{\text{RSB}}-\Phi^{\text{RS}})\sim (c-c_{\text{eq}})^3, \end {equation} remains valid for the model. Eq.\ (\ref{Phi_vs_c}) then provides $c_{\text{eq}}=\frac{2}{3}[1+u_2^{-1}f_d(\Lambda)]$, and finally \begin{equation}\label{result} \text{for $d>8$}\quad\Phi^{\text{RSB}}=\Phi^{\text{RS}}\Longrightarrow n=\frac{2}{3}\left[u_2+f_d(\Lambda)+O(\text{2-loop})\right]\,w^{-2}\tau. 
\end{equation} \section{Between six and eight dimensions} \label{VII} \subsection{The infinite step RSB case} As it has been explained in Ref.\ \cite{beyond}, the quartic vertex whose bare value is $u_2$% \footnote{In \cite{beyond} $u=u_2/2$ was used.} suffers a change in its temperature scaling from $\tau^0$ to $\tau^{d/2-4}$ when crossing $d=8$. The RSB order parameter is severely influenced by this, as best seen from Eqs.\ (\ref{equation_of_state}), (\ref{H}) by extracting a term proportional to $q(x)^3$ from the one-loop contribution, see \cite{beyond}, and matching it to the corresponding zero-loop one: \[ \frac{1}{3}\Big[u_2+24 \int\limits^{\infty}\frac{d^dp}{(2\pi)^d}\, \frac{1}{p^4(p^2+2)^2}\,w^4 \tau^{d/2-4}\Big]\times {\phi^{\alpha\beta}}^3. \] The second part between the brackets is called $\tilde u_2$ and reproduced in Appendix \ref{App3}; it obviously dominates the bare value $u_2$. Beside $\tilde u_2$, the combination $\tilde u_1-\frac{1}{3}\tilde {\bar u}_3$ emerges too from the one-loop correction, as shown in Appendix \ref{App3}, and the equation of state in (\ref{equation_of_state}) and (\ref{H}), with the relevant terms kept only, takes the following form [the usual infinite step ultrametric ansatz is assumed here, and therefore $q(x)$ replaces $\phi^{\alpha\beta}$]: \begin{equation}\label{6<d<8_equation_of_state} 2\tau q(x)+w\left[\Big(n-\frac{2}{3}x_0\Big)q_0^2 -2\Big(q_1-\frac{1}{2}x_1q_1\Big)q(x) -\frac{1}{3}xq(x)^2 \right] +\left(\tilde u_1-\frac{1}{3}\tilde {\bar u}_3\right) q_1^2q(x) +\frac{1}{3}\tilde u_2 q(x)^3=0. \end{equation} Solving this equation provides the order parameter function in this transitional domain of dimensions: \begin{equation}\label{tilded_order_parameter} q(x)=\frac{w}{\tilde u_2}\,x,\qquad\tau=wq_1-\frac{1}{2}\left(\tilde u_1 +\tilde u_2-\frac{1}{3}\tilde {\bar u}_3\right)q_1^2,\qquad n=\frac{2}{3}x_0,\qquad 6<d<8; \end{equation} compare it with (\ref{q(x)}), (\ref{K}), and (\ref{n_tau}). The tilded quantities are proportional to $w^4\,\tau^{d/2-4}$, see Eq.\ (\ref{tilded_quantities}). Note that although $x_1\cong\frac{\tilde u_2}{w^2}\,\tau\sim w^2\tau^{d/2-3}$ is one-loop order, the order parameter $q(x)$ continues to be zero-loop order. We are now proceeding to compute the relevant nonanalytic terms to the free energy in (\ref{lnZ^n}): both $\mathcal L^{(0)}$ and $\ln Z_G$ must be considered in this regime of dimensions. \begin{itemize} \item Gaussian free energy: near infrared contribution. Eq.\ (\ref{nonanal_RSB}) remains valid, except replacing $u_2$ by $\tilde u_2$ in the second term, and exploiting the definition of $\tilde u_2$ in (\ref{tilded_quantities}): \begin{equation}\label{tilded_n^5_1} \frac{1}{nN}\ln Z_G^{\text{na}}=-\frac{16}{d}\int\limits^{\infty} \frac{d^dp}{(2\pi)^d}\, \frac{p^2+1}{p^6(p^2+2)^2}\,\times \tau^{d/2} -\frac{27}{160d}\,w^4\tilde u_2^{-3}n^5. \end{equation} \item A detailed analysis based on Ref.\ \cite{beyond} shows that only highly subdominant contributions --- $\frac{1}{nN}\ln Z_G^{\text{na}}\sim\tau^{d-1}$ --- result from the far infrared regime $p^2\sim u_2q^2\sim \frac{u_2}{w^2}\tau^2$; these are even smaller than the correction term from the near infrared regime which is $\sim \tau^{d/2+1}$. \item Nonanalytic temperature dependence occurs in $\mathcal L^{(0)}$ of Eq.\ (\ref{L0}) due to the tilded quartic couplings in the order parameter, see (\ref{tilded_order_parameter}). 
For retrieving the relevant leading contributions, it is sufficient to take \[ -\frac{1}{nN}\mathcal L^{(0)}=\frac{1}{2}\tau (\phi^2)^{\alpha\alpha} +\frac{1}{6}w(\phi^3)^{\alpha\alpha}, \] thus neglecting terms which are smaller by $u_2/\tilde u_2\sim \tau^{4-d/2}$. A straightforward calculation provides \begin{equation}\label{tilded_n^5_2} -\frac{1}{nN}\mathcal L^{(0)}=C_{\text{2-loop}}\times w^2\tau^{d-3}+ \frac{9}{160}\,w^4\tilde u_2^{-3}n^5. \end{equation} (Note the lack of the term $\sim\tau^{d/2}$.) The first term, which obviously dominates the subleading one $\sim\tau^{d/2+1}$, consists of contributions like $w^{-1}\tilde u_2^2q_1^5$. A consistent calculation of $C_{\text{2-loop}}$, however, requires the 2-loop extension of the equation of state. Similarly, 2-loop corrections behind the Gaussian free energy in Eq.\ (\ref{lnZ^n}) yield the same kind of term. \end{itemize} \subsection{The replica symmetric case} Similarly to the RSB case, we must include the one-loop term in Eq.\ (\ref{equation_of_state}), and --- keeping only the relevant terms --- the RS equation of state reads: \[ 2\tau q+(n-2)wq^2+(n-2)w\frac{1}{N}\sum_{\mathbf p}\bar G_2=0, \] with \cite{PytteRudnick79} \[ n(n-1)(n-2)\bar G_2=2(n-2)\,\frac{1}{p^2+\bar\lambda_L}+ (n-1)(n-4)\,\frac{1}{p^2+\bar\lambda_A}-n(n-3)\,\frac{1}{p^2+\bar\lambda_R}, \] and for the RS masses see (\ref{RSmasses}). Extracting the nonanalytic contribution from the third term is somewhat lengthy but straightforward; in the zero replica number limit it can be put into the form [see Eq.\ (\ref{tilded_quantities})] \[ (n-2)w\frac{1}{N}\sum_{\mathbf p}\bar G_2=\left[\left(\tilde u_1 -\frac{1}{3}\tilde {\bar u}_3\right)+\frac{1}{3}\tilde u_2 \right]\,q^3,\qquad n=0. \] Note that the resultant RS equation of state is equivalent to the RSB one of Eq.\ (\ref{6<d<8_equation_of_state}) for $x=x_0=x_1$. We are now in a position to analyse the two possible sources of nonanalytic temperature dependence in the RS free energy for $6<d<8$: \begin{itemize} \item As for the Gaussian part, Eq.\ (\ref{nonanal_RS2}) remains valid for $\frac{1}{nN}\ln Z_G^{\text{na}}$ even in this dimensional regime. \item Using the RS equation of state from above, it is straightforward to see that the leading nonanalytic term in $-\frac{1}{nN}\mathcal L^{(0)}$ is proportional to $\tau^{d-3}$. It has, however, a 2-loop character, and a consistent calculation would require extending to the next perturbative order. \end{itemize} \subsection{The free energy difference between the RSB and RS phases} Collecting pieces of information from previous sections and subsections, namely Eqs.\ (\ref{n^2tau}), (\ref{nonanal_RS2}), (\ref{tilded_n^5_1}) and (\ref{tilded_n^5_2}), the leading terms which do not cancel in the free energy difference when $6<d<8$ are the following: \begin{multline*} \beta(\Phi^{\text{RSB}}-\Phi^{\text{RS}})=\\ C_{\text{2-loop}}\times w^2\tau^{d-3} +\frac{8}{d}\int\limits^{\infty}\frac{d^dp}{(2\pi)^d}\, \frac{p^2+1}{p^4(p^2+2)^3}\,\times \tau^{d/2}n +\frac{1}{24}w^{-2}\tau^3n^2 +\frac{9}{160d}(3-d)\,w^4\tilde u_2^{-3}n^5. \end{multline*} [The notation $C_{\text{2-loop}}$ was kept here for simplicity, although it may obviously differ from that defined in Eq.\ (\ref{tilded_n^5_2}). For the definition of $\tilde u_2$ see (\ref{tilded_quantities}).] The line where the RSB and RS free energies coincide follows from this equation: \[ n\sim w^2\,\tau^{d/2-3},\qquad 6<d<8. 
\] The proportionality constant is a one-loop integral, whose value can only be computed from the knowledge of the two-loop integral $C_{\text{2-loop}}$. \section{The stability boundary of the RS phase} \label{VIII} It is well known from the famous finding of de Almeida and Thouless \cite{AT} that the mean field Ising spin glass in a homogeneous magnetic field enters the RSB phase along the boundary where the RS phase becomes unstable. This was later extended by Kondor \cite{Kondor} to the case when the magnetic field is zero but the replica number $n$ is finite: $n$ essentially takes over the role of the magnetic field, and along an Almeida-Thouless (AT) line in the $\tau-n$ plane, the RS phase becomes unstable. In \cite{Kondor} a simplified model --- the ``truncated'' model with all the quartic couplings but $u_2$ zero --- was used, and it was shown that the RS and RSB free energies coincide along the instability line. In Ref.\ \cite{droplet} the leading behaviour of the dangerous ``replicon'' mass $\Gamma_\text{R}$ close to $T_c$ was expressed in terms of the {\em exact\/} cubic and quartic vertices $w^{\text{exact}}$ and $u_2^{\text{exact}}$, and also the exact order parameter $q^{\text{exact}}$ as% \footnote{The superscript ``exact'' is used to distinguish these quantities from the ``bare'' ones, which are the zeroth order contributions to them.} \[ \Gamma_\text{R}=nw^{\text{exact}}q^{\text{exact}}-\frac{2}{3}u_2^{\text{exact}} {q^{\text{exact}}}^2,\qquad d>8. \] The vanishing of the replicon mass signals the instability of the RS phase, providing the AT line: \[ n_{\text{AT}}=\frac{2}{3}{w^{\text{exact}}}^{-1}u_2^{\text{exact}} q^{\text{exact}}. \] Substituting the bare values for the vertices and the mean field order parameter $wq=\tau$, agreement with the zero-loop result for the line of equal free energies of Eq.\ (\ref{result}) is found, thus reproducing Kondor's result. (Although the more generic model with all the quartic couplings is considered here.) Computing the one-loop correction to the replicon mass by standard perturbative methods is straightforward for $d>8$, although somewhat lengthy. Omitting the details, only the final result is displayed here: \begin{multline*} \Gamma_\text{R}=\left\{1+[(n-1)u_1+u_3+4u_4]\,I_4+(n-2)w^2\,I_6\right\}\,n\, (wq^{\text{exact}})+ \bigg\{\frac{1}{3}[n(n-3)u_1-2u_2]\\[3pt] +\frac{1}{9}[(2n^3-5n^2-3n-4)u_1^2 -12u_1u_2+4n(n-3)u_1u_3+24n(n-3)u_1u_4-18u_2^2-24u_2u_3-48u_2u_4-4u_3^2]\,I_4\\[5pt] +\frac{1}{3}[-(2n^3-6n^2+12n+16)u_1+24u_2-2(n^2-8)u_3+8n(n-6)u_4]\,w^2I_6 -(2n^3-9n^2+12n+16)\,w^4I_8 \bigg\}\,{q^{\text{exact}}}^2. \end{multline*} This must be complemented by the equation of state, i.e.\ by the one-loop relationship between the order parameter and $\tau$: \begin{multline*} -2\tau=(n-2)\left\{1+\frac{1}{3}[(n-2)u_1-3u_2-(n-2)u_3-(n^2-n-8) u_4]\,I_4-(n-2)w^2I_6\right\}(wq^{\text{exact}})\\[3pt] +O({q^{\text{exact}}}^2). \end{multline*} From these two equations, we can easily obtain the leading behaviour for $n\sim \tau \ll 1$: \begin{multline}\label{Rmass} \Gamma_\text{R}=\left[1+\frac{1}{3}(-u_1+3u_2+u_3+4u_4)\,I_4-4w^2I_6\right]\,n\tau -\frac{2}{3}\bigg[u_2+\frac{1}{3}(2u_1^2+10u_1u_2+15u_2^2\\[4pt] +8u_2u_3+8u_2u_4 +2u_3^2)\,I_4+8(u_1-2u_2-u_3)w^2I_6+24w^4I_8\bigg]\,w^{-2}\tau^2. \end{multline} The instability line is obtained from the condition $\Gamma_\text{R}=0$: \[ n_{\text{AT}}=\frac{2}{3}[u_2+f_d(\Lambda)]\,w^{-2}\tau \] where the correction term has been defined in (\ref{f}). 
Comparing this expression with (\ref{result}), we can conclude that the mean field type behaviour persists for $d>8$: the RS phase becomes unstable where its free energy coincides with that of the RSB phase. (See, however, the discussion in the Conclusion part.) \section{Conclusion} \label{IX} Two basic features of the mean field Ising spin glass were studied in this paper, by perturbatively taking into account the effect of the geometry of a high dimensional lattice ($d$ certainly larger than 6). These properties are: \begin{itemize} \item the anomalous sample to sample free energy fluctuations (considering only {\em large\/} deviations), \item and the equality of the free energies of the replica symmetric and infinite step replica symmetry broken phases along the line (Almeida-Thouless line) where the RS phase becomes unstable. \end{itemize} Both problems can be elaborated by studying the $n$-dependent free energy below the spin glass transition. As for the first item, perturbative corrections destroy the anomalous behaviour, and Gaussian large deviations take the lead. As was shown in Sec.\ \ref{IV}, Gaussian fluctuations are common to {\em any\/} ansatz of the order parameter with the property of replica equivalence (not to be confused with replica symmetry), i.e.\ it must be a geometrical effect. As was pointed out by G.\ Parisi \cite{private_communication_Parisi1}, Gaussian fluctuations always dominate whenever local interactions are inhomogeneous: this is certainly the case in the finite dimensional geometry of a hypercubic lattice (but not for the SK model). The fact that locally inhomogeneous interactions imply Gaussian large deviations of the free energy has been demonstrated recently on the Bethe lattice with finite connectivity, and continuously distributed quenched interactions \cite{Parisi_Rizzo_3}. The anomalous $n$-dependence of the RSB free energy, however, does persist, although it is subleading in finite dimensions. The coincidence of the RS instability line and the line of equal free energies with the RSB phase is somewhat mysterious even in the SK model: the free energy difference is of fifth order (in the double series in $\tau$ and $n$), as contrasted with the replicon mass which is a second order quantity. We found that this feature of the mean field theory persists for $d>8$, but in a rather nontrivial way (see the complicated correction term in Eq.\ (\ref{f})). We must emphasize that the two computations in Secs.\ \ref{VI} and \ref{VIII} are completely independent. This result gives important support to the scenario, at least for $d>8$, that replica symmetry must be broken by the infinite ultrametric hierarchy of the mean field spin glass proposed by Parisi. We must remember, however, that the condition of Eq.\ (\ref{condition}) was a priori assumed, and any conclusions depend on its validity. In the dimensional domain $6<d<8$ the situation is more complicated. The AT line can be computed relatively easily, as was done in Ref.\ \cite{nucl}: \[ n_{\text{AT}}=\frac{2}{3}w^{-2}\tilde u_2\tau \sim w^2\tau^{d/2-3}, \] see (\ref{tilded_quantities}) and Eq.\ (47) of \cite{nucl}. Among the leading contributions to the free energy difference, which are now nonanalytic, there are terms which can be computed only by extending the calculation to two-loop order, as explained in Sec.\ \ref{VII}. This seems to be unfeasible, and no a priori assumptions are available now. A real miracle would be the coincidence of the two lines, presumably due to several cancellations. 
One can speculate that, otherwise, the separation of the two lines might be explained by some non-mean-field scenario: a first order transition, or replica symmetry breaking with replica equivalence but without the infinite step ultrametric structure. \begin{acknowledgments} This work has benefited from a useful correspondence with Giorgio Parisi and Tommaso Rizzo. I also acknowledge their sending me preliminary results of Ref.\ \cite{Parisi_Rizzo_3}. \end{acknowledgments}
\section{Introduction} Anderson localization is an effect predicted to take place for a wave propagating in a disordered potential~\cite{anderson1958}. It is due to multiply scattered waves from random defects and yields exponentially localized density profiles, resulting in a complete suppression of the usual diffusive transport associated with incoherent wave scattering~\cite{lee1985}. While in the three-dimensional world one may observe a transition between extended and localized states, in a one-dimensional (1D) world Anderson localization is a typical feature of the motion in a disordered potential~\cite{vantiggelen1999}. Cold atoms form a wonderful toolbox for controlling parameters of the system under study \cite{jaksch}. It comes as no surprise that attempts have been made at a direct observation of Anderson localization in cold-atom settings. Already the first attempts \cite{bodzio,inguscio,fort,aspect,wir,wir2} have revealed that the presence of atomic interactions may deeply affect the physics of the problem and make the observation of the localization nontrivial. Further theoretical studies \cite{LSP2007,Lugan2007,skipetrov2008} were followed by successful observations of the phenomenon made possible by going to the regime of very weakly interacting particles~\cite{Billy2008}. While in that work a random speckle potential was used, in another attempt~\cite{ingu2008} a quasi-periodic version of the potential was created using a superposition of laser beams, resulting in the observation of Aubry-Andr\'e~\cite{AA} localization for noninteracting atoms. Anderson localization is a one-body phenomenon, and it is important to understand how it is modified when interactions between particles -- in our case, cold atoms -- are taken into account. In the absence of any external potential, at zero temperature, 1D particles interacting attractively tend to cluster together, forming a bright soliton. Explicit solutions of the many-body problem can be found for a contact interaction~\cite{McGuire64}. Altogether, a bright soliton appears as a composite particle, whose position is given by the center of mass of the constituting atoms and whose mass is equal to the sum of the masses of the atoms (see next section). Using external potentials, it has been experimentally shown how to put solitons in motion~\cite{brightexp}. The purpose of this contribution is to discuss what happens to a bright soliton exposed to a weak and smooth disordered potential \cite{kivshar,others}. Of course, if that potential were sufficiently strong, it could probably destroy the soliton altogether, break it into pieces, etc. We are, however, interested in the other limit when the external potential is sufficiently weak and smooth not to perturb the soliton shape. It is then quite reasonable to expect that, if this weak potential is of random nature (disorder), the soliton, as a composite particle, undergoes multiple scattering, diffusive motion and eventually Anderson localization. In a recent short contribution~\cite{ours} we have shown that this is indeed the case by considering the effective quantum motion of the soliton. The present work brings a detailed derivation of the effective Hamiltonian applied before, and shows examples of the corresponding localized eigenstates. It thus provides complementary material to our previous work~\cite{ours}. 
\section{Mean field description} \label{mean} \subsection{Equations of motion for a bright soliton in a disorder potential} \label{meana} Consider an ensemble of cold atoms (bosons) with attractive interactions at zero temperature. We assume a strong harmonic transverse confinement so a one-dimensional approximation can be used. In the mean field approach, a $c$-number function $\phi$ takes the place of the bosonic field operator $\hat\psi$. $\phi$ is a solution of the Gross-Pitaevskii equation \begin{equation} \label{GP} i\partial_t\phi=-\frac12 \partial_z^2\phi-|\phi|^2\phi, \end{equation} where we have adopted the following natural units for energy, length and time, respectively \begin{eqnarray} E_0&=&4m\omega_\perp^2a^2, \\ l_0&=&\frac{\hbar}{2|a|m\omega_\perp}, \\ t_0&=&\frac{\hbar}{4a^2m\omega_\perp^2}. \end{eqnarray} The transverse harmonic confinement frequency is denoted by $\omega_\perp$, $a$ is the atomic $s$-wave scattering length, and $m$ the mass of an atom. We normalize $\phi$ to the total number of particles $N$. Eq.\eqref{GP} admits a stationary bright soliton solution $e^{-i\mu t}\phi_0$ \cite{zakharov}, where \begin{equation} \phi_0(z-q)=\sqrt{\frac{N}{2\hl}}\frac{e^{-i\theta}}{\cosh[(z-q)/\hl]}, \label{bs} \end{equation} the chemical potential $\mu=-N^2/8$ and the soliton width $\hl=2/N$. This bright solitonic solution minimizes the energy functional \begin{equation} E=\int dz \left[\frac12 |\partial_z \phi|^2-\frac{1}{2} |\phi|^4 -\mu |\phi|^2\right]. \label{enfunc} \end{equation} Observe that eq.~(\ref{bs}) allows for an arbitrary center-of-mass (CM) position $q$ and an arbitrary global phase $\theta$. Suppose the soliton is placed in a weak and smooth disorder potential, $V(z)$, with variance $V_0^2$ and correlation length $\sigma_0$. We will concentrate on the case when $\sigma_0<\hl$ but the approach we present is general. Linearization of the Gross-Pitaevskii equation allows us to describe the perturbation of the soliton due to the presence of a weak potential \cite{castin}. Indeed, the substitution \begin{equation} e^{-i\mu t}[\phi_0+\delta\phi], \end{equation} into \eqref{GP} supplemented with the potential $V(z)$ leads to the following inhomogeneous time-dependent Bogoliubov equations \begin{equation} i\partial_t \left(\begin{array}{c} \delta\phi \\ \delta\phi^* \end{array}\right)={\cal L}\left(\begin{array}{c} \delta\phi \\ \delta\phi^* \end{array}\right)+\left(\begin{array}{c} S \\ -S^* \end{array}\right), \label{tbe} \end{equation} where \begin{eqnarray} {\cal L}=\left(\begin{array}{cc} -\frac{1}{2}\partial^2_z -2|\phi_0|^2-\mu & -\phi_0^2 \\ \phi_0^{*2} & \frac12\partial^2_z +2|\phi_0|^2+\mu \end{array}\right), \end{eqnarray} and \begin{equation} S=V(z)\;\phi_0(z-q). \end{equation} In Eq.~\eqref{tbe} we have neglected terms of order higher than ${\cal O}(\delta\phi,V)$. Solution of \eqref{tbe} can be expanded in right eigenvectors and corresponding adjoint modes of the non-hermitian operator ${\cal L}$. However, this operator is not diagonalizable \cite{castin,lewenstein,dziarmaga04}. For all eigenvectors $(u_n,v_n)$ corresponding to non-zero eigenvalues $E_n$, the adjoint modes are left eigenvectors of the $\cal L$. That is no longer true for the zero-eigenvalue modes. 
There are two zero modes in our system \begin{equation} \left(\begin{array}{c}u_\theta \\ v_\theta\end{array}\right) =i\partial_\theta\left(\begin{array}{c}\phi_0\\ \phi_0^*\end{array}\right), \quad \left(\begin{array}{c}u_q \\ v_q\end{array}\right) =i\partial_q\left(\begin{array}{c}\phi_0\\ \phi_0^*\end{array}\right), \end{equation} which are related to a small modification of the global phase of the solution \eqref{bs} and to a small shift of the CM, respectively \cite{dziarmaga04,ours}. As both modifications cost no energy they appear as zero modes of the ${\cal L}$ operator. Indeed, it is consistent with quadratic expansion of the energy functional, \begin{equation} E={\rm const}+\frac12 \int dz\; (\delta\phi^*,-\delta\phi)\;{\cal L}\; \left(\begin{array}{c}\delta\phi\\ \delta\phi^*\end{array}\right), \end{equation} where we see that contributions to soliton perturbation from zero modes do not change $E$. The modes adjoint to the zero modes are \begin{equation} \left(\begin{array}{c}u_\theta^{\rm ad} \\ v_\theta^{\rm ad}\end{array}\right) =\partial_N\left(\begin{array}{c}\phi_0\\ \phi_0^*\end{array}\right), \quad \left(\begin{array}{c}u_q^{\rm ad} \\ v_q^{\rm ad}\end{array}\right) =i\frac{z-q}{N} \left(\begin{array}{c} \phi_0 \\ -\phi_0^* \end{array}\right), \label{adj} \end{equation} which has been found by solving \begin{equation} {\cal L} \left(\begin{array}{c} u_{\theta,q}^\text{ad}\\ v_{\theta,q}^\text{ad}\end{array}\right) =\frac{1}{M_{\theta,q}} \left(\begin{array}{c} u_{\theta,q}\\ v_{\theta,q}\end{array}\right), \label{adjeq} \end{equation} where $M_{\theta}$ and $M_q$ are determined by the requirements $\langle u_{\theta}^\text{ad}| u_{\theta}\rangle-\langle v_{\theta}^\text{ad}| v_{\theta}\rangle =1$ and $\langle u_{q}^\text{ad}| u_{q}\rangle-\langle v_{q}^\text{ad}| v_{q}\rangle =1$ \cite{castin,lewenstein,dziarmaga04,ours}. It turns out that \begin{eqnarray} M_\theta=-\frac{4}{N}, \quad M_q=N. \label{masses} \end{eqnarray} The latter is equal to the total mass of the system. Equation~\eqref{adjeq} ensures that $(u_{\theta,q}^\text{ad},v_{\theta,q}^\text{ad})$ are orthogonal to all eigenvectors of $\cal L$ with $E_n\ne 0$. Perturbation of the soliton can be expanded in the complete basis vectors \begin{eqnarray} \left(\begin{array}{c} \delta\phi \\ \delta\phi^* \end{array}\right)&=& \frac{\theta'-\theta}{i} \left(\begin{array}{c} u_\theta \\ v_\theta \end{array}\right)+P_\theta \left(\begin{array}{c} u_\theta^{\rm ad} \\ v_\theta^{\rm ad} \end{array}\right) \cr && +\frac{q'-q}{i} \left(\begin{array}{c} u_q \\ v_q \end{array}\right)+P_q \left(\begin{array}{c} u_q^{\rm ad} \\ v_q^{\rm ad} \end{array}\right) \cr && +\sum_{n,E_n>0}\left[b_n\left(\begin{array}{c} u_n \\ v_n \end{array}\right) +b_n^*\left(\begin{array}{c} v_n^* \\ u_n^* \end{array}\right) \right], \label{pertbexp} \end{eqnarray} where real $q'$ and $\theta'$ describe translation of the soliton and shift of its global phase, respectively, while $P_q$ and $P_\theta$ (also real) are momentum of the CM of the soliton and momentum conjugate to the global phase, respectively. The momentum $P_\theta=N'-N$ represents deviation from the average total number of particles $N$. Deformation of the soliton shape is described by complex variables $b_n$. 
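As a simple numerical cross-check of the formulae above, the following Python sketch (our own illustration; the grid, the value of $N$, and the finite-difference scheme are arbitrary choices) verifies that $\phi_0$ of Eq.~\eqref{bs} solves the stationary Gross-Pitaevskii equation with $\mu=-N^2/8$ and that the CM zero mode and its adjoint partner of Eq.~\eqref{adj} satisfy $\langle u_{q}^\text{ad}| u_{q}\rangle-\langle v_{q}^\text{ad}| v_{q}\rangle =1$.
\begin{verbatim}
import numpy as np

N = 100.0                    # illustrative atom number
xi = 2.0 / N                 # soliton width hl = 2/N
mu = -N**2 / 8.0             # chemical potential
z = np.linspace(-20 * xi, 20 * xi, 40001)
dz = z[1] - z[0]
phi0 = np.sqrt(N / (2 * xi)) / np.cosh(z / xi)   # bright soliton (theta = q = 0)

# residual of the stationary GPE:  -phi''/2 - |phi|^2 phi = mu phi
d2 = (np.roll(phi0, -1) - 2 * phi0 + np.roll(phi0, 1)) / dz**2
res = -0.5 * d2 - phi0**3 - mu * phi0
print("relative GPE residual:", np.abs(res[1:-1]).max() / np.abs(mu * phi0).max())

# CM zero mode u_q = i d_q phi_0 and its adjoint mode
dphi0 = np.gradient(phi0, dz)
u_q = -1j * dphi0            # d_q phi_0 = -d_z phi_0 for phi_0(z - q)
v_q = -1j * dphi0
u_ad = 1j * z / N * phi0
v_ad = -1j * z / N * phi0
norm = np.sum(u_ad.conj() * u_q - v_ad.conj() * v_q).real * dz
print("<u_q^ad|u_q> - <v_q^ad|v_q> =", norm)      # should be close to 1
\end{verbatim}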
Substituting \eqref{pertbexp} into \eqref{tbe} and projecting on the basis vectors results in a set of equations \begin{eqnarray} \partial_t \theta'&=&\frac{P_\theta}{M_\theta}+2 \langle \partial_N\phi_0|V\phi_0\rangle,\label{1} \\ \partial_t P_\theta&=&0,\label{2} \\ \partial_t q'&=&\frac{P_q}{M_q},\label{3} \\ \partial_t P_q&=&-\int dz\;|\phi_0(z-q)|^2\;\partial_zV(z),\label{4} \\ i\partial_t b_n&=&E_n\;b_n+s_n,\label{5} \end{eqnarray} where real-valued \begin{equation} s_n=\langle u_n|S\rangle+\langle v_n|S^*\rangle. \label{sn} \end{equation} Equation~\eqref{1} describes linear evolution of the global phase and it is possible to obtain $\theta'(t)=\theta={\rm const}$ by a proper choice of $P_\theta$. The latter is a constant of motion, see \eqref{2}. We consider a weak disorder potential when $\sigma_0<\hl$. Therefore the force acting on the CM, which is the force acting on a single particle convoluted with the soliton profile \eqref{4}, is small and it oscillates around zero as a function of $q$. Thus, Eqs.~\eqref{3}-\eqref{4} imply that, if we choose $P_q(0)=0$ and such a $q$ that $\int dz |\phi_0(z-q)|^2\partial_zV=0$, then $q'(t)=q={\rm const}$. \subsection{Deformation of the soliton shape} \label{meanb} We have seen that in a disorder potential the CM of the soliton can be fixed and its global phase can be constant. Let us now concentrate on the set of Eqs.~\eqref{5} which describe changes in the soliton shape due to the presence of a disorder potential. Solving Eqs.~\eqref{5} with an assumption that initially the bright soliton is unperturbed, i.e. $b_n(0)=0$, we obtain \begin{eqnarray} \delta\phi&=&\sum_{n,E_n>0}\frac{s_n}{E_n}\left[ \left(e^{-iE_n t}-1\right)\;u_n(z-q)\right. \cr &&\left.+\left(e^{iE_n t}-1\right)\;v_n^*(z-q) \right]. \end{eqnarray} The lowest energy of the Bogoliubov modes in the case of the bright soliton is $E_1=|\mu|=N^2/8$ \cite{ueda}. Thus a large gap in energy separates the soliton from the Bogoliubov modes. These modes are delocalized and describe radiation of the soliton. The energy spectrum can be well approximated by a shifted free particle dispersion relation \begin{equation} E_n\approx \frac{2\pi^2}{L^2}n^2+|\mu|, \end{equation} where $n$ is integer and $L$ stands for the size of a box in which we consider our system. Moreover, due to the radiation character of the modes \begin{eqnarray} |u_n+v^*_n|&\le& \frac{1}{\sqrt{L}}, \\ |s_n|&\le& |V_0|\sqrt{\frac{N\hl}{2L}}. \end{eqnarray} The latter inequality is obtained taking a rectangular profile of size $\hl$ for the bright soliton. Finally, with $\sin^2(E_nt/2)\le1$ and $\sum_n 1/E_n\approx \int dn/E_n$, for deformation of the soliton shape, \begin{equation} |\phi_0+\delta\phi|^2\approx |\phi_0|^2+ \phi_0\;\delta\phi^*+\phi_0^*\;\delta\phi, \end{equation} we obtain the following estimate \begin{equation} |\phi_0\;\delta\phi^*+\phi_0^*\;\delta\phi|\le 4|V_0|, \end{equation} and if it is much smaller than $|\phi_0|^2\le 2|\mu|$, the shape of the soliton is negligibly changed. Hence, if we want the shape of the bright soliton to be unaffected by the presence of a disorder potential a sufficient condition is \begin{equation} |V_0|\ll|\mu|. \end{equation} Note that the upper bound on $V_0$ requires the potential to be sufficiently smooth, in particular the case of a $\delta$-correlated disorder potential is excluded by this condition \cite{kivshar}. 
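To make the statement about the force acting on the CM more concrete, the short Python sketch below (our own illustration; the Gaussian-correlated random potential is merely a stand-in for a generic weak, smooth disorder with $\sigma_0<\hl$) evaluates the convolved force of Eq.~\eqref{4} for all CM positions $q$ at once and shows that it has zero mean and oscillates as a function of $q$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, xi = 100.0, 0.02               # soliton parameters, hl = 2/N
V0, sigma0 = 0.1, 0.28 * xi       # weak potential, sigma_0 < hl
L, M = 4.0, 2**14                 # box length and grid size
z = np.linspace(-L / 2, L / 2, M, endpoint=False)
dz = z[1] - z[0]

# toy disorder: Gaussian-correlated random potential with rms V0 and zero mean
k = 2 * np.pi * np.fft.fftfreq(M, dz)
V = np.fft.ifft(np.fft.fft(rng.normal(size=M)) * np.exp(-(k * sigma0)**2 / 4)).real
V = V0 * (V - V.mean()) / V.std()

rho = (N / (2 * xi)) / np.cosh(z / xi)**2        # soliton density |phi_0|^2
dVdz = np.gradient(V, dz)

# Eq. (4): F(q) = -int dz |phi_0(z-q)|^2 dV/dz, evaluated via an FFT correlation
F = -np.fft.ifft(np.fft.fft(dVdz) * np.conj(np.fft.fft(rho))).real * dz

print("mean CM force:", F.mean(), " rms CM force:", F.std())
print("rms of the bare force N*dV/dz:", N * dVdz.std())
\end{verbatim}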
\subsection{Dziarmaga approach} \label{meanc} In Sec.~\ref{meana}, equations of motion for a bright soliton in the presence of a weak disorder potential have been obtained using the perturbative expansion \eqref{pertbexp}. Consequently, the long-time evolution of the CM of the soliton for $P_q(0)\ne 0$ cannot be described by these equations. Indeed, after a finite time $|q'(t)-q|>\hl$ and the perturbative approach breaks down. A similar problem may occur for the $\theta'$ variable. We will be interested in a quantum description of the bright soliton where states corresponding to superpositions of the CM position over distances much larger than $\hl$ will be considered. Therefore we need a method that allows us to describe non-perturbative displacements of the soliton. To this end we adopt the Dziarmaga approach, introduced for the problem of quantum diffusion of a dark soliton \cite{dziarmaga04}. Following Ref.~\cite{dziarmaga04} we do not perform a linear expansion of a perturbed soliton wave-function around fixed $q$ and $\theta$ as in \eqref{pertbexp}, but instead treat $q$ and $\theta$ themselves as dynamical variables \begin{eqnarray} \left(\begin{array}{c} \phi \\ \phi^* \end{array}\right)&=& \left(\begin{array}{c} \phi_0 \\ \phi_0^* \end{array}\right)+P_\theta \left(\begin{array}{c} u_\theta^{\rm ad} \\ v_\theta^{\rm ad} \end{array}\right) +P_q \left(\begin{array}{c} u_q^{\rm ad} \\ v_q^{\rm ad} \end{array}\right) \cr && +\sum_{n,E_n>0}\left[b_n\left(\begin{array}{c} u_n \\ v_n \end{array}\right) +b_n^*\left(\begin{array}{c} v_n^* \\ u_n^* \end{array}\right) \right]. \label{exp} \end{eqnarray} Note that if $q(t)$ and $\theta(t)$ change in time, all modes also evolve because they depend on $q$ and $\theta$, e.g. $u^{\rm ad}_\theta=u^{\rm ad}_\theta(z-q(t))$. Substituting \eqref{exp} into the energy functional \eqref{enfunc}, supplemented with the $\int dz\, V|\phi|^2$ term, and keeping only terms of order ${\cal O}(P^2,b^2,PV,bV)$, we obtain the effective Hamiltonian \begin{eqnarray} H&=&\frac{P_q^2}{2M_q}+\int dz\;V(z)\;|\phi_0(z-q)|^2 \cr && +\frac{P_\theta^2}{2M_\theta}+2P_\theta\langle \partial_N\phi_0|V\phi_0\rangle \cr && +\sum_{n,E_n>0}\left(E_nb_n^*b_n+(b_n+b_n^*)s_n\right), \label{clham} \end{eqnarray} which generates the following equations of motion \begin{eqnarray} \partial_t \theta&=&\frac{\partial H}{\partial P_\theta}=\frac{P_\theta}{M_\theta}+2 \langle \partial_N\phi_0|V\phi_0\rangle, \label{n1}\\ \partial_t P_\theta&=&-\frac{\partial H}{\partial \theta}=0, \\ \partial_t q&=&\frac{\partial H}{\partial P_q}=\frac{P_q}{M_q}, \\ \partial_t P_q&=&-\frac{\partial H}{\partial q}\approx-\int dz\;|\phi_0(z-q)|^2\; \partial_zV(z), \label{n4}\\ i\partial_t b_n&=&\frac{\partial H}{\partial b_n^*}=E_n\;b_n+s_n. \label{n5} \end{eqnarray} In \eqref{n4} we have neglected the terms $P_\theta\partial_q\langle \partial_N\phi_0|V\phi_0\rangle$ and $(b_n+b_n^*)\partial_q s_n$ because they are of order ${\cal O}(PV,bV)$, while in the equations of motion we keep only linear terms. Strictly speaking, in order to show that the pairs of variables in \eqref{n1}-\eqref{n5} are canonically conjugate one should switch to the Lagrangian formalism; however, as the result is obvious, we skip this step, see \cite{dziarmaga04}. Equations~\eqref{n1}-\eqref{n5} have a form identical to \eqref{1}-\eqref{5}. However, $q$ and $\theta$, which enter $\phi_0$ and $s_n$ on the right-hand side of the present equations, are not fixed and evolve in time.
This introduces couplings between the $q$, $\theta$ and $b_n$ degrees of freedom which were absent in \eqref{1}-\eqref{5}. Inserting solutions of \eqref{n1}-\eqref{n5} into \eqref{exp} we can describe long-distance propagation of a bright soliton, including possible changes of its shape, something not possible with the expansion \eqref{pertbexp}. The Hamiltonian \eqref{clham} cannot be used for extremely large momenta of the CM; it is valid provided $P_q\hl/N\ll 1$, compare \eqref{exp} and \eqref{adj}. For the case of large $P_q$ see \cite{kivshar}. Note also that, because $M_\theta$ is negative, see \eqref{masses}, the bright soliton \eqref{bs} is a saddle point of the energy functional \eqref{enfunc}. This has, however, no consequences since $P_\theta=N'-N$ is a constant of motion. \section{Quantum description} \label{qdes} From the point of view of quantum mechanics the classical ground state solution \eqref{bs} breaks the $U(1)$ gauge and translation symmetries of the quantum many-body Hamiltonian \cite{dziarmaga04}. That is, the quantum Hamiltonian commutes with $\hat U=e^{i\hat N\theta}$ and, in the absence of a disorder potential, also with the translation operator. In the Bogoliubov description, the $\theta$ and $q$ degrees of freedom appear as zero-energy modes and, thanks to the Dziarmaga approach, we know how to properly describe arbitrarily large changes in $\theta$ and $q$. The quantum mechanical version of \eqref{clham} reads \begin{eqnarray} \hat H&=&\frac{\hat P_q^2}{2M_q}+\int dz\;V(z)\;|\phi_0(z-\hat q)|^2 \cr && +\frac{\hat P_\theta^2}{2M_\theta}+2\hat P_\theta\langle \partial_N\phi_0|V\phi_0\rangle \cr && +\sum_{n,E_n>0}\left(E_n\hat b_n^\dagger\hat b_n+(\hat b_n+\hat b_n^\dagger) s_n\right), \label{qmham} \end{eqnarray} where \begin{eqnarray} \hat P_q&=&-i\partial_q, \\ \hat P_\theta&=&\hat N-N=-i\partial_\theta, \end{eqnarray} and \begin{eqnarray} \left[\hat q,\hat P_q\right]&=& i, \\ \left[\hat \theta,\hat P_\theta\right]&=&i, \\ \left[\hat b_n,\hat b_m^\dagger\right]&=&\delta_{nm}. \end{eqnarray} Because $[\hat P_\theta,\hat H]=0$ we can choose a state $|N\rangle$ of the many-body system with exactly $N$ particles, for which $\hat P_\theta|N\rangle=0$. If we consider the Bogoliubov vacuum state of the quasi-particle operators, i.e. $\hat b_n|0_b\rangle=0$, such a state will be very weakly coupled to other eigenstates of the $\sum_n E_n\hat b_n^\dagger\hat b_n$ operator because the coupling strengths $s_n$ are, for a weak disorder potential, much smaller than the large energy gap for quasi-particle excitations $E_1=|\mu|=N^2/8$ \cite{ueda}. Hence, the effective Hamiltonian that describes the CM motion reduces to \begin{eqnarray} \hat H_q&=&\langle N;0_b|\hat H|N;0_b\rangle \cr &=& \frac{\hat P_q^2}{2N}+\int dz\;V(z)\;|\phi_0(z-\hat q)|^2, \label{hamq} \end{eqnarray} where we have inserted the explicit expression for $M_q$. In the following we will use the Hamiltonian \eqref{hamq} to analyze Anderson localization of the CM of a bright soliton. Second order contributions to the effective Hamiltonian \eqref{hamq}, arising from the coupling to the quasiparticle modes, are of the order of $NV_0^2/\mu$ and they can be neglected for the parameters of the system used in the present paper. \section{Anderson localization} \label{al} We discuss in more detail the Anderson localization in the so-called optical speckle potential, as realized e.g. in the experiment~\cite{Billy2008}.
The potential originates from the light shifts experienced by the atoms in laser light detuned from the atomic resonance. In effect, the potential $V(z)\propto\alpha|E(z)|^2$ is proportional to the intensity of the local field $E(z)$ and to the atomic polarizability $\alpha$, whose sign depends on the detuning of the external light frequency from the atomic resonance. \begin{figure} \centering \includegraphics*[width=0.9\linewidth]{fig1.eps} \caption{Dashed lines: bare potential $V(z)$; solid lines: convoluted potential, i.e. $\int dz' V(z')|\phi_0(z'-z)|^2/N$. Panel (a) for the bare potential amplitude $V_0=+0.1$ (red detuned laser case), panel (b) for $V_0=-0.1$ (blue detuned laser case). The correlation length of the bare potential is $\sigma_0=0.28\hl$ where $\hl=0.02$. } \label{one} \end{figure} Any disordered potential is completely characterized by its correlation functions $\overline{V(z_1)\dots V(z_n)}$ where the overbar denotes an ensemble average over disorder realizations. The average potential value shifts the origin of energy and can always be set to zero, $\overline{V(z)}=0$. The pair correlator can be written as $\overline{V(z')V(z'+z)}=V_0^2 C(z/\sigma_0)$, where $V_0$ measures the potential strength, and $\sigma_0$ the spatial correlation length. For a Gaussian random process, higher-order correlation functions are simple functions of the average and the pair correlator. This is no longer the case for non-Gaussian potentials, for which higher-order correlations must also be specified. \begin{figure} \centering \includegraphics*[width=0.9\linewidth]{fig2.eps} \caption{Panels (a) and (b): eigenstates of the CM of a bright soliton; panels (c) and (d): corresponding probability density in log scale. The eigenstates correspond to the CM momentum $P_q\approx10$. The red detuned laser case is shown in (a) and (c) while the blue detuned one in (b) and (d). The inverse localization length is $\gamma=23\pm3$ (red detuned case) and $\gamma=16.5\pm0.6 $ (blue detuned case). The parameters of the potentials are the same as in Fig.~\ref{one}. } \label{two} \end{figure} An optical speckle potential is a good example of such non-Gaussian behaviour. At fixed detuning, the potential features either random peaks (the ``blue-detuned'' case) or wells (``red-detuned''). Even after shifting to $\overline{V(z)}=0$, the potential distribution is asymmetric (compare Fig.~\ref{one}), and the importance of odd moments can be probed experimentally by comparing the blue- and red-detuned cases for fixed amplitude $|V_0|$. The latter is determined by the laser strength, and we will use $|V_0|=8\cdot10^{-5}|\mu|=0.1$ in the following. The bare speckle potential has the pair correlation function $C(y)=[\sin (y)/y]^2$, with a correlation length that can be as short as $0.28\,\mu$m \cite{Billy2008} or $\sigma_0=0.0056$ in our units. We shall use this value in the following. The CM of the soliton feels, however, not the bare potential, but rather its convolution with the soliton shape, see Eq.~\eqref{hamq}. The convoluted effective potential $\int dz' V(z')|\phi_0(z'-z)|^2/N$ (the $N$ factor in the denominator is due to the normalization of $\phi_0$) is also shown in Fig.~\ref{one}. While the convolution makes the potential smoother, it is apparent that the result remains quite asymmetric; thus we may expect that the non-Gaussian character (in particular the non-vanishing odd moments) shows up in the properties of the system.
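For illustration, the Python sketch below (ours; the speckle is modelled in the usual way as the intensity of a band-limited complex Gaussian field, and the parameter values simply mirror the ones quoted above) generates a blue-detuned speckle realization, its red-detuned mirror image, and the convolved potential that enters Eq.~\eqref{hamq}; the printed skewness makes the asymmetry of the distribution explicit.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
N, xi = 100.0, 0.02                 # soliton parameters (hl = 2/N)
V0, sigma0 = 0.1, 0.0056            # speckle strength and correlation length
L, M = 8.0, 2**15
z = np.linspace(-L / 2, L / 2, M, endpoint=False)
dz = z[1] - z[0]

# speckle intensity = |band-limited complex Gaussian field|^2
field = rng.normal(size=M) + 1j * rng.normal(size=M)
k = 2 * np.pi * np.fft.fftfreq(M, dz)
field = np.fft.ifft(np.fft.fft(field) * (np.abs(k) < 1.0 / sigma0))
I = np.abs(field)**2
I /= I.mean()

V_blue = V0 * (I - 1.0)             # blue detuning: random peaks, zero mean
V_red = -V_blue                     # red detuning: random wells
print("skewness of V_blue:", np.mean(V_blue**3) / V_blue.std()**3)

# potential felt by the CM: convolution with the soliton density
rho = (N / (2 * xi)) / np.cosh(z / xi)**2
kern = np.fft.fft(np.fft.fftshift(rho)) * dz / N
V_eff = np.fft.ifft(np.fft.fft(V_blue) * kern).real
print("rms bare / convolved potential:", V_blue.std(), V_eff.std())
\end{verbatim}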
For that reason we compare the results for both red- and blue-detuned potentials of similar amplitude. \begin{figure} \centering \includegraphics*[width=0.9\linewidth]{fig3.eps} \caption{Panels (a) and (b): eigenstates of the CM of the soliton; panels (c) and (d): corresponding probability density in log scale. The eigenstates correspond to the CM momentum $P_q\approx50$. The red detuned laser case is shown in (a) and (c) while the blue detuned one in (b) and (d). The inverse localization length is $\gamma=0.27\pm0.03 $ (red detuned case) and $\gamma= 1.8\pm0.1$ (blue detuned case). The parameters of the potentials are the same as in Fig.~\ref{one}. } \label{three} \end{figure} The generic properties of Anderson localization in 1D \cite{vantiggelen1999} allow us to expect that all the eigenstates of \eqref{hamq} are exponentially localized, i.e., have a typical shape with the overall envelope \begin{equation} |\Psi|^2\propto \exp\left[-\gamma(P_q)|q-q_0|\right], \end{equation} with $q_0$ being the mean position while $\gamma(P_q)$ is naturally referred to as the inverse localization length. It depends on the eigenenergy $E$ of the state or, writing $P_q\approx\sqrt{2NE}$, on the associated momentum $P_q$. By diagonalizing the Hamiltonian \eqref{hamq} on a grid, we obtain the wavefunctions that are represented in Fig.~\ref{two} and Fig.~\ref{three} for two significantly different energies (momenta). Fig.~\ref{two} shows the probability densities for the CM of the soliton at relatively low energies; observe that the exponential envelope behaviour is visible over several decades. Due to the tridiagonal form of the matrix diagonalized on the grid, the errors are well under control and the accuracy does not seem to be limited by double precision arithmetic. Observe that the inverse localization lengths obtained for the red-detuned and the blue-detuned cases differ significantly; stronger localization is observed in the former case. The situation is quite different at higher energies, as shown in Fig.~\ref{three}. Observe that now the blue-detuned potential leads to much stronger localization. Of course the inverse localization lengths at high energies are much smaller than those depicted in Fig.~\ref{two}; in fact, at sufficiently high energies $\gamma(P_q)$ decays exponentially with $P_q$, as observed by us before \cite{ours}. \begin{figure} \centering \includegraphics*[width=0.9\linewidth]{fig4.eps} \caption{Inverse localization length as a function of the momentum for red-detuned (red line, solid) and blue-detuned (blue line, dashed) potentials obtained using the transfer matrix technique. The red dots as well as the blue squares correspond to the wavefunctions shown in Fig.~\ref{two} and Fig.~\ref{three}. Observe the exponential decay of $\gamma$ for sufficiently large $P_q$. } \label{four} \end{figure} The inverse localization lengths shown as lines in Fig.~\ref{four} are obtained by a transfer matrix technique \cite{MacKinnon1981} and agree quite nicely with the values obtained from exact diagonalization. Clearly there is a striking difference between the two cases of red-detuned and blue-detuned potentials, exemplifying the non-Gaussian character of the potential and the importance of higher moments, in particular the third moment.
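A minimal sketch of such a grid diagonalization might look as follows (our own illustration with a toy smooth disordered potential and hard-wall boundaries, not the actual numerical setup used for the figures); the inverse localization length is then estimated from a linear fit to the logarithm of the probability density.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N, V0, xi = 100.0, 0.1, 0.02
L, M = 15.0, 3000
q = np.linspace(-L / 2, L / 2, M)
dq = q[1] - q[0]

# toy potential with correlation length ~ xi, standing in for the convolved speckle
k = 2 * np.pi * np.fft.fftfreq(M, dq)
V = np.fft.ifft(np.fft.fft(rng.normal(size=M)) * np.exp(-(k * xi)**2 / 2)).real
V = V0 * (V - V.mean()) / V.std()

# discretize H_q = P_q^2/(2N) + V_eff(q) with second-order finite differences
main = 1.0 / (N * dq**2) + V
off = -np.ones(M - 1) / (2 * N * dq**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
E, psi = np.linalg.eigh(H)

# pick the eigenstate closest to the energy of CM momentum P_q ~ 10
i = np.argmin(np.abs(E - 10.0**2 / (2 * N)))
lnp = np.log(psi[:, i]**2 + 1e-300)
q0 = q[np.argmax(lnp)]
sel = (lnp < lnp.max() - 3) & (lnp > lnp.max() - 30)   # fit the exponential tails
gamma = -np.polyfit(np.abs(q - q0)[sel], lnp[sel], 1)[0]
print("E =", E[i], "  estimated inverse localization length:", gamma)
\end{verbatim}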
This difference in turn indicates that the application of the celebrated Born approximation \cite{vantiggelen1999,ours}, which takes into account only the two lowest moments, is doomed to fail in our case, despite the fact that the potential is very weak and smooth and thus, at first glance, one could naively expect the Born approximation to perform quite well. With exponentially localized eigenstates, one can now consider the dynamics, e.g., the spread of an initially localized wavepacket. As shown by us elsewhere \cite{ours}, one can expect an algebraic localization of such a CM wavepacket. For realistic parameters, localization occurs on a timescale of seconds, making the experimental verification of the localization feasible. We refer the reader to \cite{ours} for details. \section{Conclusions} Using the Bogoliubov expansion and treating the zero modes non-perturbatively, we have shown in detail how to obtain the effective quantum Hamiltonian which governs the motion of the center of mass of a bright soliton in a weak and smooth potential that does not affect the soliton shape. When this potential is of the disorder type one may expect to observe Anderson localization of the CM motion. The optical speckle potential was considered as a realistic example. It turns out that the localization properties of the wavefunctions strongly depend on the sign of the potential (red or blue detuning). This indicates that, even for a weak potential, the applicability of the Born approximation is limited and quantitative predictions depend on higher-order correlation functions of the disorder potential. Anderson localization of the CM of a bright soliton should be experimentally observable. \section*{Acknowledgements} We are grateful for the privilege of delightful lively discussions with Cord M\"uller. Support within Polish Government scientific funds (for years 2008-2011 -- KS and 2009-2012 -- JZ) as a research project and by the Marie Curie ToK project COCOS (MTKD-CT-2004-517186) is acknowledged. The research has been conducted within the LFPPI network.
\section{Introduction} \label{sec:intro} Determining the star formation history of the Universe is a crucial part of understanding the formation and evolution of galaxies. Exploration of the global star formation history has two components: ({\it i}) measurement of the star formation density over time and ({\it ii}) understanding the physical processes that drive star formation. Here we use a large, moderate-depth spectroscopic survey to address both issues: ({\it i}) we determine the star formation density over the last 4\,Gyr using the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emission line as star formation indicator and ({\it ii}) we investigate the possible influence of galaxy interactions on the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. There is abundant observational evidence for an order of magnitude increase in the star formation density since redshift $z\sim 1-2$ \citetext{\citealp{Lilly96,Madau96}; and the compilations of \citealp{Hopkins04,Hopkins06}}. Major mergers, tidal interactions, gas removal from conversion into stars, and/or ram pressure stripping may explain the decrease in the star formation. The challenge is deciding which of these processes are important in quenching of star formation \citep{Bell05}. The decline in star formation density coincides with a rapid decrease in the characteristic luminosity of galaxies ($L^*$) in the rest-frame $U$-band \citep[e.g.][]{Ilbert05,Prescott09}. A decrease in the number of merging systems can explain the decrease of the characteristic luminosity $L^*$ \citep{Fevre00}. \citet{Sobral09} find a strong morphology-\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity relation for mergers and non-mergers. The characteristic luminosity $L^*$ defines a critical switch-over luminosity between the mergers and non-mergers; the mergers are more luminous. Studies of close pairs show that enhancement in the star formation rate is largest for galaxies in major pairs \citep[$|\Delta m| \lesssim 0.7 - 2$;][]{Woods06,Woods07,Ellison08} and that the average star formation rate in a galaxy increases with decreasing projected separation \citep{Li08}. Simulations of interacting and merging galaxies reveal that the interactions can trigger short powerful bursts of star formation by forcing substantial fractions of the gas into the central regions \citep{Mihos96}. Systematic effects dominate the comparison of star formation rates determined from different star formation indicators like the rest-frame ultra-violet (UV) and \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}. Hence, to study the variation of the star formation density with time, the use of a single star formation indicator is best. The rest-frame UV spectrum of a galaxy directly measures the population of newborn stars \citep[e.g.][]{Lilly96,Treyer98}. However, the rest-frame UV is strongly attenuated \citep[e.g.][]{Cardelli89,Calzetti00}. The most-direct optical indicator is the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emission line emitted by gas surrounding the embedded star forming region \citep[e.g.][]{Kennicutt98}. The \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line is also affected by attenuation--albeit less than the UV--which can be corrected using spectroscopy. 
Many surveys use narrowband filters \citep{Thompson96,Moorwood00,Jones01,Fujita03,Hippelein03,Ly07,Pascual07,Dale08,Geach08,Morioka08,Shioya08,Westra08,Sobral09} to determine the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function parameters over a range of redshifts. Despite the depth of the narrowband surveys, measurements of individual luminosity function parameters and the star formation density are not well-constrained. Narrowband surveys lack spectroscopy for the faint \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emitting galaxies. Thus, general assumptions about stellar absorption, extinction corrections, contributions by active galactic nuclei (AGNs), or interloper contamination need to be made for the sample as a whole rather than for each galaxy. These issues may lead to systematic uncertainties. \citet{Massarotti01} show that applying an average extinction correction introduces a systematic underestimate of the extinction-corrected star formation density. A spectroscopic survey does not suffer these limitations, although it is usually limited in its depth. Several spectroscopic \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} surveys exist \citep[e.g.][]{Gallego95,Tresse98,Sullivan00,Tresse02,Perez03,Shim09}. Both \citeauthor{Gallego95} and \citeauthor{Perez03} use the Universidad Complutense de Madrid (UCM) survey. This survey covers an extremely wide area on the sky (472\,\sq{}\degr{}). However, it is limited to a very low redshift ($z_\mathrm{max} \sim 0.045$). For their \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} survey, \citet{Sullivan00} use galaxies selected from UV imaging in a 2.2\,\sq{}\degr{} field. The other surveys have an area $\le$ 0.25\,\sq{}\degr{}. Thus, most surveys are too limited in volume to overcome cosmic variance. The Smithsonian Hectospec Lensing Survey (SHELS) is a spectroscopic survey covering 4\,\sq{}\degr{} on the sky to a limiting $R$-band magnitude $R_\mathrm{tot} = 20.3$ \citep{Geller05}. We use SHELS to obtain a consistent determination of the star formation history over the last 4\,Gyr based on the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emission line over a relatively large area and redshift range. The spectroscopy enables us to reduce systematic uncertainties by allowing an individual galaxy extinction correction. We can also remove individual AGNs rather than applying a global correction factor for contamination by AGNs as is done in narrowband surveys. We use the large survey area to determine the characteristic luminosity $L^*$ of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function and associated systematic uncertainties. We discuss the SHELS spectroscopic data in Section~\ref{sec:fieldselection}. In Section~\ref{sec:sampleselection} we introduce our \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} sample selection. We combine our $R$-band selected \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} sample with the narrowband \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} survey of \citet{Shioya08} in Section~\ref{sec:nbvbb} to obtain a jointly-determined \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function at $z\sim0.24$. Sections~\ref{sec:halfSHELS} and \ref{sec:sfd} discuss the derivation and evolution of the luminosity function and star formation density, respectively, over the past 4\,Gyr. We include an investigation of the influence of our selection criteria on the derivation of the luminosity function parameters. 
In Section~\ref{sec:properties} we examine the stellar age of the star forming galaxies and the influence of galaxy-galaxy interactions on these galaxies. We summarize our results in Section~\ref{sec:summary}. Throughout this paper we assume a flat Universe with $H_0 = 71$\,\ifmmode{\mathrm{km\,s^{-1}\,Mpc^{-1}}}\else{km\,s$^{-1}$\,Mpc$^{-1}$}\fi{}, $\Omega_{\rm m} = 0.27$ and $\Omega_{\Lambda} = 0.73$. All quoted magnitudes are on the AB-system and luminosities are in \ifmmode{\mathrm{erg\,s^{-1}}}\else{erg\,s$^{-1}$}\fi{}. \section{SHELS observations} \label{sec:fieldselection} We constructed the SHELS galaxy catalog from the $R$-band source list for the F2 field of the Deep Lens Survey \citep{Wittman02,Wittman06}. The DLS is an NOAO key program covering 20\,\sq{}\degr{} in five separate fields; the 4.2\,\sq{}\degr{} F2 field is centered at $\alpha = 09^h19^m32.4^s$ and $\delta = +30\degr{}00\arcmin{}00\arcsec{}$. We exclude regions around bright stars ($\sim$\,5\,\% of the total survey) resulting in an effective area of 4.0\,\sq{}\degr{}. We use surface brightness and magnitude to separate stars from galaxies. This selection removes some AGN. Photometric observations of F2 were made with the MOSAIC I imager \citep{Muller98} on the KPNO Mayall 4\,m telescope between 1999 November and 2004 November. The $R$-band exposures, all taken in seeing $ < 0.9\arcsec{}$ FWHM, are the basis for the SHELS survey. The effective exposure time is about 14,500 seconds and the 1\,$\sigma$ surface brightness limit in $R$ is 28.7 magnitudes per square arcsecond. \citet{Wittman06} describe the reduction pipeline. We acquired spectra for the galaxies with the Hectospec fiber-fed spectrograph \citep{Fabricant98,Fabricant05} on the MMT from 2004 April 13 to 2007 April 20. The spectrograph is fed by 300 fibers that can be positioned over a 1\degr{} field. Roughly 30 fibers per exposure are used to determine the sky. The Hectospec observation planning software \citep{Roll98} enables efficient acquisition of a magnitude limited sample. The SHELS spectra cover the wavelength range $\lambda = 3,500 - 10,000$\,\AA{} with a resolution of $\sim$6\,\AA{}. Exposure times ranged from 0.75~hours to 2~hours for the lowest surface brightness objects in the survey. We reduced the data with the standard Hectospec pipeline \citep{Mink07} and derived redshifts with RVSAO \citep{Kurtz98} with templates constructed for this purpose \citep{Fabricant05}. We have 1,468 objects that have been observed twice. These repeat observations imply a mean internal error of 56\,\ifmmode{\mathrm{km\,s^{-1}}}\else{km\,s$^{-1}$}\fi{} for absorption-line objects and 21\,\ifmmode{\mathrm{km\,s^{-1}}}\else{km\,s$^{-1}$}\fi{} for emission-line objects \citep[see also][]{Fabricant05}. \citet{Fabricant08} describe the technique we use for photometric calibration of the Hectospec spectra based on the particularly stable instrument response. For galaxies in common between SHELS and SDSS, the normalized \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line fluxes agree well in spite of the difference in fiber diameters for the Hectospec (1\farcs5) and the SDSS (3\arcsec{}). For high-signal-to-noise SHELS spectra, the typical uncertainties in emission line fluxes are 18\,\%. SHELS includes 9,825 galaxies to the limiting apparent magnitude. 
The overall completeness of the redshift survey to a total\footnote{The total magnitude is the SExtractor \citep{Bertin96} {\sc mag\_auto} as opposed to an aperture magnitude.} $R$-band magnitude of $R_\mathrm{tot} \le 20.3$ is 97.7\,\%, i.e. 9,595 galaxies have a redshift measured; the differential completeness at the limiting magnitude is 94.6\,\%. The 230 objects without redshifts are low surface brightness and/or faint objects, or objects near the survey corners and edges. M.~J.~Kurtz et al. (2010, in preparation) include a detailed description of the full redshift survey. The SHELS survey also includes 1,852 galaxies with $20.3 < R \leq 20.6$, for which we have measured a redshift; the total sample of galaxies with $20.3 < R \leq 20.6$ is 3,590, i.e. the survey is 52\,\% complete in this magnitude interval. The completeness is patchy across the field. \begin{figure*}[tb] \centering \includeIDLfigPcustom[\textwidth]{12pt}{20pt}{112pt}{8pt}{fig.01.eps} \caption{Redshift cone diagram for the galaxies in the final sample: $R_\mathrm{tot} \leq 20.3$, S/N$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}} > 5$ and $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}. AGNs have not been removed from this sample. The large-scale structure is apparent with extended low-density regions and well-populated narrow structures.} \label{fig:cone} \end{figure*} The F2 field contains an atypical under-dense region at the lowest redshifts because the DLS fields are selected against nearby clusters at $z < 0.1$. We show the redshift distribution of our \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies in Figure~\ref{fig:cone}. \subsection{The $R$-band $k+e$-corrections} To calculate the absolute $R$-band magnitude $M_R$ we determine the appropriate $k+e$-corrections. The $k+e$-correction converts the observed magnitude to the rest frame of the galaxy, correcting for both redshift and evolution. We use the $k+e$-corrections for 9 types of galaxies: brightest cluster galaxy (BCG), elliptical (E), S0, Sa, Sb, Sbc, Sc, Sd and irregular (Irr) galaxies determined by J.~Annis \citetext{priv.~comm.}\footnote{The table with the corrections for the SDSS filter set as a function of galaxy type and redshift can be obtained from \url{http://home.fnal.gov/~annis/astrophys/kcorr/kcorr.html}.}. We use the corrections for the SDSS $r'$-filter as a function of redshift and \giCol{}-color because the SDSS $r'$-filter is similar to the $R$-filter used for the DLS. We obtain \giCol{} by cross-matching our catalog with SDSS DR6 \citep{SDSS6}. For the 61 galaxies not found in SDSS DR6 (these galaxies are either unresolved or below the surface brightness limit in SDSS) we convert their DLS \VRCol{} color to \giCol{}. For 42 galaxies we cannot determine or derive \giCol{} due to the proximity of another object; we assume that these are Sa galaxies. We interpolate the models in redshift to obtain the $k+e$-corrections for each galaxy type determined by its \giCol{} and redshift. \section{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} sample selection} \label{sec:sampleselection} We use SHELS to construct \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions over the redshift range $0.010 < z < 0.377$ (Table~\ref{tab:ngals}). Here, we describe the determination of our final emission-line luminosities and the discrimination between pure star-forming galaxies and AGNs.
\subsection{Emission-line measurements} \label{sec:contsub} \label{sec:emabmeasure} The emission-line flux emanating from star-forming regions is affected by the absorption-line spectrum from the underlying stellar population. The absorption mostly affects the measurements of the hydrogen Balmer lines. To measure the emission-line flux we thus remove the contribution of the stellar population. We use the \citet{Tremonti04} continuum subtraction method to correct for the stellar absorption rather than applying a constant, global correction \citep[e.g.,][]{Hopkins03}. The \citeauthor{Tremonti04} method removes the stellar continuum by fitting a linear combination of template spectra resampled to the correct velocity dispersion. The method also accounts for redshift and reddening. The template spectra are based on single stellar population models generated by the population synthesis code of \citet{Bruzual03}. We use models with 10 different ages (0.005, 0.025, 0.1, 0.3, 0.6, 0.9, 1.4, 2.5, 5 and 10 Gyr) at solar metallicity. We determine the emission-line fluxes from the continuum-subtracted spectra by integrating the line flux within a top-hat filter centered on the emission-line. We remove any local over- or under-subtraction of the continuum by subtracting the mean of the flux-density at both sides of the filter. Next, we determine the continuum level by taking the mean of the flux-density at wavelengths bluer and redder than the emission-line on the best-fit continuum model. Finally, we determine the absorption contribution of the underlying stellar population using the same top-hat filter but on the best-fit model; we remove the flux contributed by the continuum. \begin{figure}[tb] \centering \includeIDLfigPcustom{15pt}{15pt}{26pt}{7pt}{fig.02.eps} \caption{Fraction of light contained in the 1\farcs5 fibers as a function of redshift. We indicate the fraction of galaxies with more than 20\,\% of the light contained in the fiber ({\it horizontal dashed line}) for each redshift bin ({\it vertical dashed lines}) we use to construct the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions. The galaxies have $R_\mathrm{tot} \leq 20.3$, S/N$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}} > 5$ and $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}. The AGNs have been removed.} \label{fig:lightfraction} \end{figure} The Hectospec fibers have a fixed diameter of 1\farcs5. At all redshifts where \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} is observable (and in particular at the lowest redshifts) the fiber does not cover the entire galaxy. Hence, we use an aperture correction \label{sec:apcorr} \begin{equation} A = \pow{-0.4(m_\mathrm{total} - m_\mathrm{fiber})} \label{eq:apcorr} \end{equation} to correct for the fiber-covering fraction. Figure~\ref{fig:lightfraction} shows the fraction of light, $1/A$, contained in the fiber as a function of redshift. \citet{Kewley05} show that a spectrum measuring at least 20\,\% of the galaxy light avoids substantial scatter between the nuclear and integrated SFR measurements. The overall majority of galaxies from SHELS have a light-fraction $1/A \ge 20\,\%$ (Figure~\ref{fig:lightfraction}). \citet{Fabricant08} compared the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} and [{\sc Oii}]{} emission-line fluxes from SHELS with SDSS DR6 after making an aperture correction. 
They found excellent agreement between the two surveys, even though the fibers of the SDSS spectrograph are 3\arcsec{} in diameter. Moreover, most of the SDSS galaxies are at low redshift ($z \lesssim 0.14$) where we have the largest fraction of galaxies with a light-fraction less than 20\,\%. There is no dependence of final \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity on the light-fraction. We are thus confident that the use of these aperture corrections does not affect the final results even when the covering fraction is small. \begin{figure}[tb] \centering \includeIDLfigP{fig.03.eps} \caption{Attenuation at \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} as function of observed \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity. The black line indicates the least-absolute-deviates fit to the gray points. We indicate $A_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} = 0$ ({\it solid horizontal line}) and a commonly assumed value of $A_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi}$ \citep[$A_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} = 1$, {\it dashed horizontal line}; e.g.][]{Tresse98,Fujita03,Ly07,Sobral09}} \label{fig:extinction} \end{figure} \subsection{Extinction correction} \label{sec:extcorr} The light from star forming regions in a galaxy is often heavily attenuated. To determine the intrinsic SFR of a galaxy we must remove the effects of attenuation. We calculate the attenuation by comparing the observed value of the Balmer decrement (corrected for stellar absorption) with the theoretical value \citep[$f_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi/f_\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi = 2.87$ for $T\,=\,\pow{4}$\,K and case B recombination; Table~2 of][]{Calzetti01}. The intrinsic flux is $f_\mathrm{intr}(\lambda) = f_\mathrm{obs}(\lambda) \pow{0.4 A_\lambda}$, where $A_\lambda$ is the wavelength-dependent extinction. $A_\lambda$ is \begin{eqnarray} A_\lambda & = & k(\lambda) E(B-V)_\mathrm{gas} \nonumber\\ & = & k(\lambda)\frac{2.5 \log R_{\alpha \beta}}{k(\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi) - k(\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi)}~, \end{eqnarray} where $R_{\alpha\beta}$ is the ratio of the attenuated-to-intrinsic Balmer line ratios, $k(\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi) - k(\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi)$ is the differential extinction between the wavelengths of \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} and \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}, and $k(\lambda)$ is the extinction at wavelength $\lambda$. We apply the \citet{Calzetti00} extinction law, which has $k(V) = 4.05$, $k(\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi) = 3.325$ and $k(\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi) = 4.596$. \begin{figure}[tb] \centering \includeIDLfigPcustom{16pt}{16pt}{30pt}{21pt}{fig.04.eps} \caption{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity (corrected for extinction) as a function of redshift. The solid line indicates the additional selection criterion $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}. 
Vertical dashed lines show the edges of the redshift bins used to construct the SHELS \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions.} \label{fig:lhavz} \end{figure} Figure~\ref{fig:extinction} shows the attenuation as a function of observed \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity for galaxies with both \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} and \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} at a $\mathrm{S/N} > 5$. We use the relation between $A_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}$ and $L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}$ as determined from a least-absolute-deviates fit to the high-S/N data-points with observed luminosities $40.5 < \log L_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} < 41.5$ ({\it gray points}) for galaxies where $\mathrm{S/N}_\mathrm{\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi} \le$ 5 or where the observed equivalent width of \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{}, OEW$_\mathrm{\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi} \le$ 1\,\AA{} (uncorrected for stellar absorption). We limit OEW$_\mathrm{\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi}$ to avoid galaxies with excessively large attenuation resulting from a very small (noise-dominated) \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} flux compared to \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}. We assume that galaxies with $A_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} \le 0$ to have no attenuation and assign $A_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} = 0$ to these galaxies. \subsection{Sample definition} \label{sec:sampledef} Figure~\ref{fig:lhavz} shows the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity as a function of redshift. Below $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} = \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{} the number of galaxies decreases rapidly. We impose a constant \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux limit $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{} on the sample (after corrections for stellar absorption and attenuation) because we are only complete to this flux. We apply this criterion in addition to the magnitude limit ($R_\mathrm{tot} \leq 20.3$) and the $\mathrm{S/N}_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} > 5$ requirement. \subsection{AGN classification} \label{sec:agn} The presence of an active nucleus in a galaxy contributes to the (apparent) star-formation in the galaxy. For example, \citet{Pascual01} find that approximately 15\,\% of the luminosity density of the UCM survey \citep{Gallego95} results from galaxies identified as AGN. \citet{Westra08} find a 5\,\% contribution for their survey. \begin{figure}[tb] \centering \includeIDLfigPcustom{6pt}{16pt}{18pt}{5pt}{fig.05.eps} \caption{BPT \citep[after][]{Baldwin81} diagram for SHELS. The solid blue and dashed red lines indicate the demarcation of pure star formation from \citet{Kauffmann03} and of extreme starbursts from \citet{Kewley01}. We classify the galaxies as: pure star forming galaxies ({\it black diamonds}), AGNs ({\it red squares}), and composite galaxies ({\it blue triangles}). We indicate galaxies or AGNs with either \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} or \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} undetected, i.e. 
$\mathrm{S/N} < 3$, as lower limits ({\it green arrows}).} \label{fig:agn} \end{figure} We use the demarcations of pure star formation from \citet{Kauffmann03} and of extreme starbursts from \citet{Kewley01} to identify galaxies as pure star forming, AGN, or a combination (composite galaxies) based on the line ratios of \ifmmode{\mbox{[{\sc O\,iii}]}}\else{[{\sc O\,iii}]}\fi{}/\ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} and [{\sc N\,ii}]{}/\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} (Figure~\ref{fig:agn}). We select all galaxies with \oiii~$\lambda$5007{} and \nii~$\lambda$6585{} detected with a signal-to-noise ratio (S/N) $\ge 3$. If the galaxies have both \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} and \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} detected with S/N $\ge 3$, we identify them as pure star forming galaxies when their line ratios are below the \citeauthor{Kauffmann03} relation, pure AGNs when the ratios are above the \citeauthor{Kewley01} relation, and composites when they lie between the relations. For galaxies with either \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} or \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} undetected (S/N $<$ 3), we use the 3\,$\sigma$ value for the line flux to calculate the line ratios. These ratios are lower limits. We classify these galaxies as composite or AGN; some of the composite galaxies might be AGNs. We identify a separate class of broad-line AGNs. The width of these broad Balmer lines extends beyond the limited-width top-hat filter used for measuring the line fluxes (Section~\ref{sec:emabmeasure}). In some cases the \nii~$\lambda\lambda$6550,6585{} lines are not distinguishable from the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line in a spectrum with a very broad \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line. Inspecting each spectrum would be time-consuming. Hence, we fit the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} and \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} lines in the continuum-subtracted spectra in an automated way and individually inspect each candidate broad-line AGN. We fit both lines simultaneously with the assumption that the full width at half maximum (FWHM) of the line profile is the same for both lines. Candidate broad-line AGNs have a peak of both \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} and \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} $> 5 \times \pow{-18}$\,\fluxunits\,\ifmmode{\mathrm{\AA^{-1}}}\else{\AA$^{-1}$}\fi{} above the continuum residuals (which avoids the inclusion of noise peaks) and a FWHM of the Gaussian component of the line profile (we use a Gaussian convolved with the instrumental profile as our line profile) before convolution larger than 14\,\AA{}. From these candidates, we select the galaxies that are genuine broad-line AGNs. The fraction of galaxies identified as AGN and/or composite over the redshift ranges 0.010-0.100, 0.100-0.200, 0.200-0.300 and 0.300-0.377 for an \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity-limited sample ($\log L \ge 41.18$; lowest \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity at $z = 0.377$) is 5.9, 6.6, 5.3 and 5.2\,\%, respectively. The fraction of AGN is more or less constant with redshift. However, we cannot draw any conclusions about the evolution of the AGN fraction as a function of redshift. We removed stellar objects from the initial sample and thus may have inadvertently removed AGNs, particularly at greater redshifts.
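For reference, a compact Python sketch of this classification scheme (our own illustration; the demarcation curves are the standard ones of \citet{Kewley01} and \citet{Kauffmann03}, while the example line ratios are invented) could read:
\begin{verbatim}
import numpy as np

def bpt_class(x, y):
    """Classify a galaxy from x = log10([NII]6585/Halpha), y = log10([OIII]5007/Hbeta).

    Below the Kauffmann et al. (2003) curve: pure star formation;
    above the Kewley et al. (2001) extreme-starburst curve: AGN;
    in between: composite.
    """
    kauffmann = 0.61 / (x - 0.05) + 1.30 if x < 0.05 else -np.inf
    kewley = 0.61 / (x - 0.47) + 1.19 if x < 0.47 else -np.inf
    if y < kauffmann:
        return "star-forming"
    if y < kewley:
        return "composite"
    return "AGN"

# invented example line ratios (x, y)
for x, y in [(-0.6, -0.3), (-0.2, 0.2), (0.2, 0.8)]:
    print((x, y), "->", bpt_class(x, y))   # star-forming, composite, AGN
\end{verbatim}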
\section{The \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function at redshift $\sim$ 0.24} \label{sec:nbvbb} The recent advent of wide-field cameras on telescopes has aided searches for star forming galaxies by increasing the area (and hence volume) of narrowband surveys, e.g. \citet{Fujita03}, \citet{Ly07}, \citet{Shioya08}, \citet{Westra08}, and many more. This technique has recently been extended to the near-infrared, e.g. \citet{Sobral09}. A narrowband survey efficiently probes the faint end of the luminosity function which is hard to explore in a spectroscopic survey. In contrast, a spectroscopic survey can cover a larger volume and sample the rare luminous galaxies at the bright end of the luminosity function. Here, we combine the strength of a narrowband survey--the ability to go deep--with that of our broadband selected spectroscopic survey--coverage of a large volume--to determine a well-constrained luminosity function at $z\sim0.24$. For the narrowband survey we use the publicly available data from \citet[hereafter \citetalias{Shioya08}]{Shioya08} together with that of the Cosmic Evolution Survey \citep[COSMOS\footnote{The COSMOS catalog can be downloaded from \url{http://irsa.ipac.caltech.edu/data/COSMOS/tables/cosmos\_phot\_20060103.tbl.gz}};][]{Capak07} which formed the basis of the survey of \citetalias{Shioya08}. We use the spectroscopic survey of SHELS for the bright end of the luminosity function. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{20pt}{25pt}{29pt}{34pt}{fig.06.eps} \caption{Comparison of SHELS with \citetalias{Shioya08} for galaxies with the OEW$_{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} > 12$\,\AA{} not corrected for [{\sc N\,ii}]{} for both surveys. The data are: \citetalias{Shioya08} ({\it solid black circles}), SHELS in the redshift range of \citetalias{Shioya08} ({\it open red squares}), and SHELS at $0.01 < z < 0.15$ ({\it open blue triangles}). The \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity, $L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi$, for both surveys is corrected for [{\sc N\,ii}]{}.} \label{fig:shishelscomp} \end{figure*} The \citetalias{Shioya08} and SHELS surveys use different approaches (imaging versus spectroscopy). A comparison of the data allows a consistency check of the aperture corrections applied to the SHELS data. We construct the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function over the redshift range of \citetalias{Shioya08} ($0.233 < z < 0.251$) based on the catalog with emission-line fluxes determined in Section~\ref{sec:emabmeasure} (which already include corrections for underlying stellar absorption), redshifts, extinction corrections from Section~\ref{sec:extcorr}, and removal of composites and AGNs (Section~\ref{sec:agn}). Constraining SHELS to the same redshift range yields a sample of 192 SHELS galaxies at $0.233 < z < 0.251$. \subsection{Data comparison} \label{sec:comparison} Figure~\ref{fig:shishelscomp} shows the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity, \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} rest-frame equivalent width (REW), and the 3\arcsec{} aperture absolute $R$-band magnitude from \citetalias{Shioya08} ({\it solid black circles}) matched to the selection criteria of SHELS, $R_\mathrm{tot} \leq 20.3$. We also show the SHELS data ({\it open red squares}) for the redshift range covered by \citetalias{Shioya08} ($0.233 < z < 0.251$). 
To match \citetalias{Shioya08} we require an observed equivalent width of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} combined with [{\sc N\,ii}]{} $\ge 12$\,\AA{} (as per the selection criteria of \citetalias{Shioya08}), and $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}. The strengths of both surveys are immediately apparent. SHELS includes the most luminous galaxies, while \citetalias{Shioya08} probes the faint end of the luminosity function. \begin{figure}[tb] \centering \includeIDLfigPcustom{6pt}{16pt}{12pt}{8pt}{fig.07.eps} \caption{Comparison of the total \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity with the 3\arcsec{} aperture $R$-band magnitude of SHELS ({\it open red squares}) and the survey of \citetalias{Shioya08} ({\it solid black circles}). The galaxies have OEW$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} \ge 12$\,\AA{}, a flux limit of $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}, $R_\mathrm{tot} \leq 20.3$, and $0.233 < z < 0.251$. We show the median \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity for 0.5 magnitude-wide bins for SHELS ({\it red line}) and \citetalias{Shioya08} ({\it black line}). The SHELS galaxies shift toward greater $L_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi}$ at fixed $M_R$.} \label{fig:shishelsRHa} \end{figure} \begin{figure}[tb] \centering \includeIDLfigPcustom{16pt}{8pt}{25pt}{15pt}{fig.08.eps} \caption{Redshift distribution of the galaxies in zCOSMOS DR2 \citep[][{\it histogram}]{Lilly07}, the relative NB816 filter transmission curve used by \citetalias{Shioya08} ({\it gray solid line}), and the redshifts where the transmission of the NB816 filter is 50\,\% of its maximum ({\it dotted lines}).} \label{fig:zcosmos} \end{figure} \citetalias{Shioya08} measure magnitudes from a 3\arcsec{} aperture, scale them to the total $i'$-band magnitude, and calculate \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} fluxes. We measure fluxes in a similar way. We use spectra taken with a 1\farcs{}5-fiber aperture scaled to the total $R$-band magnitude. The data from the lower redshift range $0.01 < z < 0.15$ of SHELS (Figure~\ref{fig:shishelscomp}; {\it open blue triangles}) show that the relation between the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity and the 3\arcsec{} total $R$-band magnitude of the two surveys is similar. The scaling of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux from the limited-aperture magnitude to the total magnitude introduces no systematic biases and is consistent with \citetalias{Shioya08}. If we constrain the data of \citetalias{Shioya08} to $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}, a difference at the bright end becomes apparent (see Figure~\ref{fig:shishelsRHa}). There are more-luminous galaxies in SHELS than in \citetalias{Shioya08}. This difference results from two effects: ({\it i}) SHELS probes a larger volume, and ({\it ii}) the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} fluxes determined from narrowband surveys can easily underestimate the true line flux. For galaxies with redshifts that place the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line in the wings of the filter, the recovered \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux underestimates the true flux.
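To illustrate the size of this effect, the toy Python sketch below (ours; the flat-top filter profile with Gaussian wings and its width are crude stand-ins for the real NB816 transmission curve, not measured values) estimates the fraction of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line flux recovered as a function of redshift.
\begin{verbatim}
import numpy as np

lam_ha = 6563.0                       # rest wavelength of Halpha [Angstrom]
lam_c, fwhm = 8150.0, 120.0           # assumed filter centre and width [Angstrom]

def transmission(lam):
    """Toy narrowband profile: flat top with Gaussian wings (not the real NB816)."""
    half, wing = 0.35 * fwhm, 0.15 * fwhm
    x = np.abs(lam - lam_c)
    return np.where(x < half, 1.0, np.exp(-0.5 * ((x - half) / wing)**2))

def recovered_fraction(z, line_sigma=3.0):
    """Transmission-weighted fraction of a narrow emission line at redshift z."""
    centre = lam_ha * (1.0 + z)
    lam = centre + np.linspace(-5, 5, 201) * line_sigma
    line = np.exp(-0.5 * ((lam - centre) / line_sigma)**2)
    return np.sum(line * transmission(lam)) / np.sum(line)

for z in (0.236, 0.242, 0.248, 0.252):
    print(f"z = {z:.3f}: recovered fraction ~ {recovered_fraction(z):.2f}")
\end{verbatim}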
To examine the redshift distribution of the galaxies in \citetalias{Shioya08}, Figure~\ref{fig:zcosmos} shows the redshift distribution of the 10k zCOSMOS catalog\footnote{zCOSMOS DR2, which can be obtained from the ESO archives.} \citep{Lilly07} in combination with the filter transmission curve of the NB816 normalized to the maximum throughput\footnote{The filter profile is available at \url{http://www.naoj.org/Observing/Instruments/SCam/txt/NB816.txt}.}. Any galaxy with a redshift placing it in the wings of the narrowband filter has its \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux underestimated by far more than the 21\,\% \citetalias{Shioya08} use to correct their line fluxes. In the COSMOS field the galaxies tend to be at redshifts towards the red edge of the filter. In this case, the \nii~$\lambda$6585{} line (the stronger of the two [{\sc N\,ii}]{} lines that straddle \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}) barely contributes to the flux probed by the filter. Both the underestimation of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux and over-correction for [{\sc N\,ii}]{} can explain the difference in the distribution of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} fluxes in Figure~\ref{fig:shishelsRHa}. Despite this difference, we can still use the fainter galaxies from \citetalias{Shioya08} to determine the faint-end slope of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. The systematic underestimation of fluxes causes a shift in the luminosity function which affects the determination of the characteristic luminosity (i.e. the bright end), not the faint-end slope. \subsection{Derivation and fit} \label{sec:combinedlf} We fit a Schechter function \citep{Schechter76} to the SHELS and \citetalias{Shioya08} data. The Schechter function is \begin{equation} \phi(L) dL = \phi^* \left ( \frac{L}{L^*} \right ) ^{\alpha} \exp \left ( -\frac{L}{L^*} \right ) d \left ( \frac{L}{L^*} \right ), \end{equation} where $\alpha$ is the faint-end slope, $L^*$ is a characteristic luminosity, and $\phi^*$ is the normalization. Throughout this paper the units for the Schechter parameters $L^*$ and $\phi^*$ are \ifmmode{\mathrm{erg\,s^{-1}}}\else{erg\,s$^{-1}$}\fi{} and \perMpcQube{}, respectively. $\alpha$ is dimensionless. To combine the two data sets we use the non-parametric 1/V$_\mathrm{max}$ method \citep{Schmidt68} to construct the binned luminosity function from which we determine the Schechter parameters. The number density of galaxies for each luminosity bin $j$ with a width of $\Delta \log L$ is \begin{equation} \phi(L_j) \Delta \log L = \sum \limits _{i=1} ^{N_\mathrm{gal}} \frac{W(L_i)}{V_i}, \end{equation} where $W(L_i) = 1$ when the luminosity falls in bin $j$ and $W(L_i) = 0$ otherwise, and $V_i$ is the volume sampled by galaxy $i$. The uncertainties in the bins are Poisson errors \begin{equation} \sigma_{\phi(L_j)}\, \Delta \log L = \sqrt{\sum \limits _{i=1} ^{N_\mathrm{gal}} \left(\frac{W(L_i)}{V_i}\right)^2}. \end{equation} \begin{figure}[tb] \centering \includeIDLfigPcustom{3pt}{28pt}{29pt}{5pt}{fig.09.eps} \caption{The 1/V$_\mathrm{max}$ data-points for SHELS ({\it solid squares}), SHELS without the OEW$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} \ge 12$\,\AA{} criterion ({\it solid diamonds}), and \citetalias{Shioya08} ({\it open squares}). The galaxies have OEW$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} \ge 12$\,\AA{}, $R_\mathrm{tot} \leq 20.3$, and $0.233 < z < 0.251$.
The {\it thick solid line} indicates the combined fit of SHELS and \citetalias{Shioya08} with $\log \phi^*_\mathrm{comb} = -3.05 \pm 0.09$, $\alpha = -1.41 \pm 0.03$, and $\log L^* = 42.14 \pm 0.08$. The {\it thin lines} indicate the luminosity function for the SHELS and \citetalias{Shioya08} data separately with $\log \phi ^* _\mathrm{SHELS} = -3.11 \pm 0.09$ ({\it dashed line}) and $\log \phi ^* _\mathrm{S08} = -2.91 \pm 0.09$ ({\it dotted line}), respectively. Both luminosity functions also have $\alpha = -1.41 \pm 0.03$ and $\log L^* = 42.14 \pm 0.08$.} \label{fig:combinedLF} \end{figure} Figure~\ref{fig:combinedLF} shows the luminosity function for the combined data set ({\it thick solid line}). For both surveys we apply the selection criteria $R_\mathrm{tot} \leq 20.3$, OEW$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} \ge 12$\,\AA{}, and $0.233 < z < 0.251$. We determine the data-points using 1/V$_\mathrm{max}$ where the uncertainties are Poisson errors for both SHELS ({\it solid squares}) and \citetalias{Shioya08} ({\it open squares}). We fit a Schechter function with common $L^*$ and $\alpha$ to the combined SHELS and \citetalias{Shioya08} dataset. For this fit, we use the data with $\log L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} \ge 41.4$ for SHELS and $\log L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} \ge 39.6$ for \citetalias{Shioya08}. The joint fit yields a single value each for $\alpha$ and $L^*$: $\alpha = -1.41 \pm 0.03$ and $\log L^* = 42.14 \pm 0.08$. For those values, the normalization for SHELS is $\log \phi ^* _\mathrm{SHELS} = -3.11 \pm 0.09$ and for \citetalias{Shioya08} is $\log \phi ^* _\mathrm{S08} = -2.91 \pm 0.09$. We combine $\phi ^* _\mathrm{S08}$ and $\phi ^* _\mathrm{SHELS}$ using a volume-weighted average. Thus, $\log \phi ^* _\mathrm{comb} = \log[(1.5 \times \pow{-2.91} + 4.0 \times \pow{-3.11})/5.5] = -3.05$. This method of combining is one way to account for cosmic variance. We adopt $\alpha = -1.41 \pm 0.03$, $\log L^* = 42.14 \pm 0.08$, $\log \phi ^* _\mathrm{comb} = -3.05 \pm 0.09$ for comparison to other surveys. \section{The \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions from SHELS} \label{sec:halfSHELS} \begin{deluxetable}{llccccc} \tablewidth{0pt} \tablecaption{Number of galaxies satisfying each selection criterion.\label{tab:ngals}} \tablehead{ \multicolumn{2}{c}{Criterion} & \colhead{$0.01 \le z < 0.10$} & \colhead{$0.10 \le z < 0.20$} & \colhead{$0.20 \le z < 0.30$} & \colhead{$0.30 \le z < 0.38$} & \colhead{Total} } \startdata ({\it i}) & $R_\mathrm{tot} \leq 20.3$ & 461 & 1949 & 2746 & 2114 & 7270\\ ({\it ii}) & $\mathrm{S/N}_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} > 5$ & 420 & 1640 & 1857 & 1275 & 5192\\ ({\it iii}) & $\log f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge -15.5$ & 369 & 1441 & 1702 & 1186 & 4698\\ ({\it iv}) & pure star forming & 322 & 1127 & 1268 & \phn848 & 3565\\ \enddata \tablecomments{Each line is a subset of the line above; we apply the selection criteria sequentially in the order listed in the Table.} \end{deluxetable} Here we use SHELS to determine the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions as a function of redshift. We can identify \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} in our spectra up to a redshift of $z_\mathrm{max} = 0.377$. We next examine the influence of the $R$-magnitude limited survey on the derivation of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function.
We limit our sample to $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{} (see Section~\ref{sec:sampledef}). Table~\ref{tab:ngals} lists the number of galaxies satisfying each selection criterion. Figure~\ref{fig:lhavz} shows the distribution of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity as a function of redshift and the redshift bins used to construct the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions. \subsection{Derivation and fit} \label{sec:styfit} Here we use the STY-method \citep{Sandage79}, a parametric estimation method, to determine the three Schechter parameters for each redshift bin. The STY-method identifies the luminosity function parameters that maximize the probability of obtaining the observed sample. The probability $\mathcal{P}$ is \begin{equation} \mathcal{P} = \prod \limits _{i=1} ^{N_\mathrm{gal}} \frac{\phi(L_i)}{\int ^\infty _{L(z_i)_{\mathrm{lim},i}} \phi(L) dL}, \end{equation} where $L(z)_{\mathrm{lim},i}$ is the faintest luminosity where galaxy $i$ at redshift $z_i$ is observable. We use a truncated-Newton method to maximize the natural logarithm of $\mathcal{P}$. \begin{figure*}[tb] \centering \includeIDLfigP[0.75\textwidth]{fig.10.eps} \caption{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions for several redshift bins for our pure star forming sample. The Schechter functions are derived using the STY-method with $\alpha$ fixed to $-1.2$. The data-points come from 1/V$_\mathrm{max}$ where the uncertainties are Poisson errors. Table~\ref{tab:schechter} lists the parameters.} \label{fig:schechter} \end{figure*} Table~\ref{tab:schechter} lists the fit-parameters, and Figure~\ref{fig:schechter} shows the results. We fit for $\alpha$, $L^*$, and $\phi^*$ ({\it dashed lines}). We also use a fixed $\alpha = -1.20$ ({\it solid lines}). This fixed value represents the slope over the redshift range $0.05 < z < 0.20$ for SHELS. This range has a large enough volume to sample the bright end of the luminosity function, while still having galaxies faint enough to determine the faint end slope. We do not consider this large redshift range in our further analysis. Narrowband surveys apply a correction to the total luminosity density for galaxies hosting an AGN \citep[e.g.][]{Fujita03,Ly07,Shioya08,Westra08}. A large fraction of these surveys have little or no spectroscopy. Thus there is no way to separate AGNs from star forming galaxies. SHELS enables a direct separation (see Section~\ref{sec:agn}). We derive the Schechter parameters for the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emitting galaxies with the AGNs removed (Table~\ref{tab:schechter}). Removal of AGNs moves $L^*$ slightly fainter and reduces the normalization because AGNs are systematically in more luminous galaxies. Failure to account for this bias introduces a systematic offset. 
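
For concreteness, a minimal sketch of such an STY fit is given below. It is not the actual SHELS fitting code: the function and variable names are ours, $\phi^*$ is assumed to be fixed afterwards from the observed number density, and the truncated-Newton step is realized with SciPy's TNC minimizer as one possible implementation of the method mentioned above.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize
from scipy.special import gamma, gammaincc

def upper_gamma(s, x):
    """Upper incomplete gamma Gamma(s, x) for x > 0, extended to s <= 0 via
    Gamma(s, x) = (Gamma(s+1, x) - x**s * exp(-x)) / s (avoid s = 0 exactly)."""
    if s > 0:
        return gamma(s) * gammaincc(s, x)
    return (upper_gamma(s + 1.0, x) - x**s * np.exp(-x)) / s

def sty_neg_log_like(params, L, L_lim):
    """-ln P for the STY estimator with a Schechter phi(L).
    params = (alpha, log10 L*); phi* cancels in the likelihood and is
    fixed afterwards by matching the observed number density."""
    alpha, log_Lstar = params
    Lstar = 10.0 ** log_Lstar
    x, x_lim = L / Lstar, L_lim / Lstar
    ln_num = alpha * np.log(x) - x - np.log(Lstar)    # ln phi(L_i) up to phi*
    ln_den = np.log(upper_gamma(alpha + 1.0, x_lim))  # ln integral from L_lim,i
    return -np.sum(ln_num - ln_den)

def sty_fit(L, L_lim, p0=(-1.2, 42.0)):
    """Maximize ln P with a truncated-Newton method (SciPy's TNC)."""
    res = minimize(sty_neg_log_like, p0,
                   args=(np.asarray(L), np.asarray(L_lim)),
                   method="TNC", bounds=[(-1.9, -0.1), (40.0, 44.0)])
    return res.x   # (alpha, log10 L*)

# usage: alpha_hat, log_Lstar_hat = sty_fit(L_sample, L_lim_per_galaxy)
\end{verbatim}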
\begin{deluxetable}{lcccccc} \tablewidth{0pt} \tablecaption{Parameters for the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions.\label{tab:schechter}} \tablehead{ \colhead{} & \multicolumn{3}{c}{fixed $\alpha$} & \multicolumn{3}{c}{unconstrained $\alpha$}\\ \colhead{redshift range} & \colhead{$\alpha$} & \colhead{$\log L^*$} & \colhead{$\log \phi^*$} & \colhead{$\alpha$} & \colhead{$\log L^*$} & \colhead{$\log \phi^*$} } \tablewidth{0pt} \startdata \multicolumn{7}{c}{Pure star forming sample}\\ $0.010 < z < 0.100$\tablenotemark{a} & $-1.20$ & $41.72 \pm 0.10$ & $-2.86 \pm 0.04$ & $-1.22 \pm 0.06$ & $41.74 \pm 0.13$ & $-2.90 \pm 0.10$\\ $0.100 < z < 0.200$ & $-1.20$ & $42.09 \pm 0.04$ & $-2.97 \pm 0.02$ & $-0.87 \pm 0.05$ & $41.79 \pm 0.06$ & $-2.58 \pm 0.05$\\ $0.200 < z < 0.300$ & $-1.20$ & $42.52 \pm 0.04$ & $-3.29 \pm 0.01$ & $-0.71 \pm 0.07$ & $42.13 \pm 0.06$ & $-2.79 \pm 0.05$\\ $0.300 < z < 0.377$ & $-1.20$ & $42.83 \pm 0.03$ & $-3.59 \pm 0.01$ & $-0.50 \pm 0.06$ & $42.30 \pm 0.05$ & $-2.96 \pm 0.04$\\ $0.233 < z < 0.251$\tablenotemark{b} & \multicolumn{3}{c}{} & $-1.41 \pm 0.03$ & $42.14 \pm 0.08$ & $-3.05 \pm 0.09$\\ \multicolumn{7}{c}{Pure star forming sample including composites and AGNs}\\ $0.010 < z < 0.100$\tablenotemark{a} & $-1.20$ & $41.76 \pm 0.10$ & $-2.82 \pm 0.04$ & $-1.25 \pm 0.06$ & $41.85 \pm 0.13$ & $-2.93 \pm 0.10$\\ $0.100 < z < 0.200$ & $-1.20$ & $42.13 \pm 0.04$ & $-2.88 \pm 0.02$ & $-0.99 \pm 0.05$ & $41.92 \pm 0.06$ & $-2.61 \pm 0.05$\\ $0.200 < z < 0.300$ & $-1.20$ & $42.55 \pm 0.04$ & $-3.17 \pm 0.01$ & $-0.88 \pm 0.07$ & $42.26 \pm 0.06$ & $-2.80 \pm 0.05$\\ $0.300 < z < 0.377$ & $-1.20$ & $42.83 \pm 0.03$ & $-3.44 \pm 0.01$ & $-0.66 \pm 0.06$ & $42.39 \pm 0.05$ & $-2.90 \pm 0.04$\\ \enddata \tablenotetext{a}{The redshift range $0.010 < z < 0.100$ covers an atypical under-dense region (Section~\ref{sec:fieldselection}).} \tablenotetext{b}{Combined SHELS and \citetalias{Shioya08} result. The quoted uncertainties are the formal uncertainties of the fit.} \tablecomments{For each redshift range, we list the determined Schechter parameters with fixed $\alpha = -1.20$ ({\it left}) and $\alpha$ unconstrained ({\it right}) and their uncertainties. We also list the parameters for the Schechter fit for a pure star forming sample ({\it top}) and a sample where the AGNs and composites are included ({\it bottom}). The pure star forming sample and the sample including composites and AGNS are based on criterion ({\it iv}) and ({\it iii}), respectively, in Table~\ref{tab:ngals}. The quoted uncertainties are calculated in Section~\ref{sec:styfit}.} \end{deluxetable} \begin{figure}[tb] \centering \includeIDLfigP[\columnwidth]{fig.11.eps} \caption{The distributions of $\log L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi$ as function of $M_R$ for each redshift bin. We use these distributions to assign an \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity to the observed absolute magnitude and to assign an absolute magnitude to a simulated \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity. This procedure enables us to determine the final uncertainties in the Schechter parameters and to study the influence of the survey selection criterion $R_\mathrm{tot} \le 20.3$.} \label{fig:system} \label{fig:propersimdist} \end{figure} We determine the final uncertainties in the Schechter parameters by constructing 1,000 sets of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosities. 
We simulate the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosities by converting the observed absolute magnitudes into \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosities using the distribution of $L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}$ as a function of $M_R$ for each redshift bin from Figure~\ref{fig:system}. We redetermine the Schechter parameters for each simulation using the STY-method. The 1\,$\sigma$ spread in the redetermined parameters is the final uncertainty which includes the formal fitting uncertainty, uncertainties resulting from the size of the sampled volume, and the uncertainties in the observed \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity. Table~\ref{tab:schechter} lists the Schechter parameters and their uncertainties. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{18pt}{16pt}{15pt}{5pt}{fig.12.eps} \includeIDLfigPcustom[0.24\textwidth]{90pt}{435pt}{40pt}{70pt}{fig.12.legend.eps} \caption{The three Schechter parameters as a function of redshift and look-back time for SHELS for pure star forming galaxies ({\it red, green, blue, and cyan points}), SHELS combined with \citetalias{Shioya08} ({\it magenta point}), and surveys at similar redshifts that also use \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} as star formation indicator ({\it black points}). } \label{fig:parevolution} \end{figure*} \subsection{Parameter evolution and impact of selection criteria} \label{sec:parevolution} Figure~\ref{fig:parevolution} compares the Schechter function parameters of SHELS with fixed $\alpha = -1.20$ with other \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} surveys. Evolution in the characteristic luminosity $L^*$ is clearly visible. The selection of galaxies by apparent $R$-band magnitude does not yield the same sample of galaxies obtained when selecting by \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux/luminosity. Because we determine the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function from an $R$-selected spectroscopic survey, we must investigate the potential systematic effects of the selection criteria ($R_\mathrm{tot} \leq 20.3$ and $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}) on the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. From a given luminosity function ($\alpha = -1.20$, $\log L^* (\ifmmode{\mathrm{erg\,s^{-1}}}\else{erg\,s$^{-1}$}\fi) = 42.00$, and $\log \phi^* (\perMpcQube) = -2.75$) we construct a sample of galaxies with a flux $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}. We choose these parameters because they are close to our recovered parameters from SHELS. Furthermore, we keep the parameters constant over our redshift range to test whether our selection criteria introduce an artificial evolution to the parameters. We assume a uniform galaxy distribution in a comoving volume. We assign each simulated galaxy an absolute magnitude using the distribution of absolute magnitude as function of $L_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi}$ in Figure~\ref{fig:propersimdist}. We calculate the apparent magnitude using the allocated redshift and a $k+e$-correction based on the observed distribution as a function of redshift. We apply the survey selection criterion of $R_\mathrm{tot} \leq 20.3$ and recover the Schechter parameters using the STY-method. 
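
A schematic version of this forward simulation is sketched below. It is a simplification of the procedure described above: it assumes a flat $\Lambda$CDM cosmology ($H_0 = 70$, $\Omega_m = 0.3$) chosen only for illustration, replaces the observed $L_{\mathrm{H}\alpha}$--$M_R$ distributions of Figure~\ref{fig:propersimdist} with a toy linear relation plus scatter, and ignores the $k+e$ corrections that the actual simulations apply.
\begin{verbatim}
import numpy as np
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)     # assumed cosmology, illustration only
F_LIM, R_LIM = 10.0 ** -15.5, 20.3

def sample_schechter(n, alpha, log_Lstar, log_Lmin, rng):
    """Draw luminosities from phi(L) ~ (L/L*)^alpha exp(-L/L*) above Lmin
    by inverting a numerically tabulated CDF."""
    x = np.logspace(log_Lmin - log_Lstar, 2.0, 4000)          # L/L* grid
    pdf = x ** alpha * np.exp(-x)
    cdf = np.concatenate([[0.0],
                          np.cumsum(0.5 * (pdf[1:] + pdf[:-1]) * np.diff(x))])
    return 10.0 ** log_Lstar * np.interp(rng.random(n), cdf / cdf[-1], x)

def mock_catalogue(n_gal, z_lo, z_hi, alpha=-1.20, log_Lstar=42.0, seed=0):
    """Mock sample: uniform in comoving volume, Schechter luminosities, a toy
    L_Halpha -> M_R relation, then the flux and R-band cuts of the survey."""
    rng = np.random.default_rng(seed)
    z_grid = np.linspace(z_lo, z_hi, 256)
    v_grid = cosmo.comoving_volume(z_grid).value
    z = np.interp(rng.uniform(v_grid[0], v_grid[-1], n_gal), v_grid, z_grid)
    d_cm = cosmo.luminosity_distance(z).to("cm").value
    L = sample_schechter(n_gal, alpha, log_Lstar, 39.0, rng)
    flux = L / (4.0 * np.pi * d_cm ** 2)
    # toy magnitude assignment with scatter (stands in for Figure 11)
    M_R = -2.1 * (np.log10(L) - 42.0) - 21.0 + rng.normal(0.0, 0.7, n_gal)
    m_R = M_R + 5.0 * np.log10(d_cm / 3.086e18) - 5.0   # k+e corrections ignored
    keep = (flux >= F_LIM) & (m_R <= R_LIM)
    return L[keep], z[keep]   # to be re-fit with an STY-type estimator
\end{verbatim}
Refitting many such mock catalogues with the STY estimator, bin by bin in redshift, is what produces the histograms of Figure~\ref{fig:propersimresult}.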
Figure~\ref{fig:propersimresult} and Table~\ref{tab:propersim} give the results. \begin{deluxetable}{lcccccc} \tablewidth{0pt} \tablecaption{Recovered Schechter parameters for simulated \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions.\label{tab:propersim}} \tablehead{ \colhead{} & \multicolumn{3}{c}{fixed $\alpha$} & \multicolumn{3}{c}{unconstrained $\alpha$}\\ \colhead{redshift range} & \colhead{$\alpha$} & \colhead{$\log L^*$} & \colhead{$\log \phi^*$} & \colhead{$\alpha$} & \colhead{$\log L^*$} & \colhead{$\log \phi^*$} } \startdata input parameters & $-1.20$ & $42.00$ & $-2.75$ & $-1.20$ & $42.00$ & $-2.75$\\ $0.010 \le z < 0.100$ & $-1.20$ & $42.01 \pm 0.07$ & $-2.75 \pm 0.02$ & $-1.18 \pm 0.04$ & $41.98 \pm 0.10$ & $-2.71 \pm 0.08$\\ $0.100 \le z < 0.200$ & $-1.20$ & $42.06 \pm 0.03$ & $-2.77 \pm 0.01$ & $-1.07 \pm 0.03$ & $41.93 \pm 0.04$ & $-2.60 \pm 0.04$\\ $0.200 \le z < 0.300$ & $-1.20$ & $42.18 \pm 0.02$ & $-2.84 \pm 0.01$ & $-0.73 \pm 0.05$ & $41.85 \pm 0.03$ & $-2.43 \pm 0.03$\\ $0.300 \le z < 0.377$ & $-1.20$ & $42.33 \pm 0.02$ & $-2.95 \pm 0.01$ & $-0.17 \pm 0.07$ & $41.81 \pm 0.03$ & $-2.38 \pm 0.01$\\ \enddata \end{deluxetable} \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.495\textwidth]{6pt}{14pt}{30pt}{5pt}{fig.13.left.eps} \includeIDLfigPcustom[0.495\textwidth]{6pt}{14pt}{30pt}{5pt}{fig.13.right.eps} \caption{Histogram of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function parameters determined from simulations to test the influence of the $R$-band selection criteria on the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. Each histogram ({\it red, green, blue, cyan}) indicates the recovered parameters for a different redshift bin. Vertical dotted lines indicate the input parameters of the luminosity function to the simulations ($\alpha = -1.20$, $\log L^* = 42.00$, and $\log \phi^* = -2.75$). We indicate the results with $\alpha$ fixed at $-1.20$ ({\it left}) and with $\alpha$ as a free parameter ({\it right}). Table~\ref{tab:propersim} also shows the results.} \label{fig:propersimresult} \end{figure*} When we keep $\alpha$ fixed in fitting the simulations, there is an artificial trend of increasing $L^*$ and decreasing $\phi^*$ with increasing redshift. This trend results from selective removal of the fainter \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies. The removal would otherwise result in $\alpha$ decreasing (which we also show for $\alpha$ unconstrained). Thus, $L^*$ and $\phi^*$ should be corrected for the fact that $\alpha$ is kept fixed at a steeper value than would be fit. However, the simulated trend in $L^*$ and $\phi^*$ is far smaller ($\Delta \log L^* = 0.33$, $\Delta \log \phi^* = -0.20$) than we determine from the observations ($\Delta \log L^* = 1.11$, $\Delta \log \phi^* = -0.73$). Thus, the evolution in $L^*$ in Figure~\ref{fig:parevolution} is real. When we fit the simulations for all three parameters (unconstrained $\alpha$ in Table~\ref{tab:propersim}), the decrease of $\alpha$ ($\Delta \alpha = -1.01$) with increasing redshift is close to that of the observations ($\Delta \alpha = -0.72$; Table~\ref{tab:schechter}). Moreover, the faint-end slope from our lowest redshift bin ($0.010 < z < 0.100$; where the faint-end of the luminosity function is well-sampled) is consistent with that of the combined luminosity function of SHELS and \citetalias{Shioya08} at $z \sim 0.24$ (Section~\ref{sec:combinedlf}) within the uncertainties. 
Hence, we have {\it no} evidence for evolution of the faint-end slope over the redshift range covered by SHELS. The trend observed with the faint-end slope unconstrained is the result of our selection criteria. We also notice an artificial trend in $L^*$ with increasing redshift for a constant luminosity function (although smaller than with $\alpha$ constrained) opposite to the trend observed, and opposite to the trend we derive fitting for $\alpha = -1.20$. To compensate for a shallow faint-end slope and a slight decrease in $L^*$ with redshift, $\phi^*$ increases in these simulations. We do not consider $\phi^*$ because it only normalizes the luminosity function and does not determine the shape of it, unlike $\alpha$ and $L^*$. The normalization is dependent on the number of galaxies sampled. Because this number is heavily influenced by the distribution of galaxies (i.e. the large-scale structure, see Figure~\ref{fig:lhavz}), it is not possible to say anything meaningful about any trend in $\phi^*$ even with the area covered by SHELS. In summary, there is strong evidence for evolution in $L^*$ and no evidence for evolution in $\alpha$ over $0.100 < z < 0.377$. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{6pt}{14pt}{18pt}{9pt}{fig.14.eps} \caption{The luminosity function parameters for the input \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function used to determine the observed \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity functions as shown in Figure~\ref{fig:schechter}. The contours show the 68.3, 95.4, and 99.7\,\% confidence intervals based on the fit of the output \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function.} \label{fig:truehalf} \end{figure*} \subsection{The ``true'' \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function} \label{sec:truehalf} In Section~\ref{sec:parevolution} we investigate the influence of our selection criteria on an assumed luminosity function. We can extend this application to determine the ``true'' \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. We construct a sample of galaxies with a flux $f_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi} \ge \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{} for a grid of given values of $\alpha$ and $L^*$. We constrain $\phi^*$ by the number of observed galaxies in each redshift bin. These choices are our input Schechter parameters. We apply our magnitude selection of $R_\mathrm{tot} \le 20.3$. Then we determine the output: the parameters one would recover using the STY-method. We take the median of the recovered Schechter parameters as our final output Schechter parameters. With these final output parameters we determine the likelihood for our observations (Figure~\ref{fig:schechter}). We also show the input parameters and the confidence intervals from the likelihood-determination for each redshift bin (Figure~\ref{fig:truehalf}). Again, we find a significant evolution of $L^*$. $L^*$ increased towards higher redshifts, regardless of the inclusion of the lowest redshift results. Furthermore, there is no significant evolution in the faint end slope of the intrinsic \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. These results confirm the findings in the Section~\ref{sec:parevolution}. The evolution in $L^*$ is real and there is no evidence for evolution in $\alpha$. 
\begin{figure}[tb] \centering \includeIDLfigPcustom{10pt}{0pt}{26pt}{7pt}{fig.15.eps} \caption{Logarithm of the selection function for each of the redshift bins sampled by SHELS as a function of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity relative to $L^*$ in the respective redshift bin.} \label{fig:haselfie} \end{figure} Given the input \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function, we can calculate the selection function for each redshift bin. We define the selection function as the ratio of the measured data points in Figure~\ref{fig:schechter} to the intrinsic or ``true'' \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. The selection function measures the effect of our $R \le 20.3$ selection criterion. We show the selection functions in Figure~\ref{fig:haselfie}. At $z\sim0.24$ we also consider the data of \citetalias{Shioya08}. These data should be complete over the luminosity range covered by SHELS. We thus treat the data from the narrowband survey as the intrinsic \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function and take the ratio between the \citetalias{Shioya08} and the SHELS data as an estimate of the selection function (Figure~\ref{fig:haselfie}; {\it magenta long-dashed line}). For consistency with the other redshift bins of SHELS we remove the OEW$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} \ge 12$\,\AA{} constraint in this calculation ({\it solid diamonds} in Figure~\ref{fig:combinedLF}). \begin{figure}[tb] \centering \includeIDLfigPcustom[\columnwidth]{6pt}{10pt}{18pt}{9pt}{fig.16.eps} \caption{Redshifts from zCOSMOS DR2 of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} candidates of \citetalias{Shioya08} ({\it solid histograms}) and all galaxies ({\it dotted histogram}), and the relative NB816 filter transmission curve for \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} ({\it red line}) and both [{\sc S\,ii}]{} lines ({\it blue lines}). Note the \citetalias{Shioya08} galaxies within the [{\sc S\,ii}]{} sensitive redshift range.} \label{fig:zcosmosSII} \end{figure} \begin{figure}[tb] \centering \includeIDLfigPcustom[\columnwidth]{6pt}{16pt}{18pt}{9pt}{fig.17.eps} \caption{Fraction of candidates with a redshift from zCOSMOS DR2 corresponding to \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} ($z\sim0.24$; {\it red histogram}) or [{\sc S\,ii}]{} ($z\sim0.21$; {\it blue histogram}) as a function of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity calculated from the narrowband survey of \citetalias{Shioya08}.} \label{fig:zcosmosFraction} \end{figure} The selection function computed at $z\sim0.24$ using \citetalias{Shioya08} should lie on top of the SHELS selection function at $0.200 < z < 0.300$, but it does not. Thus either SHELS underestimates--or \citetalias{Shioya08} overestimates--the number of faint \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies. Either the $R$-band magnitude vs. \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}-luminosity relation between SHELS and \citetalias{Shioya08} must be significantly different, or there is another selection effect not yet considered. Figure~\ref{fig:shishelscomp} rules out the former: the two surveys clearly overlap and do not show a significantly different $R$-\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} relation.
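
Operationally the selection function is nothing more than the ratio of binned 1/V$_\mathrm{max}$ estimates to the intrinsic Schechter model. A minimal sketch, with synthetic ``observed'' points standing in for the real measurements and with the $0.200 < z < 0.300$ fixed-$\alpha$ parameters from Table~\ref{tab:schechter}, is:
\begin{verbatim}
import numpy as np

def schechter_per_dex(logL, alpha, log_Lstar, log_phistar):
    """Schechter function per dex: ln(10) * phi* * x**(alpha+1) * exp(-x),
    with x = L / L*."""
    x = 10.0 ** (logL - log_Lstar)
    return np.log(10.0) * 10.0 ** log_phistar * x ** (alpha + 1.0) * np.exp(-x)

def selection_function(logL, phi_obs, alpha, log_Lstar, log_phistar):
    """Ratio of observed (1/Vmax) points to the intrinsic Schechter model."""
    return phi_obs / schechter_per_dex(logL, alpha, log_Lstar, log_phistar)

# synthetic 'observed' points, suppressed at the faint end to mimic the R cut
logL = np.arange(40.0, 43.1, 0.2)
phi_true = schechter_per_dex(logL, -1.20, 42.52, -3.29)
phi_obs = phi_true * np.clip((logL - 40.5) / 2.0, 1e-3, 1.0)
print(np.log10(selection_function(logL, phi_obs, -1.20, 42.52, -3.29)))
\end{verbatim}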
A selection effect that removes some galaxies from the SHELS sample is the OEW$_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi+[{\sc N\,ii}]} \ge 12$\,\AA{} criterion. Figure~\ref{fig:combinedLF} shows the effect of this criterion; it slightly increases the number of galaxies at the faint-end of the SHELS luminosity function. This bias is, however, insufficient to explain the differences in the SHELS- and \citetalias{Shioya08}-based selection functions. The narrowband survey of \citetalias{Shioya08} may overestimate the number of fainter \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies. Even though consistent within the uncertainties, the faint-end slope of the combined luminosity function at $z \sim 0.24$ is somewhat steeper ($\alpha \sim -1.4$) than at our lowest redshift bin ($\alpha \sim -1.2$) causing a very steep selection function at $z \sim 0.24$. We can determine the redshift of several narrowband-survey candidates of \citetalias{Shioya08} using zCOSMOS DR2. Figure~\ref{fig:zcosmosSII} shows the redshifts of the galaxies from \citetalias{Shioya08} with confirmed redshifts near $z \sim 0.24$. Several galaxies have redshifts outside the wavelength range where the NB816 filter is sensitive to \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} at $z \sim 0.24$ ({\it red line}). About 25\,\% of the candidates with spectroscopy are at a lower redshift $z \sim 0.21$. This redshift correspond to the wavelength range where the NB816 filter is sensitive to the \sii~$\lambda\lambda$6733,6718{} doublet ({\it blue lines}). These galaxies belong to an overdensity in the large-scale structure at $z \sim 0.22$ ({\it dotted histogram} in Figure~\ref{fig:zcosmosSII} and {\it solid histogram} in Figure~\ref{fig:zcosmos}). Figure~\ref{fig:zcosmosFraction} shows the fraction of galaxies with redshifts corresponding to [{\sc S\,ii}]{} or \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} as a function of the \citetalias{Shioya08} \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity. The figure suggests that the fraction of [{\sc S\,ii}]{} galaxies increases towards fainter luminosities. This effect could produce an excess of faint \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies in the \citetalias{Shioya08} survey and thus could explain the difference in the selection functions implied by the SHELS simulations and the comparisons of SHELS and \citetalias{Shioya08}. Color-color selections are sufficient to remove contaminating galaxies from the narrowband survey at higher redshifts. The contaminants include \ifmmode{\mathrm{H\beta}}\else{H$\beta$}\fi{} and \ifmmode{\mbox{[{\sc O\,iii}]}}\else{[{\sc O\,iii}]}\fi{} at $z \sim 0.6-0.7$, [{\sc Oii}]{} at $z \sim 1.2$ and Ly$\alpha${} at $z \sim 5.7$ for a narrowband survey at $\sim 8150$\,\AA{} \citep[e.g.][]{Fujita03,Ly07}. However, it is impossible to distinguish \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies from [{\sc S\,ii}]{} galaxies by color \citep{Westra08}. The contamination is survey dependent because it depends on the details of the large-scale structure. The \citetalias{Shioya08} survey is a case where there is a peak in the redshift distribution exactly where the narrowband survey is sensitive to [{\sc S\,ii}]{}. \subsection{Volume dependence} \label{sec:volumeeffect} There is a large spread in the parameters determined from different surveys around $z \sim 0.24$ (Figure~\ref{fig:parevolution}) accompanied by very large uncertainties. 
All of these surveys \citep{Fujita03,Hippelein03,Ly07,Westra08} use a single or multiple narrowband filters over $\sim 300 - 950\,\sq \arcmin$. \citetalias{Shioya08} uses $5540\,\sq \arcmin$. Typical volumes are $0.5-1 \times \pow{4}\,\ifmmode{\mathrm{Mpc^{3}}}\else{Mpc$^{3}$}\fi$; \citetalias{Shioya08} covers $3 \times \pow{4}\,\ifmmode{\mathrm{Mpc^{3}}}\else{Mpc$^{3}$}\fi$. The smaller volumes are not large enough to constrain the bright end of the luminosity function. We discussed \citetalias{Shioya08} in detail in Section~\ref{sec:nbvbb}. To examine the impact of small volumes, we split SHELS into 16 separate pieces to match the area ($\sim 0.25\,\sq \degr$) of typical narrowband surveys that probe redshift $\sim 0.24$. Table~\ref{tab:shotnoisetest} gives the median recovered parameters and the inter-quartile range. For $\alpha = -1.20$ the recovered parameters are almost identical to those of the entire field. The inter-quartile range is large, even when compared to the uncertainties in Table~\ref{tab:schechter}. If we combine the 16 ``surveys'', we would have to increase the uncertainties in Table~\ref{tab:schechter} because of the smaller number of galaxies, i.e. an increase in shot-noise. This uncertainty easily explains the scatter of the parameters observed at $z\sim0.24$. It underscores the need for large-volume surveys to constrain the bright end of the luminosity function. To constrain $\alpha$ it is more important to have a deep survey and to span a large range of luminosities rather than to cover a large area. The data from \citet{Ly07} demonstrate this point. As discussed in \citetalias{Shioya08}, the data-points from \citetalias{Shioya08} and \citeauthor{Ly07} are quite similar at the fainter luminosities, both in slope and in amplitude. Thus, the survey area of \citeauthor{Ly07}, i.e. $\sim 0.25\,\sq \degr$, can be large enough to constrain $\alpha$, and $\alpha$ only. Their area (volume) is too small to determine the bright end of the luminosity function \citepalias[see also][]{Shioya08} because they do not observe enough of the rare most-luminous galaxies. To estimate the area required to constrain $L^*$ and $\phi^*$, we simulate many observed galaxies given a specific luminosity function at $0.233 < z < 0.251$ for different sized areas. We fit the parameters with fixed $\alpha = -1.20$. {\it On average}, the parameters are very well recovered. However, for the smaller areas the spread in the recovered parameters is large. We show the 1\,$\sigma$ spread around the mean, the median, the inter-quartile range, and minimum and maximum values of the recovered values for each area in Figure~\ref{fig:volumetest}. 
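
These volumes follow directly from the survey area and the width of the redshift slice. The short sketch below, for an assumed flat $\Lambda$CDM cosmology (the exact numbers depend on the adopted parameters), reproduces the order of magnitude of the volumes quoted above for the narrowband slice at $z \sim 0.24$.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # assumed cosmology, illustration only

def survey_volume(area_sq_arcmin, z_lo, z_hi):
    """Comoving volume of a survey of the given area between z_lo and z_hi."""
    sky_fraction = (area_sq_arcmin * u.arcmin**2 / (4.0 * np.pi * u.sr)).decompose()
    shell = cosmo.comoving_volume(z_hi) - cosmo.comoving_volume(z_lo)
    return (sky_fraction * shell).to(u.Mpc**3)

# a typical narrowband survey area versus that of S08, over 0.233 < z < 0.251
for area in (900.0, 5540.0):              # square arcminutes
    print(area, survey_volume(area, 0.233, 0.251))
\end{verbatim}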
\begin{deluxetable}{lcccccc} \tablewidth{0pt} \tablecaption{Median, upper and lower quartile range of Schechter parameters for 0.25\,\sq{}\degr{} subsets using SHELS.\label{tab:shotnoisetest}} \tablehead{ \colhead{} & \multicolumn{3}{c}{fixed $\alpha$} & \multicolumn{3}{c}{unconstrained $\alpha$}\\ \colhead{redshift range} & \colhead{$\alpha$} & \colhead{$\log L^*$} & \colhead{$\log \phi^*$} & \colhead{$\alpha$} & \colhead{$\log L^*$} & \colhead{$\log \phi^*$} } \startdata $0.010 < z < 0.100$ & $-1.20$ & $41.60 _{-0.17} ^{+0.23}$ & $-2.79 _{-0.13} ^{+0.06}$ & $-1.18 _{-0.10} ^{+0.19}$ & $41.58 _{-0.35} ^{+0.23}$ & $-2.76 _{-0.34} ^{+0.26}$\\ $0.100 < z < 0.200$ & $-1.20$ & $42.06 _{-0.09} ^{+0.10}$ & $-2.96 _{-0.13} ^{+0.07}$ & $-0.62 _{-0.32} ^{+0.10}$ & $41.63 _{-0.22} ^{+0.18}$ & $-2.57 _{-0.05} ^{+0.28}$\\ $0.200 < z < 0.300$ & $-1.20$ & $42.53 _{-0.19} ^{+0.13}$ & $-3.28 _{-0.12} ^{+0.08}$ & $-0.61 _{-0.16} ^{+0.15}$ & $42.07 _{-0.21} ^{+0.21}$ & $-2.75 _{-0.07} ^{+0.07}$\\ $0.300 < z < 0.377$ & $-1.20$ & $42.82 _{-0.05} ^{+0.04}$ & $-3.56 _{-0.09} ^{+0.05}$ & $-0.48 _{-0.12} ^{+0.17}$ & $42.27 _{-0.15} ^{+0.04}$ & $-2.96 _{-0.24} ^{+0.02}$\\ $0.233 < z < 0.251$ & $-1.20$ & $42.40 _{-0.22} ^{+0.22}$ & $-3.28 _{-0.09} ^{+0.07}$ & $-0.22 _{-0.44} ^{+0.21}$ & $41.91 _{-0.27} ^{+0.11}$ & $-2.79 _{-0.63} ^{+0.20}$\\ \enddata \end{deluxetable} \begin{figure}[tb] \centering \includeIDLfigPcustom{14pt}{16pt}{28pt}{5pt}{fig.18.eps} \caption{Box and whisker plot for the simulated surveys as a function of area. The surveys cover $0.233 < z < 0.251$ and have a limiting flux of $\pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{}. The gray box indicates the 1\,$\sigma$ around the mean, the dash indicates the median, the boxes indicates the inter-quartile range, and the whiskers indicate minimum and maximum values of recovered Schechter parameters (using the STY-method) from the simulations.} \label{fig:volumetest} \end{figure} If we assume that 10\,\% is an acceptable uncertainty for a parameter ($\sim 0.04$ in dex), then the survey area required is $\sim 3\,\sq \degr$. Surveys like \citet{Fujita03}, \citet{Ly07} and \citet{Westra08} at $z \sim 0.24$ are thus not large enough to constrain the bright end of the luminosity function. \citetalias{Shioya08} is a factor of two shy of this area; SHELS is larger. Hence, combining the \citetalias{Shioya08} and SHELS data (Section~\ref{sec:combinedlf}) is an excellent way to constrain the faint and bright end of the luminosity function simultaneously. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{10pt}{16pt}{28pt}{10pt}{fig.19.eps} \caption{Logarithm of the total luminosity density evaluated at ($\log \frac{L_\mathrm{lim}}{L^*}$, $\alpha$) divided by the luminosity density at ($\log \frac{L_\mathrm{lim}}{L^*}$, $\alpha$) = (-1, -1.35) ({\it white filled circle}). The black lines indicate negative values.} \label{fig:lumcuttest} \end{figure*} \section{Star formation density} \label{sec:sfd} We determine the star formation density ($\dot{\rho}$ in \Msunyr\,\ifmmode{\mathrm{Mpc^{-3}}}\else{Mpc$^{-3}$}\fi{}) from the integrated \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity density for each redshift range. 
We use the conversion from \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity to star formation rate from \citet{Kennicutt98} for Case B recombination and $T_e = \pow{4}$\,K: \begin{equation} \mathrm{SFR} = 7.9 \times \pow{-42} L_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}}, \end{equation} where SFR is in \ifmmode{\mathrm{M_\odot\,yr^{-1}}}\else{M$_\odot$\,yr$^{-1}$}\fi{} and $L_\mathrm{\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{}}$ is in \ifmmode{\mathrm{erg\,s^{-1}}}\else{erg\,s$^{-1}$}\fi{}. We determine the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity density for $L \ge L_\mathrm{lim}$ from the parameters of the Schechter function using \begin{equation} \mathcal{L} = \phi^* L^* \Gamma(\alpha+2, \frac{L_\mathrm{lim}}{L^*}), \label{eq:lumdens} \end{equation} where $\Gamma$ is the upper incomplete gamma function. The choice of $L_\mathrm{lim}$ affects the integrated luminosity density\footnote{$L_\mathrm{lim} = 0$ reduces Eq.~\eqref{eq:lumdens} to $\mathcal{L} = \phi^* L^* \Gamma(\alpha+2)$, where $\Gamma$ is the complete gamma function.}. Figure~\ref{fig:lumcuttest} shows the logarithm of the total luminosity density evaluated at ($\log \frac{L_\mathrm{lim}}{L^*}$, $\alpha$) divided by the luminosity density at ($\log \frac{L_\mathrm{lim}}{L^*}$, $\alpha$) = (-1, -1.35). We can thus determine the effect of using different limiting luminosities on the total luminosity density. For example, using $L_\mathrm{lim} = 0$ for $\alpha = -1.35$ rather than $L_\mathrm{lim} = 0.1 L^*$ gives a difference of $\pow{0.12 - 0.00} = 1.31$, i.e. an increase of $\sim$30\,\%. These effects are obviously more severe for steeper values of $\alpha$. When comparing surveys of different depths, one needs to be careful about extrapolations of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function (Schechter function) to very low star formation rates, especially for steep $\alpha$. \begin{deluxetable}{lccccc} \tablewidth{0pt} \tablecaption{Star formation density.\label{tab:sfd}} \tablehead{ \colhead{} & \colhead{} & \multicolumn{2}{c}{$\log \dot{\rho}$} & \multicolumn{2}{c}{$\log \dot{\rho}$ with $\log L_\mathrm{lim} = 40$}\\ \colhead{redshift range} & \colhead{$\log L_\mathrm{lim}$\tablenotemark{a}} & fixed $\alpha$ & unconstrained $\alpha$ & fixed $\alpha$ & unconstrained $\alpha$ } \startdata $0.010 < z < 0.100$\tablenotemark{b} & 37.84 & $-2.18 \pm 0.10$ & $-2.19 \pm 0.17$ & $-2.20 \pm 0.10$ & $-2.21 \pm 0.17$\\ $0.100 < z < 0.200$ & 39.89 & $-1.92 \pm 0.09$ & $-1.92 \pm 0.12$ & $-1.93 \pm 0.09$ & $-1.92 \pm 0.12$\\ $0.200 < z < 0.300$ & 40.55 & $-1.82 \pm 0.05$ & $-1.81 \pm 0.10$ & $-1.81 \pm 0.05$ & $-1.81 \pm 0.10$\\ $0.300 < z < 0.377$ & 40.95 & $-1.81 \pm 0.03$ & $-1.82 \pm 0.08$ & $-1.80 \pm 0.03$ & $-1.81 \pm 0.08$\\ $0.233 < z < 0.251$\tablenotemark{c} & \multicolumn{3}{c}{} & \nodata & $-1.86 \pm 0.13$\\ \enddata \tablenotetext{a}{$L_\mathrm{lim} = 4 \pi D_L ^2 (z_\mathrm{low})\,f_\mathrm{lim}$, where $f_\mathrm{lim} = \pow{-15.5}$\,\ergs\,\ifmmode{\mathrm{cm^{-2}}}\else{cm$^{-2}$}\fi{} is the survey flux limit.} \tablenotetext{b}{The redshift range $0.010 < z < 0.100$ covers an atypical under-dense region (Section~\ref{sec:fieldselection}).} \tablenotetext{c}{Combined SHELS and \citetalias{Shioya08} result.} \tablecomments{We use the Schechter parameters determined for the pure star forming galaxies in Table~\ref{tab:schechter}. We calculate the uncertainties using standard uncertainty propagation for Eq.~(\ref{eq:lumdens}) and the uncertainties in Table~\ref{tab:schechter}.
$\dot{\rho}$ is in \Msunyr\,\ifmmode{\mathrm{Mpc^{-3}}}\else{Mpc$^{-3}$}\fi{}.} \end{deluxetable} Table~\ref{tab:sfd} lists the star formation densities and uncertainties for SHELS down to the luminosity limit of the appropriate redshift bin. We also show the star formation densities down to $\log L_\mathrm{lim} = 40.00$ corresponding to a star formation rate of 0.079\,\ifmmode{\mathrm{M_\odot\,yr^{-1}}}\else{M$_\odot$\,yr$^{-1}$}\fi{} for comparison with other surveys (Figure~\ref{fig:sfd}). We choose this value for all surveys because most surveys either reach this star formation rate, or the required extrapolation is modest. The solid symbols in Figure~\ref{fig:sfd} represent surveys with star formation densities derived from the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line; the open symbols come from either the [{\sc Oii}]{} or \ifmmode{\mbox{[{\sc O\,iii}]}}\else{[{\sc O\,iii}]}\fi{} line. Figure~\ref{fig:sfd} shows that the star formation density for other surveys at $0.200 < z < 0.300$ is consistent with the star formation density determined from the combined luminosity function of SHELS and \citetalias{Shioya08}. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{14pt}{16pt}{10pt}{5pt}{fig.20.eps} \includeIDLfigPcustom[0.24\textwidth]{85pt}{435pt}{40pt}{70pt}{fig.20.legend.eps} \caption{Star formation density as a function of look-back time and redshift for SHELS ({\it red, green, blue, and cyan large solid circles}) compared with other surveys using the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} line ({\it solid symbols}), or either [{\sc Oii}]{} or \ifmmode{\mbox{[{\sc O\,iii}]}}\else{[{\sc O\,iii}]}\fi{} lines ({\it open symbols}) as star formation indicator. We also indicate the combined SHELS and \citetalias{Shioya08} point ({\it solid large magenta circle}). We calculate the star formation density using the Schechter parameters of each survey to a limiting star formation rate of 0.079\,\ifmmode{\mathrm{M_\odot\,yr^{-1}}}\else{M$_\odot$\,yr$^{-1}$}\fi{} (corresponds to $L_\ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} = \pow{40}$\,\ifmmode{\mathrm{erg\,s^{-1}}}\else{erg\,s$^{-1}$}\fi{}) to reduce the systematic uncertainty from extrapolation to $L = 0$\,\ifmmode{\mathrm{erg\,s^{-1}}}\else{erg\,s$^{-1}$}\fi{}.} \label{fig:sfd} \end{figure*} Figure~\ref{fig:sfd} also shows a clear increase in the star formation density with increasing redshift. However, our lowest redshift point ($0.010 \le z < 0.100$) lies below surveys at similar redshifts. This underestimate occurs because our field was selected against low redshift clusters. This survey is thus an underdense region at low redshifts and the star formation density is probably correspondingly underestimated. Because we use the integrated Schechter function to determine the star formation density, the arguments in Section~\ref{sec:volumeeffect} for the Schechter parameters $L^*$ and $\phi^*$, are valid for the star formation density. The median, upper and lower quartile range for the star formation density for 0.25\,\sq{}\degr{} subsets are in Table~\ref{tab:sfdnoise}. Again, the recovered star formation density of a 0.25\,\sq{}\degr{} subset is almost identical to that of the entire field, but the standard deviation in the star formation density is very large (almost a factor of 2). The large uncertainty mainly results from the scatter in $L^*$ and $\phi^*$. 
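
The integral in Eq.~\eqref{eq:lumdens} and its sensitivity to $L_\mathrm{lim}$ are straightforward to verify numerically. The sketch below uses our own helper functions (not the code used for the Tables); under the stated inputs it reproduces the $0.200 < z < 0.300$ fixed-$\alpha$ entry of Table~\ref{tab:sfd} and the $\approx$30\,\% figure quoted above.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaincc

def lum_density(log_Lstar, log_phistar, alpha, log_Llim):
    """Eq. (lumdens): L = phi* L* Gamma(alpha + 2, L_lim / L*).
    This closed form needs alpha > -2; pass log_Llim = None for L_lim = 0."""
    s = alpha + 2.0
    x = 0.0 if log_Llim is None else 10.0 ** (log_Llim - log_Lstar)
    inc_gamma = gamma(s) * gammaincc(s, x) if x > 0 else gamma(s)
    return 10.0 ** (log_phistar + log_Lstar) * inc_gamma     # erg s^-1 Mpc^-3

def sfr_density(log_Lstar, log_phistar, alpha, log_Llim=40.0):
    """Kennicutt (1998) conversion: SFR = 7.9e-42 L_Halpha."""
    return 7.9e-42 * lum_density(log_Lstar, log_phistar, alpha, log_Llim)

# 0.200 < z < 0.300 fixed-alpha parameters; should give log(rho_dot) ~ -1.81
print(np.log10(sfr_density(42.52, -3.29, -1.20, 40.0)))

# sensitivity to L_lim: L_lim = 0 versus 0.1 L* for alpha = -1.35
ratio = (lum_density(42.0, -3.0, -1.35, None) /
         lum_density(42.0, -3.0, -1.35, 41.0))
print(ratio)          # ~1.3, i.e. ~30% higher, as stated in the text
\end{verbatim}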
Again, combined with the increased uncertainties resulting from increased shot-noise, the spread in star formation densities at narrow redshift slices can easily be explained by sampling a volume that is too small. \begin{deluxetable}{lcc} \tablewidth{0pt} \tablecaption{Median, upper and lower quartile range of the star formation density for 0.25\,\sq{}\degr{} subsets using SHELS.\label{tab:sfdnoise}} \tablehead{ \colhead{} & \multicolumn{2}{c}{$\log \dot{\rho}$}\\ \colhead{redshift range} & fixed $\alpha$ & unconstrained $\alpha$ } \startdata $0.010 < z < 0.100$ & $-2.25 _{-0.23} ^{+0.12}$ & $-2.30 _{-0.22} ^{+0.12}$\\ $0.100 < z < 0.200$ & $-1.96 _{-0.15} ^{+0.13}$ & $-1.97 _{-0.15} ^{+0.25}$\\ $0.200 < z < 0.300$ & $-1.84 _{-0.13} ^{+0.16}$ & $-1.85 _{-0.13} ^{+0.16}$\\ $0.300 < z < 0.377$ & $-1.78 _{-0.05} ^{+0.03}$ & $-1.82 _{-0.19} ^{+0.03}$\\ $0.233 < z < 0.251$ & $-1.95 _{-0.26} ^{+0.23}$ & $-1.96 _{-0.26} ^{+0.23}$\\ \enddata \tablecomments{The values for the star formation density are integrated down to $\log L_\mathrm{lim} = 40$.} \end{deluxetable} \section{Physical properties of star forming galaxies} \label{sec:properties} \subsection{Stellar population age} In star forming galaxies the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emission originates from gas surrounding the young stars. The spectrum from an actively star forming galaxy is dominated by the light emitted by these young stars. Figure~\ref{fig:d4000} shows $D_n{4000}$, the ratio of the continuum red- and bluewards of the $H+K$ break and an indicator of the age of the stellar population \citep{Balogh99,Bruzual83}, as a function of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity for pure star forming galaxies. A low $D_n{4000}$ \citetext{$D_n{4000} \lesssim 1.44$; D.~F.~Woods et al. 2010; in preparation} indicates a young stellar population. The majority of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emitting galaxies contain a young stellar population. \begin{figure}[tb] \centering \includeIDLfigPcustom{15pt}{16pt}{28pt}{20pt}{fig.21.eps} \caption{$D_n{4000}$ as a function of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity for each of the four redshift bins for pure star forming \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emitting galaxies.} \label{fig:d4000} \end{figure} \subsection{Galaxy-galaxy interaction} \citet{Sobral09} find that the fraction of mergers rises with increasing luminosity particularly around $L^*$. Some of the SHELS galaxies are quite luminous in \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} indicating they are undergoing a starburst. \citet{Barton00} find that a close pass of two galaxies can initiate a starburst. Following \citeauthor{Sobral09}, we examine the SHELS data to look for evidence of the impact of interactions on the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function. Thus, we focus on galaxies that may have (or may have had) a recent encounter with another galaxy. We determine whether each galaxy has an apparently nearby ``neighbor''. A galaxy has a neighbor when the velocity difference (corrected for redshift) between the two galaxies is $\le 500\,\ifmmode{\mathrm{km\,s^{-1}}}\else{km\,s$^{-1}$}\fi$, and their projected separation is $\le 100$\,kpc. These values are a standard definition of galaxy pairs \citep[e.g.][]{Barton00,Patton00,Lin04,Woods07,Park09}. We include the somewhat deeper SHELS catalog to look for neighboring galaxies (see Section~\ref{sec:fieldselection}). 
This catalog contains spectra of galaxies with magnitudes $20.3 < R < 20.6$ where the spectroscopy is 52\,\% complete. The fraction of all our pure star forming galaxies that have a neighbor is 15.3\,\% (547 out of 3565)\footnote{If we decrease our projected separation criterion to 50\,kpc, the fraction drops with a factor of $\sim$2 to 7.4\,\% (265 out of 3565). The fractions in Figure~\ref{fig:friendshisto} scale with roughly the same factor within the uncertainties. Our conclusions are not affected by the choice of projected separation.}. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{8pt}{16pt}{28pt}{20pt}{fig.22.eps} \caption{Fraction of pure star forming galaxies with one or more neighbors for each redshift bin ({\it colored histograms}) as a function of redshift and the fraction for $0.01 < z < 0.30$ ({\it black thick histogram}). Colored triangles indicate $L^*$ (determined with $\alpha = -1.20$) for each redshift range.} \label{fig:friendshisto} \end{figure*} Figure~\ref{fig:friendshisto} shows the fraction of galaxies with one or more neighbors as a function of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity for the lowest three redshift bins ({\it colored histograms}) and for the three redshift ranges combined ({\it thick black histogram}). The fraction is always a lower limit; deeper spectroscopy might reveal only more neighbors of a galaxy, never fewer. It is striking that the fraction of galaxies with neighbors increases around $L^*$ (and for the lowest redshift bin towards the lowest \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosities). This result agrees with a rise in the fraction of mergers with increasing \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity found by \citet{Sobral09}. The interesting question for galaxy evolution is whether the location of the increase determines $L^*$, or whether $L^*$ determines the location of the increase. \begin{figure*}[tb] \centering \includeIDLfigPcustom[0.75\textwidth]{8pt}{16pt}{28pt}{20pt}{fig.23.eps} \caption{Magnitude difference between the galaxy and its neighbor as a function of \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity for pure star forming \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emitting galaxies. The solid lines show $|\Delta m_R| = 2$, the demarcation between major and minor interactions. Galaxies above the dotted line are fainter than their neighbor, galaxies below are more luminous.} \label{fig:minormajor} \end{figure*} To investigate this behavior further, we investigate the magnitude difference between the galaxy and its neighbor(s). Figure~\ref{fig:minormajor} shows the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity as a function of the magnitude difference $\Delta m_R = R_\mathrm{galaxy} - R_\mathrm{neighbor}$\footnote{For neighbors in the $R_\mathrm{tot} > 20.3$ catalog we assumed $R = 20.3$. Thus, $\Delta m_R = R_\mathrm{galaxy} - 20.3$ for these galaxies.}. We also indicate the demarcation between minor and major pairs, i.e. $|\Delta m| = 2$ \citep[e.g.][]{Woods07}. Luminous \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies with neighbors tend to be mostly part of a major pair, and to a lesser extent the more luminous galaxy of a minor pair; faint \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies with neighbors can be part of a major or minor pair. However, when faint \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies are part of a minor pair, they tend to be the fainter (smaller) galaxy. 
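
The pair criterion used above ($|\Delta v| \le 500$\,\ifmmode{\mathrm{km\,s^{-1}}}\else{km\,s$^{-1}$}\fi{} and projected separation $\le 100$\,kpc) is simple to implement. The following brute-force sketch is not the SHELS pipeline: it assumes a flat $\Lambda$CDM cosmology, uses a small-angle approximation, and scales as $O(N^2)$, which is adequate only for a few thousand galaxies.
\begin{verbatim}
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM

cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)   # assumed cosmology
C_KMS = 299792.458

def has_neighbor(ra, dec, z, dv_max=500.0, rp_max_kpc=100.0):
    """Boolean array: True if a galaxy has at least one companion with
    |Delta v| <= dv_max (km/s) and projected separation <= rp_max_kpc."""
    ra, dec, z = map(np.asarray, (ra, dec, z))
    # rest-frame velocity difference for every pair
    dv = C_KMS * np.abs(z[:, None] - z[None, :]) / (1.0 + z[:, None])
    # small-angle projected separation at the redshift of the primary
    dra = np.radians(ra[:, None] - ra[None, :]) * np.cos(np.radians(dec))[:, None]
    ddec = np.radians(dec[:, None] - dec[None, :])
    theta = np.hypot(dra, ddec)                                      # rad
    kpc_per_rad = cosmo.angular_diameter_distance(z).to(u.kpc).value[:, None]
    rp = theta * kpc_per_rad
    pair = (dv <= dv_max) & (rp <= rp_max_kpc)
    np.fill_diagonal(pair, False)
    return pair.any(axis=1)
\end{verbatim}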
The behavior in Figure~\ref{fig:minormajor} is consistent with the picture of interaction-induced star formation. The increase in the fraction around $L^*$ implies that galaxy-galaxy interactions are important for the increase of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity in these galaxies. \section{Summary and conclusion} \label{sec:summary} We use the Smithsonian Hectospec Lensing Survey (SHELS) to study \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} emitting galaxies. SHELS is complete to $R_\mathrm{tot} = 20.3$ over a large 4\,\sq{}\degr{} area. This area yields a large enough volume to study the bright end of the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function as a function of redshift. We determine the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} flux and attenuation from the SHELS spectroscopy. We also identify galaxies that host AGNs or are composites. We combine the strengths of two surveys, the breadth of SHELS (to constrain the bright-end of the luminosity function) and the depth of the narrowband survey of \citetalias{Shioya08} (to determine the faint end slope of the luminosity function), to determine a well-constrained \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function at $z \sim 0.24$. A narrowband survey goes deep over a limited field of view to cover the faint end of the luminosity function. A broadband selected spectroscopic survey can easily cover a larger volume to probe the bright end of the luminosity function. The resulting Schechter parameters are consistent with \citetalias{Shioya08} within their uncertainties. We determine the \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity function from SHELS for four redshift intervals over $0.010 < z < 0.377$. The lowest redshift interval ($0.010 < z < 0.100$) covers an atypical underdense region due to field selection. The characteristic luminosity $L^*$ increases as a function of redshift ($\Delta \log L^* = 0.84$ over $0.100 < z < 0.377$). The star formation density also increases with increasing redshift ($\Delta \log \dot{\rho} = 0.11$ over $0.010 < z < 0.377$). The star formation rate from the combined luminosity function of SHELS and \citetalias{Shioya08} is consistent with that of SHELS alone at $0.200 < z < 0.300$. The fraction of galaxies with neighbors increases by a factor of $2-5$ around $L^*$ for the most luminous star forming galaxies at each redshift, similar to \citet{Sobral09}. The fraction appears to also increase towards fainter \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} luminosity as a result of interactions in minor pairs. We conclude that triggered star formation is important for both the highest and lowest luminosity \ifmmode{\mathrm{H\alpha}}\else{H$\alpha$}\fi{} galaxies. The future of surveys for star forming galaxies is a combination of a large-area spectroscopic survey combined with very deep narrowband imaging. However, the narrowband imaging requires extensive test spectroscopy because the impact of large-scale structure with respect to the filter response is unknown a priori. The combination of methods can constrain and remove the scatter in the star formation density as a function of redshift. The combination also allows a secure determination of the shape of the luminosity function over a large luminosity range. 
\section*{Acknowledgments} We thank Christy Tremonti for providing her continuum subtraction routine, Anil Seth for suggesting the usage of the routine to compensate for the underlying stellar absorption, Antonaldo Diaferio for discussions on Schechter function fitting, Warren Brown, Scott Kenyon, and Deborah Woods for useful discussions. We are grateful for the contributions of the members of the MMT Observatory and the Telescope Data Center of the CfA. EW acknowledges the Smithsonian Institution for the support of his post-doctoral fellowship. We appreciate the thorough reading of this manuscript by an anonymous referee whose report has helped to improve the paper. Observations reported here were obtained at the MMT Observatory, a joint facility of the Smithsonian Institution and the University of Arizona. The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The zCOSMOS observations were made with ESO Telescopes at the La Silla or Paranal Observatories under programme ID 175.A-0839. \bibliographystyle{apj}
\section{Introduction} Many problems in science and engineering have some uncertainty in their nature, and fuzzy differential equations are appropriate tools for modeling such problems \cite{F1}. The interpretation of a fuzzy differential equation in the sense of generalized differentiability allows one to fuzzify appropriate numerical methods for ordinary differential equations and apply them to fuzzy differential equations. The Hukuhara derivative combined with the extension principle or with differential inclusions has some disadvantages. The main drawback is that the solutions obtained in this setting have increasing length of their supports \cite{38,Bede3}. Many authors have generalized traditional methods such as Euler's method, Adams-Bashforth methods, predictor-corrector methods and Runge-Kutta methods \cite{Bede5,21,20,26,28,29,23,19} to fuzzy differential equations and fuzzy initial value problems. However, they use Hukuhara differentiability and fuzzify the numerical method using the extension principle or other techniques \cite{F1}. Under the concept of strongly generalized differentiability there exists a fuzzy derivative for a large class of fuzzy-number-valued functions \cite{Bede2,Bede3}. Another advantage is that there exist two local solutions, the so-called (i)-differentiable and (ii)-differentiable solutions. According to the nature of the initial value problem we can choose the most meaningful practical solution. In this paper we develop GLM schemes based on the concept of strongly generalized differentiability. The notion of a fuzzy derivative was first introduced by Chang and Zadeh \cite{3}, and Dubois and Prade \cite{4} introduced its extension. Stefanini \cite{30, 31} introduced the fuzzy gH-difference, and Bede and Stefanini \cite{Bede7} defined and studied a new generalization of differentiability for fuzzy-number-valued functions. The aim of this paper is to develop GLMs for fuzzy differential equations and to study their consistency, stability and convergence. In particular, under strongly generalized differentiability we develop the well-known Adams-Bashforth methods in the framework of general linear methods. This starting step will motivate us to develop arbitrary classes of GLMs with desired properties in forthcoming research. Let us denote by $\mathbb{R}_{\mathcal{F}}$ the class of fuzzy numbers, i.e. normal, convex, upper semicontinuous and compactly supported fuzzy subsets of the real numbers. The fuzzy initial value problem is defined as follows: \begin{eqnarray}\label{eq-f} y'(t) &=& f(t,y(t)),\quad t\in [t_0,T],\\ y(t_0) &=& y_0,\nonumber \end{eqnarray} where $f:[t_0,T]\times \mathbb{R}_{\mathcal{F}}\to \mathbb{R}_{\mathcal{F}}$ and $y_0\in \mathbb{R}_{\mathcal{F}}$. Here, we recall the GLM formulation for an ordinary IVP; in the next sections we discuss its development for the FIVP (\ref{eq-f}). Burrage and Butcher \cite{2} have presented a standard representation of a GLM in terms of four matrices. These methods are formulated as follows: \begin{equation}\label{1} \begin{array}{c} Y=hAf(Y)+Uy^{[n-1]},\\ y^{[n]}=hBf(Y)+Vy^{[n-1]}, \end{array} \end{equation} where $y^{[n-1]}$ and $y^{[n]}$ are the input and output approximations, respectively, and \[A\in \mathbb{R}^{s\times s},\quad U\in \mathbb{R}^{s\times r},\quad B\in \mathbb{R}^{r\times s},\quad V\in \mathbb{R}^{r\times r}.\] In this paper we use fuzzy interpolation for constructing Adams-Bashforth schemes in the general linear methods framework.
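
Before turning to the fuzzy setting, it may help to see the crisp formulation \eqref{1} in code. The sketch below is our own illustration (not taken from \cite{2}): it implements one step of an explicit GLM and encodes the two-step Adams--Bashforth method through the four matrices $(A,U,B,V)$ with input vector $y^{[n-1]}=(y_n,\ hf_{n-1})^{T}$.
\begin{verbatim}
import numpy as np

def glm_step(f, t, y_in, h, A, U, B, V):
    """One step of the general linear method
         Y     = h A F(Y) + U y_in,
         y_out = h B F(Y) + V y_in,
    for an explicit method (A strictly lower triangular).
    y_in has shape (r, d): r input quantities of dimension d."""
    A, U, B, V = map(np.asarray, (A, U, B, V))
    s, d = A.shape[0], y_in.shape[1]
    c = A.sum(axis=1)            # stage abscissae; adequate for the example below
    Y, F = np.zeros((s, d)), np.zeros((s, d))
    for i in range(s):
        Y[i] = U[i] @ y_in + h * (A[i, :i] @ F[:i])
        F[i] = f(t + c[i] * h, Y[i])
    return h * (B @ F) + V @ y_in

# Two-step Adams-Bashforth as a GLM with input y^[n-1] = (y_n, h f_{n-1})^T
A, U = [[0.0]], [[1.0, 0.0]]
B, V = [[1.5], [1.0]], [[1.0, -0.5], [0.0, 0.0]]

f = lambda t, y: -y                                # crisp test problem y' = -y
h, t = 0.1, 0.1
y_in = np.array([[np.exp(-h)], [h * (-1.0)]])      # (y_1, h f_0); y_1 taken exact
for _ in range(49):
    y_in = glm_step(f, t, y_in, h, A, U, B, V)
    t += h
print(y_in[0, 0], np.exp(-t))                      # AB2 value at t = 5 vs exact
\end{verbatim}
The matrix $V$ carries the propagation of the input quantities from step to step, which is why its minimal polynomial governs zero-stability, as recalled in the next section.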
The organization of this paper is as follows: In Section \ref{sec2} we present the preliminaries from GLMs and fuzzy calculus. In Section \ref{sec3} we apply the GLM form of linear multistep methods to solve fuzzy differential equations, and in Section \ref{sec5} numerical results are given.
\section{Preliminaries}\label{sec2}
In this section we present the required concepts from general linear methods and briefly review the required definitions from fuzzy calculus, as given in \cite{Bede1}. We present the main idea of the paper for an important subclass of LMMs, the so-called Adams methods, in the GLM framework.
\begin{defn}
Let $u,v\in \mathbb{R}_\mathcal{F}$; the Hukuhara difference (H-difference, $\circleddash_{H}$) of $u$ and $v$ is defined by
\[u\circleddash v=w \Longleftrightarrow u=v+w.\]
\end{defn}
The element $w\in\mathbb{R}_\mathcal{F}$, if it exists, is called the H-difference of $u$ and $v$. If the H-difference $u\circleddash v$ exists, then $[u\circleddash v]_r=[u_r^--v_r^-,u_r^+-v_r^+]$. The Hukuhara derivative for a fuzzy function was introduced by Puri and Ralescu \cite{36}. From Kaleva \cite{37} and Diamond \cite{38}, it follows that a Hukuhara differentiable function has a support interval of increasing length. Hence the H-difference rarely exists, and to overcome this limitation strongly generalized differentiability of fuzzy-number-valued functions was introduced and studied by Bede and Gal \cite{Bede3}. In this setting, a differentiable function may have a support of either increasing or decreasing length.
\begin{defn}\label{def2.5}
Let $f:(a,b)\rightarrow\mathbb{R}_\mathcal{F}$ and $x_0\in(a,b)$. We say that $f$ is strongly generalized differentiable at $x_0$ if there exists an element $f'(x_0)\in \mathbb{R}_\mathcal{F}$ such that
\begin{itemize}
\item[(i)] for each $h>0$ sufficiently close to 0, the H-differences $f(x_0+h)\circleddash f(x_0)$ and $f(x_0)\circleddash f(x_0-h)$ exist and
\[\lim_{h\rightarrow0}\frac{f(x_0+h)\circleddash f(x_0)}{h}=\lim_{h\rightarrow0}\frac{f(x_0)\circleddash f(x_{0}-h)}{h}=f'(x_0),\]
or
\item[(ii)] for each $h>0$ sufficiently close to 0, the H-differences $f(x_0)\circleddash f(x_0+h)$ and $f(x_0-h)\circleddash f(x_0)$ exist and
\[\lim_{h\rightarrow0}\frac{f(x_0)\circleddash f(x_0+h)}{(-h)}=\lim_{h\rightarrow0}\frac{f(x_0-h) \circleddash f(x_0)}{(-h)}=f'(x_0).\]
\end{itemize}
\end{defn}
Let $f:(a,b)\rightarrow\mathbb{R}_\mathcal{F}$; we say that $f$ is (i)-differentiable and (ii)-differentiable on $(a,b)$ if $f$ is differentiable in the sense (i) and (ii) of Definition \ref{def2.5}, respectively. There are also two other differentiability cases, (iii)- and (iv)-differentiability, for which no existence theorems are available, and we do not discuss them here. Bede \cite{Bede5} proved that under certain conditions the fuzzy initial value problem \eqref{eq-f} has a unique solution and is equivalent to the system of ODEs
\begin{equation*}
\left\{\begin{array}{c}
(y_{r}^-)'=f_{r}^-(t,y_{r}^-,y_r^+)\\
(y_{r}^+)'=f_{r}^+(t,y_{r}^-,y_r^+)
\end{array},r\in[0,1]\right.
\end{equation*}
with respect to H-differentiability. In this interpretation the solutions of a fuzzy differential equation always have a support interval of increasing length. A fuzzy dynamical system therefore becomes more and more uncertain in time, and periodic solutions are not possible. Thus, different ideas and methods have been investigated for solving FDEs. The second interpretation is based on Zadeh's extension principle defined in \cite{44}.
Consider the classical ODE $x'=f(t,x,a)$, $x(t_0)=x_0\in\mathbb{R}$, where $a\in\mathbb{R}$ is a parameter. By applying Zadeh's extension principle to the classical solution, we obtain a solution of the FIVP. The third interpretation has been developed based on the generalized fuzzy derivative. In this work we use the interpretation based on strongly generalized differentiability. Fuzzy differential equations based on generalized H-differentiability were investigated by Bede and Gal in \cite{Bede3}, and more general results were proposed by Bede and Gal in \cite{Bede4}. According to the assumptions of Theorem 9.11 in \cite{Bede1}, the fuzzy initial value problem \eqref{eq-f} is equivalent to the union of the systems of ODEs:
\begin{eqnarray}\label{i-diff}
\left\{ \begin{array}{ll} (y_{\alpha}^{-})'(t) = f_{\alpha}^{-}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)) \\ (y_{\alpha}^{+})'(t) = f_{\alpha}^{+}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)), &\alpha\in [0,1] \\ (y_{\alpha}^{-})(t_0) = (y_0)_{\alpha}^{-},\quad (y_{\alpha}^{+})(t_0) = (y_0)_{\alpha}^{+}. \end{array} \right.
\end{eqnarray}
and
\begin{eqnarray}\label{ii-diff}
\left\{ \begin{array}{ll} (y_{\alpha}^{-})'(t) = f_{\alpha}^{+}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)) \\ (y_{\alpha}^{+})'(t) = f_{\alpha}^{-}(t,y_{\alpha}^{-}(t),y_{\alpha}^{+}(t)), & \alpha\in [0,1] \\ (y_{\alpha}^{-})(t_0) = (y_0)_{\alpha}^{-},\quad (y_{\alpha}^{+})(t_0) = (y_0)_{\alpha}^{+}. \end{array} \right.
\end{eqnarray}
For triangular input data we have the same systems \eqref{i-diff} and \eqref{ii-diff} with an extra equation $(y_{\alpha}^{1})'(t) = f_{\alpha}^{1}(t,y_{\alpha}^{-}(t),y_{\alpha}^{1}(t),y_{\alpha}^{+}(t))$, where $f=(f^{-},f^{1},f^{+})$ (see Theorem 9.12 in \cite{Bede1}). A linear multistep method with first characteristic polynomial $\rho(r) = \sum_{j=0}^k \alpha_j r^j$ and second characteristic polynomial $\sigma(r) = \sum_{j=0}^k \beta_j r^j$ is defined as follows:
\begin{equation}
\sum_{j=0}^k \alpha_j y_{n+j} = h\sum_{j=0}^k \beta_j f_{n+j},
\end{equation}
where $a=t_{0}\leq t_{1}\leq \cdots \leq t_{N}=b$, $h=\frac{b-a}{N}=t_{n+k}-t_{n+k-1}$, $f_{n+j} = f(t_{n+j},y_{n+j})$, and $\alpha_j$ and $\beta_{j}$, $j=0,1,\cdots,k$, are constants. In this scheme we evaluate an approximate solution $y_{n+k}$ for the exact value $y(t_{n+k})$ using the starting values $y_0,y_1,\dots, y_{n+k-1}$. The Adams schemes are characterized by the first characteristic polynomial $\rho(r)=r^{k}-r^{k-1}$. Therefore, we have
\begin{equation}\label{eq2.1}
y_{n+k} = y_{n+k-1}+h\sum_{j=0}^{k}\beta_{j}f_{n+j}.
\end{equation}
In \eqref{eq2.1}, the case $\beta_{k}=0$ corresponds to an explicit method; otherwise the method is implicit. The stability of LMMs is characterized by the root condition for the first characteristic polynomial $\rho(r)$: the roots $r_s$, $s=1,2,\dots,k$, of $\rho(r)$ must satisfy $|r_s|\le 1$, and the roots with $|r_s|=1$ must be simple \cite{15}. The zero-stability of an LMM, and correspondingly of its GLM form, requires that the first characteristic polynomial $\rho(r)$, or the minimal polynomial of the matrix $V$, satisfies the root condition.
\section{A GLM scheme with strongly generalized differentiability}\label{sec3}
In this section we present the derivation of a GLM based on linear $k$-step Adams schemes for solving fuzzy initial value problems under strongly generalized differentiability.
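Before deriving the fuzzy GLM forms, it is useful to recall how the Adams--Bashforth weights $\beta_j$ of \eqref{eq2.1} are generated, namely by integrating the Lagrange interpolant of $f$ over the last subinterval. The short sketch below is our own illustration (it uses \texttt{sympy}, and the function name is ours); it reproduces the coefficients $\frac{55}{24},-\frac{59}{24},\frac{37}{24},-\frac{9}{24}$ for $k=4$ and $\frac{1901}{720},\dots,\frac{251}{720}$ for $k=5$ that appear in the GLM tableaux given below.
\begin{verbatim}
import sympy as sp

def adams_bashforth_weights(k):
    """Weights beta_{k-1}, ..., beta_0 of the explicit k-step
    Adams-Bashforth method: integrate the Lagrange basis polynomial
    through the nodes 0, 1, ..., k-1 over [k-1, k] (unit step h = 1)."""
    t = sp.symbols('t')
    nodes = list(range(k))
    betas = []
    for j in reversed(nodes):                       # beta_{k-1} first
        num = sp.prod([t - m for m in nodes if m != j])
        den = sp.prod([j - m for m in nodes if m != j])
        betas.append(sp.integrate(num, (t, k - 1, k)) / den)
    return betas

print(adams_bashforth_weights(4))  # 55/24, -59/24, 37/24, -9/24 (possibly reduced)
print(adams_bashforth_weights(5))  # 1901/720, ..., 251/720 (possibly reduced)
\end{verbatim}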
Assume that, for equally spaced points $0=t_0<t_1<\cdots<t_N=T$, the exact solutions at $t_n$ are denoted by ${\textbf{Y}}_{1}(t_{n};r)=[\textbf{Y}_1^-(t_n;r),\textbf{Y}_1^+(t_n;r)]$ and $\textbf{Y}_{2}(t_{n};r)=[\textbf{Y}_2^-(t_n;r),\textbf{Y}_2^+(t_n;r)]$ under (i)- and (ii)-differentiability, respectively. Also assume that $y_{1}(t_{n};r)=[y_1^-(t_n;r),y_1^+(t_n;r)]$ and $y_{2}(t_{n};r)=[y_2^-(t_n;r),y_2^+(t_n;r)]$ are the approximate solutions at $t_n$ under (i)- and (ii)-differentiability, respectively. The $k$-step Adams methods under Hukuhara or (i)-differentiability can be written as:
\begin{equation}\label{equ3.1}
\begin{array}{ccc}
y_{1r}^-(t_{n+k};r) &=& y_{1r}^-(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^-(t_{n+j},y_{1r}(t_{n+j};r)),\\
y_{1r}^+(t_{n+k};r) &=& y_{1r}^+(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^+(t_{n+j},y_{1r}(t_{n+j};r)),
\end{array}
\end{equation}
and under (ii)-differentiability as:
\begin{equation}\label{equ3.2}
\begin{array}{ccc}
y_{2r}^-(t_{n+k};r) &=& y_{2r}^-(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^+(t_{n+j},y_{2r}(t_{n+j};r)),\\
y_{2r}^+(t_{n+k};r) &=& y_{2r}^+(t_{n+k-1};r) +h\sum_{j=0}^{k}\beta_j f^-(t_{n+j},y_{2r}(t_{n+j};r)).
\end{array}
\end{equation}
The Adams schemes are $k$-step methods \eqref{eq2.1} with $\rho(r)=r^{k}-r^{k-1}$, so they can also be cast in the general linear method framework. In the GLM representation we first determine the input and output vectors and then find the corresponding matrices. To this end we consider the input and output approximations of general linear methods as follows:
\begin{equation*}
y^{[n-1]}=\left( \begin{array}{c} y_{n+k-1}\\ hf_{n+k-1}\\ hf_{n+k-2}\\ \vdots\\ hf_{n+1}\\ hf_{n} \end{array}\right),\qquad y^{[n]}=\left( \begin{array}{c} y_{n+k}\\ hf_{n+k}\\ hf_{n+k-1}\\ \vdots\\ hf_{n+2}\\ hf_{n+1} \end{array}\right).
\end{equation*}
Similarly, the linear $k$-step methods \eqref{equ3.1} and \eqref{equ3.2} under strongly generalized differentiability can be represented in the form of general linear methods. For this representation the input vectors of the GLM forms of \eqref{equ3.1} and \eqref{equ3.2} are denoted by $y_{1r}^{[n-1]}=\big[y_{1r}^{-[n-1]},y_{1r}^{+[n-1]}\big]$ and $y_{2r}^{[n-1]}=\big[y_{2r}^{-[n-1]},y_{2r}^{+[n-1]}\big]$ under (i)- and (ii)-differentiability, respectively. Corresponding to the input vectors, the output vectors are denoted by $y_{1r}^{[n]}=\big[y_{1r}^{-[n]},y_{1r}^{+[n]}\big]$ and $y_{2r}^{[n]}=\big[y_{2r}^{-[n]},y_{2r}^{+[n]}\big]$ under (i)- and (ii)-differentiability, respectively.
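A minimal sketch of one explicit step of \eqref{equ3.1} and \eqref{equ3.2} at a fixed level $r$ is given below (this is our own illustration; the array layout and function names are assumptions, not part of the schemes' definition). It only makes the endpoint bookkeeping explicit: under (i)-differentiability the lower/upper endpoints are advanced with $f^-$/$f^+$, while under (ii)-differentiability the roles of $f^-$ and $f^+$ are swapped.
\begin{verbatim}
import numpy as np

def fuzzy_adams_step(f_lo, f_up, t_hist, y_lo, y_up, h, betas, case="i"):
    """One explicit k-step Adams step at a fixed level r.
    f_lo, f_up : endpoint functions f^-(t, y^-, y^+) and f^+(t, y^-, y^+)
    t_hist     : previous nodes t_n, ..., t_{n+k-1}
    y_lo, y_up : endpoint values at those nodes
    betas      : Adams-Bashforth weights beta_{k-1}, ..., beta_0
    case       : "i" for (3.1), "ii" for (3.2) (f^- and f^+ swapped)."""
    # evaluate the endpoint right-hand sides, newest node first
    flo = [f_lo(t, a, b) for t, a, b in zip(t_hist[::-1], y_lo[::-1], y_up[::-1])]
    fup = [f_up(t, a, b) for t, a, b in zip(t_hist[::-1], y_lo[::-1], y_up[::-1])]
    if case == "i":
        inc_lo, inc_up = np.dot(betas, flo), np.dot(betas, fup)
    else:  # (ii)-differentiability
        inc_lo, inc_up = np.dot(betas, fup), np.dot(betas, flo)
    return y_lo[-1] + h * inc_lo, y_up[-1] + h * inc_up
\end{verbatim}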
Now, we consider the input approximations of the general linear methods under (i)-differentiability as:
\begin{equation}\label{input_i}
y_{1r}^{-[n-1]}=\left(\begin{array}{c} y^-_{{n+k-1}_{1r}} \\ hf^-_{{n+k-1}_{1r}} \\ hf^-_{{n+k-2}_{1r}} \\ \vdots\\ hf^-_{{n+1}_{1r}}\\ hf^-_{{n}_{1r}} \end{array}\right),\qquad y_{1r}^{+[n-1]}=\left(\begin{array}{c} y^+_{{n+k-1}_{1r}} \\ hf^+_{{n+k-1}_{1r}} \\ hf^+_{{n+k-2}_{1r}} \\ \vdots\\ hf^+_{{n+1}_{1r}}\\ hf^+_{{n}_{1r}} \end{array}\right),
\end{equation}
and under (ii)-differentiability we obtain the following input vectors:
\begin{equation}\label{input_ii}
y_{2r}^{-[n-1]}=\left(\begin{array}{c} y^-_{{n+k-1}_{2r}} \\ hf^+_{{n+k-1}_{2r}} \\ hf^+_{{n+k-2}_{2r}} \\ \vdots\\ hf^+_{{n+1}_{2r}}\\ hf^+_{{n}_{2r}} \end{array}\right),\qquad y_{2r}^{+[n-1]}=\left(\begin{array}{c} y^+_{{n+k-1}_{2r}} \\ hf^-_{{n+k-1}_{2r}} \\ hf^-_{{n+k-2}_{2r}} \\ \vdots\\ hf^-_{{n+1}_{2r}}\\ hf^-_{{n}_{2r}} \end{array}\right).
\end{equation}
By considering the above input vectors, the fuzzy general linear method form of \eqref{equ3.1} and \eqref{equ3.2} can be formulated in the case of (i)-differentiability as:
\begin{equation}\label{GLM1}
\left(\begin{array}{c} Y_{1r} \\ \hline y_{1r}^{[n]} \end{array}\right) =\left(\begin{tabular}{c|c} A & U \\ \hline B & V \end{tabular}\right)\left(\begin{array}{c} hf_{1r}(Y_{1r}) \\ \hline y_{1r}^{[n-1]} \end{array}\right),
\end{equation}
and in the case of (ii)-differentiability we have:
\begin{equation}\label{GLM2}
\left(\begin{array}{c} Y_{2r} \\ \hline y_{2r}^{[n]} \end{array}\right) =\left(\begin{tabular}{c|c} A & U \\ \hline B & V \end{tabular}\right)\left(\begin{array}{c} hf_{2r}(Y_{2r}) \\ \hline y_{2r}^{[n-1]} \end{array}\right),
\end{equation}
where $Y_{1r}=[Y_{1r}^-,Y_{1r}^+]$ and $Y_{2r}=[Y_{2r}^-,Y_{2r}^+]$ are the internal stages under (i)- and (ii)-differentiability, respectively. Also
\begin{equation*}
\left(\begin{tabular}{c|c} A & U \\ \hline B & V \end{tabular}\right)= \left(\begin{tabular}{c|ccccc} 0 & 1 & $\beta_{k-1}$ & $\cdots$ & $\beta_{1}$ & $\beta_{0}$ \\ \hline 0 & 1 & $\beta_{k-1}$ &$ \cdots$ & $\beta_{1}$ & $\beta_{0}$ \\ 1 & 0 & 0 & $\cdots$ & 0 & 0 \\ 0 & 0 & 1 & $\cdots$ & 0 & 0 \\ $\vdots$ & $\vdots$ &$ \vdots$ & $\quad$ & $\vdots$ & $\vdots$ \\ 0 & 0 & 0 & $\cdots$ & 1 & 0 \end{tabular}\right).
\end{equation*}
Now, we consider two examples of fuzzy GLM forms of $k$-step methods under strongly generalized differentiability, for $k=4,5$. First, consider $k=4$. The input vectors for $k=4$ under (i)- and (ii)-differentiability are as follows, respectively:
\begin{equation*}
{y}^{\mp[n-1]}_{1r}=\left(\begin{array}{c} {y}^{\mp}_{1r}(t_{n+3})\\ h{f}^{\mp}_{1r}(t_{n+3},y_{1r}(t_{n+3}))\\ h{f}^{\mp}_{1r}(t_{n+2},y_{1r}(t_{n+2}))\\ h{f}^{\mp}_{1r}(t_{n+1},y_{1r}(t_{n+1}))\\ h{f}^{\mp}_{1r}(t_{n},y_{1r}(t_{n})) \end{array}\right),\quad {y}^{\mp[n-1]}_{2r}=\left(\begin{array}{c} {y}^{\mp}_{2r}(t_{n+3})\\ h{f}^{\pm}_{2r}(t_{n+3},y_{2r}(t_{n+3}))\\ h{f}^{\pm}_{2r}(t_{n+2},y_{2r}(t_{n+2}))\\ h{f}^{\pm}_{2r}(t_{n+1},y_{2r}(t_{n+1}))\\ h{f}^{\pm}_{2r}(t_{n},y_{2r}(t_{n})) \end{array}\right),
\end{equation*}
and
\begin{equation*}
\left( \begin{tabular}{l|lllll} 0&1&$\frac{55}{24}$&$\frac{-59}{24}$&$\frac{37}{24}$&$\frac{-9}{24}$\\ \cline{1-6} 0&1&$\frac{55}{24}$&$\frac{-59}{24}$&$\frac{37}{24}$&$\frac{-9}{24}$\\ 1&0&0&0&0&0\\ 0&0&1&0&0&0\\ 0&0&0&1&0&0\\ 0&0&0&0&1&0 \end{tabular}\right).
\end{equation*}
Similarly, for $k=5$ we obtain
\begin{equation*}
{y}^{\mp[n-1]}_{1r}=\left(\begin{array}{c} {y}^{\mp}_{1r}(t_{n+4})\\ h{f}^{\mp}_{1r}(t_{n+4},y_{1r}(t_{n+4}))\\ h{f}^{\mp}_{1r}(t_{n+3},y_{1r}(t_{n+3}))\\ h{f}^{\mp}_{1r}(t_{n+2},y_{1r}(t_{n+2}))\\ h{f}^{\mp}_{1r}(t_{n+1},y_{1r}(t_{n+1}))\\ h{f}^{\mp}_{1r}(t_{n},y_{1r}(t_{n})) \end{array}\right),\quad {y}^{\mp[n-1]}_{2r}=\left(\begin{array}{c} {y}^{\mp}_{2r}(t_{n+4})\\ h{f}^{\pm}_{2r}(t_{n+4},y_{2r}(t_{n+4}))\\ h{f}^{\pm}_{2r}(t_{n+3},y_{2r}(t_{n+3}))\\ h{f}^{\pm}_{2r}(t_{n+2},y_{2r}(t_{n+2}))\\ h{f}^{\pm}_{2r}(t_{n+1},y_{2r}(t_{n+1}))\\ h{f}^{\pm}_{2r}(t_{n},y_{2r}(t_{n})) \end{array}\right),
\end{equation*}
and
\begin{equation*}
\left( \begin{tabular}{l|llllll} 0&1&$\frac{1901}{720}$&$\frac{-2774}{720}$&$\frac{2616}{720}$&$\frac{-1274}{720}$&$\frac{251}{720}$\\ \cline{1-7} 0&1&$\frac{1901}{720}$&$\frac{-2774}{720}$&$\frac{2616}{720}$&$\frac{-1274}{720}$&$\frac{251}{720}$\\ 1&0&0&0&0&0&0\\ 0&0&1&0&0&0&0\\ 0&0&0&1&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0 \end{tabular}\right).
\end{equation*}
\section{Convergence, consistency and stability}
To address the convergence of the presented FGLMs we consider the numerical solutions ${y}_{1}(t_{n+j};r)=[{y}^-_{1}(t_{n+j};r),{y}^+_{1}(t_{n+j};r)]$ and ${y}_{2}(t_{n+j};r)=[{y}^-_{2}(t_{n+j};r),{y}^+_{2}(t_{n+j};r)]$ and the corresponding exact solutions $\mathbf{Y}_{1}(t_{n+j};r)=[\mathbf{Y}^-_{1}(t_{n+j};r),\mathbf{Y}^+_{1}(t_{n+j};r)]$ and $\mathbf{Y}_{2}(t_{n+j};r)=[\mathbf{Y}^-_{2}(t_{n+j};r),\mathbf{Y}^+_{2}(t_{n+j};r)]$ under (i)- and (ii)-differentiability, respectively. The local truncation errors (LTEs) of the FGLMs under strongly generalized differentiability are defined by
\begin{equation}\label{eq3.23}
\begin{array}{c}
{\Psi}_{1}(t_{n+k};r)=\sum_{j=0}^{k}r_{j}y_{1}(t_{n+j};r)-h\psi_{f_{1}}\big(y_1(t_{n+k};r),\cdots,y_1(t_n;r)\big),\\
{\Psi}_{2}(t_{n+k};r)=\sum_{j=0}^{k}r_{j}y_{2}(t_{n+j};r)-h\psi_{f_{2}}\big(y_2(t_{n+k};r),\cdots,y_2(t_n;r)\big),\\
\end{array}
\end{equation}
where $r_{k}=-r_{k-1}=1$ and $r_{j}=0$ for $j=0,1,\ldots,k-2$, and
\begin{eqnarray*}
\psi_{f_{1}}\big(y_1(t_{n+k};r),\cdots,y_1(t_n;r)\big)&=&\sum_{j=0}^{k-1}\beta_jf_1(t_{n+j},y_1(t_{n+j};r)),\\
\psi_{f_{2}}\big(y_2(t_{n+k};r),\cdots,y_2(t_n;r)\big)&=&\sum_{j=0}^{k-1}\beta_jf_2(t_{n+j},y_2(t_{n+j};r)).
\end{eqnarray*}
Consistency and stability are two essential conditions for convergence.
\begin{defn}
A fuzzy GLM form of a $k$-step method under generalized differentiability is said to be consistent if, for all fuzzy initial value problems, the residuals ${\Psi}_{1}(t_{n+k};r)$ and ${\Psi}_{2}(t_{n+k};r)$ defined by (\ref{eq3.23}) satisfy
\begin{eqnarray*}
\lim_{h\rightarrow0}\frac{1}{h}{\Psi}_{1}(t_{n+k};r)=0,\\
\lim_{h\rightarrow0}\frac{1}{h}{\Psi}_{2}(t_{n+k};r)=0.
\end{eqnarray*}
\end{defn}
\begin{defn}
A fuzzy GLM is stable if the minimal polynomial of the coefficient matrix $V$ has no zeros of modulus greater than one and all zeros of modulus one are simple; in other words, if it satisfies the root condition.
\end{defn}
To verify the stability of the fuzzy GLMs given in Section \ref{sec3} under generalized differentiability, we compute the minimal polynomial $p_{k}(w)$ of the coefficient matrix $V$ for $k=4,5$:
\begin{eqnarray*}
p_{k}(w)=w^{k}(w-1),\quad k=4,5,
\end{eqnarray*}
which satisfies the root condition, so the corresponding fuzzy GLMs are stable.
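The root condition for $V$ can also be checked numerically. The sketch below is our own verification (it inspects the eigenvalues of $V$ rather than building the minimal polynomial explicitly): it assembles the $(k+1)\times(k+1)$ matrix $V$ read off from the tableaux above and confirms that the spectrum is $\{1,0,\dots,0\}$, so the only eigenvalue on the unit circle is simple.
\begin{verbatim}
import numpy as np

def adams_glm_V(betas):
    """Matrix V of the GLM form of a k-step Adams method, with
    betas = [beta_{k-1}, ..., beta_0] as in the tableaux above."""
    k = len(betas)
    V = np.zeros((k + 1, k + 1))
    V[0, 0] = 1.0            # y_{n+k} reuses y_{n+k-1} ...
    V[0, 1:] = betas         # ... and the stored hf values
    for i in range(2, k + 1):
        V[i, i - 1] = 1.0    # the hf history is shifted by one slot
    return V

for betas in ([55/24, -59/24, 37/24, -9/24],
              [1901/720, -2774/720, 2616/720, -1274/720, 251/720]):
    lam = np.linalg.eigvals(adams_glm_V(betas))
    # root condition: |lambda| <= 1, simple eigenvalues on |lambda| = 1
    print(np.round(np.sort(np.abs(lam))[::-1], 12))
\end{verbatim}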
\section{Numerical results}\label{sec5}
In this section, among many test problems, we report one example to illustrate the numerical results of the FGLMs for solving fuzzy differential equations under strongly generalized differentiability. We utilize the FGLMs ($k=4,5$) presented in Section \ref{sec3}. Numerical results for the absolute errors and the corresponding order of convergence are provided. The order of convergence $p$ can be estimated by evaluating the ratio $\frac{E(h/2)}{E(h)}\approx \frac{1}{2^p}$.
\begin{test}\label{test6.1}
(Bede \cite{Bede1}) Consider the following fuzzy initial value problem
\begin{equation}\label{FIVP1}
y'=-y+e^{-t}(-1,0,1),\qquad y_0=(-1,0,1).
\end{equation}
The system of ODEs corresponding to (i)-differentiability is given by
\begin{equation*}
\left\{\begin{array}{l}
(y^-)'= -y^+-e^{-t},\\
(y^1)'= -y^1,\\
(y^+)' = -y^-+e^{-t}, \\
y_0=(-1,0,1).
\end{array}\right.
\end{equation*}
The analytical solution under (i)-differentiability is
\begin{eqnarray}
Y_1^-(t;r) &=& (1-r)(\frac{1}{2}e^{-t}-\frac{3}{2}e^t) \nonumber\\
Y_1^+(t;r) &=& (1-r)(\frac{3}{2}e^t-\frac{1}{2}e^{-t}) \nonumber
\end{eqnarray}
Similarly, the system of ODEs corresponding to (ii)-differentiability is given by
\begin{equation*}
\left\{\begin{array}{l}
(y^-)'= -y^-+e^{-t},\\
(y^1)'= -y^1,\\
(y^+)' = -y^+-e^{-t}, \\
y_0=(-1,0,1),
\end{array}\right.
\end{equation*}
and the analytical solution under (ii)-differentiability is
\begin{eqnarray}
Y_2^-(t;r) &=& (-1+r)(1-t)\exp(-t) \nonumber\\
Y_2^+(t;r) &=& (1-r)(1-t)\exp(-t). \nonumber
\end{eqnarray}
We demonstrate the numerical solution of the FIVP \eqref{FIVP1} on the interval $[0,2]$. The exact and approximate solutions under (i)- and (ii)-differentiability, obtained by the FGLMs for $k=4$ and $k=5$, are presented in Tables \ref{Tab6.1.1} and \ref{Tab6.1.2} at $t=2$ with $N=20$ and $h=\frac{T-t_0}{N}$. Moreover, the corresponding convergence results are provided in Tables \ref{Tab6.1.3} and \ref{Tab6.1.4}.
\end{test} \begin{table}[!htp] \centering \tiny \begin{tabular}{cccc} \hline $r$ & $y_{1r}$ & $Y_{1r}$ & $E_{1r}$ \\[0.5mm] \hline\\[-1.5mm] 0 & [-1.101531E1, 1.101531E1] & [-1.101592E1, 1.101592E1] & 6.024101E-4\\[0.5mm] 0.1 & [-9.913783E0, 9.913783E0] & [-9.914325E0, 9.914325E0] & 5.421691E-4\\[0.5mm] 0.2 & [-8.812251E0, 8.812251E0] & [-8.812733E0, 8.812733E0] & 4.819281E-4\\[0.5mm] 0.3 & [-7.710720E0, 7.710720E0] & [-7.711142E0, 7.711142E0] & 4.216871E-4\\[0.5mm] 0.4 & [-6.609188E0, 6.609188E0] & [-6.609550E0, 6.609550E0] & 3.614461E-4\\[0.5mm] 0.5 & [-5.507657E0, 5.507657E0] & [-5.507958E0, 5.507958E0] & 3.012050E-4\\[0.5mm] 0.6 & [-4.406126E0, 4.406126E0] & [-4.406367E0, 4.406367E0] & 2.409640E-4\\[0.5mm] 0.7 & [-3.304594E0, 3.304594E0] & [-3.304775E0, 3.304775E0] & 1.807230E-4\\[0.5mm] 0.8 & [-2.203063E0, 2.203063E0] & [-2.203183E0, 2.203183E0] & 1.204820E-4\\[0.5mm] 0.9 & [-1.101531E0, 1.101531E0] & [-1.101592E0, 1.101592E0] & 6.024101E-5\\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (a) \end{tabular} \begin{tabular}{cccc} \hline $r$ & $y_{2r}$ & $Y_{2r}$ & $E_{2r}$\\[0.5mm] \hline\\[-1.5mm] 0 & [1.352883E-1, -1.352883E-1] & [1.353353E-1, -1.353353E-1] & 4.699417E-5\\[0.5mm] 0.1 & [1.217595E-1, -1.217595E-1] & [1.218018E-1, -1.218018E-1] & 4.229476E-5\\[0.5mm] 0.2 & [1.082306E-1, -1.082306E-1] & [1.082682E-1, -1.082682E-1] & 3.759534E-5\\[0.5mm] 0.3 & [9.470180E-2, -9.470180E-2] & [9.473470E-2, -9.473470E-2] & 3.289592E-5\\[0.5mm] 0.4 & [8.117297E-2, -8.117297E-2] & [8.120117E-2, -8.120117E-2] & 2.819650E-5\\[0.5mm] 0.5 & [6.764414E-2, -6.764414E-2] & [6.766764E-2, -6.766764E-2] & 2.349709E-5\\[0.5mm] 0.6 & [5.411532E-2, -5.411532E-2] & [5.413411E-2, -5.413411E-2] & 1.879767E-5\\[0.5mm] 0.7 & [4.058649E-2, -4.058649E-2] & [4.060058E-2, -4.060058E-2] & 1.409825E-5\\[0.5mm] 0.8 & [2.705766E-2, -2.705766E-2] & [2.706706E-2, -2.706706E-2] & 9.398835E-6\\[0.5mm] 0.9 & [1.352883E-2, -1.352883E-2] & [1.353353E-2, -1.353353E-2] & 4.699417E-6\\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (b) \end{tabular} \caption{\scriptsize(a)Approximate solution of the FGLM ($k=4$) $y_{1r}=[y_{1r}^-,y_{1r}^+]$, exact solution $Y_{1r}=[Y_{1r}^-,Y_{1r}^+]$ and absolute error $E_{1r}$ under (i)-differentiability, (b)Approximate solution of the FGLM ($k=4$) $y_{2r}=[y_{2r}^-,y_{2r}^+]$, exact solution $Y_{2r}=[Y_{2r}^-,Y_{2r}^+]$ and absolute error $E_{2r}$ under (ii)-differentiability, Test \ref{test6.1}.}\label{Tab6.1.1} \end{table} \begin{table}[!htp] \centering \tiny \begin{tabular}{ccccccc} \hline $r$ & $y_{1r}$ & $Y_{1r}$ & $E_{1r}$ \\[0.5mm] \hline\\[-1.5mm] 0 & [-1.101587E1, 1.101587E1] & [-1.101592E1, 1.101592E1] & 4.451187E-5\\[0.5mm] 0.1 & [-9.914285E0, 9.914285E0] & [-9.914325E0, 9.914325E0] & 4.006069E-5\\[0.5mm] 0.2 & [-8.812698E0, 8.812698E0] & [-8.812733E0, 8.812733E0] & 3.560950E-5\\[0.5mm] 0.3 & [-7.711110E0, 7.711110E0] & [-7.711142E0, 7.711142E0] & 3.115831E-5\\[0.5mm] 0.4 & [-6.609523E0, 6.609523E0] & [-6.609550E0, 6.609550E0] & 2.670712E-5\\[0.5mm] 0.5 & [-5.507936E0, 5.507936E0] & [-5.507958E0, 5.507958E0] & 2.225594E-5\\[0.5mm] 0.6 & [-4.406349E0, 4.406349E0] & [-4.406367E0, 4.406367E0] & 1.780475E-5\\[0.5mm] 0.7 & [-3.304762E0, 3.304762E0] & [-3.304775E0, 3.304775E0] & 1.335356E-5\\[0.5mm] 0.8 & [-2.203174E0, 2.203174E0] & [-2.203183E0, 2.203183E0] & 8.902375E-6\\[0.5mm] 0.9 & [-1.101587E0, 1.101587E0] & [-1.101592E0, 1.101592E0] & 4.451187E-6\\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (a) \end{tabular} \begin{tabular}{cccc} \hline $r$ & $y_{2r}$ & $Y_{2r}$ & 
$E_{2r}$ \\[0.5mm] \hline\\[-1.5mm] 0 & [1.353406E-1, -1.353406E-1] & [1.353353E-1, -1.353353E-1] & 5.270043E-6 \\[0.5mm] 0.1 & [1.218065E-1, -1.218065E-1] & [1.218018E-1, -1.218018E-1] & 4.743039E-6 \\[0.5mm] 0.2 & [1.082724E-1, -1.082724E-1] & [1.082682E-1, -1.082682E-1] & 4.216035E-6 \\[0.5mm] 0.3 & [9.473839E-2, -9.473839E-2] & [9.473470E-2, -9.473470E-2] & 3.689030E-6 \\[0.5mm] 0.4 & [8.120433E-2, -8.120433E-2] & [8.120117E-2, -8.120117E-2] & 3.162026E-6 \\[0.5mm] 0.5 & [6.767028E-2, -6.767028E-2] & [6.766764E-2, -6.766764E-2] & 2.635022E-6 \\[0.5mm] 0.6 & [5.413622E-2, -5.413622E-2] & [5.413411E-2, -5.413411E-2] & 2.108017E-6 \\[0.5mm] 0.7 & [4.060217E-2, -4.060217E-2] & [4.060058E-2, -4.060058E-2] & 1.581013E-6 \\[0.5mm] 0.8 & [2.706811E-2, -2.706811E-2] & [2.706706E-2, -2.706706E-2] & 1.054009E-6 \\[0.5mm] 0.9 & [1.353406E-2, -1.353406E-2] & [1.353353E-2, -1.353353E-2] & 5.270043E-7 \\[0.5mm] 1.0 & [0,0] & [0,0] & 0 \\[0.5mm] \hline \\ (b) \end{tabular} \caption{\scriptsize(a)Approximate solution of the FGLM ($k=5$), $y_{1r}=[y_{1r}^-,y_{1r}^+]$, exact solution $Y_{1r}=[Y_{1r}^-,Y_{1r}^+]$ and absolute error $E_{1r}$ under (i)-differentiability, (b)Approximate solution of the FGLM ($k=5$), $y_{2r}=[y_{2r}^-,y_{2r}^+]$, exact solution $Y_{2r}=[Y_{2r}^-,Y_{2r}^+]$ and absolute error $E_{2r}$ under (ii)-differentiability, Test \ref{test6.1}.}\label{Tab6.1.2} \end{table} \begin{table}[!htp] \centering \tiny \begin{tabular}{cccccccccccc} \hline $r$&$h_{i}$&\quad&\quad&\quad&\quad& $E_{2r}(h_{i})$&\quad&\quad&\quad&\quad& $p$\\[1mm] \hline 0.2& $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 3.759533862475462E-5& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 2.360816361970941E-6& \quad&\quad&\quad&\quad& 3.993196066118324E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 1.475860954835984E-7& \quad&\quad&\quad&\quad& 3.999657112312050E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 9.220812974275461E-9& \quad&\quad&\quad&\quad& 4.000519042265663E0\\[1mm] \hline 0.4 &$\frac{1}{10}$ & \quad&\quad&\quad&\quad& 2.819650396852780E-5& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad&$\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.770612271481675E-6& \quad&\quad&\quad&\quad& 3.993196066113545E0\\[1mm] \quad&$\frac{1}{40}$ & \quad&\quad&\quad&\quad& 1.106895716057599E-7& \quad&\quad&\quad&\quad& 3.999657112405316E0\\[1mm] \quad&$\frac{1}{80}$ & \quad&\quad&\quad&\quad& 6.915609765401065E-9& \quad&\quad&\quad&\quad& 4.000519034937461E0\\[1mm] \hline 0.6 &$\frac{1}{10}$ & \quad&\quad&\quad&\quad& 1.879766931239119E-5& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad&$\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.180408180992409E-6& \quad&\quad&\quad&\quad& 3.993196066110909E0\\[1mm] \quad&$\frac{1}{40}$ & \quad&\quad&\quad&\quad& 7.379304775567697E-8& \quad&\quad&\quad&\quad& 3.999657112049211E0\\[1mm] \quad&$\frac{1}{80}$ & \quad&\quad&\quad&\quad& 4.610406514893306E-9& \quad&\quad&\quad&\quad& 4.000519033851667E0\\[1mm] \hline 0.8 &$\frac{1}{10}$ & \quad&\quad&\quad&\quad& 9.398834656195593E-6& \quad&\quad&\quad&\quad& \quad\\[1mm] \quad&$\frac{1}{20}$ & \quad&\quad&\quad&\quad& 5.902040904962047E-7& \quad&\quad&\quad&\quad& 3.993196066110909E0\\[1mm] \quad&$\frac{1}{40}$ & \quad&\quad&\quad&\quad& 3.689652387783848E-8& \quad&\quad&\quad&\quad& 3.999657112049211E0\\[1mm] \quad&$\frac{1}{80}$ & \quad&\quad&\quad&\quad& 2.305203257446653E-9& \quad&\quad&\quad&\quad& 4.000519033851667E0\\[1mm] \hline \end{tabular} \caption{\scriptsize Convergence of the FGLM 
($k=4$) under (ii)-differentiability}\label{Tab6.1.3} \end{table}
\begin{table}[!htp] \centering \tiny \begin{tabular}{cccccccccccc} \hline $r$&$h_{i}$&\quad&\quad&\quad&\quad& $E_{2r}(h_{i})$&\quad&\quad&\quad&\quad& $p$\\ [.5mm]\hline\\[-1.5mm] 0.2 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 4.216034534335056E-6 & \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.336319765954386E-7 & \quad&\quad&\quad&\quad& 4.979549509841708E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 4.185707641601866E-9 & \quad&\quad&\quad&\quad& 4.996649911789974E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 1.308334413030465E-10 & \quad&\quad&\quad&\quad& 4.999668298467584E0\\[1mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 4.088021587911328E-12 & \quad&\quad&\quad&\quad& 5.000184718796247E0\\[1mm] \hline 0.4 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 3.162025900754761E-6 & \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 1.002239824188234E-7 & \quad&\quad&\quad&\quad& 4.979549510242824E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 3.139280738140293E-9 & \quad&\quad&\quad&\quad& 4.996649908201587E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 9.812499424111110E-11 & \quad&\quad&\quad&\quad& 4.999669576905354E0\\[1mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 3.065672715685253E-12 & \quad&\quad&\quad&\quad& 5.000345072764134E0\\[1mm] \hline 0.6 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 2.108017267167528E-6 & \quad&\quad&\quad&\quad& \quad\\[.5mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 6.681598831159707E-8 & \quad&\quad&\quad&\quad& 4.979549509542057E0\\[.5mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 2.092853827739827E-9 & \quad&\quad&\quad&\quad& 4.996649907306344E0\\[.5mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 6.541663044590251E-11 & \quad&\quad&\quad&\quad& 4.999670292639665E0\\[.5mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 2.043996916167856E-12 & \quad&\quad&\quad&\quad& 5.000192524602108E0\\[.5mm] \hline 0.8 & $\frac{1}{10}$ & \quad&\quad&\quad&\quad& 1.054008633583764E-6 & \quad&\quad&\quad&\quad& \quad\\[1mm] \quad& $\frac{1}{20}$ & \quad&\quad&\quad&\quad& 3.340799415579854E-8 & \quad&\quad&\quad&\quad& 4.979549509542057E0\\[1mm] \quad& $\frac{1}{40}$ & \quad&\quad&\quad&\quad& 1.046426913869913E-9 & \quad&\quad&\quad&\quad& 4.996649907306344E0\\[1mm] \quad& $\frac{1}{80}$ & \quad&\quad&\quad&\quad& 3.270831522295126E-11 & \quad&\quad&\quad&\quad& 4.999670292639665E0\\[1mm] \quad& $\frac{1}{160}$ & \quad&\quad&\quad&\quad& 1.021998458083928E-12 & \quad&\quad&\quad&\quad& 5.000192524602108E0\\[1mm] \hline \end{tabular} \caption{\scriptsize Convergence of the FGLM ($k=5$) under (ii)-differentiability}\label{Tab6.1.4} \end{table}
From Tables \ref{Tab6.1.3} and \ref{Tab6.1.4}, it follows that the fuzzy GLM forms of the $4$-step methods under strongly generalized differentiability have convergence order 4 and the fuzzy GLM forms of the 5-step methods have convergence order 5.
\section{Conclusion}
In this paper we have developed linear multistep methods (Adams-Bashforth methods) in the framework of general linear methods for solving fuzzy differential equations under strongly generalized differentiability. We have shown the consistency, stability and convergence of the new FGLM formulation. The general framework of FGLMs will be studied in a forthcoming paper.
\section{Introduction}
Microwave photonics, based on low noise tunable optically--carried microwave signals, addresses many applications such as frequency standards distribution \cite{Ma:94}, photonic remoting of antennas in radar systems, photonic links for cellular, wireless, satellite and radio--astronomy applications, cable television systems, optical signal processing \cite{capmany:07}, etc. In contrast with other technologies, dual--frequency lasers can provide, thanks to their beat note, optically carried RF signals and offer the advantage of optimal modulation depth \cite{Alouini:2001}. The two laser modes share the same cavity; their phase noises are thus expected to cancel in the beat note. To get a tunable RF signal with such a dual--frequency laser, a birefringent crystal (BC) can be inserted inside the cavity to create two optical paths for two orthogonal polarization modes. Besides, VECSELs exhibit class--A dynamics \cite{Baili:09} since the photon lifetime inside the cavity can be much longer than the carrier lifetime inside the quantum wells. Free from relaxation oscillations and displaying low noise, dual--frequency VECSELs are therefore good candidates for microwave photonics. Their intensity and phase noises have already been studied in detail at 1~$\mu$m \cite{De:14} and 1.55~$\mu$m wavelengths, and more recently at 852 nm \cite{Liu:18}. The investigation of this latter wavelength has been triggered by potential new applications such as cesium atomic clocks \cite{Dumont:14} or sensors based on coherent population trapping (CPT). For instance, a double lambda scheme for CPT probed by $\mathrm{lin}\perp\mathrm{lin}$ laser beams is shown to create Raman--Ramsey fringes with a larger contrast than the usual simple lambda scheme in \cite{Zanon:2005}. All these potential applications of dual--frequency VECSELs at various wavelengths require low intensity and beat note phase noises. A deep understanding of the noise mechanisms is the cornerstone of our approach reported in \cite{Liu:18} and a key point for noise reduction strategies. In particular, intensity noise reduction and phase noise reduction are shown to rely on the same four rules: (i) the pump noise should obviously be made as small as possible; (ii) the excitation ratios for the two lasing modes have to be balanced; (iii) the cross--saturation between the two modes must be minimized; and (iv) the correlations between the intensity noises of the pumping regions corresponding to each mode have to be in--phase and as strong as possible. On the one hand, minimizing the cross--saturation can be achieved by increasing the spatial separation between the two modes in the active medium. On the other hand, in--phase 100\% correlation between the two pumping areas can be achieved trivially using a single--mode fibered pump (provided it delivers enough power) \cite{Liu:18OL} instead of a multi--mode fibered one that would create a speckle pattern on the VECSEL structure. Indeed, single--mode fibered pumping allows the gain areas of the two modes in the gain medium to intercept the same noise. However, this solution cannot be applied at every wavelength: either there simply exists no commercial single--mode fibered pump laser, or there is a power issue. Pumping 1.5 $\mu$m dual--frequency VECSELs using currently available single-mode fibered pump diodes requires a trick, as reported in \cite{Liu:18OL}; otherwise the power delivered to the structure is not sufficient.
In the particular case of 852 nm VECSELs for example, no commercial single--mode fibered laser diode is currently available with enough power. Yet another way to directly increase the pump noise correlation is to superimpose as much as possible the two pumping regions so that they intercept the same intensity fluctuations. Hence, fulfilling both conditions (iii) and (iv) simultaneously seems contradictory. The question then is to find a way to get low cross--saturation and strong in--phase correlations between the pumping regions at the same time using a multi--mode fibered pump. In this paper, we address this issue for noise reduction in a dual--frequency VECSEL at 852 nm using a new pump architecture. Using amplitude division of a laser diode beam, the two lasing modes are separately pumped. Fitting the modes with these two copies of the same pump and minimizing their overlap are the key points to achieve the intended goal of phase noise improvement with multi--mode pumping. The paper is organized as follows. The dual--frequency VECSEL and its novel pumping scheme are presented in Section \ref{section1}. Particular attention is paid to the pump noise features, the correlation between the two pumping regions and the cross--saturation level. Section \ref{section2} focuses on the intensity noise of each mode and the noise mechanisms driven by our pumping set--up. Section \ref{section3} is dedicated to the analysis of the phase noise of the beat note. We perform a comparison with the previous studies \cite{Liu:18, Dumont:14}, which use a single pump spot on the gain medium. Experimental results are also compared to the model developed in \cite{Liu:18}. Finally, we sum up our results in Section \ref{conclusion}.
\section{Device implementation for fully--correlated pumping}
\label{section1}
\begin{figure}[!ht]
\centering\includegraphics[width=0.75\textwidth]{fig1.eps}
\caption{Experimental set-up for fully in-phase correlated pumping. The cross-polarized ordinary (o) wave and extraordinary (e) wave created by the birefringent crystal (BC) are separately pumped thanks to amplitude division of a multi--mode fibered laser diode.}
\label{fig1}
\end{figure}
The dual--frequency VECSEL and its pumping architecture are schematized in Fig. \ref{fig1}. The semiconductor chip (referred to as 1/2--VCSEL) is glued to a Peltier cooler, which is bonded to a heat sink. The Peltier temperature is stabilized at 20$^{\circ}$C. This 1/2--VCSEL is a multi-layered structure grown on a 350-$\mu$m-thick GaAs substrate by the metal-organic chemical-vapor deposition method. It contains a distributed Bragg reflector and active layers. The Bragg reflector is composed of 32.5 pairs of AlAs/Al$_{0.22}$GaAs quarter-wave layers leading to a reflectivity larger than 99.94\% around 850~nm. This mirror constitutes one half of the cavity formed with the output coupler, which is a concave mirror with a transmission of 0.5\% and a radius of curvature of 5~cm. In the active layers, seven 8-nm-thick GaAs quantum wells are embedded in Al$_{0.22}$GaAs barriers, which absorb the pump power. About 75\% of the pump power at 673~nm can be absorbed in a single pass. Each of the quantum wells is located at an antinode of the laser field. Two layers of Al$_{0.39}$GaAs form potential barriers to confine the carriers. A 50-nm-thick InGaP layer and a 5-nm GaAs layer cap the structure to prevent Al oxidation.
This chip is designed without anti-reflection coating to increase the gain of the mode resonant within the micro-cavity created in the semiconductor structure. Taking into account the losses of the output coupler, the photon lifetime inside the empty 5-cm-long cavity is 32~ns, which is much longer than the carrier lifetime inside the wells $\tau~\simeq~1~\mathrm{ns}$, thus ensuring that the laser obeys class-A dynamics \cite{Baili:09}. An anti-reflection coated YVO$_4$ BC is inserted inside the cavity. It is cut at 45$^{\circ}$ off its optical axis, thus creating a walk--off between the extraordinary polarization and the ordinary one. This separation leads to the simultaneous oscillation of two orthogonally $x$- and $y$-polarized laser modes inside the cavity. However, depending on the mode radius $w_0$ in the structure, the two modes partially overlap inside the gain structure and thus experience some competition through nonlinear coupling, i.e., gain cross-saturation. This is described by the self- to cross-saturation ratios denoted as $\xi_{xy}$ and $\xi_{yx}$ and the coupling constant $C = \xi_{xy} \cdot \xi_{yx}$. Longitudinal single-mode operation is obtained for each of the two orthogonal polarizations by inserting an isotropic YAG etalon. The 100-$\mu$m thickness of the etalon leads to a beat note frequency in the range of a few hundred MHz. In order to minimize the cross--saturation (condition (iii)), the walk--off can be increased by increasing the thickness of the BC, but this would lead to an increase of the intra--cavity losses. The thickness of the BC chosen here results from a trade--off between these two effects and is equal to 1~mm, leading to a value of the walk--off equal to 100~$\mu$m. With this crystal and a cavity length equal to $L_\mathrm{cav} = 46.8\,\mathrm{mm}$, the two modes are almost completely separated in the active medium and we estimate the coupling constant to be as low as $C \simeq 0.05$, which is three times smaller than the value obtained in \cite{Liu:18}. This reduced value of the coupling constant is promising for phase noise reduction, provided condition (iv) is simultaneously fulfilled. The pump laser is a 673~nm laser diode. It is delivered to a connector by a multi--mode fiber whose core diameter is $102$~$\mu\mathrm{m}$ with a numerical aperture equal to 0.22. Since the separating distance between the two modes is about $100$~$\mu\mathrm{m}$, which is larger than the mode radius $w_0$ in the 1/2--VCSEL, pumping the gain medium with a single spot would lead the two mode pumping regions to intercept poorly correlated intensity fluctuations from the speckle pattern. This is not compatible with condition (iv) and is thus a dead--end for phase noise reduction. As a solution to make the two spatially separated modes undergo the same pump intensity fluctuations, we propose to use two spatially separated pump beams originating from the same source. The diode laser beam is collimated with a lens and shaped with a slit. About two thirds of the power is transmitted. The laser beam is then sent to a beam-splitter which performs an amplitude division and thus creates two copies of the same pump. A second lens is then used to image the pumps onto the semiconductor chip with an incidence angle $\theta \simeq 40^{\circ}$ and a working distance of 30 mm. As a result, two approximately $100 \,\mu\mathrm{m} \times 70 \,\mu\mathrm{m}$ elliptical spots pump each of the two mode regions on the structure.
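As a quick consistency check of these numbers (our own back-of-the-envelope sketch, assuming the elongation of the imaged spot simply follows the $1/\cos\theta$ projection of oblique incidence), a $\sim$70~$\mu$m beam width stretched by the $40^{\circ}$ incidence angle indeed gives a major axis close to the quoted value of about 100~$\mu$m:
\begin{verbatim}
import numpy as np

theta = np.deg2rad(40.0)       # incidence angle on the 1/2-VCSEL
minor = 70.0                   # spot size (micrometers) across the incidence plane
major = minor / np.cos(theta)  # geometric elongation along the incidence plane
print(f"projected spot: {major:.0f} um x {minor:.0f} um")  # ~91 um x 70 um
\end{verbatim}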
Thanks to the $70 \,\mu\mathrm{m}$ typical width of the pump spot along the $y$ direction, a walk--off of $100$~$\mu\mathrm{m}$ indeed makes it possible to pump the two mode regions separately, without overlap, meaning that each beam pumps one and only one mode. However, this is true only if the two pump spots fit the separating distance between the two modes. To this end, the reflector used after the beam-splitter is put on an adjustable stage and can therefore be tilted. The pumps are then responsible for the population inversion of each mode through the creation of reservoirs of carriers, whose average unsaturated numbers are denoted as $\overline{N}_{0i}$ for $i = \left\lbrace x, y\right\rbrace$ and whose fluctuations are denoted as $\delta N_{0i}$. As a consequence, each pump, whose effective power is denoted as $P_{\mathrm{p},i}$, enables its own laser operation with an excitation ratio denoted $r_i$. Due to the choice of amplitude division to generate these two pumps, full correlation of the intensity noises of the two beams is expected. The correlation amplitude is denoted as $\eta$ while its phase is denoted $\Psi$. With $\mathrm{RIN_p}$ the Relative Intensity Noise of the pump, the correlation between the carrier reservoirs of the two modes verifies:
\begin{equation}
\label{eq1}
\left\langle \widetilde{\delta N}_{0x}\left(f\right) \cdot \widetilde{\delta N}^{\ast}_{0y} \left(f\right)\right\rangle = \eta\,\mathrm{RIN_p}\left(f\right)\,\overline{N}_{0x}\,\overline{N}_{0y}\,e^{i\,\Psi} \, ,
\end{equation}
where $\left\langle \cdot \right\rangle$ stands for the statistical average, the tilde denotes Fourier transformed quantities and $f$ is the noise frequency.
\begin{figure}[!ht]
\centering
\includegraphics[width=.49\textwidth]{fig2a.eps}
\includegraphics[width=.49\textwidth]{fig2b.eps}
\caption{Pump noise measurements. The thick (black) line in panel (a) shows the RIN spectrum of the beam pumping $x$-polarization while the thin (cyan) line stands for $y$-polarization. Panel (b) shows the correlation spectra between the two pump beams. The correlation amplitude is represented by open circles (in purple), the detection floor by thick lines (in black) and the correlation phase lies inside the dedicated inset (green diamonds).}
\label{fig2}
\end{figure}
Figure \ref{fig2} focuses on the experimental noise properties of the two pumps, namely the pump RIN level and the correlation between the RINs of the two pumps. The RIN spectra of the pump beams are displayed in Fig.~\ref{fig2}(a). They can be modeled by a constant value $\mathrm{RIN_p} = - 133 \,\mathrm{dB/Hz}$. Figure \ref{fig2}(b) evidences a very high correlation amplitude, very close to 1 at low frequencies and around 0.95 above a few MHz. This figure also highlights a fully in--phase behavior of the pumps since $\Psi =0$. The pumping scheme shown in Fig.~\ref{fig1} is thus able to overcome the apparent contradiction between conditions (iii) and (iv). Indeed, very strongly in--phase correlated pumps drive the dual--frequency VECSEL while very low cross--saturation is displayed. In the following, the pump noise correlation amplitude is modeled by a constant value $\eta = 0.98$. Together with the coupling constant $C = 0.05$, this forms a pair of laser parameters in strong contrast with the values ($\eta=0.45$~,~$C=0.44$) and ($\eta=0.1$~,~$C=0.15$) previously achieved in \cite{Liu:18}. We can thus expect this low mode coupling and strong in--phase pump noise correlation to have a positive impact on the noises of our dual--frequency VECSEL.
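For completeness, the correlation amplitude and phase plotted in Fig.~\ref{fig2}(b) can be estimated from two recorded pump intensity time series with standard spectral estimators. The sketch below is only an illustrative post-processing recipe written by us (it is not the authors' analysis code, and the Welch parameters and variable names are arbitrary choices): it returns $\eta(f)=|S_{xy}|/\sqrt{S_{xx}S_{yy}}$ and $\Psi(f)=\arg S_{xy}$, together with the RIN of each beam.
\begin{verbatim}
import numpy as np
from scipy.signal import welch, csd

def pump_correlation_spectrum(p1, p2, fs, nperseg=4096):
    """Correlation amplitude eta(f) and phase Psi(f) between two pump
    intensity records p1, p2 sampled at fs (Hz), from Welch/CSD estimates."""
    f, S11 = welch(p1, fs=fs, nperseg=nperseg)
    _, S22 = welch(p2, fs=fs, nperseg=nperseg)
    _, S12 = csd(p1, p2, fs=fs, nperseg=nperseg)
    eta = np.abs(S12) / np.sqrt(S11 * S22)   # correlation amplitude
    psi = np.angle(S12)                      # correlation phase
    return f, eta, psi

def rin_dB_per_Hz(p, fs, nperseg=4096):
    """Relative intensity noise: PSD of the intensity fluctuations
    normalised by the squared mean intensity, in dB/Hz."""
    f, S = welch(p - np.mean(p), fs=fs, nperseg=nperseg)
    return f, 10.0 * np.log10(S / np.mean(p) ** 2)
\end{verbatim}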
\section{Analysis of the in--phase and anti--phase components of the laser intensity noise}
\label{section2}
\begin{figure}[!ht]
\centering\includegraphics[width=0.49\textwidth]{fig3a.eps}
\includegraphics[width=0.49\textwidth]{fig3b.eps}
\caption{Intensity noise spectra of the dual--frequency VECSEL. The symbols stand for measurements and the solid lines stand for the model. Panel (a) shows the RIN of $x$--polarization (in blue) and the RIN of $y$--polarization (in orange). Panel (b) shows the RIN of the in-phase noise mechanism (in purple) and the RIN of the anti-phase noise (in green). The parameters used for the model are: $\tau = 1\,\mathrm{ns}$, $\Psi=0$, $\eta =0.98$, $r_x=1.38$, $r_y=1.23$, $C=0.05$, $\mathrm{RIN_p}=-133\,\mathrm{dB/Hz}$, $\tau_x=30\,\mathrm{ns}$, $\tau_y = 17\,\mathrm{ns}$.}
\label{fig3}
\end{figure}
Using a half--wave plate followed by a polarization beam splitter, the $x$- and $y$-polarized modes emitted by the dual--frequency VECSEL are separately sent to two photodiodes. After amplification, the intensity fluctuations are finally recorded by an oscilloscope and analyzed. Figure~\ref{fig3}(a) shows that the RINs exhibit the usual behavior of a low--pass filter whose cut--off frequency is typically given by $1/\tau_i$, where $\tau_i$ corresponds to the photon lifetime inside the cavity for polarization $i = \left\lbrace x, y\right\rbrace$. Figure~\ref{fig3} compares these measurements to a model which takes into account the influence of the pump noise on the RIN through the coupled rate equation theory \cite{Liu:18, De:14}. A very good agreement is found. The eigenmodes of the intensity noises in dual--frequency oscillation can be interpreted in terms of in--phase and anti--phase modes, just like for coupled pendulums. Figure \ref{fig3}(b) evidences the fact that the anti--phase noise is between 8 and 15 dB lower than the in--phase noise throughout the whole frequency range 10 kHz -- 20 MHz. This confirms the expectation that full in--phase pumping combined with a very low non--linear coupling constant $C$ are favorable conditions for the in--phase noise to dominate. Indeed, such a domination of the in--phase mechanism was predicted in \cite{Liu:18}, which states that decreasing the coupling between the two modes favors in--phase noise with respect to anti--phase noise. The dramatic anti--phase noise reduction corroborates the fact that the gain competition between the two modes is well inhibited. This proves that the present laser architecture prevents the noises of the dual--frequency VECSEL from being deteriorated by cross--saturation.
\begin{figure}[!ht]
\centering\includegraphics[width=.49\textwidth]{fig4a.eps}
\includegraphics[width=.49\textwidth]{fig4b.eps}
\caption{Amplitude (a) and phase (b) of the correlation spectrum between the intensity noises of the two modes. The symbols stand for measurements and the solid lines stand for the model computed with the same parameters as in Fig.~\ref{fig3}.}
\label{fig4}
\end{figure}
The amplitude and phase of the resulting correlation of the dual--frequency VECSEL intensity noises are analysed in Fig.~\ref{fig4}. The intensity noises of the two modes are found to be fully in--phase correlated, with a strong correlation amplitude. More precisely, Fig.~\ref{fig4}(a) shows that the correlation amplitude between the two modes is larger than 0.8 up to several MHz. The decrease at frequencies larger than 8 MHz is due to the limitations of our measurement.
Indeed, at this frequency the intensity noise approaches -140 dB/Hz (as shown in Fig.~\ref{fig3}), which is very close to the shot--noise level. This explains why this decrease in correlation amplitude is not predicted by the model. Figure~\ref{fig4}(b) corroborates the fact that the in--phase noise mechanism is strongly dominant between 10 kHz and 20 MHz. To summarize, we have proved in this section that, when conditions (iii) and (iv) are both verified, a significant reduction of the dual--frequency VECSEL anti--phase intensity noise is observed, as expected from our modelling. The next section aims at investigating whether a similar decrease in the phase noise of the beat note is obtained.
\section{Phase noise reduction}
\label{section3}
First, it is worth mentioning some phase noise properties of VECSELs. These semiconductor lasers exhibit large Henry $\alpha$ factors, responsible for the coupling between the phase and the amplitude fluctuations of the laser field \cite{Henry:1982}. This effect of the Henry factor has been shown to be the dominating mechanism for the phase noise of the beat note of dual--frequency VECSELs at high frequencies \cite{De:14}, typically above one MHz. Another relevant contribution to the phase noise has been shown to originate from the thermal fluctuations of the semiconductor active medium induced by pump intensity fluctuations. This thermal effect dominates the beat note phase noise at low frequencies and its properties can be described using a macroscopic model which involves three parameters \cite{Laurain:10}: $R_\mathrm{T}$, the thermal resistance of the semiconductor structure; $\tau_\mathrm{T}$, its thermal response time; and $\Gamma_\mathrm{T}$, its refractive index variation with temperature. Besides, for dual--frequency VECSELs, the non--linear coupling between the modes, induced by cross--saturation, has a detrimental influence on the phase noise \cite{Liu:18}. That is why, in the introduction, condition (iii) stipulates a very low coupling constant to achieve a low phase noise. Here, we have estimated the coupling constant to be only $C = 0.05$. The beat note phase--noise level is also expected to strongly depend on the correlations between the pump noises seen by the two modes \cite{De:14}, which thus deserve to be optimized. Condition (iv) stipulates that fully in--phase correlated pumping is required, i.e. $\eta \to 1$ and $\Psi \to 0$. With the present specially designed pumping architecture, Section~\ref{section2} has shown that we achieve a correlation amplitude $\eta$ close to 1 with $\Psi =0$. In order to also meet condition (ii), balanced excitation ratios ($r_x \simeq r_y$) are needed. However, this latter condition plays only a marginal role in reducing the phase noise as soon as $\eta \neq 1$, which is the case here. Figure \ref{fig5} reports the VECSEL output beat note detected with a spectrum analyser in two conditions: with a single pump spot as reported in \cite{Liu:18} (in blue) and with the two in--phase correlated pumps (in orange). As a consequence of the novel pumping scheme and as predicted, the beat note noise pedestal experiences a tremendous decrease.
\begin{figure}[!ht]
\centering\includegraphics[width=7.5cm]{fig5.eps}
\caption{Beat note spectrum obtained with an electrical spectrum analyzer. The bottom light trace (in cyan) stands for the detection floor. The beat note spectra are plotted versus the frequency offset from the carrier.
The wider spectrum (in blue) corresponds to the single pump spot scheme reported in \cite{Liu:18} whereas the other one (in orange) corresponds to the two fully in--phase correlated pumps.}
\label{fig5}
\end{figure}
\begin{figure}[!ht]
\centering\includegraphics[width=7.5cm]{fig6.eps}
\caption{Power spectral density spectra of the beat note phase noise. The symbols are experimental measurements. The dots (in red) correspond to the pumping scheme with two fully in--phase correlated beams while the open squares (in light--blue) stand for the single pump--spot scheme reported in \cite{Liu:18}. The open gray circles represent the detection floor. The total phase noise model with the two pumps is plotted with a solid line (in dark purple). The dashed line (in green) is the contribution of the phase-amplitude coupling. The dash--dotted line (in magenta) is the contribution of the thermal effects. They are computed with $\alpha = 5.2$, $P_{\mathrm{p},x} = 0.48 \,\mathrm{W}$, $P_{\mathrm{p},y} = 0.45\,\mathrm{W}$, $R_\mathrm{T} = 40 \,\mathrm{K.W^{-1}}$, $\tau_\mathrm{T} = 30 \,\mu\mathrm{s}$, $\Gamma_\mathrm{T} = 1.39 \times 10^{-7} \,\mathrm{K^{-1}}$ and the same other parameters as in Fig.~\ref{fig3}.}
\label{fig6}
\end{figure}
Figure \ref{fig6} plots the power spectral density of the phase fluctuations of the VECSEL beat note. This figure evidences a drastic reduction of 10 to 20 dB of the phase noise throughout the whole frequency range from 10 kHz to 20 MHz (red dots) with respect to the former pumping scheme (open light-blue squares). The beat note phase noise falls below the detection floor at about 2 MHz offset from the carrier. Thanks to the model, Fig.~\ref{fig6} also shows that the phase--amplitude coupling contribution to the phase noise power spectral density (dashed green line) has vanished down to the level of the measurement floor. The remaining part of the phase noise at low frequencies is thus mainly induced by the thermal fluctuations (dash-dotted magenta line) and technical noises. It is worth noticing that, in the model for the thermal noise, the fact that the correlation amplitude $\eta$ of the pump noises is not strictly equal to 1 leads to a larger contribution to the beat note phase noise than the unbalanced pumping of the two modes ($P_{\mathrm{p},x} \neq P_{\mathrm{p},y}$). We could thus expect a stronger phase noise suppression to occur with a single--mode fibered pump ensuring $\eta =1$ and delivering enough power, like in \cite{Liu:18OL}, provided such a single--mode fiber--coupled high--power diode laser existed around 673 nm. Furthermore, we observe a discrepancy reaching almost 10 dB at 10 kHz between the model and the measurements. The experimental spectra exhibit a $f^{-3}$ slope for frequencies below 300 kHz whereas the theoretical model exhibits a steeper slope in $f^{-4}$. This means that the model for thermal noise fails to reproduce the flicker--like frequency noise which is observed experimentally. Therefore, this model deserves to be further refined. To do so, a microscopic approach would be necessary. Indeed, the theoretical plots in Fig.~\ref{fig6} have been obtained using an oversimplified model in which the frequency response function $\Gamma\left(f \right)$ of the thermal effect is approximated by a simple second--order low--pass filter with a cut--off frequency given by $1/\tau_\mathrm{T}$.
$\Gamma\left(f \right)$ could be derived, for example, using a semi--analytic method like in \cite{Reichling:1994}, which would take into account the detailed structure of the semiconductor chip.
\section{Conclusion}
\label{conclusion}
In conclusion, we have demonstrated a tremendous reduction of the beat note phase noise of a dual--frequency VECSEL operating at 852 nm with multi--mode pumping. This reduction comes as a result of the fully in--phase correlated pumping (with correlation amplitude $\eta = 0.98$ and phase $\Psi=0$) combined with very low gain competition ($C=0.05$). This has been made possible using amplitude division of a multi--mode fibered pump diode, which creates spatially separated pump spots fitting the modes and originating from the same pump source. This work is moreover consistent with the theoretical predictions. This novel multi--mode fibered pump scheme, allowing low noise dual--frequency operation, can be applied to all wavelengths for which no suitable commercial single-mode fibered pump laser diode exists. Furthermore, even for wavelengths for which single-mode fibered pump lasers exist, this pumping scheme can prove useful for exploiting the larger power of multi--mode pumps while keeping low phase noise operation. The dual--frequency VECSEL studied here delivers a beat note in the range of a few hundred MHz, through the use of an isotropic YAG etalon of thickness 100 $\mu$m. This setup could be adapted to produce higher frequency beat notes by optimizing the etalon, as was recently performed to get a beat note frequency tunable around 9.2 GHz \cite{Dumont:14}. This makes our setup useful for all possible applications of dual--frequency VECSELs. For example, our noise reduction strategy could be applied to CPT cesium clocks. In this case, the stringent requirements in terms of noise for the laser source were evaluated in \cite{Tricot:2018,Dumont:15}. For example, a RIN level as low as -150 dB/Hz up to 1 MHz is required to target a $5\times 10^{-13}$ relative frequency stability at 1 s integration time. Both the use of our novel pumping architecture and the implementation of an OPLL could improve the phase and intensity noises. To this aim, a low noise RF reference source, as reported in \cite{Francois:14}, will be required. The simultaneous stabilization of both the frequency difference on an RF reference and the absolute laser frequency on an atomic transition has also been investigated in \cite{Camargo:13} in such a context. The two frequency lockings did not perturb one another. Notice also that there should be solutions to reduce the volume of the laser and the influence of environmental perturbations on it. For example, industrial integration of single-frequency VECSELs is currently being explored \cite{Chomet:18}. Such techniques could represent promising potential solutions for dual--frequency VECSELs in view of developing more compact atomic clocks. Moreover, the implementation of an OPLL for our dual--frequency VECSEL with the improved multi--mode pumping architecture will permit further characterization of the noise properties of these lasers, especially at low frequencies. In particular, a new model for the thermal noise needs to be developed, taking into account the microscopic structure of the semiconductor laser and the particular dual--spot pumping geometry. It will allow us to gain a deeper understanding of the remaining phase noise and will benefit all applications of dual--frequency VECSELs.
Finally, let us mention that our dual--spot pumping strategy may find applications beyond cw dual--frequency VECSELs. For example, it could be used to reduce the noise of dual--polarization MIXSELs that are presently being developed to perform dual--comb spectroscopy \cite{Link:2018}.
\section*{Funding}
Agence Nationale de la Recherche (grant number ANR-15-CE24-0010-04); Direction G\'en\'erale de l'Armement (DGA).
\section*{Acknowledgments}
The work of HL, FB, FG, GG and GB is performed in the framework of the joint lab between Laboratoire Aim\'e Cotton and Thales R\&T. The authors thank Gr\'egoire Pillet, Syamsundar De, Christophe Siour, Ga\"elle Lucas-Leclin and Sylvie Janicot for technical help and valuable discussions.
\section{Introduction} As large scales become more energetic and pronounced in high-Reynolds-number wall-bounded turbulent flows, their role throughout the boundary layer, especially on the small scales in the near-wall region, has been the focus of many recent investigations. One particular phenomenon that has attracted the attention of many is the amplitude modulation of the small-scale fluctuations in the near-wall region by the large scales in the outer region. Quantifying this amplitude modulation (AM) is not straightforward due to the non-linear nature of the phenomenon (Mathis \emph{et al.}, 2009), and therefore various approaches have been used in the literature. The first step in investigating the scale interactions is to decompose the fluctuating velocity signal into its large- and small-scale components. A common approach to achieve this is to use a cut-off spectral filter, typically applied in Fourier space. This was done by Hutchins and Marusic (2007), Mathis \emph{et al.} (2009) and many others thereafter. The filter is chosen based on a cut-off wavelength (in space or time) that separates the inner and outer peaks in the pre-multiplied energy spectra of the streamwise velocity fluctuations, $u$. Depending on the available data set, filters can be defined as either temporal or spatial filters or a combination of both. Another approach to decompose the scales, which was used quite extensively by Leschziner and co-workers ({\it e.g.}\ Agostini and Leschziner, 2014), is the Hilbert-Huang empirical mode decomposition (EMD) which assumes that the data consists of intrinsic modes of oscillation called intrinsic mode functions (\emph{imf}s). To quantify the modulation, Mathis \emph{et al.} (2009) defined a correlation coefficient, $R$, between the large-scale velocity fluctuations, $u_{L}$, and the filtered envelope of the small-scale fluctuations, $E_{L}\left(u_S\right)$, \begin{equation} \label{eq:R} R = \frac{\overline{{u_L^+}~{E_L\left(u_S^+\right)}}}{{\sigma_{u_L^+}}~{\sigma_{E_L\left(u_S^+\right)}}}~~~, \end{equation} where the overbar indicates the time-average operator, $\sigma$ is the standard deviation of the quantity in the subscript and $\left({\cdot}\right)^+$ represents inner scaling which is defined through the characteristic inner scales: the friction velocity $U_\tau$ (defined as $U_{\tau}=\sqrt{\tau_{w}/\rho}$, where $\tau_{w}$ is the mean wall-shear stress and $\rho$ is the fluid density) and the viscous length scale $\nu/U_{\tau}$, where $\nu$ is the kinematic viscosity. The recent review paper by Dogan \emph{et al.}\ (2018) gives an overview of the different approaches to investigate the AM. They used two main aspects to categorize the studies in the literature on this topic: (i) the method for decomposing the scales, specifically Fourier filters and empirical mode decomposition (EMD), and (ii) how the modulation was quantified, namely single-point, two-point correlations and the scale-decomposed skewness term of the velocity fluctuations. They studied the various aspects on a single data set from a well-resolved large-eddy simulation (LES) by Eitel-Amor \emph{et al.} (2014). The present contribution will use the same data set to further explore the effects of secondary filtering in the $R$ definition by Mathis \emph{et al.} (2009), applied to the envelope of the small-scale fluctuations, on the quantification of the modulation. Additionally, the robustness of the definition will be investigated using random signals and time shifts between the scale components. 
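To make this definition concrete, the following is a minimal sketch (our own illustration, not taken from any of the cited studies) of how $R$ could be evaluated for a single-point time series, assuming a Fourier cut-off filter at a hypothetical wavelength of $\lambda_t^+=400$ viscous units for both filtering steps and a Hilbert-transform envelope of the small scales; all function and variable names are illustrative.

\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def lowpass_fourier(u, dt_plus, lambda_cut_plus=400.0):
    """Keep only Fourier modes whose period (in viscous units)
    exceeds the cut-off wavelength."""
    n = u.size
    freqs = np.fft.rfftfreq(n, d=dt_plus)
    u_hat = np.fft.rfft(u)
    keep = np.ones_like(freqs, dtype=bool)
    keep[1:] = (1.0 / freqs[1:]) > lambda_cut_plus
    u_hat[~keep] = 0.0
    return np.fft.irfft(u_hat, n=n)

def amplitude_modulation_R(u_plus, dt_plus, lambda_cut_plus=400.0):
    """Single-point correlation coefficient R between the large scales
    and the low-pass filtered envelope of the small scales."""
    u = u_plus - u_plus.mean()                          # fluctuations only
    u_L = lowpass_fourier(u, dt_plus, lambda_cut_plus)  # primary filter
    u_S = u - u_L
    envelope = np.abs(hilbert(u_S))                     # Hilbert envelope of u_S
    E_L = lowpass_fourier(envelope - envelope.mean(),
                          dt_plus, lambda_cut_plus)     # secondary filter
    return np.mean(u_L * E_L) / (u_L.std() * E_L.std())
\end{verbatim}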
\vspace{-3mm} \section{Data set and spectral analysis} \label{sec:dataset} The data set used in this study is the well-resolved LES of a spatially developing zero-pressure gradient turbulent boundary layer by Eitel-Amor \emph{et al.}\ (2014). For this data set, the domain starts at a low (laminar) $Re_\theta=180$, where $Re_\theta$ is the Reynolds number based on the free-stream velocity $U_\infty$ and the momentum thickness $\theta$. After tripping (Schlatter and \"Orl\"u, 2012), the flow is completely turbulent at around $Re_\theta~{\approx}~600$ and the maximum $Re_{\theta}$ shortly before the outflow is 8300. For the present contribution, the position $Re_\theta~{\approx}~8200$ is investigated since long time series of the flow are available for that $Re_\theta$. The reader is referred to Eitel-Amor \emph{et al.}\ (2014) for further details about the simulation, the numerical setup and its turbulence statistics. To investigate the distribution of energy across various scales in the boundary layer, contour maps of the energy spectra of the streamwise velocity fluctuations are extensively used, and since the current data set provides both spanwise and temporal information, 2D spectral maps are presented here. Figure \ref{fig:2Dspec} shows the inner-scaled pre-multiplied 2D spectral contour maps of the streamwise velocity fluctuations for $Re_\theta~{\approx}~8200$ at wall-normal locations of $y^+{\approx}15$ and $y^+{\approx}180$, which exhibit the two energy peaks in the boundary layer, {\it i.e.}\ the near-wall and the outer peak, respectively. The two panels at different wall-normal locations show the footprint of the large scales, {\it i.e.}\ the larger wavelengths in both directions located in the outer region (see Figure \ref{fig:2Dspec2}), on the region close to the wall (see Figure \ref{fig:2Dspec1}), confirming the superposition effect of the large scales on the near-wall region (Hutchins and Marusic, 2007). This spectral representation also allows assigning the energetic wavelengths of the streamwise fluctuating velocity signal to its large- and small-scale components, which will be further explored in the following section. \begin{figure*}[h!] \centering \subfigure[]{ \centering \includegraphics[width=0.32\linewidth]{ETMM_2Dspec1} \label{fig:2Dspec1}} \subfigure[]{ \centering \includegraphics[width=0.32\linewidth]{ETMM_2Dspec2} \label{fig:2Dspec2}} \vspace{-2mm} \caption{\label{fig:2Dspec} Contour maps of the inner-scaled pre-multiplied 2D energy spectra of the streamwise velocity fluctuations, $k_tk_z\phi_{uu}/U_{\tau}^{2}$, for $Re_\theta=8200$ at (a) $y^+{\approx}15$ and (b) $y^+{\approx}180$.\ The ordinates show spanwise wavelength, $\lambda_z$, in both inner (left) and outer (right) scaling. The abscissae show the wavelength in time, $\lambda_t$, in inner (bottom) and outer (top) scaling. Dashed white lines: the cut-off wavelengths for the spectral filter in both domains, i.e. $\lambda_t^+{\approx}400$ and $\lambda_z^+{\approx}400$. Black cross: the outer peak location, $\lambda_t{\approx}10\delta/U_\infty$ and $\lambda_z{\approx}\delta$.} \end{figure*} \subsection{Scale decomposition} \label{sec:scale} In the present contribution we will handle the scale decomposition using two methods: spectral cut-off filters, {\it i.e.}\ Fourier filters, and EMD. 
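Before discussing the two methods in turn, we give a minimal sketch (ours, with illustrative names) of the two-dimensional cut-off decomposition of a signal $u(t,z)$ at a fixed wall-normal height, assuming periodicity in both directions and the hypothetical cut-off wavelengths $\lambda_t^+\approx 400$ and $\lambda_z^+\approx 400$ discussed below; only the quadrant in which both wavelengths lie below the cut-offs is labelled small-scale, while the remaining three quadrants form the large scales.

\begin{verbatim}
import numpy as np

def decompose_2d(u, dt_plus, dz_plus, lam_t_cut=400.0, lam_z_cut=400.0):
    """Split u(t, z) into small scales (both wavelengths below the
    cut-offs) and large scales (the remaining three quadrants)."""
    nt, nz = u.shape
    u_hat = np.fft.fft2(u - u.mean())
    ft = np.fft.fftfreq(nt, d=dt_plus)
    fz = np.fft.fftfreq(nz, d=dz_plus)
    with np.errstate(divide="ignore"):
        lam_t = np.where(ft != 0.0, 1.0 / np.abs(ft), np.inf)
        lam_z = np.where(fz != 0.0, 1.0 / np.abs(fz), np.inf)
    small = (lam_t[:, None] < lam_t_cut) & (lam_z[None, :] < lam_z_cut)
    u_S = np.real(np.fft.ifft2(np.where(small, u_hat, 0.0)))
    u_L = np.real(np.fft.ifft2(np.where(small, 0.0, u_hat)))
    return u_L, u_S
\end{verbatim}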
Regarding the Fourier filters, the dashed lines representing the cut-off locations at $\lambda_t^+{\approx}400$ and $\lambda_z^+{\approx}400$ in Figure \ref{fig:2Dspec} reasonably demarcate the two peaks, {\it i.e.}\ the near-wall and outer peak. This is accepted as a sufficient criterion for setting the Fourier filters that decompose the scales in the flow using spectral maps (note that different cut-off filters were also tested to assess the robustness of the filter choice, and negligible differences were observed for the considered analyses, as previously suggested by Mathis \emph{et al.}\ (2009)). Implementing cut-off filters in both directions (denoted the 2DS filter here), we define the small scales as those with wavelengths smaller than the cut-off both in time and in span (with the large scales being defined as the three remaining quadrants), which gives them a robust representation. For EMD, as previously mentioned, the underlying idea is that any data can be represented through different \emph{imf}s. Here, the extraction algorithm of these modes is adapted from the implementation of FABEMD (fast and adaptive bi-dimensional EMD, see Bhuiyan \emph{et al.}, 2008) by Cormier \emph{et al.}\ (2016), which was applied to a direct numerical simulation (DNS) of a channel flow at $Re_\tau=1000$. The FABEMD code is run to obtain 10 modes. The number of modes assigned to define small scales, $n_{SS}$, is in principle a free parameter that has an important impact on the separation of the scales. In our case, we found $n_{SS}=5$ to show optimal behaviour for the distribution of the variance profiles across the whole boundary layer, see Dogan \textit{et al.}\ (2018) for further discussion. The large scales comprise the residual of the signal once the small scales are assigned to their modes. The different decomposition methods can be compared by analysing the energy spectra of the decomposed velocity signals. Figure \ref{fig:scalesspec} shows the inner-scaled pre-multiplied energy spectra of the streamwise velocity from small and large scales obtained through the 2DS Fourier filter and EMD. The figure clearly indicates that the time spectra exhibit a significant overlap between the scales, which makes it hard to define a meaningful cut-off to distinguish them; therefore it is advantageous to have both temporal and spatial information to define a 2D filter, at least for the present moderate $Re$ range. For higher $Re$, defining the cut-off through the temporal period (or streamwise wavelength if Taylor's hypothesis is applied) has been a common approach, by Mathis \emph{et al.}\ (2009) and others, to decompose the scales, especially for hot-wire measurements where only time information is available. On the other hand, the spanwise spectrum shows a clear distinction between small and large scales. This demarcation is very sharp for the spectral filter, while a limited overlap exists for the EMD filter. It is quite remarkable that the difference between the energy levels of the scales obtained from the two decomposition methods is not significant. This shows the efficacy of both methods in adequately decomposing the scales. \begin{figure*}[h!] 
\centering \subfigure[]{ \centering \includegraphics[width=0.32\linewidth]{ETMM_scales_timespectra_Ret8000} \label{fig:timespec}} \subfigure[]{ \centering \includegraphics[width=0.32\linewidth]{ETMM_scales_spanwisespectra_Ret8000} \label{fig:spanspec}} \vspace{-2mm} \caption{\label{fig:scalesspec} Contour plots of the inner-scaled pre-multiplied energy spectra of the streamwise velocity fluctuations for the decomposed scales at $Re_\theta=8200$. The energy levels for the small scales are represented by blue color and the large scales by black. The dashed contour lines represent the scales from the 2DS Fourier filter, and the solid contour lines are from EMD. The abscissae show (a) time wavelength $\lambda_t$ and (b) spanwise wavelength $\lambda_z$, in inner (bottom) and outer (top) units. The ordinates show the wall-normal location, $y$, in inner (left) and outer (right) units. The increment and the minimum contour level for the contours are (a) 0.2 and 0.2, (b) 0.3 and 0.3, respectively.} \end{figure*} \section{Results and Discussion} To assess the influence of large scales in the outer region (at a height $y_1$) on the small scales in the near-wall region (at a height $y_2$), in principle simultaneous measurements at these two positions in the boundary layer are necessary. However, such measurements are not easy to perform in experiments, which led Mathis \emph{et al.}\ (2009) to define the correlation coefficient $R$, Equation (\ref{eq:R}), using single-point hot-wire measurements, although it was verified as a sufficient surrogate against two-point measurements. Numerical simulations, on the other hand, naturally provide simultaneous multi-point data (in fact whole velocity fields), and consequently Bernardini and Pirozzoli (2011) presented the correlation between the scales as 2D covariance maps from their DNS data. We adopt their quantification of the modulation through the covariance, $C_{AM}$, {\it i.e.}\ the unnormalised form of the $R$ coefficient. In this definition, one aspect that has so far not received sufficient attention is the effect of the filters involved in the calculation of each term: specifically, how the scales are decomposed, how the augmented signal is calculated and how the envelope is filtered. \subsection{Different filters} Figure \ref{fig:cov2p} shows the contour maps of the covariance $C_{AM}$ for different filters, as detailed in Table \ref{tab:caption}. Before comparing different panels, it is helpful to highlight the notable features of these contour maps. These maps typically depict two peaks, namely a diagonal and an off-diagonal peak. The off-diagonal peak is suggested to give a more refined picture of AM (Bernardini and Pirozzoli, 2011; Eitel-Amor \emph{et al.}, 2014). Bernardini and Pirozzoli (2011) observed that the off-diagonal peak seemed to disappear for a synthetic signal whereas the diagonal peak, {\it i.e.}\ the single-point correlation ($y_1$=$y_2$), was still present. This is consistent with the findings by Schlatter and \"Orl\"u (2010), and thus Bernardini and Pirozzoli concluded that the diagonal peak might contain artifacts while the off-diagonal one conveys realistic information about the AM. For their data set, only spatial information was available; therefore every step of the calculation was performed in the spanwise direction only. Here, Figure \ref{fig:2pspan} follows their approach and similarly depicts two peaks. 
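For concreteness, the assembly of such a two-point covariance map from pre-decomposed signals can be sketched as follows (our own illustration, not the authors' code), assuming arrays \texttt{u\_L[y, t]} and \texttt{u\_S[y, t]} at a set of wall-normal heights and a user-supplied low-pass routine \texttt{lowpass} that performs the secondary filtering of the envelope along time.

\begin{verbatim}
import numpy as np
from scipy.signal import hilbert

def covariance_map(u_L, u_S, lowpass):
    """Two-point amplitude-modulation covariance
    C_AM(y1, y2) = < u_L(y1) * E_L(u_S)(y2) >, averaged over time.
    u_L, u_S : arrays of shape (n_y, n_t); lowpass : applied along time."""
    n_y = u_L.shape[0]
    env = np.abs(hilbert(u_S, axis=-1))          # envelope of the small scales
    E_L = np.array([lowpass(e - e.mean()) for e in env])   # secondary filter
    C = np.empty((n_y, n_y))
    for i in range(n_y):        # y1: height of the large-scale signal
        for j in range(n_y):    # y2: height of the small-scale envelope
            C[i, j] = np.mean(u_L[i] * E_L[j])
    return C
\end{verbatim}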
However, when the filters are changed at different steps of the calculation compared to their case, the shape of the covariance maps does change significantly. Figure \ref{fig:2pspantime} uses a time filter for the secondary filtering, and the distinct diagonal peak seems to disappear; this suggests a potential dependence of the diagonal peak on the secondary filtering of the envelope signal. However, this may not be an effect entirely on its own. The primary filtering used to decompose the scales also has an impact, since similar results to those in Figure \ref{fig:2pspantime} are obtained in Figure \ref{fig:2p2DS_Hilbertspan_time} when the 2DS filter is used for the primary filtering to decompose the scales. On the other hand, among the three panels, Figures \ref{fig:2p2DS_Hilbertspan_time}, \ref{fig:2p2DStime} and \ref{fig:2p2DS}, there are only negligible differences, hinting at the more dominant effect of the primary filtering of the scale decomposition. In this regard, Figure \ref{fig:2pEMDtime} gives an informative result, when compared to \ref{fig:2p2DStime}: The correlation of the large scales with the small scales seems to extend over a larger area throughout the boundary layer with the 2DS Fourier filter than with EMD. Also, it is noticeable that the off-diagonal peak for EMD is not as strong in amplitude as for the 2DS Fourier filters. The assignment of the number of modes to each scale in EMD could affect the dominance of the large scales over the small scales. It is also worth noting here the potential limitation of the method as practised in the literature, that is, fixing the number of modes for every wall-normal location. This might affect the correct representation of {\it e.g.}\ the increasing energy of the large scales in the outer region. \begin{table*}[htbp!] \vspace{-2mm} \small \centering \caption{\label{tab:caption} Details of the panels in Figure \ref{fig:cov2p}} \label{tab} \begin{tabular}{llll} \multicolumn{1}{c}{\textbf{Figure panel}} & \multicolumn{1}{c}{\textbf{Scale decomposition}} & \multicolumn{1}{c}{\textbf{Envelope calculation}} & \multicolumn{1}{c}{\textbf{Filtered envelope calculation}} \\ a & spanwise filter & Hilbert in spanwise & spanwise filter \\ b & spanwise filter & Hilbert in spanwise & time filter \\ c & 2DS filter & Hilbert in spanwise & time filter \\ d & 2DS filter & Hilbert in time & time filter \\ e & 2DS filter & Hilbert in time & 2DS filter \\ f & EMD, nSS = 5 & Hilbert in time & time filter \\ \end{tabular} \normalsize \end{table*} \begin{figure*}[htbp!] \centering \subfigure[]{ \centering \includegraphics[width=0.26\linewidth]{ETMM_cov2p_span} \label{fig:2pspan}} \subfigure[]{ \centering \includegraphics[width=0.26\linewidth]{ETMM_cov2p_span_time} \label{fig:2pspantime}} \subfigure[]{ \centering \includegraphics[width=0.26\linewidth]{ETMM_cov2p_2DS_Hilbertspan_time} \label{fig:2p2DS_Hilbertspan_time}} \subfigure[]{ \centering \includegraphics[width=0.26\linewidth]{ETMM_cov2p_2DS_time} \label{fig:2p2DStime}} \subfigure[]{ \centering \includegraphics[width=0.26\linewidth]{ETMM_cov2p_2DS} \label{fig:2p2DS}} \subfigure[]{ \centering \includegraphics[width=0.26\linewidth]{ETMM_cov2p_EMD_time} \label{fig:2pEMDtime}} \vspace{-2mm} \caption{\label{fig:cov2p} Contour maps of the covariance definition, $C_{AM}$, for $Re_\theta=8200$. White contour lines mark zero values. Some contour levels are shown for reference. Black crosses mark the locations $\left(y_1, y_2\right)$ for the covariance peaks in inner and outer region. 
White cross indicates the location for $\left(3.9Re_\tau^{0.5}, 10\right)$. Refer to Table \ref{tab:caption} for the details of each panel.} \end{figure*} \vspace{-2mm} \subsection{Robustness} A sound measure of AM must be robust against the particular filtering procedure of choice, and must not allow false-positive identification of AM. In other words, it should predict modulation only when it is actually present in the signal, and return a vanishing measure in case the signal is constructed in a way that no modulation is present. In the following, these two aspects are addressed for two different measures of AM: the covariance defined previously, $C_{AM}$, and the simplified measure coming from the decomposed skewness, proposed by Eitel-Amor {\it et al.} (2014): \begin{equation} C_{AM}^\ast=\overline{u_L^+(y_1^+) {u_S^+}^2 (y_2^+)}\, . \end{equation} Figure \ref{fig:CAM} shows $C_{AM}$ computed from the same turbulent boundary layer data at $Re_\theta=8200$ treated in different ways. In the baseline cases, the natural turbulent data has been fed directly to the primary and secondary filters required for decomposing into $u_S$ and $u_L$, and then for computing $E_L(u_S)$ (corresponding to cases a and f in Table \ref{tab}). In order to test the robustness against false positives, the inherent scale information is intentionally removed by randomly scrambling the signal along the directions across which the primary and secondary filters operate. The spanwise Fourier filter and EMD (denoted a and f in Table \ref{tab}, respectively) are considered for both primary and secondary filtering, thus scrambling occurs along the spanwise direction or along both the spanwise direction and time, respectively. The randomised signal preserves all statistical moments of the original turbulent signal but contains no scale information or spatial coherence along the specified direction: no new data values are generated, only their positions are randomly changed. Thus, the scrambled signal features a flat spectrum typical of white noise. The randomisation occurs homogeneously in the wall-normal direction, in order to maintain the wall-normal coherence and covariance $\overline{u(y_1) u(y_2)}$ of the original turbulent signal prior to scale decomposition. Doing otherwise would necessarily have implied zero values of any two-point statistic at two different wall-normal heights, as expected from two independent random signals. Let us first consider the effect of the primary and secondary filter pair on $C_{AM}$ maps for natural turbulent signals. As apparent from the figure, the filter choice affects $C_{AM}$ both quantitatively and qualitatively. Whereas the peak for small $y_2$ is visible for both filters, the log region features a significantly more negative AM for the Fourier filter. On the other hand, for a randomised signal, the covariance drops for both filters, leaving only a small spurious inner peak for the Fourier filter. \begin{figure*}[htbp!] \begin{center} \subfigure[]{\centering\includegraphics[scale=0.75]{plots_CAM_2}} \subfigure[]{\centering\includegraphics[scale=0.75]{plots_CAM_1}}\hspace{5pt}% \vspace{-2mm} \caption{Covariance $C_{AM}=\overline{u_L^+(y_1^+)E_L(u_S^+ (y_2^+))}$ computed for turbulent velocity signals at $Re_\theta=8200$, before and after having randomised the order of the data in the spanwise and (for case f) time direction. (a) Full map of the two-point covariance. Dashed, solid and dotted lines indicate, respectively, the contour levels $\left\{-0.05; 0.1; 0.4 \right\}$. 
(b) Single-point $y_1^+=y_2^+$ covariance. The background colour refers to $C_{AM}$ obtained with the spanwise filter (case a). The green line indicates the value of $C_{AM}$ obtained by shifting the $u_S^+$ signal in time until $C_{AM}$ is maximised. \label{fig:CAM}} \end{center} \end{figure*} The same analysis is presented in Figure\ \ref{fig:CAMstar} for $C_{AM}^{*}$. It is interesting to note that in this case, the behaviour of the two filters is much more similar, even quantitatively. This can potentially be explained by the fact that one considers directly one term of the decomposed skewness, and is thus less affected by the exact details of the correlation, as long as a proper separation of large and small scales can be accomplished. Similarly, the randomised signal, having the same statistical moments, shows nearly no AM throughout the $y_1, y_2$ plane. \begin{figure*}[htbp!] \begin{center} \subfigure[]{\centering\includegraphics[scale=0.75]{plots_CAMstar_2}} \subfigure[]{\centering\includegraphics[scale=0.75]{plots_CAMstar_1}}\hspace{5pt}% \vspace{-2mm} \caption{Covariance $C_{AM}^\ast=\overline{u_L^+(y_1^+) {u_S^+}^2 (y_2^+)}$ computed for turbulent velocity signals at $Re_\theta=8200$. Colors as in Figure \ref{fig:CAM}. Dashed, solid and dotted lines indicate respectively the contour levels $\left\{-0.2; 0.5; 1.5 \right\}$. \label{fig:CAMstar}} \end{center} \end{figure*} \subsection{Time shift and convection} As discussed in \emph{e.g.}\ Mathis {\it et al.}\ (2009), due to the inclination of the large-scale structures in a boundary layer, the strongest AM is presumably not in the vertical direction, but rather along the inclination of structures; an angle of $\alpha=$10--15$^\circ$ is commonly reported in the literature. Using our LES data, we tested at each wall-normal position different time shifts between the small-scale and large-scale signals, and extracted a shift which maximises the correlation (or covariance). The corresponding results are shown in Figures\ \ref{fig:CAM}--\ref{fig:dt} (green lines). As noted by Bernardini and Pirozzoli (2011), there is only negligible influence of this time shift on the actual AM values, see Figures\ \ref{fig:CAM}(b) and \ref{fig:CAMstar}(b). This is most likely due to the fact that the structures are very long, and thus a small inclination will ultimately not change the quantification significantly. The actual time shift is shown as a function of height for fixed $y_1^+=10.3$ in Figure\ \ref{fig:dt}. As expected, a larger height for the large-scale signal $y_2$ implies a larger time shift $\Delta t$. The plot indicates a nearly perfect linear fit, $\Delta t^+ = 0.27 \Delta y^+$. Assuming a convection velocity of around $0.8U_\infty$ for the large-scale structures, \emph{i.e.}\ $\Delta x^+ = 0.8{U_\infty^+} \Delta t^+$, one obtains $\Delta x^+ = 0.22{U_\infty^+} \Delta y^+ \approx 6 \Delta y^+$ (with ${U_\infty^+}\approx{27.5}$), corresponding to an inclination $\tan(\alpha)=\Delta y^+/ \Delta x^+ \approx 1/6$, \emph{i.e.}\ $\alpha\approx 9.4^\circ$, which is close to the expected angle. One can thus conclude that it is indeed these large structures that are responsible for the AM. \begin{figure}[htbp!] \begin{center} \includegraphics[scale=0.75]{plots_CAMstar_3} \vspace{-2mm} \caption{Time shift $\Delta t^+$ of the $u_S^+$ time series that maximises the covariance $C_{AM}^\ast=\overline{u_L^+(y_1^+,t) \left[ u_S^+ (y_2^+,t+\Delta t) \right]^2}$ at $y_1^+=10.3$ for the $Re_\theta=8200$ case. 
\label{fig:dt}} \end{center} \end{figure} \section{Conclusions} We use a well-resolved LES data set by Eitel-Amor {\it et al.}\ (2014) to investigate the robustness of the amplitude modulation phenomenon, which is relevant for quantifying scale interactions in wall-bounded turbulence. Both Fourier filters and EMD were tested to decompose the scales and both were found effective. For the quantification, the definition by Mathis \emph{et al.}\ (2009) was adapted to two-point correlation maps, which are regarded as better representing AM. This definition requires two subsequent filtering steps, and the effects of different filters at each step were investigated; both affected the resulting maps qualitatively and quantitatively. The primary filter in the correlation definition, \textit{i.e.}\ the one used to decompose the scales, had the more dominant effect. Also, a randomised signal was used to check the robustness of the AM definition against the particular filtering procedure. Finally, the inclination of the large-scale structures in the boundary layer was taken into account and negligible differences in AM quantification were observed. While comparing and contrasting with the existing literature on this topic, the current work provides the various analyses on a single data set, allowing a robustness study, and reveals important points for further amplitude modulation investigations. \par\vspace{6pt}\noindent\textbf{\large Acknowledgments }\par\vspace{2pt} Financial support provided by the Knut and Alice Wallenberg Foundation is gratefully acknowledged. DG is supported by the Priority Programme SPP 1881 Turbulent Superstructures of the DFG. \begin{References} \vspace{-1mm} \item Agostini L. and Leschziner M. (2014), On the influence of outer large-scale structures on near-wall turbulence in channel flow, {\it Phys. Fluids}, Vol. 26, 075107. \item Bernardini M. and Pirozzoli S. (2011), Inner/outer layer interactions in turbulent boundary layers: A refined measure for the large-scale amplitude modulation mechanism, {\it Phys. Fluids}, Vol. 23, 061701. \item Bhuiyan S. M. A., Adhami R. R. and Khan J. F. (2008), Fast and Adaptive Bidimensional Empirical Mode Decomposition Using Order-Statistics Filter Based Envelope Estimation, {\it EURASIP J. Adv. Signal Process.}, 728356. \item Cormier M., Gatti D. and Frohnapfel B. (2016), Interaction between inner and outer layer in drag-reduced turbulent flows, {\it Proc. Appl. Math. Mech.}, Vol. 16, 633-634. \item Dogan E., {\"O}rl{\"u} R., Gatti D., Vinuesa R. and Schlatter P. (2018), Quantification of amplitude modulation in wall-bounded turbulence, {\it Fluid Dyn. Res.} (in press). \item Eitel-Amor G., {\"O}rl{\"u} R. and Schlatter P. (2014), Simulation and validation of a spatially evolving turbulent boundary layer up to {R}e$_{\theta}$=8300, {\it Int. J. Heat Fluid Flow}, Vol. 47, pp. 57-69. \item Hutchins N. and Marusic I. (2007), Large-scale influences in near-wall turbulence, {\it Phil. Trans. R. Soc. A}, Vol. 365, pp. 647-664. \item Mathis R., Hutchins N. and Marusic I. (2009), Large-scale amplitude modulation of the small-scale structures in turbulent boundary layers, {\it J. Fluid Mech.}, Vol. 628, pp. 311-337. \item Schlatter P., {\"O}rl{\"u} R. (2012), Turbulent boundary layers at moderate Reynolds numbers: inflow length and tripping effects, {\it J. Fluid Mech.}, Vol. 710, pp. 5-34. \item Schlatter P., {\"O}rl{\"u} R. (2010), Quantifying the interaction between large and small scales in wall-bounded turbulent flows: a note of caution, {\it Phys. Fluids}, Vol. 22, 051704. 
\end{References} \end{document}
\section{Introduction}\label{sec:introduction} Image data, and data in general, is often filtered to remove \textit{noise}, random fluctuations that hide the underlying pattern. For images, one of the most common solutions is to apply Gaussian blur, which smooths the data to remove noise. Because of its use, there has been much interest in discovering efficient algorithms for Gaussian blur \cite{recursive-cosine-gblur,elboher-efficient-gblur,gaussian-blur}. Waltz and Miller \cite{gaussian-blur} in particular provide a clear example of the ways in which properties of binomial coefficients can be leveraged to create such an algorithm. An analysis of their algorithm in Section \ref{sec:background} leads to the following definitions. \begin{definition}\label{dfn:collapsing sum} Let $A$ be a real $m\times n$ matrix. If $m\geq 2$, then the $(m-1)\times n$ matrix $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}} (A)$ has entries \[ \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}} (A)_{i,j} = a_{i,j} + a_{i+1,j}.\] If $n\geq 2$, then the $m\times (n-1)$ matrix $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}(A)$ has entries \[ \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}(A)_{i,j} = a_{i,j} + a_{i,j+1}. \] Finally, the matrix $\sigma(A):=\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}\circ\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}(A)=\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}\circ\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}(A)$ is the \textit{collapsing sum} of $A$. \end{definition} The collapsing sum captures mathematically what Waltz and Miller describe computationally in \cite{gaussian-blur}. In this paper, we establish the connection between the collapsing sum and Gaussian blur and provide a theoretical study of the combinatorial properties of this operator. Section 3 provides the main combinatorial analysis of the operator. We recast the collapsing sum (and therefore Gaussian blur) in terms of matrix multiplication and define a new class of matrices called coefficient matrices that generalize Gaussian blur. The section culminates in Theorem \ref{thm:gaussian blur}, which explicitly describes the connection between the collapsing sum and Gaussian blur. In the remainder of the paper, we turn to the purely combinatorial properties of this operator; for example, we completely describe the fully-collapsed state of Toeplitz matrices (see Proposition \ref{thm:circulant-fully-collapsed}). Finally, we discuss generalizations of Gaussian blur in connection to Waltz and Miller's algorithm. \section{Background}\label{sec:background} As the collapsing sum will be motivated by Gaussian blur, we begin with a description of image filtering. Grayscale images are stored as matrices: Shades of gray are represented as numbers in a particular range (for example, integers from 0 to 255, or real numbers from 0 to 1), and each entry represents a pixel.\footnote{Whether 0 represents black or white depends on the application; in printing, 0 represents white, whereas in computing, 0 represents black. We won't need to pick between these conventions for our purposes.} We will consider only grayscale images, but this is not an artificial restriction; the same techniques are used to apply a filter to color images. The data for color images are stored as three separate values of red, green, and blue. Applying a filter to a color image consists of separating the data into three matrices by color type, applying the filter to each, and recombining. 
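Although the filtering applications are described below in terms of images, Definition \ref{dfn:collapsing sum} itself translates directly into code. The following minimal sketch (ours, written in NumPy with illustrative names; it is not part of Waltz and Miller's algorithm) implements $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}$, $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}$ and $\sigma$.

\begin{verbatim}
import numpy as np

def sigma_down(A):
    """(m-1) x n matrix with entries a[i, j] + a[i+1, j]."""
    return A[:-1, :] + A[1:, :]

def sigma_right(A):
    """m x (n-1) matrix with entries a[i, j] + a[i, j+1]."""
    return A[:, :-1] + A[:, 1:]

def sigma(A):
    """Collapsing sum; the two partial operators commute."""
    return sigma_down(sigma_right(A))

# Collapsing a 2 x 2 matrix yields the 1 x 1 matrix holding the sum of its entries.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert sigma(A).item() == 10.0
assert np.array_equal(sigma_down(sigma_right(A)), sigma_right(sigma_down(A)))
\end{verbatim}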
It may be the case that the image contains noise, so that the pixel values are randomly perturbed by environmental factors. Because noise is random, it seems possible to eliminate it by averaging pixel values in the neighborhood of a central pixel. This process is known as filtering. Filters are applied in a process called convolution. The matrix that represents the filter is called a \textit{kernel matrix}. Typically, kernel matrices are square with dimensions $(2r+1)\times (2r+1)$. The integer $r$ is the \textit{radius} of the filter and controls the size of the neighborhood. For simplicity in the convolution formula, kernel matrices are indexed so that the central entry has coordinates (0,0). Convolving the kernel matrix $K=(k_{i,j})$ with an $m\times n$ image matrix $A$ returns the $m\times n$ matrix $K\ast A$ with entries \[(K*A)_{p,q} := \sum_{i,j=-r}^r k_{i,j}\cdot a_{p-i,q-j}.\] The convolution can be equivalently expressed as \[ (K\ast A)_{p,q} = \sum_{\substack{i+k = p \\ j+\ell=q}} k_{i,j}\cdot a_{k,\ell}. \] We require that $\sum_{i,j}k_{i,j} = 1$ so that the overall intensity of the image does not change. As written, however, convolution is not well-defined when $a_{p,q}$ is near a boundary of $A$. In these cases, the convolution formula requires values of entries that don't exist, such as $a_{-1,0}$. To fix this problem, we use what are called \textit{edge-handling techniques}. In this paper, we only consider two common techniques: extending $A$ to have values beyond its edges or applying the filter to only those pixels for which convolution is defined (the latter is called \textit{cropping}).\footnote{See \url{https://en.wikipedia.org/wiki/Kernel\_(image\_processing)} for a list of edge-handling techniques.} To apply a kernel matrix of radius $r$ to all pixels in an $m\times n$ matrix $A$, we need to extend $A$ by $r$ rows and columns on each side, to a matrix of size $(m+2r) \times (n+2r)$, where the central $m\times n$ block is the matrix $A$. The filter is applied to each pixel in the central $m\times n$ block of the enlarged matrix. If extension is chosen as the edge-handling technique, let $A'$ denote the corresponding extension of $A$. If cropping is chosen as the edge-handling technique, then set $A' = A$. Applying the filter to $A$ with the chosen edge-handling technique is equivalent to applying the filter to $A'$ with cropping. The simplest blur filter is the \textit{box blur}. Let $J_{m\times n}$ represent the $m\times n$ matrix with each entry equal to $1$, and abbreviate $J_{n\times n}$ by $J_n$. \begin{definition} The kernel matrix $B_{2r+1}$ for the box blur of radius $r$ is $(2r+1)^{-2} J_{2r+1}$. \end{definition} As a visual example, consider the following image. \begin{center} \includegraphics[scale=0.5]{Original.png} \end{center} \noindent The results of applying box blurs with radii of 1, 2, and 3, respectively, to this image are shown below. \begin{center} \includegraphics[scale=0.5]{BoxBlur1.png}\qquad \includegraphics[scale=0.5]{BoxBlur2.png}\qquad \includegraphics[scale=0.5]{BoxBlur3.png} \end{center} One problem with box blurs, especially ones of large radius, is that pixels are weighted the same regardless of their distance from the central pixel. It makes sense to weight closer pixels more heavily than distant pixels: Pixels that are closer to each other will contain more information about each other than those that are farther away. Because of this, the \textit{Gaussian blur}, which takes this into account, is more commonly used. 
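As a small illustration of the edge-handling conventions just described (a sketch of ours; the replicate-padding choice, the names and the use of SciPy's \texttt{convolve2d} are our own and not prescribed by the references), the following applies a kernel with either extension or cropping, and builds the box-blur kernel.

\begin{verbatim}
import numpy as np
from scipy.signal import convolve2d

def box_blur_kernel(r):
    """Kernel matrix B_{2r+1} of the box blur of radius r."""
    n = 2 * r + 1
    return np.full((n, n), 1.0 / n**2)

def apply_kernel(A, K, edge="extend"):
    """Convolve image A with a (2r+1) x (2r+1) kernel K.
    edge='extend' replicates the border pixels of A before convolving,
    so the output has the same size as A; edge='crop' applies the
    filter only where the convolution is defined (smaller output)."""
    r = K.shape[0] // 2
    if edge == "extend":
        A = np.pad(A, r, mode="edge")      # the extension A' of A
    return convolve2d(A, K, mode="valid")
\end{verbatim}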
The values for the Gaussian blur kernel matrix are derived from the two-dimensional Gaussian function \[f(x,y)=\frac{1}{2\pi s^2}e^{-\frac{x^2+y^2}{2s^2}},\] where $s$ represents the standard deviation of the distribution. Sometimes values are directly sampled from this function, but they are often approximated using binomial coefficients. \begin{definition} The $(2r+1)\times (2r+1)$ kernel matrix $G_{2r+1}$ of the approximate Gaussian blur with radius $r$ has entries, for each $-r \leq i,j \leq r$, given by \begin{equation}\label{eq:gaussian-blur-def} (G_{2r+1})_{i,j} = \frac{1}{4^{2r}}\binom{2r}{i+r}\binom{2r}{j+r}. \end{equation} \end{definition} \begin{example} The kernel matrix for the $5\times 5$ approximate Gaussian blur is \[G_5 = \frac{1}{256}\begin{pmatrix} 1&4&6&4&1\\ 4&16&24&16&4\\ 6&24&36&24&6\\ 4&16&24&16&4\\ 1&4&6&4&1 \end{pmatrix}.\] \end{example} Notice that the pixels near the center are weighted highest, and that the values taper off toward the edges. Applying Gaussian blurs of radii 1, 2, and 3, respectively, to our example image from above results in the images below. The images appear smooth, while each individual element of the image remains clear. \begin{center} \includegraphics[scale=0.5]{GBlur1.png}\qquad \includegraphics[scale=0.5]{GBlur2.png}\qquad \includegraphics[scale=0.5]{GBlur3.png} \end{center} Each Gaussian blur kernel matrix can be decomposed into the product of a column vector and a row vector. Since it is much faster to compute smaller convolutions than large ones, Gaussian blur algorithms break the computation into two smaller convolutions: one with the row vector, and one with the column vector. In \cite{gaussian-blur}, Waltz and Miller develop an algorithm for computing Gaussian blur that is more efficient than simple decomposition. The key observation that the authors use is that Gaussian blurs of larger radius can be created through repeated convolution with Gaussian blurs of smaller radius. Their algorithm decomposes the Gaussian blur kernel matrix into a row vector and a column vector, and it decomposes each of these vectors into the repeated convolution of the matrices $\begin{pmatrix} 1 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & 1 \end{pmatrix}^T$, respectively. With a bit of clever programming, Waltz and Miller created an algorithm that runs much faster than one that only uses the decomposition property. Convolution by the matrices $\begin{pmatrix} 1 & 1 \end{pmatrix}$ and $\begin{pmatrix} 1 & 1 \end{pmatrix}^T$ corresponds to the operations $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}$ and $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}$, respectively. This observation leads to Definition \ref{dfn:collapsing sum}. \begin{example} Take the $2\times 2$ matrix \[A=\begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2}\end{pmatrix}.\] Applying the collapsing operations, we get \begin{gather*} \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}(A) = \begin{pmatrix} a_{1,1} + a_{2,1} & a_{1,2} + a_{2,2}\end{pmatrix} \qquad \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}(A) = \begin{pmatrix} a_{1,1} + a_{1,2} \\ a_{2,1} + a_{2,2}\end{pmatrix} \\ \sigma (A)= \begin{pmatrix} a_{1,1} + a_{1,2} + a_{2,1} + a_{2,2} \end{pmatrix}. \end{gather*} \end{example} \section{Equivalence of Gaussian blur and collapsing sum}\label{sec:eq blur and sum} In this section, we place the collapsing sum on a matrix-theoretic foundation and explicitly connect it with Gaussian blur via Theorem \ref{thm:gaussian blur}. 
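As a numerical preview of this connection (a sketch under our own conventions, with cropping as the edge-handling technique and SciPy's \texttt{convolve2d} standing in for the convolution), one can build $G_{2r+1}$ from the binomial coefficients in equation (\ref{eq:gaussian-blur-def}) and check that convolving with it agrees with $4^{-2r}$ times the $2r$-fold collapsing sum.

\begin{verbatim}
import numpy as np
from math import comb
from scipy.signal import convolve2d

def gaussian_kernel(r):
    """(2r+1) x (2r+1) approximate Gaussian blur kernel."""
    w = np.array([comb(2 * r, k) for k in range(2 * r + 1)], dtype=float)
    return np.outer(w, w) / 4.0 ** (2 * r)

def sigma(A):
    """Collapsing sum (as in the earlier sketch)."""
    B = A[:-1, :] + A[1:, :]
    return B[:, :-1] + B[:, 1:]

assert np.isclose(gaussian_kernel(2)[2, 2], 36 / 256)   # centre of G_5

rng = np.random.default_rng(0)
A, r = rng.random((7, 9)), 2
S = A.copy()
for _ in range(2 * r):                 # apply sigma 2r times
    S = sigma(S)
# With cropping as edge handling, G_{2r+1} * A = 4^{-2r} sigma^{2r}(A).
assert np.allclose(convolve2d(A, gaussian_kernel(r), mode="valid"),
                   S / 4.0 ** (2 * r))
\end{verbatim}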
The following properties of the collapsing sum follow directly from Definition \ref{dfn:collapsing sum}. \begin{proposition}\label{thm:sigma linearity} Let $A$ and $B$ be $m\times n$ matrices and $c$ be any real number. Then \begin{enumerate} \item $\sigma (A+B)=\sigma (A) + \sigma (B)$,\vspace{-0.3em} \item $\sigma(cA) = c\cdot\sigma (A)$,\vspace{-0.3em} \item $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}(A^T) = \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}(A)^T$, and\vspace{-0.3em} \item $\sigma (A^T) = \sigma (A)^T$\vspace{-0.3em} \end{enumerate} whenever the operations are defined. The first two statements also hold for $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}$ and $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}$. \end{proposition} Much of the investigation will examine repeated application of the collapsing sum. Let $A$ be an $m\times n$ matrix. Then $\sigma^0 (A) = A$, and for each positive integer $1\leq s<\min\{m,n\}$, we define $\sigma^s (A)=\sigma(\sigma^{s-1}(A))$. The operators $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^s$ and $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^s$ are defined similarly. Let $I_m$ be the $m\times m$ identity matrix and $\delta_{i,j}$ be the Kronecker delta function \[\delta_{i,j} = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \not= j. \end{cases}\] \begin{definition} We denote by $R_m$ the $(m-1) \times m$ matrix with entries $r_{i,j}=\delta_{i,j}+\delta_{i+1,j}$. For a positive integer $k < m$, we define $\rdown{m}{k}$ as the product $R_{m-k+1}R_{m-k+2}\cdots R_m$. Further, let $\rdown{m}{0}=I_{m}$. \end{definition} The matrices $R_m$ have $1$'s on the diagonal and superdiagonal and $0$'s elsewhere. The notation $\rdown{m}{k}$ is defined analogously to the falling power notation $n^{\underline{k}} = n(n-1)\cdots (n-k+1)$. \begin{example} We have \[ R_4 = \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix} \] and \[ \rdown{4}{2} = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix} \begin{pmatrix} 1 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 \\ 0 & 0 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 2 & 1 & 0 \\ 0 & 1 & 2 & 1 \end{pmatrix}. \] \end{example} \begin{proposition}\label{matrix version single sum} Let $A$ be an $m \times n$ matrix with $m,n\geq 2$. Then $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^s(A)=\rdown{m}{s} A$ and $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^s(A) = A (\rdown{n}{s})^T$. \end{proposition} \begin{proof} First note that $R_m A$ is an $(m-1) \times n$ matrix. Using the definition of $R_m$, the entry $(R_m A)_{i,j}$ is \[ \sum_{k=1}^{m} (\delta_{i,k} + \delta_{i+1,k})a_{k,j} =a_{i,j}+a_{i+1,j} =\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}(A)_{i,j}. \] A quick induction argument shows that $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^s(A)=\rdown{m}{s} A$. The calculation for the second assertion is similar. \end{proof} Consequently, $\sigma^s(A)=(\rdown{m}{s})A(\rdown{n}{s})^T$. \begin{proposition}\label{product of sum matrices} Let $m$ be a positive integer and $s\leq m$ be a nonnegative integer. Then $\rdown{m}{s}$ is an $(m-s)\times m$ matrix with entries $(\rdown{m}{s})_{i,j}=\binom{s}{j-i}$. \end{proposition} \begin{proof} We proceed by induction. For $s=0$, the theorem simplifies to the definition of $I_{m} = \rdown{m}{0}$. Now suppose that the theorem holds for some nonnegative integer $k$. Then $\rdown{m}{k+1}=R_{m-k}\rdown{m}{k}$. 
Since $\rdown{m}{k}$ is an $(m-k) \times m$ matrix, $\rdown{m}{k+1}$ is an $(m-k-1)\times m$ matrix. Further, by writing $(R_{m-k})_{i,r} = \delta_{i,r} + \delta_{i+1,r}$, we have \begin{align*} (\rdown{m}{k+1})_{i,j}&=\sum_{r=1}^{m-k}(R_{m-k})_{i,r} (\rdown{m}{k})_{r,j}\\ &=\binom{k}{j-i}+\binom{k}{j-(i+1)}\\ &=\binom{k+1}{j-i}, \end{align*} so the formula holds by induction. \end{proof} We now introduce an object that will facilitate the proof of Theorem \ref{thm:gaussian blur}. \begin{definition}\label{def:coefficient matrix} Let $a<m$ and $b < n$ be nonnegative integers. The \textit{coefficient matrix} $C_{m\times n}^{a, b} = (c_{i,j})$ is the unique $m\times n$ matrix such that $\sum_{i,j}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(A)_{i,j}=\sum_{i,j} c_{i,j}a_{i,j}$ for all $m\times n$ matrices $A$. We abbreviate $C^{a,b}_n := C^{a,b}_{n\times n}$ and $C^a_{m\times n} := C^{a,a}_{m\times n}$. \end{definition} One interpretation of the coefficient matrix uses indeterminates. Let $X = (x_{i,j})$ be an $m\times n$ matrix of indeterminates; that is, the entries of $X$ are distinct symbols, not numbers. The entry $c_{i,j}$ of the coefficient matrix $C^{a,b}_{m\times n}$ is the sum of the coefficients of $x_{i,j}$ across all entries of $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(X)$. Thus, one way to think of the coefficient matrix is that its $(i,j)$th entry represents the number of times that $x_{i,j}$ appears in $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(X)$. We now work to describe the entries of the coefficient matrices explicitly. \begin{definition} Let $A$ be an $m\times n$ matrix and $\mathbf{e}_n$ be the $n\times 1$ vector in which each entry is $1$. The \textit{column sum vector} of $A$ is $\alpha=A^T\mathbf{e}_m$, and the \textit{row sum vector} of $A$ is $\beta = A \mathbf{e}_n$. That is, $\alpha_j$ is the sum of the elements in the $j$th column of $A$, and $\beta_j$ is the sum of the elements of the $j$th row of $A$. \end{definition} \begin{lemma}\label{matrix product sum vectors} Let $X=(x_{i,j})$ be a matrix of indeterminates and $A$ and $B$ be matrices such that the product $AXB$ is defined. If $\alpha$ is the column sum vector of $A$ and $\beta$ is the row sum vector of $B$, then the coefficient of $x_{p,q}$ in the formal expression $\sum_{i,j}(AXB)_{i,j}$ is $\alpha_p\beta_q$. \end{lemma} \begin{proof} Choose any indeterminate $x_{p,q}$. We have \[\sum_{i,j}(AXB)_{i,j}=\sum_{i=1}^{m}\sum_{j=1}^n\left[\sum_{k=1}^m\sum_{r=1}^n a_{i,k}\cdot x_{k,r}\cdot b_{r,j}\right].\] We obtain the coefficient of $x_{p,q}$ by summing only those terms where $k=p$ and $r=q$. This coefficient is thus \[\sum_{i=1}^{m}\sum_{j=1}^n a_{i,p}b_{q,j}=\Bigg[\sum_{i=1}^m a_{i,p}\Bigg]\Bigg[\sum_{j=1}^n b_{q,j}\Bigg].\] The left term in this product is $\alpha_p$, and the right term is $\beta_q$. \end{proof} \begin{proposition}\label{coefficient matrix sum vectors} Let $\alpha$ be the column sum vector of $\rdown{m}{a}$ and $\beta$ be the column sum vector of $\rdown{n}{b}$. Then $C_{m\times n}^{a,b} = \alpha\beta^T$. \end{proposition} \begin{proof} Let $X$ be an $m\times n$ matrix of indeterminates. Apply Lemma \ref{matrix product sum vectors} to $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(X)=(\rdown{m}{a})X(\rdown{n}{b})^T$. 
The row sum vector of $(\rdown{n}{b})^T$ is simply the column sum vector of $\rdown{n}{b}$. The sum in Lemma \ref{matrix product sum vectors} is the sum that defines the coefficient matrix, so the $(i,j)$th entry of the coefficient matrix $C^{a,b}_{m\times n}$ is $(\alpha\beta^T)_{i,j} = \alpha_i\beta_j$. \end{proof} Proposition \ref{coefficient matrix sum vectors} implicitly gives the following formula for coefficient matrices. \begin{corollary}\label{intermediate collapse} Let $m$ and $n$ be positive integers and $a < m$ and $b < n$ be nonnegative integers. The coefficient matrix $C_{m\times n}^{a,b}$ has entries $c_{i,j} = \big[\!\sum_{\ell=1}^{m-a}\binom{a}{i-\ell}\big] \! \big[\!\sum_{\ell=1}^{n-b}\binom{b}{j-\ell}\big]$. \end{corollary} \begin{proof} Proposition \ref{product of sum matrices} shows that \[ \alpha_{i} = \sum_{\ell=1}^{m-a} (\rdown{m}{a})_{\ell,i} = \sum_{\ell=1}^{m-a}\binom{a}{i-\ell}. \] A similar calculation holds for $\beta_j$. \end{proof} \begin{corollary}\label{square collapsed} Let $A$ be an $m\times n$ matrix. The value of the single entry of the matrix $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{m-1}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{n-1} (A)$ is $\sum_{i=1}^m\sum_{j=1}^n \binom{m-1}{i-1}\binom{n-1}{j-1} a_{i,j}$. \end{corollary} \begin{proof} Let $C_{m\times n}^{m-1,n-1} = (c_{i,j})$. Since $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{m-1}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{n-1} (A)$ has a single entry, we have \[ \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{m-1}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{n-1}(A)_{1,1} = \sum_{i,j}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{m-1}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{n-1}(A)_{i,j} = \sum_{i,j} c_{i,j}a_{i,j} \] by the definition of the coefficient matrix. Corollary \ref{intermediate collapse} shows that $c_{i,j} = \binom{m-1}{i-1}\binom{n-1}{j-1}$. \end{proof} The entries of $\sigma^s(A)$ are determined by the blocks of $A$ of size $(s+1)\times(s+1)$. From this observation, Corollary \ref{square collapsed} can be used to find the value of any entry in $\sigma^s(A)$: simply apply the corollary to the submatrix $(a_{p+i,q+j})_{i,j=0}^s$ to determine the value of $\sigma^s(A)_{p,q}$. Recall that to apply a kernel matrix, we need to specify an edge-handling technique, wherein we extend the matrix $A$ to a matrix $A'$. Then applying the filter to $A$ with the edge-handling technique is equivalent (by definition) to applying the filter to $A'$ with cropping. \begin{theorem}\label{thm:gaussian blur} Suppose a matrix $A$ and an edge-handling technique yielding the extension $A'$ of $A$ are given. Then $G_{2r+1} \ast A = 4^{-2r}\sigma^{2r}(A')$ for all nonnegative integers $r$. \end{theorem} \begin{proof} Each entry of $G_{2r+1} \ast A$ corresponds to a block of $A'$ of size $(2r+1)\times (2r+1)$. From equation (\ref{eq:gaussian-blur-def}), the value of the entry $(G_{2r+1}\ast A)_{p,q}$ is \[ \frac{1}{4^{2r}} \sum_{i=-r}^r\,\sum_{j=-r}^r\binom{2r}{i+r}\binom{2r}{j+r}a'_{p+i,q+j}. \] On the other hand, let $B := (a'_{p+i,q+j})_{i,j=-r}^r$ be a submatrix of $A'$. Applying Corollary \ref{square collapsed} and then (\ref{eq:gaussian-blur-def}) gives \begin{align*} \sigma^{2r}(A')_{p,q} &= \sigma^{2r}(B)_{1,1}\\ &= \sum_{i=-r}^{r}\,\sum_{j=-r}^{r} \binom{2r}{i+r}\binom{2r}{j+r} b_{i,j}\\ &= \sum_{i=-r}^r\,\sum_{j=-r}^r\binom{2r}{i+r}\binom{2r}{j+r}a'_{p+i,q+j}\\ &= 4^{2r} (G_{2r+1}\ast A)_{p,q}. \qedhere \end{align*} \end{proof} Theorem \ref{thm:gaussian blur} may be equivalently stated as an equality of operators: \[ G_{2r+1} = 4^{-2r}\sigma^{2r}. 
\]\vspace{-2.2em} \section{Further properties of the collapsing sum}\label{sec:further properties} \subsection{Special classes of matrices} \begin{definition} A \textit{Toeplitz matrix} is an $m\times n$ matrix $A$ with the property that $a_{i,j} = a_{k,\ell}$ whenever $i-j = k-\ell$. We denote by $\toep(a_{-n+1},\dots,a_{m-1})$ the $m\times n$ Toeplitz matrix $A$ with entries $a_{i,j} = a_{i-j}$. \end{definition} \begin{example} The general $4 \times 5$ Toeplitz matrix $\toep(a_{-4},\dots,a_{3})$ is \[ \begin{pmatrix} a_0 & a_{-1} & a_{-2} & a_{-3} & a_{-4}\\ a_{1} & a_0 & a_{-1} & a_{-2} & a_{-3}\\ a_{2} & a_{1} & a_0 & a_{-1} & a_{-2}\\ a_{3} & a_{2} & a_{1} & a_0 & a_{-1} \end{pmatrix}. \] \end{example} Toeplitz matrices have applications in a wide variety of pure and applied areas, including representation theory, signal processing, differential and integral equations, and quantum mechanics. Moreover, every $n\times n$ matrix can be decomposed as the product of at most $2n+5$ Toeplitz matrices \cite{toeplitz-decomp}. In what follows, we use $(m+1)\times (n+1)$ Toeplitz matrices $\toep(a_{-n},\dots, a_m)$ to slightly simplify the statements of the results. \begin{proposition}\label{thm:circulant-fully-collapsed} Let $A = \toep(a_{-n},\dots,a_{m})$ be an $(m+1)\times (n+1)$ Toeplitz matrix. Then $\sum_{k=-n}^{m} \binom{m+n}{n+k}a_k$ is the single entry of $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{m}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{n}(A)$. \end{proposition} \begin{proof} We can decompose the Toeplitz matrix into ``stripes'': \[ A = \sum_{k=-n}^{m} \toep(0,\dots,0,a_k,0,\dots,0). \] Since by Proposition \ref{thm:sigma linearity} the collapsing sum distributes over addition, we need only consider the case when one $a_k$ is nonzero. Moreover, since $\sigma(cA) = c\sigma(A)$, we can restrict to $a_k=1$. Therefore, suppose $ A = \toep(0,\dots,0,1,0,\dots,0)$, so that \[ a_{i,j} = \begin{cases} 1 & \text{if } i-j = k\\ 0 &\text{otherwise.} \end{cases} \] We use the convention that $\binom{n}{r}=0$ if $r < 0$ or $r > n$. Applying Corollary \ref{square collapsed} and the binomial symmetry $\binom{n}{r} = \binom{n}{n-r}$ gives \begin{align*} \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{m}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{n}(A)_{1,1} &= \sum_{i=0}^{m}\, \sum_{j=0}^{n} \binom{m}{i}\binom{n}{j}a_{i+1,j+1}\\ &= \sum_{i=0}^{m} \binom{m}{i}\binom{n}{i-k}\\ &=\sum_{i=0}^{m} \binom{m}{i}\binom{n}{(n+k)-i}. \end{align*} Applying the well-known identity $ \sum_{i=0}^{m} \binom{m}{i}\binom{n}{r-i} = \binom{m+n}{r} $ finishes the proof. \end{proof} A direct application of Proposition \ref{thm:circulant-fully-collapsed} yields the following. \begin{corollary} The single entry in $\sigma^n(I_{n+1})$ is the central binomial coefficient $\binom{2n}{n}$. \end{corollary} A similar result holds for coefficient matrices. \begin{proposition} The single entry in $\sigma^n(C_{n+1}^{n})$ is $\binom{2n}{n}^2$. \end{proposition} \begin{proof} From Corollary \ref{square collapsed} and Corollary \ref{intermediate collapse}, which gives $c_{i+1,j+1} = \binom{n}{i}\binom{n}{j}$, we have \[ \sigma^n(C_{n+1}^{n})_{1,1} = \sum_{i=0}^n \, \sum_{j=0}^n \binom{n}{i}\binom{n}{j}c_{i+1,j+1} = \sum_{i=0}^n \, \sum_{j=0}^n \binom{n}{i}^2\binom{n}{j}^2. \] Using the identity $\sum_{k=0}^n \binom{n}{k}^2 = \binom{2n}{n}$ completes the proof. \end{proof} Recall that $J_{m\times n}$ denotes the $m\times n$ matrix with each entry equal to $1$. \begin{proposition}\label{thm:sum-entries-coeff-matrix} Let $a < m$ and $b < n$ be nonnegative integers. 
Then $\sum_{i,j}(C_{m\times n}^{a,b})_{i,j}=2^{a+b}(m-a)(n-b)$. \end{proposition} \begin{proof} It follows from Definition \ref{def:coefficient matrix} that the sum of the entries of $C^{a,b}_{m\times n}$ is equal to the sum of the entries of $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(J_{m\times n})$. Each $(a+1) \times (b+1)$ block of $J_{m\times n}$ is $J_{(a+1)\times (b+1)}$, so $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(J_{m\times n})_{i,j} = \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(J_{(a+1)\times(b+1)})_{1,1}$ for all $1 \leq i \leq m-a$ and $1 \leq j \leq n-b$. Since $J_{(a+1)\times (b+1)}$ is a Toeplitz matrix, Proposition \ref{thm:circulant-fully-collapsed} gives \[ \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(J_{(a+1)\times(b+1)})_{1,1} = \sum_{i=0}^{a+b} \binom{a+b}{i} = 2^{a+b}. \] Noting that $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^a\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^b(J_{m\times n})$ has $(m-a)(n-b)$ entries completes the proof. \end{proof} The proof of Corollary \ref{square collapsed} shows that the Gaussian blur kernel $G_{2r+1}$ is proportional to the coefficient matrix $C_{2r+1}^{2r}$. Similarly, the box blur kernel $B_{2r+1}$ is proportional to the coefficient matrix $C_{2r+1}^0$. The constant of proportionality in both cases is $(\sum_{i,j}(C^s_{2r+1})_{i,j})^{-1}$, where $s=2r$ or $s=0$, respectively. Thus, the coefficient matrices are a generalization that unites these two filters. That is, the expression $(2^{a+b}(m-a)(n-b))^{-1}C^{a,b}_{m\times n}$ provides an interpolation between box blur and Gaussian blur. \subsection{Further connections with Gaussian blur} Waltz and Miller \cite{gaussian-blur} extend their techniques to non-square blurs, that is, Gaussian-like blurs using non-square kernel matrices. These can be defined in parallel to the square Gaussian blurs. If $G_{a\times b}$ denotes the kernel for the $a\times b$ Gaussian blur, then \[(G_{a\times b})_{i,j} = 2^{-(a+b-2)}\binom{a-1}{i-1}\binom{b-1}{j-1}.\] Since either $a$ or $b$ might be even, there may be no central element, so here we index from $(1,1)$ in the top left corner of the matrix. The kernel matrix $G_{a\times b}$ is proportional to the coefficient matrix for a fully collapsed $a\times b$ matrix. This extends Theorem \ref{thm:gaussian blur}, since convolving $G_{a\times b}$ with a matrix $A$ is equivalent to applying $2^{-(a+b-2)}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}^{a-1}\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}^{b-1}$ to the extended matrix $A'$. The authors also discuss higher-dimensional blurs. We can easily transfer this idea to the language of the collapsing sum. Suppose we want to collapse (or, equivalently, blur) an $n$-dimensional array. We can define $\sigma_{\vec{\imath}}$, for $1\leq i\leq n$, to be the operator that ``collapses'' the array in the $i$th direction, akin to the effects of $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}$ and $\sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}$ in two dimensions. Define $\sigma_n := \sigma_{\vec{1}}\cdots\sigma_{\vec{n}}$. Then powers of $2^{-n}\sigma_n$ give the higher-dimensional blur that Waltz and Miller describe. 
As before, general rectangular blurs are obtained by simply composing the operators $\frac{1}{2}\sigma_{\vec{\imath}}$ for various values of $i$. \subsection{A generalized collapsing sum}\label{sec:generalized-sum} Waltz and Miller's algorithm for Gaussian blur may be extended to an operator that returns weighted sums of entries. \begin{definition}\label{generalized sigma def} Let $\gamma$ be a $b_1\times b_2$ matrix and $A$ be an $m\times n$ matrix with $m,n\geq \max\{b_1,b_2\}$. Then $\sigma_{\gamma}(A)$ is an $(m-b_1+1)\times (n-b_2+1)$ matrix with \[ \sigma_{\gamma}(A)_{p,q}= \sum_{i=0}^{b_1-1}\sum_{j=0}^{b_2-1} \gamma_{i+1,j+1}a_{p+i,q+j}. \] \end{definition} If $\gamma = \left(\begin{smallmatrix}1&1\\1&1\end{smallmatrix}\right)$, then we recover the original collapsing sum. Moreover, if $\gamma = (1\, 1)$, then $\sigma_\gamma = \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\rightarrow}}$, and $\sigma_{\gamma^T} = \sigma_{\hspace{-0.15em}\scaleobj{0.7}{\downarrow}}$. For any matrix $\gamma$ of rank $1$, there exist two column vectors $\rho$ and $\phi$ such that $\gamma=\rho\phi^T$. Waltz and Miller's algorithm may be easily adapted for any $2\times 2$ rank-1 matrix. Our previous results on the collapsing sum may also be extended to $\sigma_\gamma$ for any (not necessarily square) matrix $\gamma$ of rank $1$. \begin{definition} Let $\phi$ be a column vector with $k$ entries. The $(m-k+1)\times m$ matrix $R^\phi_m$ has entries $(R^\phi_m)_{p,q}=\sum_{i=0}^{k-1}\phi_{i+1}\delta_{p+i,q}$. \end{definition} Again, notice that if $\phi=(1\, 1)^T$, then $R^\phi_m=R_m$. We define the falling powers of these matrices analogously to those of $R_m$. A generalized form of Proposition \ref{matrix version single sum} holds in that, for any $m\times n$ matrix $A$, \[\sigma_{\rho}^a\sigma_{\phi^T}^b(A) = (R_m^\rho)^{\underline{a}} \, A \, [(R_n^{\phi})^{\underline{b}}]^T.\] In particular, if $\gamma=\rho\phi^T$, then \[ \sigma_\gamma^s(A) = (R_m^\rho)^{\underline{s}} \, A \, [(R_n^{\phi})^{\underline{s}}]^T. \] Similar extensions may be obtained for other results, including the entries of the corresponding coefficient matrices. \section{Conclusion} By introducing the collapsing sum operators, we have provided a new combinatorial way to view Gaussian blur. We established the close connection between these concepts and also established a collection of theoretical results on the collapsing sum. It would be interesting to study the collapsing sum as a matrix operator in its own right. For example, if $G$ is an abelian group and $G^{m\times n}$ represents the additive group of $m\times n$ matrices with entries in $G$, then the collapsing sum is a map from $G^{m\times n}$ to $G^{(m-1)\times (n-1)}$. What are the combinatorial and algebraic properties of this map? \section*{Acknowledgements} The author would like to thank Samuel Gutekunst for his invaluable guidance and support, as well as Mike Orrison, Elizabeth Sattler, and the anonymous reviewers, whose insightful comments and suggestions greatly increased the quality of this paper.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Conflict and Controversy} \label{sec:intro} Prior to the presentation of \textit{Statistical Problems in Agricultural Experimentation} to the Royal Statistical Society in $1935$ (\citep{Neyman1935}), Jerzy Neyman and Ronald Aylmer Fisher were on fairly good terms, both professionally and personally. Joan Fisher Box's biography of her father (\citep{Box}, pages 262--263, 451) and Neyman's oral autobiography (\citep{Reid}, pages 102, 114--117) describe two scientists who respected each other during this time. However, Neyman's study of randomized complete block (RCB) and Latin square (LS) designs sparked Fisher's legendary temper (\citep{Reid}, pages 121--124; \citep{Box}, pages 262--266; \citep{Lehmann}, pages 58--59), with the resulting heated debate recorded in the discussion. The relationship between Fisher and Neyman became acrimonious, with no reconciliation ever being reached (\citep{Reid}, pages 124--128, 143, 183--184, 225--226, 257; \citep{Lehmann}, Chapter~4). The source of this conflict was Neyman's suggestion that RCBs were a more valid experimental design than LSs, for both hypothesis testing and precision of estimates. He reached this conclusion using potential outcomes, which he introduced in $1923$ as part of his doctoral dissertation (\citep{Neyman1923}), the first place formalizing, explicitly, the notation of potential outcomes for completely randomized (CR) experiments. \citet {Neyman1935} extended this framework in a natural way from CR designs to RCBs and LSs, and calculated the expected mean residual sum of squares and expected mean treatment sum of squares for both. \citet{Neyman1935} stated that, under the null hypothesis of zero average treatment effects (\emph{Neyman's null hypothesis}), the expected mean residual sum of squares equals the expected mean treatment sum of squares for RCBs, whereas the expected mean residual sum of squares is less than or equal to the expected mean treatment sum of squares for LSs, with equality holding under special cases, such as \emph{Fisher's sharp null hypothesis} of no individual treatment effects. From this comparison of the expected mean residual and treatment sums of squares, Neyman concluded that the standard ANOVA F-test for RCBs was ``unbiased,'' whereas the corresponding test for LSs was ``biased,'' potentially detecting differentiation among the treatments, when none existed on average, more often than desired (i.e., having a higher Type I error than advertised under Neyman's null): \begin{quote} In the case of the Randomized Blocks the position is somewhat more favourable to the z test [i.e., the F-test], while in the case of the Latin Square this test seems to be biased, showing the tendency to discover differentiation when it does not exist. It is probable that the disturbances mentioned are not important from the point of view of practical applications. (\citep{Neyman1935}, page 114) \end{quote} Fisher's fury at Neyman's assertions is evident in his transcribed response: \begin{quote} Professor R. A. Fisher, in opening the discussion, said he had hoped that Dr. Neyman's paper would be on a subject with which the author was fully acquainted, and on which he could speak with authority \ldots. Since seeing the paper, he had come to the conclusion that Dr. Neyman had been somewhat unwise in his choice of topics. \ldots Apart from its theoretical defects, Dr. 
Neyman appears also to have discovered that it [the LS] was, contrary to general belief, a less precise method of experimentation than was supplied by Randomized Blocks, even in those cases in which it had hitherto been regarded as the more precise design. It appeared, too, that they had to thank him, not only for bringing these discoveries to their notice, but also for concealing them from public knowledge until such time as the method should be widely adopted in practice! \ldots I think it is clear to everyone present that Dr. Neyman has misunderstood the intention \ldots of the z test and of the Latin Square and other techniques designed to be used with that test. Dr. Neyman thinks that another test would be more important. I am not going to argue that point. It may be that the question which Dr. Neyman thinks should be answered is more important than the one I have proposed and attempted to answer. I suggest that before criticizing previous work it is always wise to give enough study to the subject to understand its purpose. Failing that it is surely quite unusual to claim to understand the purpose of previous work better than its author. (\citep{Fisher1935}, pages 154, 155, 173) \end{quote} Although Fisher reacted in an intemperate manner, his discussion nevertheless hints at errors in Neyman's calculations. In fact, Fisher was the sole discussant who identified an incorrect equation (27), in Neyman's appendix: \begin{quote} Then how had Dr. Neyman been led by his symbolism to deceive himself on so simple a question? \ldots Equations (13) and (27) of his appendix showed that the quantity which Dr. Neyman had chosen to call $\sigma^2$ did not contain the same components of error as those which affected the actual treatment means, or as those which contributed to the estimate of error. (\citep{Fisher1935}, page 156) \end{quote} Neyman in fact made a crucial algebraic mistake in his appendix, and his expressions for the expected mean residual sum of squares for both designs are generally incorrect. We present the correct expressions in Sections~\ref{sec:RCBD_theory} and \ref{sec:LSD_theory}, and provide an interpretation of these formulae in Section~\ref{sec:interactions_EMS}. As we shall see, if one subscribes to Neyman's suggestion that a comparison of expected mean sums of squares determines Type I errors when testing Neyman's null, then the F-test for RCBs is predictably wrong, whereas the F-test for LSs is unpredictably wrong. However, Neyman's suggestion is generally incorrect. We present in Section~\ref{sec:concrete} simple examples of LSs for which Neyman's null holds and the expected mean residual sum of squares equals the expected mean treatment sum of squares, yet the Type I error of the F-test is smaller than nominal. Such examples lead to the general result that, for any size RCB or LS, Type I errors are not dictated by a simple comparison of expected sums of squares without further conditions. A cacophony of commentary on this controversy exists in the literature, and we compiled the most relevant articles in Sections~\ref{sec:RCBD_history}, \ref{sec:LSD_history} and \ref {sec:cacophony}. Our results agree with similar calculations made by \citet{Wilk1955} and \citet{WilkKempthorne}. A major difference is that we work in a more general setting of Neyman's framework, whereas others [especially \citet {Wilk1955}] tend to make further assumptions on the potential outcomes, albeit assumptions possibly justified by applied considerations. 
Furthermore, although \citet{WilkKempthorne} extend Neyman's framework to consider random sampling of rows, columns and treatment levels from some larger population for LSs, their ultimate suggestion that the expected mean residual sum of squares is larger than the expected mean treatment sum of squares is not generally true. A different parametrization of similar quantities, used in Section~\ref{sec:interactions_EMS}, reveals how the inequality could go in either direction. This controversy had substantial consequences for the subsequent development of statistics for experimental design. As we discuss in Section~\ref{sec:consequences}, deep issues arising from this disagreement led to a shift from \emph{potential outcomes} to additive models for \emph{observed outcomes} in experiments, seriously limiting the scope of inferential tools and reasoning. Our ultimate goal in this historical study is not simply to correct Neyman's algebra. Instead, we wish to highlight the genesis of the current approach to experimental design resulting from this controversy, which is based on linear models and other simple regularity conditions on the potential outcomes that are imprecise without applied contexts. \section{Controversial Calculations} \label{sec:ANOVA} \subsection{Randomized Complete Block Designs: Theory} \label{sec:RCBD_theory} We first consider RCBs with $N$ blocks, indexed by $i$, and $T$ treatments, indexed by $t$, with each block having $T$ experimental units, indexed by $j = 1, \ldots, T$. Treatments are assigned randomly to units in a block, and are applied independently across blocks (\citep{Kempthorne}, Chapter~9). Although our results hold for general RCB designs, we adopt the same context as Neyman: blocks represent physical blocks of land on a certain field, and we compare agricultural treatments that may affect crop yield, for example, fertilizers. We explicitly define treatment indicators $\mathbf{W} = \{W_{ij}(t)\}$ as \begin{eqnarray*} &&W_{ij}(t)\\ && = \cases{1, & $\mbox{if unit } j \mbox{ in block } i \mbox{ is assigned treatment } t,$ \vspace*{2pt} \cr 0, & $\mbox{otherwise.}$ } \end{eqnarray*} \citet{Neyman1935} specified the potential outcomes as \[ x_{ij}(t) = X_{ij}(t) + \varepsilon_{ij}(t), \] where $X_{ij}(t) \in\mathbb{R}$ are unknown constants representing the ``mean yield'' of unit $j$ in block $i$ under treatment~$t$, and $\varepsilon_{ij}(t) \sim[0, \sigma_{\varepsilon }^2]$ are mutually independent and identically distributed (i.i.d.) ``technical errors,'' independent of the random variables \textbf{W}. This framework for the potential outcomes, excluding the $\varepsilon_{ij}(t)$, is similar to that presented in Neyman's 1923 dissertation (\citep{Neyman1923}). \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), pages 110, 114, 145] stated that technical errors represent inaccuracies in the experimental technique, for example, inaccuracies in measuring crop yield, and assumed that technical errors are Normal random variables. We find these technical errors rather obscure, but their inclusion does not alter our conclusions. To summarize, in Neyman's specification there are two sources of randomness: the unconfounded assignment mechanism (\citep{Rubin1990}), that is, the random assignment of treatments to plots specified by the distribution on $\mathbf{W}$, and the technical errors $\varepsilon_{ij}(t)$. 
Potential outcomes are decomposed by \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page 111] into
\begin{equation}
x_{ij}(t) = \bar{X}_{\cdot\cdot}(t) + B_i(t) + \eta_{ij}(t) + \varepsilon_{ij}(t),
\end{equation}
where
\begin{eqnarray*}
\bar{X}_{\cdot\cdot}(t)& = &\frac{1}{NT}\sum_{i=1}^N \sum_{j=1}^T X_{ij}(t),
\\
B_i(t)& =& \bar{X}_{i \cdot}(t) - \bar{X}_{\cdot\cdot}(t),
\\
\eta_{ij}(t) &=& X_{ij}(t) - \bar{X}_{i \cdot}(t),
\end{eqnarray*}
with
\[
\bar{X}_{i \cdot}(t) = \frac{1}{T} \sum_{j=1}^T X_{ij}(t).
\]
Neyman describes $B_i(t)$ as a correction for the specific fertility of the $i$th block, and $\eta_{ij}(t)$ as a correction for fertility variation within the block or, alternatively, the soil error. Hinkelmann and Kempthorne [(\citeyear{Kempthorne}), page 300] refer to terms such as $\eta_{ij}(t)$ as unit-treatment interactions, but they distinguish between \emph{strict} unit-treatment interactions and block-treatment interactions. For strict unit-treatment interaction, treatment effects depend on the experimental unit, in the sense that for two treatments $t, t'$ and experimental units $j, j'$ in a block $i$,
\[
X_{ij}(t) - X_{ij}\bigl(t'\bigr) \neq X_{ij'}(t) - X_{ij'}\bigl(t'\bigr).
\]
Block-treatment interactions are characterized by treatment effects depending\vadjust{\goodbreak} on the block, in the sense that for two treatments $t,t'$, experimental units $j, j', j'', j'''$, and blocks $i, i'$,
\[
X_{ij}(t) - X_{ij'}\bigl(t'\bigr) \neq X_{i'j''}(t) - X_{i'j'''}\bigl(t'\bigr).
\]
As pointed out by a referee, allowing fertility variation to depend on treatment $t$ was a unique contribution by Neyman and was never recognized in the discussion by Fisher, who focused on his sharp null hypothesis (described next), under which the corrections do not depend on $t$. The purpose of the local field experiment, as described by \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page 111], is to compare the $\bar{X}_{\cdot\cdot}(t)$ for $t = 1, \ldots, T$, each of which represents the average mean yield when one treatment $t$ is applied to all plots in the field, a conceptual experiment. As stated in the discussion, and later by \citeauthor{Welch} [(\citeyear{Welch}), page 23], Neyman does not test \emph{Fisher's sharp null hypothesis} of zero individual treatment effects, that is (when excluding technical errors),
\begin{eqnarray}
H_0^{\#}\dvtx X_{ij}(t) = X_{ij} \bigl(t'\bigr)
\nonumber\\
\eqntext{\forall i = 1, \ldots, N; j = 1, \ldots, T; t \neq t'.}
\end{eqnarray}
Instead, Neyman sought to test the more general null hypothesis
\[
H_0\dvtx\bar{X}_{\cdot\cdot}(1) = \cdots = \bar{X}_{\cdot\cdot}(T),
\]
referred to throughout as \emph{Neyman's null hypothesis}:
\begin{quote}
I am considering problems which are important from the point of view of agriculture. And from this viewpoint it is immaterial whether any two varieties react a little differently to the local differences in the soil. What is important is whether on a larger field they are able to give equal or different yields. (\citep{Neyman1935}, page 173)
\end{quote}
\noindent If the treatment effects are additive across all units, that is,
\begin{eqnarray}
X_{ij}(t) = U_{ij} + \tau(t)\nonumber\\
\eqntext{ \forall i = 1, \ldots, N; j = 1, \ldots, T; t = 1, \ldots, T,}
\end{eqnarray}
then testing Neyman's null is equivalent to testing Fisher's sharp null.
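These quantities are straightforward to compute. The following minimal Python sketch (our illustration; the potential-outcome array is a hypothetical placeholder, indexed as \texttt{X[i, j, t]} for block $i$, unit $j$, treatment $t$) forms the decomposition, checks the two null hypotheses, and draws one RCB assignment by independently permuting treatments within each block.
\begin{verbatim}
import numpy as np

# Hypothetical potential outcomes X[i, j, t]: N = 2 blocks, T = 2 units and treatments
X = np.array([[[8., 11.], [6., 5.]],
              [[12., 9.], [7., 10.]]])
N, T = X.shape[0], X.shape[2]

Xbar_t  = X.mean(axis=(0, 1))                 # \bar{X}_..(t)
Xbar_it = X.mean(axis=1)                      # \bar{X}_{i.}(t)
B   = Xbar_it - Xbar_t                        # B_i(t): block fertility corrections
eta = X - Xbar_it[:, None, :]                 # eta_{ij}(t): within-block soil corrections

neyman_null = np.allclose(Xbar_t, Xbar_t[0])  # \bar{X}_..(1) = ... = \bar{X}_..(T)?
fisher_null = np.allclose(X, X[..., [0]])     # X_{ij}(t) = X_{ij}(t') for every unit?

# One RCB randomization: treatments permuted independently within each block
rng = np.random.default_rng(0)
assignment = np.array([rng.permutation(T) for _ in range(N)])
# assignment[i, j] is the treatment given to unit j of block i
\end{verbatim}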
The observed yield of the plot assigned treatment $t$ in block $i$ is \[ y_i(t) = \sum_{j=1}^T W_{ij}(t) x_{ij}(t), \] and the observed average yield for all plots assigned treatment $t$ is \[ \bar{y}_{\cdot}(t) = \frac{1}{N} \sum_{i=1}^N y_i(t).\vadjust{\goodbreak} \] \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page 112] noted that an unbiased estimator for the difference between average treatment means, $\bar{X}_{\cdot\cdot}(t) - \bar{X}_{\cdot\cdot}(t')$, is $\bar {y}_{\cdot}(t) - \bar{y}_{\cdot}(t')$, and correctly calculated its sampling variance over its randomization distribution as \begin{eqnarray*} \operatorname{Var} \bigl\{ \bar{y}_{\cdot}(t) - \bar{y}_{\cdot} \bigl(t'\bigr) \bigr\} &=& \frac{2\sigma_{\varepsilon}^2}{N} + \frac{\sigma_{\eta}^2(t) + \sigma_{\eta}^2(t')}{N}\\ &&{} + \frac{2r(t,t')\sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2(t')}}{N(T-1)}, \end{eqnarray*} where \begin{eqnarray*} \sigma_{\eta}^2(t) &=& \frac{1}{NT}\sum _{i=1}^N \sum_{j=1}^T \eta _{ij}(t)^2, \\ r\bigl(t,t'\bigr)& =& \frac{\sum_{i=1}^N \sum_{j=1}^T \eta_{ij}(t) \eta _{ij}(t')}{NT \sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2(t')}}. \end{eqnarray*} \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page 145] assumed that $\sigma_{\eta}^2(t)$ and $r(t,t')$ are constant functions of $t, t'$ only to save space and simplify later expressions; this particular set of assumptions appears to have been made purely for mathematical simplicity, and is not driven by any applied considerations, unlike assumptions made by \citet{Wilk1955} and \citet{WilkKempthorne} (described in Sections~\ref{sec:RCBD_history} and \ref{sec:LSD_history}). Neyman then calculated expectations of mean residual sum of squares and mean treatment sum of squares, expressed in our notation as (resp.) \begin{eqnarray*} S_0^2 &= &\frac{1}{(N-1)(T-1)}\\ &&{}\times \sum _{i=1}^N \sum_{t=1}^T \bigl\{ y_i(t) - \bar{y}_{\cdot}(t) - \bar {y}_i(\cdot) + \bar{y}_{\cdot}(\cdot) \bigr\}^2 \end{eqnarray*} and \[ S_1^2 = \frac{N}{T-1}\sum _{t=1}^T \bigl\{ \bar{y}_{\cdot}(t) - \bar{y}_{\cdot }(\cdot) \bigr\}^2. \] As proven in our appendix (\citep{SabbaghiRubin}), the expectations are \begin{eqnarray*} \mathbb{E}\bigl(S_0^2\bigr) &=& \sigma_{\varepsilon}^2 + \frac{1}{T}\sum_{t=1}^T \sigma_{\eta}^2(t) \\ &&{}+ \frac{1}{T(T-1)^2}\sum _{t \neq t'}r\bigl(t,t'\bigr)\sqrt{ \sigma_{\eta}^2(t) \sigma_{\eta}^2 \bigl(t'\bigr)} \\ &&{} + \frac{1}{(N-1)(T-1)} \sum_{i=1}^N \sum _{t=1}^T \bigl\{ B_i(t) - \bar {B}_i(\cdot) \bigr\}^2 \end{eqnarray*} and \begin{eqnarray*} \mathbb{E}\bigl(S_1^2\bigr) &=& \sigma_{\varepsilon}^2 + \frac{1}{T}\sum_{t=1}^T \sigma_{\eta}^2(t)\\ &&{} + \frac{1}{T(T-1)^2}\sum _{t \neq t'}r\bigl(t,t'\bigr) \sqrt{ \sigma_{\eta }^2(t)\sigma_{\eta}^2 \bigl(t'\bigr)} \\ &&{} + \frac{N}{T-1}\sum_{t=1}^T \bigl\{ \bar{X}_{\cdot\cdot}(t) - \bar{X}_{\cdot \cdot}(\cdot) \bigr\}^2. \end{eqnarray*} \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), pages 147--150] correctly calculated the expected mean treatment sum of squares, but made a mistake when calculating the expected mean residual sum of squares. His incorrect expression is equation (27) on page $148$. \citeauthor{Sukhatme} [(\citeyear{Sukhatme}), page 166] his Ph.D. 
student at the University of London, incorrectly calculated the expectation for the general case when $\sigma_{\eta}^2(t)$ and $r(t,t')$ are not constant in $t,t'$, and the corresponding incorrect expression is his equation (3):
\begin{eqnarray*}
&&\sigma_{\varepsilon}^2 + \frac{1}{T}\sum_{t=1}^T \sigma_{\eta}^2(t)
\\
&&\quad{}+\frac{1}{T(T-1)^2}\sum_{t \neq t'}r\bigl(t,t'\bigr)\sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2\bigl(t'\bigr)}.
\end{eqnarray*}
To see why the last term in $\mathbb{E}(S_0^2)$ is missing in these equations, note that the expression within the brackets of $S_0^2$ can be written as the sum of the three terms
\begin{eqnarray*}
&&B_i(t) - \bar{B}_i(\cdot),
\\
&&\sum_{j=1}^T W_{ij}(t) \eta_{ij}(t) - \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^T W_{ij}(t) \eta_{ij}(t)
\\
&&\quad{}- \frac{1}{T}\sum_{t=1}^T \sum_{j=1}^T W_{ij}(t) \eta_{ij}(t)
\\
&&\quad{} + \frac{1}{NT} \sum_{i=1}^N \sum_{t=1}^T \sum_{j=1}^T W_{ij}(t)\eta_{ij}(t)
\end{eqnarray*}
and
\begin{eqnarray*}
&&\sum_{j=1}^T W_{ij}(t) \varepsilon_{ij}(t) - \frac{1}{N} \sum_{i=1}^N \sum_{j=1}^T W_{ij}(t) \varepsilon_{ij}(t)\\
&&\quad{} - \frac{1}{T}\sum_{t=1}^T \sum_{j=1}^T W_{ij}(t) \varepsilon_{ij}(t)
\\
&&\quad{} + \frac{1}{NT} \sum_{i=1}^N \sum_{t=1}^T \sum_{j=1}^T W_{ij}(t) \varepsilon_{ij}(t).
\end{eqnarray*}
Neyman's equation $(17)$ is missing the first term $B_i(t) - \bar{B}_i(\cdot)$, which is not necessarily equal to zero, and was never explicitly declared to be zero by Neyman. Consequently, under Neyman's null, the expected mean residual sum of squares is greater than or equal to the expected mean treatment sum of squares, with equality holding if and only if for each block $i$, $B_i(t)$ is constant across treatments $t$. Alternatively, equality holds under Fisher's sharp null. If one accepts Neyman's logic regarding ``unbiased tests'' (discussed in Section~\ref{sec:cacophony}), then the correct expressions for the expectations of mean squares suggest that the standard ANOVA F-test for RCBs has a Type I error bounded above by its nominal level.
A simple example makes this concrete. Suppose $N = T = 2$ and $\sigma_{\varepsilon}^2 = 0$, with the potential outcomes in Table~\ref{tab1}. Note that $\bar{X}_{\cdot\cdot}(\mathrm{1}) = \bar{X}_{\cdot\cdot}(\mathrm{2})$, so Neyman's null is satisfied. We calculate $\mathbb{E}(S_0^2) = 215.875, \mathbb{E}(S_1^2) = 213.625$, and
\[
\mathbb{E}\bigl(S_0^2\bigr) - \mathbb{E}\bigl(S_1^2\bigr) = 2.25 = \sum_{i=1}^2\sum_{t=1}^2 \bigl\{ B_i(t) - \bar{B}_i(\cdot) \bigr\}^2.
\]
\begin{table}
\caption{Table of potential outcomes for a RCB with $\mathbb{E}(S_0^2) > \mathbb{E}(S_1^2)$}\label{tab1}
\begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcc@{}}
\hline
& \textbf{Treatment} $\bolds{1}$ & \multicolumn{1}{c@{}}{\textbf{Treatment} $\bolds{2}$} \\
\hline
Block $1$, Plot $1$ & $10$ & $15$ \\
Block $1$, Plot $2$ & $10$ & $2$ \\
Block $2$, Plot $1$ & $20$ & $3$ \\
Block $2$, Plot $2$ & $30$ & $50$ \\
\hline
\end{tabular*}
\end{table}
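As a check on these calculations, the following minimal Python sketch (our illustration, not part of Neyman's development) enumerates the four equally likely treatment assignments of this $2\times2$ RCB and averages $S_0^2$ and $S_1^2$ over the randomization distribution.
\begin{verbatim}
import itertools
import numpy as np

# Potential outcomes of Table 1: X[i, j, t] for block i, plot j, treatment t
# (no technical errors)
X = np.array([[[10., 15.], [10., 2.]],
              [[20., 3.], [30., 50.]]])
N, T = 2, 2

S0_vals, S1_vals = [], []
# one assignment = a permutation of treatments within each block, independent across blocks
for perms in itertools.product(itertools.permutations(range(T)), repeat=N):
    # y[i, t] = observed yield of the plot in block i that received treatment t
    y = np.array([[X[i, perms[i].index(t), t] for t in range(T)] for i in range(N)])
    ybar_t, ybar_i, ybar = y.mean(axis=0), y.mean(axis=1), y.mean()
    resid = y - ybar_t[None, :] - ybar_i[:, None] + ybar
    S0_vals.append((resid ** 2).sum() / ((N - 1) * (T - 1)))
    S1_vals.append(N / (T - 1) * ((ybar_t - ybar) ** 2).sum())

print(np.mean(S0_vals), np.mean(S1_vals))  # 215.875 and 213.625, differing by 2.25
\end{verbatim}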
\subsection{Randomized Complete Block Designs: After the Controversy}
\label{sec:RCBD_history}
Neyman's potential outcomes framework is similar to the ``conceptual yield'' framework developed by \citeauthor{Kempthorne1952} (\citeyear{Kempthorne1952,Kempthorne1955}). Certain features of these two are only cosmetically different: for example, \citeauthor{Kempthorne1952} [(\citeyear{Kempthorne1952}), page 137] and later\vadjust{\goodbreak} \citeauthor{Kempthorne} [(\citeyear{Kempthorne}), page 280] represent treatment indicators by $\delta_{ij}^k$ (with $k$ denoting treatment level) and potential outcomes as $y_{ijk}$. As emphasized by a referee, using treatment indicators as random variables provides a mathematical foundation for the randomization theory of \citet{Fisher}, connecting potential outcomes with observed responses. An important difference between Neyman and Kempthorne concerns the notion of technical errors. \citeauthor{Kempthorne} [(\citeyear{Kempthorne}), page 161] make a distinction between experimental and observational errors, and include separate terms for each, allowing them to depend on treatment. Neyman effectively only considers their sum when defining technical errors, which may be a source of confusion. Of course, Neyman's results were for local field experiments, in which case he might not have considered it necessary to introduce observational errors arising from random sampling of experimental units from some larger population. \citet{Kempthorne1952} made an interesting comment relating to Fisher's sharp null, Neyman's null and Neyman's notation for technical errors:
\begin{quote}
If the experimenter is interested in the more fundamental research work, Fisher's null hypothesis is more satisfactory, for one should be interested in discovering the fact that treatments have different effects on different plots and in trying to explain why such differences exist. It is only in technological experiments designed to answer specific questions about a particular batch of materials which is later to be used for production of some sort that Neyman's null hypothesis appears satisfactory \ldots Neyman's hypothesis appears artificial in this respect, that a series of repetitions is envisaged, the experimental conditions remaining the same but the technical errors being different. (\citep{Kempthorne1952}, page 133)
\end{quote}
Furthermore, \citeauthor{Kempthorne1952} [(\citeyear{Kempthorne1952}), pages 145--151] correctly noted (in agreement with our results in Section~\ref{sec:interactions_EMS}) that block-treatment interactions must be zero in order for $\mathbb{E}(S_0^2) = \mathbb{E}(S_1^2)$ under Neyman's null, also known as unbiasedness of a design in the \citet{Yates} sense. As Kempthorne stated in a later article:
\begin{quote}
For the case of randomized blocks it is found that block treatment interactions must be zero in order that the design be unbiased in Yates's sense.\vadjust{\goodbreak} \ldots It does not appear to be at all desirable to section the experimental material into ordinary randomized blocks, of \ldots highly different fertilities (or basal yields) because this procedure is likely to lead to block treatment interactions. (\citep{Kempthorne1955}, page 964)
\end{quote}
Additivity of treatment effects was not invoked by Neyman, and nonadditivity for RCBs was investigated later (\citep{Tukey}; \citep{Kempthorne1955}; \citep{Wilk1955}; \citep{Mandel}). Perhaps the most substantial work, in the same direction as Neyman, was done by \citet{Wilk1955}, who extended the results of \citeauthor{Kempthorne1952} [(\citeyear{Kempthorne1952}), pages~145--151] for RCBs to the case of generalized randomized blocks.
Wilk studied randomization moments of mean sums of squares, estimation of various finite-population estimands and Normal theory approximations for testing Fisher's sharp null and Neyman's null. He also distinguished between experimental error, that is, the failure of different experimental units treated alike to respond identically, and technical error, or limitations on experimental technique that prevent the exact reproduction of an applied treatment. To us, this use of notation confuses mathematical derivations and practical interpretations of symbols. More importantly, although Wilk made assumptions on the potential outcomes (consequently not working in our more general setting), he attempted to justify them as physically relevant, as opposed to Neyman, who only made assumptions to facilitate calculations. For example, when translating Wilk's notation into Neyman's, we see that \citeauthor{Wilk1955} [(\citeyear{Wilk1955}), page 72] explicitly considered the physical situation that, if the blocking of experimental units is successful, then the $\eta_{ij}(t) - \bar{\eta}_{ij}(\cdot)$ will be negligible for all $i, j, t$, whereas block-treatment interactions $B_i(t) - \bar{B}_i(\cdot)$ would be important, in the sense of varying with $t$. When units in a block are as homogeneous as possible with respect to background covariates, the assumption of no strict unit-treatment interactions becomes more plausible, similar to the plausibility of zero partial correlation among potential outcomes given all measured covariates. Accordingly, block-treatment interactions become more important. A referee made a similar comment, remarking that for agronomic experiments, it is reasonable to assume that the $\eta_{ij}(t)$ are negligible, whereas in situations such as medical experiments involving human subjects, this may no longer be true. Wilk's explicit physical consideration is used to justify his assumption (stated without further explanation by \citeauthor{Kempthorne} [(\citeyear{Kempthorne}), page 301] in their description of the general model for RCBs) that treatments react additively within a block but\vadjust{\goodbreak} can react nonadditively from block-to-block, that is,
\begin{eqnarray*}
\bigl\{ X_{ij}(t) - \bar{X}_{ij}(\cdot) \bigr\} - \bigl\{ \bar{X}_{i \cdot}(t) - \bar{X}_{i \cdot}(\cdot) \bigr\} &=& \eta_{ij}(t) - \bar{\eta}_{ij}(\cdot)
\\
&=& 0
\end{eqnarray*}
for all $i, j, t$, even though
\[
B_i(t) - \bar{B}_i(\cdot) \neq0
\]
for at least one pair $(i, t)$. \citeauthor{Wilk1955} [(\citeyear{Wilk1955}), page 73] then stated that, if
\[
\eta_{ij}(t) - \bar{\eta}_{ij}(\cdot) \neq0
\]
for at least one triple $(i,j,t)$, then the expected mean treatment sum of squares is not equal to the expected mean residual sum of squares under Neyman's null. \citeauthor{Kempthorne} [(\citeyear{Kempthorne}), page 301], when summarizing Wilk's work, noted that the expected mean residual sum of squares for RCB designs contains the interaction between blocking and treatment factors, similar to our result.
\subsection{Latin Square Designs: Theory}
\label{sec:LSD_theory}
It was in his treatment of LSs that Neyman's error substantially changed the conclusions. We consider $T \times T$ LSs with rows and columns denoting levels of two blocking factors, for example, north--south and east--west.
Our treatment indicators are \[ W_{ij}(t) = \cases{1, & $\mbox{if the unit in row } i, \mbox{column } j,$\vspace*{2pt}\cr &$\mbox{is assigned treatment } t$, \vspace*{2pt} \cr 0, & $\mbox{otherwise.}$} \] Neyman specified the potential outcomes as \[ x_{ij}(t) = X_{ij}(t) + \varepsilon_{ij}(t), \] with $X_{ij}(t) \in\mathbb{R}$ unknown constants representing the ``mean yield'' of the unit in cell $(i,j)$ under treatment $t$, and $\varepsilon_{ij}(t) \sim[0, \sigma_{\varepsilon}^2]$ technical errors that are i.i.d. and independent of \textbf{W}. Potential outcomes were then decomposed into \begin{eqnarray} x_{ij}(t) &=& \bar{X}_{\cdot\cdot}(t) + R_i(t) + C_j(t) \nonumber \\[-8pt] \\[-8pt] \nonumber &&{}+ \eta_{ij}(t) + \varepsilon_{ij}(t), \end{eqnarray} where \begin{eqnarray*} R_i(t) &= &\bar{X}_{i \cdot}(t) - \bar{X}_{\cdot\cdot}(t), \\ C_j(t) &=& \bar{X}_{\cdot j}(t) - \bar{X}_{\cdot\cdot}(t), \\ \eta_{ij}(t) &=& X_{ij}(t) - \bar{X}_{i \cdot}(t) - \bar{X}_{\cdot j}(t) + \bar{X}_{\cdot\cdot}(t). \end{eqnarray*} Similar to RCBs, Neyman described $R_i(t)$ and $C_j(t)$ as corrections for specific soil fertility of the $i$th row and $j$th column, respectively, and $\eta_{ij}(t)$ as the soil error for plot $(i,j)$ under treatment $t$.\vadjust{\goodbreak} We define $\bar{x}_{\cdot\cdot}^o(t)$ as the observed average yield for plots assigned treatment $t$, \[ \bar{x}_{\cdot\cdot}^{o}(t) = \frac{1}{T} \sum _{i=1}^T \sum_{j=1}^T W_{ij}(t) x_{ij}(t). \] \citet{Neyman1935} correctly noted that $\mathbb{E} \{ \bar{x}_{\cdot\cdot}^{o}(t) - \bar{x}_{\cdot\cdot }^{o}(t') \} = \bar{X}_{\cdot\cdot}(t) - \bar{X}_{\cdot\cdot}(t')$ and that \begin{eqnarray*} &&\operatorname{Var}\bigl\{ \bar{x}_{\cdot\cdot}^{o}(t) - \bar{x}_{\cdot\cdot }^{o}\bigl(t'\bigr) \bigr\}\\ &&\quad= \frac{2\sigma_{\varepsilon}^2}{T} + \frac{\sigma_{\eta}^2(t) + \sigma_{\eta}^2(t')}{T-1}\\ &&\qquad{} + \frac{2r(t,t')\sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2(t')}}{(T-1)^2}. \end{eqnarray*} Neyman then calculated the expected mean sums of squares. The mean residual and treatment sums of squares are defined as (resp.) 
\begin{eqnarray*} S_0^2 &=& \frac{1}{(T-1)(T-2)}\\ &&{}\times \sum _{i=1}^T \sum_{j=1}^T \Biggl\{ y_{ij} - \bar{y}_{i \cdot} - \bar {y}_{\cdot j} \\ &&\hspace*{50pt}{}- \sum_{t=1}^T W_{ij}(t) \bar{x}_{\cdot\cdot}^o(t) + 2 \bar{y}_{\cdot \cdot} \Biggr \}^2 \end{eqnarray*} and \[ S_1^2 = \frac{T}{T-1}\sum _{t=1}^T \bigl\{ \bar{x}_{\cdot\cdot}^o(t) - \bar {y}_{\cdot\cdot} \bigr\}^2, \] with $y_{ij} = \sum_{t=1}^T W_{ij}(t) x_{ij}(t)$ the observed response of cell $(i,j)$, and \begin{eqnarray*} \bar{y}_{i \cdot} &=& \frac{1}{T} \sum_{j=1}^T y_{ij},\\ \bar{y}_{\cdot j}& =& \frac{1}{T} \sum _{i=1}^T y_{ij}, \\ \bar{y}_{\cdot\cdot} &=& \frac{1}{T} \sum_{j=1}^T \bar{y}_{\cdot j} = \frac{1}{T} \sum_{i=1}^T \bar{y}_{i \cdot} \end{eqnarray*} We prove in our appendix (\citep{SabbaghiRubin}) that the correct expectations are \begin{eqnarray*} \mathbb{E}\bigl(S_0^2\bigr) &=& \sigma_{\varepsilon}^2 + \frac{T-2}{(T-1)^2}\sum_{t=1}^T \sigma_{\eta}^2(t) \\ &&{}+ \frac{2}{(T-1)^3} \sum _{t \neq t'}r\bigl(t,t'\bigr) \sqrt{ \sigma_{\eta}^2(t) \sigma_{\eta}^2 \bigl(t'\bigr)} \\ & &{} + \frac{1}{T(T-1)^2}\sum _{i=1}^T \sum_{j=1}^T \sum_{t=1}^T\bigl[\bigl\{ R_i(t) - \bar{R}_i(\cdot) \bigr\}^2 \\ &&\hspace*{110pt}{}+ \bigl \{ C_j(t) - \bar{C}_j(\cdot) \bigr\}^2 \bigr] \end{eqnarray*} and \begin{eqnarray*} \mathbb{E}\bigl(S_1^2\bigr) &=& \sigma_{\varepsilon}^2 + \frac{1}{T-1}\sum_{t=1}^T \sigma_{\eta}^2(t) \\ &&{}+ \frac{1}{(T-1)^3} \sum _{t \neq t'} r\bigl(t,t'\bigr)\sqrt{ \sigma_{\eta}^2(t) \sigma_{\eta}^2 \bigl(t'\bigr)} \\ & &{}+ \frac{T}{T-1}\sum_{t=1}^T \bigl \{ \bar{X}_{\cdot\cdot}(t) - \bar{X}_{\cdot \cdot}(\cdot) \bigr \}^2. \end{eqnarray*} \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page 152] made a similar mistake as he did for RCBs, excluding \[ R_i(t) + C_j(t) - \bar{R}_i(\cdot) - \bar{C}_j(\cdot) \] in a simplified expression\vspace*{1pt} for the term inside the brackets of $S_0^2$ in his equation $(50)$. In effect, Neyman once again excluded corrections for soil fertility, as it is not necessarily true (nor stated explicitly) that $R_i(t)$ is constant in $t$ for all rows $i$ and that $C_j(t)$ is constant in $t$ for all columns $j$. \citeauthor{Sukhatme} [(\citeyear{Sukhatme}), page 167] made a similar mistake for the case when $\sigma_{\eta}^2(t)$ and $r(t,t')$ are not constant in $t, t'$. After incorrectly calculating the expected mean residual sum of squares, Neyman stated that the expected mean residual sum of squares was less than or equal to the expected mean treatment sum of squares under Neyman's null (\citep{Neyman1935}, page 154), with equality only under special cases, such as Fisher's sharp null. Based on this observation, Neyman conjectured that the standard ANOVA F-test for LSs is potentially invalid in the sense of having a higher Type~I error than nominal, that is, rejecting more often than desired under Neyman's null. However, the expected mean residual sum of squares is not necessarily less than the expected mean treatment sum of squares under Neyman's null. In fact, the inequality could go in either direction. We describe in Section~\ref{sec:interactions_EMS} how the inequality depends on interactions between row/column blocking factors and the treatment. 
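For concreteness, these definitions can be implemented directly. The following is a minimal Python sketch (our illustration, with a hypothetical potential-outcome array, and under the assumption that the LS randomization is uniform over all Latin squares of order $T$; for $T=3$ there are twelve) that computes $S_0^2$ and $S_1^2$ for each assignment and averages them to obtain the randomization expectations; the explicit-randomization calculations reported below for Tables~\ref{tab2} and \ref{tab3} can be reproduced along these lines.
\begin{verbatim}
import itertools
import numpy as np

def latin_squares(T):
    """All T x T Latin squares; L[i, j] = treatment assigned to cell (i, j)."""
    squares = []
    def extend(rows):
        if len(rows) == T:
            squares.append(np.array(rows))
            return
        for perm in itertools.permutations(range(T)):
            if all(perm[j] != r[j] for r in rows for j in range(T)):
                extend(rows + [perm])
    extend([])
    return squares

def mean_squares(X, L):
    """S_0^2 and S_1^2 for potential outcomes X[i, j, t] under assignment L."""
    T = L.shape[0]
    y = np.take_along_axis(X, L[:, :, None], axis=2)[:, :, 0]  # y_ij = x_ij(t) for t = L[i, j]
    xbar_t = np.array([y[L == t].mean() for t in range(T)])    # observed treatment means
    ybar_i, ybar_j, ybar = y.mean(axis=1), y.mean(axis=0), y.mean()
    resid = y - ybar_i[:, None] - ybar_j[None, :] - xbar_t[L] + 2 * ybar
    S0 = (resid ** 2).sum() / ((T - 1) * (T - 2))
    S1 = T / (T - 1) * ((xbar_t - ybar) ** 2).sum()
    return S0, S1

# hypothetical 3 x 3 potential outcomes X[i, j, t] (no technical errors)
rng = np.random.default_rng(1)
X = rng.integers(1, 10, size=(3, 3, 3)).astype(float)
vals = np.array([mean_squares(X, L) for L in latin_squares(3)])
print(vals.mean(axis=0))  # randomization expectations of (S_0^2, S_1^2)
\end{verbatim}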
\begin{table}[b] \caption{Table of potential outcomes for a LS with $\mathbb{E}(S_0^2) > \mathbb{E}(S_1^2)$}\label{tab2} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lccc@{}} \hline & \textbf{Column} $\bolds{1}$ & \textbf{Column} $\bolds{2}$ & \multicolumn{1}{c@{}}{\textbf{Column} $\bolds{3}$} \\ \hline Row $1$ & $(3,10,15)$ & $(50,30,13)$ & $(20,20,40)$ \\ Row $2$ & $(10,13,50)$ & $(20,40,3)$ & $(30,15,20)$ \\ Row $3$ & $(13,3,20)$ & $(15,20,10)$ & $(40,50,30)$ \\ \hline \end{tabular*} \end{table} Two examples of LSs with $T = 3$, $\sigma_{\varepsilon}^2 = 0$, and $\bar{X}_{\cdot\cdot}(\mathrm{1}) = \bar{X}_{\cdot\cdot}(\mathrm{2}) = \bar{X}_{\cdot\cdot}(\mathrm{3})$ (i.e., Neyman's null) demonstrate this fact. In Tables~\ref{tab2} and \ref{tab3}, each unit's potential outcomes are represented by an ordered triple, with the $t$th coordinate denoting the potential outcome under treatment $t$. For Table~\ref{tab2}, $\mathbb{E}(S_0^2) = 252.07, \mathbb{E}(S_1^2) = 172.38$. From our formulae, \begin{eqnarray*} &&\mathbb{E}\bigl(S_0^2\bigr) - \mathbb{E} \bigl(S_1^2\bigr) \\ &&\quad=-\frac{1}{(T-1)^2}\sum _{t=1}^T \sigma_{\eta}^2(t) \\ &&\qquad{}+ \frac{1}{(T-1)^3}\sum_{t \neq t'} r\bigl(t,t' \bigr)\sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2 \bigl(t'\bigr)} \\ & &\qquad{}+ \frac{1}{T(T-1)^2}\sum_{i=1}^T \sum _{j=1}^T \sum_{t=1}^T \bigl[\bigl\{ R_i(t) - \bar{R}_i(\cdot) \bigr \}^2 \\ &&\hspace*{111pt}\qquad{}+ \bigl\{C_j(t) - \bar{C}_j(\cdot) \bigr\}^2\bigr]. \end{eqnarray*} We verify by explicit randomization that the discrepancy $\mathbb {E}(S_0^2) - \mathbb{E}(S_1^2) = 79.69$ equals this expression, so that this is one LS for which the expected mean residual sum of squares is greater than the expected mean treatment sum of squares. The inequality \vspace*{1pt} is in the other direction for Table~\ref{tab3}, with $\mathbb{E}(S_0^2) = 4.96, \mathbb{E}(S_1^2) = 6.77$. \begin{table} \caption{Table of potential outcomes for a LS with $\mathbb{E}(S_0^2) < \mathbb{E}(S_1^2)$}\label{tab3} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lccc@{}} \hline & \textbf{Column} $\bolds{1}$ & \textbf{Column} $\bolds{2}$ & \multicolumn{1}{c@{}}{\textbf{Column} $\bolds{3}$} \\ \hline Row $1$ & $(7,4,8)$ & $(5,9,4)$ & $(6,6,5)$ \\ Row $2$ & $(8,5,6)$ & $(3,3,3)$ & $(2,2,7)$ \\ Row $3$ & $(1,8,2)$ & $(4,7,9)$ & $(9,1,1)$ \\ \hline \end{tabular*} \end{table} \subsection{Latin Square Designs: After the Controversy} \label{sec:LSD_history} As with RCBs, no additivity assumption is made on the potential outcomes for LSs. Nonadditivity for LSs has been further studied in the literature (\citep{Gourlay1955b}; \citep{Tukey1955}; \citep{Rojas}). Kempthorne recognized the issue of interactions between row/column blocking factors and the treatment factor in a LS (discussed in the next section): \begin{quote} It is clear that, if there are row-treatment or column-treatment interactions, these will enter into the error mean square but not into the treatment mean square. The situation is entirely analogous to that of randomized blocks in that block-treatment interactions enter the error mean square but not the treatment mean square. (\citep{Kempthorne1952}, page 195) \end{quote} \noindent \citeauthor{Kempthorne1952} [(\citeyear{Kempthorne1952}), page 204] continued by noting a defect of large LSs, namely, that there are more opportunities for row/column interactions with treatments. 
A substantial investigation in the spirit of Neyman was performed by \citet{WilkKempthorne}, and is briefly summarized by Hinkelmann and Kempthorne [(\citeyear{Kempthorne}), page 387]. Wilk and Kempt\-horne [(\citeyear{WilkKempthorne}), page 224] adopt the same specification of potential outcomes as \citet{Neyman1935}, allowing technical errors to differ based on treatment level $k$:
\[
y_{ijk} = Y_{ijk} + \varepsilon_{ijk}.
\]
One difference that makes the conceptual yield framework of Wilk and Kempthorne more general is that they consider randomly sampling rows, columns and treatments from some larger population. In any case, \citeauthor{WilkKempthorne} [(\citeyear{WilkKempthorne}), page 227] reach the reverse of Neyman's conclusion, stating that, usually, the expected mean residual sum of squares is larger than the expected mean treatment sum of squares. \citeauthor{WilkKempthorne} [(\citeyear{WilkKempthorne}), page 227] explain this difference and the fact that Neyman did not recognize interactions between row/column blocking factors and the treatments, by noting that \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page~145] made additional homogeneity assumptions. However, Neyman's assumptions were invoked solely to facilitate calculations and had no physical justifications. Our results are in agreement with a summary of their work in Table~3 from \citeauthor{WilkKempthorne} [(\citeyear{WilkKempthorne}), page 226]. Thus, it appears that Wilk and Kempthorne do not seriously consider the possibility that the inequality could go in the direction Neyman claimed. In fact, \citeauthor{Kempthorne} [(\citeyear{Kempthorne}), page~387], when summarizing this paper, explicitly state that the expected mean residual sum of squares is larger than the expected mean treatment sum of squares under Neyman's null. A possible explanation can be found in the sixth remark on page $227$, where Wilk and Kempthorne discuss how the standard approach to designing LSs may likely result in interactions of row/column blocking factors with treatments. As explained in our next section, the magnitudes of these interactions ultimately drive the direction of the inequality.
\citet{Cox1958} built on the work of Wilk and Kempt\-horne, and provided a rather unique viewpoint on this entire problem.\vadjust{\goodbreak} After first summarizing Wilk and Kempthorne's results by stating that it is usually the case that the expected mean residual sum of squares is larger than the expected mean treatment sum of squares, Cox then considered the practical importance of this difference of expectations, which he correctly recognized as being related to interactions between the treatment and blocking factors. \citeauthor{Cox1958} [(\citeyear{Cox1958}), page 73] raised the thought-provoking question of whether, for a LS, the practical scientific interest of the null
\[
H_0\dvtx\mathbb{E}\bigl(S_0^2\bigr) = \mathbb{E}\bigl(S_1^2\bigr)
\]
is comparable to, or greater than, that of Neyman's null, especially when the difference between these expected mean sums of squares is considered important. He concluded that testing Neyman's null when there is no unit-treatment additivity does not seem to be helpful:
\begin{quote}
\ldots if substantial variations in treatment effect from unit to unit do occur, one's understanding of the experimental situation will be very incomplete until the basis of this variation is discovered and any extension of the conclusions to a general set of experimental units will be hazardous.
The mean treatment effect, averaged over all units in the experiment, or over the finite population of units from which they are randomly drawn, may in such cases not be too helpful. Particularly if appreciable systematic treatment-unit interactions are suspected, the experiment should be set out so these may be detected and explained. (\citep{Cox1958}, page 73) \end{quote} \citeauthor{Cox2012} [(\citeyear{Cox2012}), page 3] later argued that when this more realistic null is formulated, the biases described earlier disappear, and so do issues surrounding the LS. A related point for the LS design noted by Cox is the marginalization principle, in which models having nonzero interactions and zero main effects are not considered sensible [similar to the effect heredity principle (\cite{wuhamada}, page 173)]. \citet{Box1984}, when commenting on \citet{Cox1984}, provided an opposing view that makes such a principle context-dependent. \subsection{Block-Treatment Interactions and Expected Sums of Squares} \label{sec:interactions_EMS} Neyman excluded the following (respective) terms in $\mathbb{E}(S_0^2)$ for RCBs and LSs: \begin{eqnarray*} &&\frac{1}{(N-1)(T-1)} \sum_{i=1}^N \sum _{t=1}^T \bigl\{ B_i(t) - \bar {B}_i(\cdot) \bigr\}^2, \\ &&\frac{1}{(T-1)^2}\sum_{i=1}^T\sum _{t=1}^T \bigl\{ R_i(t) - \bar{R}_i(\cdot) \bigr\}^2 \\ &&\quad{}+ \frac{1}{(T-1)^2}\sum _{j=1}^T\sum_{t=1}^T \bigl\{ C_j(t) - \bar{C}_j(\cdot) \bigr \}^2. \end{eqnarray*} In each, we are adding squared differences between the fertility correction for a specific combination of block and treatment levels, and the average (over treatments) fertility correction for the same block level. For the LS, this is decomposed as a sum over the row and a sum over the column blocking factors. Formally, these terms gauge whether, for each level of a blocking factor, the fertility corrections are constant over the treatments, and represent interactions between blocking factors and treatments. For RCBs, we have \[ B_i(t) - \bar{B}_i(\cdot) = \bigl\{ \bar{X}_{i \cdot}(t) - \bar{X}_{i \cdot}(\cdot) \bigr\} - \bigl\{ \bar {X}_{\cdot\cdot}(t) - \bar{X}_{\cdot\cdot}(\cdot)\bigr\}, \] which is the interaction between the $i$th block and the $t${th} treatment in terms of potential outcomes. Similarly, we have for LSs that \begin{eqnarray*} R_i(t) - \bar{R}_i(\cdot) &=& \bigl\{ \bar{X}_{i \cdot}(t) - \bar{X}_{i \cdot}(\cdot) \bigr\} - \bigl\{ \bar {X}_{\cdot\cdot}(t) - \bar{X}_{\cdot\cdot}(\cdot)\bigr\}, \\ C_j(t) - \bar{C}_j(\cdot) &=& \bigl\{ \bar{X}_{\cdot j}(t) - \bar{X}_{\cdot j}(\cdot) \bigr\} - \bigl\{ \bar {X}_{\cdot\cdot}(t) - \bar{X}_{\cdot\cdot}(\cdot)\bigr\}, \end{eqnarray*} which are the interactions between the $i$th row and $t${th} treatment, and the $j$th column and the $t${th} treatment, respectively, in terms of potential outcomes. Intuitively, these interactions, which are functions of potential outcomes, should reside within the expectation of the mean residual sum of squares. Without invoking additivity on the potential outcomes, these interactions are not necessarily zero and, because we lack replications within blocks for either RCB or LS designs, we cannot form an interaction sum of squares from the observed data, so that the potential outcome interactions will instead be included in the expectation of the mean residual sum of squares (\citep{Fisher}, Chapters IV, V). 
In contrast, for randomized block designs that include replications within each block, this interaction term is no longer present in the expected mean residual sum of squares.
To better understand the expected mean sums of squares for LSs, consider their difference under Neyman's simplifying assumption that $\sigma_{\eta}^2(t)$ and $r(t,t')$ are constant, so that $\sigma_{\eta}^2(t) = \sigma_{\eta}^2$ and $r(t,t') = r$ for all treatments $t, t'$. Then the difference between $\mathbb{E}(S_0^2)$ and $\mathbb{E}(S_1^2)$ under Neyman's null is $(T-1)^{-2}$ times
\begin{eqnarray*}
&&\sum_{i=1}^T \sum_{t=1}^T\bigl\{R_i(t) - \bar{R}_i(\cdot)\bigr\}^2
\\
&&\quad{}+ \sum_{j=1}^T \sum_{t=1}^T \bigl\{C_j(t) - \bar{C}_j(\cdot)\bigr\}^2 - T\sigma_{\eta}^2(1 - r),
\end{eqnarray*}
and this expression, in some sense, measures the difference between row/column interactions with treatment and the variance of the potential outcome residual terms (scaled by the number of treatments, $T$, times one minus the correlation between potential outcome residual terms for different pairs of treatments). Note that $0 \leq1 - r \leq2$, so $0 \leq T\sigma_{\eta}^2(1-r) \leq2T\sigma_{\eta}^2$.
To interpret the difference in expectations for the general case, first note that
\begin{eqnarray*}
\sum_{i=1}^T \sum_{j=1}^T\bar{\eta}_{ij}(\cdot)^2 &\geq& 0\quad \Rightarrow\\
\sum_{t=1}^T \sigma_{\eta}^2(t) &\geq&-\sum_{t \neq t'} r\bigl(t,t'\bigr)\sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2\bigl(t'\bigr)}.
\end{eqnarray*}
As such, $\mathbb{E}(S_0^2) - \mathbb{E}(S_1^2)$ under Neyman's null is bounded from below by
\begin{eqnarray*}
&&\frac{1}{(T-1)^2}\sum_{i=1}^T \sum_{t=1}^T\bigl\{R_i(t) - \bar{R}_{i}(\cdot)\bigr\}^2\\
&&\quad{} + \frac{1}{(T-1)^2}\sum_{j=1}^T \sum_{t=1}^T \bigl\{C_j(t) - \bar{C}_{j}(\cdot)\bigr\}^2
\\
&&\quad{}- \frac{T}{(T-1)^3}\sum_{t=1}^T \sigma_{\eta}^2(t),
\end{eqnarray*}
so that, if
\begin{eqnarray*}
&&\sum_{i=1}^T \sum_{t=1}^T\bigl\{R_i(t) - \bar{R}_{i}(\cdot)\bigr\}^2 + \sum_{j=1}^T \sum_{t=1}^T \bigl\{C_j(t) - \bar{C}_{j}(\cdot)\bigr\}^2\\
&&\quad{} - \frac{T}{T-1}\sum_{t=1}^T \sigma_{\eta}^2(t) \geq0,
\end{eqnarray*}
then $\mathbb{E}(S_0^2) \geq\mathbb{E}(S_1^2)$. Even in the most general case for LSs, $\mathbb{E}(S_0^2) - \mathbb{E}(S_1^2)$ can still be interpreted as a comparison between row/column interactions with treatment and the (scaled) sum of variances of residual potential outcomes $\eta_{ij}(t)$.
In the context of an agricultural experiment, we obtain a more meaningful interpretation for this difference. Latin squares are implemented to block on fertility gradients in two directions (\citep{Neyman1935}; \citep{Fisher}, Chapter V; Hinkelmann and Kempthorne, \citeyear{Kempthorne}, Chapter~10). If the variability of specific soil fertility corrections across rows and columns (i.e., interactions between rows/columns and treatments) is substantially larger than the residual variability of the potential\vadjust{\goodbreak} outcomes [i.e., the variability of the $\eta_{ij}(t)$], then $\mathbb{E}(S_0^2) - \mathbb{E}(S_1^2)$ is larger than zero. An example was given in Table~\ref{tab2}, where
\begin{eqnarray*}
&&\sum_{i=1}^T \sum_{t=1}^T\bigl\{R_i(t) - \bar{R}_i(\cdot)\bigr\}^2 + \sum_{j=1}^T \sum_{t=1}^T \bigl\{C_j(t) - \bar{C}_j(\cdot)\bigr\}^2\\
&&\quad= 569.93,
\\
&&-\sum_{t=1}^T \sigma_{\eta}^2(t) = -313.56,
\\
&&\frac{1}{T-1}\sum_{t \neq t'} r\bigl(t,t'\bigr)\sqrt{\sigma_{\eta}^2(t) \sigma_{\eta}^2\bigl(t'\bigr)} = 62.41.
\end{eqnarray*} The interaction is nearly twice the variability of the residual potential outcomes, and so the difference $\mathbb{E}(S_0^2) - \mathbb{E}(S_1^2)$ is greater than zero. For Table~\ref{tab3}, \begin{eqnarray*} &&\sum_{i=1}^T \sum _{t=1}^T\bigl\{R_i(t) - \bar{R}_i(\cdot)\bigr\}^2 + \sum _{j=1}^T \sum_{t=1}^T \bigl\{C_j(t) - \bar{C}_j(\cdot)\bigr\}^2 \\ &&\quad= 9.48, \\ &&-\sum_{t=1}^T \sigma_{\eta}^2(t) = -14.59, \\ &&\frac{1}{T-1}\sum_{t \neq t'} r\bigl(t,t' \bigr)\sqrt{\sigma_{\eta}^2(t) \sigma _{\eta}^2\bigl(t'\bigr)} = -2.11, \end{eqnarray*} and the variance of the residuals completely dominates the interaction. Hence, $\mathbb{E}(S_0^2) > \mathbb{E}(S_1^2)$ in the presence of a strong fertility gradient, with the interaction between row/column blocking factors and treatment greater than the variance of the residual potential outcomes or, alternatively, when the unit-treatment interactions are negligible. Similarly, $\mathbb{E}(S_0^2) < \mathbb{E}(S_1^2)$ in cases where no strong interaction exists between row/column blocking factors and the treatment when compared to the variability of the residual potential outcomes or, alternatively, when the unit-treatment interactions are substantial. It is important to recognize that such important interactions can never be assessed without replication, which is not available in the original LS design. \section{Controversial Connections} \label{sec:3} \subsection{Connecting Expected Mean Sums of Squares with Type I Error} \label{sec:cacophony} \citeauthor{Neyman1935} (\citeyear{Neyman1935}) calculated expectations of mean sums of squares to argue that the standard ANOVA F-test for RCB designs is valid and the test for LS designs is invalid when testing Neyman's\vadjust{\goodbreak} null: a test was said to be ``unbiased'' if $\mathbb{E}(S_0^2) = \mathbb{E}(S_1^2)$ under Neyman's null (\citep{Neyman1935}, page 144). The reasoning behind this definition is not discussed at all and, given our current understanding of hypothesis testing, seems somewhat crude. After all, to determine whether a particular testing procedure is ``biased,'' one typically calculates the probability of rejecting a true null hypothesis, which generally depends on the test statistic's distribution, not just its expectation. To better understand the logic potentially driving Neyman's reasoning, it is useful to review the testing of Fisher's sharp null. A randomization test that uses any {a priori} defined test statistic automatically yields the correct Type I error under Fisher's sharp null and regularity conditions on the potential outcomes and number of randomizations. Furthermore, when using the statistic $F = S_1^2/S_0^2$, this randomization distribution is well approximated by the F-distribution, for both RCB and LS designs. \citet{Welch} calculated the first two moments of \begin{equation} \frac{df_1 S_1^2}{df_1 S_1^2 + df_0 S_0^2} = \frac{df_1 F}{df_1 F + df_0}, \label{eq:trick} \end{equation} where $df_1$ denotes the degrees of freedom for treatment sum of squares, and $df_0$ the degrees of freedom for residual sum of squares. \citet{Pitman} calculated the first four moments of this statistic. For both RCB and LS designs, $df_1 S_1^2 + df_0 S_0^2$ remains constant over the randomizations under Fisher's sharp null, making calculation of the moments of (\ref{eq:trick}) much easier than of $F$ itself. 
Furthermore, under regularity conditions on the potential outcomes, it was shown that these moments are approximately equal to the corresponding moments of a Beta distribution. In this respect, the standard ANOVA F-test that uses rejection cutoffs based on the F-distribution has approximately the correct Type I error, and the F-distribution can be viewed as a simple approximation to the randomization distribution of the F-test statistic when testing Fisher's sharp null (\citep{Kempthorne1952}, pages 172, 193). Indeed, as stated by \citeauthor{Wilk1955} [(\citeyear{Wilk1955}), page 77] the amount of computation to perform a randomization test could be prohibitive, and statisticians had little recourse except to use such approximations. Kempthorne made a similar remark: \begin{quote} It should be realized that the analysis of variance test with the F distribution has a fair basis apart from normal law theory and is probably in most cases a good approximation to the randomization analysis of variance test, which is a nonparametric test. (\citep{Kempthorne1955}, page 966) \end{quote} Kempthorne earlier stated that for LSs: \begin{quote} The randomization test for the Latin Square or for any randomized design is entirely valid in the sense of controlling Type I errors, but the approximation to this test by the F-distribution when there is nonadditivity is apparently completely unknown. (\citep{Kempthorne1955}, page 965) \end{quote} \noindent As Neyman did not invoke additivity or any other regularity conditions on the potential outcomes, the reasoning outlined in the previous paragraph that establishes the F-distribution as an approximation to the true distribution of the F-test statistic is no longer valid when testing Neyman's null: for example, $df_1 S_1^2 + df_0 S_0^2$ is generally no longer constant over the randomizations, and calculating moments of equation~(\ref {eq:trick}) generally becomes very difficult. \citeauthor{Wilk1955} [(\citeyear{Wilk1955}), page~79] realized this, remarking that the standard ANOVA F-test for testing Neyman's null in RCBs depends on the assumption that block-treatment interactions are zero. \citeauthor{WilkKempthorne} [(\citeyear{WilkKempthorne}), page~228] also stated that the effect of nonadditivity on the Type I error of the standard ANOVA F-test for a LS is unknown. Bearing these facts in mind, a comparison of expected mean residual and treatment sums of squares could be viewed as a crude way of assessing whether the Type I error is correct when testing Neyman's null using the standard ANOVA F-test. \citet{Neyman1935} himself may have realized this: \begin{quote} \ldots in the case of the Randomized Blocks the z test may be considered as unbiased in the sense that the expectations of $S_0^2$ and $S_1^2$ have a common value \ldots On the other hand,\vadjust{\goodbreak} by the arrangement in Latin Square the expectation of $S_1^2$ is equal to $\frac{1}{2}n'\sigma_d^2$, while that of $S_0^2$ is generally smaller. This suggests, although it does not prove, that by the Latin Square arrangement the z test may have the tendency to detect differentiation when it does not exist. 
(\citep{Neyman1935}, page 144) \end{quote} \noindent After calculating expected mean sums of squares for RCBs, Neyman states that \begin{quote} If there is no differentiation among the $X_{\cdot\cdot}(k)$, then $\mathbb{E}(S_1^2) = \mathbb{E}(S_0^2)$,\vadjust{\goodbreak} and we see that the test of significance usually applied is unbiased in the sense that if there is no differentiation, then the values of $S_1^2$ and $S_0^2$ must be approximately equal. This, of course, does not prove the validity of Fisher's z test. (\citep{Neyman1935}, page 150) \end{quote} \noindent Furthermore, Neyman states that for LSs: \begin{quote} We conclude, therefore, that at present there is no theoretical justification for the belief that the z test is valid in the case of the arrangement by the Latin Square: not only is there the difficulty connected with the nonnormality of the distribution of the $\eta$'s, but also the functions which are usually considered as unbiased estimates of the same variance have generally different expectations. This may (though not necessarily so) cause a tendency to state significant differentiation when this, in fact, does not exist. \ldots These, of course, are purely theoretical conclusions, and I am personally inclined to think that from the practical point of view the existing bias will prove to be negligible. (\citep{Neyman1935}, page 154) \end{quote} This same consideration of expected mean sums of squares for hypothesis testing continues in the present literature on experimental design: \begin{quote} It is the form of the expected mean squares, $\mathbb{E}[\mathrm {MS}(i)]$, which determines, for example, how tests of hypotheses are performed and how error variances are estimated. (\citep{Kempthorne}, page~37) \end{quote} \noindent Also: \begin{quote} In this case, MS(E) is on average larger than MS(T) under the hypothesis of no treatment effects and hence the usual F-test will lead to fewer significant results. In this case the LSD is not an unbiased design. (\citep{Kempthorne}, page~387) \end{quote} It is interesting to note that the specific justification for this last statement was never made, nor was any attempt made to calculate explicitly the Type I error. Even more interesting is how these statements contradict Kempthorne's earlier\vadjust{\goodbreak} position on the connection between expected mean sums of squares and hypothesis testing (e.g., as given by \citeauthor{Kempthorne1952} [(\citeyear{Kempthorne1952}), page 149]), for example: \begin{quote} To establish the property of unbiasedness for this design it is \ldots necessary to show that the expectation over randomizations of the error mean square resulting from this model is equal to the mean square among all observations in the absence of treatment effects. \ldots it should perhaps be noted that this property has no intrinsic relation to the concept of unbiasedness of a test. (\citep{Kempthorne1955}, page 956) \end{quote} \noindent \citet{WilkKempthorne} hold this same position, stating that: \begin{quote} We accept the view that tests of significance are evaluatory procedures leading to assessments of strength of evidence against particular hypotheses, while tests of hypotheses are decision devices. We are here concerned with the former, and in this connection it should be noted that (a) the expectations of mean squares are in some degree irrelevant to the exact (permutation) test of significance of the null hypothesis that the treatments are identical. 
(\citep{WilkKempthorne}, page 228) \end{quote} \subsection{Concrete Calculations} \label{sec:concrete} From Section~\ref{sec:RCBD_theory}, the F-test for RCBs is generally biased in one direction under Neyman's conception of an unbiased test, potentially leading to fewer rejections under Neyman's null. Furthermore, because we do not make any assumptions about the difference between the interactions of rows/columns with treatment and the residual variances in Section~\ref{sec:LSD_theory}, we actually cannot claim that the F-test for LSs is biased in any one direction. A more rigorous justification for the ``unbiasedness'' of the F-test for either design would compare the actual distribution of the F-test statistic to the associated F-distribution. By determining whether the distribution of $F = S_1^2/S_0^2$ is adequately approximated by the F-distribution under Neyman's null, one would be able to conclude whether the Type I error is approximately as advertised. \begin{table}[b] \caption{Table of potential outcomes for a $4 \times4$ LS, with $\mathbb{E}(S_0^2) = \mathbb{E}(S_1^2)$}\label{tab4} \begin{tabular*}{\columnwidth}{@{\extracolsep{\fill}}lcccc@{}} \hline & \textbf{Column} $\bolds{1}$ & \textbf{Column} $\bolds{2}$ & \textbf{Column} $\bolds{3}$ & \multicolumn{1}{c@{}}{\textbf{Column} $\bolds{4}$} \\ \hline Row $1$ & $(1,1,1,1)$ & $(0,0,0,0)$ & $(0,0,0,0)$ & $(0,0,0,0)$ \\ Row $2$ & $(0,0,0,0)$ & $(1,1,1,1)$ & $(0,0,0,0)$ & $(0,0,0,0)$ \\ Row $3$ & $(0,0,0,0)$ & $(0,0,0,0)$ & $(0,0,0,0)$ & $(0,0,0,0)$ \\ Row $4$ & $(0,0,0,0)$ & $(0,0,0,0)$ & $(0,0,0,0)$ & $(0,0,0,0)$ \\ \hline \end{tabular*} \end{table} \begin{figure*}[b] \includegraphics{454f01.eps} \caption{Comparison of the distributions of $S_1^2/S_0^2$ and $F_{3,6}$ for Table \protect\ref{tab4}; the distribution of $S_1^2/S_0^2$ is represented by dots and that of $F_{3,6}$ by dashes. The figure on the left is for the case with no technical errors, and the figure on the right is for technical errors with $\sigma_{\varepsilon} = 0.01$.} \label{fig1} \end{figure*} We performed this comparison for various RCBs and LSs, and observed that Neyman's definition of unbiased tests fails. In particular, we can generate infinitely\vadjust{\goodbreak} many RCBs and LSs such that $(1)$ Neyman's null holds, $(2)$ there is no interaction between blocking factor(s) and treatment, $(3)$ the expected mean residual sum of squares equals the expected mean treatment sum of squares, and yet there is zero probability of rejecting Neyman's null when the rejection rule is based on a comparison of the observed value of $S_1^2/S_0^2$ with $\alpha= 0.05$ cutoffs used in the standard ANOVA F-test. For simplicity, consider the case with no technical errors. One simple example of a $4 \times4$ LS, with $\sigma_{\eta}^2(t), r(t,t')$ constant, $\mathbb{E}(S_0^2) = \mathbb{E}(S_1^2)$, and no interactions between row/column blocking factors and the treatment, is presented in Table~\ref{tab4}. Now $F_{3,6,0.95} = 4.76$ and, as we have all potential outcomes, we can calculate the probability that $S_1^2 > k S_0^2$ for any positive number $k$ over the distribution of $S_1^2$ and $S_0^2$. These probabilities are given in the left of Figure~\ref{fig1}, which also displays probabilities that $F_{3,6} > k$; probabilities from the randomization distribution of $S_1^2/S_0^2$ are plotted as dots, and probabilities for the $F_{3,6}$ distribution as dashes. 
A~horizontal line at $0.05$ and a vertical line at $4.76$ were drawn to illustrate conclusions obtained at the $0.05$ significance level. The probability of rejecting Neyman's null when using the standard ANOVA F-test is zero. The crucial factor here is the structure of the potential outcomes. Fisher's sharp null holds, so the total sum of squares, and the sum of squares for row and column blocking factors, remain constant over the randomization. Furthermore, the treatment sum of squares takes only two values, corresponding to whether cells $(1,1)$ and $(2,2)$ receive the same treatment or not, and similarly the residual sum of squares takes only two values. Hence, the F-test statistic takes only two possible values, so that cutoffs given by consideration of the F-distribution will not yield approximately correct Type I errors for testing Neyman's null. Inclusion of technical errors does not change our general conclusion. Suppose technical errors are Normally distributed with $\sigma_{\varepsilon} = 0.01$. The corresponding figure for the LS in Table~\ref{tab4} is displayed in the right of Figure~\ref{fig1}. We generated this figure by simulation: we first drew $\varepsilon_{ij}(t)$, then performed the randomizations to generate the distribution of $S_1^2$ and $S_0^2$ for that specific draw of technical errors, and finally repeated this process $2000$ times to estimate the probabilities. \section{Controversial Consequences and Conclusions} \label{sec:4} \subsection{Consequences} \label{sec:consequences} The most immediate consequence of this entire controversy was the resulting hostile relationship between Neyman and Fisher for essentially the remainder of their careers, with each seeking to undermine the other. For example, Neyman was slightly critical in a discussion of a paper presented by \citet{Yates1935} on factorial designs. \citeauthor{Box} [(\citeyear{Box}), page 265] claimed that Neyman wanted to demonstrate his superiority by finding flaws in Fisher's work at this meeting. \citeauthor{Reid} [(\citeyear{Reid}), page 126] described an interesting encounter between Neyman and Fisher, taking place in Neyman's room at University College London one week after this discussion. Fisher demanded that Neyman only use Fisher's books when lecturing on statistics at the university. When Neyman refused to do so, Fisher openly declared that he would oppose Neyman in all his capacities, and banged the door when he left the room. These skirmishes continued for some time (\citep{Reid}, pages 143, 169, 183--184, 223--226, 256--257). Neyman appears to have attempted some type of reconciliation, inviting Fisher to lecture at Berkeley (\citep{Reid}, page 222), and generally became more conciliatory toward Fisher and his contributions to statistics (\citep{Neyman1976}; \citep{Reid}, page 45). In any case, these passages suggest an indirect consequence of this controversy: Neyman's decision to depart for America, where he created a world-class center for statistics at the University of California Berkeley (\citep{Reid}, page 239), established a prominent series of symposia (\citep{Reid}, pages 197--198), and helped to nurture, through his leadership, the American Statistical Association and Institute of Mathematical Statistics (\citep{Reid}, page 218). 
\citet{FienbergTanur1996} suggest that this break in the professional relationship between Neyman and Fisher may have led to a sharper division between the fields of sample surveys and experimental design: \begin{quote} Because of the bitterness that grew out of this dispute \ldots Fisher and\vadjust{\goodbreak} Neyman were never able to bring their ideas together and benefit from the fruitful interaction that would likely have occurred had they done so. And in the aftermath, Neyman staked out intellectual responsibility for sampling while Fisher did the same for experimentation. It was in part because of this rift between Fisher and Neyman that the fields of sample surveys and experimentation drifted apart. (\citep{FienbergTanur1996}, page 238) \end{quote} \citet{Cox2012} makes the interesting remark that more effort was devoted to issues in randomization following this controversy: \begin{quote} The general issues of the role of randomization were further discussed in the next few years, mostly in \textit{Biometrika}, with contributions from Student, Yates, Neyman and Pearson, and Jeffreys. With the exception of Student's contribution, which emphasized the role of randomization in escaping biases arising from personal judgement, the discussion focused largely on error estimation. (\citep{Cox2012}, page 3) \end{quote} Another consequence was undue emphasis on linear models for analysis of experimental data. As stated by \citeauthor{Gourlay1955a} [(\citeyear{Gourlay1955a}), page 228] Neyman's work in $1935$ led to increased attention on models (for observed data) that formed the basis of statistical analyses such as ANOVA. \citet{Eisenhart1947}, for example, explicitly laid out the four standard assumptions used to justify ANOVA, and noted the importance of additivity. Immediately following this article, \citet{Cochran1947} explored the consequences for an analysis when additivity (and the other assumptions) were not satisfied, and \citet{Bartlett1947} discussed various transformations of the data that make additivity more plausible for ANOVA. Accordingly, past and present books on experimental design tend to invoke additive models when testing Neyman's null using the standard ANOVA F-test, an assumption that automatically yields a test of Fisher's sharp null (\citep{Kempthorne1952}, Chapters~8, 9, 10; \citep{Kempthorne}, Chapters~9, 10). When additivity is believed not to hold, one is generally advised to search for a transformation that yields an additive structure on the potential outcomes. For example, \citeauthor{WilkKempthorne} [(\citeyear{WilkKempthorne}), page 229] make the strong recommendation to\vadjust{\goodbreak} transform to a scale where additivity more nearly obtains for purposes of estimation. This also reflects the motivation behind the famous \citet{BoxCox} family of transformations. Of course, greater emphasis on linear models with Normal errors for observed potential outcomes can generate doubts as to whether randomization is necessary in experimental design. What is then lost is the fact that explicit randomization, as extolled by Fisher, provides the scientist with internally consistent statistical inferences that require no standard modeling assumptions, such as those required for linear regression. It is ironic that many textbooks on experimental design focus solely on Normal theory linear models, without realizing that such models were originally motivated as approximations for randomization inference. 
Additivity has even been considered an essential assumption for interpreting estimands. For example, \citeauthor{Cox} [(\citeyear{Cox}), pages 16--17] states that the average difference in observed outcomes for two treatments estimates the difference in average potential outcomes for the two treatments in the finite population, but that this estimand of interest is \textit{``\ldots rather an artificial quantity''} if additivity does not hold on the potential outcomes. Perhaps \citeauthor{Kempthorne1952} [(\citeyear{Kempthorne1952}), page 136] can best justify this statement with the specific example where, for each experimental unit, the square root of the potential outcome under treatment is $5$ more than the square root of the potential outcome under control. If one experimenter has three experimental units with control potential outcomes equal to $25, 64$ and $100$, then the effect of the treatment on the raw measurement scale would range from $75$ to $125$. However, another experimenter working with units having control potential outcomes ranging from $9$ to $16$ would have treatment effects ranging from $55$ to $65$ on the raw scale. As Kempthorne states: \begin{quote} Under these circumstances both experimenters will agree only if they state their results in terms of effects on the square root of the observation. It is desirable then to express effects on a scale of measurement such that they are exactly additive. (\citep{Kempthorne1952}, page 136) \end{quote} Thus, Kempthorne's justification for additivity is that it enables externally consistent conclusions to be drawn from a particular analysis, that is, two experimenters working with different samples from the same population will reach the same\vadjust{\goodbreak} conclusion on the treatment effect. One could also interpret this as suggesting that experimenters should model the potential outcomes, with additive treatment effects being one simple model for an analysis. Kempthorne continues to state that: \begin{quote} Such a procedure has its defects, for experimenters prefer to state effects on a scale of measurement that is used as a matter of custom or for convenience reasons. It is probably difficult, for instance, to communicate to a farmer the meaning of the statement that a certain dose of an insecticide reduces the square root of the number of corn borers. A statement on the effect of number of corn borers can be made but is more complex. These difficulties are not, however, in the realm of the experimenter. He should examine his data on a scale of measurement which is such that treatment effects are additive. The real difficulty, in general, is to determine the scale of measurement that has the desired property. (\citep{Kempthorne1952}, page 136) \end{quote} We again read in this quote the perceived importance of additivity that helped motivate the \citet{BoxCox} family of transformations. We do not believe it is necessary to study treatment effects on an additive scale: it is arguably more important to have an internally consistent definition and statistical procedure for studying treatment effects before deciding on externally consistent considerations. In our opinion, an ultimate consequence of this controversy is that, by focusing almost solely on linear models, advances in experimental design have been seriously inhibited from their original, useful and liberating formulation involving potential outcomes. 
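As a purely arithmetic check of Kempthorne's square-root example quoted above (an illustrative sketch; the control outcomes are the ones given in the quote, and the treatment is taken to add exactly $5$ on the square-root scale):

\begin{verbatim}
# Treatment adds 5 on the square-root scale: y_treat = (sqrt(y_control) + 5)^2.
for y_control in (25, 64, 100, 9, 16):
    y_treat = (y_control ** 0.5 + 5) ** 2
    print(y_control, round(y_treat), round(y_treat - y_control))
# Raw-scale effects: 75, 105, 125 for the first experimenter and 55, 65 for
# the second, while the effect on the square-root scale is constant at 5.
\end{verbatim}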
\subsection{Conclusions} \label{sec:conclusion} The Neyman--Fisher controversy arose in part because Neyman sought to determine whether Fisher's ANOVA F-test for RCBs and LSs would still be valid when testing Neyman's more general null hypothesis. Unfortunately, Neyman's calculations were incorrect. In fact, under Neyman's conception of unbiased tests, the F-test for RCB designs potentially rejects at most at the nominal level, yet we could never know for any particular situation whether the F-test for LS designs would reject more often than nominal or not. Furthermore, Neyman's definition of unbiased tests is too crude, because expected\vadjust{\goodbreak} mean sums of squares do not determine the Type I error of the F-test when testing Neyman's null. Two of the greatest statisticians argued over incorrect calculations and inexact measures of unbiasedness for hypothesis tests, adding an ironic aspect to this controversy. What is also ironic is that apparently no statistician deigned to check Neyman's algebra or reasoning; the only discussant who suggested there was a mistake in Neyman's algebra was Fisher, but he did not explicitly state that Neyman was missing interactions in both expected mean residual sums of squares. \citeauthor{Sukhatme} [(\citeyear{Sukhatme}), pages 166, 167] recalculated the expected mean sums of squares in the general case where $\sigma_{\eta}^2(t)$ and $r(t,t')$ are not constant, and did not catch Neyman's mistake. Sukhatme also performed sampling experiments for two examples of LSs to support Neyman's claims. In both of Sukhatme's examples, there is no interaction between row/column blocking factors and treatment, so that $\mathbb{E}(S_0^2) < \mathbb{E}(S_1^2)$. \citeauthor{Neyman1935} [(\citeyear{Neyman1935}), page 175] then considered his algebra correct, because \textit{`` \ldots none of my critics have attempted to challenge it.''} Fisher never referenced \citet{Neyman1935} in his book on experimental design and apparently ignored potential outcomes for many years (\citep{Rubin2005}; \citep{Lehmann}, page 59). Fisher's avoidance of potential outcomes led him to make certain oversights in causal inference. In particular, as described by \citet {Rubin2005}, Fisher never bridged his work on experimental design and parametric modeling, and gave generally flawed advice on the analysis of covariance to adjust for posttreatment concomitants in randomized trials. There is only one reference to \citet{Neyman1935} by \citeauthor{Kempthorne} [(\citeyear{Kempthorne}), page 387] and it was referred to as \textit{``\ldots an interesting somewhat different discussion \ldots''}. The standard accounts of Fisher and Neyman's professional careers (\citep{Box}; \citep{Reid}) do not mention any further work being done on questions raised by \citet{Neyman1935}, although Kempthorne is quoted as saying: \begin{quote} The allusion to agriculture is quite unnecessary and the discussion is relevant to experimentation in any field of human enquiry. The discussion section \ldots is interesting because of the remarks of R. A. Fisher which are informative in some respects but in other respects exhibit Fisher at his very worst \ldots. The\vadjust{\goodbreak} judgement of the future will be, I believe, that Neyman's views were in the correct direction. (\citep{Reid}, page 123) \end{quote} Even the recent account by \citeauthor{Lehmann} [(\citeyear{Lehmann}), Chapters~4, 5] does not mention any statistician addressing Neyman's claims or checking his algebra. 
In fact, Lehmann ends his discussion of this controversy by recounting the destruction of the physical models Neyman used to illustrate his thoughts on RCB and LS designs during his $1935$ presentation, thought to have been perpetrated by Fisher in a fit of anger (\citep{Reid}, page 124; \citep{Lehmann}, Chapter~4). We agree with Kempthorne's assessment that Neyman's views were in the correct direction in the following sense: by evaluating the frequency properties of statistics for both designs, one can see that the F-test is no longer precise without further assumptions on the potential outcomes. Such evaluations serve the important task of investigating the general properties of a design in a particular applied setting. The F-distribution is a useful approximation to the randomization distribution of the F-test statistic under Fisher's sharp null hypothesis and regularity conditions on the distribution of the potential outcomes or, alternatively, for testing Neyman's null under additivity (\citep{Welch}; \citep{Pitman}). We also agree with \citet{Cox1958} that, if block-treatment interactions are not negligible, then it is not particularly useful to test Neyman's null. More generally, we believe that one must think carefully about the type of null hypotheses one will test, and should be guided by an appropriate model on the potential outcomes. At one extreme, Fisher's sharp null hypothesis requires no model on the potential outcomes to test a reasonable, scientifically interesting null, with the reference distribution based solely on the randomization actually implemented during the experiment. To test Neyman's null, one either needs strong regularity conditions on the potential outcomes for standard procedures to work or one needs to think carefully to build and evaluate a model for the potential outcomes. In any case, one necessarily needs to make assumptions to assess more complicated null hypotheses, and it is important that assumptions on the potential outcomes are driven by actual science, routinely checked for their approximate validity, and not chosen based on necessary requirements for classical statistical procedures that have no real scientific merit. Therefore, a better strategy than focusing on satisfying additivity to use the F-test for testing Neyman's null, we believe, is to introduce a Bayesian framework into the problem (\citep {Rubin1978}). One can obtain a posterior predictive distribution for the estimand of interest (defined in terms of the potential outcomes) and evaluate relevant Bayes' rules using the same criteria that Neyman and others have considered (e.g., consistency, coverage, Type I error) (\citep{Rubin1984}). The Fisher randomization test can be viewed as a type of posterior predictive check (\citep{Rubin1984}), and it can be more enlightening (as the example in Section~\ref{sec:concrete} illustrates) to perform explicitly the Fisher randomization test for Fisher's sharp null, rather than using the F-distribution as an approximation when testing Neyman's null under additivity. When additivity may not hold, evaluating Bayes' rules motivated by the particular applied setting of a problem appears to be a more viable path to the solution of a specific problem than relying on classical statistical procedures that are imprecise without applied contexts. \section*{Acknowledgments} We are grateful to the Executive Editor, an Associate Editor and a referee for many valuable comments that improved this paper. 
The research of Arman Sabbaghi was supported by the United States National Science Foundation Graduate Research Fellowship under Grant No. DGE-1144152. \begin{supplement}[id=suppA] \stitle{Supplementary materials for ``Comments on the Neyman--Fisher Controversy and its Consequences''} \slink[doi]{10.1214/13-STS454SUPP} \sdatatype{.pdf} \sfilename{sts454\_supp.pdf} \sdescription{The supplementary material contains our reworking of Neyman's calculations, specifically expectations and variances of sample averages, and expectations of sums of squares for RCB and LS designs. These calculations form the basis of all results presented in this article. The supplementary material can be accessed via the following link: \url{http://www.people.fas.harvard.edu/\textasciitilde sabbaghi/sabbaghi\_rubin\_supplement.pdf}.} \end{supplement}
\section{Introduction} Recent data on the basic double-pionic fusion reactions $pn \to d\pi^0\pi^0$ and $pn \to d\pi^+\pi^-$ demonstrate that the so-called ABC effect is tightly correlated with a narrow resonance structure in the total cross section of these reactions \cite{prl2011,MB,isofus}. The ABC effect, denoting a huge low-mass enhancement in the $\pi\pi$ invariant mass spectrum, is observed to occur if the initial nucleons or light nuclei fuse to a bound final nuclear system and if the produced pion pair is isoscalar. The effect has been named after the initials of Abashian, Booth and Crowe, who first observed it in the inclusive measurement of the $pd \to ^3$HeX reaction more than fifty years ago \cite{abc}. The resonance structure with $I(J^P) = 0(3^+)$ \cite{prl2011} observed in the $pn \to d\pi\pi$ total cross section at $\sqrt s \approx$ 2.38 GeV is situated about 80 MeV below $\sqrt s = 2 m_{\Delta}$, the peak position of the conventional $t$-channel $\Delta\Delta$ process, which is also observed in this reaction. The resonance structure has a width of only 70 MeV, which is about three times narrower than that of the conventional process. From the Dalitz plot of the $pn \to d\pi^0\pi^0$ reaction it is concluded that this resonance nevertheless decays via the intermediate $\Delta^+\Delta^0$ system (at least predominantly) into its final $d\pi^0\pi^0$ state. In the $pn \to pp\pi^0\pi^-$ reaction the resonance has been sensed, too \cite{pp0-}, though in this case there is no ABC effect associated with the resonance. In consequence it is no longer called the ABC resonance, but $d^*$ -- adopting the notation of the predicted so-called ``inevitable dibaryon'' \cite{goldman} with identical quantum numbers. By subsequent quasifree polarized $\vec{n}p$ scattering measurements it has been demonstrated that there is a resonance pole in the coupled $^3D_3-^3G_3$ partial waves corresponding to the $d^*$ resonance structure in mass, width and quantum numbers \cite{np,npfull} -- thus supporting its $s$-channel character. If this scenario is correct, then the $np \to np\pi^0\pi^0$ reaction should also be affected by this resonance, since this channel may proceed via the same intermediate $\Delta^0\Delta^+$ system as the $np \to d \pi^0\pi^0$ and $pn \to pp\pi^0\pi^-$ reactions do. From a simple isospin point of view we expect the resonance effect in the $np\pi^0\pi^0$ system to be identical in size to that in the $d\pi^0\pi^0$ system. And from more refined estimates in Refs. \cite{col,oset}, which also account for the different phase space situations, we expect the resonance effect in the $np\pi^0\pi^0$ channel to be about 85$\%$ of that in the $d\pi^0\pi^0$ system. Since the peak resonance cross section in the latter is 270 $\mu$b \cite{isofus} sitting upon some background due to conventional $t$-channel Roper and $\Delta\Delta$ excitations, we estimate the peak resonance contribution in the $np\pi^0\pi^0$ system to be in the order of 200 $\mu$b. \section{Experiment} Since there exist no data at all for the $np \to np\pi^0\pi^0$ channel, we have investigated this reaction experimentally with the WASA detector at COSY (FZ J\"ulich) by using a deuteron beam with an energy of $T_d$~=~2.27~GeV impinging on a hydrogen pellet target \cite{barg,wasa}. By exploiting the quasi-free scattering process $d p \to np\pi^0\pi^0 + p_{spectator}$, we cover the full energy range of the conjectured resonance. 
In addition, the quasi-free process in inverse kinematics gives us the opportunity to also detect the fast spectator proton in the forward detector of WASA. The hardware trigger utilized in this analysis required at least two charged hits in the forward detector as well as two neutral hits in the central detector. The quasi-free reaction $dp \to np \pi^0\pi^0 + p_{spectator}$ has been selected in the offline analysis by requiring two proton tracks in the forward detector as well as four photon hits in the central detector, which can be traced back to the decay of two $\pi^0$ particles. That way the non-measured neutron four-momentum could be reconstructed by a kinematic fit with three over-constraints. A difficulty emerges from deuterons, which originate from the $np \to d\pi^0\pi^0$ reaction and which partly also break up while passing the detector. Since in the $\Delta E-E$ energy loss plots used for particle identification the proton and deuteron bands overlap somewhat, deuterons cannot be separated completely from $np$ pairs stemming from the $np \to np\pi^0\pi^0$ reaction. To suppress such misidentified events we require the angle between emitted neutron and proton to be larger than 5 degrees and also their energies to be in the expected range. Nevertheless a Monte-Carlo (MC) simulation of the $np \to d\pi^0\pi^0$ reaction, which is known in great detail \cite{prl2011}, shows that we still have to expect a contamination of about 5$\%$ in the spectra of the $np \to np\pi^0\pi^0$ reaction. In Figs. 1 - 6 the observables are shown with the MC-generated contamination events already subtracted. In the $pn$ invariant-mass spectrum $M_{pn}$, where the contamination is most pronounced, this concerns only the first two bins (Fig.~3, bottom). In Fig. 1 the measured and acceptance corrected spectator momentum distribution is shown in comparison with a Monte-Carlo (MC) simulation of the quasifree $dp \to np \pi^0\pi^0 + p_{spectator}$ process. Due to the beam pipe, ejectiles can only be detected in the WASA forward detector for lab angles larger than three degrees. The good agreement between data and simulation provides confidence that the data indeed reflect a quasifree process. The constraint for the suppression of breakup events (see above) causes the maximum accepted spectator momentum to be $<$ 0.14 GeV/c, fulfilling the spectator momentum condition used in previous works \cite{prl2011,isofus,np}. This implies that an energy range of 2.35 GeV $\leq \sqrt s \leq$ 2.41 GeV is covered due to the Fermi motion of the nucleons in the deuteron. This energy range corresponds to incident lab energies of 1.07 GeV $< T_n < $ 1.23 GeV. \begin{figure} \centering \includegraphics[width=0.89\columnwidth]{spectator.eps} \caption{\small Efficiency corrected distribution of the spectator proton momenta in the $dp \to np\pi^0\pi^0 + p_{spectator}$ reaction within the WASA acceptance, which allows the detection of the spectator proton only for lab angles larger than three degrees. In addition the constraint for the suppression of breakup events has been applied (see text). Data are given by solid circles. The solid line shows the expected distribution for the quasifree process based on the CD Bonn potential \cite{mach} deuteron wavefunction. For comparison the dashed line gives the pure phase-space distribution as expected for a coherent reaction process. } \label{fig1} \end{figure} In total a sample of about 24000 good events has been selected. 
The requirement that the two protons have to be in the angular range covered by the forward detector and that the gammas resulting from $\pi^0$ decay have to be in the angular range of the central detector reduces the overall acceptance to about 7$\%$. Efficiency and acceptance corrections of the data have been performed by MC simulations of the reaction process and the detector setup. For the MC simulations, model descriptions have been used, which will be discussed in the next section. Since the acceptance is substantially below 100$\%$, the efficiency corrections are not fully model independent. The hatched grey histograms in Figs. 3 - 6 give an estimate for systematic uncertainties due to the use of different models with and without the $d^*$ resonance hypothesis for the efficiency correction. The absolute normalization of the data has been performed by the simultaneous measurement of the quasi-free single pion production process $dp \to pp \pi^0 + n_{spectator}$ and its comparison to previous bubble-chamber results for the $pp \to pp \pi^0$ reaction \cite{shim,eis}. That way the uncertainty in the absolute normalization of our data is essentially that of the previous $pp \to pp \pi^0$ data, {\it i.e.} in the order of 20$\%$. \section{Results and Discussion} In order to determine the energy dependence of the total cross section we have divided our data sample into 10 MeV bins in $\sqrt s$. The resulting total cross sections together with their statistical and systematic uncertainties are listed in Table 1. \begin{table} \caption{Total cross sections obtained in this work for the $np \to np\pi^0\pi^0$ reaction as a function of the center-of-mass energy $\sqrt s$ and the neutron beam energy $T_n$. Systematic uncertainties are given as obtained from MC simulations for the detector performance assuming various models for the reaction process. } \begin{tabular}{llllll} \hline & $\sqrt s$ & $T_n$ &~~~~$\sigma_{tot}$ &~~~~$\Delta\sigma_{stat}$ &~~~~$\Delta\sigma_{sys}$ \\ & [GeV] & [GeV] &~~~~[$\mu$b] &~~~~[$\mu$b] &~~~~[$\mu$b] \\ \hline & 2.35 & 1.075 &~~~~127 &~~~~6 &~~~12 \\ & 2.36 & 1.100 &~~~~192 &~~~~9 &~~~20 \\ & 2.37 & 1.125 &~~~~222 &~~~11 &~~~22 \\ & 2.38 & 1.150 &~~~~269 &~~~13 &~~~27 \\ & 2.39 & 1.176 &~~~~293 &~~~14 &~~~29 \\ & 2.40 & 1.201 &~~~~295 &~~~14 &~~~29 \\ & 2.41 & 1.227 &~~~~272 &~~~13 &~~~27 \\ \hline \end{tabular}\\ \end{table} Fig.~2 exhibits the energy dependence of the total cross section for the $np \to np\pi^0\pi^0$ reaction (right) in comparison to that of the $pp \to pp\pi^0\pi^0$ reaction (left). The previous WASA results \cite{iso,TT} and the ones of this work are given by the full circles. They are compared to previous bubble-chamber measurements from KEK (open circles) \cite{shim} in case of the $pp\pi^0\pi^0$ channel. In case of the $np\pi^0\pi^0$ channel there exist no dedicated data from previous investigations. However, there are some connected data from the PINOT experiment at Saclay, where the inclusive reactions $pp \to \gamma\gamma X$ and $pd \to \gamma\gamma X$ were measured at $T_p$ = 1.3 and 1.5 GeV \cite{Scomparin}. By excluding the two-photon invariant mass regions corresponding to single $\pi^0$ or $\eta$ production the remaining two-photon events populating the combinatorial background are likely to originate from $\pi^0 \pi^0$ production. By using this feature a measure of the ratio of the cross sections $pn \to pn\pi^0\pi^0 + d\pi^0\pi^0$ to $pp \to pp\pi^0\pi^0$ has been obtained. 
This leads to a crude estimate that the $pn \to pn\pi^0\pi^0$ cross section is larger than the $pp \to pp\pi^0\pi^0$ cross section by roughly a factor of two -- in qualitative support of our results from the exclusive measurements \cite{Colin}. \begin{figure*} [t] \begin{center} \includegraphics[width=0.99\textwidth]{2pi0_hc2.eps} \caption{(Color online) Total cross sections for the reactions $pp \to pp\pi^0\pi^0$ (left) and $np \to np\pi^0\pi^0$ (right). The results of this work are shown by the full circles in the right figure. Previous WASA results on the $pp\pi^0\pi^0$ channel are shown by full circles \cite{iso} and full square \cite{TT}, respectively, in the left figure, and previous bubble-chamber measurements from KEK \cite{shim} by open circles. The modified Valencia model calculation is shown by the solid lines. The dash-dotted curve shows the result if the $s$-channel $d^*$ resonance amplitude is added. The $d^*$ contribution itself is given by the dotted curve. } \label{fig2} \end{center} \end{figure*} In Fig.~2 we compare the data to theoretical calculations in the framework of the Valencia model \cite{luis}, which incorporates both non-resonant and resonant $t$-channel processes for two-pion production in $NN$ collisions. The $t$-channel resonance processes of interest here concern first of all the excitation of the Roper resonance and its subsequent decay either directly into the $N\pi\pi$ system or via the $\Delta\pi$ system as well as the excitation and decay of the $\Delta\Delta$ system. Deviating from the original Valencia calculations \cite{luis}, the present calculations have been tuned to describe quantitatively the isovector two-pion production reactions $pp \to NN\pi\pi$ \cite{iso}, in particular the $pp\pi^0\pi^0$ \cite{deldel} and $nn\pi^+\pi^+$ \cite{nnpipi} channels by the following modifications: \begin{itemize} \item relativistic corrections for the $\Delta$ propagator as given by Ref.~\cite{ris}, \item strongly reduced $\rho$-exchange contribution in the $t$-channel $\Delta\Delta$ process -- in agreement with calculations from Ref.~\cite{xu}, \item reduction of the $N^* \to \Delta\pi$ amplitude by a factor of two in agreement with the analysis of photon- and pion-induced pion production on the nucleon \cite{boga} and in agreement with $pp \to pp\pi^0\pi^0$ and $pp \to pp\pi^+\pi^-$ measurements close to threshold \cite{WB,JP,ae,Roper} as well as readjustment of the total Roper excitation according to the results of the isospin decomposition of the $pp \to NN\pi\pi$ cross sections \cite{iso}, \item inclusion of the $t$-channel excitation of the $\Delta(1600)P_{33}$ resonance. \end{itemize} The latter modification was necessary in order to account for the unexpectedly large $pp \to nn\pi^+\pi^+$ cross section \cite{nnpipi}. The predictive power of these modifications has been demonstrated by their successful applications to the recent $pp \to pp\pi^0\pi^0$ data at $T_p$ = 1.4 GeV \cite{TT} and to the $pn \to pp\pi^0\pi^-$ reaction \cite{pp0-}. Final state interaction (FSI) in the emitted $NN$ system has been taken into account in the Migdal-Watson \cite{migdal,watson} factorized form. The $NN$ FSI is by far strongest in the isovector $^1S_0$ $pn$ state and less strong in $^1S_0$ $pp$ and $^3S_1$ $pn$ states as apparent from the scattering lengths in these systems. At energies above 1 GeV the $t$-channel $\Delta\Delta$ process is the dominating one. 
Isospin decomposition of its contribution to the total $np \to np\pi^0\pi^0$ cross section \cite{dakhno,bys,iso} shows that in this process the $^1S_0$ final state is much less populated than the isoscalar $^3S_1$ state. The situation is somewhat different in the near-threshold region, where the Roper excitation process dominates. In this process equal amounts of $pn$ pairs are emitted in $^1S_0$ and $^3S_1$ states. Since the modified Valencia calculations have been tuned to the $pp \to pp\pi^0\pi^0$ reaction, it is no surprise that its total cross section is fairly well described -- see Fig. 2, left. For the closely related $np \to np\pi^0\pi^0$ reaction the calculations predict a similar energy dependence, but an absolute cross section that is larger by roughly a factor of two -- whereas the data are larger by more than an order of magnitude -- see Fig. 2, right. As an independent check of these calculations we may perform an isospin decomposition of cross sections using the formulas given in Refs. \cite{dakhno,bys} and the matrix elements deduced from the analysis of the $pp$ induced two-pion production \cite{iso}. As a result of such an exercise we get agreement with the modified Valencia calculation within roughly 30$\%$. As we see from Fig.~2, the experimental cross sections obtained in this work for the $np \to np\pi^0\pi^0$ reaction are three to four times larger than predicted. This failure points to an important reaction component not included in the $t$-channel treatment of two-pion production. It is intriguing that we deal here with the energy region where the $d^*$ resonance has been observed both in $np$ scattering \cite{np} and in the isoscalar part of the double-pionic fusion to deuterium \cite{prl2011,isofus}. Also it has been shown that the description of the $pn \to pp\pi^0\pi^-$ cross section improves greatly in this energy region if this resonance is included \cite{pp0-}. Hence we also add here the amplitude of this resonance to the conventional amplitude. According to the predictions of F\"aldt and Wilkin \cite{col} as well as Albaladejo and Oset \cite{oset}, its contribution at the resonance maximum should be about 200 $\mu$b (dotted curve in Fig.~2) as discussed in the introduction. It is amazing how well the resulting curve (dash-dotted line in Fig.~2) describes the data. Of course, it is a pity that no data exist outside the energy region covered by our measurement. In particular at energies below 1 GeV and above 1.3 GeV, {\it i.e.} outside the resonance region, such data would be very helpful for examining experimentally the reliability of the predictions for the $t$-channel contributions. When binned into $\sqrt s$ bins of 10 MeV the different distributions do not exhibit any particular energy dependence in their shapes -- which is of no surprise, since the energy region covered in this measurement is dominated by the $d^*$ resonance as evident from the discussion of the total cross section. Hence we refrain from showing the differential distributions for single $\sqrt s$ bins. We rather show them unbinned, {\it i.e.}, averaged over the full energy range of the measurement, which has the advantage of better statistics and smaller systematic uncertainties. For a four-body final state there are seven independent differential observables. 
We choose to show in this paper the differential distributions for the invariant masses $M_{\pi^0\pi^0}$, $M_{pn}$, $M_{p\pi^0}$, $M_{n\pi^0}$, $M_{n\pi^0\pi^0}$ and $M_{pn\pi^0}$ as well as the differential distributions for the center-of-mass (cm) angles for protons and pions, namely $\Theta_p^{c.m.}$ and $\Theta_{\pi^0}^{c.m.}$. These distributions are shown in Figs. 3 - 6. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{inv2pi0.eps} \includegraphics[width=0.4\textwidth]{invpn.eps} \caption{(Color online) Top: distribution of the $\pi^0\pi^0$ invariant mass $M_{\pi^0\pi^0}$ for the $pn \to np\pi^0\pi^0$ reaction at $T_n$ = 1.135 GeV. Since the data are shown without separation into $\sqrt s$ bins, they correspond to the average over the energy region covered by the quasifree collision process, which is 2.35 GeV $ < \sqrt s <$ 2.41 GeV (1.07 GeV $< T_n <$ 1.23 GeV). Filled circles represent the experimental results of this work. The hatched histograms give estimated systematic uncertainties due to the incomplete coverage of the solid angle. The shaded areas denote phase space distributions. The solid lines are calculations with the modified Valencia model. The dashed (dash-dotted) lines show the result if the $d^*$ resonance amplitude with (without) inclusion of the $\Delta\Delta$ vertex function \cite{prl2011} is added. All calculations are normalized in area to the data. Bottom: the same as at the top, but for the $pn$ invariant mass $M_{pn}$. } \label{fig3} \end{center} \end{figure} \begin{figure} [t] \begin{center} \includegraphics[width=0.4\textwidth]{invppi0.eps} \includegraphics[width=0.4\textwidth]{invnpi0.eps} \caption{(Color online) Same as Fig. 3 but for the distributions of the invariant masses $M_{p\pi^0}$ (top) and $M_{n\pi^0}$ (bottom). } \label{fig4} \end{center} \end{figure} \begin{figure} [t] \begin{center} \includegraphics[width=0.4\textwidth]{invn2pi0.eps} \includegraphics[width=0.4\textwidth]{invpnpi0.eps} \caption{(Color online) Same as Fig. 3 but for the distributions of the invariant masses $M_{n\pi^0\pi^0}$ (top) and $M_{pn\pi^0}$ (bottom). } \label{fig5} \end{center} \end{figure} \begin{figure} [t] \begin{center} \includegraphics[width=0.4\textwidth]{cosp.eps} \includegraphics[width=0.4\textwidth]{cospi0.eps} \caption{(Color online) Same as Fig. 3 but for the distributions of the cm angles $\Theta_p^{c.m.}$ (top) and $\Theta_{\pi^0}^{c.m.}$ (bottom). } \label{fig6} \end{center} \end{figure} All measured differential distributions are markedly different in shape from pure phase space distributions (shaded areas in Figs. 3 - 6), but close to the predictions both with (dashed and dash-dotted lines) and without (solid lines) inclusion of the $d^*$ resonance. The invariant mass spectra for $M_{p\pi^0}$, $M_{n\pi^0}$, $M_{n\pi^0\pi^0}$ and $M_{pn\pi^0}$ (Figs. 4 - 5) are characterized by $\Delta$ and $N\Delta$ dynamics as they naturally appear in the deexcitation process of an intermediate $\Delta\Delta$ system created either by $d^*$ decay or via $t$-channel meson exchange. The pion angular distribution (Fig. 6) behaves as expected from the $p$-wave decay of the $\Delta$ resonance. The proton angular distribution is similarly curved. Both $t$-channel meson exchange and the $J^P = 3^+$ requirement for $d^*$ formation predict comparable shapes in agreement with the data. The $M_{pn}$ and $M_{\pi^0\pi^0}$ spectra (Fig. 3) need a more thorough discussion. 
The data of the $M_{\pi^0\pi^0}$ spectrum appear to be quite well described by the calculations, which hardly deviate from each other. At small invariant masses though, in the range 0.3 - 0.4 GeV/c$^2$, there is an indication of a small surplus of strength. Given the uncertainties inherent in the data and in the theoretical description, these deviations do not appear to be particularly significant. Therefore, if this constitutes a sign of the ABC effect, then it is obviously very small in this reaction. Note that contrary to the situation in the $pn \to pp\pi^0\pi^-$ reaction, where the pion pair has to be in relative $p$-wave and hence the ABC effect is absent, the pion pair here is preferentially in relative $s$-wave, thus allowing, in principle, the occurrence of the ABC effect. Hence, the finding that there is no or nearly no ABC effect comes as a surprise at least for some of its interpretations. This finding is of no surprise if the ABC effect is described by a formfactor at the $\Delta\Delta$ vertex of the $d^*$ decay \cite{prl2011}. However, then a problem arises with the description of the $M_{pn}$ spectrum, as we discuss in the following. The $M_{pn}$ spectrum peaks sharply at its low-mass threshold, which is characteristic for a strong $np$ FSI as discussed above. This low-mass peaking is well accounted for by the modified Valencia calculations (solid lines in Figs. 3 - 6). Inclusion of the $d^*$ resonance as outlined in Ref. \cite{prl2011} (dashed lines) exaggerates the low-mass peaking, thus deteriorating the agreement with the data. The reason for this behavior is the formfactor at the $\Delta\Delta$ decay vertex of $d^*$ introduced in Ref. \cite{prl2011} for the description of the ABC effect, {\it i.e.} the low-mass enhancement in the $M_{(\pi\pi)^0}$ spectra observed in double-pionic fusion reactions. However, as already pointed out in Ref.~\cite{pp0-}, this formfactor acts only on the $M_{\pi^0\pi^0}$ and $M_{\pi^+\pi^-}$ spectra if the nucleon pair is bound in a final nuclear system. If this is not the case, then the formfactor acts predominantly on the invariant-mass spectrum of the nucleon pair. This is illustrated by comparison of the calculations including $d^*$ with (dashed) and without (dash-dotted) this formfactor. As we see, the formfactor hardly changes the $M_{\pi^0\pi^0}$ distribution, but shuffles substantial strength in the $M_{pn}$ spectrum to low masses -- thus overshooting the observed low-mass enhancement. This finding indicates that the formfactor introduced in Ref.~\cite{prl2011} on purely phenomenological grounds for the description of the ABC effect is possibly at variance with the data for isoscalar two-pion production in non-fusion channels. Hence alternative solutions for this phenomenon may have to be looked for, such as $d$-wave contributions in the intermediate $\Delta\Delta$ system and/or the final nucleon pair \cite{Zhang,Huang}. Another alternative involving $d$-waves has been proposed recently by Platonova and Kukulin \cite{kuk}. 
In their ansatz they assume the $d^*$ resonance not only to decay into the $d\pi^0\pi^0$ channel via the route $d^* \to \Delta^+\Delta^0 \to d\pi^0\pi^0$ \footnote{actually they consider the decay $d^* \to D_{12}^{+}\pi^0 \to d\pi^0\pi^0$ with $D_{12}^{+}$ being an $I(J^P) = 1(2^+)$ state near the $N\Delta$ threshold, but since the pion emitted in the $d^*$ decay is in relative $p$-wave to $D_{12}$, this route is practically indistinguishable from a $d^* \to \Delta^+\Delta^0$ decay under the given kinematic conditions}, but also via the route $d^* \to d\sigma \to d\pi^0\pi^0$. Since $\sigma$ is a spin zero object, it has to be in relative $d$-wave to the deuteron in this decay process, in order to satisfy the resonance condition of $J^P~=~3^+$. In consequence the available momentum in this decay process is concentrated in the relative motion between $d$ and $\sigma$, thus leaving only small relative momenta between the two emerging pions. Therefore the $M_{\pi^0\pi^0}$ distribution is expected to be peaked at low masses -- {\it i.e.}, the low-mass enhancement (ABC effect) in this model is generated by the $d\sigma$ decay branch (in the amount of about 5$\%$) and not by a formfactor as introduced in Ref. \cite{prl2011}. The enhancement in this model is further increased by interference of the $d\sigma$ decay amplitude with the decay amplitude via the $\Delta^+\Delta^0$ system. It appears straightforward to extend this ansatz also to reaction channels where the $np$ system is unbound. However, since we hardly observe a low-mass enhancement (ABC effect) in the $M_{\pi^0\pi^0}$ spectrum, much less $d^* \to d\sigma$ contribution is needed here than in the $pn \to d\pi^0\pi^0$ reaction -- which possibly poses a consistency problem for this ansatz \cite{kuk}. Another point of concern with this ansatz is that mass and width of the sigma meson have been fitted to the $pn \to d\pi^0\pi^0$ data in Ref. \cite{kuk} with the result that $m_{\sigma} \approx$ 300 MeV and $\Gamma_{\sigma} \approx$ 100 MeV. Both values are much smaller than the generally accepted values for the sigma meson \cite{PDG}, which are $m_{\sigma}$ = (400 - 550) MeV and $\Gamma_{\sigma}$ = (400 - 700) MeV. In Ref.~\cite{kuk} it has been argued that these deviations could be a sign of chiral restoration in the hadronic/nuclear environment -- in particular within the six-quark bag. However, any evidence for this hypothesis from other experiments is lacking so far. Whether the enhanced ABC effect observed in the double-pionic fusion to $^4$He \cite{AP} is in support of such an argument is an open question. \section{Conclusions} The $np \to np\pi^0\pi^0$ reaction, for which no dedicated previous data exist, has been investigated by exclusive and kinematically complete measurements. They have been carried out in quasifree kinematics with a deuteron beam impinging on a hydrogen pellet target. Utilizing the nucleons' Fermi motion in the deuteron projectile, an energy region of 2.35 GeV $< \sqrt s <$ 2.41 GeV could be covered, corresponding to an incident lab energy range of 1.07 - 1.23 GeV. This energy region covers the region of the $d^*$ resonance. The data are in agreement with a resonance contribution of about 200 $\mu$b, as predicted by F\"aldt and Wilkin \cite{col} as well as by Albaladejo and Oset \cite{oset}. In general, the differential data are reasonably well described by calculations, which include both the $d^*$ resonance and the conventional $t$-channel processes. 
The data indicate only a very small low-mass enhancement (ABC effect) in the $\pi^0\pi^0$-invariant mass distribution. Though this is not in disagreement with the phenomenological ansatz of a formfactor at the $d^* \to \Delta\Delta$ decay vertex introduced in Ref.~\cite{prl2011}, the worsening of the description of the $M_{pn}$ spectrum by use of this formfactor possibly calls for an improved explanation of the ABC effect in connection with the $d^*$ resonance. After having found evidence for the $d^*$ resonance in the $d\pi^0\pi^0$, $d\pi^+\pi^-$ and $pp\pi^0\pi^-$ channels, the channel investigated here has been one of the two remaining two-pion production channels in which the predicted contributions of the $d^*$ resonance had not yet been checked experimentally. As we have shown now, the data for the $np\pi^0\pi^0$ channel are consistent with the $d^*$ hypothesis and provide an experimentally determined branching for the $d^*$ decay into this channel. Since $d^*$ has meanwhile also been observed in the elastic channel by polarized $\vec{n}p$ scattering, the only remaining unexplored decay channel is $np\pi^+\pi^-$. This channel has been measured recently at HADES and preliminary results have already been presented at conferences \cite{kuril,agaki,prag}. It will be highly interesting to obtain not only total cross sections for this channel, but also differential distributions. Of particular interest will be the $M_{pn}$ and $M_{\pi^+\pi^-}$ distributions as discussed in this work. \section{Acknowledgments} We acknowledge valuable discussions with V. Kukulin, E. Oset and C. Wilkin on this issue. We are particularly indebted to L. Alvarez-Ruso for allowing us to use his code. This work has been supported by Forschungszentrum J\"ulich (COSY-FFE), DFG, the Foundation for Polish Science through the MPD programme and by the Polish National Science Centre through the Grants No. 2011/01/B/ST2/00431 and 2013/11/N/ST2/04152.
\section{Introduction} \label{sec:intro} For a rational polytope $\mathcal{P} \subseteq \mathbb{R}^n$ of dimension $d$, consider the counting function $\mathcal{L}_{\polyp}(m) = |m\mathcal{P} \cap \mathbb{Z}^n|$, where $m\mathcal{P}$ is the $m$-th dilate of $\mathcal{P}$. The \emph{Ehrhart series} of $\mathcal{P}$ is \[ E_{\mathcal{P}}(t) := 1 + \sum_{m\in \mathbb{Z}_{\geq 1}} \mathcal{L}_{\polyp}(m)t^m \, . \] Let $\mathrm{den~}\mathcal{P}$ denote the least common multiple of the denominators appearing in the coordinates of the vertices of $\mathcal{P}$. Combining two well-known theorems due to Ehrhart \cite{Ehrhart} and Stanley \cite{StanleyDecompositions}, there exist values $h_0^*,\ldots,h_k^*\in \mathbb{Z}_{\geq 0}$ with $h_0^*=1$ such that \[ E_\mathcal{P}(t)=\frac{\sum_{j=0}^kh_j^*t^j}{(1-t^{\mathrm{den~}\mathcal{P}})^{d+1}} \, . \] We say the polynomial $h^*_\mathcal{P}(t):=\sum_{j=0}^kh_j^*t^j$ is the \emph{$h^*$-polynomial} of $\mathcal{P}$ (sometimes referred to as the $\delta$-polynomial of $\mathcal{P}$) and the vector of coefficients $h^*(\mathcal{P})$ is the \emph{$h^*$-vector} of $\mathcal{P}$. That $E_\mathcal{P}(t)$ is of this rational form is equivalent to $|m\mathcal{P}\cap \mathbb{Z}^n|$ being a quasipolynomial function of $m$ of degree at most $d$; the non-negativity of the $h^*$-vector is an even stronger property. If $\mathrm{den~}\mathcal{P} \neq 1$ then the form of $E_\mathcal{P}(t)$ above may not be fully reduced, yet we still refer to the coefficients of this form when discussing $h^*(\mathcal{P})$. Even more tools are available when $\mathcal{P}$ is a lattice polytope, that is, when its vertices are integral. Recent work has focused on determining when $h^*(\mathcal{P})$ is unimodal, that is, when there exists some $j$ for which $h_0^* \leq \cdots \leq h_j^* \geq \cdots \geq h_k^*$. The specific sequence in question may not be of particular interest, but unimodal behavior often suggests an underlying structure that may not be immediately apparent. Thus, the proofs of various $h^*$-vectors being unimodal are often more enlightening than the sequences themselves. There are a number of approaches possible for proving unimodality, taken from fields such as Lie theory, algebraic statistics, and others \cite{stanleylogconcave}. In this paper, we consider a variation of the Birkhoff polytope, which is defined as follows. \begin{definition} The {\em Birkhoff polytope} is the set of $n\times n$ matrices with real nonnegative entries such that each row and column sum is 1. \end{definition} We denote this polytope by $B_n$ and note that it is also often referred to as the polytope of real $n\times n$ doubly-stochastic matrices or the polytope of $n\times n$ magic squares. The vertex description of $B_n$ is due to the Birkhoff-von Neumann theorem, which finds that $B_n$ is the convex hull of the permutation matrices. The $h^*$-vector of the Birkhoff polytope is difficult to compute in general, and is known only for $n \leq 9$; its volume only for $n \leq 10$ \cite{beckpixton}. As limited as the data is, it has still been shown that $h^*(B_n)$ is symmetric as well as unimodal \cite{athanasiadisbirkhoff, stanley1973, stanley1976}. On the other hand, little is known about the polytope $\Sigma_n$ obtained by intersecting $B_n$ with the hyperplanes $x_{ij} = x_{ji}$ for all $i, j$, that is, by requiring the corresponding matrices to be symmetric. Nothing is new when $n\leq2$, but complications arise once $n \geq 3$ since the vertices of $\Sigma_n$ are no longer always integral. 
They are contained in the set \[ L_n = \left\{ \frac{1}{2}(P + P^T) | P \in \mathbb{R}^{n\times n} \mathrm{~is~a~permutation~matrix}\right\}, \] but $L_n$ is not necessarily equal to the vertex set of $\Sigma_n$. A description of the vertices and a generating function for the number of them can be found in \cite{StanleyVol2}. In \cite{StanleyGreenBook}, Stanley shows that the dimension of $\Sigma_n$ is ${n \choose 2}$ (whereas the dimension of $B_n$ is $(n-1)^2$); he also shows that the $h^*$-vector of $\Sigma_n$ is symmetric and in \cite{StanleyVol1Ed2} computes $E_{\Sigma_n}(t)$ in a reduced form for some small $n$, but it is still unknown whether the $h^*$-vector is unimodal in this case. \begin{definition} Denote by $S_n$ the polytope consisting of all real $n\times n$ symmetric matrices with nonnegative entries such that every row and column sum is 2. That is, $S_n$ is the dilation of $\Sigma_n$ by two. \end{definition} Fortunately, some information about $\Sigma_n$ (such as dimension) is retained by $S_n$, a polytope that is combinatorially equivalent but with integral vertices. The main purpose of this paper is to examine what happens when trying to prove that $h^*(S_n)$ is unimodal by adapting the techniques used to prove that $h^*(B_n)$ is unimodal. Several key ingredients translate nicely to the context of $S_n$, but mysteries remain when examining its toric ideal and certain Gr\"obner bases of it, notions that will be made more precise in Section~\ref{sec:triangulations}. In this direction, we will show the following. \begin{theorem}\label{thm:mainthm} For all $n$, let $I_{S_n}$ denote the toric ideal of $S_n$. The following properties hold: \begin{enumerate} \item For any term ordering, every element of the reduced Gr\"obner basis $\mathscr{G}$ of $I_{S_n}$ with respect to this order is a binomial, one monomial of which is squarefree. \item For any term ordering, every variable in $I_{S_n}$ appears in a degree-two binomial in $\mathscr{G}$. \item There exists a class of term orders $\prec_{S_n}$ for which the initial term of each degree-two binomial in $\mathscr{G}$ is squarefree. \item For the term orders $\prec_{S_n}$, the initial term $\mathrm{in}_{\prec_{S_n}}(g)$ of each $g \in \mathscr{G}$ is cubefree, that is, $\mathrm{in}_{\prec_{S_n}}(g)$ is not divisible by $t_i^3$ for any variable $t_i$ appearing in $g$. \end{enumerate} \end{theorem} \section{Basic Properties, Symmetry, and Integral Closure} \label{sec:basics} Although relatively little has been established about the Ehrhart theory of $S_n$, it has still been studied and some basic information is known. For $\Sigma_n$, the degrees of the constituent polynomials of its Ehrhart quasipolynomial are known. \begin{theorem}[Theorem 8.1, \cite{Jia}]\label{jia} The Ehrhart quasipolynomial of $\Sigma_n$ is of the form $f_n(t) + (-1)^tg_n(t)$, where $\deg f_n(t) = {n \choose 2}$ and \[ \deg g_n(t) = \left\{ \begin{array}{ll} {n - 1 \choose 2} - 1 & \mathrm{~if~} n \mathrm{~odd}\\ {n - 2 \choose 2} - 1 & \mathrm{~if~} n \mathrm{~even} \end{array} \right. . \] \end{theorem} Stanley first proved that the above degrees are upper bounds and conjectured equality \cite{stanley1976}, and the conjecture was proven using analytic methods. These degrees provide an upper bound on the degree of $h^*_{\Sigma_n}(t)$; we will provide exact degrees later. 
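For very small $n$, the counting function $\mathcal{L}_{S_n}(m)$ can be checked directly by enumerating the lattice points of $mS_n$, i.e., the symmetric nonnegative integer matrices whose line sums all equal $2m$. The following brute-force sketch (in Python; purely illustrative, and feasible only for very small $n$ and $m$ -- in practice one would use generating-function methods instead) chooses the off-diagonal entries freely and lets them determine the diagonal:

\begin{verbatim}
from itertools import product

def ehrhart_count(n, m):
    """Number of lattice points of m*S_n: symmetric nonnegative integer
    matrices whose row (= column) sums all equal 2m."""
    s = 2 * m
    off = [(i, j) for i in range(n) for j in range(i + 1, n)]
    count = 0
    for vals in product(range(s + 1), repeat=len(off)):
        row_off = [0] * n          # off-diagonal contribution to each row sum
        for (i, j), v in zip(off, vals):
            row_off[i] += v
            row_off[j] += v
        # the diagonal entries are then forced: x_ii = 2m - row_off[i]
        if all(r <= s for r in row_off):
            count += 1
    return count

print([ehrhart_count(3, m) for m in range(5)])
\end{verbatim}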
Since the Ehrhart series of $S_n$, as a formal power series, is built from the even-degree terms of $E_{\Sigma_n}(t)$ (because $\mathcal{L}_{S_n}(m) = \mathcal{L}_{\Sigma_n}(2m)$), we get $\mathcal{L}_{S_n}(t) = f_n(2t) + g_n(2t)$. The defining inequalities of our polytopes will be helpful in some contexts. For $S_n$, these are \begin{eqnarray*} x_{ij} &\geq& 0 \mathrm{~for~all~} 1 \leq i \leq j \leq n,\\ x_{ij} &=& x_{ji} \mathrm{~for~all~} 1 \leq i < j \leq n,\\ \sum_{i=1}^n x_{ij} &=& 2 \mathrm{~for~each~} j = 1, \ldots, n. \end{eqnarray*} The first set of inequalities indicates that the facet-defining supporting hyperplanes of $S_n$ are $x_{ij} = 0$: if any of these are disregarded, the solution set strictly increases in size. \begin{definition} A lattice polytope $\mathcal{P} \subseteq \mathbb{R}^n$ is called {\em integrally closed} if, for every $v \in m\mathcal{P} \cap \mathbb{Z}^n$, there are $m$ points $v_1, \ldots, v_m \in \mathcal{P} \cap \mathbb{Z}^n$ such that $v = v_1 + \cdots + v_m$. \end{definition} This idea is not to be confused with that of a normal polytope, in which we instead choose $v$ from $m\mathcal{P} \cap (my + N)$ for an appropriate choice of $y \in \mathcal{P}\cap\mathbb{Z}^n$, where $N$ is the lattice \[ N = \sum_{z_1,z_2 \in \mathcal{P} \cap \mathbb{Z}^n} \mathbb{Z}(z_1 - z_2) \subseteq \mathbb{Z}^n. \] In particular, every integrally closed polytope is normal, but not every normal polytope is integrally closed. There is more discussion of this difference in \cite{gubeladzeconvexnormality}. It is currently an open problem to determine whether integrally closed polytopes have unimodal $h^*$-vectors. This is unknown even in highly restricted cases, such as if the polytope is reflexive, a simplex, or even both. The last case is explored more in \cite{BraunDavis}. We first would like to prove that $S_n$ is integrally closed. To do so, we must interpret the lattice points of $S_n$ as certain adjacency matrices of graphs. \begin{proposition}\label{intclosed} For all $n$, $S_n$ is integrally closed. \end{proposition} \begin{proof} This can be seen as a corollary of Petersen's 2-factor theorem. For any $m \in \mathbb{Z}_{\geq 0}$, each lattice point $X = (x_{ij}) \in mS_n$ can be interpreted as the adjacency matrix of an undirected $2m$-regular multigraph $G_X$ on distinct vertices $v_1,\ldots,v_n$, with loops having degree 1. We first observe that the total number of loops will be even: if there were an odd number of loops, consider the graph with the loops removed. The sum of degrees of the vertices in the resulting graph would be odd, which is an impossibility. Denote by $V_{odd}(G_X)$ the vertices of $G_X$ with an odd number of loops, and write $|V_{odd}(G_X)| = 2mt + s$, where $t,s$ are nonnegative integers and $s < 2m$. Note in particular that $s$ will be even. Construct a new graph $G_Y$ with vertex set $V(G_Y) = \{v_1, \ldots,v_n,w_0,w_1,\ldots,w_t\}$ with the same edges as in $G_X$ with the following modifications: \begin{enumerate} \item For each $v_i \notin V_{odd}(G_X)$, $v_i$ will have $\frac{1}{2}x_{ii}$ loops in $G_Y$. \item For each $v_i \in V_{odd}(G_X)$, $v_i$ will have $\frac{1}{2}(x_{ii} - 1)$ loops and an edge between $v_i$ and the lowest-indexed $w_j$ such that $\deg w_j < 2m$. \item Vertex $w_t$ will have $\frac{1}{2}(2m - s)$ loops. \end{enumerate} This new graph will be $2m$-regular, now counting loops as degree 2. Thus, by Petersen's 2-factorization theorem, $G_Y$ can be decomposed into 2-factors. 
Hence the matrix $Y$ corresponding to $G_Y$ will decompose as the sum of $Y_1,\ldots, Y_m$, each summand a lattice point of $mS_{n+t+1}$. Now we must ``undo'' the changes we made to $G_X$ to obtain the desired sum. Index the rows and columns by $\{v_1, \ldots,v_n,w_0,w_1,\ldots,w_t\}$. Each edge $v_iw_j$ will appear in some $Y_k$ as a 1 in positions $(v_i,w_j)$ and $(w_j,v_i)$. Replace these entries with 0 and add 1 to entry $(v_i,v_i)$. Denote by $X_k$ the submatrix of $Y_k$ consisting of rows and columns indexed by $v_1, \ldots, v_n$ after any appropriate replacements have been made. Each replacement preserves the sum of row/column $v_i$, and applying this to each $Y_k$ leaves any entry $(v_i,w_j)$ as 0, so each $X_k$ is a lattice point of $S_n$. Thus $X = \sum X_k$, as desired. \end{proof} A second useful ingredient in proving that $h^*(B_n)$ is unimodal is proving that it has the following property. \begin{definition} For a lattice polytope $\mathcal{P} \subseteq \mathbb{R}^n$, denote by $k[\mathcal{P}]$ the semigroup algebra \[ k[\mathcal{P}] := k[x^az^m | a \in m\mathcal{P} \cap \mathbb{Z}^{n+1}] \subseteq k[x_1^{\pm 1}, \ldots, x_n^{\pm 1}, z]. \] Then $\mathcal{P}$ is called {\em Gorenstein} if $k[\mathcal{P}]$ is Gorenstein. More specifically, $\mathcal{P}$ is {\em Gorenstein of index $r$} if there exists a monomial $x^cz^r$ for which \[ k[\mathcal{P}^{\circ}] \cong (x^cz^r)k[\mathcal{P}]. \] \end{definition} Having the hyperplane description of a polytope can make it easier to determine if it is Gorenstein, as evidenced by the following lemma. \begin{lemma}[Lemma 2(iii), \cite{BrunsRomer}]\label{facetdistance} Suppose $\mathcal{P}$ has irredundant supporting hyperplanes $l_1,\ldots, l_s \geq 0$, where the coefficients of each $l_i$ are relatively prime integers. Then $\mathcal{P}$ is Gorenstein (of index $r$) if and only if there is some $c \in r\mathcal{P} \cap \mathbb{Z}^n$ for which $l_i(c) = 1$ for all $i$. \end{lemma} Generally, proving the unimodality of an $h^*$-vector is a challenging task. There are more techniques available, though, if we have a Gorenstein polytope, that is, if the semigroup algebra $k[\mathcal{P}]$ is Gorenstein. A closely related class of polytopes is the following. \begin{definition} A lattice polytope $\mathcal{P}$ is called {\em reflexive} if $0 \in \mathcal{P}^{\circ}$, that is, $0$ is in the interior of $\mathcal{P}$, and its {\em (polar) dual} \[ \mathcal{P}^{\Delta} := \{y \in \mathbb{R}^n : x \cdot y \leq 1 \mathrm{~for~all~} x \in \mathcal{P}\} \] is also a lattice polytope. A lattice translate of a reflexive polytope is also called reflexive. \end{definition} It was proven by Hibi \cite{HibiDualPolytopes} that reflexive polytope are exactly the Gorenstein polytopes of index 1. This connection has been used to reduce questions about integrally closed Gorenstein polytopes to questions about only the integrally closed reflexive polytopes, as in the following statement. \begin{lemma}[Corollary 7, \cite{BrunsRomer}]\label{reflexive} Suppose $\mathcal{P} \subseteq \mathbb{R}^n$ is a full-dimensional integrally closed Gorenstein polytope with supporting hyperplanes $l_1,\ldots,l_s$ as in Lemma~\ref{facetdistance}. Consider lattice points $v_0,\ldots,v_k$ of $\mathcal{P}$. If these points form a $k$-dimensional simplex and $l_i(v_0 + \cdots + v_k) = 1$ for each $i$, then $\mathcal{P}$ projects to an integrally closed reflexive polytope $\mathcal{Q}$ of dimension $n - k$ with equal $h^*$-vector. 
\end{lemma}
\begin{theorem}\label{neven}
$S_n$ is Gorenstein if and only if $n$ is even. When $n=2k$, $S_n$ is Gorenstein of index $k$, and $h^*(S_n)$ is the $h^*$-vector of a reflexive polytope of dimension $2k^2-2k+1$. Hence, $\deg h^*_{S_n}(t) = 2k^2 - 2k +1$.
\end{theorem}
\begin{proof}
By Lemma~\ref{facetdistance} and the facet description of $S_n$, the polytope is Gorenstein (of index $r$) exactly when the all-ones matrix lies in $rS_n$, and to verify this it suffices to exhibit $r$ lattice points of $S_n$ whose sum is the all-ones matrix. When $n$ is odd, this is impossible: the all-ones matrix has odd line sums, whereas any sum of matrices in $S_n$ has even line sums.
Let $n=2k$. For each $i \in \{1, 2, \ldots, k-1\}$, construct a matrix
\[
\begin{pmatrix}
a_0 & a_{n-1} & a_{n-2} & \cdots & a_2 & a_1 \\
a_{n-1} & a_0 & a_{n-1} & \cdots & a_3 & a_2 \\
a_{n-2} & a_{n-1} & a_0 & \cdots & a_4 & a_3 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
a_2 & a_3 & a_4 & \cdots & a_0 & a_{n-1} \\
a_1 & a_2 & a_3 & \cdots & a_{n-1} & a_0 \\
\end{pmatrix}
\]
by setting $a_i = a_{n-i} = 1$ and $a_j = 0$ for all $j \neq i, n-i$. Construct one additional matrix by setting $a_0 = a_k = 1$ and $a_j = 0$ for all $j \neq 0, k$. Each of these $k$ matrices is symmetric, they have pairwise disjoint support by construction, and their sum is the all-ones matrix. These are therefore the vertices of a simplex of dimension $k-1$, and Lemma~\ref{reflexive} provides the reflexivity result.
\end{proof}
Note that this is not the only class of simplices satisfying the conditions of Lemma~\ref{reflexive} contained in $S_n$ for even $n$; others may be found. It may be interesting to ask how many such distinct simplices in $S_n$ exist.
\begin{example}
For $n=6$, we construct the special simplex described above. It has three vertices, which are
\[
\begin{pmatrix}
0 & 1 & 0 & 0 & 0 & 1\\
1 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 0 & 1 & 0 & 0\\
0 & 0 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 1 & 0 & 1\\
1 & 0 & 0 & 0 & 1 & 0\\
\end{pmatrix},
\begin{pmatrix}
0 & 0 & 1 & 0 & 1 & 0\\
0 & 0 & 0 & 1 & 0 & 1\\
1 & 0 & 0 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 1\\
1 & 0 & 1 & 0 & 0 & 0\\
0 & 1 & 0 & 1 & 0 & 0\\
\end{pmatrix},
\begin{pmatrix}
1 & 0 & 0 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & 1 & 0\\
0 & 0 & 1 & 0 & 0 & 1\\
1 & 0 & 0 & 1 & 0 & 0\\
0 & 1 & 0 & 0 & 1 & 0\\
0 & 0 & 1 & 0 & 0 & 1\\
\end{pmatrix}.
\]
\end{example}
\begin{proposition}\label{nodd}
If $n=2k+1$, then the first scaling of $S_n$ containing interior lattice points is $\left(\frac{n+1}{2}\right)S_n$. Specifically, the number of interior lattice points in this scaling is the number of symmetric permutation matrices, i.e. the number of involutions of the set $\{1, 2, \ldots, n\}$. Thus, $\deg h^*_{S_n}(t) = 2k^2$.
\end{proposition}
\begin{proof}
A lattice point of a dilate $mS_n$ is interior precisely when every matrix entry is positive. The all-ones matrix itself cannot occur, since its line sums are odd while every matrix in $mS_n$ has line sums $2m$; hence each row and column of an interior lattice point must also contain an entry of at least 2, so its line sums are at least $n+1$. This minimum is attained in the dilate $\left(\frac{n+1}{2}\right)S_n$, whose lattice points have line sums $n+1$ (recall that the line sums of matrices in $S_n$ are 2). Subtracting the all-ones matrix from such an interior lattice point leaves a symmetric nonnegative integer matrix with line sums 1, that is, a symmetric permutation matrix, or equivalently an involution; conversely, adding the all-ones matrix to an involution produces an interior lattice point of this dilate. By Theorem 1.5 of \cite{StanleyDecompositions},
$$E_{(S_n)^{\circ}}(t) = (-1)^{{n \choose 2}}E_{S_n}\left(\frac{1}{t}\right).$$
When expanded as a power series, the lowest-degree term will be $t^{({n \choose 2} + 1) - d}$, where $d = \deg h^*_{S_n}(t)$. Since this lowest degree equals $\frac{n+1}{2} = k+1$, the degree of $h^*_{S_n}(t)$ follows.
\end{proof}
With these, we can deduce the degrees of $h^*_{\Sigma_n}(t)$ for each $n$.
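Before doing so, we note that the simplex construction in the proof of Theorem~\ref{neven} is easy to verify mechanically for small $k$. The following Python sketch is an illustrative aside (the helper names are ours): it builds the $k$ circulant vertices described above and checks that they are symmetric lattice points of $S_n$ with pairwise disjoint supports summing to the all-ones matrix, which is the input required by Lemma~\ref{reflexive}.
\begin{verbatim}
# Check of the special simplex used for n = 2k.  Illustrative sketch only.
def circulant(n, offsets):
    # entry 1 in position (r, c) exactly when (c - r) mod n is an offset
    return [[1 if (c - r) % n in offsets else 0 for c in range(n)]
            for r in range(n)]

def special_simplex_vertices(k):
    n = 2 * k
    mats = [circulant(n, {i, n - i}) for i in range(1, k)]
    mats.append(circulant(n, {0, k}))
    return mats

def check(k):
    n = 2 * k
    mats = special_simplex_vertices(k)
    # symmetric, line sums equal to 2 (lattice points of S_n)
    assert all(M[r][c] == M[c][r]
               for M in mats for r in range(n) for c in range(n))
    assert all(sum(row) == 2 for M in mats for row in M)
    # disjoint supports covering every position: the sum is all-ones
    total = [[sum(M[r][c] for M in mats) for c in range(n)]
             for r in range(n)]
    assert all(entry == 1 for row in total for entry in row)
    return True

if __name__ == "__main__":
    print([check(k) for k in range(1, 6)])   # k = 1, ..., 5
\end{verbatim}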
\begin{proposition} For all $n$, $h^*(S_n)$ consists of the even-indexed entries of $h^*(\Sigma_n)$. Thus, if $n$ is even, then $\deg h^*_{\Sigma_n}(t) = 2(\deg h^*_{S_n}(t))$, and if $n$ is odd, then $\deg h^*_{\Sigma_n}(t) = 2(\deg h^*_{S_n}(t)) + 1$. \end{proposition} \begin{proof} As power series, the coefficient of $t^m$ in $E_{S_n}(t)$ is the same as the coefficient of $t^{2m}$ in $E_{\Sigma_n}(t)$. Recalling Theorem~\ref{jia}, this gives $$E_{\Sigma_n}(t) = E_{S_n}(t^2) + t\sum_{m\geq 0}f(m)t^{2m}$$ for some polynomial $f$. So, as rational functions, the first summand of the above will have entirely even-degree terms in the numerator and the same denominator as the rational form of $E_{\Sigma_n}(t)$. Thus, the second summand, when written to have a common denominator as the first summand, will have entirely odd-degree terms in its numerator. Therefore, $h^*(S_n)$ consists of the even-indexed entries of $h^*(\Sigma_n)$. Since $h^*(S_n)$ is symmetric for even $n$ only, and by Proposition~\ref{nodd}, the degrees of $h^*(\Sigma_n)$ follow. \end{proof} \section{Toric Ideals and Regular, Unimodular Triangulations} \label{sec:triangulations} For a polytope $\mathcal{P} \subseteq \mathbb{R}^n$ let $\mathcal{P} \cap \mathbb{Z}^n = \{a_1,\ldots, a_s\}$. We define the {\em toric ideal} of $\mathcal{P}$ to be the kernel of the map $$\pi: T_\mathcal{P} = k[t_1,\ldots,t_s] \to k[\mathcal{P}],$$ where $\pi(t_i) = \left(\prod x^{a_i}\right)z$, using the multivariate notation. This ideal we denote $I_\mathcal{P}$. Because the lattice points of $S_n$ correspond to matrices, it will sometimes be more convenient to use the indexing $$T_{S_n} = k[t_A | A \in S_n \cap \mathbb{Z}^{n\times n}] \mathrm{~and~} k[S_n] = k[x^Az^m | A \in mS_n \cap \mathbb{Z}^{n\times n}],$$ where we now use $$x^Az^m = \prod_{0 \leq i,j \leq n}x_{ij}^{a_{i,j}}z^m$$ with $A = (a_{i,j})$. Thus $\pi: T_{S_n} \to k[S_n]$ is given by $\pi(t_M) = x^Mz$. The toric ideal of a polytope has been widely studied, in large part for its connections to triangulations of the polytope. Various properties of the initial ideal of $I_\mathcal{P}$ are equivalent to corresponding properties of the triangulation, with perhaps one of the most well-known connections being the following result. \begin{theorem}[Theorem 8.9, \cite{sturmfels}] Given a monomial ordering $\prec$ on $T_\mathcal{P}$, the initial ideal $\mathrm{in}_{\prec} (I_\mathcal{P})$ is squarefree if and only if the corresponding regular triangulation of $\mathcal{P}$ is unimodular. \end{theorem} In general, $\mathrm{in}_{\prec_{\mathrm{rlex}}}(I_\mathcal{P})$ cannot be guaranteed to be squarefree. This does not rule out the existence of $\mathrm{in}_{\prec_{\mathrm{rlex}}}(I_\mathcal{P})$ being squarefree for {\em some} ordering of their lattice points, though this may require much more work; the generators of a toric ideal are notoriously difficult to compute in general. The following order we place on the lattice points of $S_n$ experimentally appears to provide enough structure to induce regular, unimodular triangulations. \begin{definition}\label{def:ordersn} We place a total order $<_{S_n}$ on the lattice points of $S_n$ by first setting $M <_{S_n} N$ if $M$ contains more 2s in its entries than $N$. This creates a partial order on the lattice points of $S_n$; from this, any linear extension will result in a total order on the lattice points. For the remainder of this paper, we will denote any choice of these total orders by $<_{S_n}$. 
This class of orders induces a class of graded reverse lexicographic term orders $\prec_{S_n}$ on the variables of $T_{S_n}$, specifically $t_M \prec_{S_n} t_N$ if and only if $M <_{S_n} N$.
\end{definition}
We are now ready to prove Theorem~\ref{thm:mainthm}.
\begin{proof}[Proof of Theorem~\ref{thm:mainthm}]
First, let $\mathscr{G}$ be the reduced Gr\"obner basis of $I_{S_n}$ with respect to any ordering. It is known that $\mathscr{G}$ consists of binomials. Suppose $\mathscr{G}$ has a binomial $u - v$ with both terms containing squares, and $\pi(u) = \pi(v) = x^Az^k$. Note in particular that the variables appearing in $u$ and in $v$ are distinct. Suppose $t_M$ and $t_N$ are variables in the separate terms with powers greater than 1. Then $\pi(t_Mt_N) = x^{M+N}z^2$, and since $t_M^2$ divides $u$ and $t_N^2$ divides $v$, we have $A \geq 2M$ and $A \geq 2N$ entrywise; hence $M+N$ can be subtracted from $A$, leaving a lattice point of $(k-2)S_n$. By the integral closure of $S_n$, there is therefore some third monomial $b$ such that $\pi(t_Mt_Nb) = x^Az^k$. So $u - t_Mt_Nb$ is in $I_{S_n}$; however, we can factor out $t_M$ from this to get $u - t_Mt_Nb = t_M(u_1 - u_2)$. We may similarly factor $t_N$ from $v - t_Mt_Nb$ to get $t_N(v_1 - v_2)$, which must also be in $I_{S_n}$. Therefore $u_1 - u_2$ and $v_1 - v_2$ must be in $I_{S_n}$ themselves, and $u - v$ can be written as
\[
u - v = (u - t_Mt_Nb) - (v - t_Mt_Nb) = t_M(u_1 - u_2) - t_N(v_1 - v_2),
\]
which contradicts $\mathscr{G}$ being reduced. Therefore no binomial in $\mathscr{G}$ can have both terms containing a square.
For the second property, we must show that, for any lattice point $M \in S_n$, we can find a second lattice point $N \in S_n$ such that $M + N$ can also be written as a second, distinct sum of two lattice points. Since such relations have degree two, they must be recorded in $I_{S_n}$, meaning both monomials appear in degree-two binomials of $\mathscr{G}$ (even if not as part of the same binomial). While this can be proven in terms of matrices, it will be easier to work in terms of graph labelings. As we saw in Proposition~\ref{intclosed}, each lattice point $M \in S_n$ corresponds to a 2-factor $G_M$, a covering of the $n$ vertices so that each vertex is incident to exactly two edges (counting loops once). Thus for each 2-factor $G_M$, we want to find a second 2-factor $G_N$ such that $G_M \cup G_N$ can be written as a union of 2-factors, each distinct from both $G_M$ and $G_N$. Each covering is a disjoint union of connected components of two possible types: a path, possibly of length 0, whose endpoints also carry loops; or a $k$-cycle for some $k \leq n$. This allows us to break the remainder of the proof into three cases.
First suppose $G_M$ contains a path $v_1, v_2, \ldots, v_k$, $k > 1$, with loops at its endpoints. Set $G_N$ to be the graph agreeing with $G_M$ except on these vertices: here we place a single loop on each of $v_1$ and $v_k$, an edge between these two vertices, and two loops on each of $v_2, \ldots, v_{k-1}$. The union $G_M \cup G_N$ can then be decomposed as the cycle $v_1, v_2, \ldots, v_k, v_1$ as one covering and two loops on each vertex as the other.
Next suppose that $G_M$ contains no such paths but does contain a cycle $v_1, v_2, \ldots, v_k, v_1$ for some $k \geq 2$. Let $G_N$ be the cover with two loops on each $v_i$. Then $G_M \cup G_N$ decomposes as the path $v_1, \ldots, v_k$ with a loop on each of $v_1$ and $v_k$ as one covering, the other covering consisting of the edge $v_1v_k$, a loop on each of $v_1$ and $v_k$, and two loops on all other vertices.
If $G_M$ does not fit into either of the previous cases, then its connected components all consist of two loops on each of the $n$ vertices.
Form a new graph $G^{\prime}$ by setting it equal to $G_M$, except for two distinct vertices, $v_1$ and $v_2$. Instead, place two edges between $v_1$ and $v_2$. Then $G^{\prime}$ is also a 2-factor, and $G^{\prime} = G_N$ for some lattice point $N \in S_n$. Moreover, the entries of both $M$ and $N$ consist of only zeros or twos, so their average $A = \frac{1}{2}(M+N)$ is a lattice point of $S_n$ distinct from both $M$ and $N$. So, $G_M \cup G_N = G_A \cup G_A$. This covers all cases, so the corresponding $M$ will always appear in a degree-two binomial of $\mathscr{G}$. We restrict to the order $\prec_{S_n}$ and fix this order for the remainder of the proof. For the third property, consider $t_Mt_N - t_Xt_Y \in \mathscr{G}$. Since we know one of the monomials must be squarefree, it is enough to check the case when the other monomial is a square square. Suppose $M = N$. This can only occur if $M$ is not a vertex; hence, $M$ is the midpoint of $X$ and $Y$. Thus if any entries of $M$ are 2, the corresponding entries of $X$ and $Y$ must also be 2. Since $X$ and $Y$ are distinct, though, they have distinct support. This implies that some entry of $M$ is 1, which arises from one of the corresponding entries of $X$ and $Y$ being 0 and the other being 2. So, one of $X$ or $Y$ will contain more twos than $M$, giving us $\mathrm{in}_{\prec_{S_n}}(t_Mt_N - t_Xt_Y) = -t_Xt_Y$. Lastly, consider an arbitrary binomial $u - v$ of degree $k$ from $\mathscr{G}$. If the initial term is the squarefree term, then it is certainly cubefree. Otherwise, the binomial is of the form $t_{A_1}^{a_1}\cdots t_{A_r}^{a_r} - t_{B_1}\cdots t_{B_k}$, with $\mathrm{in}_{\prec_{S_n}}(u-v) = u = t_{A_1}^{a_1}\cdots t_{A_r}^{a_r}$ and each $a_i \geq 1$. Since we are using the order $\prec_{S_n}$, one of the variables of $v = t_{B_1}\cdots t_{B_k}$ is less than all variables in $u$; without loss of generality, assume this variable is $t_{B_1}$. Choose a nonzero entry of $B_1$. There will be some variable $t_{M_1}$ such that $M_1 \in \{A_1,\ldots,A_r\}$ and $M_1$ is also nonzero in the same position. Now, choose a nonzero entry of $B_1$ such that the position is zero in $A_1$. Then we know there is some variable $t_{M_2}$ such that $M_2 \in \{A_1,\ldots,A_r\} \setminus \{M_1\}$ and $M_2$ is nonzero in this new position. Repeating this process gives a monomial $t_{M_1}\cdots t_{M_s}$ such that $M = M_1 + \cdots + M_s$ is nonzero whenever $B_1$ is nonzero. If there are any positions that are 2 in $B_1$ and 1 in $M$, then square a variable of $t_{M_1}\cdots t_{M_s}$ whose corresponding matrix is nonzero in that position. Repeat on distinct variables if necessary. The resulting monomial, which we will call $m_1$, is cubefree, and there is some second monomial $m_2$ such that $m_1 - t_{B_1}m_2 \in I_{S_n}$. Because $t_{B_1}$ was chosen to be less than all the variables $t_{A_1},\ldots,t_{A_r}$, we know that $\mathrm{in}_{\prec_{S_n}}(m_1 - t_{B_1}m_2) = m_1$, which divides $t_{A_1}^{a_1}\cdots t_{A_r}^{a_r}$ Since our chosen binomial is in a reduced Gr\"obner basis, the two must be equal. Therefore, every initial term of a binomial in $\mathscr{G}$ is cubefree. \end{proof} If the initial terms of $\mathscr{G}$ with respect to $\prec_{S_n}$ can be proven to be squarefree, then the following conjecture holds. \begin{conjecture}\label{thm:mainconjecture} $S_n$ has a regular, unimodular triangulation, hence $h^*(S_n)$ is unimodal when $n$ is even. 
\end{conjecture} The second statement of the conjecture would follow due to Theorem~1 of \cite{BrunsRomer}. The last part of the previous proof adapts the method used in Theorem~14.8 of \cite{sturmfels} to show that $I_{B_n}$ has a squarefree initial ideal for any reverse lexicographic ordering. However, we cannot continue to adapt this proof so simply at this point: although one of the matrices $A_j$ coming from $u$ may be nonzero in a position that $B_1$ is also nonzero, the entry may be 1 in $A_j$ and 2 in $B_1$, and there is a priori no indication that any other variable corresponds to a matrix with a nonzero entry in the same position. \section{Future Directions, Questions, and Conjectures} Experimental data and the results we have shown lead to some natural questions and conjectures. \begin{conjecture} Let $\mathscr{G}$ be the reduced Gr\"obner basis of $I_{S_n}$, and let $g \in \mathscr{G}$ with $\deg g \geq 3$. \begin{enumerate} \item The matrix corresponding to the monomials in $g$ does not have a block form. That is, the corresponding graph is connected. \item The matrix corresponding to the monomials in $g$ has a decomposition into lattice points of $S_n$ such that one summand consists of only ones and zeros. \end{enumerate} \end{conjecture} If the second part of this conjecture holds, then Conjecture~\ref{thm:mainconjecture} holds as well. To prove that an initial term of a binomial is squarefree, one strategy would be to prove that both monomials are squarefree. We propose a term order on $T_{S_n}$ that is a refinement of $\prec_{S_n}$ and appears to hold this behavior. \begin{conjecture} Set $t_M > t_N$ if the matrix $M$ contains more twos than $N$. If neither contains a two, then set $t_M > t_N$ if $M$ contains more zeros. Then create a total order through taking a linear extension as in Definition~\ref{def:ordersn}. This refinement induces an order such that $\mathscr{G}$ consists of binomials of degree at most $n-1$, and the binomials of degree greater than 2 are squarefree in both terms. \end{conjecture} Another modification that can be made to $\Sigma_n$ is the following. Denote by $P_n$ the convex hull of the lattice points in $\Sigma_n$. In general, $P_n$ is neither Gorenstein nor integrally closed. However, based on experimental data, we conjecture the following. \begin{conjecture} For all $n$, $h^*(P_n)$ is unimodal. \end{conjecture} Many methods for showing unimodality aim to show that the $h^*$-vector of a polytope is the same as the $h$-vector of a simplicial polytope, which has a symmetric $h$-vector. However, another approach is necessary for $P_n$, as well as $S_n$ for odd $n$, since neither are Gorenstein. Instead of looking at all lattice points of $S_n$, one can form triangulations using only the vertices. These will not be unimodular triangulations, but they might lead to something interesting. \begin{conjecture} For $n \geq 2$, any reverse lexicographic initial ideal of the toric ideal $I_{S_n}$ (using only the vertices of $S_n$) is generated by monomials of degree $3(n-2)$, and its minimal generators are $n$-free. That is, the minimal generators are not divisible by $t_i^n$ for any variable $t_i$. \end{conjecture} The conjecture is experimentally true for $n=3$ by an exhaustive search. Higher dimensions result in exponentially increasing numbers of vertices, vastly increasing the computational difficulty of experimentation. \bibliographystyle{plain}
\section{Introduction and the objective} \textit{Emulation} of complex effects and systems known in condensed-matter physics by means of simpler and ``cleaner" settings, based on classical photonic, or quantum-mechanical atomic, waves has recently drawn a lot of interest \cite{emulator}. The first example is provided by the superfluidity, which may be studied in a much more accurate form in atomic Bose-Einstein condensates (BECs) \cite{Pitaevskii} and ultracold Fermi gases \cite{Fermi} than in liquid helium. Another possibility, which has come to the forefront recently, is the experimental realization \cite{Nature} and theoretical analysis \cite{Rashba-BEC} of the (pseudo-) spin-orbit coupling in a binary BEC, induced by specially designed laser fields, see a brief review of the topic in Ref. \cite{Zhai}. Similar techniques were recently developed for the creation of synthetic Abelian and non-Abelian gauges fields in atomic BEC \cite{Abelian,review}. As concerns photonics, it is well known that it allows an efficient experimental emulation of fundamentally important settings known in condensed matter, such as the Anderson localization \cite{Moti-And} (the experimental realization of this effect in BEC has been demonstrated too \cite{BEC-And}), graphene \cit {Moti-graph}, and topological insulators \cite{Moti-top-ins}. Furthermore, the use of the wave propagation in photonic media opens the way for experimental simulation, in terms of classical physics, of fundamental phenomena predicted in the quantum theory, which are very difficult to observe directly, such as non-Hermitian Hamiltonians which generate real spectra due to the $\mathcal{PT}$ symmetry \cite{PT,PT2}, and exotic relativistic effects (\textit{Zitterbewegung} and others) \cit {Longhi,Longhi2}. The photonic and matter-wave systems may often be used to emulate each other. For instance, the system of coupled Gross-Pitaevskii equations (GPEs) realizing the spin-orbit coupling in the 1D setting \cite{Kevrekidis} is exactly tantamount to the earlier studied system of coupled nonlinear Sch \"{o}dinger equations (NLSEs) modeling a twisted bimodal optical fiber \cit {me}, making solitons in these systems also mutually equivalent. The emulation methods offer an additional advantage, making it possible to attain physical conditions and effects in simulating systems which are inaccessible in the original ones. An obvious example is provided by matter-wave solitons, which can be readily created in rarefied atomic gases, cooled into the BEC state \cite{solitons}, while they are not observed in dense superfluids. Another important topic, combining semiconductor physics and photonics, which has recently drawn \ a great deal of interest, is the strong coupling of light (cavity photons) and matter (excitons, i.e., bound electron-hole states) in semiconductor microcavities \cite{deveaud}. It is well established that this interaction leads to the creation of hybrid modes in the form of exciton-polaritons (EPs) \cite{KBM+2007,rev-francesca,rev-iacopo}. The EP nonlinearity is self-defocusing due to the electrostatic repulsion between excitons. 
The nonlinearity plays an important role in a number of effects predicted and (partly) observed in EP systems, such as bistability \cit {BKE+2004,gip,CC2004}, wave mixing \cite{gip,CC2004,SBS+2000}, superfluidity \cite{CC2004,bogol} and the formation of dark and bright solitons \cit {agr,skryabin1,skryabin2,Kivshar,soliton-first,recent}, as well as of gap solitons (of the bright type), produced by the interplay of the self-repulsive nonlinearity with a spatially periodic linear potential \cit {skryabin3}. The nonlinearity manifests itself too in EP bosonic condensates, which have been created in the experiment \cite{exp1}, and used to demonstrate Bogoliubov excitations \cite{exp2}, diffusionless motion \cit {exp3}, persistent currents and quantized vortices \cite{exp4}, among other effects. In experiments based on incoherent pumping \cite{exp1,exp2,exp3}, off-resonance pumped polaritons scatter down, loosing the coherence inherited from the pump, and go into the condensate which emerges at the lower branch of the EP dispersion law. Real EP condensates are very well described by the extended GPE which takes into account the pump and loss \cite{rev-iacopo,exp1,berloff,carusotto}. A more general approach adopts a system of two Rabi- (linearly) coupled equations, \textit{viz}., the GPE for the wave function (order parameter) of excitons, and the propagation equation of the linear-Schr\"{o}dinger type for the amplitude of the cavity-photon field \cite{ciuti}. In particular, these equations have been used to predict the existence of the above-mentioned dark \cite{skryabin1}, bright \cite{skryabin2} and gap \cite{skryabin3} EP solitons. The same equations have been used to investigate the stability of the EP fluid under coherent pumping \cite{fran1}. In most cases, the Rabi-coupled system includes the loss and pump terms in the exciton and photon-propagation equations, respectively. In some works, it was assumed that the system maintains the background balance between the pump and loss in the first approximation, allowing one to consider effectively lossless dynamics \cite{agr,soliton-first,recent,skryabin3}. While this ``ideal" version of the EP model makes it possible to predict a number of potentially interesting effects, such as solitons, in a relatively simple form, it is not realistic for the description of the EP dynamics in semiconductor cavities. This problem suggests to look for feasible photonic systems which would be able to \emph emulate} the lossless version of the EP model. In fact, such photonic systems were proposed, without and relation to EP models, in Refs. \cit {Javid} and \cite{Arik}. They are based on asymmetric dual-core optical fibers (or a photonic-crystal fibers with two embedded cores \cit {dual-core-PCF}), with the linear coupling between the cores emulating the Rabi coupling between excitons and cavity photons in the EP system. It is assumed that only one core is nonlinear (which can be easily realized by engineering an appropriate transverse modal structure or using nonlinearity-enhancing dopants \cite{Lenstra}), operating close to the zero-dispersion point, while in the mate (linear) core the group-velocity dispersion (GVD)\ is normal or anomalous \cite{Arik}, if the nonlinearity sign in the first core is self-focusing or defocusing, respectively. Alternatively, the linear core may carry a Bragg grating \cite{Javid}, which offers the optical emulation of the model for the EP gap solitons introduced in Ref. \cite{skryabin3}. 
It is relevant to mention that the EP system in semiconductor microcavities may be excited solely by the pump injecting cavity photons. The emulation scheme based on the similarity to the dual-core optical fiber opens an additional possibility, to excite various states in the system by injecting the field into the nonlinear core, which simulates the excitonic wave function. The above-mentioned temporal-domain dual-fiber-based setting, which was introduced in Ref. \cite{Arik}, is exactly tantamount to the lossless limit of the one-dimensional (1D) EP system, see Eqs. (\ref{eq1pip}) and (\re {eq2pip}) below (in the optical fiber, the losses may be easily kept negligible for an experimentally relevant propagation distance, or, if necessary, compensated by built-in gain). The optical emulation of the 2D version of the EP system is more tricky, but possible too. In the latter case, one may introduce the system of \ spatiotemporal NLSEs for the dual-core planar waveguide. The 2D diffraction of cavity photons is then emulated by the combination of the transverse diffraction and anomalous GVD in the linear core. Accordingly, the nonlinearity in the mate core, kept near the zero-GVD point, must be self-defocusing. Furthermore, to suppress the transverse diffraction in the nonlinear core (to emulate the non-existing or very weak diffraction of excitons), the nonlinear core should be built as an array of fibers, rather than as a solid waveguide. This 2D setting which emulates the lossless EP system is based on Eqs. (\re {eq1}) and (\ref{eq2}) presented below. Such a planar dual-core waveguide, in which one core is solid, while the other one is represented by an array of 1D waveguides, is quite possible \cite{Nicolae}. Using the emulating counterpart of the lossless EP system, we address new possibilities suggested by this emulation . In particular, we consider the dynamics on the upper branch of the nonlinear dispersion relation, which are usually disregarded in the dissipative EP system. The issues addressed below include the modulational instability (MI)\ of uniform states corresponding to the upper and lower branches and collective excitations on top of the stable background (various forms of the dispersion relation and excitations on top of the lower polariton branch in dissipative EP systems were studied earlier \cite{dispersion}). In the case when the uniform background is stable, we consider dark solitons too (in the dissipative EP model, such solitons were recently studied in Ref. \cite{dark-EP}. The rest of the paper is structured as follows. The 2D system, which is emulated, as said above, by the planar dual-core waveguide in the spatiotemporal domain, is introduced in Section II. The MI and collective excitations on top of the stable (lower) branch of the nonlinear dispersion relation are considered in Section III. Effects of the phase gradient on the stability are investigated in Section IV. The reduction of the 2D model to 1D, and the investigation of dark solitons in the latter case, are presented in Section V. The paper is concluded by Section VI. 
\section{The model}
The spatiotemporal evolution of complex amplitudes of the electromagnetic field in the nonlinear and linear cores of the planar waveguide, $\psi $ and $\phi $, obeys the system of coupled NLSEs \cite{Arik}, which is written here in the notation corresponding to the emulation of the EP system \cite{deveaud,KBM+2007,rev-francesca,rev-iacopo} by means of the optical model:
\begin{eqnarray}
i{\frac{\partial }{\partial t}}{\psi } &=&\left[ -{\frac{1}{2m_{X}}}\left( \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial y^{2}}\right) +\epsilon _{X}+g\,|{\psi }|^{2}\right] {\psi }+\Gamma \ {\phi }\;,  \label{eq1} \\
i{\frac{\partial }{\partial t}}{\phi } &=&\left[ -{\frac{1}{2m_{C}}}\left( \frac{\partial ^{2}}{\partial x^{2}}+\frac{\partial ^{2}}{\partial y^{2}}\right) +\epsilon _{C}\right] {\phi }+\Gamma \ {\psi }\;,  \label{eq2}
\end{eqnarray}
where $t$ (corresponding to time in the EP system) is the propagation distance along the waveguide, $x$ and $y$ are, respectively, the transverse coordinate and reduced time, both corresponding to spatial coordinates in the emulated semiconductor microcavity, while $m_{X}$ and $m_{C}$, which correspond to the effective excitonic and cavity-photon masses, are actually the inverse diffraction-dispersion coefficients in the two cores \cite{deveaud,KBM+2007,rev-francesca,rev-iacopo}. The EP setting typically has $m_{C}/m_{X}\sim 10^{-4}$ \cite{rev-francesca,rev-iacopo}, which implies that, as said above, the nonlinear core of the waveguide operates very close to the zero-GVD point, while the diffraction is suppressed by the fact that this core is built as an array of 1D waveguides [it is assumed that the small residual GVD and diffraction in the nonlinear core are adjusted so as not to break the spatiotemporal isotropy of the optical system in the $\left( x,y\right) $ plane]. Further, $\epsilon _{X}$ and $\epsilon _{C}$ are propagation-constant shifts in the two waveguides, which, in terms of the EP, represent, respectively, the chemical potential of excitons and the photon energy at zero wavenumber. Coefficient $g>0$ in Eq. (\ref{eq1}) represents the self-defocusing optical nonlinearity, which corresponds to the strength of the repulsive excitonic self-interaction. Lastly, the inter-core coupling constant, $\Gamma $, emulates the strength of the EP Rabi coupling.
The total energy of the optical signal, which represents the number of condensed polaritons, i.e., the sum of the numbers $N_{X}$ and $N_{C}$ of the excitons and photons,
\begin{equation}
N_{0}=N_{X}+N_{C}=\int d^{2}\mathbf{r}\ \left[ |\psi (\mathbf{r},t)|^{2}+|\phi (\mathbf{r},t)|^{2}\right] ,  \label{N0}
\end{equation}
is the dynamical invariant of the lossless system. Equations (\ref{eq1}) and (\ref{eq2}) also conserve the Hamiltonian,
\begin{eqnarray}
H &=&\int d^{2}\mathbf{r}\Big[\frac{1}{2m_{X}}\left\vert \nabla _{\perp }\psi \right\vert ^{2}+\frac{1}{2m_{C}}\left\vert \nabla _{\perp }\phi \right\vert ^{2}  \notag \\
&+&\epsilon _{X}\,|{\psi }|^{2}+\epsilon _{C}|\phi |^{2}+\frac{g}{2}|\psi |^{4}+\Gamma \left( \psi ^{\ast }\phi +\psi \phi ^{\ast }\right) \Big]\;,
\end{eqnarray}
as well as the total 2D momentum and angular momentum.
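These conservation laws provide a convenient accuracy check for any direct simulation of Eqs. (\ref{eq1}) and (\ref{eq2}). As an illustration (this is a minimal sketch, not the Crank-Nicolson scheme \cite{sala-numerics} used for the figures below; the grid size, time step, and random perturbation are arbitrary choices made only for the test, and all helper names are ours), the following Python code integrates the system with a pseudo-spectral fourth-order Runge-Kutta step on a small periodic grid and monitors the relative drift of $N_{0}$.
\begin{verbatim}
import numpy as np

# parameters quoted in the text
m_X, m_C = 1000.0, 1.0
eps_X, eps_C, g, Gamma = 0.0, 1.0, 1.0, 0.75

# small periodic grid and time step (arbitrary test values)
N, L, dt, steps = 64, 40.0, 2e-3, 500
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(k, k, indexing="ij")
K2 = KX**2 + KY**2

def kinetic(field, mass):
    # -(1/2m) Laplacian, evaluated spectrally with periodic boundaries
    return np.fft.ifft2((K2 / (2 * mass)) * np.fft.fft2(field))

def rhs(psi, phi):
    dpsi = -1j * (kinetic(psi, m_X)
                  + (eps_X + g * np.abs(psi)**2) * psi + Gamma * phi)
    dphi = -1j * (kinetic(phi, m_C) + eps_C * phi + Gamma * psi)
    return dpsi, dphi

def N0(psi, phi):
    return np.sum(np.abs(psi)**2 + np.abs(phi)**2) * (L / N)**2

# uniform background (psi_0, phi_0) = (1, -1) plus weak random noise
rng = np.random.default_rng(0)
psi = (1.0 + 0.01 * rng.standard_normal((N, N))).astype(complex)
phi = (-1.0 + 0.01 * rng.standard_normal((N, N))).astype(complex)

N_start = N0(psi, phi)
for _ in range(steps):                      # classical RK4 steps
    k1 = rhs(psi, phi)
    k2 = rhs(psi + 0.5 * dt * k1[0], phi + 0.5 * dt * k1[1])
    k3 = rhs(psi + 0.5 * dt * k2[0], phi + 0.5 * dt * k2[1])
    k4 = rhs(psi + dt * k3[0], phi + dt * k3[1])
    psi = psi + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    phi = phi + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])

print("relative drift of N_0:", abs(N0(psi, phi) - N_start) / N_start)
\end{verbatim}
A vanishingly small drift of $N_{0}$ (and, similarly, of $H$) over the integration time indicates that the scheme resolves the dynamics adequately.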
\section{Uniform states}
\subsection{Stationary uniform states}
Equations (\ref{eq1}) and (\ref{eq2}) give rise to an optical continuous wave in the dual-core waveguide with propagation constant $-\mu $,
\begin{equation}
\psi (\mathbf{r},t)=\psi _{0}\ e^{-i\mu t},~\phi (\mathbf{r},t)=\phi _{0}\ e^{-i\mu t},  \label{mu}
\end{equation}
\begin{eqnarray}
\psi _{0}^{2} &=&{\frac{1}{g}}\left( \mu -\epsilon _{X}+{\frac{\Gamma ^{2}}{\epsilon _{C}-\mu }}\right) \;,  \label{cond1} \\
\phi _{0} &=&-{\frac{\Gamma }{(\epsilon _{C}-\mu )}}\,\psi _{0}\;,  \label{cond2}
\end{eqnarray}
which emulates the EP condensate with chemical potential $\mu $ and exciton and cavity-photon amplitudes $\psi _{0}$ and $\phi _{0}$. Then, Eq. (\ref{cond1}) can be inverted to express the chemical potential in terms of $\psi _{0}$:
\begin{equation}
\mu ^{\pm }={\frac{1}{2}}\Big(\epsilon _{X}+\epsilon _{C}+g\psi _{0}^{2}\pm \sqrt{(\epsilon _{X}-\epsilon _{C}+g\psi _{0}^{2})^{2}+4\Gamma ^{2}}\Big)\;.  \label{echem}
\end{equation}
In terms of the EP, this relation includes the \textit{lower} and the \textit{upper} branches, $\mu ^{-}$ and $\mu ^{+}$, respectively. As said above, the repulsive excitonic nonlinearity corresponds to $g>0$, and the physically relevant EP setting has $\epsilon _{X}<\epsilon _{C}$ \cite{rev-francesca,rev-iacopo}. It follows from here that the uniform EP-emulating configuration exists [i.e., Eqs. (\ref{cond1}) and (\ref{cond2}) yield $\psi _{0}^{2}$, $\phi _{0}^{2}>0$] provided that the chemical potential satisfies conditions $\mu _{0}^{-}<\mu <\epsilon _{C}$ or $\mu >\mu _{0}^{+}$, at the lower and the upper branch, respectively. Here, $\mu _{0}^{-}$ and $\mu _{0}^{+}$ are obtained from Eq. (\ref{echem}) by setting $\psi _{0}=0$:
\begin{equation}
\mu _{0}^{\pm }=\frac{1}{2}\left( \epsilon _{X}+\epsilon _{C}\pm \sqrt{(\epsilon _{C}-\epsilon _{X})^{2}+4\Gamma ^{2}}\right) .  \label{mu0}
\end{equation}
Thus, for $\Gamma =0$ one has $\mu _{0}^{-}=\epsilon _{X}$ and $\mu _{0}^{+}=\epsilon _{C}$, while for $\Gamma \neq 0$ one has $\mu _{0}^{-}<\epsilon _{X}$ and $\mu _{0}^{+}>\epsilon _{C}$. The respective EP condensate density $n_{0}$ (alias the energy density of the optical signal in the dual-core waveguide) is
\begin{equation}
n_{0}=\psi _{0}^{2}+\phi _{0}^{2}~,  \label{tot-cond}
\end{equation}
cf. Eq. (\ref{N0}). Note that, due to Eq. (\ref{cond2}), $\psi _{0}$ and $\phi _{0}$ have opposite signs on the lower branch, while on the upper one the signs of $\psi _{0}$ and $\phi _{0}$ are identical.
In the framework of the present model, for given $\phi _{0}^{2}$ one can easily obtain the effective exciton and total densities, $\psi _{0}^{2}$ and $n_{0}$, along with the chemical potential, $\mu $, from Eqs. (\ref{cond1}), (\ref{cond2}), and (\ref{echem}). In Fig. \ref{fig1} we display $\psi _{0}^{2}$ (dashed lines), $\phi _{0}^{2}$ (dotted-dashed lines), and $n_{0}$ (solid lines) as functions of the scaled chemical potential, $\mu /\epsilon _{C}$, for $\epsilon _{X}=0$ and a relevant value of the linear-coupling strength, $\Gamma =0.75\epsilon _{C}$. As previously stated, the curves in the range of $\mu _{0}^{-}/\epsilon _{C}=-0.38<\mu /\epsilon _{C}<1$ correspond to the lower branch, while the range of $\mu /\epsilon _{C}>\mu _{0}^{+}/\epsilon _{C}=1.4$ pertains to the upper one.
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f1.eps,width=8cm,clip=}}
\caption{(Color online).
Scaled effective densities of the uniform exciton-polariton condensate versus the scaled chemical potential, $\protect\mu /\protect\epsilon _{C}$, for the effective (emulated) Rabi coupling $\Gamma =0.75\protect\epsilon _{C}$. Dashed lines: effective exciton density $\protect\psi _{0}^{2}$; dotted-dashed lines: the respective cavity-photon density $\protect\phi _{0}^{2}$; solid lines: total density $n_{0}$. Other parameters are the effective exciton-exciton repulsion strength $g$, and the exciton and cavity-photon energies, $\protect\epsilon _{X}=0$ and $\protect\epsilon _{C}$ (actually emulated by the propagation-constant shifts in the dual-core waveguide), at zero wavenumber. The curves below and above $\protect\mu /\protect\epsilon _{C}=1$ correspond, respectively, to the lower and upper branches.}
\label{fig1}
\end{figure}
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f2.eps,width=8cm,clip=}}
\caption{(Color online). Exciton fraction $\protect\psi _{0}^{2}/n_{0}$ in the exciton-polariton condensate as a function of the scaled density of the cavity photons, $g\protect\phi _{0}^{2}/(\protect\epsilon _{C}-\protect\epsilon _{X})$. The four curves correspond to different values of the scaled Rabi coupling, $\Gamma /(\protect\epsilon _{C}-\protect\epsilon _{X})$.}
\label{fig2}
\end{figure}
\subsection{Modulational instability (MI) of the uniform states}
A central point of the analysis is the MI of the flat state which was obtained above. For this purpose, small perturbations ${\eta }_{1}(\mathbf{r},t)$ and ${\eta }_{2}(\mathbf{r},t)$ are added to the uniform fields, $\psi _{0}$ and $\phi _{0}$, by setting
\begin{equation}
\left\{ {\psi }(\mathbf{r},t),{\phi }(\mathbf{r},t)\right\} =\left[ \left\{ \psi _{0},\phi _{0}\right\} +\left\{ {\eta }_{1}(\mathbf{r},t),{\eta }_{2}(\mathbf{r},t)\right\} \right] \ e^{-i\mu t}\;.
\end{equation}
The subsequent linearization of Eqs. (\ref{eq1}) and (\ref{eq2}) gives
\begin{eqnarray}
i{\frac{\partial }{\partial t}}{\eta }_{1} &=&\left( -{\frac{1}{2m_{X}}}\nabla _{\bot }^{2}+\mu _{X}-2{\frac{\Gamma ^{2}}{\mu _{C}}}\right) {\eta }_{1}  \notag \\
&+&\left( \mu _{X}-{\frac{\Gamma ^{2}}{\mu _{C}}}\right) {\eta }_{1}^{\ast }+\Gamma {\eta }_{2}\;,  \notag \\
i{\frac{\partial }{\partial t}}{\eta }_{2} &=&\left( -{\frac{1}{2m_{C}}}\nabla _{\bot }^{2}-\mu _{C}\right) {\eta }_{2}+\Gamma {\eta }_{1}\;,  \label{linear}
\end{eqnarray}
where we have defined
\begin{equation}
\mu _{X}\equiv \mu -\epsilon _{X},\quad \quad \mu _{C}\equiv \mu -\epsilon _{C}.  \label{mumumumu}
\end{equation}
Solutions to the linearized equations (\ref{linear}) are looked for as
\begin{eqnarray}
\eta _{1}(\mathbf{r},t) &=&A_{1}\ e^{i(\mathbf{k}\cdot \mathbf{r}-\omega _{k}t)}+B_{1}\ e^{-i(\mathbf{k}\cdot \mathbf{r}-\omega _{k}t)}\;,  \label{pert1} \\
\eta _{2}(\mathbf{r},t) &=&A_{2}\ e^{i(\mathbf{k}\cdot \mathbf{r}-\omega _{k}t)}+B_{2}\ e^{-i(\mathbf{k}\cdot \mathbf{r}-\omega _{k}t)}\;,  \label{pert2}
\end{eqnarray}
where $\mathbf{k}$ and $\omega _{k}$ are the wave vector and frequency of the perturbations. It is straightforward to derive a dispersion relation from Eqs.
(\ref{pert1}) and (\ref{pert2}):
\begin{equation}
\omega _{k}^{\pm }=\sqrt{\frac{-\beta \pm \sqrt{\beta ^{2}-4\gamma }}{2}},  \label{freq}
\end{equation}
where the following combinations are defined:
\begin{eqnarray}
\beta &\equiv &-(a^{2}-b^{2}+c^{2}+2\Gamma ^{2}), \\
\gamma &\equiv &a^{2}c^{2}-b^{2}c^{2}-2ac\Gamma ^{2}+\Gamma ^{4}, \\
a &\equiv &k^{2}/(2m_{X})+\mu _{X}-2\Gamma ^{2}/\mu _{C}, \\
b &\equiv &\mu _{X}-\Gamma ^{2}/\mu _{C}, \\
c &\equiv &k^{2}/(2m_{C})-\mu _{C}.
\end{eqnarray}
In the absence of the effective Rabi coupling, i.e., in the case of the uncoupled waveguiding cores ($\Gamma =0$), branch $\omega _{k}^{-}$ in Eq. (\ref{freq}) gives the familiar gapless Bogoliubov-like spectrum,
\begin{equation}
\omega _{k}=\sqrt{{\frac{k^{2}}{2m_{X}}}\left( {\frac{k^{2}}{2m_{X}}}+2\,\mu _{X}\right) },  \label{freq1}
\end{equation}
while $\omega _{k}^{+}$ yields
\begin{equation}
\omega _{k}=\left\vert {\frac{k^{2}}{2m_{C}}}-\mu _{C}\right\vert ,  \label{freq2}
\end{equation}
which may be realized as a gapped spectrum. It is easy to verify that branches (\ref{freq1}) and (\ref{freq2}) do not intersect, provided that $\epsilon _{X}<\mu <\epsilon _{C}$. Notice that for $\Gamma =0$ one has $\epsilon _{X}=\mu _{0}^{-}$, see Eq. (\ref{echem}). In the presence of the effective Rabi coupling ($\Gamma \neq 0$), emulated by the coupling between the parallel cores, frequencies (\ref{freq}) acquire a finite imaginary part under the condition of $\mu >\epsilon _{C}$. This means that the uniform state pertaining to the upper branch of the dispersion relation, i.e., $\mu ^{+}$ in Eq. (\ref{echem}), is always unstable. Instead, for the uniform state pertaining to the lower branch, characterized by $\mu ^{-}$ in Eq. (\ref{echem}), the perturbation eigenfrequencies (\ref{freq}) are always real. Indeed, in the experiments with the EP condensates, only the lower polariton branch is actually observed.
Dealing with the stability region, in Fig. \ref{fig2} we plot the effective exciton fraction, $\psi _{0}^{2}/n_{0}$, of the EP-emulating state as a function of the effective scaled cavity-photon density, $g\phi _{0}^{2}/(\epsilon _{C}-\epsilon _{X})$, at four values of the scaled coupling: $\Gamma /(\epsilon _{C}-\epsilon _{X})=0.1,~0.5,1,1.5$ (solid, dotted, dashed, and dotted-dashed lines, respectively). The figure shows that the exciton fraction decreases with the increase of $\phi _{0}^{2}$, while, at a fixed value of $\phi _{0}^{2}$, this fraction slightly grows with $\Gamma $. In addition, from Fig. \ref{fig2}, and also from Eqs. (\ref{cond1}) and (\ref{cond2}), one finds that the uniform state has equal effective densities of excitons and cavity photons, i.e., $\psi _{0}^{2}/n_{0}=1/2$, at $g\phi _{0}^{2}=\epsilon _{C}-\epsilon _{X}$.
\subsection{Collective excitations}
The uniform state is stable in the regime where frequencies (\ref{freq}) are real, i.e., as said above, for the lower branch of the nonlinear dispersion relation. Here we aim to analyze dispersion relations for collective excitations on top of the stable uniform state. In the previous subsection, it was demonstrated that, in the absence of the linear coupling ($\Gamma =0$), frequencies (\ref{freq}) split into two branches, the gapless Bogoliubov-like one (\ref{freq1}), and its gapped counterpart (\ref{freq2}). In the presence of the coupling ($\Gamma \neq 0$), the two branches can be identified as follows: the Bogoliubov-like spectrum, $\omega _{k}^{-}$, given by Eq. (\ref{freq}), is gapless, i.e. $\omega _{0}^{-}=0$, while Eq.
(\ref{freq}) yields the gapped spectrum, $\omega _{k}^{+}$, with $\omega _{0}^{+}\neq 0$.
As mentioned above, the effective exciton and cavity-photon masses are widely different in the physically relevant setting, $m_{X}\gg m_{C}$, therefore in many cases it is possible to simplify the problem by setting $1/m_{X}=0$ \cite{rev-francesca,rev-iacopo}. In this limit, Eq. (\ref{freq}) yields the first-sound velocity $c_{s}$, obtained by the expansion of the Bogoliubov-like spectrum, $\omega _{k}^{-}$, at small $k$, $\omega _{k}^{-}\approx c_{s}\,k$, with
\begin{equation}
c_{s}=\sqrt{{\frac{\epsilon _{C}-\mu }{2m_{C}}}\left[ 1-{\frac{(\mu _{C}^{2}+\Gamma ^{2})^{2}}{\mu _{C}^{4}+2(\mu _{C}^{2}-\mu _{C}\mu _{X})\Gamma ^{2}+3\Gamma ^{4}}}\right] }.
\end{equation}
In the same case, the gap of branch $\omega _{k}^{+}$ is
\begin{equation}
\omega _{0}^{+}=\sqrt{\mu _{C}^{2}+2{\frac{\epsilon _{C}-\epsilon _{X}}{\epsilon _{C}-\mu }}\Gamma ^{2}+3{\frac{\Gamma ^{4}}{\mu _{C}^{2}}}}\;.
\end{equation}
In Fig. \ref{fig3} we plot the scaled energy, $m_{C}c_{s}^{2}/(\epsilon _{C}-\epsilon _{X})$, of the first-sound mode (the upper panel), and the scaled energy gap, $\omega _{0}^{+}/(\epsilon _{C}-\epsilon _{X})$, of the gapped branch (the lower panel), as functions of the scaled effective cavity-photon density, $g\phi _{0}^{2}/(\epsilon _{C}-\epsilon _{X})$. Four curves in each panel correspond to different values of the scaled linear coupling, $\Gamma /(\epsilon _{C}-\epsilon _{X})$: $0.1,0.5,1,1.5$.
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f3a.eps,width=8cm,clip=}}
\centerline{\epsfig{file=excitons-f3b.eps,width=8cm,clip=}}
\caption{(Color online). The top panel: scaled energy $m_{C}c_{s}^{2}/(\protect\epsilon _{C}-\protect\epsilon _{X})$ of the first-sound mode, with the sound speed $c_{s}$, versus the scaled effective cavity-photon density, $g\protect\phi _{0}^{2}/(\protect\epsilon _{C}-\protect\epsilon _{X})$. The bottom panel: scaled energy gap $\protect\omega _{0}^{+}/(\protect\epsilon _{C}-\protect\epsilon _{X})$ of the gapped branch versus the scaled cavity-photon density. In each panel, four curves correspond to different values of the scaled linear coupling, $\Gamma /(\protect\epsilon _{C}-\protect\epsilon _{X})$.}
\label{fig3}
\end{figure}
It is relevant to simulate the evolution of the stable uniform state excited by a small circular perturbation, which corresponds to an experimentally relevant situation. In Fig. \ref{fig4} we display the evolution produced by simulations of Eqs. (\ref{eq1}) and (\ref{eq2}), using a 2D real-time Crank-Nicolson method with the predictor-corrector element and periodic boundary conditions \cite{sala-numerics}. For this purpose, we choose the following initial conditions:
\begin{eqnarray}
\psi (x,y,t=0) &=&\psi _{0},  \label{initial1} \\
\phi (x,y,t=0) &=&\phi _{0}+A~e^{-(x^{2}+y^{2})/\sigma ^{2}},  \label{initial2}
\end{eqnarray}
where $\psi _{0}$ and $\phi _{0}$ are solutions of Eqs. (\ref{cond1}) and (\ref{cond2}), while $A$ and $\sigma $ are parameters of the perturbation, which represents a small circular hole. We here set $\epsilon _{X}=0$, $\epsilon _{C}=1$, $g=1$, $\Gamma =0.75$, $m_{X}=1000$ and $m_{C}=1$. Figure \ref{fig4} displays the spatial profile of the perturbed condensate density, $n_{0}(x,y,t)=|\psi (x,y,t)|^{2}+|\phi (x,y,t)|^{2}$, at different values of the propagation distance (time, in terms of the EP), $t=9$, $t=12$, $t=21$, for initial perturbation (\ref{initial2}) with $A=0.1$ and $\sigma =2$. In the case shown in Fig.
\ref{fig4} the unperturbed amplitudes are $\psi _{0}=1$ and $\phi _{0}=-1$, which correspond in Fig. \ref{fig1} to $\mu =0.25$ (the lower branch). As observed in Fig. \ref{fig4}, the initial perturbation produces a circular pattern which expands with a radial velocity close to the speed of sound, $c_{s}$ (for further technical details, see Ref. \cit {sala-shock}). \begin{figure}[t] \vskip -0.85cm \centerline \epsfig{file=excitons-f4a.eps,height=6.8cm,width=6.8cm,clip=}} \vskip -1.2cm \centerline{\epsfig{file=excitons-f4b.eps,height=6.8cm,width=6.8cm,clip=}} \vskip -1.2cm \centerline \epsfig{file=excitons-f4c.eps,height=6.8cm,width=6.8cm,clip=}} \vskip -0.5cm \caption{(Color online). The evolution of the initial circular perturbation in the form of a small hole produced on top of the stable uniform state, per Eqs. (\protect\ref{initial1}) and (\protect\ref{initial2}). Each panel displays contour plots of the total density, $n_{0}(x,y,t)$, at fixed values of the propagation distance (effective time), $t.$ The top, middle, and bottom panels correspond to $t=9$, $t=15$, and $t=21$, respectively. Parameters are $\protect\epsilon _{X}=0$, $\protect\epsilon _{C}=1$, $g=1$, \Gamma =0.75$. Initial conditions are taken as in Eqs. (\protect\re {initial1}) and (\protect\ref{initial2}) with $\protect\psi _{0}=1$, \protect\phi _{0}=-1$, $A=0.1$ and $\protect\sigma =2$. } \label{fig4} \end{figure} We have also simulated the evolution of unstable configurations -- for instance, with unperturbed amplitudes $\psi _{0}=1$ and $\phi _{0}=1$, which correspond to $\mu =1.75$, i.e., the upper branch in Fig. \ref{fig1}. The evolution is initially similar to that shown in Fig. \ref{fig4}, but later a completely different behavior is observed, with the formation of several circles whose amplitude strongly grows in the course of the unstable evolution. \section{The uniform condensate with a phase gradient} We now analyze the existence and stability of a uniform state with a phase gradient (effective superflow), which corresponds to setting \begin{equation} \left\{ {\psi }(\mathbf{r},t),{\phi }(\mathbf{r},t)\right\} =\left\{ \psi _{0},\phi _{0}\right\} e^{i(\mathbf{q}\cdot \mathbf{r}-\mu t)} \end{equation in Eqs. (\ref{eq1}) and (\ref{eq2}), where $\mathbf{q}=(q_{x},q_{y})$ is the wave vector of the gradient, with $\psi _{0}$ and $\phi _{0}$ given by \begin{eqnarray} \psi _{0}^{2} &=&{\frac{1}{g}}\left( {\tilde{\mu}}_{X}-{\frac{\Gamma ^{2}}{ \tilde{\mu}}_{C}}}\right) \;, \label{cond1-grad} \\ \phi _{0} &=&-{\frac{\Gamma }{{\tilde{\mu}}_{C}}}\,\psi _{0}~, \label{cond2-grad} \end{eqnarray where \begin{equation} {\tilde{\mu}}_{X}=\mu -\epsilon _{X}-{\frac{q^{2}}{2m_{X}}}~,~\ {\tilde{\mu} _{C}=\mu -\epsilon _{C}-{\frac{q^{2}}{2m_{C}}}\;. \end{equation Following the same procedure as developed in the previous section, we derive the quartic dispersion equation, \begin{equation} \alpha _{4}\omega _{\mathbf{k}}^{4}+\alpha _{3}\omega _{\mathbf{k }^{3}+\alpha _{2}\omega _{\mathbf{k}}^{2}+\alpha _{1}\omega _{\mathbf{k }+\alpha _{0}=0\;, \label{bestiale} \end{equation for frequencies $\omega _{k}$ of small excitations on top of the uniform state. 
Here we define
\begin{eqnarray}
\alpha _{0} &=&a_{1}a_{2}c_{1}c_{2}-b^{2}c_{1}c_{2}-a_{1}c_{1}\Gamma ^{2}-a_{2}c_{2}\Gamma ^{2}+\Gamma ^{4}\;,  \notag \\
\alpha _{1} &=&a_{1}a_{2}c_{1}-b^{2}c_{1}-a_{1}a_{2}c_{2}+b^{2}c_{2}+a_{1}c_{1}c_{2}  \notag \\
&-&a_{2}c_{1}c_{2}+a_{1}\Gamma ^{2}-a_{2}\Gamma ^{2}\;,  \notag \\
\alpha _{2} &=&-a_{1}a_{2}+b^{2}+a_{1}c_{1}-a_{2}c_{1}-a_{1}c_{2}  \notag \\
&+&a_{2}c_{2}-c_{1}c_{2}-2\Gamma ^{2}\;,  \notag \\
\alpha _{3} &=&-a_{1}+a_{2}-c_{1}+c_{2}\;,  \notag \\
\alpha _{4} &=&1\;,  \notag
\end{eqnarray}
\begin{eqnarray}
a_{1} &=&k^{2}/(2m_{X})+{\tilde{\mu}}_{X}-2\Gamma ^{2}/{\tilde{\mu}}_{C}+\mathbf{q}\cdot \mathbf{k}/m_{X}\;,  \notag \\
a_{2} &=&k^{2}/(2m_{X})+{\tilde{\mu}}_{X}-2\Gamma ^{2}/{\tilde{\mu}}_{C}-\mathbf{q}\cdot \mathbf{k}/m_{X}\;,  \notag \\
b &=&{\tilde{\mu}}_{X}-\Gamma ^{2}/{\tilde{\mu}}_{C}\;,  \notag \\
c_{1} &=&k^{2}/(2m_{C})+{\tilde{\mu}}_{C}+\mathbf{q}\cdot \mathbf{k}/m_{C}\;,  \notag \\
c_{2} &=&k^{2}/(2m_{C})+{\tilde{\mu}}_{C}-\mathbf{q}\cdot \mathbf{k}/m_{C}\;.  \notag
\end{eqnarray}
In the absence of the linear coupling ($\Gamma =0$), Eq. (\ref{bestiale}) gives the $\mathbf{q}$-dependent gapless Bogoliubov-like spectrum,
\begin{equation}
\omega _{\mathbf{k}}={\frac{\mathbf{q}\cdot \mathbf{k}}{m_{X}}}\pm \sqrt{{\frac{k^{2}}{2m_{X}}}\left( {\frac{k^{2}}{2m_{X}}}+2\left( \mu _{X}-{\frac{q^{2}}{2m_{X}}}\right) \right) },
\end{equation}
and the $\mathbf{q}$-dependent gapped one,
\begin{equation}
\omega _{\mathbf{k}}={\frac{\mathbf{q}\cdot \mathbf{k}}{m_{C}}}\pm \left\vert {\frac{k^{2}+q^{2}}{2m_{C}}}-\mu _{C}\right\vert .
\end{equation}
In the presence of the linear coupling ($\Gamma \neq 0$) one must solve Eq. (\ref{bestiale}) numerically. We direct the $x$ axis along $\mathbf{q}$, hence $\mathbf{q}=(q,0)$. In Fig. \ref{fig5} we plot frequencies $\omega _{(k,0)}$ of longitudinal perturbations, with wave vector $\mathbf{k}=\left( k,0\right) $, for three different values of the flux wavenumber, $q$. For these values of $q$, the imaginary part of the frequencies $\omega _{(k,0)}$ is zero, hence the state is stable. In the first two panels of Fig. \ref{fig5} (with $q=0$ and $q=0.5$) one clearly sees gapped and gapless modes, which are (approximately) symmetric for $k>0$ and $k<0$, which corresponds to excitation waves moving in opposite directions with equal speeds. By increasing $q$ one reaches the Landau critical wavenumber, $q_{L}\simeq 0.72$, at which the gapless mode has zero frequency at a finite value of $k$, and above which there is a finite range of $k$'s where two gapless modes propagate in the same direction, i.e., the phase velocity, $\omega _{(k,0)}/k$, has the same sign for both modes. This is shown in the lower panel of Fig. \ref{fig5} (for $q=1$), where, according to the Landau criterion, the system is not fully superfluid \cite{book-leggett}. A further increase of $q$ leads to the dynamical instability of the gapless modes through the appearance of a nonzero imaginary part of $\omega _{(k,0)}$, as shown in Fig. \ref{fig6}, where both real and imaginary parts of $\omega _{(k,0)}$ are displayed for $q=1.5$. Actually, the sound velocities of the two gapless modes moving in the same direction become equal, so that they may exchange energy and therefore become unstable, at the critical flux wavenumber $q_{c}\simeq 1.15$. The results reported in Figs. \ref{fig5} and \ref{fig6} are obtained from calculations performed at constant $\mu $.
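With the coefficients above, this numerical solution is entirely mechanical. The following Python sketch is an illustrative aside (the function names and the scan range of $k$ are our choices; the $\pm \,\mathbf{q}\cdot \mathbf{k}/m_{X}$ terms in $a_{1,2}$ follow the same convention as in $c_{1,2}$, and the parameter values are those quoted in the text): for longitudinal perturbations $\mathbf{k}=(k,0)$ on top of the state with $\mathbf{q}=(q,0)$ it builds $\alpha _{0},\ldots ,\alpha _{4}$, finds the four roots of Eq. (\ref{bestiale}) with a standard polynomial solver, and reports the largest growth rate $\max_{k}|\mathrm{Im}\,\omega _{(k,0)}|$, whose deviation from zero signals the dynamical instability discussed above.
\begin{verbatim}
import numpy as np

m_X, m_C = 1000.0, 1.0
eps_X, eps_C, g, Gamma, mu = 0.0, 1.0, 1.0, 0.75, 0.25

def roots_bestiale(k, q):
    # tilde mu_X, tilde mu_C and the combinations a_1, a_2, b, c_1, c_2
    tmu_X = mu - eps_X - q**2 / (2 * m_X)
    tmu_C = mu - eps_C - q**2 / (2 * m_C)
    a1 = k**2 / (2 * m_X) + tmu_X - 2 * Gamma**2 / tmu_C + q * k / m_X
    a2 = k**2 / (2 * m_X) + tmu_X - 2 * Gamma**2 / tmu_C - q * k / m_X
    b  = tmu_X - Gamma**2 / tmu_C
    c1 = k**2 / (2 * m_C) + tmu_C + q * k / m_C
    c2 = k**2 / (2 * m_C) + tmu_C - q * k / m_C
    G2 = Gamma**2
    alpha0 = a1*a2*c1*c2 - b**2*c1*c2 - a1*c1*G2 - a2*c2*G2 + G2**2
    alpha1 = (a1*a2*c1 - b**2*c1 - a1*a2*c2 + b**2*c2
              + a1*c1*c2 - a2*c1*c2 + a1*G2 - a2*G2)
    alpha2 = (-a1*a2 + b**2 + a1*c1 - a2*c1 - a1*c2
              + a2*c2 - c1*c2 - 2*G2)
    alpha3 = -a1 + a2 - c1 + c2
    # quartic: alpha4*w^4 + alpha3*w^3 + alpha2*w^2 + alpha1*w + alpha0 = 0
    return np.roots([1.0, alpha3, alpha2, alpha1, alpha0])

def max_growth_rate(q, k_values):
    return max(np.max(np.abs(roots_bestiale(k, q).imag)) for k in k_values)

if __name__ == "__main__":
    ks = np.linspace(0.01, 2.0, 400)
    for q in (0.0, 0.5, 1.0, 1.5):
        print("q =", q, " max |Im omega| =", max_growth_rate(q, ks))
\end{verbatim}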
We have verified that the same phenomenon occurs as well at fixed values of the total density.
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f5.eps,width=8cm,clip=}}
\caption{(Color online). Frequencies of small excitations $\protect\omega _{(k,0)}$ above the stable uniform state with wave vector $\mathbf{q}=(q,0)$ of the phase flux. Parameters are $\protect\epsilon _{X}=0$, $\protect\epsilon _{C}=1$, $g=1$, $\Gamma =0.75$, $m_{X}=1000$, $m_{C}=1$, and $\protect\mu =0.25$. Each line corresponds to a different solution (branch) of Eq. (\protect\ref{bestiale}).}
\label{fig5}
\end{figure}
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f6.eps,width=8cm,clip=}}
\caption{(Color online). Real and imaginary parts, $\mathrm{Re}[\protect\omega _{k}]$ and $\mathrm{Im}[\protect\omega _{k}]$, of excitation frequencies $\protect\omega _{k}$ on top of the unstable uniform state with wave vector $\mathbf{q}=\left( 1.5,0\right) $ of the phase flux. Parameters are $\protect\epsilon _{X}=0$, $\protect\epsilon _{C}=1$, $g=1$, $\Gamma =0.75$, $m_{X}=1000$, $m_{C}=1$, and $\protect\mu =0.25$. Each line corresponds to a different solution (branch) of Eq. (\protect\ref{bestiale}).}
\label{fig6}
\end{figure}
\section{Reduction to the one-dimensional system}
As said above, the 1D lossless EP system may be straightforwardly emulated by the dual-core optical fiber, in which one core is nonlinear, operating near the zero-GVD point, while the other one is linear, carrying nonzero GVD. The accordingly simplified version of Eqs. (\ref{eq1}) and (\ref{eq2}) is written as
\begin{eqnarray}
i{\frac{\partial }{\partial t}}{\tilde{\psi}} &=&\left[ \epsilon _{X}+g\,|{\tilde{\psi}}|^{2}\right] {\tilde{\psi}}+\Gamma \ {\tilde{\phi}}\;,  \label{eq1pip} \\
i{\frac{\partial }{\partial t}}{\tilde{\phi}} &=&\left[ -{\frac{1}{2m_{C}}}{\frac{\partial ^{2}}{\partial x^{2}}}+\epsilon _{C}+V(x)\right] {\tilde{\phi}}+\Gamma \ {\tilde{\psi}}\;,  \label{eq2pip}
\end{eqnarray}
where the tildes stress the reduction to 1D.
\subsection{Stability of the uniform 1D state}
The results shown in Figs. \ref{fig1} and \ref{fig2} are also valid in 1D, bearing in mind that $\psi _{0}^{2}$ and $\phi _{0}^{2}$ are now 1D densities. In Figs. \ref{fig7} and \ref{fig8}, we show the evolution of small perturbations on top of the uniform stable and unstable states, respectively. For this purpose, Eqs. (\ref{eq1pip}) and (\ref{eq2pip}) were simulated by means of the 1D real-time Crank-Nicolson algorithm with the use of the predictor-corrector element \cite{sala-numerics}. The initial conditions were taken as
\begin{eqnarray}
{\tilde{\psi}}(x,t=0) &=&\psi _{0}\;,  \label{initial1pip} \\
{\tilde{\phi}}(x,t=0) &=&\phi _{0}+A~e^{-x^{2}/\sigma ^{2}}.  \label{initial2pip}
\end{eqnarray}
In both Figs. \ref{fig7} and \ref{fig8}, the same parameters of the system are used: $\epsilon _{X}=0$, $\epsilon _{C}=1$, $g=1$, $\Gamma =0.75$. Both figures display spatial profiles of the condensate density, $n_{0}(x,t)=|{\tilde{\psi}}(x,t)|^{2}+|{\tilde{\phi}}(x,t)|^{2}$, at different values of the propagation distance (alias time, in terms of the EP system), $t=0$, $t=3$, $t=9$, $t=12$, generated by the initial perturbation (\ref{initial2pip}), with $A=-0.1$ and $\sigma =2$. In Fig. \ref{fig7}, the unperturbed amplitudes are $\psi _{0}=1$ and $\phi _{0}=-1$, which correspond to $\mu =0.25$, i.e., the lower branch in terms of Fig. \ref{fig1}, while in Fig.
\ref{fig8} the initial amplitudes are $\psi _{0}=1$ and $\phi _{0}=1$, corresponding to $\mu =1.75$ on the upper branch.
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f7.eps,width=8cm,clip=}}
\caption{(Color online) The evolution of a small hole produced, as a perturbation, on top of a stable uniform 1D state. In each panel, density profiles $n\left( x,t\right) $ are displayed. Parameters are: $\protect\epsilon _{X}=0$, $\protect\epsilon _{C}=1$, $g=1$, $\Gamma =0.75$. Initial conditions are given by Eqs. (\protect\ref{initial1}) and (\protect\ref{initial2}) with $\protect\psi _{0}=1$, $\protect\phi _{0}=-1$, $A=0.1$ and $\protect\sigma =2$.}
\label{fig7}
\end{figure}
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f8.eps,width=8cm,clip=}}
\caption{(Color online) The evolution of a small hole produced, as a perturbation, on top of an unstable 1D state. Parameters are $\protect\epsilon _{X}=0$, $\protect\epsilon _{C}=1$, $g=1$, $\Gamma =0.75$. Initial conditions are given by Eqs. (\protect\ref{initial1}) and (\protect\ref{initial2}) with $\protect\psi _{0}=1$, $\protect\phi _{0}=1$, $A=-0.1$ and $\protect\sigma =2$.}
\label{fig8}
\end{figure}
As seen in Fig. \ref{fig7}, the initial perturbation hole splits into two holes traveling in opposite directions with a velocity close to the speed of sound $c_{s}$ (see further technical details in Ref. \cite{sala-shock}). In Fig. \ref{fig8}, the dynamics is initially (at $t\leq 6$) similar to that in Fig. \ref{fig7}, but at $t>6$ it displays a completely different behavior, namely, the formation of strong oscillations, whose amplitude increases in the course of the evolution, indicating a dynamical instability.
\subsection{One-dimensional dark solitons, and a possibility of the existence of vortices in the 2D system}
In the case when the 1D uniform background is stable, it is natural to look for solutions in the form of dark solitons (DSs). As in other systems, the node at the center of the DS is supported by a phase shift of $\pi $ between the wave fields at $x\rightarrow \pm \infty $ \cite{Dark} (as shown below, in the present system the node and the phase shift by $\pi $ exist simultaneously in both fields, $\psi $ and $\phi $). To demonstrate the possibility of the existence of the DS, we substitute the general 1D ansatz for stationary solutions into Eqs. (\ref{eq1pip}) and (\ref{eq2pip}):
\begin{equation}
\left\{ {\tilde{\psi}}(x,t),{\tilde{\phi}}(x,t)\right\} =\left\{ \Psi (x),\Phi (x)\right\} \ e^{-i\mu t}\;,  \label{tilde}
\end{equation}
with functions $\Psi (x)$ and $\Phi (x)$, which may be assumed real, obeying the coupled stationary equations:
\begin{gather}
\Gamma \,\Phi =\left( \mu -\epsilon _{X}\right) {\Psi }-g\,\Psi ^{3},  \label{Psi} \\
\left( \mu -\epsilon _{C}\right) {\Phi }+\frac{1}{2m_{C}}\Phi ^{\prime \prime }-\Gamma \ {\Psi }=0,  \label{Phi}
\end{gather}
where the approximation $1/m_{X}=0$ is adopted, and $\Phi ^{\prime \prime }\equiv d^{2}\Phi /dx^{2}$. It is straightforward to check that Eqs. (\ref{Psi}) and (\ref{Phi}) admit a solution with $\Psi (x)=-\Phi (x)$ only if ${\Phi }^{\prime \prime }\equiv 0$ for any $x$, i.e., DS solutions do not obey this constraint. For the analytical consideration, we assume that the second derivative in Eq. (\ref{Phi}) may be treated as a small term. Then, an approximate solution of Eq.
(\ref{Phi}) is
\begin{equation}
\Phi \approx -\frac{\Gamma }{\epsilon _{C}-\mu }\Psi -\frac{\Gamma }{2m_{C}\left( \epsilon _{C}-\mu \right) ^{2}}\Psi ^{\prime \prime }\;,  \label{PhiPsi}
\end{equation}
and the substitution of expression (\ref{PhiPsi}) into Eq. (\ref{Psi}) leads to the following equation for $\Psi (x)$:
\begin{equation}
-\frac{\Gamma ^{2}}{2m_{C}\left( \epsilon _{C}-\mu \right) ^{2}}\Psi ^{\prime \prime }+g\Psi ^{3}-\left[ \left( \mu -\epsilon _{X}\right) +\frac{\Gamma ^{2}}{\left( \epsilon _{C}-\mu \right) }\right] \Psi =0.  \label{single}
\end{equation}
Equation (\ref{single}) yields a commonly known exact dark-soliton solution:
\begin{align}
\Psi (x)& =\pm \sqrt{\frac{1}{g}\left[ \left( \mu -\epsilon _{X}\right) +\frac{\Gamma ^{2}}{\left( \epsilon _{C}-\mu \right) }\right] }  \notag \\
& \times \tanh \left( \frac{1}{\Gamma }\sqrt{m_{C}\left( \epsilon _{C}-\mu \right) \left[ \left( \mu -\epsilon _{X}\right) \left( \epsilon _{C}-\mu \right) +\Gamma ^{2}\right] }\,x\right) \;.  \label{dark}
\end{align}
\begin{figure}[t]
\centerline{\epsfig{file=excitons-f9a.eps,width=8cm,clip=}}
\centerline{\epsfig{file=excitons-f9b.eps,width=8cm,clip=}}
\caption{(Color online) The dark soliton for $\protect\epsilon _{X}=\protect\mu =0$, $\protect\epsilon _{C}=1$, $\protect\gamma =1$, $\Gamma =0.75$. The exact solution (solid line) is produced by Eq. (\protect\ref{darkmio}), and its approximate counterpart (dashed line) is given by Eq. (\protect\ref{dark}). The top and bottom panels display the excitonic and photonic density profiles, $|\Psi (x)|^{2}$ and $|\Phi (x)|^{2}$, respectively.}
\label{fig9}
\end{figure}
On the other hand, in the special case of $\mu =\epsilon _{X}$, Eq. (\ref{Psi}) can be used to eliminate $\Phi $ in favor of $\Psi $,
\begin{equation}
\Phi =-\left( g/\Gamma \right) \Psi ^{3},
\end{equation}
the remaining equation for $\chi \equiv \Psi ^{3}$ being
\begin{equation}
\frac{1}{2m_{C}}\frac{d^{2}\chi }{dx^{2}}=\left( \epsilon _{C}-\epsilon _{X}\right) \chi -\frac{\Gamma ^{2}}{g}\chi ^{1/3}.  \label{chi}
\end{equation}
If $x$ is formally considered as time, Eq. (\ref{chi}) is Newton's equation of motion for a particle in an effective external potential
\begin{equation}
U_{\mathrm{eff}}(\chi )=\frac{3\Gamma ^{2}}{4g}\chi ^{4/3}-\frac{1}{2}\left( \epsilon _{C}-\epsilon _{X}\right) \chi ^{2}.  \label{U}
\end{equation}
It is obvious that this potential gives rise to a heteroclinic trajectory which connects the two local maxima of potential (\ref{U}), $\chi _{0}=\pm \left[ \Gamma ^{2}/\left( g\left( \epsilon _{C}-\epsilon _{X}\right) \right) \right] ^{3/2}$. An implicit analytical form of $\chi (x)$ for the corresponding solution is given by
\begin{equation}
x=\int_{0}^{\chi (x)}d\xi \sqrt{\frac{2}{\left( 4s_{1}^{3}/27s_{0}^{2}\right) +s_{0}\xi ^{2}-s_{1}\xi ^{4/3}}}\;,  \label{darkmio}
\end{equation}
where $s_{0}=m_{C}(\epsilon _{C}-\epsilon _{X})$ and $s_{1}=3m_{C}\Gamma ^{2}/(2g)$. From $\chi (x)$ one obtains $\Psi (x)=\left[ \chi (x)\right] ^{1/3}$ and $\Phi (x)=-(g/\Gamma )\chi (x)$. In Fig. \ref{fig9} we compare the exact implicit DS solution (solid line), produced by Eq. (\ref{darkmio}), and its approximate counterpart (dashed line) given by Eq. (\ref{dark}). Finally, getting back to the full 2D system of Eqs. (\ref{eq1}) and (\ref{eq2}), and substituting there $\left\{ {\psi }(\mathbf{r},t),{\phi }(\mathbf{r},t)\right\} =\left\{ \Psi (\mathbf{r}),\Phi (\mathbf{r})\right\} e^{-i\mu t}$, cf. Eq.
(\ref{tilde}), we note that the existence of 2D vortices can be predicted by means of the approximation similar to that in Eq. (\ref{PhiPsi}), i.e.,
\begin{equation}
\Phi \approx -\frac{\Gamma }{\epsilon _{C}-\mu }\Psi -\frac{\Gamma }{2m_{C}\left( \epsilon _{C}-\mu \right) ^{2}}\nabla _{\perp }^{2}\Psi .
\end{equation}
The substitution of this into the stationary version of Eq. (\ref{eq1}) with $1/m_{X}=0$ yields
\begin{gather}
-\frac{\Gamma ^{2}}{2m_{C}\left( \epsilon _{C}-\mu \right) ^{2}}\nabla _{\perp }^{2}\Psi +g\left\vert \Psi \right\vert ^{2}\Psi  \notag \\
-\left[ \left( \mu -\epsilon _{X}\right) +\frac{\Gamma ^{2}}{\left( \epsilon _{C}-\mu \right) }\right] \Psi =0,  \label{2Dsingle}
\end{gather}
cf. Eq. (\ref{single}). It is the usual 2D nonlinear Schr\"{o}dinger equation with the self-defocusing nonlinearity, which gives rise to commonly known vortex states \cite{vortices}.
\section{Conclusions}
The objective of this work is to propose the dual-core optical waveguide, with one nonlinear core and one linear dispersive core, as an emulator for the EP (exciton-polariton) system in the lossless limit, which is not currently achievable in semiconductor microcavities. In terms of this model, the first fundamental issue is the MI (modulational instability) of the uniform state. As might be expected, it is found that the uniform states corresponding to the upper and lower branches of the nonlinear dispersion relation are, respectively, unstable and stable. This analytical result is confirmed by direct simulations, which demonstrate the evolution of localized perturbations on top of stable and unstable backgrounds. The excitation modes supported by the stable background are analyzed too, demonstrating the gapless and gapped branches in the spectrum. The stability investigation was generalized for the uniform background with the phase flux, demonstrating that the lower-branch state loses its stability at the critical value of the flux wavenumber. Finally, approximate and exact analytical solutions for stable dark solitons supported by the 1D setting are produced too.
The analysis may be extended in other directions. In particular, a challenging problem is an accurate investigation of 2D vortices, the existence of which is suggested by the approximate equation (\ref{2Dsingle}).
\section*{Acknowledgments}
The authors acknowledge partial support from Universit\`{a} di Padova (grant No. CPDA118083), Cariparo Foundation (Eccellenza grant 11/12), and MIUR (PRIN grant No. 2010LLKJBX). The visit of B.A.M. to Universit\`{a} di Padova was supported by the Erasmus Mundus EDEN grant No. 2012-2626/001-001-EMA2. L.S. thanks F.M. Marchetti for useful e-discussions.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} The problem of determining which geometric configurations one can find inside various subsets of Euclidean space is a classical subject matter. The basic problem is to understand how large a subset of Euclidean space must be to be sure that it contains the vertices of a congruent and possibly scaled copy of a given polyhedron or another geometric shape. In the case of a finite set, ``large" refers to the number of points, while in infinite sets, it refers to the Hausdorff dimension or Lebesgue density. The resulting class of problems has been attacked by a variety of authors using combinatorial, number theoretic, ergodic, and Fourier analytic techniques, creating a rich set of ideas and interactions. We begin with a comprehensive result due to Tamar Ziegler, \cite{Z06} which generalizes an earlier result due to Furstenberg, Katznelson and Weiss \cite{FKW90}. See also \cite{B86}. \begin{theorem} \label{z06} [Ziegler] Let $E \subset {\Bbb R}^d$, of positive upper Lebesgue density in the sense that $$ \limsup_{R \to \infty} \frac{{\mathcal L}^d \{E \cap {[-R,R]}^d \}}{{(2R)}^d}>0, $$ where ${\mathcal L}^d$ denotes the $d$-dimensional Lebesgue measure. Let $E_{\delta}$ denote the $\delta$-neighborhood of $E$. Let $V=\{ {\bf 0}, v^1, v^2, \dots, v^{k-1}\} \subset {\Bbb R}^d$, where $k \ge 2$ is a positive integer. Then there exists $l_0>0$ such that for any $l>l_0$ and any $\delta>0$ there exists $\{x^1, \dots, x^k\} \subset E_{\delta}$ congruent to $lV=\{ {\bf 0}, lv^1, \dots, lv^{k-1}\}$. \end{theorem} In particular, this result shows that we can recover every simplex similarity type and sufficiently large scaling inside a subset of ${\Bbb R}^d$ of positive upper Lebesgue density. It is reasonable to wonder whether the assumptions of Theorem \ref{z06} can be weakened, but the following result due to Maga \cite{Mag10} shows that conclusion may fail even if we replace the upper Lebesgue density condition with the assumption that the set is of dimension $d$. \begin{theorem} \label{mag10} [Maga] For any $d \ge 2$ there exists a full dimensional compact set $A \subset {\Bbb R}^d$ such that $A $ does not contain the vertices of any parallelogram. If $d=2$, then given any triple of points $x^1,x^2,x^3$, $x^j \in A$, there exists a full dimensional compact set $A \subset {\Bbb R}^2$ such that $A$ does not contain the vertices of any triangle similar to $\bigtriangleup x^1x^2x^3$. \end{theorem} In view of Maga's result, it is reasonable to ask whether interesting point configurations can be found inside thin sets under additional structural hypotheses. This question was recently addressed by Chan, \L aba, and Pramanik in \cite{CLP14}. Before stating their result, we provide two relevant definitions. \begin{definition}\label{clpConfiguration} Fix integers $n\geq 2$, $p\geq 3$, and $m= n\lceil \frac{p+1}{2} \rceil$. Suppose $B_1, \dots, B_p$ are $n \times (m-n)$ matrices. 
(a) We say that $E$ contains a $p-$point $\mathcal{B}-$configuration if there exist vectors $z\in \mathbb{R}^n $ and $w\in \mathbb{R}^{m-n}\backslash \vec{0}$ such that $$\{z + B_j w \}_{j=1}^p \subset E.$$ (b) Moreover, given any finite collection of subspaces $V_1,\dots, V_q \subset \mathbb{R}^{m-n}$ with $\dim(V_i) < m-n$, we say that $E$ contains a non-trivial $p-$point $\mathcal{B}-$configuration with respect to $(V_1,\dots, V_q)$ if there exist vectors $z\in \mathbb{R}^n$ and $w\in \mathbb{R}^{m-n}\backslash \cup_{i=1}^{q}V_i$ such that $$\{z + B_j w \}_{j=1}^p \subset E.$$ \\ \end{definition}
\begin{definition}\label{clpRank} Fix integers $n\geq 2$, $p\geq 3$, and $m= n\lceil \frac{p+1}{2} \rceil$. We say that a set of $n\times (m-n)$ matrices $\{ B_1, \dots, B_p\}$ is non-degenerate if \[\mathrm{rank} \left( \begin{array}{c} B_{i_2}-B_{i_1}\\ \vdots\\ B_{i_{m/n}}- B_{i_1}\\ \end{array} \right)=m-n \] for any distinct indices $i_1,\dots,i_{ m/n} \in \{1,\dots,p\}$. \end{definition}
\vskip.125in
\begin{theorem} \label{clp} [Chan, \L aba, and Pramanik] Fix integers $n\geq 2$, $p\geq 3$, and $m= n\lceil \frac{p+1}{2} \rceil$. Let $\{B_1, \dots, B_p\}$ be a collection of $n \times (m-n)$ non-degenerate matrices in the sense of Definition \ref{clpRank}. Then for any constant $C$, there exists a positive number $\epsilon_0 = \epsilon_0(C,n,p,B_1,\dots,B_p) \ll 1$ with the following property: Suppose the set $E \subset \mathbb{R}^n$ with $\left|E \right|=0$ supports a positive, finite, Radon measure $\mu$ with two conditions:
(a) (ball condition) $\sup_{\stackrel{x\in E}{ 0<r<1}} \frac{\mu(B(x,r))}{r^{\alpha}} \le C$ if $n-\epsilon_0 <\alpha < n$,
(b) (Fourier decay) $\sup_{\xi \in \mathbb{R}^n } |\widehat{\mu}(\xi)| (1+ |\xi|)^{\beta/2} \le C.$
Then
\vskip.125in
(i) $E$ contains a $p-$point $\mathcal{B}-$configuration in the sense of Definition \ref{clpConfiguration} (a).
\vskip.125in
(ii) Moreover, for any finite collection of subspaces $V_1, \dots, V_q \subset \mathbb{R}^{m-n}$ with $\dim(V_i) < m-n$, $E$ contains a non-trivial $p-$point $\mathcal{B}-$configuration with respect to $(V_1, \dots, V_q)$ in the sense of Definition \ref{clpConfiguration} (b). \end{theorem}
\vskip.125in
One can check that the Chan-\L aba-Pramanik result covers some geometric configurations but not others. For example, their non-degeneracy condition allows them to consider triangles in the plane, but not simplexes in ${\Bbb R}^3$ where three faces meet at one of the vertices at right angles, forming a three-dimensional corner. Most relevant to this paper is the fact that the conditions under which Theorem \ref{clp} holds are satisfied for chains (see Definition \ref{chaindefinition} below), but the conclusion requires decay properties for the Fourier transform of a measure supported on the underlying set. We shall see that in the case of chains, such an assumption is not needed and the existence of a wide variety of chains can be established under an explicit dimensional condition alone.
\subsection{Focus of this article}
In this paper we establish that a set of sufficiently large Hausdorff dimension, {\it with no additional assumptions}, contains an arbitrarily long chain with vertices in the set and preassigned admissible gaps.
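As a purely illustrative aside (not drawn from \cite{CLP14}), the non-degeneracy condition of Definition \ref{clpRank} is easy to test numerically for a concrete family of matrices: one stacks the differences $B_{i_j}-B_{i_1}$ for every admissible index tuple and checks the rank. The sketch below does this with numpy; the sample family (zero, identity, rotation) is a hypothetical choice for the planar case $n=2$, $p=3$, the case of triangles mentioned above.
\begin{verbatim}
import itertools
import numpy as np

def is_nondegenerate(B, n):
    """Test the rank condition of Definition (clpRank): B is a list of p
    matrices, each of size n x (m - n), with m = n * ceil((p + 1) / 2).
    The family is non-degenerate if every stacked difference matrix
    [B_{i_2} - B_{i_1}; ...; B_{i_{m/n}} - B_{i_1}] has rank m - n."""
    p = len(B)
    m = n * int(np.ceil((p + 1) / 2))
    r = m // n                          # number of indices in each tuple
    for idx in itertools.permutations(range(p), r):
        stacked = np.vstack([B[i] - B[idx[0]] for i in idx[1:]])
        if np.linalg.matrix_rank(stacked) < m - n:
            return False
    return True

# Hypothetical example with n = 2, p = 3 (so m = 4, each B_j is 2 x 2),
# i.e. three-point configurations (triangles) in the plane.
B = [np.zeros((2, 2)), np.eye(2), np.array([[0.0, -1.0], [1.0, 0.0]])]
print(is_nondegenerate(B, n=2))         # True for this choice
\end{verbatim}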
\vskip.125in \begin{definition} \label{chaindefinition} (See Figure 1 above) A $k$-chain in $E \subset {\Bbb R}^d$ with gaps ${\{t_i\}}_{i=1}^k$ is a sequence $$\{x^1,x^2, \dots, x^{k+1}: x^j \in E; \ |x^{i+1}-x^i|=t_i; \ 1 \leq i \leq k\}.$$ \vskip.125in We say that the chain is {\it non-degenerate} if all the $x^j$s are distinct. \end{definition} \begin{figure} \label{chainfigure} \centering \includegraphics[scale=.5]{chain.png} \caption{A 3-chain} \end{figure} \vskip.125in Our main result is the following. \begin{theorem} \label{main} Suppose that the Hausdorff dimension of a compact set $E \subset {\Bbb R}^d$, $d \ge 2$, is greater than $\frac{d+1}{2}$. Then for any $k \ge 1$, there exists an open interval $\tilde{I}$ such that for any ${\{t_i\}}_{i=1}^k \subset \tilde{I}$ there exists a non-degenerate $k$-chain in $E$ with gaps ${\{t_i\}}_{i=1}^k$. \end{theorem} \vskip.125in In the course of establishing Theorem \ref{main} we shall prove the following result which is interesting in its own right and has a number of consequences for Falconer type problems. See \cite{Fal86}, \cite{Erd05} and \cite{W99} for the background and the latest results pertaining to Falconer distance problem. \begin{theorem} \label{almostmain} Suppose that $\mu$ is a compactly supported non-negative Borel measure such that \begin{equation} \label{adupper} \mu(B(x,r)) \leq Cr^{s_{\mu}}, \end{equation} where $B(x,r)$ is the ball of radius $r>0$ centered at $x \in {\Bbb R}^d$, for some $s_{\mu}\in(\frac{d+1}{2}, d]$. Then for any $t_1, \dots, t_k>0$ and $\epsilon>0$, \begin{equation} \label{cbabove} \mu \times \mu \times \dots \times \mu \{(x^1,x^2, \dots, x^{k+1}): t_i -\epsilon \leq |x^{i+1}-x^i| \leq t_i+\epsilon; \ i=1,2, \dots, k \} \leq C\epsilon^k. \end{equation} \end{theorem} \vskip.125in \begin{corollary} \label{chainfalconer} Given a compact set $E \subset {\Bbb R}^d$, $d \ge 2$, $k\geq 1$, define $$ \Delta_k(E)=\left\{|x^1-x^2|, |x^2-x^3|, \dots, |x^k-x^{k+1}|: x^j \in E \right\}.$$ Suppose that the Hausdorff dimension of $E$ is greater than $\frac{d+1}{2}$. Then $${\mathcal L}^k(\Delta_k(E))>0.$$ \end{corollary} \vskip.125in \vskip.125in \begin{remark} Suppose that $E \subset {\Bbb R}^d$ has Hausdorff dimension $s>\frac{d+1}{2}$ and is \textbf{Ahlfors-David regular}, i.e. there exists $C>0$ such that for every $x \in E$, $$ C^{-1}r^s \leq \mu(B(x,r)) \leq Cr^s,$$ (where $\mu$ is the restriction of the $s$-dimensional Hausdorff measure to $E$). Then using the techniques in \cite{EIT11} along with Theorem \ref{almostmain}, one can show that for any sequence of positive real numbers $t_1, t_2, \dots, t_k$, the upper Minkowski dimension of $$ \{(x^1, x^2, \dots, x^{k+1}) \in E^{k+1}: |x^{j+1}-x^j|=t_j; \ 1 \leq j \leq k\}$$ does not exceed $(k+1)dim_{{\mathcal H}}(E)-k$. \end{remark} \vskip.125in \subsection{Acknowledgements} The authors wish to thank Shannon Iosevich for her help with the diagrams used in this paper. The authors also wish to thank Fedja Nazarov and Jonathan Pakianathan for helpful discussions related to the subject matter of this article. \vskip.25in \section{Proof of Theorem \ref{main} and Theorem \ref{almostmain}} \vskip.125in The strategy for this section is as follows: \vspace{.25 in} We begin by dividing both sides of equation \eqref{cbabove} by $\epsilon^k$. 
The left side becomes \begin{equation}\label{density} \epsilon^{-k}\mu \times \dots \times \mu \{(x^1,\dots, x^{k+1}): t_i-\epsilon \leq |x^{i+1}-x^i| \leq t_i+\epsilon; \ i=1,2, \dots, k \}, \end{equation} which can be interpreted as the density of $\epsilon$-approximate chains in $E \times \ldots \times E$. \vspace{.125 in} \noindent Theorem \ref{almostmain} gives an upper bound on this expression that is independent of $\epsilon$. This is accomplished using an inductive argument on the chain length coupled with repeated application of an earlier result from \cite{ISTU14} in which the authors establish $L^2(\mu)$ mapping properties of certain convolution operators. This upper bound is important in the final section where we define a measure on the set of chains. Next, we acquire a lower bound on \eqref{density}. This result was already established in the case that $k=1$ in \cite{IMT12} where the authors show that the density of $\epsilon$-approximate $1$-chains with gap size $t$ is bounded below independent of $\epsilon$ for all $t$ in a non-empty open interval, $I$. Using a pigeon-holing argument, we extend the result in \cite{IMT12} to obtain a lower bound on \eqref{density} in the case that every gap is of equal size, $t$, for some $t\in I$. To obtain a lower bound on chains with variable gap size, we show that the density of $\epsilon$-approximate $k$-chains is continuous as a function of gap sizes. Furthermore, we use the lower bound on chains with constant gaps to prove that this continuous function is not identically zero. We conclude that the density of $\epsilon$-approximate $k$-chains is bounded below independent of $\epsilon$ and independent of the gap sizes, as long as all gap sizes fall within some interval $\tilde{I}$ around $t$. \vskip.125in In the final section, we address the issue of non-degeneracy. To this end, we reinterpret the density of $\epsilon$-approximate $k$-chains as a measure supported in $E^{k+1}$, and show that it converges to a new measure, $\Lambda_{\vec{t}}^k$, as $\epsilon \downarrow 0$. This new measure is shown to be supported on ``exact" $k$-chains ($\epsilon = 0$) with admissible gaps. We next show that the measure of the set of degenerate chains is $0$, and we conclude that the mass of $\Lambda_{\vec{t}}^k$ is contained in non-degenerate $k-$chains. \vspace{.25 in} We shall repeatedly use the following result due to Iosevich, Sawyer, Taylor, and Uriarte-Tuero \cite{ISTU14}. \begin{theorem} \label{maintool} Let $T_{\lambda}f(x)=\lambda*(f\mu)(x)$, where $\lambda, \mu$ are compactly supported non-negative Borel measures on ${\Bbb R}^d$. Suppose that $\mu$ satisfies \eqref{adupper} and for some $\alpha>0$ $$ |\widehat{\lambda}(\xi)| \leq C{|\xi|}^{-\alpha}.$$ Suppose that $\nu$ is a compactly supported Borel measure supported on ${\Bbb R}^d$ satisfying \eqref{adupper} with $s_{\mu}$ replaced by $s_{\nu}$ and suppose that $\alpha>d-s$, where $s=\frac{s_{\mu}+s_{\nu}}{2}$. Then $$ {||T_{\lambda}f||}_{L^2(\nu)} \leq c {||f||}_{L^2(\mu)}.$$ \end{theorem} \vskip.125in In this article, we will use Theorem \eqref{maintool} with $\lambda=\sigma,$ the surface measure on a $(d-1)$-dimensional sphere in $\mathbb{R}^d$. It is known, see \cite{St93}, that $$\widehat{\sigma}(\xi) =O(|\xi|^{-(d-1)/2}).$$ \vskip.125in Since the proof of Theorem \ref{maintool} is short, we give the argument below for the sake of keeping the presentation as self-contained as possible. 
It is enough to show that $$ \langle T_{\lambda^{\epsilon}}f, g\nu \rangle \leq C{||f||}_{L^2(\mu)} \cdot {||g||}_{L^2(\nu)}.$$ The left hand side equals $$ \int \hat{\lambda}^{\epsilon}(\xi) \widehat{f\mu}(\xi) \widehat{g\nu}(\xi) d\xi.$$ By the assumptions of Theorem \ref{maintool}, the modulus of this quantity is bounded by $$ C \int {|\xi|}^{-\alpha} |\widehat{f\mu}(\xi)| |\widehat{g\nu}(\xi)| d\xi,$$ and applying Cauchy-Schwarz bounds this quantity by \begin{equation} \label{2square} C {\left( \int {|\widehat{f\mu}(\xi)|}^2 {|\xi|}^{-\alpha_{\mu}} d\xi \right)}^{\frac{1}{2}} \cdot {\left( \int {|\widehat{g \nu}(\xi)|}^2 {|\xi|}^{-\alpha_{\nu}} d\xi \right)}^{\frac{1}{2}} \end{equation} for any $\alpha_{\mu}, \alpha_{\nu}>0$ such that $\alpha=\frac{\alpha_{\mu}+\alpha_{\nu}}{2}$. \vskip.125in By Lemma (\ref{fenergy}) below, the quantity (\ref{2square}) is bounded by $C {||f||}_{L^2(\mu)} \cdot {||g||}_{L^2(\nu)}$ after choosing, as we may, $\alpha_{\mu}>d-s_{\mu}$ and $\alpha_{\nu}>d-s_{\nu}$. This completes the proof of Theorem \ref{maintool}. \vskip.25in \vskip.25in \subsection{Proof of Theorem \ref{almostmain} and Corollary \ref{chainfalconer}} Let $\epsilon>0$. Divide both sides of (\ref{cbabove}) by $\epsilon^k$, and note that it suffices to establish the estimate \begin{equation}\label{up} C_k^{\epsilon}(\mu)=\int \left( \prod_{i=1}^k \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i) \right) d\mu(x^{k+1}) \le c^k,\end{equation} where $c$ is independent of $\epsilon$, and $t_1, \ldots,t_ k>0$. Here $\sigma_r^{\epsilon}(x)=\sigma_r*\rho_{\epsilon}(x)$, with $\sigma_r$ the Lebesgue measure on the sphere of radius $r$, $\rho$ a smooth cut-off function with $\int \rho=1$ and $\rho_{\epsilon}(x)=\epsilon^{-d} \rho \left( \frac{x}{\epsilon} \right)$. Assume in addition that $\rho$ is non-negative and that $\rho(x) =\rho(-x)$. \vskip.125in \vskip.125in Let $\sigma$ denote the Lebesgue measure on the $(d-1)-$dimensional sphere in $\mathbb{R}^d$. Set $T_j^\epsilon = T_{\sigma_{t_{j}}}^\epsilon$, where $T_{\sigma_{t_j}}^{\epsilon}f(x) = \sigma_{t_j}*(f\mu)(x)$ was introduced in Theorem \ref{maintool}. Define \begin{equation}\label{fk}f_k^{\epsilon}(x) = T_k^\epsilon \circ \cdots \circ T_1^{\epsilon}(1)(x),\end{equation} and $$f_0^{\epsilon}(x)=1.$$ It is important to note that $f_k(x)$ depends implicitly on the choices of $t_1, \dots, t_k>0$, and this choice will be made explicit throughout. \vskip.125in Observe that \begin{equation}\label{beauty}f_{k+1}^{\epsilon}=T_{k+1}^{\epsilon}f_k^{\epsilon}.\end{equation} \vskip.125in Re-writing the left-hand-side of \eqref{up}, it suffices to show \begin{equation}\label{upup}C_k^{\epsilon}(\mu) = \int f_k^{\epsilon}(x) d\mu(x)\le c^k.\end{equation} Using Cauchy-Schwarz (and keeping in mind that $\int d\mu(x) = 1$), we bound the left-hand-side of \eqref{upup} by \begin{equation}C_k^{\epsilon}(\mu) = \int f_k^{\epsilon}(x) d\mu(x)\le \|f_k^{\epsilon}\|_{L^2(\mu)} .\end{equation} We now use induction on $k$ to show that \begin{equation}\label{mini}\|f_k^{\epsilon}\|_{L^2(\mu)} \le c^{k},\end{equation} where $c$ is the constant obtained in Theorem \ref{maintool}. For the base case, $k=0$, we have $\|f_0^{\epsilon}\|_{L^2(\mu)}= \int d\mu(x) =1.$ Next, we assume inductively that $\|f_k^{\epsilon}\|_{L^2(\mu)} \le c^{k}$. 
We now show that, for any $t_{k+1}>0$, $$\|f_{k+1}^{\epsilon}\|_{L^2(\mu)}\le c^{k+1}.$$ First, use \eqref{beauty} to write $$\|f_{k+1}^{\epsilon}\|_{L^2(\mu)}=||T_{k+1}^{\epsilon}f_k^\epsilon||_{L^2(\mu)}.$$ Next, use Theorem \ref{maintool} with $\lambda=\sigma$, the Lebesgue measure on the sphere, and $\alpha = \frac{d-1}{2}$ (see the comment immediately following Theorem \ref{maintool} to justify this choice of $\alpha$) to show that $$||T_{k+1}^{\epsilon}f_k^\epsilon||_{L^2(\mu)} \le c\|f_{k}^{\epsilon}\|_{L^2(\mu)}$$ whenever $s_{\mu}>d-\alpha =\frac{d+1}{2}$. We complete the proof by applying the inductive hypothesis. This completes the verification of \eqref{mini}. \\
We now recover Corollary \ref{chainfalconer}. Let $s_{\mu} \in \left(\frac{d+1}{2}, \dim(E)\right)$, and choose a probability measure, $\mu$, with support contained in $E$ which satisfies \eqref{adupper}; the existence of such a measure is provided by Frostman's lemma (see \cite{Falc86}, \cite{W04} or \cite{M95}). Cover $\Delta_k(E)$ with cubes of the form $$ \bigcup_i \prod_{j=1}^k (t_{ij}, t_{ij}+\epsilon_i),$$ where $\prod$ denotes the Cartesian product. We have
\begin{align*} \label{covering}
1&=\mu\times \cdots \times \mu(E^{k+1})\\
& \leq \sum_i \mu \times \cdots \times \mu \{(x^1, \dots, x^{k+1}): t_{ij} \leq |x^{j+1}-x^j| \leq t_{ij}+\epsilon_i; \ 1 \leq j \leq k \}.
\end{align*}
By Theorem \ref{almostmain}, each term of the sum above is bounded by $C\epsilon_i^k$, so that
\begin{equation} \label{ksum}
1 \leq C \sum_i \epsilon_i^k,
\end{equation}
and we conclude that $\sum_i \epsilon_i^k$ is bounded from below by $\frac{1}{C}>0$. It follows that $\Delta_k(E)$ cannot have measure $0$ and the proof of Corollary \ref{chainfalconer} is complete.
\vskip.25in
We now continue with the proof of Theorem \ref{main}.
\subsection{Lower bound on $C^\epsilon_k(\mu)$}
Let $s_{\mu} \in \left(\frac{d+1}{2}, \dim(E)\right)$, and choose a probability measure, $\mu$, with support contained in $E$ which satisfies \eqref{adupper}.
\vskip.125in
We now establish the existence of a non-empty open interval $\tilde{I}$ such that
\begin{equation} \label{lowerboundest}
\liminf_{\epsilon \to 0} C^{\epsilon}_k(\mu) >0
\end{equation}
where each $t_i$ belongs to $\tilde{I}$, and $C^\epsilon_k(\mu)$ is as in \eqref{up}. Note that this positive lower bound alone establishes the existence of vertices $x^1, \dots, x^{k+1} \in E$ so that $|x^{i+1}-x^{i}|=t_i$ for each $i\in \{1,\dots, k\}$ (this follows, for instance, by Cantor's intersection theorem and the compactness of the set $E$). Extra effort is made in the next section in order to guarantee that we may take $x^1, \dots, x^{k+1}$ distinct. We first prove estimate \eqref{lowerboundest} in the case that all gaps are equal. This is accomplished using a pigeonholing argument on chains of length one. We then provide a continuity argument to show that the estimate holds for variable gap values, $t_i$, belonging to a non-empty open interval $\tilde{I}$. The second argument relies on the first precisely at the point when we show that the said continuous function is not identically equal to zero.
\vskip.125in
\noindent \textbf{Lower bound for constant gaps:} \\
The proof of estimate \eqref{lowerboundest} in the case when $k=1$ was already established in \cite{IMT12}, provided that $\mu$ satisfies the ball condition in \eqref{adupper} with $\frac{d+1}{2}<s_{\mu}<\dim_{{\mathcal H}}(E)$. The existence of such measures is established by Frostman's lemma (see e.g. \cite{Falc86}, \cite{W04} or \cite{M95}).
More specifically, it is demonstrated in \cite{IMT12} that there exists $c(1)>0$, $\epsilon_0>0$, and a non-empty open interval $I \subset (0, diameter(E))$ so that if $t\in I$ and $0<\epsilon < \epsilon_0$, then $$C_1^{\epsilon}= \int \sigma_t^{\epsilon}*\mu(x) d\mu(x)> 2c(1).$$ To establish estimate \eqref{lowerboundest} for longer chains, we rely on the following lemmas. \begin{lemma}\label{babylower} Set $$G_{t,\epsilon}(1) = \{x\in E: \sigma_t^{\epsilon}*\mu(x) >c(1)\}.$$ There exists $m(1) \in \mathbb{Z}^+$ so that if $t\in I$ and $0<\epsilon <\epsilon_0$, then $$\mu( G_{t,\epsilon}(1) ) \geq 2^{-2m(1)}.$$ \end{lemma} \vskip.125in \begin{lemma}\label{mamalower} Set $$G_{t,\epsilon}(j+1) = \{x\in E: \sigma_t^{\epsilon}*\mu|_j(x) >c(j+1)\},$$ where $j \in \{1,\cdots, (k-1)\}$, $\mu|_j(x)$ denotes restriction of the measure $\mu$ to the set ${G_{t, \epsilon }(j)}$, and $$c(j+1) =\frac{1}{2}c(j)\mu(G_{t,\epsilon}(j)).$$ Then there exists $m(j+1) \in \mathbb{Z}^+$ so that if $t\in I$ and $0<\epsilon <\epsilon_0$, then $$\mu( G_{t,\epsilon}(j+1) )> 2^{-2m(j+1)}.$$ \end{lemma} \vskip.125in We postpone the proof of Lemmas \ref{babylower} and \ref{mamalower} momentarily, and we apply these lemmas to obtain a lower bound on $C_{k}^{\epsilon}(\mu)$. \\ We write $$C_{k}^{\epsilon}(\mu) = \int f_k^{\epsilon}(x)d\mu(x),$$ where $f_k^{\epsilon}$ was introduced in \eqref{fk} and here $t_1=\cdots =t_k =t$. \vskip.125in Now \begin{align*} C_{k}^{\epsilon}(\mu) =& \int f_k^{\epsilon}(x)d\mu(x)\\ =& \iint \sigma_{t}^{\epsilon}(x-y) f_{k-1}(y) d\mu(y) d\mu(x)\\ \end{align*} Integrating in $x$ and restricting the variable $y$ to the set $G_{t,\epsilon}(1)$, we write \begin{align*} C_{k}^{\epsilon}(\mu) \geq & \int_{G_{t, \epsilon}(1)} \sigma_t^{\epsilon}*\mu(y) f_{k-1}(y) d\mu(y) \\ \geq & \, c(1) \int_{G_{t, \epsilon}(1)} f_{k-1}(y) d\mu(y) \\ =& \, c(1) \int f_{k-1}(y) d\mu_1(y). \end{align*} To achieve a lower bound, we iterate this process. For each $j\in \{2, \dots, k-1\}$ we have: \begin{align*} & \int f_{k-j}^{\epsilon}(x)d\mu_j(x) \\ = & \iint \sigma_{t}^{\epsilon}(x-y) f_{k-j-1}(y) d\mu(y) d\mu_j(x) \\ \geq & \int_{G_{t, \epsilon}(j+1)} \sigma_t^{\epsilon}*\mu_j(y) f_{k-j-1}(y) d\mu(y) \\ \geq & \, c(j+1) \int_{G_{t, \epsilon}(j+1)} f_{k-j-1}(y) d\mu(y) \\ =& \, c(j+1) \int f_{k-j-1}(y) d\mu_{j+1}(y) \\ \end{align*} It follows that \begin{align*} C_{k}^{\epsilon}(\mu) &\geq \left(\prod_{j=1}^{k-1}c(i)\right) \int \int \sigma_t^{\epsilon}(x-y) d\mu_{k-1}(y) d\mu(x)\\ &\geq \left(\prod_{j=1}^{k}c(i)\right) \mu(G_{t,\epsilon}(k)),\\ \end{align*} and we are done in light of Lemma \ref{mamalower}. Given Lemmas \ref{babylower} and \ref{mamalower}, we have shown that for all $t\in I$ and for all $0<\epsilon<\epsilon_0$, we have \begin{equation}\label{upperconstant}\liminf_{\epsilon \to 0} C_k^{\epsilon}(\mu)> 0,\end{equation} where all gap lengths, $t_1, \dots, t_k$ constantly equal to $t$. This concludes the proof of estimate \eqref{lowerboundest} in the case of constant gaps. \vskip.125in \noindent We now proceed to the proofs of Lemmas \ref{babylower} and \ref{mamalower}. 
\begin{proof}(Lemma \ref{babylower})\\ We write \begin{align*} 2c(1) &< \int \sigma_t^{\epsilon}*\mu(x) d\mu(x) \\ & \le \left(\int_{(G_{t,\epsilon}(1))^c} \sigma_t^{\epsilon}*\mu(x) d\mu(x) \right) + \left( \int_{G_{t,\epsilon}(1)} \sigma_t^{\epsilon}*\mu(x) d\mu(x) \right) \\ &=\mathcal{I} + \mathcal{II}\\ \end{align*} where $A^c$ denotes the compliment of a set $A\subset E.$ \vspace{.125 in} \noindent We first observe that $$\mathcal{I} \le c(1).$$ Next, we estimate $\mathcal{II}$. Let $m\in \mathbb{Z}^+$, and write $$G_{t,\epsilon}(1) = \{x\in E: c(1) < \sigma_t^{\epsilon}*\mu(x) \le 2^m\} \cup \{x\in E: 2^m \le \sigma_t^{\epsilon}*\mu(x) \}.$$ Then \begin{align*} \mathcal{II} &= \left( \int_{\{x\in E: c(1) < \sigma_t^{\epsilon}*\mu(x) \le 2^m\}} \sigma_t^{\epsilon}*\mu(x) d\mu(x) \right) + \left( \int_{\{x\in E: 2^m \le \sigma_t^{\epsilon}*\mu(x) \}} \sigma_t^{\epsilon}*\mu(x) d\mu(x) \right) \\ & \le 2^m \mu( G_{t,\epsilon}(1)) +\left( \sum_{l=m} 2^{l+1}\cdot \mu ( \{x\in E: 2^l \le \sigma_t^{\epsilon}*\mu(x) \le 2^{l+1}\}) \right) . \\ \end{align*} We use Theorem \ref{maintool} to estimate $$\mu ( \{x\in E: 2^l \le \sigma_t^{\epsilon}*\mu(x) \le 2^{l+1}\}) \le c_d \cdot 2^{-2l},$$ where the constant $c_d$ depends only on the ambient dimension $d$. Now, \begin{align*} \mathcal{II} &\le 2^m \mu( G_{t,\epsilon}(1)) + \left(2 c_d \cdot \sum_{l=m} 2^{l}\cdot 2^{-2l}\right) \\ & \lesssim 2^m \mu( G_{t,\epsilon}(1)) + 2^{-m}. \\ \end{align*} It follows that $$2c(1) \le \mathcal{I} + \mathcal{II} \lesssim c(1) + 2^m \mu( G_{t,\epsilon}(1)) + 2^{-m}. $$ Taking $m \in \mathbb{Z}^+$ large enough, we conclude that $$\mu( G_{t,\epsilon}(1)) \geq 2^{-2m}.$$ \end{proof} \begin{proof}(Lemma \ref{mamalower})\\ We prove the Lemma by induction on $j$. The base case, $j=1$, was established in Lemma \ref{babylower}. Next, assume that there exists $ m(j) \in \mathbb{Z}^+$ such that $$2^{-m(j)} < \mu(G_{t,\epsilon}(j) )$$ for all $0<\epsilon< \epsilon_0$ and $t\in I$. \\ By the definition of $G_{t, \epsilon}(j)$, \begin{align*} c(j)\mu(G_{t,\epsilon}(j)) &< \int_{G_{t, \epsilon}(j)} \sigma^{\epsilon}_t*\mu|_{G_{t,\epsilon}(j-1)} (x) d\mu(x) . \\ \end{align*} Set $c(j+1) = \frac{1}{2}c(j)\mu(G_{t,\epsilon}(j))$. By assumption, $2c(j+1)= c(j)\mu(G_{t,\epsilon}(j) \geq c(j)2^{-m(j)}$, and in particular this quantity is positive. Next, we obtain a bound from above: \begin{align*} \int_{G_{t, \epsilon}(j)} \sigma^{\epsilon}_t*\mu|_{G_{t,\epsilon}(j-1)} (x) d\mu(x) & \le \int_{G_{t, \epsilon}(j)} \sigma^{\epsilon}_t*\mu (x) d\mu(x) \\ &= \int \sigma^{\epsilon}_t*\mu|_{j} (x) d\mu(x) \\ &= \left(\int_{(G_{t, \epsilon}(j+1))^c} \sigma^{\epsilon}_t*\mu|_{j} (x) d\mu(x)\right) + \left(\int_{G_{t, \epsilon}(j+1)} \sigma^{\epsilon}_t*\mu|_{j} (x) d\mu(x)\right) \\ &=\mathcal{I} + \mathcal{II}. \\ \end{align*} First we observe that $$\mathcal{I} \le c(j+1).$$ Next, we estimate $\mathcal{II}.$ Let $m\in \mathbb{Z}^+$, and write $$G_{t,\epsilon}(j+1) = \{x\in E: c(j+1) < \sigma_t^{\epsilon}*\mu|_{j}(x) \le 2^m\} \cup \{x\in E: 2^m \le \sigma_t^{\epsilon}*\mu|_{j}(x) \}.$$ Then \begin{align*} \mathcal{II} &= \left( \int_{\{x\in E: c(j+1) < \sigma_t^{\epsilon}*\mu|_{j}(x) \le 2^m\}} \sigma_t^{\epsilon}*\mu|_{j}(x) d\mu(x) \right) + \left( \int_{\{x\in E: 2^m \le \sigma_t^{\epsilon}*\mu(x) \}} \sigma_t^{\epsilon}*\mu|_{j}(x) d\mu(x) \right) \\ & \le 2^m \cdot \mu( G_{t,\epsilon}(j+1)) +\left( \sum_{l=m} 2^{l+1}\cdot \mu ( \{x\in E: 2^l \le \sigma_t^{\epsilon}*\mu|_{j}(x) \le 2^{l+1}\}) \right) . 
\\ \end{align*} We use Theorem \ref{maintool} to estimate $$\mu ( \{x\in E: 2^l \le \sigma_t^{\epsilon}*\mu|_{j}(x) \le 2^{l+1}\}) \le c_d \cdot 2^{-2l},$$ where the constant $c_d$ depends only on the ambient dimension $d$ and the choice of the measure $\mu$. Now, \begin{align*} \mathcal{II} &\le 2^m \mu( G_{t,\epsilon}(j+1)) + \left(2 c_d \cdot \sum_{l=m} 2^{l}\cdot 2^{-2l}\right) \\ & \lesssim 2^m \mu( G_{t,\epsilon}(j+1)) + 2^{-m}. \\ \end{align*} It follows that $$2c(j+1) \le \mathcal{I} +\mathcal{II} \lesssim c(j+1) + 2^m \mu( G_{t,\epsilon}(j+1)) + 2^{-m}. $$ Taking $m \in \mathbb{Z}^+$ large enough, we conclude that $$\mu( G_{t,\epsilon}(j+1)) \geq 2^{-2m}.$$ \end{proof} \vskip.125in \noindent \textbf{Lower bound for variable gaps} \smallskip We now verify \eqref{lowerboundest} in the case of variable gap lengths. In more detail, we show that, for all $k \in \mathbb{Z}^+$ and for values of $t_i$ in a non-empty open interval $\tilde{I}$, we have \begin{equation}\label{repeat}\liminf_{\epsilon \to 0} \int f_k^\epsilon (x)d\mu(x) > 0,\end{equation} where $f_k^{\epsilon}$ is defined in \eqref{fk} with $0<t_1,\dots, t_k \in \tilde{I}$. \vskip.125in The following lemma captures the strategy of proof, and establishes \eqref{repeat}. \begin{lemma} \label{epsLimit} \begin{equation}\label{rewrite}C^{\epsilon}_k(\mu)=\int f_{k}^\epsilon (x)d\mu(x) =M_k(t_1, \dots, t_k) - \sum_{j=1}^{k} R_{k,j}^{\epsilon}(t_1,\dots, t_k),\end{equation} where \begin{equation}\label{continous}M_k(t_1,t_2,\cdots, t_k) = \int \hat{\sigma}_{t_{k}}(\xi) \widehat{f_{k-1}\mu}(-\xi)\hat{\mu}(\xi ) d \xi\end{equation} is continuous and bounded below by a positive constant (independent of $\epsilon$) on $\tilde{I}\times \cdots \times \tilde{I}$, for a non-empty open interval $\tilde{I}$, and \begin{align}\label{remainder} R_{k,j}^{\epsilon}(t_1,t_2,\cdots, t_k) =& \int \hat{\sigma}(t_{j}\xi) \left(1-\hat{\rho}(\epsilon\xi) \right) \widehat{f_{j-1}\mu}(\xi)\widehat{g^{\epsilon}_{j+1}\mu}(-\xi)d \xi\\ =& \mathcal{O}\left(\epsilon^{\alpha\left(s-\frac{d+1}{2}\right)}\right) \end{align} for some $\alpha>0$. \end{lemma} \vskip.125in In proving the lemma, we utilize the following notation: \begin{equation}\label{back}g_j^{\epsilon}(x) = T_j^{\epsilon}\circ \cdots \circ T_k^{\epsilon}(1)(x),\end{equation} and $$g_{k+1}(x)=1.$$ It is important to note that $g_j(x)$ depends implicitly on the choices of $t_1, \dots, t_k>0$, and this choice will be made explicit throughout. First, we demonstrate equation \eqref{rewrite} with repeated use of Fourier inversion. We again employ a variant of the argument in \cite{IMT12}. Write \begin{align*} \int f_{k}^\epsilon (x)d\mu(x) &= \int \int \sigma_{t_{1}}^\epsilon (x-y) g^{\epsilon}_{2}(y)d\mu(x)d\mu(y)\\ &=\int \int (\sigma_{t_{1}} * \rho_\epsilon)(x-y) g_{2}^{\epsilon}(y)d\mu(x)d\mu(y). 
\end{align*} \vskip.125in \noindent Using Fourier inversion and properties of the Fourier transform, this is equal to $$\int \int \int e^{2\pi i(x-y) \cdot \xi}\hat{\sigma}_{t_{1}}(\xi)\hat{\rho}_\epsilon(\xi) g_{2}^{\epsilon}(y)d\mu(x)d\mu(y)d \xi.$$ \vskip.125in \noindent Simplifying further, we write \begin{align*} & \int f_{k}^\epsilon (x)d\mu(x)\\ =&\int \hat{\sigma}_{t_{1}}(\xi) \hat{\rho}(\epsilon\xi) \hat{\mu}(\xi)\widehat{g_{2}^{\epsilon}\mu}(-\xi)d \xi \\ =&\left(\int \hat{\sigma}_{t_{1}}(\xi)\hat{\mu}(\xi)\widehat{g_{2}^{\epsilon}\mu}(-\xi)d \xi\right) + \left(\int \hat{\sigma}_{t_{1}}(\xi) \left( 1- \hat{\rho}(\epsilon\xi) \right) \hat{\mu}(\xi)\widehat{g_{2}^{\epsilon}\mu}(-\xi)d \xi\right) \\ =&\left(\int \hat{\sigma}_{t_{1}}(\xi)\hat{\mu}(\xi)\widehat{g_{2}^{\epsilon}\mu}(-\xi)d \xi\right) + R_{k,1}^{\epsilon}(t_1,t_2,\cdots, t_k) \end{align*} With repeated use of Fourier inversion, we get \begin{align*} & \int f_{k}^\epsilon (x)d\mu(x)\\ =&\left( \int\hat{ \sigma}_{t_j}(\xi) \cdot \widehat{ f_{j-1}\mu}(-\xi) \cdot \widehat{g^{\epsilon}_{j+1}\mu}(\xi) d\xi\right) + \sum_{l=1}^j R_{k,l}^{\epsilon}(t_1,t_2,\cdots, t_k)\\ =& \, \, \cdots \\ =&\left( \int\widehat{ \sigma}_{t_k}(\xi) \cdot \widehat{f_{k-1}\mu}(-\xi) \cdot \widehat{\mu}(\xi) d\xi\right) + \sum_{l=1}^k R_{k,l}^{\epsilon}(t_1,t_2,\ldots, t_k)\\ =& \, \,M_k(t_1,t_2,\cdots, t_k) \, \, + \, \, \sum_{l=1}^k R_{k,l}^{\epsilon}(t_1,t_2,\ldots, t_k) \end{align*} We now prove that $M_k(t_1,t_2,\ldots, t_k)$ is continuous on any compact set away from $(t_1,\dots,t_k)=\vec{0}$ and that \begin{equation}\label{smallr}R_{k,j}^{\epsilon}(t_1,\ldots, t_k)= \mathcal{O}\left(\epsilon^{\alpha\left(s-\frac{d+1}{2}\right)} \right).\end{equation} Once these are established, we observe that the lower bound on constant chains established in \eqref{upperconstant} combined with \eqref{smallr} imply that $M_k(t_1,\dots,t_k)$ is positive when $t_1 =\cdots =t_k =t$ for any given $t \in I$. Fixing any such $t\in I$, it will then follow by continuity that $M_k(t_1,\dots, t_k)$ is bounded from below on $\tilde{I}\times \cdots \times \tilde{I}$ where $\tilde{I}$ is a non-empty interval. \vspace{.125 in} We now use the Dominated Convergence Theorem to verify the continuity of $M_k(t_1, \dots, t_k)$ on any compact set away from $(t_1,\dots,t_k)=\vec{0}$. Let $t_1, \cdots, t_k>0$. Using properties of the Fourier transform and recalling the definition of $f_j$ from \eqref{fk} and $g_j$ from \eqref{back}, we write $$M_k(t_1,t_2,\cdots, t_k)=\int\widehat{ \sigma}_{t_j}(\xi) \cdot \widehat{f_{j-1} \mu}(-\xi) \cdot \widehat{ g_{j+1}\mu}(\xi) d\xi$$ for any $j \in \left\{ 1, \ldots, k \right\}$. \vskip.125in Let $h_1, \ldots, h_k \in \mathbb{R}$ so that $(h_1, \ldots, h_k) \downarrow 0$. Let $$\tilde{f_j}= T_{t_{j} + h_{j}}\circ \cdots \circ T_{t_1 + h_1}(1) $$ and $$\tilde{g_j}= T_{t_{j}+ h_{j}}\circ \cdots \circ T_{t_k + h_k}(1).$$ We have \\ \begin{align*} &M_k(t_1 + h_1,t_2 + h_2,\cdots, t_k + h_k)\\ &=\int\widehat{ \sigma}_{t_j + h_j}(\xi) \cdot \widehat{\tilde{f}_{j-1}\mu}(-\xi) \cdot \widehat{\tilde{g}_{j+1} \mu}(\xi) d\xi. \\ \end{align*} The integrand goes to $0$ as $h_j$ goes to $0$. Now for $t_j$ in a compact set, the expression above is bounded by $$C(t_j) \int |\xi|^{-(d-1)/2} \left| \widehat{\tilde{f}_{j-1} \mu}(-\xi) \right| \left| \widehat{\tilde{g}_{j+1} \mu}(\xi)\right| d\xi. $$ To proceed, we will utilize the following calculation. 
\begin{lemma} \label{fenergy} Let $\mu$ be a compactly supported Borel measure such that $\mu(B(x,r)) \leq Cr^s$ for some $s \in (0,d)$. Suppose that $\alpha>d-s$. Then for $f \in L^2(\mu)$,
\begin{equation} \label{hibob}
\int {|\widehat{f\mu}(\xi)|}^2 {|\xi|}^{-\alpha} d\xi \leq C'{||f||}^2_{L^2(\mu)}.
\end{equation}
\end{lemma}
\vskip.25in
To prove Lemma \ref{fenergy}, observe that
\begin{equation} \label{fenergysetup}
\int {|\widehat{f\mu}(\xi)|}^2 {|\xi|}^{-\alpha} d\xi=C \int \int f(x)f(y) {|x-y|}^{-d+\alpha} d\mu(x)d\mu(y)=\langle Tf,f \rangle,
\end{equation}
where $$ Tf(x)=\int {|x-y|}^{-d+\alpha} f(y)d\mu(y)$$ and the inner product above is with respect to $L^2(\mu)$. The positive constant, $C$, appearing in \eqref{fenergysetup} depends only on the ambient dimension, $d$. Observe that
$$ \int {|x-y|}^{-d+\alpha} d\mu(y) \approx \sum_{j>0} 2^{j(d-\alpha)} \int_{|x-y| \approx 2^{-j}} d\mu(y) \leq C \sum_{j>0} 2^{j(d-\alpha-s)} \leq C'$$
since $\alpha>d-s$.
\vskip.125in
By symmetry, $\int {|x-y|}^{-d+\alpha} d\mu(x) \leq C'$. It follows by using Schur's test (\cite{Schur11}, see also Lemma 7.5 in \cite{W04}) that
$$ {||Tf||}_{L^2(\mu)} \leq C' {||f||}_{L^2(\mu)}.$$
\vskip.125in
This implies the conclusion of Lemma \ref{fenergy} by applying the Cauchy-Schwarz inequality to (\ref{fenergysetup}). This completes the proof of Lemma \ref{fenergy}. We note that Lemma \ref{fenergy} can also be recovered from the fractal Plancherel estimate due to R. Strichartz \cite{Str90}. See also Theorem 7.4 in \cite{W04} where a similar statement is proved by the same method as above.
We already established, using Theorem \ref{maintool} from \cite{ISTU14}, that finite compositions of the operators $T_l$ applied to $L^2(\mu)$ functions are in $L^2(\mu)$. Using the Cauchy-Schwarz inequality and in light of Lemma \ref{fenergy}, we conclude that $M_k(t_1 + h_1,t_2 + h_2,\cdots, t_k + h_k)$ is bounded. We proceed by applying the Dominated Convergence Theorem. We have
\begin{align*}
&\lim_{h_j\downarrow 0} M_k(t_1 + h_1,t_2 + h_2,\cdots, t_k + h_k)\\
& =\int\widehat{ \sigma}_{t_j}(\xi) \cdot \widehat{\tilde{f}_{j-1} \mu}(-\xi) \cdot \widehat{ \tilde{g}_{j+1} \mu}(\xi) d\xi\\
& =\int\widehat{ \sigma}_{t_j}(\xi) \cdot \left( T_{t_{j-1} + h_{j-1}}\circ \cdots \circ T_{t_1 + h_1}(1)\cdot \mu\right)^{\widehat{}}(-\xi) \cdot \left( T_{t_{j+1}+ h_{j+1}}\circ \cdots \circ T_{t_k + h_k}(1)\cdot \mu\right)^{\widehat{}}(\xi) d\xi.\\
\end{align*}
We then rewrite the procedure, isolating $\widehat{\sigma}_{t_j}$ for each $j \in \{1, \dots, k\}$, and repeat the process above a total of $k$ times.
\vskip.125in
\noindent \textbf{Bounding the remainder:} Next, we wish to show that $\lim_{\epsilon \downarrow 0} R_k^\epsilon(t_1,\cdots, t_k) = 0$. Fix $\epsilon>0$. Recall that $R_k^\epsilon(t_1,\cdots, t_k)$ is equal to
$$\int (1-\hat{\rho}(\epsilon \xi) )\hat{\sigma}(t\xi)\hat{\mu}(\xi)\widehat{f_k \mu}(-\xi)d \xi.$$
We consider the integral over $|\xi|<\left(\frac{1}{\epsilon}\right)^{\alpha}$ and the integral over $|\xi|>\left(\frac{1}{\epsilon}\right)^{\alpha}$ separately, where $\alpha \in (0,1)$ will be determined. Assume that $s>\frac{d+1}{2}.$
\begin{lemma}\label{lipschitz} Let $\rho:\mathbb{R}^d\rightarrow \mathbb{R}$ satisfy the following properties: $\rho\geq 0$, $\rho(x) =\rho(-x)$, the support of $\rho$ is contained in $\{x: |x|<c\}$, and $\int \rho=1$.
Then $$0\le 1-\widehat{\rho}(\xi) \le 2\pi c|\xi|.$$ \end{lemma}
To prove Lemma \ref{lipschitz}, write $$\widehat{\rho}(\xi) = \int \cos(2\pi x\cdot \xi) \rho(x) dx .$$ Since $0\le 1-\cos (u)\le |u|$ for every real $u$, we have
$$1-\widehat{\rho}(\xi)=\int \left( 1-\cos (2\pi x\cdot \xi )\right) \rho(x)\, dx \le 2\pi |\xi| \int |x|\,\rho(x)\, dx \le 2\pi c|\xi|,$$
because $\rho$ is supported in $\{x: |x|<c\}$ and $\int \rho =1$; the lower bound follows from $\widehat{\rho}(\xi)\le \int \rho =1$. \\
It follows that
\begin{align*}
& \int_{|\xi|<\left(\frac{1}{\epsilon}\right)^{\alpha}} \left|\hat{\rho}(\epsilon \xi) - 1\right| |\hat{\sigma}(t\xi)| |\hat{\mu}(\xi)||\widehat{f_k\mu}(-\xi)|d \xi \\
&\lesssim \epsilon^{1-\alpha}\left( \int |\hat{\sigma}(t\xi)| |\hat{\mu}(\xi)||\widehat{f_k\mu}(-\xi)|d \xi \right) \\
& \lesssim\epsilon^{1-\alpha},\end{align*}
where the last line is justified in the estimation of $M_k(t)$ above.
\vspace{.25 in}
It remains to estimate the quantity $$\int_{|\xi|>\left(\frac{1}{\epsilon}\right)^{\alpha}} |\widehat{\sigma}(t\xi)| |\widehat{\mu}(\xi)| |\widehat{f_k\mu}(-\xi)| d\xi.$$ Proceeding as in the estimation of $M_k(t)$ above, we bound this integral above with
$$Ct^{-\frac{d-1}{2}}\int_{|\xi| >\left(\frac{1}{\epsilon}\right)^{\alpha}} |\xi|^{-\frac{d-1}{2}}|\hat{\mu}(\xi)||\widehat{f_k\mu}(-\xi)|d \xi$$
and then use Cauchy-Schwarz to bound it further with
$$Ct^{-\frac{d-1}{2}}\left( \int_{|\xi| > \left(\frac{1}{\epsilon}\right)^{\alpha}} |\xi|^{-\frac{d-1}{2}}|\hat{\mu}(\xi)|^2 d \xi\right)^{1/2}\left( \int_{|\xi| > \left(\frac{1}{\epsilon}\right)^{\alpha}} |\xi|^{-\frac{d-1}{2}}|\widehat{f_k\mu}(\xi)|^2 d \xi\right)^{1/2}.$$
\vskip.125in
\noindent We have already shown that the second integral is finite. The first integral is bounded by
$$\sum_{j >\alpha \log_2(1/\epsilon)} 2^{-j(\frac{d-1}{2})} \int_{2^j \leq |\xi| <2^{j+1}} |\hat{\mu}(\xi)|^2 d\xi.$$
We may choose a smooth cut-off function $\psi$ such that the inner integral is bounded by
$$\int {|\widehat{\mu}(\xi)|}^2 \widehat{\psi}(2^{-j}\xi) d\xi.$$
By Fourier inversion, this integral is equal to
$$ 2^{dj} \int \int \psi(2^j(x-y)) d\mu(x) d\mu(y) \leq C2^{j(d-s)}.$$
Returning to the sum, we now have the estimate
$$C\sum_{j >\alpha\log_2(1/\epsilon)} 2^{-j(\frac{d-1}{2})} \cdot 2^{j(d-s)} \leq C \sum_{j > \alpha\log_2(1/\epsilon)} 2^{j(\frac{d+1}{2}-s)}.$$
As long as $s > \frac{d+1}{2}$, this is $\lesssim \epsilon^{\alpha(s-\frac{d+1}{2})}$. Thus $R_k^\epsilon(t_1,\dots, t_k)$ tends to 0 with $\epsilon$ as long as $\dim_\mathcal{H}(E) > \frac{d+1}{2}$.
\vskip.125in
In conclusion we have
\begin{equation} \label{goodshit}
\lim_{\epsilon \downarrow 0} \int \left(\prod_{i=1}^{k} \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i)\right) d\mu(x^{k+1})>c_k>0
\end{equation}
for all $t_i\in \tilde{I}$.
\vskip.25in
To complete the proof of Theorem \ref{main}, it remains to verify that $E$ contains a non-degenerate $k-$chain with prescribed gaps. This is the topic of the next section.
\section{Non-degeneracy}\label{degeneracy}
An important issue we have not yet addressed is that the chains we have found may be degenerate. As an extreme example, consider the case where $t_i = 1 $ for all $i$. Then included in our chain count are chains which simply bounce back and forth between two different points. We now take steps to ensure that we can indeed find chains with distinct vertices.
\vskip.125in
We verified above that there exists a non-empty open interval $\tilde{I}$ so that
$$\lim _{\epsilon\downarrow 0} \int \left(\prod_{i=1}^{k} \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i)\right) d\mu(x^{k+1})$$
is bounded above and below for $t_1, \dots, t_k \in \tilde{I}$. The upper bound appears in \eqref{up} and the lower bound appears in \eqref{goodshit}.
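As a purely illustrative aside (not part of the original argument), the bounded quantity above has a direct discrete analogue that can be computed for a finite point sample: replacing $\mu$ by the empirical measure of the sample and $\sigma_{t}^{\epsilon}$ by a normalized indicator of the annulus of radii $t\pm \epsilon$, the iterated structure $f_{j}^{\epsilon}=T_{j}^{\epsilon}f_{j-1}^{\epsilon}$ becomes a sequence of matrix--vector products. The following sketch, with an arbitrary toy sample, is only meant to illustrate that structure, not the dimensional hypotheses of Theorem \ref{main}.
\begin{verbatim}
import numpy as np

def approx_chain_density(points, gaps, eps):
    """Discrete proxy for the epsilon-approximate k-chain density: mu is the
    empirical measure of `points`, and sigma_t^eps is replaced by the
    indicator of { y : t - eps <= |x - y| <= t + eps } divided by 2*eps."""
    pts = np.asarray(points, dtype=float)
    N = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    v = np.ones(N)                      # discrete analogue of f_0 = 1
    for t in gaps:                      # v <- T_t^eps v, one step per gap
        M = (np.abs(dist - t) <= eps).astype(float)
        v = M @ v / (2.0 * eps * N)
    return v.mean()                     # final integration against mu

# Toy sample: points on the unit circle (chosen only as a smoke test).
rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=2000)
E = np.column_stack([np.cos(theta), np.sin(theta)])
print(approx_chain_density(E, gaps=[0.5, 0.5, 0.5], eps=0.02))
\end{verbatim}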
\vskip.125in From here onward, we fix $ t_1, \dots, t_k \in \tilde{I}$ and set $\vec{t} = (t_1,\dots, t_k)$. We now define a non-negative Borel measure on the set of $k-$chains with the gaps $\vec{t}$. Let $\Lambda_{\vec{t}}^k$ denote a non-negative Borel measure defined as follows $$\Lambda^k_{\vec{t}}(A) =\lim _{\epsilon\downarrow 0} \int_A \left(\prod_{j=1}^{k} \sigma_{t_j}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i)\right) d\mu(x^{k+1}),$$ where $A\subset E\times \cdots \times E$, the $(k+1)-$fold product of the set $E$. \vskip.125in It follows that $\Lambda_{\vec{t}}^k$ is a finite measure which is not identically zero: \begin{equation}\label{nonzed}0< \Lambda^k_{\vec{t}}(E\times \cdots \times E ) .\end{equation} \vskip.125in The strategy we use to demonstrate the existence of non-degenerate $k-$chains in $E$ is as follows: We first show that $\Lambda_{\vec{t}}^k$ has support contained in the set of $k-$chains. This is accomplished by showing that the measure has support contained in all ``approximate'' $k-$chains. We then show that the measure of the set of degenerate chains is zero. It follows, since the $\Lambda_{\vec{t}}^k$-measure of the set of $k-$chains is positive and the $\Lambda_{\vec{t}}^k$-measure of the set of degenerate $k-$chains is zero, that the set of non-degenerate $k-$chains in $E$ is non-empty. \vskip.125in For each $n \in \mathbb{Z}^+$, define the sets of $\frac{1}{n}-$approximate $k$-chains and the set of exact $k$-chains as follows: $$A_{n,k}=\left\{\left(x^1, \dots, x^{k+1}\right)\in E\times \cdots \times E :t_i-\frac{1}{n}\le |x^{i+1} -x^i| \le t_i+ \frac{1}{n}, \text{ for each } i=1, \dots, k\right\},$$ and $$A_k=\left\{\left(x^1, \dots, x^{k+1}\right)\in E\times \cdots \times E: |x^{i+1} -x^i| =t_i \text{ for each } i=1, \dots, k\right\}.$$ \vskip.125in Observe that $$\bigcap_n A_{n,k}=A_k.$$ \vskip.125in We now observe that the support of $\Lambda_t^k$ is contained in the set of all approximate chains. This follows immediately from the observation that $$\Lambda^k_{\vec{t}}( A_{n,k}^c) =0,$$ for each $n\in \mathbb{Z}^+$, where $ A_{n,k}^c$ denotes the compliment of the set $ A_{n,k}$ in $ E\times \cdots \times E$. Next, we observe that the support of $\Lambda_{\vec{t}}^k$ is contained in the set of exact chains. Indeed, it follows from the previous equation that $$\Lambda^k_{\vec{t}}\left( \bigcup_n A_{n,k}^c\right) \le \sum_n \Lambda^k_{\vec{t}}( A_{n,k}^c)=0 .$$ Recalling \eqref{nonzed}, we conclude that \begin{equation} 0< \Lambda^k_{\vec{t}}(E\times \cdots \times E ) = \Lambda^k_{\vec{t}}\left( \bigcup_n A_{n,k}^c\right) + \Lambda^k_{\vec{t}}\left( \bigcap_n A_{n,k}\right) ,\end{equation} and so $$\Lambda^k_{\vec{t}} (A_{k})=\Lambda^k_{\vec{t}}\left( \bigcap_n A_{n,k}\right)>0 .$$ Since $ t_1, \dots, t_k \in \tilde{I}$ were chosen arbitrarily, we have shown that $\Lambda^k_{\vec{t}} (A_{k})>0 $ whenever $\vec{t} = (t_1, \dots, t_k)$ and $t_i \in \tilde{I}$. \\ We now verify that the set of degenerate chains has $\Lambda^k_{\vec{t}}-$measure zero. \iffalse \begin{lemma} Fix $t_i$ for $1 \leq i \leq k$. Let $$D_k = \{(x^1, . . ., x^{k+1}): x^i = x^j \text{ for some } i \neq j\}.$$ Let $D_k^\delta$ be the $\delta$-thickening of $D_k$, i.e. $$D_k^\delta = \{(x^1, . . 
., x^{k+1}): |x^i - x^j| < \delta \text{ for some } i \neq j\}.$$ Let $$N_k = \{(x^1, \dots, x^{k+1}): t_i \leq |x^{i+1}-x^i| \leq t_i+\epsilon; \ 1 \leq i \leq k\}$$ Then $$ \lim_{\epsilon \downarrow 0} \epsilon^{-k}\mu \times \mu \times \dots \times \mu( N_k \backslash D_k^{\delta})> a_k>0$$ for sufficiently small $\delta$. \end{lemma} \vspace{.25 in} Notice that $N_k \backslash D_k$ is simply the set of \textit{non-degenerate} $\epsilon$-approximations to the $t_1, \ldots, t_k$ chain, and therefore this lemma immediately verifies the existence of non-degenerate chains of the chosen type. \vspace{.25 in} To prove the lemma, recall that we have already shown in theorem \ref{lower bound} that $$\liminf_{\epsilon \downarrow 0} C_k^\epsilon = \liminf_{\epsilon \downarrow 0} \epsilon^{-k}\mu \times \mu \times \dots \times \mu( N_k)> c_k>0.$$ That is, we know $$\int f_{k+1}(x) d\mu(x) >c_k.$$ Now we will show that $$\int_{D_k^\delta} \left( \prod_{i=1}^k \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i) \right) d\mu(x^{k+1}) < c(\delta),$$ where $s = \dim_\mathcal{H}(E)$ and $c(\delta) \downarrow 0$ as $\delta \downarrow 0$. \vspace{.25 in} Define $\chi_{S}$ be the characteristic function for a set $S$ and we will instead examine $$\int_{(\mathbb{R}^d)^{k+1}} \left( \prod_{i=1}^k \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i) \right) \chi_{D_k^\delta}(x^1, \ldots, x^{k+1}) d\mu(x^{k+1}).$$ We can use Cauchy-Schwarz to bound this expression by \begin{equation}\label{degenbound} \left(\int \left( \prod_{i=1}^k \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) \right)^2d\mu(x^1)\ldots d\mu(x^{k+1})\right)^{1/2} \left(\int \chi_{D_k^\delta}(x^1, \ldots, x^{k+1}) d\mu(x^1) \ldots d\mu(x^{k+1})\right)^{1/2}. \end{equation} \vspace{.25 in} First we examine the the factor on the right. Notice that we can bound $\chi_{D_k^\delta}(x^1, \ldots, x^{k+1})$ by 1 and that $$\int d\mu(x^1) \ldots d\mu(x^{k+1}) = 1.$$ Therefore, we can apply the Dominated Convergence Theorem and say that $$\lim_{\delta \downarrow 0} \int \chi_{D_k^\delta}(x^1, \ldots, x^{k+1}) d\mu(x^1) \ldots d\mu(x^{k+1}) = \int \chi_{D_k}(x^1, \ldots, x^{k+1}) d\mu(x^1) \ldots d\mu(x^{k+1}).$$ We will show that this integral is equal to 0. Observe that we can bound this integral above by $$\sum_{1 \leq i < j \leq k+1} \int_{\{(x^1, \ldots ,x^{k+1}: x^i = x^j\}} d\mu(x^1) \ldots d\mu(x^{k+1}).$$ Without loss of generality, we look at the term in which $x^k = x^{k+1}$ and write this integral as $$\int_{(\mathbb{R}^d)^k} \left(d\mu(x^1) \ldots d\mu(x^k) \int_{\{x^{k+1}: x^{k+1} = x^k\}}d\mu(x^{k+1})\right).$$ But the integral over a singleton set is $0$, and therefore the above sum must be 0. We conclude that $$\left(\int \chi_{D_k^\delta}(x^1, \ldots, x^{k+1}) d\mu(x^1) \ldots d\mu(x^{k+1})\right)^{1/2}$$ can be made as small as desired by taking $\delta$ sufficiently small. For the first factor in \fi \begin{lemma} Let $$D_k = \{(x^1, . . 
., x^{k+1})\in E\times \cdots \times E: x^i = x^j \text{ for some } i \neq j\}.$$ Then $$\Lambda^k_{\vec{t}}(D_k) =0.$$ \end{lemma}
\vspace{.25 in}
To prove the lemma, we first investigate the quantity
$$ \int_{D_k} \left(\prod_{i=1}^{k} \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i)\right) d\mu(x^{k+1}).$$
By the definition of $D_k$, we can bound this quantity above by
$$\sum_{1 \leq m <n\leq k+1} \int_{\{(x^1, \ldots , x^{k+1}) : x^m = x^n\}} \left(\prod_{i=1}^{k} \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) d\mu(x^i)\right) d\mu(x^{k+1}).$$
We can rewrite the integral as
$$\int_{(\mathbb{R}^d)^k} \int_{\{x: x = x^m\}} \left(\prod_{i=1}^{k} \sigma_{t_i}^{\epsilon}(x^{i+1}-x^i) \right) d\mu(x^n) d\mu(x^1) \cdots d\mu(x^{n-1})d\mu(x^{n+1})\cdots d\mu(x^{k+1}).$$
Since $\mu$ has no atoms (by the ball condition \eqref{adupper}), the inside integral is taken over a set of $\mu$-measure $0$, and so this whole integral must be $0$. This holds for every choice of $m$ and $n$, and thus the entire sum must be $0$. This completes the proof of the lemma. \\
In conclusion, we have shown that the set of exact $k-$chains has positive measure, $\Lambda^k_{\vec{t}}(A_k)>0$, and that the set of degenerate chains has zero measure, $\Lambda^k_{\vec{t}}(D_k) =0$. It follows that $A_k \setminus D_k\neq \emptyset$. In other words, there exists a non-empty open interval $\tilde{I}$, and there exist \textit{distinct} elements $x^1, \cdots, x^{k+1} \in E$ so that $|x^{i+1}-x^i|=t_i$ for each $i\in \{1, \dots, k\}$. \\
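As a purely illustrative aside (not part of the argument above), the existence statement can be explored numerically on a finite sample: the following sketch performs a depth-first search for a non-degenerate chain whose consecutive gaps match prescribed values $t_1,\dots ,t_k$ up to a tolerance. The random planar sample is a hypothetical stand-in for a set $E$; nothing about the dimensional hypothesis of Theorem \ref{main} is being tested here.
\begin{verbatim}
import numpy as np

def find_chain(points, gaps, tol):
    """Depth-first search for a non-degenerate chain x^1, ..., x^{k+1} in a
    finite point sample, with | |x^{i+1} - x^i| - t_i | <= tol for every i.
    Returns a list of point indices, or None if no such chain is found."""
    pts = np.asarray(points, dtype=float)
    N = len(pts)
    dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

    def extend(chain):
        depth = len(chain) - 1
        if depth == len(gaps):
            return chain
        last = chain[-1]
        for nxt in np.where(np.abs(dist[last] - gaps[depth]) <= tol)[0]:
            if int(nxt) in chain:       # enforce distinct vertices
                continue
            result = extend(chain + [int(nxt)])
            if result is not None:
                return result
        return None

    for start in range(N):
        result = extend([start])
        if result is not None:
            return result
    return None

# Hypothetical sample: 500 uniform points in the unit square.
rng = np.random.default_rng(1)
sample = rng.random((500, 2))
print(find_chain(sample, gaps=[0.3, 0.4, 0.25], tol=0.01))
\end{verbatim}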
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} Electroencephalography (EEG) is a non-invasive method that measures the electrical activity of the brain. When a person listens to speech, the EEG signal measured has been shown to contain information related to different features of the presented continuous speech. We can relate these speech features to the EEG activity using machine learning models to investigate if and how the brain processes continuous speech.\\ Currently, primarily linear models are used to relate continuous speech to EEG \citep[e.g.,][]{Ding2012, Vanthornhout2018, Crosse2016, DECHEVEIGNE2018, CrosseReview2021}. Such models are used to either predict EEG from speech (forward modeling) or to reconstruct speech from EEG (backward modeling). Once the EEG (or speech) is approximated, the correlation between the predicted and the ground truth signal is computed and considered a measure of neural tracking. Such models are limited as they assume pure linearity between the EEG and speech signals, inadequately fitting the nonlinear nature of the auditory system. For example, it is well known that, depending on the level of attention and state of arousal of a person, response latencies can change \citep{Ding2012}, which cannot be modeled with a single linear model.\\ Deep neural networks (DNNs) have been recently introduced to this field. Many studies have shown the ability of deep learning models to relate EEG to speech (see Section \ref{Review}), be it for neural tracking assessment \citep[e.g.,][]{Katthi2021DeepMC, Accou2021ModelingTR, Monesi2020, Thornton_2022} or to decode auditory attention \citep[e.g.,][]{deTaillez2020MachineSpeech, Ciccarelli2019, kuruvila2021extracting}.\\ Many approaches have been used in EEG-speech decoding tasks. An overview of the findings would be beneficial to inform the pros and cons of the existing models. Considerations to avoid methodological pitfalls when using deep learning models in this context are also necessary.\\ In this study, we aim to summarize the methods present in the literature to relate EEG to continuous speech using deep learning models. In Section \ref{Background}, we first give a brief overview of the general processing pipeline, notably the pre-processing, the different neural network architectures, and the evaluation metrics. In Section \ref{Review}, we categorize the studies, summarize their findings, and provide individual critical reviews. Finally, in Section \ref{Pitfalls} we address the methodological pitfalls to avoid when using such models and considerations for establishing a standard benchmark for model analyses. \section{Background}\label{Background} In this section, we give a brief overview of the typical processing pipeline to relate EEG to speech using deep neural networks. We briefly explain the pre-processing step, general network architecture types, and the paradigms used to train the models. We only cover what is used in the papers reported on in Section \ref{Review}. \subsection{Pre-processing} For more details, we refer to \cite{CrosseReview2021}, who review pre-processing methods for linear modeling of EEG responses to continuous speech. \subsubsection{EEG} Typical EEG pre-processing steps include: \begin{itemize} \item Downsampling to reduce processing time \item Artifact removal (e.g., ICA-based techniques \citep{PMID:11997722, VIGARIO1997395}) \item Re-referencing (often to common average) \item Band-pass filtering to obtain a signal in the same frequency range as the stimulus (a minimal sketch of these steps follows the list). \end{itemize}
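To make these steps concrete, the following is a minimal sketch of such a pre-processing chain using only NumPy/SciPy; the sampling rates, filter order, cut-off frequencies and array shapes are illustrative assumptions rather than recommendations, and artifact removal (e.g., ICA) is omitted.
\begin{verbatim}
import numpy as np
from scipy.signal import butter, sosfiltfilt, resample_poly

def preprocess_eeg(eeg, fs=512, fs_target=64, band=(0.5, 32.0)):
    """eeg: array of shape (channels, samples). Returns band-passed,
    common-average-referenced and downsampled EEG."""
    # Band-pass filter (zero-phase) to the frequency range of interest.
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    eeg = sosfiltfilt(sos, eeg, axis=-1)
    # Re-reference to the common average of all channels.
    eeg = eeg - eeg.mean(axis=0, keepdims=True)
    # Downsample to reduce processing time (polyphase resampling).
    return resample_poly(eeg, up=fs_target, down=fs, axis=-1)

dummy = np.random.randn(64, 512 * 60)   # 64 channels, 60 s at 512 Hz
print(preprocess_eeg(dummy).shape)      # -> (64, 3840)
\end{verbatim}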
\noindent\subsubsection{Speech} Various speech features can be extracted from the raw signal, including those from three main categories: acoustic features such as the Mel spectrogram or envelope (e.g., \cite{Ding2012, DiLiberto2015, Monesi2020, Accou2021ModelingTR}), lexical features like phonemes (e.g., \cite{DiLiberto2015}), and linguistic features like word surprisal or semantic dissimilarity (e.g., \cite{Gillis2021, BRODBECK20183976}). \subsection{Artificial neural network architecture types}\label{architecture} Throughout this study, we refer to fully connected neural networks (FCNNs), radial basis function (RBF) networks, general regression neural networks (GRNNs), convolutional neural networks (CNNs), long short-term memory (LSTM) based neural networks, gated-recurrent unit (GRU) based neural networks, autoencoders and transformers. These network types are explained in more detail in Appendix A. \subsection{Evaluation metrics and training paradigms} \subsubsection{Classification} \hfill\\ \textbf{General classification:} An example of general classification is decoding whether speaker A or B is attended in a two-talker scenario. Among other measurements, such as ROC analysis or a confusion matrix, the model performance can be quantified via classification accuracy, which is defined as the number of correctly predicted outcomes divided by the total number of predictions.\\ The classification accuracy $P$ can be transformed into the information transfer rate $R$ (i.e., the effective bitrate) defined in Equation \ref{eq:bitrate}, with $N$ the number of classes (e.g., $N=2$ in a two-speaker auditory attention decoding (AAD) scenario). The bitrate per decision is then scaled by the number of decisions per minute, i.e., by 60 divided by the decision window length in seconds, to yield an information transfer rate in bit/min.\\ \begin{equation} R = \log_{2}N + P\log_{2}P + (1-P)\log_{2}\frac{1-P}{N-1} \label{eq:bitrate} \end{equation} \hfill\\ \textbf{Match-mismatch:} In the match-mismatch (MM) paradigm, relating EEG to speech is formulated as a classification task \citep{DECHEVEIGNE2018, Accou2021ModelingTR, Monesi2020, Puffay2022}. The model is trained to associate a segment of EEG with the corresponding segment of speech (see Figure \ref{fig:MM_task}). The average accuracy obtained over all the segments extracted from the signal can be used as a measure of neural tracking.\\ Multiple variations of the MM task have been used, with different numbers of speech \citep{Monesi2020} or EEG segments \citep{DeCheveigne2021}, different time-shifts between segments, and sometimes different stimuli between the matched and mismatched segments. The EEG and speech segment selection has a direct influence on the difficulty of the task. We discuss this in Section \ref{Pitfalls}.\\ \begin{figure}[htpb!] \centering \includegraphics[width=0.8\textwidth]{figures/new_MM_task.pdf} \caption{Match-mismatch task. The model has to associate the 5~s yellow EEG segment with the 5~s yellow speech segment. Two possibilities are provided: the yellow and the black speech segments.} \label{fig:MM_task} \end{figure} A typical MM architecture is detailed in Section \ref{supplementary_MM_task}.
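Both the general classification and the match-mismatch paradigms above are evaluated through a classification accuracy, so Equation \ref{eq:bitrate} applies to either. The sketch below is a minimal implementation of the accuracy-to-bitrate conversion and of its numerical inversion (useful when comparing studies that only report bit/min); treating accuracies at or below chance as zero information and scaling by 60 divided by the decision window length are our assumptions about how the formula is applied.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def bits_per_decision(P, N=2):
    """Information per decision for accuracy P and N classes (Eq. bitrate)."""
    if P <= 1.0 / N:
        return 0.0                     # at or below chance level
    if P >= 1.0:
        return np.log2(N)
    return (np.log2(N) + P * np.log2(P)
            + (1 - P) * np.log2((1 - P) / (N - 1)))

def itr_bit_per_min(P, window_s, N=2):
    """Scale by the assumed number of decisions per minute."""
    return bits_per_decision(P, N) * 60.0 / window_s

def accuracy_from_itr(R, window_s, N=2):
    """Numerically invert the ITR (bit/min) back to a decoding accuracy."""
    target = R * window_s / 60.0
    return brentq(lambda P: bits_per_decision(P, N) - target,
                  1.0 / N + 1e-9, 1.0 - 1e-9)

print(itr_bit_per_min(0.79, 5.0))   # 79% on 5 s windows -> ~3.1 bit/min
\end{verbatim}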
\subsubsection{Regression} The regression task consists of finding a relationship between a dependent variable ($Y$) and independent variables ($X_{1}, X_{2},..., X_{n}$), i.e., a function $\sigma$ that maps ($X_{1}, X_{2},..., X_{n}$) to $Y$. The regression parameters are optimized on a metric that relates the predicted dependent variable ($\hat{Y}$) and the ground truth ($Y$).\\ Neural tracking of speech can thus be measured using this method. In this context, the model has either to predict EEG from speech (forward modeling) or reconstruct speech from EEG (backward modeling). A metric to measure the similarity (e.g., Pearson correlation) between the predicted (or reconstructed, respectively) signal and the ground truth is then computed. The hypothesis is as follows: if the model can find a relationship between EEG and speech (i.e., if a significant correlation has been found), it means there is speech tracking in the brain. \section{Review of deep-learning-based studies to relate EEG to continuous speech}\label{Review} \begin{table*} \hspace*{-1cm}\begin{tabular}{@{}ll@{}}\toprule Search engine & Search query \\ \midrule Google Scholar & ("EEG" OR "Electroencephalography" OR "Electroencephalogram") AND speech \\ & AND ("deep learning" OR "deep neural networks")\\ \rule{0pt}{4ex} IEEE Xplore & (("All Metadata":EEG) OR ("All Metadata":Electroencephalogram))\\ & AND ("All Metadata":speech)\\ & AND (("All Metadata":deep neural networks)\\ & OR ("All Metadata": non-linear) OR ("All Metadata": nonlinear) ) \\ \rule{0pt}{4ex} Science Direct & (EEG OR Electroencephalogram) AND ("continuous speech" OR "natural speech")\\ & AND ( "neural network")) \\ \rule{0pt}{4ex} Pubmed & (EEG OR Electroencephalography OR Electroencephalogram)\\ & AND (speech) AND ( deeplearning OR deep learning OR neural networks)\\ & NOT (imagined[title]) NOT (motor[title]) NOT emotion[title] \\ \rule{0pt}{4ex} Web of Science & ((((((((TS=(("EEG" or "Electroencephalography" or "Electroencephalogram")))\\ & AND TS=(("speech" or "audio" or "auditory" )))\\ & AND TS=(("artificial neural network*" or "ANN" or "deep learning" or\\ & "deeplearning" or "CNN" or "convolutional" or "recurrent"\\ & or "LSTM"))) NOT TI=("imagined" or "motor imagery" or "parkinson"))\\ & NOT TI=("emotion")) NOT TS=(("dysphasia" or "alzeimer*")))\\ & NOT TS=("seizure"))) AND DOP=(2010-01-01/2022-05-15)\\ \\\bottomrule \end{tabular} \vspace{0.3cm} \caption{Search queries for each search engine during paper collection.} \label{tab:search_query} \end{table*} Using Google Scholar, IEEE Xplore, Science Direct, Pubmed and Web of Science, we collected papers using the search queries reported in Table \ref{tab:search_query}. As a last step, we pruned the paper selection manually to exclude studies not including EEG data, continuous speech stimuli or deep learning models. We classified the models based on their application (see Figure \ref{fig:tree}). We defined two main categories: the models using multiple (simultaneous) speech sources ($N>1$) or a single speech source ($N=1$). In the first category, the model has to choose between different speech sources (e.g., either to classify the locus of attention or the identity of the speaker). In the second category, the model has to relate EEG with a single speech source: it can be either a match-mismatch task (i.e., choose to associate an EEG segment with a synchronized speech segment among other delayed segments), a reconstruction/prediction task (i.e., reconstruct speech from EEG or predict EEG from speech) or other tasks (e.g., detection of semantic incongruities).\\ In this section, we review the papers in each category, grouping them by network architecture type according to Section \ref{architecture}.\\
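Since most of the reviewed studies benchmark their networks against a linear forward or backward model, we include here, as a common point of reference, a minimal sketch of such a backward baseline: a time-lagged ridge regression reconstructing the speech envelope from EEG, evaluated with the Pearson correlation. The lag range, sampling rate and regularization strength are illustrative assumptions, and this is not the exact implementation used in any cited study.
\begin{verbatim}
import numpy as np

def lag_matrix(eeg, n_lags):
    """eeg: (samples, channels). Column block k holds eeg[t + k], i.e.,
    the decoder sees EEG samples that follow the stimulus sample at t."""
    n, c = eeg.shape
    X = np.zeros((n, c * n_lags))
    for k in range(n_lags):
        shifted = np.roll(eeg, -k, axis=0)
        if k > 0:
            shifted[-k:] = 0.0
        X[:, k * c:(k + 1) * c] = shifted
    return X

def backward_model(eeg, envelope, n_lags=26, alpha=1e3):
    """Ridge decoder: reconstruct the envelope from time-lagged EEG."""
    X = lag_matrix(eeg, n_lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]),
                        X.T @ envelope)
    return X @ w

def pearson(a, b, eps=1e-12):
    """Similarity between reconstruction and ground truth."""
    a, b = a - a.mean(), b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))
\end{verbatim}
In practice, the decoder weights are estimated on a training set and the correlation is computed on held-out data.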
\begin{figure}[htpb!] \centering \includegraphics[width=0.6\textwidth]{figures/treev4.pdf} \caption{Categories of studies relating EEG and continuous speech using deep neural networks.} \label{fig:tree} \end{figure} \subsection{Multiple sources ($N>1$)} In the publications that relate EEG to multiple speech sources, we identified two main paradigms: detecting the identity of the speaker or the locus of attention. Both belong to the field of AAD. The key features of each paper are reported in Table \ref{table1}. \subsubsection{Attended speaker identity}\label{attended_speaker} \begin{landscape} \begin{table*} \hspace{-4cm}\begin{threeparttable} \vspace{2.5cm}\begin{tabular}{@{}lllllll@{}}\toprule Article & Architecture & Feature & Split (train/val/test) & Window (s) & Accuracy (\%) & Participants - stimuli \\ \midrule \cite{Ciccarelli2019} & CNN & E & 60/10/10 (unclear) & 10 & 81 & $11 - 40~min$ \\ \cite{su2021auditory} & CA + CNN & E & 60/20/20 (subject) & 0.1 to 2& 77.2 to 88.3 & $Das2019$ \\ \cite{deTaillez2020MachineSpeech} & FCNN & E & 80/10/10 (dataset) & 60-10-5 & 97.6-86-79 & $16 - 50~min$ \\ \cite{kuruvila2021extracting} & CNN-LSTM & S & 75/12.5/12.5 (per trial) & 2 to 5 &72-75 & $27 -30 min$, $Das2019$, $DTU$ \\ \cite{lu2021auditory} & LSTM+FC & E & 60/0/40 (per trial) & 0.25 to 4& 96 & $21sub - 100~s$\\ \cite{xu2022auditory} & LSTM & E & 15/5/80 (per trial) & 0.15& 73.35 & $21 sub - 40~min$ \\ \cite{xu2022decoding} & transformer & E & 15/5/80 (per trial) & 0.15 & 74 & $21 sub - 40~min$ \\ \cite{thornton2022robust} & CNN & E & 9/3/3 (trial) & 10 & 80 & $18 sub - 10~min$ \\ \cite{tian2020auditory} & CNN & EEG & x & x & x & $42 sub - 120\times2~s$ \\ \cite{zakeri2021supervised} & Bi-LSTM & E & 63/11/26 (per trial) & 1 to 40 & 66-84 & $12 sub - 43min$ \\ \cite{shree2016novel} & GRNN & E & 50/0/50 (unclear) & 60 & 99.05 (locus) & $20 sub - 3~min$ \\ \cite{Vandecappelle2021} & CNN & E & 3/1 (stimulus) & 1–2 & 81 (locus) & $Das2019$ \\ \cite{Hosseini2021ICASSP} & AE & E & 23/2/5 (trials) & x & x & $34~sub - 30~min$ \\\bottomrule \end{tabular} \vspace{0.3cm} \caption{Overview of multiple speech source papers. Architecture=main layers used in the neural network, CA=channel attention; Feature=speech feature used in the model, E=envelope, S=spectrogram; Split (train/val/test)=how data were split for train, validation and test; Window (s)=decision window length in seconds; Accuracy=attention decoding accuracy in \%; Participants - stimuli=number of participants and length of the presented stimulus; $Das2019$=public dataset, 16 participants, 48~min of stimuli each \citep{das_neetha_2019_3377911}; $DTU$=public dataset, 29 participants, 2~h of stimuli each \citep{fuglsang2017}.} \label{table1} \end{threeparttable} \end{table*} \end{landscape} \hfill\\ \underline{Fully-connected neural networks (FCNNs):} \hfill\\ \noindent\textit{Machine learning for decoding listeners' attention from electroencephalography evoked by continuous speech (de Taillez et al. 2020): } This is one of the first studies to use neural networks in auditory attention decoding. The authors start from the backward linear model, trying to predict the envelope from EEG, and add non-linearities in a two-competing talker scenario.\\ The dataset contains 20 participants with a total of 50 minutes of data each. 10 participants attended solely one speaker, 10 participants the other one. The whole dataset is divided into 10\% validation, 10\% testing, and 80\% training data. \\ The model consists of 2 fully-connected (FC) layers.
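As an illustration of what replacing the linear decoder with such a network can look like, here is a minimal PyTorch sketch of a two-layer fully-connected backward model together with a negative-Pearson-correlation cost (one of the cost functions compared below); the input dimensionality, hidden size and activation function are assumptions, not the published configuration.
\begin{verbatim}
import torch
import torch.nn as nn

class FCDecoder(nn.Module):
    """Two fully-connected layers mapping a flattened EEG window
    (channels x lags) to one envelope sample."""
    def __init__(self, n_inputs=64 * 27, n_hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_inputs, n_hidden),
                                 nn.Tanh(),
                                 nn.Linear(n_hidden, 1))

    def forward(self, x):              # x: (batch, n_inputs)
        return self.net(x).squeeze(-1)

def neg_pearson(pred, target, eps=1e-8):
    """Negative Pearson correlation, usable as a differentiable cost."""
    pred, target = pred - pred.mean(), target - target.mean()
    return -(pred * target).sum() / (pred.norm() * target.norm() + eps)

model = FCDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(32, 64 * 27), torch.randn(32)   # random stand-in data
loss = neg_pearson(model(x), y)
loss.backward()
opt.step()
\end{verbatim}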
The authors compare different combinations of parameters such as the EEG bandpass filter cutoff frequencies, the length of the training window (or kernel size), the type of cost function (mean squared error or Pearson correlation), the decision window length, and the activation function.\\ The authors report the rate for identifying the attended speaker in terms of bits/minute as defined in Section \ref{Background}. We converted that metric into attention decoding accuracy to ease comparison to other studies. The decoding accuracies obtained with the NN for 60~s, 10~s and 5~s windows are 97.6\%, 86\% and 79\%, respectively. The linear model presented in \citep{Osullivan2015} is outperformed by the neural network.\\ The same stimuli from the two speakers are presented to all participants. Since the split of the dataset is across participants, it is likely that the model sees the whole attended (or unattended) stimulus during training. This could lead to overfitting on the stimulus content, hence boosting the model's decoding accuracy. \\ \underline{Convolutional neural networks (CNNs):}\\ \noindent\textit{Comparison of Two-Talker Attention Decoding from EEG with Nonlinear Neural Networks and Linear Methods (Ciccarelli et al. 2019):} AAD tasks are often solved by two consecutive models: an EEG-based speech reconstruction model followed by a model to determine the similarity between the candidate speech streams and the reconstruction, and classify which one was attended. \cite{Ciccarelli2019} introduce an architecture that integrates these two models in one. They evaluate it on a two-competing talker scenario, and compare it to a linear stimulus-reconstruction decoder \citep{Osullivan2014} as well as the neural-network stimulus-reconstruction decoder from \cite{deTaillez2020MachineSpeech} reviewed above.\\ The dataset consists of 11 participants listening to 40 minutes of two-competing speaker stimulus (four 5-minute stimuli, each used once as the distractor and once as the attended speech). The model is a CNN-based architecture for direct talker classification. It consists of two convolutional layers, with kernels of three and one samples, respectively. They are followed by a set of four fully connected layers of decreasing size. The training was subject-specific using cross-validation.\\ In decoding accuracy on a 10~s decision window, the integrated CNN classifier dramatically outperformed both the segregated neural-network and linear-decoder architectures (81\% vs 62\% vs 66\%, respectively).\\ One issue to address is the subject-specific training. The models are not trained with the perspective of generalizing across subjects, which is one of the expected improvements from deep learning models over linear models (see Section \ref{Pitfalls}).\\ \noindent\textit{Auditory attention tracking states in a cocktail party environment can be decoded by deep convolutional neural networks (Tian and Ma, 2020):} The authors propose a CNN model with source-spatial feature images (SSFIs) as the input to decode auditory attention tracking states in a cocktail party environment. The dataset consists of 42 participants listening to two different Mandarin speakers. Each subject completes 120 trials of 2~s.\\ The neural network consists of three convolutional layers. After each convolutional layer, a max-pooling operation is performed.
The output of the last max-pooling layer is flattened and fed into a 128-dimensional FC layer, which is fed to the two-dimensional output layer (representing the probabilities of the two classes, correct or wrong).\\ For the behavioral performance score, they utilize the signed residual time \citep{Maris2012}. The signed residual time measures the trade-off between response accuracy and task execution speed. Among 42 subjects, 30 with a high signed residual time score are classified as the high behavior performance group (H-group), whereas 12 with low scores are classified as the low behavior performance group (L-group).\\ For both H- and L-groups, the model is trained on all subjects minus one from the same group and evaluated on the remaining one (i.e., inter-subject leave-one-out cross-validation) to obtain the classification accuracy. Considering the small size of the L-group, a fine-tuning condition is also performed. For each subject from the L-group, the model is pre-trained on the H-group and fine-tuned with an inter-subject leave-one-out cross-validation on the L-group. These models are therefore subject-independent. The average auditory tracking state classification accuracy (correct or wrong) of the H-group is 80.4\%, while the classification accuracy of the L-group is 69.9\%. When fine-tuning the subjects of the L-group on the model of the H-group, in the same inter-subject leave-one-out cross-validation setting, a classification accuracy of 75.2\% is achieved.\\ One caveat here is the small stimulus content (eighteen possible 3-word combinations with common words). The model could possibly identify the EEG response to these words and fail on unseen words (i.e., overfitting).\\ \noindent\textit{Auditory attention detection with EEG channel attention (Su et al., 2021):} \cite{su2021auditory} propose an AAD system that integrates a neural channel attention mechanism and a convolutional neural network (CNN) classifier. A publicly available dataset was used \citep{das_neetha_2019_3377911} (we will refer to it as Das2019). The 16 normal-hearing participants are instructed to attend to one of two competing speakers. Four Dutch short stories (12~min each), narrated by different male speakers, were used as speech stimuli. The dataset is split into a 60/20/20 train/test/validation scheme for each subject's dataset. The authors propose a channel attention mechanism that predicts a channel mask, corresponding to a spatial mapping of the EEG electrodes. This channel mask varies per story and subject. To measure the relationship between EEG channels and speech stimuli, the cosine similarity is used. After calculating the cosine similarity, the channel attention model contains two fully connected layers and a softmax layer, to calculate the final channel mask. This channel mask is then applied to the EEG channels. The model combines the channel attention mechanism with the CNN model proposed by \cite{Vandecappelle2021}.\\ The authors report results for window lengths of $0.1, 0.5, 1$, and $2$ seconds. For 0.1 seconds, the obtained accuracy is $77.2\%$; for 2 seconds, this accuracy goes up to $88.3\%$.\\ The accuracy obtained on 2~s is very high compared to other studies (e.g., 72\% for 2~s \citep{kuruvila2021extracting}, 79\% for 5~s \citep{deTaillez2020MachineSpeech}). It may be prudent to verify the generalization capability of this complex model, which achieved good performance with subject-specific training, by evaluating it within a subject-independent training paradigm.
This topic is addressed in Section \ref{SI_model}.\\ \noindent \textit{Robust decoding of the speech envelope from EEG recordings through deep neural networks (Thornton et al. 2022):} \cite{thornton2022robust} propose a neural network structure to reconstruct the envelope from EEG recordings in different listening conditions. The authors validate their models both on single-speaker datasets and on the competing talker scenario. In this section, we will focus on the competing talker scenario. The competing talker dataset contains 18 subjects who listen to a male and a female speaker, each simultaneously narrating an audiobook. In the first scenario, subjects are instructed to listen to the male speaker; in the second scenario, they are instructed to listen to the female speaker. Each listening condition has a total of 10 minutes of data per subject, recorded in 4 different trials. As a baseline, the authors use a linear backward model. Two neural networks are compared. The first is an FCNN. A segment of 50 time samples (400~ms) is first flattened and then put through fully connected layers, with each layer containing fewer neurons than the preceding layer. The output of the final layer is a scalar, representing an estimate of the envelope at the onset of the segment. The second network is a CNN inspired by EEGnet \citep{Lawhern_2018} and performs a temporal convolution on the input, followed by a spatial convolution and a depthwise separable convolution. The output is then flattened and reduced to a single scalar output by taking a linear combination. The models are subject-specific. \\ The correlation between the reconstructed envelope and each of the attended and unattended speech envelopes is calculated, and the classifier picks the envelope with the highest correlation as the attended speaker envelope. The classification accuracy is calculated for windows of 2.5 and 10 seconds. Both the CNN and the FCNN offer a clear improvement over the linear model. For a window length of 10 seconds, the CNN achieves around $80\%$ classification accuracy, while the linear model achieves around $68\%$.\\ One issue to address in this study is the use of DNNs as subject-specific models. Although better results are obtained with subject-specific models on their dataset in the single-speaker paradigm (see the summary in Section \ref{single_source}), with the perspective of gathering more data (e.g., by using public datasets, see Section \ref{public_dataset}), showing results of subject-independent models would have been relevant. The motivation to use subject-independent models is developed in Section \ref{SI_model}.\\ \underline{Recurrent neural networks (RNNs):}\\ \textit{Extracting the auditory attention in a dual-speaker scenario from EEG using a joint CNN-LSTM model (Kuruvila et al., 2021):} \cite{kuruvila2021extracting} introduce a joint CNN-LSTM model that takes the EEG signals and the spectrogram of the multiple speakers as inputs and classifies the attention to one of the speakers.\\ They use three datasets, two of which are publicly available (i.e., \cite{das_neetha_2019_3377911} and \cite{fuglsang2017}). The third dataset contains data from 27 subjects, listening to German news extracts, amounting to a total of 30 minutes per subject. The model uses the spectrogram of the speech stimuli as input to the speech network. Both EEG and spectrogram are first put through a CNN, then concatenated and fed into the last part of the network together. The EEG subnetwork contains 4 convolutional layers.
The audio subnetwork contains 5 convolutional layers. The EEG output and the two audio outputs are then put into the final network, consisting of a bidirectional LSTM (bi-LSTM) block, followed by a few fully connected layers. The dataset is split into a 75/12.5/12.5 train/test/validation scheme within trials.\\ The best performances on unseen data were obtained when training the model on all three datasets together, suggesting that the model might generalize better when training on more data or that heterogeneous data leads to better generalization. For windows of 2, 3, 4, and 5 seconds, the authors report mean accuracies of $70.9\%$, $73.9\%$, $75.2\%$, and $75.5\%$, respectively. Although the model contains 400 000 parameters, a pruning analysis shows that a sparsity of 40~\% can be achieved without obtaining significantly lower accuracy.\\ \noindent\textit{Auditory attention decoding from electroencephalography based on long short-term memory networks (Lu et al., 2021):} \cite{lu2021auditory} propose an LSTM-based architecture to decode auditory attention in a competing two-talker scenario. The aim is to investigate whether an end-to-end nonlinear framework can outperform the state-of-the-art linear models on short decision windows (1~s).\\ 21 subjects participated and each listened to two concurrent speech segments from China National Radio. Each speech stimulus has a duration of 52~s and the subjects listened to the stimuli twice, alternating their attention between the two presentations. Each stimulus was split into 30 seconds of training data and 22 seconds of testing data. The number of training samples ranges between 28 and 478, depending on the window length.\\ The authors propose a model consisting of 6 LSTM blocks in total, followed by a fully connected layer and a 2-node output layer. For both the attended and the unattended streams, the speech envelope and EEG first go through two separate LSTM blocks (four in total). Then, for both attended and unattended streams, the two LSTM's outputs are subtracted element-wise (e.g., for the attended stream $S_{attended} = LSTM_1(Speech) - LSTM_2(EEG)$) and the resulting signal is then put into another LSTM block (two in total). The outputs of both attended and unattended streams are then concatenated and put into a fully-connected layer with 40 hidden nodes, followed by the final output layer which predicts the attended speaker.\\ For a window length of 1 second, the authors report an average accuracy of $96.12\%$, which is an average over 86 1-s testing sample sequences, with a model trained on 118 1-s sample sequences.\\ The accuracies obtained are extremely high for 1-s windows. Considering the complexity of the model (6 LSTM layers with 20 hidden units each) and the very low amount of data per subject (30~s for training, 22~s for testing), it is plausible that the model overfits the data. This study would benefit from testing its generalizability over subjects and different speech content (see Section \ref{Pitfalls}).\\ \textit{Auditory attention decoding from EEG-based mandarin speech envelope reconstruction (Xu et al., 2022):} The authors introduce an LSTM-based architecture to decode auditory attention using Mandarin as the stimulus language. 21 subjects participated in the study. Each subject listened to the same 40~minutes of speech stimuli, spoken by two different Mandarin speakers.
For each subject, the resulting 40~minutes of EEG are split up as follows: randomly per subject, 6 minutes are extracted for training, 2 minutes for validation, and the remaining 32 minutes for testing. This amounts to a total of 126 minutes for training and 672 minutes for testing, with the 40 minutes of stimuli present in both sets.\\ In \cite{xu2022auditory}, the authors propose to use an LSTM-based model that receives EEG as input and has to output the attended envelope. Pearson correlation is used as a loss metric. As far as we can infer, the model uses 10 consecutive LSTM blocks, each containing 64 hidden nodes. There is no mention of the output layer.\\ The authors report that the best working model uses broadband information, 17 channels, and has an average accuracy of 74.29\% for a window length of 0.15~s.\\ Here we address the issue of randomly selecting the training, validation, and testing sets for each subject from the 40-minute recordings in the protocol. There is a non-negligible possibility that, over 21 subjects, the whole speech stimulus corpus is seen by the model during training. Under this setting, a verification of the generalization to unseen stimuli is missing. We explain potential solutions to this pitfall further in Section \ref{SI_model}.\\ \noindent\textit{Supervised binaural source separation using auditory attention decoding in realistic scenarios (Zakeri et al., 2021):} The authors propose a complete pipeline from speech mixture to a denoised signal based on AAD. Their model attempts to separate the attended speaker from the unattended speaker in realistic scenarios, using different signal-to-noise ratios, reverberation times for a simulated room, and speaker positions.\\ The dataset contains data from 12 subjects, each listening to 48 trials with a length of 54 seconds. These stimuli are split up into 34 seconds of training, 6 seconds of validation, and 14 seconds of test data. The subjects listened to two audio stories. Stimuli are generated for SNRs of 4 dB and 9 dB, for clean speech, and for reverberation times of 0, 0.5, and 1 second. The different spatial configurations of attended and unattended speakers are (-90\degree, 90\degree), (30\degree, 90\degree), (-30\degree, -90\degree) and (-5\degree, 5\degree). Both spectral and spatial features are jointly used to train a set of deep neural networks (DNNs) responsible for dereverberation and denoising. The AAD processing stage takes the EEG as input and then calculates the phase-locking value (PLV), which is then fed to a bi-LSTM that detects the attended speaker. Based on the classification scores of the AAD module, different portions of the calculated masks are selected to resynthesize the attended speaker. The bi-LSTM contains 10 layers with 100 hidden units each.\\ The authors report average AAD accuracies from 66.37\% for 1~s up to 84.57\% for 40~s of data.\\ The accuracy obtained is quite low. As a comparison, \cite{su2021auditory} obtained $77.2\%$ on 0.1~s, and $88.3\%$ for 2~s. The model was trained on an AAD task; however, it was simultaneously used to denoise stimuli with low SNRs. This could account for the reduced accuracy compared to other studies that utilized clean speech.\\ \underline{Transformers:}\\ \noindent\textit{Decoding selective auditory attention with EEG using a transformer model (Xu et al., 2022):} \cite{xu2022decoding} propose a transformer architecture to decode auditory attention in a competing two-talker scenario.
The aim is to investigate whether an end-to-end nonlinear framework can outperform state-of-the-art linear models.\\ The dataset used is the same as \cite{xu2022auditory}.\\ The EEG signal is provided to the model as input to an AAD-based transformer containing an encoder and decoder block. The encoder contains positional encoding, channel attention, and temporal self-attention models. The following decoder takes as input the encoded EEG and reconstructs the speech envelope. The Pearson correlation is computed between the reconstructed left and right speech envelopes. The attended envelope is selected as the one having the highest correlation with the reconstructed envelope.\\ For a 0.15~s window, the authors obtained an average accuracy of 76.35\%.\\ The issues previously mentioned for \cite{xu2022auditory} apply to this study as well: the model may be overfitted to the stimulus. \subsubsection{Locus of attention} While in the previous studies the attended speaker was classified by comparing the individual speaker signal to the EEG signal, it is also possible to decode the locus of attention, i.e., the direction in which the listener is focusing their attention. In this case, the acoustic signals of the individual speakers are not needed; the classification can be done purely from the EEG signal \citep{GeirnaertCSP}.\\ \underline{General regression neural network (GRNN):}\\ \noindent \textit{A novel technique for identifying attentional selection in a dichotic environment (Shree et al., 2016):} \cite{shree2016novel} implement a GRNN model to classify left/right attention. The dataset contains 20 subjects. 10 out of these subjects were instructed to listen to the story played in the left ear, whereas the other 10 had to pay attention to the story played in the right ear. Each subject listened to three trials of 1 minute. The authors used randomized sub-sampling, a method that randomly selects 50\% of the data for training and the other 50\% for testing. It is unclear from the paper whether this split is done per subject or for all subjects. We thus cannot state if the training is subject-specific or subject-independent.\\ A general regression neural network (GRNN) based classifier is proposed, a single-pass architecture that estimates the conditional mean $E[Y \mid X]$ in order to estimate the most probable values of output $Y$ given input $X$. The model has 4 fully-connected layers. The last layer gives the conditional mean as output.\\ The model's average classification accuracy on a 60~s window is 99.05\%.\\ The classification accuracy is surprisingly high compared to previous work with linear models \citep{Osullivan2015}, which obtained 89\%. Linear unsupervised approaches also performed below 90\% on public datasets \citep{geirnaert2021unsupervised}. One possible explanation could be that the model associates a subject with a side, as each subject focused on either the left or the right story. Another explanation is the small amount of data (3~min) per subject, which, considering a network with 4 fully-connected layers, might be too little. It is not clear from the paper how many neurons are attributed to each layer, hence we cannot estimate the number of parameters. In addition, a major criticism of this paper is the lack of information: the training paradigm (subject-specific or subject-independent) is unclear and we do not have information about the model's parameters.
However, the two explanations stated above apply in any case.\\ \underline{Convolutional neural networks (CNNs):}\\ \noindent\textit{EEG-based detection of the locus of auditory attention with convolutional neural networks \citep{Vandecappelle2021}:} A CNN-based model is introduced to classify left or right attention and uses segments of 1-2~s. The Das2019 dataset is used.\\ The proposed model receives an EEG matrix as input, and passes it to a convolutional layer with a ReLU activation function. An average pooling step is then used over the time dimension, reducing each of the time series dimensions to a single number. After the pooling step, there are two fully connected (FC) layers. The first FC layer contains one neuron per time series dimension and is followed by a sigmoid activation function. Finally, the second FC layer contains two (output) neurons. As a baseline, a linear decoder model was implemented to be evaluated on the same task.\\ The CNN-based model outperformed the linear decoder on a 1~s decision window (81\% vs 58\%, respectively). As a comparison, the common spatial pattern approach defined by \cite{GeirnaertCSP} reached 80\%.\\ Although locus of attention decoding is a much simpler task than decoding the stimulus of the attended speaker, the obtained performance on 1~s is higher than in the paper from \cite{zakeri2021supervised} (66\%).\\ \underline{Autoencoders (AEs):}\\ \noindent\textit{Speaker-independent brain enhanced speech denoising (Hosseini et al., 2021):} The Brain Enhanced Speech Denoiser (BESD) is a speech denoiser; it is provided with the EEG and the multi-talker speech signals and reconstructs the attended speaker speech signal. The paper is divided into two tasks: one speaker-specific task, during which the attended speaker identity is provided to the model (i.e., the model is trained speaker-specifically). The performance of the BESD model is compared to a classical denoising autoencoder using solely the mixture speech without EEG. The second task is a speaker-independent denoising task. The model is trained in a different configuration (i.e., different attended speakers) and therefore needs the EEG information to denoise the mixture speech.\\ The dataset consists of 34 participants, each listening to 30 trials of 1~minute each. Two participant groups were formed: 17 were asked to pay attention to the left speaker, and 17 to the right. For each subject, five trials were assigned to the test set, two to the validation set, and 23 to the training set. The proposed BESD architecture has an autoencoder structure: one encoder for the speech mixture and one for the EEG activity. Each encoder includes three convolutional blocks with a feature-wise linear modulation (FiLM) \citep{Perez_Strub_de_Vries_Dumoulin_Courville_2018}, which is an affine transformation applied to the output of the convolutional blocks. A last convolution is applied after these convolutional blocks. The output of both encoders is concatenated in the latent space in a so-called fusion layer. The latter is fed into a decoder block composed of two convolutional blocks similar to the encoder's (filter sizes 52 and 100 for convolutions 1 and 2, respectively). The last layer is a 1D convolution of filter size one followed by a hyperbolic tangent. As the loss function, the authors used the scale-invariant signal-to-distortion ratio (SI-SDR) \citep{LeRoux2019SDRH}, which has been shown to perform well as a general-purpose loss function for time-domain speech enhancement \citep{Kolbaek2020}.\\
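SI-SDR is compact enough to state directly; below is a minimal sketch following the usual definition, where the mean removal and the projection of the estimate onto the target reflect our reading of the standard formulation (when used as a training loss, the negative SI-SDR is minimized).
\begin{verbatim}
import numpy as np

def si_sdr(estimate, target, eps=1e-8):
    """Scale-invariant signal-to-distortion ratio in dB (higher is better)."""
    estimate = estimate - estimate.mean()
    target = target - target.mean()
    # Scaled target: projection of the estimate onto the target signal.
    alpha = np.dot(estimate, target) / (np.dot(target, target) + eps)
    s_target = alpha * target
    e_noise = estimate - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target)
                           / (np.dot(e_noise, e_noise) + eps))

rng = np.random.default_rng(0)
s = rng.standard_normal(16000)                  # 1 s of "speech" at 16 kHz
print(si_sdr(s + 0.1 * rng.standard_normal(16000), s))   # roughly 20 dB
\end{verbatim}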
The BESD model outperformed the classical denoising autoencoder on the speaker-specific denoising task and significantly enhanced the speech compared to the noisy mixtures without having any prior information on the attended speaker. It is also an end-to-end approach in which all the algorithm modules are trained simultaneously. In contrast, two separately trained networks are sometimes used \citep{CEOLINI2020117282} (e.g., one for speaker separation and a second to select the attended speaker and enhance it).\\ Although this technique might be more useful for AAD than for the original objective of EEG-enhanced speech denoising, evaluation on public datasets using common AAD metrics might extend the impact of this work (see Section \ref{public_dataset} and \ref{SI_model}). \subsection{Single source ($N=1$)} \label{single_source} As opposed to the previous section, here we describe papers that relate EEG to a single speech or sound source using deep learning models. We divided the papers according to the task: match-mismatch, reconstruction/prediction (forward and backward models), and others. The key features of each paper are reported in Table \ref{table2}. \subsubsection{Match-mismatch (MM)} One method to relate EEG to a single speech source is called a match-mismatch task \citep{DECHEVEIGNE2018, Accou2021ModelingTR, Monesi2020}, defined in Section \ref{Background}. In this paradigm, a model is trained to associate a segment of EEG with the corresponding segment of speech. \noindent We provide an overview of the general model architecture of all these papers in Figure~\ref{fig:MM_task_general}.\\ \underline{Convolutional neural networks (CNNs):}\\ \textit{Modeling the relationship between acoustic stimulus and EEG with a dilated convolutional neural network (Accou et al., 2020):}\\ \cite{Accou2021ModelingTR} present a dilated-convolutional network and compare its performance to a CNN baseline and to a linear decoder.\\ The dataset consists of EEG data from 48 normal-hearing participants listening to 10 stories (each about 14~min) in Flemish (Dutch). They use the speech envelope as the stimulus feature. The training, validation, and testing sets were divided per subject with an 80:10:10 ratio.\\ \noindent This network takes one segment of EEG and two segments (matched and mismatched) of speech and processes them in separate streams. The model uses multiple dilated convolutions as the encoding layers for both speech and EEG streams. The resulting embedded EEG representation is compared with both embedded stimulus representations using cosine similarity. The decision layer is a sigmoid layer that classifies match/mismatch based on the cosine similarity scores.\\ The model is evaluated using the classification accuracy on the match-mismatch task averaged over subjects. It outperformed a CNN baseline as well as a linear decoder baseline on the same MM task for 5~s and 10~s decision windows (80\% and 85\%, respectively).\\
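To make the decision step of such match-mismatch models concrete, the sketch below scores an EEG embedding against a matched and a mismatched speech embedding with the cosine similarity; the embedding networks themselves are omitted, and the fixed-offset construction of the mismatched segment is only one possible choice (see the discussion of negative-sample selection below).
\begin{verbatim}
import numpy as np

def cosine(a, b, eps=1e-8):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def correct_decision(eeg_emb, matched_emb, mismatched_emb):
    """1 if the matched speech embedding is closer to the EEG embedding."""
    return int(cosine(eeg_emb, matched_emb) > cosine(eeg_emb, mismatched_emb))

def mismatched_segment(speech, start, win, offset):
    """One common choice: the segment 'offset' samples after the matched one."""
    return speech[start + win + offset: start + 2 * win + offset]

def mm_accuracy(eeg_embs, matched_embs, mismatched_embs):
    """MM accuracy = fraction of segments for which the matched pair wins."""
    return float(np.mean([correct_decision(e, m, mm) for e, m, mm
                          in zip(eeg_embs, matched_embs, mismatched_embs)]))
\end{verbatim}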
The same model was used by \cite{Accou2021PredictingSI} to predict speech intelligibility. Being able to derive an objective measure for speech understanding is crucial as behavioral tests are not possible for a part of the population.\\ We address one criticism of \cite{Accou2021ModelingTR, Accou2021PredictingSI}, regarding the quality of the classification accuracy estimate. In this study, the authors use one mismatched segment (1~s after the end of the matched segment). To get a better estimate of the classification accuracy, an alternative is to take multiple shifted mismatched segments and average the accuracy obtained \citep{DeCheveigne2021}. Considerations about the negative sample(s) selection within the MM task are discussed in Section \ref{Pitfalls}.\\ \underline{Recurrent neural networks (RNNs):}\\ \noindent\textit{A LSTM-based architecture to relate speech stimulus to EEG (Monesi et al., 2020):} Compared to \cite{Accou2021ModelingTR, Accou2021PredictingSI}, \cite{Monesi2020} introduced a model integrating LSTM layers (see description in Section \ref{appendix_architecture}).\\ The dataset is an extension of the dataset used in \cite{Accou2021ModelingTR, Accou2021PredictingSI}, with 90 subjects rather than 48. In this paper, a variation of the match-mismatch paradigm explained above is used: the model takes one EEG segment and one speech segment (the speech envelope) as inputs and has to decide whether they match or not. On the speech stream, the dimensionality reduction block consists of a convolutional layer followed by an LSTM layer. As the LSTM memory capacity is limited, the CNN is used as a pre-processing layer that reduces the number of recurrence steps the LSTM will have to perform (a kernel of more than one sample slides along the time axis and reduces its dimension). On the EEG stream, the dimensionality reduction block consists of a convolutional layer, followed by two dense layers. Both EEG and the speech segment are hence projected into a common embedded space. Similarly to the dilated-convolutional model from \cite{Accou2021ModelingTR}, the cosine similarity is here used as the correspondence measure, except that this time it is computed along the time axis (i.e., the output will be a vector with the length being the number of time samples). The decision layer block is composed of a time-distributed dense layer followed by a calculation of the mean to select the outcome (i.e., matched or mismatched). The data split for training, validation, and testing is 80:10:10 as in \cite{Accou2021ModelingTR}.\\ Like the dilated-convolutional model presented in the two previous papers, this LSTM-based architecture outperforms a CNN baseline as well as a linear decoder baseline (85\%, 73\%, and 69\% classification accuracy, respectively). Subject-independent training leads to higher accuracy than subject-dependent training with this model, which is a great advantage over linear models and avoids re-training when evaluating the performance of new subjects.\\ The same architecture was used with other speech features in \cite{Monesi2021INTERSPEECH}, such as the Mel spectrogram, voice activity, phoneme identity, and word embedding. The results suggest that the model exploits information about silences, intensity, and broad phonetic classes from the EEG. Furthermore, the Mel spectrogram, which contains all this information, yields the highest accuracy (84\%) among all the features.\\ As opposed to \cite{Accou2021ModelingTR, Accou2021PredictingSI}, no negative sample is taken for the MM task.
A negative sample is added in \cite{Monesi2021INTERSPEECH}, but in both studies it remains a binary classification that raises the same criticism mentioned for \cite{Accou2021ModelingTR, Accou2021PredictingSI}.\\ \subsubsection{Reconstruction/Prediction (R/P)} EEG can also be related to speech in a reconstruction (or a prediction) task. In this case, a stimulus feature is decoded from the EEG (or the EEG predicted from the speech, respectively), and compared to the original. This relates to the commonly used linear backward (or forward, respectively) models (see Section \ref{Background}).\\ \underline{Fully-connected neural networks (FCNNs):}\\ \noindent\textit{Deep Canonical Correlation Analysis For Decoding The Auditory Brain (Katthi et al., 2020):} Canonical correlation analysis (CCA) is a linear method to project two signals to a latent space that maximizes the correlation between the two signals. Following up on \cite{DECHEVEIGNE2018}, \cite{Katthi2021DeepMC} introduce a deep CCA (DCCA) model for speech-EEG data. This model has shown improvement over linear CCA (LCCA) on image data and is expected, along with appropriate regularization methods, to improve the resulting correlations on speech-EEG data.\\ The dataset used is from \cite{DiLiberto2015}. 6 subjects listened to single-speaker audiobooks in 20 trials of 160~s each. 19 trials are used for training and 1 for testing. Within the training set, 90\% is allocated for training and 10\% for validation. The authors tried two neural network architectures for the deep CCA models. The first architecture contains a two-hidden-layer network for each of the envelope and EEG sides, followed by a 1-dimensional output layer. The second architecture contains 4 hidden layers. They use a leaky ReLU activation function, with a negative slope coefficient of 0.1 at the output of the deep CCA model. The neural network is trained to maximize the correlation between the embedded representations of speech and EEG. The first architecture consistently outperformed the second one, so only its results are mentioned here. The authors experimented with several configurations of the deep CCA model and show that the model consistently outperforms the linear counterpart (best subject correlation: 0.4 and 0.31; worst subject correlation: 0.21 and 0.18 for DCCA and LCCA, respectively).\\ The comparison between the linear CCA's performance and the deep CCA's performance is evaluated in a subject-specific paradigm. Hence, the model is not trained with the perspective of generalizing across subjects, which is one of the expected improvements from deep learning models over linear models (see Section \ref{Pitfalls}). Another issue is the difficulty of interpreting CCA correlation coefficients: the correlation between the EEG and speech latent representations is hardly comparable to the correlation between the predicted and ground truth signals usually reported in reconstruction/prediction studies.\\ \noindent\textit{Deep Correlation Analysis for Audio-EEG Decoding (Katthi et al. 2021):} The authors develop two models: one for intra-subject audio-EEG correlation analysis (the DCCA model introduced in \cite{Katthi2021DeepMC}) and another for an inter-subject configuration (the deep multiway canonical correlation analysis, or DMCCA). Intra- and inter-subject correlation analyses attempt to suppress the EEG artifacts while preserving the components related to the stimulus. DCCA uses the intra-subject information to do so, while DMCCA uses information shared across subjects to attempt EEG denoising.\\
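For comparison with these deep variants, here is a minimal sketch of the linear CCA step they generalize (whitening both views and taking an SVD of the whitened cross-covariance); regularization beyond a small diagonal term is omitted and the variable shapes are assumptions.
\begin{verbatim}
import numpy as np

def linear_cca(X, Y, k=1, eps=1e-6):
    """First k canonical directions/correlations of X (n, dx) and Y (n, dy)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + eps * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + eps * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n
    # Whiten each view with the inverse Cholesky factor, then SVD.
    Wx = np.linalg.inv(np.linalg.cholesky(Cxx)).T
    Wy = np.linalg.inv(np.linalg.cholesky(Cyy)).T
    U, s, Vt = np.linalg.svd(Wx.T @ Cxy @ Wy)
    A, B = Wx @ U[:, :k], Wy @ Vt[:k].T
    return X @ A, Y @ B, s[:k]   # projected views, canonical correlations

# The Pearson correlation between the first projected components equals s[0];
# deep CCA replaces the linear maps A and B by neural networks.
\end{verbatim}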
The authors compare four architectures and evaluate the Pearson correlation between the transformed EEG and audio signals. Two possible inter-subject architectures are used first: linear-multiway canonical correlation analysis (LMCCA) and DMCCA; then two possible intra-subject architectures: LCCA and DCCA. This leads to 4 possible combinations (LMCCA/LCCA, LMCCA/DCCA, DMCCA/LCCA, and DMCCA/DCCA), whose performances are compared.\\ The dataset used is from \cite{DiLiberto2015}. 8 subjects listened to single-speaker audiobooks in 20 trials of 160~s each. \\ The structure of DCCA is explained in \cite{Katthi2021DeepMC}. LCCA and LMCCA \citep{DECHEVEIGNE2019} are used here as linear baselines. The DMCCA model is an autoencoder finding a latent representation of each EEG with $d$ dimensions ($d$ is a hyperparameter to optimize) and reconstructing the original responses from them. The encoder has two hidden layers and an output layer of $d$ units. The decoding part has two hidden layers.\\ For a majority of the subjects, the authors found that the deep learning combination DMCCA/DCCA improved the Pearson correlation significantly over the linear combination LMCCA/LCCA (average correlation for DMCCA/DCCA: 0.344, LMCCA/LCCA: 0.270). In conclusion, better transforms can be found using deep models.\\ We address one drawback of this method: the deep CCA model projects the EEG and speech signals into a common latent space that maximizes the correlation between them. Such correlations are higher for low-frequency bands \citep{Etard2019}, which might bias the model to omit some higher-frequency band information. It thus becomes challenging to compare correlations obtained with CCA to the regression-task correlations computed between the ground truth and the reconstructed/predicted signals.\\ \noindent \textit{Robust decoding of the speech envelope from EEG recordings through deep neural networks (Thornton et al. 2022):} In \cite{thornton2022robust}, the authors propose two neural network structures to reconstruct the envelope from EEG recordings in different listening conditions.\\ Data is included from 13 subjects who listen to a single-speaker audiobook, in noiseless and anechoic conditions, for a total of 40 minutes per subject, recorded in 15 different trials. 9 of these trials were used for training, 3 for evaluation, and 3 for testing.\\ As a baseline, the authors use a linear backward model. Two neural networks are compared. The first is an FCNN. A segment of 50 time samples (400~ms) is first flattened and then put through $L$ (a hyperparameter to optimize) fully connected layers, with each layer containing fewer neurons than the preceding layer. The output of the final layer is a scalar, representing an estimate of the envelope at the onset of the segment. The second network is a CNN inspired by EEGnet \citep{Lawhern_2018} and performs a temporal convolution on the input, followed by a spatial convolution and a depthwise separable convolution.
The output is then flattened and reduced to a single scalar output by taking a linear combination.\\ For both the subject-independent and subject-specific training paradigms, both the FCNN and the CNN models significantly outperformed the linear baseline (subject-independent: median Pearson $r=0.12$, $r=0.13$ and $r=0.08$ for the FCNN, CNN and linear baseline, respectively; subject-specific: median Pearson $r=0.22$, $r=0.22$ and $r=0.18$ for the FCNN, CNN and linear baseline, respectively).\\ As reported in \cite{Accou2023}, the correlation distributions of the FCNN and CNN models remained very similar when evaluated on unseen stimulus segments from the same subjects used during training (median: $r=0.14$ and $r=0.14$, respectively), but also when evaluating them on different subjects from the DTU dataset (median: $r=0.11$ and $r=0.14$). These findings indicate a good generalization of the model across subjects and speech content (see Section \ref{SI_model}).\\ \underline{Recurrent neural network:}\\ \noindent\textit{Native language and stimuli signal prediction from EEG (Sakthi et al. 2019):} The authors implemented an LSTM-based backward model to reconstruct the envelope and the spectrogram from the EEG. There were 15 native and 14 non-native English speakers (with Mandarin as their native language) in the study. In one condition, the participants were asked to pay attention to the story (continuous speech stimulus) and ignore a tone (discrete sound stimulus). In the other condition, the participants were asked to pay attention to the tone and ignore the story. Each session had a duration of 1~min and there were 30 sessions per condition per subject while their EEG activity was recorded. The train and test splits were chosen randomly as 80\% and 20\%.\\ The envelope model is a sequential model with a single LSTM layer followed by a dense layer to obtain a single channel output. The correlation between predicted and ground truth envelopes was computed. The spectrogram model is also a sequential model with two LSTM layers each followed by a dense layer to obtain the 16-channel output. The channel-wise correlations between predicted and ground truth spectrograms were computed. For the two previously presented models, an equivalent linear baseline model \citep{Crosse2016} was computed. The LSTM-based model, for both spectrogram and envelope, led to higher Pearson correlation values than the linear baseline. For native speakers, the mean correlation across subjects increased from 0.15 to 0.20 for the envelope, and from 0.07 to 0.09 for the Mel spectrogram. As far as we can infer from the article, the LSTM-based model was trained in a subject-specific paradigm. Considering the memory capacity of LSTM layers, the model could simply remember characteristics from a subject such as EEG electrode placement. A solution is to train the model on EEG responses from multiple subjects listening to a common stimulus to compensate for these differences (see Section \ref{SI_model}).\\ The next three papers \citep{Krishna2020, KrishnaEUSIPCO2020, Krishna2021NER} are from the same research group and use the same dataset; we therefore discuss them together. \\ The dataset consists of 4 subjects listening to 4 different natural utterances (2-word sentences). The authors collected 70 speech-EEG recordings per subject per sentence. As far as we can infer, one recording corresponds to the EEG response to one sentence.
The train:validation:test ratio was 80:10:10 and supposedly constant across subjects.\\ \noindent \textit{Speech synthesis using EEG (Krishna et al. 2020):} Reconstructing (e.g., imagined) speech from EEG is a way to envision a brain-computer interface (BCI) to enable communication when overt speech is impossible. Speech synthesis was attempted in \cite{Krishna2020}. This paper aims at developing deep learning architectures with recurrent layers such as GRUs to reconstruct presented continuous speech or listening utterances from the EEG.\\ The speech synthesis model first has two GRU layers. The last GRU layer is connected to a time-distributed dense layer with 13 units, corresponding to the number of speech features to predict at every time step (i.e., Mel-frequency cepstral coefficients, or MFCCs). The model was evaluated using two metrics: the Mel cepstral distortion (MCD) and the root-mean-squared error (RMSE) between the predicted and ground truth MFCCs. \\ On both continuous speech and listening utterances, the presented GRU-based speech synthesis model outperformed LSTM-based models introduced in \cite{Krishna2020ARXIV} (which was not published). We do not report the performance figures here as they are absolute RMSE and MCD values, which makes comparison with other papers complicated.\\ \noindent \textit{Generating EEG features from Acoustic features (Krishna et al. 2020, EUSIPCO):} In \cite{KrishnaEUSIPCO2020}, the authors developed an RNN-based forward regression model. It can be seen as the inverse problem of the EEG-based synthesis from \cite{Krishna2020}. They present two models, a regression model and a generative adversarial network (GAN), to predict EEG from acoustic features (MFCCs).\\ The regression model has a structure similar to the one in \cite{Krishna2020} for speech synthesis. MFCCs are fed into two GRU layers, followed by a time-distributed dense layer with the number of units corresponding to the dimensions of the EEG feature set used. An alternative regression model with bi-GRU layers instead of GRU layers was also evaluated.\\ A GAN \citep{NIPS2014_GAN} architecture was also built to predict EEG from acoustic features. The motivation to use a GAN is that the loss function is learned, whereas in a regression model the loss is fixed (here MSE). The generator is made of two bi-GRU layers followed by a time-distributed dense layer with the number of units corresponding to the number of EEG feature set dimensions. The output of the generator is referred to as fake EEG. The discriminator applies two bi-GRU layers connected in parallel. At each training step, a pair of inputs is fed into the discriminator. The discriminator takes pairs of (real MFCC features, fake EEG) and (real MFCC features, real EEG). The outputs of the two parallel bi-GRUs are concatenated and then fed to a GRU layer. The last time step output of the GRU layer is fed into the dense layer with a sigmoid activation function. If appropriately trained, the generator should be able to generate realistic EEG corresponding to a given MFCC input, thus fulfilling the same task as a forward model.\\ The authors obtained a lower RMSE than in the speech synthesis task \citep{Krishna2020} and therefore demonstrated that, using their dataset with their processing steps, it was easier for a recurrent deep learning model to predict EEG from acoustic features than to perform the inverse task.\\ \noindent \textit{Advancing Speech Synthesis using EEG (Krishna et al.
\noindent \textit{Advancing Speech Synthesis using EEG (Krishna et al. 2021):} In \cite{Krishna2021NER}, the authors attempt speech synthesis using an attention-regression model (i.e., a regression model with an attention mechanism, AR). Their contribution consists of several points: first, they use an AR model to predict acoustic features from EEG features. Secondly, they use a two-step approach to solve the same task (i.e., one AR model to predict articulatory features from EEG features, then another AR model to predict acoustic from articulatory features). Thirdly, they propose a deep learning model that takes raw EEG waveform signals as input and directly produces an audio waveform as output. Finally, they demonstrate predicting 16 acoustic features from EEG features.\\ The architecture of the AR model is as follows: first, an encoder GRU layer, then a Luong dot-product attention layer \citep{luong-etal-2015-effective}. The context vectors obtained are then provided to the decoder GRU layer. The decoder GRU outputs are passed to a time-distributed dense layer with a linear activation function.\\ The architecture for speech synthesis of an audio waveform using EEG is as follows: the input EEG is fed into a temporal convolutional network (TCN) layer. Features extracted by the TCN are up-sampled. A TCN layer is then applied before the final up-sampling layer. The up-sampled features are then passed to a time-distributed dense layer with a linear activation function, which directly outputs the audio waveform.\\ The last model, a GRU-based regression model, predicts 16 acoustic features from EEG features: the EEG input is provided to a GRU layer and then passed through a time-distributed dense layer with a linear activation function.\\ The results presented in this paper show how different acoustic features are related to EEG recorded during speech perception and production. Their newly introduced AR models outperform the regression model they introduced in \cite{Krishna2020} (i.e., a lower MCD for the new model on a majority of subjects). They also report per-subject RMSE values for the raw EEG-to-audio-waveform task, but without a comparison.\\ We address a common issue for \cite{Krishna2020, KrishnaEUSIPCO2020, Krishna2021NER}: the stimulus content is limited. It is tricky to evaluate the backward/forward modeling performance of a model with only four utterances. First, the model might overfit completely on these utterances, as they are very short (2 words per utterance). In addition, considering the 80:10:10 ratio used for each subject, the model certainly saw all the utterances during training, which supports the overfitting hypothesis.\\ In \cite{KrishnaEUSIPCO2020}, the authors obtained better performance for EEG prediction than for speech reconstruction. This finding is surprising: while it is theoretically possible to reconstruct the MFCC features from the EEG, the EEG also contains speech-unrelated components, so it should normally be impossible to predict the whole EEG signal from speech.\\
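Coming back to the attention-regression architecture described above (encoder GRU, Luong dot-product attention, decoder GRU, time-distributed linear layer), a simplified sketch is shown below. Layer sizes, feature dimensions and the Keras implementation are illustrative assumptions, not the authors' implementation.
\begin{verbatim}
# Simplified sketch of an attention-regression (AR) model: a GRU encoder
# over EEG features, Luong-style dot-product attention, a GRU decoder and
# a time-distributed linear output.  Dimensions are illustrative.
import tensorflow as tf
L = tf.keras.layers

def build_attention_regression(n_eeg_feats=30, n_out=13, units=128):
    eeg = tf.keras.Input(shape=(None, n_eeg_feats))
    enc = L.GRU(units, return_sequences=True)(eeg)   # encoder states
    dec = L.GRU(units, return_sequences=True)(enc)   # decoder states
    # Dot-product (Luong) attention: decoder states attend to the encoder.
    context = L.Attention()([dec, enc])
    h = L.Concatenate()([dec, context])
    out = L.TimeDistributed(L.Dense(n_out, activation="linear"))(h)
    model = tf.keras.Model(eeg, out)
    model.compile(optimizer="adam", loss="mse")
    return model
\end{verbatim}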
\underline{Autoencoders (AEs):}\\ \noindent\textit{Learning subject-invariant representations from speech-evoked EEG using variational autoencoders (Bollens et al. 2021):} The authors introduce a model aiming to disentangle subject-invariant and subject-specific information from speech-evoked EEG using variational autoencoders (VAEs). For that purpose, the EEG is modeled with two disentangled latent spaces (i.e., one that models subject information and one that models content information). The degree of disentanglement is then measured using subject-classification and content-classification tasks on the latent representations learned by the VAE. \\ The dataset is an extension of the one used in \cite{Accou2021ModelingTR}, and consists of 100 normal-hearing native Flemish participants, each of whom listened to between 6 and 8 stories (each around 14 minutes). The presented model is a factorized hierarchical variational autoencoder (FHVAE) encoding EEG signals in two disentangled latent spaces. One captures high-level, slowly varying information, while the other captures residual, fast-changing information. It aims at modeling neural responses for short EEG segments (i.e., 500~ms). $z_{1}=\mu_{1} + \epsilon\sigma_{1}$ is defined as the latent content variable (i.e., containing content information), whereas $z_{2}=\mu_2 + \epsilon\sigma_{2}$ is defined as the latent subject variable (i.e., containing subject information). $\mu_{k}$ is the conditional mean and $\sigma_{k}$ the conditional standard deviation, with $k=1,2$ the index of the corresponding latent variable.\\ The architecture is as follows: first, in the encoder, $z_1$ and $z_2$ are predicted by a stacked LSTM network of two layers, followed by two separate single fully connected layers, predicting the conditional mean $\mu_{k}$ and standard deviation $\sigma_{k}$, respectively. The second part is a two-layer stacked LSTM decoder network, which is fed at each time step with the concatenation of $z_{1}$ and $z_{2}$ sampled from the posterior distribution. Subsequently, two separate single fully-connected layers take the output from the LSTM layers and predict the probability distribution of the corresponding time frame. For more details about the implementation, the authors refer to \cite{NIPS2017_3f5ee243}. The model is trained first to learn subject information (i.e., FHVAE), and then $z_{1}$ regularization to enhance content separation is added to a second model (i.e., extended FHVAE). The train:validation:test split is 80:10:10, as used in \cite{Monesi2020, Accou2021ModelingTR}.\\ Subject-classification accuracy reached 98.94\% and 98.96\% for the latent $z_{2}$ representations of the FHVAE and the extended FHVAE architectures, respectively, suggesting both models succeeded at extracting subject information. On the other hand, subject-classification accuracy for the latent $z_{1}$ representation was around 2\%, which suggests very good disentanglement.\\ The extended FHVAE improves content classification from $z_1$ from 53.89\% to 62.91\% on a binary classification task (chance level $50\%$), while the classification performance from $z_2$ decreases, confirming that the extended model succeeds at modeling content-generating factors in $z_{1}$ but not in $z_{2}$. These results are a first step towards making deep learning models relating speech to EEG generalizable across subjects, which, considering the idiosyncratic nature of EEG, is highly relevant and necessary.\\
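The two-latent-space encoder with the reparameterization $z_{k}=\mu_{k}+\epsilon\sigma_{k}$ can be sketched as follows; this is a minimal illustration in Keras with arbitrary layer sizes, not the authors' FHVAE implementation, and the decoder and training losses are omitted.
\begin{verbatim}
# Sketch of a stacked-LSTM encoder with two Gaussian latents:
# z1 (content) and z2 (subject), sampled as z = mu + eps * sigma.
# Layer sizes are illustrative; the decoder and losses are omitted.
import tensorflow as tf
L = tf.keras.layers

class TwoLatentEncoder(tf.keras.Model):
    def __init__(self, latent_dim=32, units=128):
        super().__init__()
        self.rnn1 = L.LSTM(units, return_sequences=True)
        self.rnn2 = L.LSTM(units)
        self.mu1, self.logvar1 = L.Dense(latent_dim), L.Dense(latent_dim)
        self.mu2, self.logvar2 = L.Dense(latent_dim), L.Dense(latent_dim)

    @staticmethod
    def reparameterize(mu, log_var):
        eps = tf.random.normal(tf.shape(mu))
        return mu + eps * tf.exp(0.5 * log_var)

    def call(self, eeg):                  # eeg: (batch, time, channels)
        h = self.rnn2(self.rnn1(eeg))
        z1 = self.reparameterize(self.mu1(h), self.logvar1(h))  # content
        z2 = self.reparameterize(self.mu2(h), self.logvar2(h))  # subject
        return z1, z2
\end{verbatim}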
\subsubsection{Others (O)} This third section gathers paradigms relating EEG to speech which are neither MM nor reconstruction/prediction tasks.\\ \underline{Recurrent neural network:}\\ \noindent\textit{Sequential Attention-based Detection of Semantic Incongruities from EEG While Listening to Speech (Motomura et al. 2020):} A classification paradigm is used to detect semantic incongruities in sentences from EEG recordings. In this paper, the authors use an attention-based recurrent neural network (ARNN) to detect whether a given sentence contains an anomalous word or not.\\ In this study, 19 participants listened to 200 sentences in Japanese: 40 semantically correct, 40 semantically incorrect, 40 syntactically correct, 40 syntactically incorrect, and 40 filler sentences. The concatenated data from 13 subjects were used for training, the data from 2 subjects for the validation set, and the data from 4 subjects for the test set.\\ The authors evaluated two models: a bi-GRU layer with and without an attention mechanism. The attention mechanism assigns a weight to each time point, based on the hypothesis that not all time points are equally important for the classification (e.g., around the onset of the anomalous word in the sentence, the attention weights have a higher magnitude). The output was obtained as a weighted sum over all time points using the attention weights.\\ The accuracy obtained on the binary classification reaches 63.5\%, which is statistically above chance level and above previous models that included the anomalous word onset (50.9\%).\\ \noindent\textit{Keyword-spotting and speech onset detection in EEG-based Brain-Computer Interfaces (Sakthi et al. 2021):} Three tasks were investigated using an LSTM- and GRU-based network: a sentence spotter (SS), a phoneme vs silence classification (PS), and an audio vs audio-visual stimuli classification (AV). The performance of a deep neural network with recurrent layers is investigated with a view to integration into BCI systems.\\ The dataset consists of 16 native English speakers, who listened to four blocks of 125 sentences (all unique sentences) and a fifth block of 100 sentences (10 unique sentences repeated). The same 70\% of the stimulus data per subject was used for training and the remaining 30\% for testing.\\ The SS model is a binary classification model to predict whether the input is the start of a sentence or not. The EEG signal is segmented (from 500~ms before to 250~ms after the sentence onset) and a PCA is applied. The resulting embedded representation is then fed to two subsequent GRU layers. Finally, a dense layer with one unit and a sigmoid activation function is applied. Alternatively, a second version of the model was trained, with LSTM layers instead of the GRU layers.\\ The PS model is a binary classification model to predict whether the input is a phoneme or a silence. A PCA is applied to the segmented EEG input. The resulting embedded representation is then fed to two GRU layers, after which a dense layer with a sigmoid activation function is applied. A similar model was trained using LSTM instead of GRU layers.\\ The AV model is a binary classification model to identify whether a given EEG response was evoked by audio or audio-visual stimuli. The EEG signal is segmented into 3~s chunks and a PCA is applied, as for the previous models. The model has 4 GRU layers. The resulting output was processed by a fully connected layer of size 1 with sigmoid activation. As a baseline, a Naive Bayes model was trained on all the tasks listed above.\\
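Both studies in this subsection classify EEG segments with recurrent networks. Below is a minimal sketch of a bi-GRU classifier with the attention-pooling readout used by Motomura et al. (a learned weight per time point, followed by a weighted sum over time); layer sizes and the Keras implementation are illustrative assumptions, not the published configuration.
\begin{verbatim}
# Sketch of a recurrent EEG classifier with attention pooling:
# a bi-GRU encodes the segment, a learned attention weight per time step
# yields a weighted sum, and a sigmoid unit performs binary classification.
import tensorflow as tf
L = tf.keras.layers

def build_attention_classifier(n_features=64, units=64):
    x = tf.keras.Input(shape=(None, n_features))
    h = L.Bidirectional(L.GRU(units, return_sequences=True))(x)
    # One attention score per time step, normalized with a softmax over time.
    scores = L.Dense(1)(h)                        # (batch, time, 1)
    weights = L.Softmax(axis=1)(scores)
    pooled = L.Dot(axes=1)([weights, h])          # weighted sum over time
    pooled = L.Flatten()(pooled)
    out = L.Dense(1, activation="sigmoid")(pooled)
    model = tf.keras.Model(x, out)
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
\end{verbatim}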
The recurrent architectures consistently outperformed the simple Naive Bayes baseline for all tasks, and all three models performed better with GRU layers than with LSTM layers. We do not provide the exact figures here, as the SS and PS models are evaluated using the F1 score, which cannot be compared to the metrics of the other studies reviewed here; accuracy values on these tasks would be very high simply because onsets are rare in the data, hence the use of F1 scores. The AV classifier obtained an accuracy of 98.15\%.\\ \begin{landscape} \begin{table*} \hspace{-4cm}\begin{threeparttable} \vspace{2.5cm}\begin{tabular}{@{}llllllll@{}}\toprule Article & Architecture & Feature & Task & Split (train/val/test) & Window & Performance & participants - stimuli \\ \midrule \cite{Monesi2020} & LSTM & E & MM & 80/10/10 (stim.) & 5s; 10s & 80; 85\% & $90 - 10\times~12~min$ \\ \cite{Monesi2021INTERSPEECH} & LSTM & M & MM & 80/10/10 (stim.) & 5s & 84\% & $90 - 10\times~12~min$ \\ \cite{Accou2021ModelingTR} & Dilated CNN & E & MM & 80/10/10 (stim.) & 10s & 90.6\% & $48 - 10\times~12~min$ \\ \cite{Accou2021PredictingSI} & Dilated CNN & E & MM & 80/10/10 (stim.) & 10s & 90.6\% & $48 - 10\times~12~min$ \\ \cite{Katthi2021DeepMC} & FCNN & E & R & CV (stim.) & x & from 0.2 to 0.4 & $6 - 20\times~160~s$ \\ \cite{Katthi2021DCA} & AE+FCNN & E & R(B) & CV (stim.) & x & 0.27 to 0.344 & $8 - 20\times~160~s$ \\ \cite{DMCCASakthi} & FCNN & E & R(B) & 18/1/1 (sub.) & x & 0.255 to 0.344 & $20 - 20\times~160~s$\\ \cite{thornton2022robust} & FCNN & E & R(B) & 9/3/3 (sub.) & x & 0.255 to 0.344 & $13 - 40~min$\\ \cite{Sakthi2019} & LSTM & E/S & R(B)/O & 80/20\% (sub.) & x & 0.20-0.22/0.09-0.12 & $15 - 30\times1~min$\\ \cite{Krishna2020} & GRU & MFCC & R(B) & 80/10/10\% (sub.) & x & RMSE, MCD & $4 - 70\times2~words$\\ \cite{KrishnaEUSIPCO2020} & GRU, GAN & MFCC & R(F) & 80/10/10\% (sub.) & x & RMSE, MCD & $4 - 70\times2~words$\\ \cite{Krishna2021NER} & GRU+Att. & MFCC & R(B) & 80/10/10\% (sub.) & x & RMSE, MCD & $4 - 70\times2~words$\\ \cite{Bollens2021} & AE & None & O & 80/10/10 (stim.) & x & 98.96\% - 62.91\% & $100 - 8\times15min$\\ \cite{Motomura2020} & bi-GRU+Att. & Se. & O & 13/2/4 (sub.) & x & 63.5\% & $19 - 200~sentences$\\ \cite{Sakthi2021} & LSTM/GRU & Se./P & O & 70/30\% (stim.) & x & F1 & $16 - 600~sentences$\\ \bottomrule \end{tabular} \vspace{0.3cm} \caption{Key figures for single speech source papers. E=envelope, M=multiple, S=spectrogram, Se.=sentence, P=phoneme, MM=match-mismatch, R/P=reconstruction/prediction, O=other, B=backward model, F=forward model, CV=cross-validation, stim.=within stimulus, sub.=within subject. The performance values reported depend on the task: for MM and O it is a classification accuracy (\%) and for R it is a Pearson correlation value. Study-specific metrics are specified in the table as names instead of numbers.} \label{table2} \end{threeparttable} \end{table*} \end{landscape} \section{Overfitting, interpretation of results, recommendations} \label{Pitfalls} \subsection{Preamble} In our own practice with auditory EEG, we noticed how easily deep learning models overfit to specific trials, subjects or datasets. This is mainly due to the relatively small amount of data typically available compared to other domains such as image or speech recognition. A very careful selection of the test set is therefore needed, and the results of a number of the studies reviewed above may be overly optimistic. In the following experiments, we demonstrate how this can happen, propose a number of good practices to avoid overfitting, and show how to compute results on a sufficiently independent test set. We first introduce the two datasets that we use for the single speech source and multiple speech source tasks, respectively.
\subsubsection{Single speech source (N=1) dataset} For the single speech source experiments, namely subsections \ref{windowSize}, \ref{SI_model} and \ref{MM_negative_sample} below, we select a publicly available dataset \citep{K3VSND_2023}. It consists of EEG data from 85 normal-hearing subjects who listened to 10 unique stories of roughly 14 minutes each, narrated in Flemish (Dutch).\\ \noindent We use the LSTM-based model proposed by \cite{Monesi2020}, trained on the match-mismatch task defined in Section \ref{Background}. An exception is made for subsection \ref{windowSize}, which uses linear decoders, as correlation analysis requires a regression task rather than a match-mismatch task. \subsubsection{Multiple speech sources (N$>$1) dataset} For the multiple-speech-source experiment, namely subsection \ref{AAD} below, we use the publicly available dataset from \cite{das_neetha_2019_3377911}. It contains data from 16 subjects. In total, there are 4 Dutch stories, spoken by male speakers, of 12 minutes each. Each story is split up into 2 parts of 6 minutes, and all stories are played twice, alternating the attended story. Each subject listens to 8 trials of 6 minutes.\\ \noindent We use the CNN model proposed by \cite{Vandecappelle2021} to conduct our experiments. The architecture of this model is depicted in Figure \ref{fig:cnn_vandecapelle}. \begin{figure} \centering \includegraphics[width=\textwidth]{figures/vandecapelle.pdf} \caption{Architecture of the CNN model \citep{Vandecappelle2021}} \label{fig:cnn_vandecapelle} \end{figure} \subsection{Selection of train, validation and test sets}\label{AAD} When training neural networks, the split of the dataset into training, validation, and testing partitions is an important aspect. Generally, deep neural networks have many more parameters than linear models, which makes them prone to overfitting. Common measures against overfitting are regularization, dropout, and early stopping during training. \\ In addition to these measures, in the multiple-speaker scenario, it is important to pay extra attention to the split of the dataset. In the two-speaker case, the task of the model is usually to predict which one of the two speakers is the attended one and which one is the unattended one. When recording the EEG, the measurement is usually spread out over multiple trials. In each trial, the subjects have to pay attention to one of the two speakers. Then, in the next trial, they have to pay attention to the other speaker, to generate a balanced dataset. Translated into output labels, this means that there is usually one label per trial, e.g., \textit{left speaker} or \textit{right speaker}; a minimal sketch of a split that respects these trial boundaries is shown below and motivated in the following paragraphs.
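The sketch assumes NumPy arrays and a list of per-trial (EEG segments, labels) pairs; the variable names and fold sizes are illustrative.
\begin{verbatim}
# Sketch of a trial-wise split: whole trials are held out for validation
# and testing, so segments from the same trial never appear in both the
# training and the test set.  `trials` is a list of (eeg_segments, labels).
import numpy as np

def trial_wise_split(trials, n_val=1, n_test=1, seed=0):
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(trials))
    test_idx  = order[:n_test]
    val_idx   = order[n_test:n_test + n_val]
    train_idx = order[n_test + n_val:]

    def gather(indices):
        X = np.concatenate([trials[i][0] for i in indices])
        y = np.concatenate([trials[i][1] for i in indices])
        return X, y

    return gather(train_idx), gather(val_idx), gather(test_idx)
\end{verbatim}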
\\ A common way to split these datasets into training, validation and test sets is to split each trial into, e.g., 80/10/10\,\% training/validation/testing. While this might not seem problematic, it allows models to overfit the training data. As there is only one label per trial (left/right), the model might learn to identify the trial from which the segment of EEG was taken, rather than solving the auditory attention task. If the validation and test sets are taken from within the same trial, they have the same correct label, and information from the training set can leak into the validation and test sets. This leads to models that seemingly perform great, but are unable to generalize and do not score well on unseen trials. To prevent this, we propose to always use held-out trials for the test set. If the dataset contains 10 trials, 8 could be used for training, 1 for validation, and 1 for testing. Since the trial used for testing is never seen by the model before, the model cannot rely on identifying which trial the EEG segment is taken from and has to learn to identify the underlying speaker information. \\ To demonstrate the necessity of this split, we conducted experiments with two different splits of the dataset and show that this leads to substantially different results.\\ We use the model proposed by \cite{Vandecappelle2021} and follow the training procedure, using two different splits of the dataset. In the first experiment, we split each trial of 6 minutes into an 80:10:10 training/validation/test partition. In the second experiment, we implement a 4-fold cross-validation scheme. Each fold contains unique stories, ensuring that the stories seen in training do not occur in the test set. Looking at Table~\ref{tab:Das2019_dataset}, we divide the folds as (trial1, trial5), (trial2, trial6), (trial3, trial7) and (trial4, trial8). The results of both experiments can be seen in Figure~\ref{fig:aad_crossval}. The average accuracy of the first experiment for segments with a window length of 1 second is 76.25~\%, while the average accuracy of the second experiment does not exceed 51.10~\%, showing the need for a trial-wise cross-validation scheme when applying deep learning models to the auditory attention paradigm. The leave-one-story+speaker-out method was tested by \cite{Vandecappelle2021}, and strong overfitting effects were found when within-trial splitting was performed. \begin{table} \centering \begin{tabular}{c|c|c|c} Trial & Left stimulus & Right stimulus & Attended side \\ \hline 1 & story1, part1 & story2, part1 & Left \\ 2 & story2, part2 & story1, part2 & Right \\ 3 & story3, part1 & story4, part1 & Left \\ 4 & story4, part2 & story3, part2 & Right \\ 5 & story2, part1 & story1, part1 & Left \\ 6 & story1, part2 & story2, part2 & Right \\ 7 & story4, part1 & story3, part1 & Left \\ 8 & story3, part2 & story4, part2 & Right \\ \end{tabular} \caption{Example division for the Das2019 dataset for 1 subject. Between subjects, the attended direction is alternated.} \label{tab:Das2019_dataset} \end{table} \begin{figure} \centering \includegraphics[width=\linewidth]{figures/aad_crossval_acc.pdf} \caption{Results for training the model from \cite{Vandecappelle2021} using different training/test splits. Box plots are shown over 16 subjects. Each point in the boxplot corresponds to the auditory attention detection accuracy for one subject, averaged over all segments. Split within trials: each trial of 6 minutes is split into 80/10/10 training/validation/test.
Split between trials: out of the 8 trials per subject, 6 are used for training, 1 for validation and 1 for testing. } \label{fig:aad_crossval} \end{figure} \subsection{Benchmarking model evaluation using public datasets} \label{public_dataset} For multiple speech sources (here auditory attention decoding), some datasets are available \citep{das_neetha_2019_3377911, fuglsang2017}, while for a single speech source only a subset of the \cite{fuglsang2017} dataset and the datasets from \cite{BRODERICK2018803} and \cite{weissbart_hugo_2022_7086168} can be used. This lack of data has encouraged researchers to collect their own datasets to train and evaluate models. This is sub-optimal for several reasons.\\ On the one hand, EEG data collection is expensive and time-consuming, leading to small datasets, which hinders the use of large deep learning models. Additionally, the high cost of collecting new EEG data creates a high barrier to entry for researchers in the field. On the other hand, the lack of a larger public dataset makes it difficult to benchmark the models, and training and evaluating on a single specific dataset can result in overfitting and a lack of generalizability.\\ \noindent A potential point of improvement for most of the papers in this review is to evaluate the developed architectures on datasets recorded with various EEG devices (e.g., different numbers and locations of electrodes) and experimental set-ups (e.g., varying listening circumstances). We could thus not properly evaluate the generalizability of the models reported here. To illustrate good practice, some generalization experiments were conducted in \cite{Accou2023}: the authors trained a model on their dataset and evaluated it on a publicly available dataset \citep{fuglsang2017}.\\ \noindent Considering the above-stated issues, the solution of sharing data publicly seems straightforward. However, sharing EEG data is complicated due to its biological nature. In many countries, participants have to agree explicitly to their data being shared (anonymized/pseudonymized) in a publicly available dataset.\\ \noindent Therefore, we encourage research groups to work towards establishing a common dataset to facilitate model comparison. This would save considerable time and provide a good control for possible pitfalls in recording, pre-processing, or model evaluation. As a comparison, most deep learning models in automatic speech recognition are evaluated on shared datasets and with common error measures (e.g., the LibriSpeech ASR corpus from \cite{Librispeech}). \subsection{Interpretation of correlation scores}\label{windowSize} When decoding continuous speech features such as the envelope from EEG, reconstruction quality is often estimated by correlating the reconstructed speech envelope with the presented stimulus envelope. In many papers, this correlation metric is reported as a measure of the performance of the model being used. However, the significance of this correlation depends on the window size, dataset and preprocessing techniques used. Shorter windows lead to more variability in correlation scores, changing the significance level.\\ To show this experimentally, linear decoders \citep{Crosse2016} with integration windows of 250~ms were trained in a 5-fold cross-validation setup on each recording of the 48-subject dataset separately. These decoders were evaluated on the test fold by randomly sampling 1000 windows for each predefined length (0.344, 0.5, 1, 2, 5, 10, 30, 60 seconds) with replacement, yielding 1000 Pearson r scores per window size.
The predicted envelope segments were randomly paired with the stimulus envelope segments before correlating, to construct a null distribution. The null distribution scores were statistically compared to the actual scores using a Wilcoxon rank-sum test with Holm-Bonferroni correction for multiple comparisons. Histograms of both null and actual prediction scores for one representative example subject are shown in Figure \ref{fig:linear_decoder_distributions}.\\ In total, significant differences from the null distribution were found for 0, 3, 19, 29, 38, 45, 48 and 47 out of 48 subjects for window sizes of 0.344, 0.5, 1, 2, 5, 10, 30 and 60 seconds, respectively. Figure \ref{fig:std_pearson} shows the relationship between window size and standard deviation for both the prediction and null distributions. The standard deviation decreases approximately as the inverse square root of the window size. This finding highlights the importance of evaluating the null distribution systematically, taking into account the impact of window size and data quantity on the standard deviation of correlation scores.\\ \begin{figure} \centering \includegraphics[width=\linewidth]{figures/std_pearson.pdf} \caption{Standard deviations of the distributions of actual prediction scores and null distribution scores for linear decoders trained and evaluated on the 48-subject dataset. The null distribution scores were obtained by permuting the prediction and stimulus envelope pairs before correlating. An inverse square root relationship is observed between standard deviation and window size.} \label{fig:std_pearson} \end{figure} Careful consideration is required when interpreting significant correlations. Studies commonly filter EEG data into separate bands (e.g., the delta band [0.5-4~Hz], the theta band [4-8~Hz], etc.). These bands have been linked to different processing stages (e.g., \cite{Etard2019}). When filtering data, caution has to be applied when filtering the target signal (i.e., EEG in forward models, speech features in backward models), as this directly influences the difficulty of the task (e.g., a narrowly band-passed low-frequency target signal is easier to predict than a broadband target signal), possibly making the task trivial. This also complicates using correlation scores as a metric for general model performance, as some models might perform well using broadband EEG/stimulus features (e.g., \cite{Accou2021PredictingSI}), while others might benefit from more narrowband features (e.g., linear decoders; \cite{Vanthornhout2018}). Finally, auditory EEG datasets are often recorded with varying equipment, varying methodologies and different languages of both stimuli and listeners, which can significantly influence the obtained correlation scores and thus make correlation scores unfit for comparing model performance across datasets. When comparing model performance, models should be trained and evaluated on the same data (preprocessed in the same way) for each model to enable a fair comparison.\\
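For reference, the windowed evaluation and permutation null described above can be sketched in a few lines; the array names and sampling details below are illustrative and not the exact code used for Figure \ref{fig:std_pearson}.
\begin{verbatim}
# Sketch: sample windows of a given length with replacement, compute
# Pearson r for matched windows and for randomly re-paired (null) windows.
import numpy as np
from scipy.stats import pearsonr

def windowed_scores(pred_env, true_env, win_len, n_windows=1000, seed=0):
    rng = np.random.default_rng(seed)
    starts = rng.integers(0, len(true_env) - win_len, size=n_windows)
    actual = [pearsonr(pred_env[s:s + win_len], true_env[s:s + win_len])[0]
              for s in starts]
    # Null distribution: pair predictions with mismatched stimulus windows.
    shuffled = rng.permutation(starts)
    null = [pearsonr(pred_env[s:s + win_len], true_env[t:t + win_len])[0]
            for s, t in zip(starts, shuffled)]
    return np.array(actual), np.array(null)
\end{verbatim}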
\subsection{Model generalization to unseen subjects}\label{SI_model} Subject-independent models are particularly attractive as they can cope with the vast diversity of the EEG and speech signals.\\ Ideally, they do not require training data from new subjects, and are therefore particularly useful in practical applications, for instance in diagnostics of hearing \citep{Vanthornhout2018, Accou2021PredictingSI, GILLIS2022}.\\ Across subjects, the EEG cap placement can vary, and so does the brain activity. Training models on multiple subjects enables the model to learn these differences. The same remark applies to different EEG systems with different densities and locations of electrodes, or to different experimental protocols.\\ The performance of subject-independent models, especially on subjects not seen during training, depends on the training data. To illustrate this, the LSTM model of \cite{Monesi2020} is trained on 1 to 28 subjects of the 48-subject dataset, and evaluated on the test set of the 20 remaining subjects. The results are displayed in Figure \ref{fig:lstm_learning_curve}. With this analysis, we show that, given the model and the collected dataset, the performance reaches a plateau at 28 subjects. This control makes sure that the model performance is maximized for the amount of training data available.\\ \begin{figure} \centering \includegraphics[width=\linewidth]{figures/learning_curve_lstm.pdf} \caption{The LSTM model of \cite{Monesi2020} trained on 1-28 subjects of the 48-subject dataset, and evaluated on the test set of the remaining 20 subjects. Each point in the boxplot corresponds to the match/mismatch accuracy for one subject, averaged over recordings. The function $y = \frac{50}{\tanh(\frac{x}{a})^{-b}}+50$ was used to establish a relationship between the number of subjects seen in training and the match/mismatch accuracy on the holdout subjects. After fitting, the function allows predicting the match/mismatch accuracy for any given number of subjects in the training set. Note that these predictions are merely indicative and dependent on specific characteristics of the collected data and model architecture.} \label{fig:lstm_learning_curve} \end{figure}
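The saturating curve in Figure \ref{fig:lstm_learning_curve} can be fitted with standard tools; the sketch below uses SciPy's curve_fit, with synthetic placeholder accuracies standing in for the real per-subject-count results, so only the functional form is taken from the caption above.
\begin{verbatim}
# Sketch: fit y = 50 / tanh(x/a)^(-b) + 50 = 50 * tanh(x/a)**b + 50 to
# match/mismatch accuracies (%) versus number of training subjects.
# The accuracies below are synthetic placeholders, not measured data.
import numpy as np
from scipy.optimize import curve_fit

def accuracy_curve(x, a, b):
    return 50.0 * np.tanh(x / a) ** b + 50.0

n_subjects = np.arange(1, 29)
rng = np.random.default_rng(0)
accuracy = accuracy_curve(n_subjects, 6.0, 0.8) + rng.normal(0, 0.5, 28)

(a, b), _ = curve_fit(accuracy_curve, n_subjects, accuracy, p0=(5.0, 1.0))
print(accuracy_curve(50, a, b))  # extrapolated accuracy for 50 subjects
\end{verbatim}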
\subsection{Negative samples selection in MM task}\label{MM_negative_sample} When training models on the MM task, the choice of the mismatched segments (negative samples) is important to obtain a model that generalizes well. The main idea is that the negative samples should be what is called ``hard negatives'' in the deep learning and contrastive learning literature \citep{Oord2018RepresentationLW}. The negative samples should be challenging enough to force the model to learn useful information about the positive samples themselves (here the matched segment), rather than only learning to distinguish between positive and negative samples. For example, if we sample the mismatched segments from white noise or any signal that has a very different distribution than that of the positive samples, then a model with enough parameters and capacity will not learn the relation between EEG and speech but will rather learn that there is a difference between positive and negative speech samples.\\ \noindent It has been shown on speech data that taking negative samples from the same sequence or the same speaker yields the best accuracy in a phone classification task \citep{robinson_contrastive_2021}. This is in line with the theory of using hard negatives to train contrastive models, as mentioned above. As a result, to train a model on match/mismatch that can relate EEG to speech, it is better to sample mismatched segments from the same sequence the matched segments are drawn from, such that the matched and mismatched segments have a similar distribution. We would even take this one step further and recommend designing the training in such a way that the mismatched segments also appear as matched segments with other EEG segments (see Figure \ref{fig:mismatch_samples}), such that the only way to determine whether a candidate speech segment is matched or mismatched is to use the corresponding EEG segment.\\ \noindent In our work \citep{Monesi2020, Accou2021ModelingTR}, we have used mismatched segments from the same speech sequence. More specifically, we selected the mismatched segment 1 second after the end of the matched segment. We have also tried taking mismatched segments from the past instead of the future, which led to the same match-mismatch classification performance. In our setup, we make sure that the mismatched segments are temporally close enough to the matched segments. But most importantly, we make sure that a mismatched segment is also a matched segment with another EEG segment, as mentioned above. Finally, we report the results of two experiments to illustrate the importance of the above-mentioned points when training the LSTM-based model on the MM task. In the first experiment, we designed our matched/mismatched segments in a way that violated the setup in Figure \ref{fig:mismatch_samples}, such that the mismatched segments were never exactly matched segments. More specifically, we used 65 time samples (instead of 64) as the spacing between the end of the matched and the start of the mismatched segment, in combination with a window shift of 64 time samples (one second). As a result, matched segments overlapped with mismatched segments, but mismatched segments were never exactly matched segments of other EEG segments, thus resulting in two different distributions for matched and mismatched segments. We used the 48-subject dataset where subjects listened to 8 stories. Each recording was split into training, validation, and test sets using 80\%, 10\%, and 10\% ratios, respectively. The training set comprised 40\% from the start and 40\% from the end of the recording, and the remaining 20\% was further split into validation and test sets. As shown in Figure \ref{fig:unique_mismatch}, the model performs poorly when mismatched segments are never matched segments. Note that the dataset contains only around 2.5 hours of unique speech. As a result, the model succeeds in remembering the matched/mismatched speech segments of the training set (the training accuracy is around 90\%) instead of relating them to the EEG. In the second experiment, we took our mismatched speech candidates from another story. The other story was randomly chosen from the set of (normally 7) other stories available for the subject. We compare the results with our proposed default setup, in which we take mismatched speech candidates from the same speech sequence. For each training scenario, we evaluated the trained model on both setups. When we choose mismatched segments from another story, the model does not generalize well to unseen data (52\% classification accuracy), as it even performs poorly (53\% classification accuracy) on the same setup it was trained on. On the other hand, we observe that the model trained in our recommended setup, in which mismatched speech segments are taken from the same story, performs well both on the setup it was trained on (84\% classification accuracy) and on the new setup (84\% classification accuracy). This implies that the model has learned to find the relation between EEG and the story (stimulus).\\
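A minimal sketch of this default negative-sampling scheme is given below. The 64~Hz sampling rate is implied by the 64-samples-per-second statement above, while the window length and variable names are illustrative assumptions.
\begin{verbatim}
# Sketch: for each matched (EEG, speech) window, the mismatched speech
# window starts a fixed offset (1 s = 64 samples at the assumed 64 Hz)
# after the end of the matched window.  With a window shift equal to the
# offset, mismatched windows coincide with matched windows of later
# positions (except at the very end of the recording).
import numpy as np

def make_mm_triplets(eeg, env, win=320, shift=64, offset=64):
    # eeg: (time, channels); env: (time,); win=320 samples = 5 s at 64 Hz
    triplets = []
    for start in range(0, len(env) - (2 * win + offset), shift):
        eeg_seg    = eeg[start:start + win]
        matched    = env[start:start + win]
        mm_start   = start + win + offset
        mismatched = env[mm_start:mm_start + win]
        triplets.append((eeg_seg, matched, mismatched))
    return triplets
\end{verbatim}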
\begin{figure}[htbp!] \begin{minipage}[b]{1\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{figures/mismatch_samples.png}} \end{minipage} % \caption{Illustration of the choice of mismatched segments in the match-mismatch task. Two important points to consider: 1. The mismatched segments are taken from the same sequence. 2. The mismatched segments are also matched segments (and vice versa), depending on the EEG segment. For example, the speech of segment 2 is the mismatched candidate for the EEG of segment 1, but it is the matched speech candidate for the EEG of segment 2.} \label{fig:mismatch_samples} % \end{figure} \begin{figure}[htb] \begin{minipage}[b]{1\linewidth} \centering \centerline{\includegraphics[width=\textwidth]{figures/unique_mismatch.pdf}} \end{minipage} % \caption{Classification accuracy of the LSTM-based model in the match-mismatch task. Box plots are shown over 48 subjects. \textbf{Our recommended setup}: our proposed training setup, in which a mismatched segment is also a matched segment with another EEG segment. \textbf{Mismatched speech is never matched speech}: in this setup, a mismatched speech segment is never exactly a matched segment. As a result, some speech segments are only ever matched and others only ever mismatched.} \label{fig:unique_mismatch} % \end{figure} \section{Conclusions} We gave an overview of methods to relate EEG to continuous speech using deep learning models. Although many different network types have been implemented across studies, there is no consensus on which one gives the best performance. Performance is difficult to compare across studies, as most research groups use their own datasets (e.g., EEG device, participants) and training paradigms. As we suspected many cases of overfitting, we suggested guidelines to make the performance evaluation less biased and more comparable across studies. \\ The first point addressed the importance of the training, validation and test set selection. We demonstrated with an experiment that, in multiple speech source paradigms, the split must not be done within trials but between them. Several studies we reviewed used a within-trial split and report unreasonably high decoding accuracies, possibly because the model remembers each trial's label when the split is done within trials.\\ We then addressed the need to use and share public datasets, to encourage researchers to improve models and to have a common general evaluation benchmark to do so. Gathering diverse data is also necessary to make models more generalizable across devices or experimental setups. We propose to proceed similarly to ASR and computer vision research by gathering large public datasets rather than working separately on small personal datasets.\\ In decoding tasks, correlation is used to measure the model's performance. The significance of correlations depends on the window size, dataset, and pre-processing techniques used, and should therefore be analyzed carefully. We show that a decrease in window size leads to an increase in the variance of the prediction distribution, which can result in high correlations that are not statistically significant. Therefore, it is important to carefully consider the correlation values obtained with a given model and to compare them to the corresponding null distribution.\\ Particularly in backward models, the correlation values obtained strongly depend on how the target speech signal is pre-processed (e.g., which frequency band is filtered).
We addressed the need to train and evaluate backward models on data with similar pre-processing for a fair comparison.\\ We also encourage the use of subject-independent models because, when trained on a sufficient amount of data, they can cope with dataset diversity due to, e.g., EEG devices, protocols, brain anatomy or speech content. We expect deep learning models to generalize, and researchers to test their ability to do so, notably by evaluating models on other datasets or by ensuring they were trained on enough data to reach their optimal performance. As an example experiment, we characterized an LSTM model's performance as a function of the number of subjects included in the training.\\ Finally, we highlighted the importance of negative sample selection in the match-mismatch task. With such temporally auto-correlated signals, the difficulty of a match-mismatch task is also defined by the negative sample selection. Hence, two important characteristics of the negative sample selection are that the mismatched segment is taken from the same speech sequence as the matched segment, and that each mismatched speech segment is also a matched speech segment with another EEG segment. These two points constrain the model to use the EEG data it is given, ensuring the model cannot find the matching segment solely from the speech data.\\ \section{Acknowledgements} Funding was provided by the KU Leuven Special Research Fund C24/18/099 (C2 project to Tom Francart and Hugo Van hamme), FWO fellowships to Bernd Accou (1S89622N), Corentin Puffay (1S49823N), Lies Bollens (1SB1423N) and Jonas Vanthornhout (1290821N). \bibliographystyle{plainnat}
\section{Introduction} \IEEEPARstart{W}{elcome} to the updated and simplified documentation for using the IEEEtran \LaTeX \ class file. The IEEE has examined hundreds of author submissions using this package to help formulate this easy-to-follow guide. We will cover the most commonly used elements of a journal article. For less common elements we will refer back to the ``IEEEtran\_HOWTO.pdf''. This document applies to version 1.8b of IEEEtran. The IEEEtran template package contains the following example files: \begin{list}{}{} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{list} These are ``bare bones'' templates to quickly understand the document structure. It is assumed that the reader has a basic working knowledge of \LaTeX. Those who are new to \LaTeX \ are encouraged to read Tobias Oetiker's ``The Not So Short Introduction to \LaTeX '', available at: \url{http://tug.ctan.org/info/lshort/english/lshort.pdf}, which provides an overview of working with \LaTeX. \section{The Design, Intent and \\ Limitations of the Templates} \noindent The templates are intended to {\bf{approximate the final look and page length of the articles/papers}}. Therefore, {\bf{they are NOT intended to be the final produced work that is displayed in print or on IEEEXplore\textsuperscript{\textregistered}}}. They will help to give the authors an approximation of the number of pages that will be in the final version. The structure of the \LaTeX\ files, as designed, enables easy conversion to XML for the composition systems used by the IEEE's outsource vendors. The XML files are used to produce the final print/IEEEXplore\textsuperscript{\textregistered} pdf and then converted to HTML for IEEEXplore\textsuperscript{\textregistered}. Have you looked at your article/paper in the HTML version? \section{\LaTeX \ Distributions: Where to Get Them} \noindent IEEE recommends using the distribution from the \TeX\ User Group at \url{http://www.tug.org}. You can join TUG and obtain a DVD distribution or download for free from the links provided on their website: \url{http://www.tug.org/texlive/}. The DVD includes distributions for Windows, Mac OS X and Linux operating systems. \section{Where to get the IEEEtran Templates} \noindent The {\bf{IEEE Template Selector}} will always have the most up-to-date versions of the \LaTeX\ and MSWord templates. Please see: \url{https://template-selector.ieee.org/} and follow the steps to find the correct template for your intended publication. Many publications use the IEEEtran \LaTeX\ templates; however, some publications have their own special templates. Many of these are based on IEEEtran, but may have special instructions that vary slightly from those in this document. \section{Where to get \LaTeX \ help - user groups} \noindent The following on-line groups are very helpful to beginning and experienced \LaTeX\ users. A search through their archives can provide many answers to common questions. \begin{list}{}{} \item{\url{http://www.latex-community.org/}} \item{\url{https://tex.stackexchange.com/} } \end{list} \section{Document Class Options in IEEEtran} \noindent At the beginning of your \LaTeX\ file you will need to establish what type of publication style you intend to use. The following list shows appropriate documentclass options for each of the types covered by IEEEtran.
\begin{list}{}{} \item{Regular Journal Article} \item{{\tt{$\backslash$documentclass[journal]{IEEEtran}}}}\\ \item{{Conference Paper}} \item{{\tt{$\backslash$documentclass[conference]{IEEEtran}}}}\\ \item{Computer Society Journal Article} \item{{\tt{$\backslash$documentclass[10pt,journal,compsoc]{IEEEtran}}}}\\ \item{Computer Society Conference Paper} \item{{\tt{$\backslash$documentclass[conference,compsoc]{IEEEtran}}}}\\ \item{{Communications Society Journal Article}} \item{{\tt{$\backslash$documentclass[journal,comsoc]{IEEEtran}}}}\\ \item{{Brief, Correspondence or Technote}} \item{{\tt{$\backslash$documentclass[9pt,technote]{IEEEtran}}}} \end{list} There are other options available for each of these when submitting for peer review or other special requirements. IEEE recommends composing your article in the base 2-column format to make sure all your equations, tables and graphics will fit the final 2-column format. Please refer to the document ``IEEEtran\_HOWTO.pdf'' for more information on settings for peer review submission if required by your EIC. \section{How to Create Common Front Matter} \noindent The following sections describe general coding for these common elements. Computer Society publications and Conferences may have their own special variations and will be noted below. \subsection{Paper Title} \noindent The title of your paper is coded as: \begin{verbatim} \title{The Title of Your Paper} \end{verbatim} \noindent Please try to avoid the use of math or chemical formulas in your title if possible. \subsection{Author Names and Affiliations} \noindent The author section should be coded as follows: \begin{verbatim} \author{Masahito Hayashi \IEEEmembership{Fellow, IEEE}, Masaki Owari \thanks{M. Hayashi is with Graduate School of Mathematics, Nagoya University, Nagoya, Japan} \thanks{M. Owari is with the Faculty of Informatics, Shizuoka University, Hamamatsu, Shizuoka, Japan.} } \end{verbatim} Be sure to use the $\backslash$IEEEmembership command to identify IEEE membership status. Please see the ``IEEEtran\_HOWTO.pdf'' for specific information on coding authors for Conferences and Computer Society publications. Note that the closing curly brace for the author group comes at the end of the thanks group. This will prevent you from creating a blank first page. \subsection{Running Heads} \noindent The running heads are declared by using the $\backslash${\tt{markboth}} command. There are two arguments to this command: the first contains the journal name information and the second contains the author names and paper title. \begin{verbatim} \markboth{Journal of Quantum Electronics, Vol. 1, No. 1, January 2021} {Author1, Author2, \MakeLowercase{\textit{(et al.)}}: Paper Title} \end{verbatim} \subsection{Copyright Line} \noindent For Transactions and Journals papers, it is not necessary to include this at the submission stage of your paper. The IEEE production process will add the appropriate copyright line. If you are writing a conference paper, please see the ``IEEEtran\_HOWTO.pdf'' for specific information on how to code ``Publication ID Marks''. \subsection{Abstracts} \noindent The abstract is the first element of a paper after the $\backslash${\tt{maketitle}} macro is invoked. The coding is simply: \begin{verbatim} \begin{abstract} Text of your abstract. \end{abstract} \end{verbatim} Please try to avoid mathematical and chemical formulas in the abstract. \subsection{Index Terms} \noindent The index terms are used to help other researchers discover your paper.
Each society may have its own keyword set. Contact the EIC of your intended publication for this list. \begin{verbatim} \begin{IEEEkeywords} Broad band networks, quality of service \end{IEEEkeywords} \end{verbatim} \section{How to Create Common Body Elements} \noindent The following sections describe common body text elements and how to code them. \subsection{Initial Drop Cap Letter} \noindent The first text paragraph uses a ``drop cap'' followed by the first word in ALL CAPS. This is accomplished by using the $\backslash${\tt{IEEEPARstart}} command as follows: \begin{verbatim} \IEEEPARstart{T}{his} is the first paragraph of your paper. . . \end{verbatim} \subsection{Sections and Subsections} \noindent Section headings use standard \LaTeX\ commands: $\backslash${\tt{section}}, $\backslash${\tt{subsection}} and $\backslash${\tt{subsubsection}}. Numbering is handled automatically for you and varies according to type of publication. It is common to not indent the first paragraph following a section head by using $\backslash${\tt{noindent}} as follows: \begin{verbatim} \section{Section Head} \noindent The text of your paragraph . . . \end{verbatim} \subsection{Citations to the Bibliography} \noindent Citations are coded with the \LaTeX\ $\backslash${\tt{cite}} command. This will produce individual bracketed reference numbers in the IEEE style. At the top of your \LaTeX\ file you should include: \begin{verbatim} \usepackage{cite} \end{verbatim} For a single citation code as follows: \begin{verbatim} see \cite{ams} \end{verbatim} This will display as: see \cite{ams}\\ For multiple citations code as follows: \begin{verbatim} \cite{ams,oxford,lacomp} \end{verbatim} This will display as \cite{ams,oxford,lacomp} \subsection{Figures} \noindent Figures are coded with the standard \LaTeX\ commands as follows: \begin{verbatim} \begin{figure}[!t] \centering \includegraphics[width=2.5in]{fig1} \caption{This is the caption for one fig.} \label{fig1} \end{figure} \end{verbatim} The [!t] argument enables the float to be placed at the top of the page, following IEEE style. Make sure you include: \begin{verbatim} \usepackage{graphicx} \end{verbatim} \noindent at the top of your \LaTeX\ file with the other package declarations. To cross-reference your figures in the text use the following code example: \begin{verbatim} See figure \ref{fig1} ... \end{verbatim} This will produce:\\ See figure \ref{fig1} . . . \begin{figure}[!t] \centering \includegraphics[width=2.5in]{fig1} \caption{This is the caption for one fig.} \label{fig1} \end{figure} \subsection{Tables} \noindent Tables should be coded with the standard \LaTeX\ coding. The following example shows a simple table. \begin{verbatim} \begin{table} \begin{center} \caption{Filter design equations ...} \label{tab1} \begin{tabular}{| c | c | c |} \hline Order & Arbitrary coefficients & coefficients\\ of filter & $e_m$ & $b_{ij}$ \\ \hline 1& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$\\ \hline 2&$\beta_{22}=(~1,-1,-1,~~1,~~1,~~1)$ &\\ \hline 3& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$,\\ \hline \end{tabular} \end{center} \end{table} \end{verbatim} To reference the table in the text, code as follows: \begin{verbatim}Table~\ref{tab1} lists the closed-form...\end{verbatim} to produce: Table~\ref{tab1} lists the closed-form . . .
\begin{table} \begin{center} \caption{A Simple Table Example.} \label{tab1} \begin{tabular}{| c | c | c |} \hline Order & Arbitrary coefficients & coefficients\\ of filter & $e_m$ & $b_{ij}$ \\ \hline 1& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$\\ \hline 2&$\beta_{22}=(~1,-1,-1,~~1,~~1,~~1)$ &\\ \hline 3& $b_{ij}=\hat{e}.\hat{\beta_{ij}}$, & $b_{00}=0$,\\ \hline \end{tabular} \end{center} \end{table} \subsection{Lists} \noindent In this section, we will consider three types of lists: simple unnumbered, numbered and bulleted. There have been numerous options added to IEEEtran to enhance the creation of lists. If your lists are more complex than those shown below, please refer to the ``IEEEtran\_HOWTO.pdf'' for additional options.\\ \noindent{\bf A plain unnumbered list} \begin{list}{}{} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{list} \noindent coded as: \begin{verbatim} \begin{list}{}{} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{list} \end{verbatim} \noindent{\bf A simple numbered list} \begin{enumerate} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{enumerate} \noindent coded as: \begin{verbatim} \begin{enumerate} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{enumerate} \end{verbatim} \noindent{\bf A simple bulleted list} \begin{itemize} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{itemize} \noindent coded as: \begin{verbatim} \begin{itemize} \item{bare\_jrnl.tex} \item{bare\_conf.tex} \item{bare\_jrnl\_compsoc.tex} \item{bare\_conf\_compsoc.tex} \item{bare\_jrnl\_comsoc.tex} \end{itemize} \end{verbatim} \subsection{Other Elements} \noindent For other less common elements such as Algorithms, Theorems and Proofs, and Floating Structures such as page-wide tables, figures or equations, please refer to the ``IEEEtran\_HOWTO.pdf'' section on ``Double Column Floats.'' \section{How to Create Common Back Matter Elements} \noindent The following sections demonstrate common back matter elements such as Acknowledgments, Bibliographies, Appendicies and Author Biographies. \subsection{Acknowledgments} \noindent This should be a simple paragraph before the bibliography to thank those individuals and institutions who have supported your work on this article. \begin{verbatim} \section{Acknowledgments} \noindent Text describing those who supported your paper. \end{verbatim} \subsection{Bibliographies} \noindent {\bf{References Simplified:}} A simple way of composing references is to use the $\backslash${\tt{bibitem}} macro to define the beginning of a reference as in the following examples:\\ \noindent [6] H. Sira-Ramirez. ``On the sliding mode control of nonlinear systems,'' \textit{Systems \& Control Letters}, vol. 19, pp. 303--312, 1992. \noindent coded as: \begin{verbatim} \bibitem{Sira3} H. Sira-Ramirez. ``On the sliding mode control of nonlinear systems,'' \textit{Systems \& Control Letters}, vol. 19, pp. 303--312, 1992. \end{verbatim} \noindent [7] A. 
Levant.``Exact differentiation of signals with unbounded higher derivatives,'' in \textit{Proceedings of the 45th IEEE Conference on Decision and Control}, San Diego, California, USA, pp. 5585--5590, 2006. \noindent coded as: \begin{verbatim}\bibitem{Levant} A. Levant. ``Exact differentiation of signals with unbounded higher derivatives,'' in \textit{Proceedings of the 45th IEEE Conference on Decision and Control}, San Diego, California, USA, pp. 5585--5590, 2006. \end{verbatim} \noindent [8] M. Fliess, C. Join, and H. Sira-Ramirez. ``Non-linear estimation is easy,'' \textit{International Journal of Modelling, Identification and Control}, vol. 4, no. 1, pp. 12--27, 2008. \noindent coded as: \begin{verbatim} \bibitem{Cedric} M. Fliess, C. Join, and H. Sira-Ramirez. ``Non-linear estimation is easy,'' \textit{International Journal of Modelling, Identification and Control}, vol. 4, no. 1, pp. 12--27, 2008. \end{verbatim} \noindent [9] R. Ortega, A. Astolfi, G. Bastin, and H. Rodriguez. ``Stabilization of food-chain systems using a port-controlled Hamiltonian description,'' in \textit{Proceedings of the American Control Conference}, Chicago, Illinois, USA, pp. 2245--2249, 2000. \noindent coded as: \begin{verbatim} \bibitem{Ortega} R. Ortega, A. Astolfi, G. Bastin, and H. Rodriguez. ``Stabilization of food-chain systems using a port-controlled Hamiltonian description,'' in \textit{Proceedings of the American Control Conference}, Chicago, Illinois, USA, pp. 2245--2249, 2000. \end{verbatim} \subsection{Accented Characters in References} \noindent When using accented characters in references, please use the standard LaTeX coding for accents. {\bf{Do not use math coding for character accents}}. For example: \begin{verbatim} \'e, \"o, \`a, \~e \end{verbatim} will produce: \'e, \"o, \`a, \~e \subsection{Use of BibTeX} \noindent If you wish to use BibTeX, please see the documentation that accompanies the IEEEtran Bibliography package. \subsection{Biographies and Author Photos} \noindent Authors may have options to include their photo or not. Photos should be a bit-map graphic (.tif or .jpg) and sized to fit in the space allowed. Please see the coding samples below: \begin{verbatim} \begin{IEEEbiographynophoto}{Jane Doe} Biography text here without a photo. \end{IEEEbiographynophoto} \end{verbatim} or a biography with a photo \begin{verbatim} \begin{IEEEbiography}[{\includegraphics [width=1in,height=1.25in,clip, keepaspectratio]{fig1.png}}] {IEEE Publications Technology Team} In this paragraph you can place your educational, professional background and research and other interests. \end{IEEEbiography} \end{verbatim} Please see the end of this document to see the output of these coding examples. \section{Mathematical Typography \\ and Why It Matters} \noindent Typographical conventions for mathematical formulas have been developed to {\bf provide uniformity and clarity of presentation across mathematical texts}. This enables the readers of those texts to both understand the author's ideas and to grasp new concepts quickly. While software such as \LaTeX \ and MathType\textsuperscript{\textregistered} can produce aesthetically pleasing math when used properly, it is also very easy to misuse the software, potentially resulting in incorrect math display. IEEE aims to provide authors with the proper guidance on mathematical typesetting style and assist them in writing the best possible article. As such, IEEE has assembled a set of examples of good and bad mathematical typesetting. 
You will see how various issues are dealt with. The following publications have been referenced in preparing this material: \begin{list}{}{} \item{\emph{Mathematics into Type}, published by the American Mathematical Society} \item{\emph{The Printing of Mathematics}, published by Oxford University Press} \item{\emph{The \LaTeX Companion}, by F. Mittelbach and M. Goossens} \item{\emph{More Math into LaTeX}, by G. Gr\"atzer} \item{AMS-StyleGuide-online.pdf, published by the American Mathematical Society} \end{list} Further examples can be seen at \url{http://journals.ieeeauthorcenter.ieee.org/wp-content/uploads/sites/7/IEEE-Math-Typesetting-Guide.pdf} \subsection{Display Equations} \noindent A simple display equation example shown below uses the ``equation'' environment. To number the equations, use the $\backslash${\tt{label}} macro to create an identifier for the equation. LaTeX will automatically number the equation for you. \begin{equation} \label{deqn_ex1} x = \sum_{i=0}^{n} 2{i} Q. \end{equation} \noindent is coded as follows: \begin{verbatim} \begin{equation} \label{deqn_ex1} x = \sum_{i=0}^{n} 2{i} Q. \end{equation} \end{verbatim} To reference this equation in the text use the $\backslash${\tt{ref}} macro. Please see (\ref{deqn_ex1})\\ \noindent is coded as follows: \begin{verbatim} Please see (\ref{deqn_ex1})\end{verbatim} \subsection{Equation Numbering} \noindent {\bf{Consecutive Numbering:}} Equations within an article are numbered consecutively from the beginning of the article to the end, i.e., (1), (2), (3), (4), (5), etc. Do not use roman numerals or section numbers for equation numbering.\\ \noindent {\bf{Appendix Equations:}} The continuation of consecutively numbered equations is best in the Appendix, but numbering as (A1), (A2), etc., is permissible.\\ \noindent {\bf{Hyphens and Periods}}: Hyphens and periods should not be used in equation numbers, i.e., use (1a) rather than (1-a) and (2a) rather than (2.a) for sub-equations. This should be consistent throughout the article. \subsection{Multi-line equations and alignment} \noindent Here we show several examples of multi-line equations and proper alignments. \noindent {\bf{A single equation that must break over multiple lines due to length with no specific alignment.}} \begin{multline} \text{The first line of this example}\\ \text{The second line of this example}\\ \text{The third line of this example} \end{multline} \noindent is coded as: \begin{verbatim} \begin{multline} \text{The first line of this example}\\ \text{The second line of this example}\\ \text{The third line of this example} \end{multline} \end{verbatim} \noindent {\bf{A single equation with multiple lines aligned at the = signs}} \begin{align} a &= c+d \\ b &= e+f \end{align} \noindent is coded as: \begin{verbatim} \begin{align} a &= c+d \\ b &= e+f \end{align} \end{verbatim} The {\tt{align}} environment can align on multiple points as shown in the following example: \begin{align} x &= y & X & =Y & a &=bc\\ x' &= y' & X' &=Y' &a' &=bz \end{align} \noindent is coded as: \begin{verbatim} \begin{align} x &= y & X & =Y & a &=bc\\ x' &= y' & X' &=Y' &a' &=bz \end{align} \end{verbatim} \subsection{Subnumbering} \noindent The amsmath package provides a {\tt{subequations}} environment to facilitate subnumbering. 
An example: \begin{subequations}\label{eq:2} \begin{align} f&=g \label{eq:2A}\\ f' &=g' \label{eq:2B}\\ \mathcal{L}f &= \mathcal{L}g \label{eq:2c} \end{align} \end{subequations} \noindent is coded as: \begin{verbatim} \begin{subequations}\label{eq:2} \begin{align} f&=g \label{eq:2A}\\ f' &=g' \label{eq:2B}\\ \mathcal{L}f &= \mathcal{L}g \label{eq:2c} \end{align} \end{subequations} \end{verbatim} \subsection{Matrices} \noindent There are several useful matrix environments that can save you some keystrokes. See the example coding below and the output. \noindent {\bf{A simple matrix:}} \begin{equation} \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \end{equation} is coded as: \begin{verbatim} \begin{equation} \begin{matrix} 0 & 1 \\ 1 & 0 \end{matrix} \end{equation} \end{verbatim} \noindent {\bf{A matrix with parenthesis}} \begin{equation} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \end{equation} is coded as: \begin{verbatim} \begin{equation} \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} \end{equation} \end{verbatim} \noindent {\bf{A matrix with square brackets}} \begin{equation} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \end{equation} is coded as: \begin{verbatim} \begin{equation} \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \end{equation} \end{verbatim} \noindent {\bf{A matrix with curly braces}} \begin{equation} \begin{Bmatrix} 1 & 0 \\ 0 & -1 \end{Bmatrix} \end{equation} is coded as: \begin{verbatim} \begin{equation} \begin{Bmatrix} 1 & 0 \\ 0 & -1 \end{Bmatrix} \end{equation}\end{verbatim} \noindent {\bf{A matrix with single verticals}} \begin{equation} \begin{vmatrix} a & b \\ c & d \end{vmatrix} \end{equation} is coded as: \begin{verbatim} \begin{equation} \begin{vmatrix} a & b \\ c & d \end{vmatrix} \end{equation}\end{verbatim} \noindent {\bf{A matrix with double verticals}} \begin{equation} \begin{Vmatrix} i & 0 \\ 0 & -i \end{Vmatrix} \end{equation} is coded as: \begin{verbatim} \begin{equation} \begin{Vmatrix} i & 0 \\ 0 & -i \end{Vmatrix} \end{equation}\end{verbatim} \subsection{Arrays} \noindent The {\tt{array}} environment allows you some options for matrix-like equations. You will have to manually key the fences, but you'll have options for alignment of the columns and for setting horizontal and vertical rules. The argument to {\tt{array}} controls alignment and placement of vertical rules. A simple array \begin{equation} \left( \begin{array}{cccc} a+b+c & uv & x-y & 27\\ a+b & u+v & z & 134 \end{array}\right) \end{equation} is coded as: \begin{verbatim} \begin{equation} \left( \begin{array}{cccc} a+b+c & uv & x-y & 27\\ a+b & u+v & z & 134 \end{array} \right) \end{equation} \end{verbatim} A slight variation on this to better align the numbers in the last column \begin{equation} \left( \begin{array}{cccr} a+b+c & uv & x-y & 27\\ a+b & u+v & z & 134 \end{array}\right) \end{equation} is coded as: \begin{verbatim} \begin{equation} \left( \begin{array}{cccr} a+b+c & uv & x-y & 27\\ a+b & u+v & z & 134 \end{array} \right) \end{equation} \end{verbatim} An array with vertical and horizontal rules \begin{equation} \left( \begin{array}{c|c|c|r} a+b+c & uv & x-y & 27\\ \hline a+b & u+v & z & 134 \end{array}\right) \end{equation} is coded as: \begin{verbatim} \begin{equation} \left( \begin{array}{c|c|c|r} a+b+c & uv & x-y & 27\\ a+b & u+v & z & 134 \end{array} \right) \end{equation} \end{verbatim} Note the argument now has the pipe "$\vert$" included to indicate the placement of the vertical rules. 
\subsection{Cases Structures} \noindent Many times we find cases coded using the wrong environment, i.e., {\tt{array}}. Using the {\tt{cases}} environment will save keystrokes (from not having to type the $\backslash${\tt{left}}$\backslash${\tt{lbrace}}) and automatically provide the correct column alignment. \begin{equation*} {z_m(t)} = \begin{cases} 1,&{\text{if}}\ {\beta }_m(t) \\ {0,}&{\text{otherwise.}} \end{cases} \end{equation*} \noindent is coded as follows: \begin{verbatim} \begin{equation*} {z_m(t)} = \begin{cases} 1,&{\text{if}}\ {\beta }_m(t),\\ {0,}&{\text{otherwise.}} \end{cases} \end{equation*} \end{verbatim} \noindent Note that the ``\&'' is used to mark the tabular alignment. This is important to get proper column alignment. Do not use $\backslash${\tt{quad}} or other fixed spaces to try and align the columns. Also, note the use of the $\backslash${\tt{text}} macro for text elements such as ``if'' and ``otherwise''. \subsection{Function Formatting in Equations} In many cases there is an easy way to properly format most common functions. Use of the $\backslash$ in front of the function name will in most cases, provide the correct formatting. When this does not work, the following example provides a solution using the $\backslash${\tt{text}} macro. \begin{equation*} d_{R}^{KM} = \underset {d_{l}^{KM}} {\text{arg min}} \{ d_{1}^{KM},\ldots,d_{6}^{KM}\}. \end{equation*} \noindent is coded as follows: \begin{verbatim} \begin{equation*} d_{R}^{KM} = \underset {d_{l}^{KM}} {\text{arg min}} \{ d_{1}^{KM}, \ldots,d_{6}^{KM}\}. \end{equation*} \end{verbatim} \subsection{ Text Acronyms inside equations} \noindent This example shows where the acronym ``MSE" is coded using $\backslash${\tt{text\{\}}} to match how it appears in the text. \begin{equation*} \text{MSE} = \frac {1}{n}\sum _{i=1}^{n}(Y_{i} - \hat {Y_{i}})^{2} \end{equation*} \begin{verbatim} \begin{equation*} \text{MSE} = \frac {1}{n}\sum _{i=1}^{n} (Y_{i} - \hat {Y_{i}})^{2} \end{equation*} \end{verbatim} \subsection{Obsolete Coding} \noindent Avoid the use of outdated environments, such as {\tt{eqnarray}} and \$\$ math delimiters, for display equations. The \$\$ display math delimiters are left over from PlainTeX and should not be used in \LaTeX, ever. Poor vertical spacing will result. \subsection{Use Appropriate Delimiters for Display Equations} \noindent Some improper mathematical coding advice has been given in various YouTube\textsuperscript{TM} videos on how to write scholarly articles, so please follow these good examples:\\ For {\bf{single-line unnumbered display equations}}, please use the following delimiters: \begin{verbatim}\[ . . . \] or \end{verbatim} \begin{verbatim}\begin{equation*} . . . \end{equation*}\end{verbatim} Note that the * in the environment name turns off equation numbering.\\ For {\bf{multiline unnumbered display equations}} that have alignment requirements, please use the following delimiters: \begin{verbatim} \begin{align*} . . . \end{align*} \end{verbatim} For {\bf{single-line numbered display equations}}, please use the following delimiters: \begin{verbatim} \begin{equation} . . . \end{equation} \end{verbatim} For {\bf{multiline numbered display equations}}, please use the following delimiters: \begin{verbatim} \begin{align} . . . \end{align} \end{verbatim} \section{LaTeX Package Suggestions} \noindent Immediately after your documenttype declaration at the top of your \LaTeX\ file is the place where you should declare any packages that are being used. 
The following packages were used in the production of this document.
\begin{verbatim}
\usepackage{amsmath,amsfonts}
\usepackage{algorithmic}
\usepackage{array}
\usepackage[caption=false,font=normalsize,
labelfont=sf,textfont=sf]{subfig}
\usepackage{textcomp}
\usepackage{stfloats}
\usepackage{url}
\usepackage{verbatim}
\usepackage{graphicx}
\usepackage{balance}
\end{verbatim}
\section{Additional Advice}
Please use ``soft'' (e.g., \verb|\eqref{Eq}|) or \verb|(\ref{Eq})| cross references instead of ``hard'' references (e.g., \verb|(1)|). That will make it possible to combine sections, add equations, or change the order of figures or citations without having to go through the file line by line.
Please note that the \verb|{subequations}| environment in {\LaTeX} will increment the main equation counter even when there are no equation numbers displayed. If you forget that, you might write an article in which the equation numbers skip from (17) to (20), causing the copy editors to wonder if you've discovered a new method of counting.
{\BibTeX} does not work by magic. It doesn't get the bibliographic data from thin air but from .bib files. If you use {\BibTeX} to produce a bibliography you must send the .bib files.
{\LaTeX} can't read your mind. If you assign the same label to a subsubsection and a table, you might find that Table I has been cross referenced as Table IV-B3.
{\LaTeX} does not have precognitive abilities. If you put a \verb|\label| command before the command that updates the counter it's supposed to be using, the label will pick up the last counter to be cross referenced instead. In particular, a \verb|\label| command should not go before the caption of a figure or a table.
Please do not use \verb|\nonumber| or \verb|\notag| inside the \verb|{array}| environment. It will not stop equation numbers inside \verb|{array}| (there won't be any anyway) and it might stop a wanted equation number in the surrounding equation.
\balance
\section{A Final Checklist}
\begin{enumerate}{}{}
\item{Make sure that your equations are numbered sequentially and there are no equation numbers missing or duplicated. Avoid hyphens and periods in your equation numbering. Stay with IEEE style, i.e., (1), (2), (3) or for sub-equations (1a), (1b). For equations in the appendix (A1), (A2), etc.}.
\item{Are your equations properly formatted? Text, functions, alignment points in cases and arrays, etc. }
\item{Make sure all graphics are included.}
\item{Make sure your references are included either in your main LaTeX file or a separate .bib file if calling the external file.}
\end{enumerate}
\section{Introduction}
\label{intro}
\IEEEPARstart{I}{n} recent years, Convolutional Neural Networks (CNNs) have dominated a wide variety of vision tasks such as classification \cite{AlexNet,vgg,resnet,googlenet,liu2022ConvNeXt, yixing_tmm,ZhaoXBGXD21}, object detection \cite{fastrcnn,yolov3,ssd,retinanet,he2017mask} and semantic segmentation \cite{unet,DeepLabV3,fcn}, owing to the inductive bias of convolution operations, \textit{i.e.}, local connections and weight sharing. However, convolution only models local dependencies among pixels and thus largely ignores dependencies between distant pixels \cite{wang2018nonlocal}.
Inspired by sequence modeling tasks \cite{Brown2020Language,radford2018improving} in natural language processing (NLP)\cite{Vaswani2017atten,radford2018improving,Beltagy2020Longformer}, pioneer works \cite{dosovitskiy2020image,deit,Touvron2021deeper,chu2021cpe,MaGSMKCTYF21} introduce Transformers with long-range dependency modeling ability into computer vision, achieving exciting results in various vision tasks. \begin{figure}[t] \centering \vspace{-2.5em} \includegraphics[width= 1.1\linewidth]{FLOPs.pdf} \vspace{-3em} \caption{Performance comparisons with respect to FLOPs on ImageNet-1K classification. Without extra training data, our DilateFormer variants achieve comparable or even better performance with fewer FLOPs.} \vspace{-2em} \label{fig:flop_acc1} \end{figure} With global attention, the vanilla Vision Transformers (ViTs) \cite{dosovitskiy2020image,deit} can conduct dependency modeling between arbitrary image patches. However, the \textbf{global} attended receptive field of ViTs leads to quadratic computational cost, and modeling dependencies among all patches may be redundant for mainstream vision tasks. To reduce the computational cost and redundancy of global attention, some works \cite{liu2021swin,yang2021focal,zhang2021vil,chu2021twins,Hassani2022nat} introduce inductive bias explored in CNNs, performing \textbf{local} attention only in small neighborhoods. However, local attention naturally suffers from small attended receptive fields, which results in a lack of capability to model long-range dependencies. In this work, we explore an effective Vision Transformer to pursue a preferable trade-off between the computational complexity and the size of the attended receptive field. By analyzing the patch interaction of global attention in ViTs \cite{dosovitskiy2020image,deit}, we find that the attention matrix in shallow layers has two key properties, namely \textit{locality} and \textit{sparsity}. As shown in Figure \ref{atten_fig}, in the third attention block of ViT-Small, relevant patches are sparsely distributed in the neighborhood of the query patch. Such a locality and sparsity property indicates that distant patches in shallow layers are mostly irrelevant in semantics modeling for mainstream vision tasks, and thus there is much redundancy to be reduced in the costly global attention module. Based on the above analysis, we propose a Sliding Window Dilated Attention (SWDA) operation, which performs self-attention among patches sparsely selected in the surrounding field. To make further use of the information within the attended receptive field, we propose Multi-Scale Dilated Attention (MSDA), which simultaneously captures semantic dependencies at different scales. MSDA sets different dilation rates for different heads, enabling the ability of multi-scale representation learning. Following PVT \cite{wang2021pyramid} and Swin \cite{liu2021swin}, we adopt a pyramid architecture to develop a new effective Transformer model, namely Multi-Scale Dilated Transformer (DilateFormer), which stacks MSDA in shallow stages to capture low-level information and global Multi-Head Self-Attention \cite{dosovitskiy2020image,deit} in deeper stages to model high-level interaction. For model evaluation, we design variants of DilateFormer with different capacities and apply them to different vision tasks. 
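As a purely illustrative aside (this measurement is ours and not part of the paper's analysis pipeline), the locality and sparsity observation above can be quantified by measuring how much attention mass each query assigns to a small spatial neighborhood around itself; a minimal PyTorch sketch:
\begin{verbatim}
# Illustrative only: quantify how "local" a softmax attention map is.
import torch

def local_attention_mass(attn, grid, window=7):
    # attn: (N, N) attention of one head, N = grid * grid patches.
    # Returns the mean attention mass each query assigns to patches
    # within a (window x window) box centered on itself.
    coords = torch.stack(torch.meshgrid(
        torch.arange(grid), torch.arange(grid), indexing="ij"),
        dim=-1).reshape(-1, 2)
    dist = (coords[:, None, :] - coords[None, :, :]).abs().amax(-1)
    mask = (dist <= window // 2).float()   # Chebyshev neighborhood
    return (attn * mask).sum(-1).mean().item()

# Random stand-in for an attention map from a shallow ViT block
attn = torch.softmax(torch.randn(14 * 14, 14 * 14), dim=-1)
print(local_attention_mass(attn, grid=14, window=7))
\end{verbatim}
Shallow-layer attention maps such as those in Figure \ref{atten_fig} would be expected to score much higher than this random stand-in, which is exactly the locality and sparsity property exploited in the following sections.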
Experimental results show that our proposed DilateFormer outperforms state-of-the-art Vision Transformers \cite{deit,dosovitskiy2020image,liu2021swin,zhang2021vil,Hassani2022nat,yang2021focal} on various datasets across different model sizes. As depicted in Figure \ref{fig:flop_acc1}, we demonstrate the performance of our DilateFormers on ImageNet-1K classification task. Without extra training data, our Dilate-S (4.8 GFLOPs) achieves comparable performance with Swin-B (15.4 GFLOPs) \cite{liu2021swin} on ImageNet-1K using only 1/3 FLOPs. With the assistance of Token Labeling \cite{jiang2021tlt}, our DilateFormers achieve better performance than LV-ViTs \cite{jiang2021tlt} at different model sizes. Specifically, our Dilate-S$^{\star}$ (4.9 GFLOPs) and our Dilate-B$^{\star}$ (10.0 GFLOPs) achieve 83.9\% and 84.9\% respectively, surpassing LV-ViT-S \cite{jiang2021tlt} (6.6 GFLOPs) and LV-ViT-M \cite{jiang2021tlt}(16 GFLOPs). Besides, our Dilate-B achieves 85.6\% top-1 accuracy on ImageNet-1K classification \cite{deng2009large} task, 53.5\% box mAP/46.1\% mask mAP on COCO \cite{lin2014microsoft} object detection/instance segmentation task and 51.1\% MS mIoU on ADE20K \cite{zhou2017scene} semantic segmentation task. \begin{figure*}[t] \centering \vspace{-1em} \includegraphics[width=\linewidth]{attention_map.pdf} \vspace{-1.5em} \caption{Visualization of attention maps of the third Multi-Head Self-Attention block of ViT-Small\protect\footnotemark[1]. We visualize the activations in attention maps of the query patches (in the red box). The attention maps show that patches with high attention scores sparsely scatter around the query patch, and other patches have low attention scores. } \vspace{-1.5em} \label{atten_fig} \end{figure*} \stepcounter{footnote}\footnotetext{We use the official checkpoint from \url{https://github.com/google-research/vision_transformer}} \section{Related Work} \label{related_work} A comparison of technical details with various models is shown in Table \ref{tab:other_model}. We summarize and classify our DilateFormer and related vision transformer models from the perspectives of overlapping tokenizer/downsampler, positional embedding, attention type and multi-scale. In the following section, we detail some related works. \begin{table*}[t] \small \vspace{-1em} \begin{center} \captionsetup{justification=centering} \caption{\textsc{Comparison of technical details with other models. ``-'' indicates these modules do not exist. For overlapping tokenizer/downsampler, ``$\checkmark$'' and ``×'' indicate whether these modules are overlapping or not. For positional embedding, ``APE'', ``RPE'' and ``CPE'' indicate absolute positional embedding, relative positional embedding and convolutional positional embedding, respectively. 
For other technical details, ``$\checkmark$'' and ``×'' indicate these modules are used or not.}} \label{tab:other_model} \vspace{-0.5em} \resizebox{\textwidth}{!}{ \begin{tabular}{ccccccccc} \toprule \multirow{2}{*}{Model} & Overlapping & Overlapping & Positional & \multicolumn{3}{c}{Attention} & \multicolumn{2}{c}{Multi-scale} \\ & Tokenizer & Downsampler & Embedding & Local & Global & Sparse & Stage-level & Block-level \\ \midrule ViT\cite{dosovitskiy2020image}/DeiT\cite{deit} & × & - & APE & × & $\checkmark$ & × & × & × \\ PVT\cite{wang2021pyramid} & × & × & APE & × & $\checkmark$ & × & $\checkmark$ & × \\ Swin\cite{liu2021swin} & × & × & RPE & $\checkmark$ & × & × & $\checkmark$ & × \\ Twins\cite{chu2021twins} & × & × & CPE & $\checkmark$ & $\checkmark$ & × & $\checkmark$ & × \\ GG\cite{yu2021glance} & × & × & RPE/APE & × & × & $\checkmark$ & $\checkmark$ & × \\ Shuffle\cite{Huang2021shuffle} & $\checkmark$ & × & RPE & $\checkmark$ & × & $\checkmark$ & $\checkmark$ & × \\ MaxViT\cite{tu2022maxvit} & $\checkmark$ & $\checkmark$ & CPE & $\checkmark$ & × & $\checkmark$ & $\checkmark$ & × \\ CrossFormer\cite{Wang2021crossf} & $\checkmark$ & $\checkmark$ & RPE & $\checkmark$ & × & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ ViL\cite{zhang2021vil} & × & × & RPE/APE & $\checkmark$ & $\checkmark$ & × & $\checkmark$ & × \\ NAT\cite{Hassani2022nat} & $\checkmark$ & $\checkmark$ & RPE & $\checkmark$ & × & × & $\checkmark$ & × \\ Mobile-Former\cite{chen2021Mobileformer} & - & - & CPE & × & $\checkmark$ & × & $\checkmark$ & × \\ Conformer\cite{peng22021conformer} & $\checkmark$ & - & - & × & $\checkmark$ & × & $\checkmark$ & × \\ Shunted\cite{ren2021Shunted} & $\checkmark$ & $\checkmark$ & CPE & × & $\checkmark$ & × & $\checkmark$ & $\checkmark$ \\ MPViT\cite{ren2021Shunted} & $\checkmark$ & $\checkmark$ & CPE & × & $\checkmark$ & × & $\checkmark$ & $\checkmark$ \\ ViTAE\cite{xu2021vitae} & $\checkmark$ & - & APE & × & $\checkmark$ & × & $\checkmark$ & $\checkmark$ \\ UniFormer\cite{li2022uniformer} & $\checkmark$ & $\checkmark$ & CPE & × & $\checkmark$ & × & $\checkmark$ & × \\ Focal\cite{yang2021focal} & × & × & RPE & $\checkmark$ & $\checkmark$ & × & $\checkmark$ & $\checkmark$ \\ \rowcolor{gray!20} DilateFormer (ours) & $\checkmark$ & $\checkmark$ & CPE & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ \bottomrule \end{tabular} } \end{center} \vspace{-2em} \end{table*} \subsection{Global Attention in Vision Transformers} \label{global_atten} Inspired by the success in NLP \cite{Devlin2019bert,Vaswani2017atten,Brown2020Language}, the vanilla Vision Transformers (ViTs) \cite{dosovitskiy2020image,deit} directly apply self-attention mechanisms to patches split from images. 
By utilizing sufficient training data \cite{zhai2021scale,dosovitskiy2020image} and strong data augmentation strategies \cite{deit,zhang2018mixup,Yun2019cutmix,Szegedy2016smooth,Hoffer2020Repetition,Zhong2020Erasing}, Transformer-based methods \cite{wu2021rpe,yue2021psvit,yuan2021volo,zhang2022nested,pisltrc2022pan,ZhangZCHLJ22,LinYXYZL22,MaGSMKCTYF21} achieve exciting performance improvements on various vision tasks, \textit{i.e.}, image classification \cite{deit,liu2021swin,li2022uniformer,li2021ContextualTrans,dong2021cswin,peng22021conformer}, object detection \cite{zhang2021vil,dong2021cswin,Hassani2022nat,Carion2020tettrans,gao2021fastdetr,dai2021dynamichead,MaGSMKCTYF21,LiPLW21}, semantic segmentation \cite{ren2021Shunted,lee2021mpvit,chu2021twins,guo2021sotr,zhu2021pvtss,LinYXYZL22,ZhouGLF21}, and re-identification \cite{chen2022rest,he2021transreid,zheng2022template}. Since the computational complexity of the self-attention mechanism is quadratic \textit{w.r.t.} the number of patches, global attention is difficult to apply in high-resolution image encoding. Furthermore, according to our analysis in Sec. \ref{intro}, the long-range modeling capability of the global attention mechanism in shallow layers of ViTs is redundant. To reduce the redundancy and computational cost of the self-attention mechanism, some works \cite{wang2021pyramid, chu2021twins, yu2021glance} introduce sub-sampling operations in self-attention blocks while preserving the global receptive field. Such sub-sampling operations require complex designs and introduce extra parameters or computational cost. Different from these works, our Sliding Window Dilated Attention (SWDA) is easy to implement for reducing the redundancy of self-attention mechanism in a dilated manner. \subsection{Local Attention in Vision Transformers} \label{local_atten} In order to make the self-attention mechanism applicable for high-resolution image encoding, some works \cite{liu2021swin,dong2021cswin,GongYLFLFL22} apply the self-attention mechanism to patches in a fixed local region to reduce computational cost. For example, Swin \cite{liu2021swin} applies self-attention to the patches within fixed windows and then adopts a window-shifting strategy in the next layer for information exchange between the patches in different windows. CSwin \cite{dong2021cswin} improves the window-fixed setting in Swin \cite{liu2021swin}, performing self-attention to cross-shaped windows. Other works \cite{Huang2021shuffle,Wang2021crossf, tu2022maxvit} use grouped sampling or spatial shuffling operation for information exchange between different local windows. Inspired by the convolution operation in CNNs \cite{AlexNet,resnet,googlenet,liu2022ConvNeXt,li2021simvit}, ViL \cite{zhang2021vil} and NAT \cite{Hassani2022nat} propose sliding window attention, which models dependencies only with neighboring patches in the window centering each query patch. Moreover, some works \cite{xiao2021EarlyConv,peng22021conformer,chen2021Mobileformer,guo2021cmt,yuan2021ceit,Srinivas2021BoTNet,XiaLL22} combine CNNs and Transformers for introducing the locality prior, and they usually design hand-crafted and complex modules for interaction between CNNs and Transformers features, leading to a lack of scalability to large-scale parameters \cite{kaiming2021mae,xie2021simim}. However, some works \cite{liu2021swin, dong2021cswin,Hassani2022nat,zhang2021vil} above only consider the locality of the self-attention mechanism and lack consideration of the sparsity. 
Although some works \cite{yu2021glance, Huang2021shuffle, tu2022maxvit, Wang2021crossf} above perform self-attention in a sparse and uniform manner, they are designed to approximate the global attended receptive field. In comparison, our Sliding Window Dilated Attention (SWDA) takes both the locality and sparsity of the self-attention mechanism into consideration. Our SWDA introduces a prior to reduce the redundancy of the self-attention mechanism by performing self-attention in a dilated window centered on the query patch.
\subsection{Multi-scale Vision Transformer}
\label{multi_scale}
The vanilla Vision Transformer \cite{dosovitskiy2020image, deit} adopts a “columnar” structure for visual tasks. Since multi-scale information \cite{AlexNet,resnet,googlenet,YuZW22,LiuLLSL22,ChenLLM22,LiuFWLSZ22,ZuoWFHSW22} is beneficial for dense prediction tasks such as object detection, instance and semantic segmentation, recent works \cite{wang2021pyramid,liu2021swin,dong2021cswin, zhang2021vil,Hassani2022nat, ren2021Shunted,Huang2021shuffle,yu2022metaformer,lee2021mpvit,Wang2021crossf,fan2021multiscale,MaGSMKCTYF21} introduce multi-scale modeling capability by using a pyramid structure to design their transformer backbones. Several works \cite{Wang2021crossf,chen2021crossvit,lee2021mpvit,ren2021Shunted, xu2021vitae,yang2021focal,peng22021conformer,chen2021Mobileformer} introduce multi-scale information in patch embedding layers \cite{Wang2021crossf} or self-attention blocks \cite{lee2021mpvit,ren2021Shunted,yang2021focal}, or add extra branches \cite{peng22021conformer,chen2021Mobileformer,xu2021vitae} to perform convolution operations. CrossFormer \cite{Wang2021crossf} utilizes different convolution operations or different patch sizes for designing patch embedding. Shunted Transformer \cite{ren2021Shunted} uses multi-scale token aggregation for obtaining keys and values of various sizes. MPViT \cite{lee2021mpvit} consists of multi-scale patch embedding and multi-path transformer blocks. Conformer \cite{peng22021conformer}, Mobile-Former \cite{chen2021Mobileformer} and ViTAE \cite{xu2021vitae} design additional convolution branches outside or inside the self-attention blocks to integrate multi-scale information. The above methods all require complex designs, which inevitably introduce additional parameters and computational cost. Our Multi-Scale Dilated Attention (MSDA) extracts multi-scale features by setting different dilation rates, which is simple and does not introduce extra parameters or computational cost.
\subsection{Dilated Convolution}
\label{dilate_conv2d}
Traditional convolution-based networks \cite{AlexNet,resnet,googlenet,liu2022ConvNeXt} usually use downsampling or convolution with a large stride to increase the receptive field and reduce computational cost. However, these approaches \cite{AlexNet,resnet,googlenet,liu2022ConvNeXt} result in reduced resolution of feature maps, affecting model performance in many tasks such as object detection \cite{fastrcnn,yolov3,ssd,retinanet,he2017mask} and semantic segmentation \cite{unet,DeepLabV3,fcn}. Therefore, dilated convolution \cite{yu2015multi,cohen2016group,MaSC22,YanZZZZ22} was proposed, which increases the receptive field without reducing the resolution and extracts information from the feature map at different scales by setting different dilation rates; a minimal example is sketched below.
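For exposition only (this example is ours, not from the cited works), a standard $3\times3$ convolution with dilation rate $r$ covers a $(2r+1)\times(2r+1)$ region while keeping the parameter count and, with matching padding, the output resolution unchanged:
\begin{verbatim}
# Illustrative only: a 3x3 convolution with different dilation rates (PyTorch).
import torch
import torch.nn as nn

x = torch.randn(1, 64, 56, 56)                  # (B, C, H, W) feature map
for r in (1, 2, 3):
    conv = nn.Conv2d(64, 64, kernel_size=3, dilation=r, padding=r)
    y = conv(x)                                 # padding=r keeps H and W unchanged
    span = 2 * r + 1                            # spatial extent covered by the kernel
    print(r, tuple(y.shape), f"kernel spans {span}x{span} positions")
\end{verbatim}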
Dilated convolution with dynamic weights \cite{chen2020dynamic}, namely Dynamic Dilated Convolution (DDC), uses the entire feature map to generate the kernel parameters of the convolution, which is data-specific at the feature-map level. Different from existing works, we propose a simple yet effective Dilated Attention operation by introducing various dilation rates at the same semantic level into a single self-attention operation, which more flexibly models multi-scale interaction. Although our method is also based on a sliding-window dilation pattern, it differs from DDC in that it performs self-attention on keys and values sparsely selected in a sliding window centered on the query patch, and is therefore data-specific at the token level. In addition, we also notice a concurrent work, DiNAT \cite{hassani2022dilated}, which uses a single-scale and fixed dilation rate in each block of the same stage, lacking multi-scale interaction. In contrast, our DilateFormer uses a multi-scale strategy in each block, \textit{i.e.}, setting different dilation rates for different heads, which can capture and fuse multi-scale semantic features.
\section{Multi-Scale Dilated Transformer}
\label{method}
In this section, we introduce our proposed Multi-Scale Dilated Transformer (DilateFormer) in detail. In Sec.~\ref{dilate_atten}, we introduce our Sliding Window Dilated Attention (SWDA) operation for effective long-range dependency modeling in feature maps. In Sec.~\ref{dilate_block}, we design Multi-Scale Dilated Attention (MSDA), which simultaneously captures contextual semantic dependencies at different scales to make good use of the information inside the block. The overall framework and variants of the proposed Multi-Scale Dilated Transformer (DilateFormer) are illustrated in Sec.~\ref{Arc}.
\begin{figure*}[t] \centering \vspace{-1em} \includegraphics[width=\textwidth]{model.pdf} \vspace{-2em} \caption{The overall architecture of our DilateFormer. The top part shows the proposed Multi-Scale Dilated Attention (MSDA) block, consisting of DwConv, Multi-Scale Sliding Window Dilated Attention operation (SWDA) and MLP. The bottom part shows DilateFormer, consisting of Overlapping Tokenizer, Overlapping Downsampler, Multi-Scale Dilated Attention (MSDA) block and Multi-Head Self-Attention (MHSA) block.} \label{arc_fig} \vspace{-1em} \end{figure*}
\begin{figure}[t] \centering \vspace{-1em} \includegraphics[width=\linewidth]{MSDA.pdf} \caption{\textbf{Illustration of Multi-Scale Dilated Attention (MSDA).} First, the channels of the feature map are split into different heads. Then, the self-attention operation is performed among the colored patches in the window surrounding the red query patch, using different dilation rates in different heads. Besides, features in different heads are concatenated together and then fed into a linear layer. By default, we use a $3\times3$ kernel size with dilation rates $r$ = 1, 2 and 3, and the sizes of attended receptive fields in different heads are $3\times3$, $5\times5$ and $7\times7$.} \vspace{-1em} \label{msda_fig} \end{figure}
\subsection{Sliding Window Dilated Attention}
\label{dilate_atten}
According to the locality and sparsity properties observed in the global attention of shallow layers in vanilla Vision Transformers (ViTs), we propose a Sliding Window Dilated Attention (SWDA) operation, where the keys and values are \textit{sparsely} selected in a sliding window centered on the query patch. Self-attention is then performed on these representative patches.
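Before giving the formal definition, we provide a minimal PyTorch-style sketch of this sparse key/value gathering and of its multi-head, multi-dilation composition used later in MSDA (illustrative only; the function and variable names are ours and this is not the released implementation):
\begin{verbatim}
# Illustrative sketch of SWDA / MSDA (not the released implementation).
import torch
import torch.nn.functional as F

def swda(q, k, v, kernel_size=3, dilation=1):
    # q, k, v: (B, C, H, W); one query per spatial position.
    # Keys/values are gathered from a (kernel_size x kernel_size) window
    # with the given dilation; zero padding keeps the resolution unchanged.
    B, C, H, W = q.shape
    n = kernel_size * kernel_size
    pad = dilation * (kernel_size - 1) // 2
    k_win = F.unfold(k, kernel_size, dilation=dilation, padding=pad)
    v_win = F.unfold(v, kernel_size, dilation=dilation, padding=pad)
    k_win = k_win.view(B, C, n, H * W)
    v_win = v_win.view(B, C, n, H * W)
    q_flat = q.view(B, C, 1, H * W)
    attn = (q_flat * k_win).sum(1, keepdim=True) / C ** 0.5  # (B, 1, n, HW)
    attn = attn.softmax(dim=2)                               # over window positions
    out = (attn * v_win).sum(dim=2)                          # (B, C, HW)
    return out.view(B, C, H, W)

def msda(q, k, v, dilations=(1, 2, 3)):
    # Split channels into one head per dilation rate (C must be divisible
    # by len(dilations)); a linear projection over the concatenation
    # follows in the actual block.
    qs, ks, vs = (t.chunk(len(dilations), dim=1) for t in (q, k, v))
    outs = [swda(qi, ki, vi, dilation=r)
            for qi, ki, vi, r in zip(qs, ks, vs, dilations)]
    return torch.cat(outs, dim=1)
\end{verbatim}
An efficient implementation would fuse the gathering and the attention computation; the sketch only mirrors the definition that follows.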
Formally, our SWDA is described as follows: \begin{equation} \label{swda} \vspace{-0.5em} X = \mathrm{SWDA}(Q,K,V,r), \end{equation} where $Q$, $K$ and $V$ represent the query, key and value matrix, respectively. Each row of the three matrices indicates a single query/key/value feature vector. For the query at location $(i,~j)$ in the original feature map, SWDA sparsely selects keys and values to conduct self-attention in a sliding window of size $w\times w$ centered on $(i,~j)$. Furthermore, we define a dilation rate $r\in\mathbb{N}^+$ to control the degree of sparsity. Particularly, for the position $(i,~j)$, the corresponding component $x_{ij}$ of the output $X$ from $\mathrm{SWDA}$ operation is defined as follows: \begin{equation} \begin{aligned} \label{softmax} x_{ij} & = \mathrm{Attention}(q_{ij}, K_{r}, V_{r}),\\ & = \mathrm{Softmax}\left(\frac{q_{ij} K^{T}_{r}}{\sqrt{d_{k}}} \right)V_{r}, ~~1 \leq i \leq W, ~1 \leq j \leq H, \end{aligned} \end{equation} where $H$ and $W$ are the height and width of the feature map. $K_{r}$ and $V_{r}$ represent keys and values selected from the feature maps $K$ and $V$. Given the query positioned at $(i,~j)$, keys and values positioned at the following set of coordinate $(i',~j')$ will be selected for conducting self-attention: \begin{equation} \begin{aligned} \label{index} \Big\{(i',j') \Big|& i'=i+p \times r, j'=j+q \times r \Big\}, \\ & -\frac{w}{2} \leq p,~q \leq \frac{w}{2}. \end{aligned} \end{equation} Our SWDA conducts the self-attention operation for all query patches in a sliding window manner. For the query at the edge of the feature map, we simply use the zero padding strategy commonly used in convolution operations to maintain the size of the feature map. By sparsely selecting keys and values centered on queries, the proposed SWDA explicitly satisfies the locality and sparsity property and can model the long-range dependency effectively. \subsection{Multi-Scale Dilated Attention} \label{dilate_block} To exploit the sparsity at different scales of the self-attention mechanism in block-level, we further propose a Multi-Scale Dilated Attention (MSDA) block to extract multi-scale semantic information. As shown in Figure \ref{msda_fig}, given a feature map $X$, we obtain corresponding queries, keys and values by linear projection. After that, we divide the channels of the feature map to $n$ different heads and perform multi-scale SWDA in different heads with different dilation rates. Specifically, our MSDA is formulated as follows: \begin{equation} h_{i} = \mathrm{SWDA}(Q_{i},K_{i},V_{i},r_{i}),~~~~1 \leq i \leq n, \end{equation} \begin{equation} X = \mathrm{Linear}(\mathrm{Concat}[h_{1},...,h_{n}]), \end{equation} where $r_{i}$ is the dilation rate of the $i$-th head and $Q_{i}$, $K_{i}$ and $V_{i}$ represent slices of feature maps fed into the $i$-th head. The outputs $\{h_i\}_{i=1}^n$ are concatenated together and then sent to a linear layer for feature aggregation. By setting different dilation rates for different heads, our MSDA effectively aggregates semantic information at various scales within the attended receptive field and efficiently reduces the redundancy of self-attention mechanism without complex operations and extra computational cost. \begin{table*}[t] \centering \small \vspace{-1em} \captionsetup{justification=centering} \caption{\textsc{Model variants of our DilateFormer. 
MSDA and MHSA represent Multi-Scale Dilated Attention and Multi-Head Self-Attention, respectively.} ``d'', ``h'', ``ks.'' \textsc{and} ``dr.''\textsc{ indicate feature dimension, the number of head, kernel size and dilation rate, respectively.}} \vspace{-0.5em} \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1.5} \begin{tabular}{ccccc} \hline Resolution & Block & Tiny & Small & Base \\ \hline \begin{tabular}[c]{@{}c@{}@{}}Stage 1\\ $(56\times56)$\\\end{tabular}& {\begin{tabular}[c]{@{}c@{}}MSDA\end{tabular} } & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}72\text{-d, } 3\text{-h}\\\text{ks. }3\times3 \\\text{dr. } [1,2,3]\end{bmatrix}\times 2$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}72\text{-d, } 3\text{-h}\\\text{ks. }3\times3 \\\text{dr. } [1,2,3]\end{bmatrix}\times 3$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}96\text{-d, } 3\text{-h}\\\text{ks. }3\times3 \\\text{dr. } [1,2,3]\end{bmatrix}\times 4$\\ \end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}@{}}Stage 2\\ $(28\times28)$\\\end{tabular}& {\begin{tabular}[c]{@{}c@{}}MSDA\end{tabular} } & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}144\text{-d, } 6\text{-h}\\\text{ks. }3\times3 \\\text{dr. } [1,2,3]\end{bmatrix}\times 2$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}144\text{-d, } 6\text{-h}\\\text{ks. }3\times3 \\\text{dr. } [1,2,3]\end{bmatrix}\times 5$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}192\text{-d, } 6\text{-h}\\\text{ks. }3\times3 \\\text{dr. } [1,2,3]\end{bmatrix}\times 8$\\ \end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}@{}}Stage 3\\ $(14\times14)$\\\end{tabular}& {\begin{tabular}[c]{@{}c@{}}MHSA\end{tabular} } & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}288\text{-d, } 12\text{-h} \end{bmatrix}\times 6$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}288\text{-d, } 12\text{-h} \end{bmatrix}\times 8$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}384\text{-d, } 12\text{-h} \end{bmatrix}\times 10$\\\end{tabular} \\ \hline \begin{tabular}[c]{@{}c@{}@{}}Stage 4\\ $(7\times7)$\\\end{tabular}& {\begin{tabular}[c]{@{}c@{}}MHSA\end{tabular} } & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}576\text{-d, } 24\text{-h}\end{bmatrix}\times 2$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}576\text{-d, } 24\text{-h}\end{bmatrix}\times 3$\\ \end{tabular} & \begin{tabular}[c]{@{}c@{}}\specialrule{0em}{1pt}{3pt}$\begin{bmatrix}768\text{-d, } 24\text{-h}\end{bmatrix}\times 3$\\ \end{tabular} \\ \hline \end{tabular} \normalsize \label{Model_table} \vspace{-1em} \end{table*} \subsection{Overall Architecture} \label{Arc} With a pyramid structure, we propose the Multi-Scale Dilated Transformer (DilateFormer) as shown in Figure \ref{arc_fig}. According to the locality and sparsity property of shallow layers in ViTs, the first two stages of DilateFormer use Multi-Scale Dilated Attention (MSDA) proposed in Sec. \ref{dilate_block} while the latter two stages utilize ordinary Multi-Head Self-Attention (MHSA). What's more, we use the overlapping tokenizer \cite{xiao2021EarlyConv} for patch embedding, which uses multiple overlapping $3\times 3$ convolution modules with zero-padding. 
The resolution of the output feature map can be adjusted by controlling the stride size of convolution kernels to be 1 or 2 alternately. To merge patches in the previous stage, we utilize the overlapping downsampler \cite{Hassani2022nat}, a convolution module with an overlapping kernel size of 3 and a stride of 2. To make the position encoding adaptive to inputs of different resolutions, we use Conditional Position Embedding (CPE) proposed in CPVT \cite{chu2021cpe} whenever inputs are fed into MSDA or MHSA blocks. Specifically, our overall architecture is described as follows: \begin{equation} X = \mathrm{CPE}(\hat{X}) + \hat{X} = \mathrm{DwConv}(\hat{X}) + \hat{X}, \end{equation} \begin{equation} Y = \begin{dcases} \mathrm{MSDA}(\mathrm{Norm}(X)) + X, & \text{at~low-level~stages}, \\ \mathrm{MHSA}(\mathrm{Norm}(X)) + X, & \text{at~high-level~stages}, \end{dcases} \end{equation} \begin{equation} Z = \mathrm{MLP}(\mathrm{Norm}(Y)) + Y, \end{equation} where $\hat{X}$ is the input of the current block, \textit{i.e.}, the image patches or the output from the last block. In practice, we implement CPE as a depth-wise convolution (DwConv) module with zero-padding and $3\times 3$ kernel size. We add MLP following prior works \cite{deit,liu2021swin}, which consists of two linear layers with the channel expansion ratio of 4 and one GELU activation. Based on the above network structure, we introduce three variants of the proposed DilateFormer (\textit{i.e.}, Tiny, Small, and Base), and the specific model settings are given in Table \ref{Model_table}. \section{Experiments} \label{exp} To evaluate the performance of our Multi-Scale Dilated Transformer (DilateFormer), we take our model as a vision backbone for ImageNet-1K \cite{deng2009large} classification, COCO \cite{lin2014microsoft} object detection and instance segmentation, and ADE20K \cite{zhou2017scene} semantic segmentation. Furthermore, we evaluate the effectiveness of our key modules via ablation studies. All experiments are conducted on a single server node with 8 A100 GPUs. \begin{table} \centering \vspace{-1em} \captionsetup{justification=centering} \caption{\textsc{Comparison with the state-of-the-art on ImageNet-1K. 
`$\star$' indicates Token Labeling proposed in LV-ViT \cite{jiang2021tlt}, and `$\uparrow$' indicates that the model is fine-tuned at a larger resolutions.}} \vspace{-0.5em} \resizebox{\linewidth}{!}{ \begin{tabular}{c|cc|cc|c} \hline \multirow{2}{*}{Method} & Params & FLOPs & \multirow{2}{*}{Train} & \multirow{2}{*}{Test} & Top1 \\ & (M) & (G) & & & (\%) \\ \hline \hline RegNetY-4G \cite{radosavovic2020designing} & 21 & 4.0 & 224 & 224 & 80.0 \\ ResNet-50 \cite{resnet} & 25 & 4.1 & 224 & 224 & 78.5 \\ ConvNeXt-T \cite{liu2022ConvNeXt} & 28 & 4.5 & 224 & 224 & 82.1 \\ Mobile-Former-508M \cite{chen2021Mobileformer} & 14 & 1.0 & 224 & 224 & 79.3 \\ PVT-S \cite{wang2021pyramid} & 25 & 3.8 & 224 & 224 & 79.8 \\ DW-Conv.-T \cite{han2021connection} & 24 & 3.8 & 224 & 224 & 81.3 \\ CoAtNet-0 \cite{dai2021coatnet} & 25 & 4.2 & 224 & 224 & 81.6 \\ Swin-T \cite{liu2021swin} & 29 & 4.5 & 224 & 224 & 81.3 \\ CvT-13 \cite{wu2021cvt} & 20 & 4.5 & 224 & 224 & 81.6 \\ GG-T \cite{yu2021glance} & 28 & 4.5 & 224 & 224 & 82.0 \\ DeiT-S \cite{deit} & 22 & 4.6 & 224 & 224 & 79.9 \\ Distilled DeiT-S \cite{deit} & 22 & 4.6 & 224 & 224 & 81.2 \\ ViL-S \cite{zhang2021vil} & 25 & 4.9 & 224 & 224 & 82.0 \\ TNT-S \cite{han2021tnt} & 24 & 5.2 & 224 & 224 & 81.3 \\ NesT-T \cite{zhang2021aggregating} & 17 & 5.8 & 224 & 224 & 81.3 \\ BoTNet-S1-59 \cite{Srinivas2021BoTNet} & 34 & 7.3 & 224 & 224 & 81.7 \\ \rowcolor{gray!20} Dilate-T (ours) & 17 & 3.2 & 224 & 224 & 82.1 \\ \rowcolor{gray!20} Dilate-T$^{\star}$ (ours) & 18 & 3.2 & 224 & 224 & \textbf{82.8} \\ \midrule CvT-13 $\uparrow384$ \cite{wu2021cvt} & 20 & 16.3 & 224 & 384 & 83.0 \\ \rowcolor{gray!20} Dilate-T$^{\star}\uparrow384$ (ours) & 18 & 10.2 & 224 & 384 & \textbf{83.8}\\ \hline \hline ResNet-101 \cite{resnet} & 44 & 7.9 & 224 & 224 & 79.8 \\ ConvNeXt-S \cite{liu2022ConvNeXt} & 50 & 8.7 & 224 & 224 & 83.1 \\ RegNetY-16G \cite{radosavovic2020designing} & 84 & 16.0 & 224 & 224 & 82.9 \\ Focal-T \cite{yang2021focal} & 29 & 4.9 & 224 & 224 & 82.2 \\ CrossFormer-S \citeonline{Wang2021crossf} & 31 & 4.9 & 224 & 224 & 82.5 \\ T2T-14 \cite{yuan2021tokens} & 22 & 5.2 & 224 & 224 & 80.7 \\ DiNAT-T \cite{hassani2022dilated} & 28 & 4.3 & 224 & 224 & 82.7 \\ LV-ViT-S$^{\star}$ \cite{jiang2021tlt} & 26 & 6.6 & 224 & 224 & 83.3 \\ CvT-21 \cite{wu2021cvt} & 32 & 7.1 & 224 & 224 & 82.5 \\ Twins-SVT-B \cite{chu2021twins} & 56 & 8.3 & 224 & 224 & 83.1 \\ Swin-S \cite{liu2021swin} & 50 & 8.7 & 224 & 224 & 83.0 \\ PoolFormer-M36 \cite{yu2022metaformer} & 56 & 8.8 & 224 & 224 & 82.1 \\ PVT-L \cite{wang2021pyramid} & 61 & 9.8 & 224 & 224 & 81.7 \\ NesT-S \cite{zhang2021aggregating} & 38 & 10.4 & 224 & 224 & 83.3 \\ DeepVit-L & 55 & 12.5 & 224 & 224 & 82.2\\ CoaT-S & 22 & 12.6 & 224 & 224 & 82.1 \\ TNT-B \cite{han2021tnt} & 66 & 14.1 & 224 & 224 & 82.8 \\ \rowcolor{gray!20} Dilate-S (ours) & 21 & 4.8 & 224 & 224 & 83.3 \\ \rowcolor{gray!20} Dilate-S$^{\star}$ (ours) & 22 & 4.9 & 224 & 224 & \textbf{83.9} \\ \hline CoAtNet-0$\uparrow384$ \cite{dai2021coatnet} & 20 & 13.4 & 224 & 384 & 83.9 \\ T2T-14 $\uparrow384$ \cite{yuan2021tokens} & 22 & 17.1 & 224 & 384 & 83.3 \\ LV-ViT-S$^{\star} \uparrow384$ \cite{jiang2021tlt} & 26 & 22.2 & 224 & 384 & 84.4 \\ CvT-21 $\uparrow384$ \cite{wu2021cvt} & 32 & 24.9 & 224 & 384 & 83.3\\ \rowcolor{gray!20} Dilate-S$^{\star}\uparrow384$ (ours) & 22 & 15.5 & 224 & 384 & \textbf{84.9} \\ \hline \hline ResNet-152 \cite{resnet} & 60 & 11.6 & 224 & 224 & 80.8 \\ EffNet-B7 \cite{tan2019efficientnet} & 54 & 39.2 & 600 & 600 & 84.3 \\ Next-ViT-L 
\cite{li2022next} & 58 & 10.8 & 224 & 224 & 83.6 \\ PoolFormer-M48 \cite{yu2022metaformer} & 73 & 11.6 & 224 & 224 & 82.5 \\ DeepViT-L \cite{zhou2021DeepViT} & 55 & 12.5 & 224 & 224 & 83.1 \\ DW-Conv.-B \cite{han2021connection} & 74 & 12.9 & 224 & 224 & 83.2 \\ DiNAT-S \cite{hassani2022dilated} & 51 & 7.8 & 224 & 224 & 83.8 \\ T2T-24 \cite{yuan2021tokens} & 64 & 13.2 & 224 & 224 & 82.2 \\ ViL-B \cite{zhang2021vil} & 56 & 13.4 & 224 & 224 & 83.2 \\ Twins-SVT-L \cite{chu2021twins} & 99 & 14.8 & 224 & 224 & 83.3 \\ Swin-B \cite{liu2021swin} & 88 & 15.4 & 224 & 224 & 83.4 \\ Shuffle-B \cite{Huang2021shuffle} & 88 & 15.6 & 224 & 224 & 84.0 \\ CoAtNet-2 \cite{dai2021coatnet} & 75 & 15.7 & 224 & 224 & 84.1 \\ Focal-B \cite{yang2021focal} & 90 & 16.0 & 224 & 224 & 83.8 \\ LV-ViT-M$^{\star}$ \cite{jiang2021tlt} & 56 & 16.0 & 224 & 224 & 84.1\\ CrossFormer-L \cite{Wang2021crossf} & 92 & 16.1 & 224 & 224 & 84.0 \\ MPViT-B \cite{lee2021mpvit} & 75 & 16.4 & 224 & 224 & 84.3 \\ DeiT-B \cite{deit} & 86 & 17.5 & 224 & 224 & 83.4 \\ Distilled DeiT-B \cite{deit} & 86 & 17.5 & 224 & 224 & 81.8 \\ NesT-B \cite{zhang2021aggregating} & 68 & 17.9 & 224 & 224 & 83.8 \\ BoTNet-T7 \cite{Srinivas2021BoTNet} & 79 & 19.3 & 256 & 256 & 84.2 \\ \rowcolor{gray!20} Dilate-B (ours) & 47 & 10.0 & 224 & 224 & 84.4\\ \rowcolor{gray!20} Dilate-B$^{\star}$ (ours) & 48 & 10.0 & 224 & 224 & \textbf{84.9}\\ \hline CoAtNet-1 $\uparrow384$ \cite{dai2021coatnet} & 42 & 27.4 & 224 & 384 & 85.1 \\ LV-ViT-M$^{\star}\uparrow384$ \cite{jiang2021tlt} & 56 & 42.2 & 224 & 384 & 85.4\\ BoTNet-S1-128$\uparrow384$ \cite{Srinivas2021BoTNet} & 79 & 45.8 & 256 & 384 & 84.7 \\ \rowcolor{gray!20} Dilate-B$^{\star}\uparrow384$ (ours) & 48 & 31.1 & 224 & 384 & \textbf{85.6} \\ \hline \end{tabular} } \label{cls_table} \end{table} \subsection{Image Classification on ImageNet-1K} \label{cls} \noindent \textbf{- Dataset and implementation details.} ImageNet-1k \cite{deng2009large} is a large-scale 1000-classes dataset that contains 1.28 million training images and 50,000 validation images. We conduct classification experiments on ImageNet-1K dataset to evaluate our variants, following the same training strategies of baseline Transformers as DeiT \cite{deit} and PVT \cite{wang2021pyramid} for a fair comparison. We use the AdamW optimizer \cite{Loshchilov2019adamw} with 300 epochs including the first 10 warm-up epochs and the last 10 cool-down epochs and adopt a cosine decay learning rate scheduler decayed by a factor of 10 every 30 epochs with a base learning rate of 0.001, a batch size of 1024, and a weight decay of 0.05. To further demonstrate the performance of DilateFormer, Token Labeling \cite{jiang2021tlt} is used to auxiliarily train DilateFormer. We add an extra fully connected layer and an auxiliary loss to DilateFormer and follow the training strategy of LV-ViT \cite{jiang2021tlt} where CutMix \cite{zhang2018mixup} and Mixup \cite{Yun2019cutmix} are replaced by MixToken \cite{jiang2021tlt}. For fine-tuning our models on a larger resolution, \textit{i.e.}, 384×384, the special hyperparameters are set as follows: weight decay, learning rate, batch size, warm-up epoch and total epoch are set to 1e-8, 5e-6, 512, 5 and 30. \begin{figure}[t] \centering \vspace{-3em} \includegraphics[width=1.05\linewidth]{para.pdf} \vspace{-2.5em} \caption{Performance comparisons with respect to model parameters on ImageNet-1K classification. 
Without extra training data, our DilateFormer variants achieve comparable or even better performance with fewer model parameters.} \vspace{-1.5em} \label{fig:para} \end{figure}
\noindent \textbf{- Results and analysis.} As shown in Table \ref{cls_table}, Figure \ref{fig:flop_acc1} and Figure \ref{fig:para}, our proposed DilateFormer outperforms previous state-of-the-art models at different model sizes. Specifically, Dilate-S achieves 83.3\% top-1 accuracy on ImageNet-1K with a resolution of 224, surpassing Swin-T \cite{liu2021swin} and ViL-S \cite{zhang2021vil} by 2.0\% and 1.3\%, respectively, with fewer parameters and FLOPs than these models. With the assistance of Token Labeling \cite{jiang2021tlt} (denoted by `$\star$'), our models achieve better performance than LV-ViTs \cite{jiang2021tlt} at different model sizes, \textit{i.e.}, Dilate-S$^{\star}$ (4.9 GFLOPs) and Dilate-B$^{\star}$ (10.0 GFLOPs) achieve 83.9\% and 84.9\% respectively, surpassing LV-ViT-S \cite{jiang2021tlt} (6.6 GFLOPs) and LV-ViT-M \cite{jiang2021tlt} (16 GFLOPs). The results in Table \ref{cls_table} also show the efficiency and effectiveness of the proposed model. Without extra assistance or high-resolution finetuning, Dilate-T consumes only 3.2 GFLOPs and achieves 82.1\% accuracy, which is comparable to the performance of ViL-S \cite{zhang2021vil} (4.9G, 82.0\%), Focal-T \cite{yang2021focal} (4.9G, 82.2\%) and PVT-L \cite{wang2021pyramid} (9.8G, 81.7\%). Similar conclusions can be found in larger models: our Dilate-S (83.3\%) with 4.8 GFLOPs outperforms ViL-B \cite{zhang2021vil} (13.4G, 83.2\%) and DeiT-B \cite{deit} (17.5G, 81.8\%) and matches Swin-B \cite{liu2021swin} (15.4G, 83.4\%), indicating that our MSDA can capture long-range dependencies as effectively as previous methods while saving up to 70\% of the FLOPs. To demonstrate the strong learning capability of DilateFormer, our Dilate-B fine-tuned on 384×384 images obtains 85.6\% top-1 accuracy and outperforms LV-ViT-M \cite{jiang2021tlt} (85.4\%), which requires about $1.4\times$ as many FLOPs.
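For reference, the ImageNet-1K training recipe described in the implementation details above corresponds roughly to the following minimal PyTorch sketch (illustrative; the DeiT-style augmentations and the cool-down phase are omitted, and the minimum learning rate is an assumption):
\begin{verbatim}
# Illustrative sketch of the classification recipe (not the authors' script).
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(10, 1000)   # stand-in for a DilateFormer variant
epochs, warmup = 300, 10            # 10 warm-up epochs out of 300
opt = AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)  # batch size 1024
sched = SequentialLR(
    opt,
    schedulers=[LinearLR(opt, start_factor=1e-3, total_iters=warmup),
                CosineAnnealingLR(opt, T_max=epochs - warmup, eta_min=1e-5)],
    milestones=[warmup])
for epoch in range(epochs):
    # ... one epoch over ImageNet-1K with the DeiT-style augmentations ...
    opt.step()                      # placeholder for the real update loop
    sched.step()
\end{verbatim}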
\begin{table*} \centering \small \vspace{-1em} \captionsetup{justification=centering} \caption{\textsc{Object detection and instance segmentation with Mask R-CNN on COCO val2017.}} \vspace{-0.5em} \renewcommand\tabcolsep{1.5pt} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} &\small Params & \small FLOPs & \multicolumn{6}{c|}{Mask R-CNN $1\times$ schedule} & \multicolumn{6}{c}{Mask R-CNN $3\times$ + MS schedule} \\ & \small (M) & \small (G) & ~{AP}$^b$~ & {AP}$^{b}_{50}$ & {AP}$^{b}_{75}$ & ~{AP}$^{m}$~ & {AP}$^{m}_{50}$ & {AP}$^{m}_{75}$ & ~{AP}$^b$~ & {AP}$^{b}_{50}$ & {AP}$^{b}_{75}$ & ~{AP}$^{m}$~ & {AP}$^{m}_{50}$ & {AP}$^{m}_{75}$ \\ \midrule Res50 \cite{resnet} & 44 & 260 & 38.0 & 58.6 & 41.4 & 34.4 & 55.1 & 36.7 & 41.0 & 61.7 & 44.9 & 37.1 & 58.4 & 40.1 \\ NAT-T \cite{Hassani2022nat} & 48 & 258 & - & - & - & - & - & - & 47.7 & 69.0 & 52.6 & 42.6 & 66.1 & 45.9 \\ Swin-T \cite{liu2021swin} & 48 & 264 & 42.2 & 64.6 & 46.2 & 39.1 & 61.6 & 42.0 & 46.0 & 68.2 & 50.2 & 41.6 & 65.1 & 44.8 \\ MPViT-S \cite{lee2021mpvit} & 43 & 268 & - & - & - & - & - & - & 48.4 & 70.5 & 52.6 & \textbf{43.9} & 67.6 & \textbf{47.5} \\ UniFormer-S$_{h14}$ \cite{li2022uniformer} & 41 & 269 & 45.6 & 68.1 & 49.7 & 41.6 & 64.8 & \textbf{45.0} & 48.2 & 70.4 & 52.5 & 43.4 & 67.1 & 47.0 \\ Focal-T \cite{yang2021focal} & 49 & 291 & 44.8 & 67.7 & 49.2 & 41.0 & 64.7 & 44.2 & 47.2 & 69.4 & 51.9 & 42.7 & 66.5 & 45.9 \\ TRT-ViT-C \cite{xia2022trt} & 86 & 294 & 44.7 & 66.9 & 48.8 & 40.8 & 63.9 & 44.0 & 47.3 & 68.8 & 51.9 & 42.7 & 65.9 & 46.0 \\ PVT-M \cite{wang2021pyramid} & 64 & 302 & 42.0 & 64.4 & 45.6 & 39.0 & 61.6 & 42.1 & 44.2 & 66.0 & 48.2 & 40.5 & 63.1 & 43.5 \\ \rowcolor{gray!20} Dilate-S (ours) & 44 &262 & \textbf{45.8} & \textbf{68.2} &\textbf{50.1} & \textbf{41.7} & \textbf{65.3} & 44.7 & \textbf{49.0} & \textbf{70.9} & \textbf{53.8} & 43.7 & \textbf{67.7} & 46.9 \\ \midrule X101-32 \cite{xie2017AgRTr} & 63 &340 & 41.9 & 62.5 & 45.9 & 37.5 & 59.4 & 40.2 & 44.0 & 64.4 & 48.0 & 39.2 & 61.4 & 41.9 \\ NAT-S \cite{Hassani2022nat} & 70 & 330 & - & - & - & - & - & - & 48.4 & 69.8 & 53.2 & 43.2 & 66.9 & 46.5\\ TRT-ViT-D \cite{xia2022trt} & 121 & 375 & 45.3 & 67.9 & 49.6 & 41.6 & 64.7 & 44.8 & 48.1 & 69.3 & 52.7 & 43.4 & 66.7 & 46.8 \\ Focal-S \cite{yang2021focal} & 71 & 401 & 47.4 & 69.8 & 51.9 & 42.8 & 66.6 & 46.1 & 48.8 & 70.5 & 53.6 & 43.8 & 67.7 & 47.2 \\ PVT-L \cite{wang2021pyramid} & 81 & 494 & 42.9 & 65.0 & 46.6 & 39.5 & 61.9 & 42.5 & 44.5 & 66.0 & 48.3 & 40.7 & 63.4 & 43.7 \\ Swin-B \cite{liu2021swin} & 107 & 496 & 46.9 & - & - & 42.3 & - & - & 48.5 & 69.8 & 53.2 & 43.4 & 66.8 & 46.9 \\ MPViT-B \cite{lee2021mpvit} & 95 & 503 & - & - & - & - & - & - & 49.5 & 70.9 & 54.0 & \textbf{44.5} & 68.3 & \textbf{48.3} \\ Focal-B \cite{yang2021focal} & 110 & 533 & \textbf{47.8} & - & - & 43.2 & - & - & 49.0 & 70.1 & 53.6 & 43.7 & 67.6 & 47.0 \\ \rowcolor{gray!20} Dilate-B (ours) & 67 & 370 & 47.6 & \textbf{70.2} & \textbf{55.2} & \textbf{43.4} & \textbf{67.2} & \textbf{46.8} & \textbf{49.9} & \textbf{71.9} & \textbf{55.1} & \textbf{44.5} & \textbf{68.9} & 47.7 \\ \bottomrule \end{tabular} \vspace{-1em} \label{coco_mrcnn_tab} \end{table*} \subsection{Object Detection and Instance Segmentation on COCO} \label{detect} \noindent \textbf{- Dataset and implementation details.} We evaluate our variants on object detection and instance segmentation on COCO2017 dataset \cite{lin2014microsoft}. COCO2017 dataset contains 118K images for training, 5K images for validation and 20K images for testing. 
We utilize two representative frameworks: Mask R-CNN \cite{he2017mask} and Cascade Mask R-CNN \cite{cai2019cascade} implemented in mmdetection \cite{chen2019mmdetection} and adopt the ImageNet-1K pre-trained variants as backbones. For both the Mask R-CNN and Cascade Mask R-CNN frameworks, we use the AdamW optimizer with a base learning rate of 0.0001, a weight decay of 0.05, and a batch size of 16. For a fair comparison, we train our variants Dilate-S and Dilate-B via two strategies: (1) the $1\times$ schedule with 12 epochs, where the shorter side of the image is resized to 800 and the longer side is at most 1333; (2) the $3\times$ schedule with 36 epochs, where the multi-scale training strategy is adopted and the shorter side of the image is resized within $[480, 800]$. Because image resolution in object detection and instance segmentation is generally larger than that in image classification, we use a combination of local window attention, local window attention with the shifted operation \cite{liu2021swin} and global attention in stage 3 of DilateFormer to reduce computational cost.
\vspace{0.1cm}
\noindent \textbf{- Results and analysis.} Table \ref{coco_mrcnn_tab} and Table \ref{coco_cascsde_tab} report box mAP ($\rm AP^b$) and mask mAP ($\rm AP^m$) of the Mask R-CNN and Cascade Mask R-CNN frameworks, respectively. Our DilateFormer variants outperform recent Transformers on both object detection and instance segmentation in both frameworks. For the Mask R-CNN $1\times$ schedule, our variants surpass the Swin Transformers \cite{liu2021swin} of comparable size by up to 3.6\% box mAP and 2.6\% mask mAP (Dilate-S vs.\ Swin-T). For the $3\times$ + MS schedule, Dilate-B achieves 49.9\% box mAP and 44.5\% mask mAP in the Mask R-CNN framework, and 53.5\% box mAP and 46.1\% mask mAP in the Cascade Mask R-CNN framework. Furthermore, our Dilate-S outperforms PVT-M \cite{wang2021pyramid} by 3.8\% box mAP and 2.7\% mask mAP at the $1\times$ schedule with 13.2\% fewer FLOPs.
\subsection{Semantic Segmentation on ADE20K}
\label{seg}
\noindent \textbf{- Dataset and implementation details.} The ADE20K dataset \cite{zhou2017scene} contains 150 semantic categories, with 20,000 images for training, 2,000 images for validation and 3,000 images for testing. We evaluate the proposed DilateFormer variants on ADE20K semantic segmentation with two representative frameworks, Upernet \cite{xiao2018unified} and Semantic FPN \cite{kirillov2019panoptic}, implemented in mmsegmentation \cite{mmseg2020} with our ImageNet-1K pre-trained variants as backbones. For training Upernet, we follow the configuration of Swin Transformer and train our variants for 160K iterations. We employ the AdamW \cite{Loshchilov2019adamw} optimizer with a base learning rate of 0.00006, a weight decay of 0.01, a batch size of 16, and a linear scheduler with a linear warmup of 1,500 iterations. For Semantic FPN with 80K iterations, we follow the same configuration as PVT, using a cosine learning rate schedule with an initial learning rate of 0.0002 and a weight decay of 0.0001.
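As a minimal sketch (ours, with illustrative names) of the UperNet optimization schedule described above, assuming the linear warmup is followed by a linear decay to zero over the remaining iterations:
\begin{verbatim}
# Illustrative sketch of the UperNet 160K-iteration schedule
# (not the authors' config).
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

model = torch.nn.Linear(10, 150)    # stand-in; ADE20K has 150 classes
max_iters, warmup_iters = 160_000, 1_500
opt = AdamW(model.parameters(), lr=6e-5, weight_decay=0.01)  # batch size 16

def lr_factor(it):
    if it < warmup_iters:           # linear warm-up over 1,500 iterations
        return (it + 1) / warmup_iters
    # assumed linear decay to zero over the remaining iterations
    return 1.0 - (it - warmup_iters) / (max_iters - warmup_iters)

sched = LambdaLR(opt, lr_factor)
for it in range(max_iters):
    # ... one training iteration on ADE20K ...
    opt.step()                      # placeholder for the real update loop
    sched.step()
\end{verbatim}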
\begin{table*}[h] \centering \small \captionsetup{justification=centering} \caption{ \textsc{Object detection and instance segmentation with Cascade Mask R-CNN on COCO val2017.}} \vspace{-0.5em} \begin{tabular}{c|c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} & Params & FLOPs & \multicolumn{6}{c}{$3\times$ + MS schedule} \\ & (M) & (G) & {AP}$^b$ & {AP}$^{b}_{50}$ & {AP}$^{b}_{75}$ & {AP}$^{m}$ & {AP}$^{m}_{50}$ & {AP}$^{m}_{75}$ \\ \midrule Res50 \cite{resnet} & 82 & 739 & 46.3 & 64.3 & 50.5 & 40.1 & 61.7 & 43.4 \\ NAT-T \cite{Hassani2022nat} & 85 & 737 & 51.4 & 70.0 & 55.9 & 44.5 & 67.6 & 47.9 \\ ConvNeXt-T \cite{liu2022ConvNeXt} & 86 & 741 & 50.4 & 69.1 & 54.8 & 43.7 & 66.5 & 47.3 \\ Swin-T \cite{liu2021swin} & 86 & 745 & 50.5 & 69.3 & 54.9 & 43.7 & 66.6 & 47.1 \\ Shuffle-T \cite{Huang2021shuffle} & 86 & 746 & 50.8 & 69.6 & 55.1 & 44.1 & 66.9 & 48.0 \\ UniFormer-S$_{h14}$ & 79 & 747 & 52.1 & 71.1 & 56.6 & 45.2 & 68.3 & 48.9 \\ DeiT-S \cite{deit} & 80 & 889 & 48.0 & 67.2 & 51.7 & 41.4 & 64.2 & 44.3 \\ \rowcolor{gray!20} Dilate-S (ours) & 82 & 740 & \textbf{52.4} & \textbf{71.6} & \textbf{56.9} & \textbf{45.2} & \textbf{68.6} & \textbf{49.0} \\ \midrule X101-32 \cite{xie2017AgRTr} & 101 & 819 & 48.1 & 66.5 & 52.4 & 41.6 & 63.9 & 45.2 \\ NAT-S \cite{Hassani2022nat} & 108 & 809 & 52.0 & 70.4 & 56.3 & 44.9 & 68.1 & 48.6 \\ ConvNeXt-S \cite{liu2022ConvNeXt} & 108 & 827 & 51.9 & 70.8 & 56.5 & 45.0 & 68.4 & 49.1 \\ Swin-S \cite{liu2021swin} & 107 & 838 & 51.8 & 70.4 & 56.3 & 44.7 & 67.9 & 48.5 \\ NAT-B \cite{Hassani2022nat} & 147 & 931 & 52.3 & 70.9 & 56.9 & 45.1 & 68.3 & 49.1 \\ ConvNeXt-B \cite{liu2022ConvNeXt} & 146 & 964 & 52.7 & 71.3 & 57.2 & 45.6 & 68.9 & 49.5 \\ Swin-B \cite{liu2021swin} & 145 & 982 & 51.9 & 70.5 & 56.4 & 45.0 & 68.1 & 48.9 \\ \rowcolor{gray!20} Dilate-B (ours) & 105 & 849 & \textbf{53.5} & \textbf{72.4} & \textbf{58.0} & \textbf{46.1} & \textbf{69.9} & \textbf{50.3} \\ \bottomrule \end{tabular} \vspace{1em} \label{coco_cascsde_tab} \end{table*} \vspace{0.1cm} \noindent \textbf{- Results and analysis.} Table \ref{tab:seg_tab} shows the results of DilateFormer equipped with UperNet and Semantic FPN frameworks. Our variants DilateFormer-Small/Base equipped with UperNet framework achieve 47.1/50.4\% mIoU and 47.6/50.5\% MS mIoU, outperforming Swin \cite{liu2021swin} by at least 2.6\% of mIoU and 1.0\% of MS mIoU respectively. For Semantic FPN framework, our variants achieve 47.1/48.8\% mIoU, and exceed Swin \cite{liu2021swin} by 3.6-5.6\%. \begin{table*}[t] \setlength{\tabcolsep}{6pt} \renewcommand{\arraystretch}{1} \captionsetup{justification=centering} \caption{\textsc{Semantic segmentation experimental results on ADE20K validation set. 
\\ Left: with Upernet; Right: with semantic FPN.}} \vspace{-0.5em} \centering \small \begin{tabular}{c|cccc} \toprule \multirow{3}{*}{Method} & \multicolumn{4}{c}{Upernet 160K} \\ & Params & FLOPs & mIoU & MS mIoU \\ & (M) & (G) & (\%) & (\%)\\ \midrule Res101 \cite{resnet} & 86 & 1029 & - & 44.9 \\ Twins-S \cite{chu2021twins} & 54 & 901 & 46.2 & 47.1 \\ TwinsP-S \cite{chu2021twins} & 55 & 919 & 46.2 & 47.5 \\ ConvNeXt-T \cite{liu2022ConvNeXt} & 60 & 939 & 46.0 & 46.7 \\ TRT-ViT-B \cite{xia2022trt} & 81 & 941 & 46.5 & 47.5 \\ GG-T \cite{yu2021glance} & 60 & 942 & 46.4 & 47.2 \\ Swin-T \cite{liu2021swin} & 60 & 945 & 44.5 & 45.8 \\ Shuffle-T \cite{Huang2021shuffle} & 60 & 949 & 46.6 & \textbf{47.8} \\ Focal-T \cite{yang2021focal} & 62 & 998 & 45.8 & 47.0 \\ \rowcolor{gray!20} Dilate-S (ours) & 54 & 935 & \textbf{47.1} & 47.6 \\ \midrule NAT-S \cite{Hassani2022nat} & 82 & 1010 & 48.0 & 49.5 \\ Twins-B \cite{chu2021twins} & 89 & 1020 & 47.7 & 48.9 \\ ConvNeXt-S \cite{liu2022ConvNeXt} & 82 & 1027 & 48.7 & 49.6 \\ Swin-S \cite{liu2021swin} & 81 & 1038 & 47.6 & 49.5 \\ GG-S \cite{yu2021glance} & 81 & 1035 & 48.4 & 49.6 \\ TRT-ViT-D \cite{xia2022trt} & 144 & 1065 & 48.8 & 49.8 \\ Shuffle-B \cite{Huang2021shuffle} & 121 & 1196 & 49.0 & 50.5 \\ Next-ViT-L \cite{li2022next} & 92 & 1072 & 50.1 & 50.8 \\ Focal-S \cite{yang2021focal} & 85 & 1130 & 48.0 & 50.0 \\ \rowcolor{gray!20} Dilate-B (ours) & 79 & 1046 & \textbf{50.8} & \textbf{51.1} \\ \bottomrule \end{tabular} \label{seg_upernet_tab} \hspace{30pt} \centering \small \begin{tabular}{c|ccc} \toprule \multirow{3}{*}{Method} & \multicolumn{3}{c}{Semantic FPN 80K} \\ & Params & FLOPs & mIoU \\ & (M) & (G) & (\%) \\ \midrule Res50 \cite{resnet} & 29 & 183 & 36.7 \\ Twins-S \cite{chu2021twins} & 28 & 144 & 43.2 \\ PVT-S \cite{wang2021pyramid} & 28 & 161 & 39.8 \\ TwinsP-S \cite{chu2021twins} & 28 & 162 & 44.3 \\ XCiT-S12/8 \cite{ali2021XCiT} & 30 & - & 44.2 \\ TRT-ViT-B \cite{xia2022trt} & 46 & 176 & 45.4 \\ Swin-T \cite{liu2021swin} & 32 & 182 & 41.5 \\ Next-ViT-S \cite{li2022next} & 36 & 208 & 46.5 \\ CrossFormer-S \cite{Wang2021crossf} &34 & 209 & 46.4 \\ \rowcolor{gray!20} Dilate-S (ours) & 28 & 178 & \textbf{47.1} \\ \midrule Res101 \cite{resnet} & 48 & 260 & 38.8 \\ Next-ViT-B \cite{li2022next} & 49 & 260 & 48.6 \\ XCiT-S24/8 \cite{ali2021XCiT} & 52 & - & 47.1 \\ Swin-S \cite{liu2021swin} & 53 & 274 & 45.2 \\ PVT-L \cite{wang2021pyramid} & 65 & 283 & 42.1 \\ TwinsP-L \cite{chu2021twins} & 65 & 283 & 46.4 \\ TRT-ViT-D \cite{xia2022trt} & 106 & 296 & 46.7 \\ CrossFormer-B \cite{Wang2021crossf} & 56 & 320 & 48.0 \\ Swin-B \cite{liu2021swin} & 91 & 422 & 46.0 \\ \rowcolor{gray!20} Dilate-B (ours) & 51 & 288 & \textbf{48.8} \\ \bottomrule \end{tabular} \label{seg_fpn_tab} \vspace{-1.5em} \label{tab:seg_tab} \end{table*} \subsection{Ablation Studies} \label{ablate_study} We conduct ablation studies from the perspectives of sparse and local patterns, dilation scale, block setting, stage setting and overlapping tokenizer/downsampler. More ablation studies about the kernel size are given in the supplementary material. \vspace{0.1cm} \noindent \textbf{- SWDA vs. 
other sparse and local patterns.} We replace Sliding Window Dilated Attention (SWDA) in the first two stages with other sparse and local patterns, \textit{i.e.}, Dilated Convolution (DC) \cite{yu2015multi}, Dynamic Dilated Convolution (DDC) \cite{chen2020dynamic} , Window Attention with Spatial Shuffle (WASS\protect\footnotemark[2]) \cite{Huang2021shuffle} and Sliding Window Attention (SWA) \cite{li2021simvit}. \stepcounter{footnote}\footnotetext{The WASS is an approximate sparse sampling operation which divides patches into local Windows like Swin \cite{liu2021swin} and then shuffles keys and values between different windows.} As shown in Table \ref{tab:other_local_sparse}, our SWDA outperforms other sparse and local patterns in various vision tasks. SWDA achieves 82.1\% Top-1 accuracy on ImageNet-1K, 44.9\% box mAP/40.9\% mask mAP on COCO and 45.84\% mIoU on ADE20K. SWDA outperforms DC (+0.4\%, +1.4\%/+0.6\%, +1.69\%) because attention is data-specific compared to conventional convolution. Although DDC is local, sparse and data-specific like SWDA, SWDA still outperforms DDC (+0.3\%, +0.6\%/+0.3\%, +0.94\%). DDC uses the entire feature map to generate the kernel parameter of convolution, which is data-specific at the feature-map level; and in comparison, SWDA performs self-attention on keys and values sparsely selected in a sliding window centered on the query patch, which is data-specific at the token level. Therefore, SWDA has a stronger modeling capability than DDC. SWDA also outperforms WASS (+0.3\%, +0.8\%/+0.5\%, +1.18\%) and SWA (+0.3\%, +0.5\%/+0.1\%, +2.21\%), which demonstrates the importance of considering locality and sparsity in self-attention of the shallow layers. \begin{table}[t] \centering \renewcommand\arraystretch{1.1} \captionsetup{justification=centering} \caption{\textsc{Experiment results with local and sparse patterns in the first two stages. 
The} Top-1 \textsc{is on ImageNet-1K, AP$^{b}$ and AP$^{m}$ are on COCO val2017 with Mask R-CNN $1\times$ schedule,} mIoU \textsc{is on ADE20K validation set with semantic FPN.}} \vspace{-0.5em} \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c} \toprule \multirow{2}{*}{Pattern} & \multirow{2}{*}{Locality} & \multirow{2}{*}{Sparsity} & Data & Top-1 & {AP}$^b$ & {AP}$^m$ & mIoU \\ & & & Specific & (\%) & (\%) & (\%) & (\%) \\ \midrule DC & $\checkmark$ & $\checkmark$ & & 81.7 & 43.5 & 40.3 & 44.15 \\ DDC & $\checkmark$ & $\checkmark$ & $\checkmark$ & 81.8 & 44.3 & 40.6 & 44.90 \\ WASS & & $\checkmark$ & $\checkmark$ & 81.8 & 44.1 & 40.4 & 44.66 \\ SWA & $\checkmark$ & & $\checkmark$ & 81.8 & 44.4 & 40.8 & 43.63 \\ \rowcolor{gray!20} SWDA & $\checkmark$ & $\checkmark$ & $\checkmark$ & \textbf{82.1} & \textbf{44.9} & \textbf{40.9} & \textbf{45.84} \\ \bottomrule \end{tabular} } \vspace{0.5em} \label{tab:other_local_sparse} \end{table} \begin{table}[t] \renewcommand\arraystretch{1.1} \makeatletter\def\@captype{table} \centering \captionsetup{justification=centering} \caption{\textsc{Top-1 accuracy on ImageNet-1K of different dilation scales.}} \begin{tabular}{c|c|c|c} \toprule \multirow[c]{2}{*}{Scale} & Head Num &Dilation & Top-1 \\ & in Stage1/2 &Rate & (\%)\\ \midrule \multirow{5}{*}{Multi-} &$[2,~4]$ & $[1,~2]$ & 81.7 \\ \cline{2-4} & \multirow{3}{*}{$[3,~6]$} & $[1,~2,~3]$ & \textbf{82.1} \\ & ~ & $[2,~3,~4]$ & 81.8 \\ & ~ & $[3,~4,~5]$ & 81.7 \\ \cline{2-4} &$[4,~8]$ & $[1,~2,~3,~4]$ & 81.9 \\ \cline{1-4} \multirow{3}{*}{Single}& \multirow{3}{*}{$[3,~6]$} & $[1]$ & 81.7 \\ & & $[2]$ & 81.9 \\ & & $[3]$ & 81.8 \\ \bottomrule \end{tabular} \vspace{-1.5em} \label{dilate_scale_tab} \end{table} \noindent \textbf{- Dilation scale.} \label{dilation_scale} Since the number of heads must be a multiple of the number of dilation scales, we change the number of heads and the feature dimension of each head according to the number of dilation scales, keeping the total feature dimension unchanged. We analyze the effect of the dilation scales via the performance on the ImageNet-1K classification task. The number of heads in stage 1 or 2, the dilation scales and the top-1 accuracy are shown in Table \ref{dilate_scale_tab}. With the same number of heads in the block, the top-1 accuracy (82.1\%) of multi-scale dilated attention, \textit{i.e.}, $[1,~2,~3]$, is better than that of the single-scale settings, \textit{i.e.}, $[1]$, $[2]$, and $[3]$, because multiple scales provide richer information than a single scale. Moreover, the dilation rates in the block need to be moderate so that the block can simultaneously model both the locality and the sparsity of attention, without introducing the redundant modeling caused by an overly large receptive field, as in global attention. Therefore, we set the number of dilation scales to 3, \textit{i.e.}, rates $[1,~2,~3]$, by default. \vspace{0.1cm} \noindent \textbf{- MSDA vs. other block settings.} In our DilateFormer, we stack Multi-Scale Dilated Attention (MSDA) blocks in the first two stages. To demonstrate the effectiveness of our proposed MSDA, we replace MSDA in the first two stages of the default setting (D-D-G-G) with local attention in a shifted window (L-L-G-G) \cite{liu2021swin} and global attention (G-G-G-G) \cite{dosovitskiy2020image} for comparison. We also compare with global attention combined with a naïve downsampling technique, namely global attention with spatial reduction (G-G-G-G + sr.)
\cite{wang2021pyramid}, which reduces the redundant interaction between patches by decreasing the number of patches. With dilation, the maximum attended receptive field in MSDA is $7\times7$; the attended receptive field in local attention is $7\times7$, and in global attention it is the entire feature map. \vspace{0.1cm} Table \ref{blocks_setting_table} summarizes the comparison results. With the same maximum attended receptive field size, our MSDA (82.1\%) outperforms local attention with shifted windows (L-L-G-G) \cite{liu2021swin} (81.7\%) with fewer FLOPs, which demonstrates the effectiveness of sparse and local attention mechanisms in shallow layers. Compared with global attention (G-G-G-G) \cite{dosovitskiy2020image}, our MSDA achieves an improvement of 0.3\% with half the FLOPs, which further demonstrates the effectiveness and efficiency of the proposed local and sparse attention mechanism. Also, the superiority of MSDA over global attention shows the redundancy of modeling dependencies among all image patches. To reduce this redundant interaction, global attention with spatial reduction utilizes downsampling by convolution, but this introduces extra parameters. By contrast, our MSDA exploits locality and sparsity without extra parameters. The results show that our MSDA surpasses global attention with spatial reduction by 0.5\%, which indicates the effectiveness of the redundancy reduction in the proposed MSDA. In downstream tasks, our MSDA block also outperforms the other types of blocks, indicating that MSDA has a stronger modeling capability. \begin{table}[t] \centering \renewcommand\arraystretch{1.2} \captionsetup{justification=centering} \caption{\textsc{Experiment results with different blocks in Stage1/2.} ``sr.'' \textsc{indicates spatial reduction operation. ``D'', ``G'' and ``L'' indicate dilation, global and local operations, respectively. The} Top-1 \textsc{is on ImageNet-1K, AP$^{b}$ and AP$^{m}$ are on COCO val2017 with Mask R-CNN $1\times$ schedule,} mIoU \textsc{is on ADE20K validation set with semantic FPN.}} \vspace{-0.5em} \begin{tabular}{c|c|c|c|c|c|c} \toprule \makecell[c]{Block \\ Type} & \makecell[c]{Params \\ (M) } & \makecell[c]{FLOPs \\ (G) } & \makecell[c]{Top-1 \\ (\%)} & \makecell[c]{{AP}$^b$ \\ (\%)} & \makecell[c]{{AP}$^{m}$ \\ (\%)} & \makecell[c]{mIoU \\ (\%)} \\ \midrule G-G-G-G~+~sr.
& 20.6 & 3.02 & 81.6 & 40.9 & 37.9 & 44.4 \\ G-G-G-G & 17.2 & 6.36 & 81.8 & 42.0 & 38.7 & 44.5 \\ L-L-G-G & 17.2 & 3.24 & 81.7 & 40.9 & 37.6 & 44.3 \\ \rowcolor{gray!20} D-D-G-G & 17.2 & 3.18 & \textbf{82.1} & \textbf{44.2} & \textbf{40.9} & \textbf{45.8} \\ \bottomrule \end{tabular} \label{blocks_setting_table} \vspace{0.5em} \end{table} \begin{table}[t] \centering \captionsetup{justification=centering} \caption{\textsc{ Analysis of Multi-Scale Dilated Attention blocks in different stages on ImageNet-1K.}} \begin{tabular}{c|c|c} \toprule Stage & FLOPs & Top-1 \\ Setting & (G) & (\%) \\ \midrule G-G-G-G & 6.36 & 81.8 \\ D-G-G-G & 3.53 & 82.2 \\ \rowcolor{gray!20} D-D-G-G & 3.18 & \textbf{82.1} \\ D-D-D-G & 3.05 & 81.3 \\ D-D-D-D & 3.04 & 80.5 \\ \bottomrule \end{tabular} \vspace{-1.5em} \label{stages_setting_table} \end{table} \begin{table}[t] \centering \renewcommand\arraystretch{1.2} \captionsetup{justification=centering} \caption{Top-1 \textsc{accuracy on ImageNet-1K of using Overlapping Tokenizer and Downsampler.}} \vspace{-0.5em} \begin{tabular}{c|c|c|c|c} \toprule Overlapping & Overlapping & Params & FLOPs & Top-1 \\ Tokenizer & Downsampler & (M) & (G) & (\%) \\ \midrule ~ & ~ & 16.1 & 2.62 & 81.7 \\ ~ & $\checkmark$ & 17.2 & 2.74 & 81.8 \\ $\checkmark$ & ~ & 16.2 & 3.12 & 81.9 \\ \rowcolor{gray!20} $\checkmark$ & $\checkmark$ & 17.2 & 3.18 & \textbf{82.1} \\ \bottomrule \end{tabular} \vspace{0.5em} \label{overlap_table} \end{table} \vspace{0.1cm} \noindent \textbf{- Stage setting.} To demonstrate the modeling capability of the MSDA block at shallow stages, we conduct a set of experiments to explore the performance of using MSDA in different stages. In the four stages of the model, we progressively replace the global MHSA block in each stage with the MSDA block. Table \ref{stages_setting_table} shows FLOPs and top-1 accuracy of models with different structures. The model performance shows a decreasing trend, from 82.2\% down to 80.5\%, as the proportion of MSDA blocks in the model stage increases. The results show that it is more effective to consider the locality and sparsity property of the self-attention mechanism in shallow stages rather than in deeper stages. What's more, the model with MSDA block only in stage1 (82.2\%) performs slightly better than the model with MSDA blocks in both stage1 and stage2 (82.1\%), but the former has larger FLOPs (+ 0.35G). Therefore, we use MSDA blocks in both stage1 and stage2 by default. \begin{table}[t] \renewcommand\arraystretch{1.2} \makeatletter\def\@captype{table} \centering \captionsetup{justification=centering} \caption{\textsc{Comparison of model inference. “Mem” denotes the peak memory for evaluation. “FPS” is the number of images processed for one second.}} \vspace{-0.5em} \resizebox{\linewidth}{!}{ \begin{tabular}{c|c|c|c|c|c} \toprule \multirow{2}{*}{Method} & Params & FLOPs & FPS & Mem. 
& Top-1 \\ & (M) & (G) & (img/s) & (G) & (\%) \\ \midrule ConvNeXt-T \cite{liu2022ConvNeXt} & 28 & 4.5 & 2450 & 3.5 & 82.1 \\ Swin-T \cite{liu2021swin} & 28 & 4.5 & 1681 & 5.0 & 81.3 \\ NAT-T \cite{Hassani2022nat} & 28 & 4.5 & 1515 & 3.7 & 83.2 \\ DiNAT-T \cite{hassani2022dilated} & 28 & 4.5 & 1479 & 3.7 & 82.7 \\ \rowcolor{gray!20} Dilate-S (ours) & 21 & 4.8 & 1368 & 3.1 & \textbf{83.3} \\ \midrule ConvNeXt-S \cite{liu2022ConvNeXt} & 50 & 8.7 & 1558 & 4.8 & 83.1 \\ Swin-S \cite{liu2021swin} & 50 & 8.7 & 1045 & 6.7 & 83.0 \\ NAT-S \cite{Hassani2022nat} & 51 & 7.8 & 1055 & 5.0 & 83.7 \\ DiNAT-S \cite{hassani2022dilated} & 51 & 7.8 & 1069 & 5.0 & 83.8 \\ \rowcolor{gray!20} Dilate-B (ours) & 47 & 10.0 & 1122 & 4.0 & \textbf{84.4} \\ \bottomrule \end{tabular} } \vspace{-2em} \label{tab:fps} \end{table} \vspace{0.1cm} \noindent \textbf{- Overlapping Tokenizer/Downsampler.} \label{overlapping} We further study how the overlapping tokenizer and downsampler affect performance. While keeping all other settings the same, we replace our overlapping tokenizer or downsampler with a simple non-overlapping tokenizer or downsampler, i.e., a convolution with kernel size 4 and stride 4 or a convolution with kernel size 2 and stride 2, respectively. As shown in Table \ref{overlap_table}, our model achieves a slight improvement (+0.4\%) with the overlapping tokenizer/downsampler, indicating that the main improvement of our model does not rely on these two modules. \vspace{0.1cm} \noindent \textbf{- Comparisons of real running times.} In Table \ref{tab:fps} we compare the inference FPS and peak memory of our DilateFormers with those of current SOTA models. FPS and peak memory usage are measured from forward passes with a batch size of 256 on a single A100 GPU. With comparable model parameters and FLOPs, our DilateFormers achieve comparable FPS and better accuracy than current SOTA models. \vspace{0.1cm} \noindent \textbf{- Grad-CAM Visualization.} \label{grad_cam} To further illustrate the recognition ability of the proposed DilateFormer, we apply Grad-CAM \cite{jacobgilpytorchcam} to visualize the regions of greatest concern in the last layer of DeiT-Tiny \cite{deit}, Swin-Tiny \cite{liu2021swin} and Dilate-Tiny. As shown in Figure \ref{fig:grad_cam}, our Dilate-Tiny model performs better in locating the target objects and attends to semantic areas more continuously and completely, suggesting the stronger recognition ability of our model. Such ability yields better classification performance compared with DeiT-Tiny and Swin-Tiny. \vspace{0.1cm} \noindent \textbf{- More Visualization Results on Global Attention.} \label{more_attn_results} In Sec.~\ref{intro}, we discuss two key properties of global attention in shallow layers, \textit{i.e.}, \textit{locality} and \textit{sparsity}. To further analyze these two properties, we visualize more attention maps in the shallow layers of ViT-Small \cite{dosovitskiy2020image}. As shown in Figure \ref{fig:more_atten}, the attention maps in the shallow layers of ViT-Small show that activated key patches are sparsely distributed in the neighborhood of the query patch. Specifically, the patches with high attention scores sparsely scatter around the query patch, while the other patches have low attention scores. \begin{figure}[t] \centering \vspace{-2em} \includegraphics[width=\linewidth]{grad_cam.pdf} \caption{Grad-CAM Visualization of the last layer of DeiT-Tiny, Swin-Tiny and Dilate-Tiny.
Images are from the validation set of ImageNet-1k.} \vspace{-1.5em} \label{fig:grad_cam} \end{figure} \section{Conclusion} \label{conclude} In this work, we propose a strong and effective Vision Transformer, called DilateFormer, which can provide powerful and general representations for various vision tasks. Our proposed Multi-Scale Dilated Attention (MSDA) takes both the locality and sparsity of the self-attention mechanism in the shallow layers into consideration, which can effectively aggregate semantic multi-scale information and efficiently reduce the redundancy of the self-attention mechanism without complex operations and extra computational cost. Extensive experiment results show that the proposed method achieves state-of-the-art performance in both ImageNet-1k classification and down-stream vision tasks such as object detection and semantic segmentation. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{atten_different_layer.pdf} \caption{More Visualization of attention maps of shallow layers of ViT-Small. We visualize the activations in attention maps of the query patches (in the red box). The attention maps show that patches with high attention scores sparsely scatter around the query patch, and other patches have low attention scores.} \vspace{-1.5em} \label{fig:more_atten} \end{figure} \section*{Acknowledgments} This work was supported partially by the NSFC (U21A20471,U1911401,U1811461), Guangdong NSF Project (No. 2020B1515120085, 2018B030312002), Guangzhou Research Project (201902010037), and the Key-Area Research and Development Program of Guangzhou (202007030004). \bibliographystyle{ieee_fullname} \normalem
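As a supplementary illustration of the attention pattern studied in the ablations of Sec.~\ref{ablate_study}, a minimal single-head PyTorch sketch of sliding window dilated attention and its multi-scale combination might look as follows. This is not the authors' implementation: the tensor layout, the function names, and the omission of the linear projections and of any relative-position terms are simplifications of ours.
\begin{verbatim}
import torch
import torch.nn.functional as F

def swda(q, k, v, kernel_size=3, dilation=1):
    """Sliding Window Dilated Attention (illustrative sketch).

    q, k, v: (B, C, H, W). Each query attends to a kernel_size x kernel_size
    set of keys/values sampled with the given dilation, centred on the query.
    """
    B, C, H, W = q.shape
    pad = dilation * (kernel_size - 1) // 2
    # Gather the dilated neighbourhood of every position: (B, C*k*k, H*W)
    k_win = F.unfold(k, kernel_size, dilation=dilation, padding=pad)
    v_win = F.unfold(v, kernel_size, dilation=dilation, padding=pad)
    k_win = k_win.view(B, C, kernel_size ** 2, H * W)
    v_win = v_win.view(B, C, kernel_size ** 2, H * W)
    q = q.view(B, C, 1, H * W)
    attn = (q * k_win).sum(dim=1, keepdim=True) / C ** 0.5   # (B, 1, k*k, H*W)
    attn = attn.softmax(dim=2)
    out = (attn * v_win).sum(dim=2)                          # (B, C, H*W)
    return out.view(B, C, H, W)

def msda(q, k, v, dilations=(1, 2, 3)):
    """Multi-Scale Dilated Attention: split the channels (heads) across dilation rates."""
    qs, ks, vs = (t.chunk(len(dilations), dim=1) for t in (q, k, v))
    outs = [swda(cq, ck, cv, dilation=d)
            for cq, ck, cv, d in zip(qs, ks, vs, dilations)]
    return torch.cat(outs, dim=1)   # a linear projection would typically follow

x = torch.randn(1, 72, 56, 56)   # channels divisible by the number of dilation rates
y = msda(x, x, x)                # in the real block, q, k, v come from projections of x
\end{verbatim}
With kernel size $3$ and dilation rates $[1,~2,~3]$, the attended receptive fields are $3\times3$, $5\times5$ and $7\times7$, consistent with the maximum $7\times7$ window quoted in the block-setting ablation above.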
\section{Introduction} In the past decade the effort invested in ultrarelativistic heavy ion collisions (URHIC) has grown considerably \cite{QM95}. The general hope is that at some time in the near future one may be able to observe an excursion of strongly interacting matter from the state of hadrons before the collision into the phase of a quark--gluon plasma (QGP). Consequently, the discussion of possible signals from such a short-lived state is quite vivid: Weakly interacting probes like photons or lepton pairs, as well as strongly interacting signals like those presented by quark flavors of higher mass have been proposed. Common to most of these investigations is the assumption of a {\em thermalized\/} plasma phase, followed by the calculation of the time evolution along one or the other line of physical reasoning. The present paper is a study of the {\em time scales\/} necessary for such a thermalization. Ultimately, the goal is to investigate a physical scenario that one may reach in future URHIC: A sea of gluons, initially at low temperature, is heated to a very high temperature over a short time. In this {\em hot glue\/}, quark-antiquark pairs are popping up -- until at the very end a thermal equilibrium in the sense of a degenerate plasma is reached. For the purpose of this conference contribution, however, the calculations will be presented on a more abstract level. The full consideration is the subject of an extended paper\cite{h95neq}. The primary motivation for this study is serious doubt that the requirements for the applicability of standard transport theory ( = kinetic gas theory) are fulfilled in a QGP: The thermal scattering of constituents occurs so frequently that subsequent collisions overlap quantum mechanically. This implies that a treatment in terms of quasi-particles is inadequate; one has to account for a nontrivial spectral function of the system components \cite{L88,h94rep}. The use of a finite temperature field theoretical formulation with a continuous spectral function is also suggested by the Narnhofer-Thirring theorem\cite{NRT83}, which states that interacting systems at finite temperature {\em cannot\/} be described by particles with a sharp dispersion law. As an additional benefit, this approach is free of the unphysical infrared singularities occurring in standard perturbation theory. The paper is organized as follows: In the next section a brief introduction to the formalism necessary for non-equilibrium quantum fields is given. In section 3 the approximate spectral function is discussed, followed by a solution of the quantum transport equation in section 4. In section 5 a generalized kinetic equation is solved which stands between the usual Boltzmann equation and the quantum transport equation of section 4. Conclusions are drawn in the final section of the present work. \section{Matrix-valued Schwinger-Dyson equation} As has been pointed out by various authors, the description of dynamical (time dependent) quantum phenomena in a statistical ensemble necessitates a formalism with a doubled Hilbert space\cite{D84a,RS86,LW87}. For the present purpose the relevant content of this formalism is that its two-point Green functions are 2$\times$2 matrix-valued. It is left to the reader to choose either the conventional Schwinger-Keldysh, or Closed-Time Path (CTP), Green function formalism \cite{SKF} or the technically simpler method of Thermo Field Dynamics (TFD)\cite{Ubook}.
Within this matrix formulation, consider the Schwinger-Dyson equation for the full quark propagator $ S = S_0 + S_0 \odot \Sigma \odot S $. Here $S_0$ is the free and $S$ the full two-point Green function of the quark field, $\Sigma$ is the full self energy and the generalized product of these is to be understood as a matrix product (thermal and spinor indices) and an integration (each of the matrices is a function of two space coordinates). Throughout this paper the convention is used to write space-time and momentum variables also as lower indices, e.g. $\Sigma_{xy}\equiv \Sigma(x,y)$. In the CTP formulation as well as in the $\alpha=1$ parameterization of TFD\cite{hu92}, the matrix elements of $S$, $S_0$ and $\Sigma$ obey \begin{equation}\label{sme} S^{11}_{(0)}+S^{22}_{(0)}=S^{12}_{(0)}+S^{21}_{(0)} \;\;\;\;\;\;\Sigma^{11}+\Sigma^{22}=-\Sigma^{12}-\Sigma^{21} \;.\end{equation} Therefore the four components of the Schwinger-Dyson equation are not independent, the matrix equation can be simplified by a linear transformation which one may conveniently express as a matrix product \cite{RS86,hu92}. It achieves a physical interpretation only in the TFD formalism, see ref. \cite{h94rep}. The transformation matrices ${\cal B}$ are \begin{equation}\label{lc} {\cal B}(n) = \left(\array{lr}(1 - n) &\; -n\\ 1 & 1\endarray\right) \;,\end{equation} depending on one parameter only. For example, the third term in the Schwinger Dyson equation becomes \begin{equation}\label{qptp} {\cal B}(n)\,\tau_3\,S_0\odot\Sigma\odot S\,({\cal B}(n))^{-1} = \left({\array{lr} S_0^R\odot\Sigma^R\odot S^R & \mbox{something} \\ & S_0^A\odot\Sigma^A\odot S^A \endarray}\right) \;.\end{equation} Here, $\tau_3 = \mbox{diag}(1,-1)$, $\Sigma^{R,A}$ are the retarded and advanced full self energy function, and $S^{R,A}$ are the retarded and advanced full propagator (similarly for $S_0$) \begin{eqnarray}\label{sra}\nonumber \Sigma^R = \Sigma^{11}+\Sigma^{12}\;,\;\;\;\; &\Sigma^A = \Sigma^{11}+\Sigma^{21}\\ S^R = S^{11}-S^{12}\;,\;\;\;\; &S^A = S^{11}-S^{21} \;.\end{eqnarray} The diagonal elements of the transformed equation therefore are {\em retarded\/} and {\em advanced\/} Schwinger-Dyson equation. The off-diagonal element is a {\em transport equation\/}. Now one switches to the mixed (or Wigner) representation of functions depending on two space-time coordinates: $ \tilde\Sigma_{XP} = \int\!\!d^4(x-y) \, \exp\left({\mathrm i} P_\mu (x-y)^\mu\right)\Sigma_{xy} $ with $X = (x+y)/2$, the $\tilde{}$-sign will be dropped henceforth. The Wigner transform of the convolution $\Sigma\odot G$ is a nontrivial step. Formally it may be expressed as a gradient expansion \begin{equation}\label{gex} \int\!\!d^4(x-y) \; \exp\left({\mathrm i} P_\mu (x-y)^\mu\right)\; \Sigma_{xz}\odot G_{zy} = \exp\left(-{\mathrm i}\Diamond\right)\,\tilde\Sigma_{XP} \, \tilde{G}_{XP} \;.\end{equation} $\Diamond$ is a 2nd order differential operator acting on both functions appearing behind it, $ \Diamond A_{XP} B_{XP} = \frac{1}{2}\left(\partial_X A_{XP} \partial_P B_{XP}- \partial_P A_{XP}\partial_X B_{XP}\right) $. Obviously, this first-order term in the application of the infinite-order differential operator $\exp(-{\mathrm i} \Diamond)$ is the Poisson bracket\cite{h94rep}. Henceforth this operator is formally split into $\cos\Diamond-{\mathrm i}\sin\Diamond$. 
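To make the truncations used below explicit, expanding the exponential in eq.\ (\ref{gex}) to first order (this merely spells out the definition of $\Diamond$ just given) yields
\[
{\mathrm e}^{-{\mathrm i}\Diamond}\,\tilde\Sigma_{XP} \, \tilde{G}_{XP}
= \tilde\Sigma_{XP}\,\tilde{G}_{XP}
-\frac{{\mathrm i}}{2}\left(\partial_X\tilde\Sigma_{XP}\,\partial_P\tilde{G}_{XP}
-\partial_P\tilde\Sigma_{XP}\,\partial_X\tilde{G}_{XP}\right)
+{\cal O}(\Diamond^2)\;,
\]
i.e.\ the ordinary product plus the Poisson bracket. Correspondingly, $\cos\Diamond = 1 + {\cal O}(\Diamond^2)$ and $\sin\Diamond = \Diamond + {\cal O}(\Diamond^3)$, so that a first-order gradient expansion amounts to the replacements $\cos\Diamond\rightarrow 1$ and $\sin\Diamond\rightarrow\Diamond$.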
Similarly, one defines real Dirac matrix-valued functions as the real and imaginary parts of the propagator and the self energy: \begin{equation}\label{split} S^{R,A}_{XP} = G_{XP} \mp {\mathrm i} \pi {\cal A}_{XP} \;\;\;\;\; \Sigma^{R,A}_{XP} = \mbox{Re}\Sigma_{XP} \mp {\mathrm i}\pi \Gamma_{XP} \;.\end{equation} ${\cal A}_{XP}$ is the generalized spectral function of the quantum field. Now consider the equations obtained by the action of Dirac differential operators (= {\em inverse free propagators\/}) on the matrix-transformed Schwinger-Dyson equation\cite{h94gl3}. The diagonal components are \begin{eqnarray}\nonumber &&\mbox{Tr}\left[\left( P^\mu\gamma_\mu- m \right) {\cal A}_{XP}\right] = \cos\Diamond\,\mbox{Tr}\left[ \mbox{Re}\Sigma_{XP}\, {\cal A}_{XP} + \Gamma_{XP} \, G_{XP}\right]\\ \label{k8c} &&\mbox{Tr}\left[\left( P^\mu\gamma_\mu- m \right) G_{XP}\right] = \mbox{Tr}\left[1\right] + \cos\Diamond\, \mbox{Tr}\left[ \mbox{Re}\Sigma_{XP} \, G_{XP} -\pi^2\,\Gamma_{XP}\,{\cal A}_{XP}\right] \;.\end{eqnarray} Two important facts about these equations have to be emphasized. First, notice that these equations do not in general admit a $\delta$-function solution for the spectral function ${\cal A}_{XP}$ even in zero order of the gradient expansion. This has led to erroneous statements in papers deriving transport equations from the Schwinger-Dyson equation\cite{hm93}, because the right side of (\ref{k8c}) may not be disregarded. In short, there is no such thing as a mass-shell constraint in {\em quantum\/} transport theory! Secondly, the equations do not contain odd powers of the differential operator $\Diamond$. This implies that when truncating the Schwinger-Dyson equation to first order in this differential operator (the usual order for the approximations leading to {\em kinetic\/} equations), the spectral function ${\cal A}_{XP}$ may still be obtained as the solution of an algebraic equation. The off-diagonal component of the transformed Schwinger-Dyson equation reads, after acting on it with the inverse free propagator \cite{h94rep,h94gl3} \begin{equation}\label{k5} \widehat{S}^{-1}_0 S^K_{xy} = \Sigma^R_{xz} \odot S^K_{zy} - \Sigma^K_{xz} \odot S^A_{zy} \;,\end{equation} with kinetic components $S^K = \left( 1- n\right)\,S^{12} + n\,S^{21}$ and $\Sigma^K = \left( 1- n\right)\,\Sigma^{12} + n\,\Sigma^{21}$. Inserting the real functions defined before, this leads to a differential equation, which henceforth is labeled the {\em quantum transport equation\/}\cite{h94rep,h94gl3}: \begin{eqnarray}\label{tpe1} \nonumber \mbox{Tr}\left[\left( \partial_X^\mu\gamma_\mu + 2 \sin\Diamond\; \mbox{Re}\Sigma_{XP} + \cos\Diamond\;2\pi\Gamma_{XP} \right) S^K_{XP}\right]& = \\ 2{\mathrm i} \mbox{Tr}\left[ {\mathrm i}\sin\Diamond\;\Sigma^K_{XP} \, G_{XP} - \cos\Diamond\;\Sigma^K_{XP} \, {\mathrm i}\pi{\cal A}_{XP}\right]& \;.\end{eqnarray} Note that here even as well as odd powers of the operator $\Diamond$ occur. The solution in zero order of $\Diamond$ is not trivial, since it leads to the diagonalization of the propagator in equilibrium states\cite{h94rep,hu92}.
\section{Effective fermion propagator and spectral functions} In a thermal equilibrium state at temperature $T$, the full propagator of a fermionic quantum field has to obey the Kubo-Martin-Schwinger boundary condition\cite{KMS,LW87,hu92}: \begin{equation}\label{kmf} \left(1 - n_F(E)\right)S^{12}_{\mbox{\small eq}}(E,\vec{p}) + n_F(E) S^{21}_{\mbox{\small eq}}(E,\vec{p}) = 0 \;.\end{equation} $n_F(E)$ is the Fermi-Dirac equilibrium distribution function at temperature $T$,\\ $n_F(E) = (\mbox{e}^{ \beta (E-\mu)}+1)^{-1} $. As seen above, the matrix valued propagator has only three independent components, two of which are furthermore complex conjugate. One may now use the KMS condition to eliminate the off-diagonal component of the equilibrium propagator in favor of $n(E)$\cite{h94rep,hu92}: \begin{eqnarray}\nonumber &&S^{(ab)}_{\mbox{\small eq}}(p_0,\vec{p}) = \int\limits_{-\infty}^\infty\!\!dE\, {\cal A}(E,\vec{p})\;\times \\ \label{fsk1} &&\tau_3\, ({\cal B}(n(E)))^{-1}\; \left(\!{\array{ll} {\displaystyle \frac{1}{p_0-E+{\mathrm i}\epsilon}} & \\ & {\displaystyle \frac{1}{p_0-E-{\mathrm i}\epsilon}} \endarray}\right)\; {\cal B}(n(E)) \;.\end{eqnarray} Here ${\cal A}(E,\vec{p})$ is the spectral function of the quark field, properly normalized and approaching a $\delta$-function for vanishing interaction. With the present paper one is addressing non-equilibrium states. For such states one may {\em not\/} derive a spectral representation of the propagator in general\cite{Ubook}, but one may still exploit the fact that retarded and advanced propagator are by definition analytical functions of the energy parameter in the upper or lower complex energy half plane. Hence, even for non-equilibrium states one may write in the mixed (or Wigner) representation \begin{equation}\label{rapf} S^{R,A}(E,\vec{p},X) = \mbox{Re}{G}_{XP} \mp \pi {\mathrm i} {\cal A}_{XP} = \int\limits_{-\infty}^{\infty}\!\!dE^\prime\; {\cal A}(E^\prime,\vec{p},X)\; \frac{1}{E-E^\prime\pm{\mathrm i}\epsilon} \;,\end{equation} since this is nothing but the Wigner transform of $ S^{R,A}_{xy} = \mp 2\pi{\mathrm i}\Theta\left(\pm(x_0-y_0)\right) {\cal A}_{xy} $. By inspection of eq. (\ref{k8c}) one finds, that only a self energy function is needed for a full determination of the function ${\cal A}_{XP}$. This self energy function is in general a functional of ${\cal A}_{XP}$ again -- which then leads to a complicated set of integro-differential equations for the self consistent determination of the retarded and advanced propagator. For the limited purpose of the present paper however, one makes some physically motivated assumptions: \begin{enumerate} \item The self energy function for the quarks is dominated by gluonic contributions. This is justified because the quark-quark scattering cross section is much smaller than the quark-gluon cross section. \item The gluon background is dominated by external conditions, i.e., we neglect the back-reaction of quarks on the gluon distribution. \item The external conditions determining the gluon field are changing in a short time interval, and the system is translationally invariant in 3-dimensional coordinate space. \item One neglects the influence of anti-quarks in the spectral function. This restriction is removed in the extended version of this paper, ref. \cite{h95neq}. 
\end{enumerate} There is also a practical reason for these assumptions: They allow a clean separation of various aspects of the quantum transport problem, whereas this separation is difficult (if not impossible) when considering more realistic systems. These assumptions lead to the following ansatz for the imaginary part of the self energy function: \begin{equation}\label{ss1} \pi\Gamma_{XP}\equiv\gamma^0\;\Gamma_t = \gamma^0\;g T(t)\,= \gamma^0\;g\, \left\{ {\array{lll} T_i & \mbox{if} & -\Lambda> t\\ \displaystyle \frac{(t + \Lambda)\, T_f - t\, T_i}{\Lambda} & \mbox{if} & 0 > t > -\Lambda \\ T_f & \mbox{if} & t > 0 \endarray} \right. \;.\end{equation} Within this ansatz the limit $\Lambda\rightarrow 0$ is discussed separately; it corresponds to instantaneous heating of the gluon background. Furthermore, one ensures causality by calculating the real part of the self energy through a dispersion integral. This integral is divergent, hence in principle one also needs a regularization procedure -- but the effects of this divergence cancel in the equations. For the quark spectral function, one uses the simple form \begin{equation} {\cal A}(E,\vec{p},t) = \frac{\gamma^0}{\pi} \frac{\gamma_t}{ \left(E - \omega_t\right)^2 + \gamma_t^2} \;.\end{equation} Hence, one approximates the quark spectral function by two time-dependent parameters $\omega_t$ and $\gamma_t$, which may be interpreted as an effective mass and an effective spectral width. One may argue about the validity of this approach, in particular whether or not a momentum-dependent spectral width is an absolute necessity for a realistic calculation. However, first of all one may safely assume that the quarks appearing in the hot medium are slow -- hence the properties of the quark distribution may be approximated by those of quarks at rest. A second argument in favor of this ansatz is the question of causality: The expectation value of the anti-commutator of two quark fields is nothing but the Fourier transform of the spectral function. Hence, while for some more general spectral function causality may be violated, the above ansatz guarantees it when supplemented with a corresponding antiparticle piece\cite{h95neq,h95comm}. With the above spectral function the coupled system (\ref{k8c}) reduces to {\em a single\/} nonlinear equation for $\gamma_t$, plus the condition $ \omega^2_t = \omega^2_0 = \vec{p}^2 + m^2 $. \begin{figure}[t] \vspace*{75mm} \special{psfile=gamma.ps hscale=75 vscale=75 hoffset=-40pt voffset=-180pt} \caption{Time dependent spectral width parameter $\gamma_t$.\protect\newline Parameters are $g$=0.12, $T_i=$ 1 MeV, $T_f=$ 200 MeV, $m=$ 10 MeV.\protect\newline Thin lines: $\Gamma_t$ from eq. (\ref{ss1}), thick lines: $\gamma_t$ from eqs. (\ref{k10c}), (\ref{k9c});\protect\newline continuous lines: $\Lambda=0$, dashed lines: $\Lambda=$ 4 fm/c. } \vspace*{1mm} \hrule \end{figure} This latter condition is more complicated when the anti-particle piece of the spectral function is taken into account\cite{h95neq}. The energy parameter is chosen as $E=\omega_0$, which yields, instead of eq.
(\ref{k8c}), the Schwinger-Dyson equation for the retarded (or advanced) two-point function of the quarks: \begin{eqnarray}\nonumber \gamma_t & = & g T_i + g (T_f - T_i)\,\Theta(t)+ g (T_f - T_i)\, \left( \frac{t+\Lambda}{\Lambda} -\frac{1}{2 \gamma_t \Lambda}\right) \,\Theta(-t)\Theta(t+\Lambda)\\ \label{k10c} && \hphantom{ g T_i} -\frac{ g(T_f - T_i)}{2 \gamma_t \Lambda}\, \left(\Theta(t)\,{\mathrm e}^{{\displaystyle -2 \gamma_t t}} -\Theta(t+\Lambda)\,{\mathrm e}^{{ \displaystyle -2 \gamma_t (t+\Lambda)}}\right) \end{eqnarray} In the limit $\Lambda\rightarrow 0$, this becomes even simpler: \begin{equation} \gamma_t = g T_i + g (T_f - T_i)\,\Theta(t)\, \left(1-{\mathrm e}^{{\displaystyle -2 \gamma_t t}}\right) \label{k9c} \end{equation} In Fig. 1, the solution of these equations is plotted in comparison to the time dependent imaginary part of the self energy function from eq. (\ref{ss1}). It is obvious that the solution of the nonlinear equations (\ref{k10c}) and (\ref{k9c}), respectively, approaches the imaginary part of the self energy function with a characteristic delay time. Simply using $\Gamma_t$ from eq. (\ref{ss1}) instead of $\gamma_t$ -- which would correspond to an {\em adiabatic\/} approximation -- therefore ignores this delay time. In ref.\cite{h95neq} it is discussed how this delay time is calculated from the system parameters. \section{Transport equation}\label{tpe} As was stated above, the off-diagonal component of the transformed Schwinger-Dyson equation is a transport equation\cite{RS86,h94rep}. To see this more clearly, {\em define\/} the generalized covariant distribution function $N_{XP}$ through the equation \begin{equation}\label{nde} \left(1-N_{XP}\right)\,S_{XP}^{12} + N_{XP}\,S_{XP}^{21} =0 \,.\end{equation} Note the similarity with eq. (\ref{kmf}): The above equation indeed ensures that in the limit of thermal equilibrium one achieves \begin{figure}[t] \vspace*{75mm} \special{psfile=nbnfull.ps hscale=75 vscale=75 hoffset=-40pt voffset=-180pt} \caption{Normalized time dependent fermionic distribution function for slow quarks.\protect\newline Parameters as in Fig. 1; thin lines: $N^B_t/n_F(m,T_f)$ from the Boltzmann equation (\ref{tpe3}), thick lines: $N_t/n_F(m,T_f)$ from the quantum transport equation (\ref{tpe2});\protect\newline continuous lines: $\Lambda=0$, dashed lines: $\Lambda=$ 4 fm/c. } \vspace*{1mm} \hrule \end{figure} $ \lim_{\mbox{\footnotesize equil}} N_{XP} = n_F(E) $. For the purpose of the present paper $N_{XP}$ is taken as a scalar function. The description of phenomena like spin diffusion requires the use of a Dirac matrix-valued $N_{XP}$\cite{hm93}. It follows that $ S_{XP}^K = 2\pi{\mathrm i}\,\left(N_{XP} - n\right)\,{\cal A}_{XP} $.
From this step the mathematical interpretation of the generalized distribution function $N_{XP}$ is obvious: It is the parameter which diagonalizes the full non-equilibrium matrix-valued propagator through the Bogoliubov matrix ${\cal B}$ from (\ref{lc})\cite{h94rep,hu92}: \begin{equation} {\cal B}(N_{XP})\,\tau_3\,S_{XP}\,({\cal B}(N_{XP}))^{-1}= \left({\array{rr} G_{XP}-{\mathrm i}\pi{\cal A}_{XP} & \\ & G_{XP}+{\mathrm i}\pi{\cal A}_{XP} \endarray}\right) \;.\end{equation} For the following, one furthermore defines a ``pseudo-equilibrium'' distribution function: The 2$\times$2 matrix structure of the self energy function allows one to diagonalize it, too, by a Bogoliubov transformation\cite{h94rep} with a parameter $N^0_{XP}$ such that \begin{equation}\label{psef} \Sigma^{12}_{XP} = 2\pi{\mathrm i} N^0_{XP}\,\Gamma_{XP}\;\;\;\;\;\;\; \Sigma^{21}_{XP} = 2\pi{\mathrm i} \left(N^0_{XP}-1\right)\,\Gamma_{XP} \;.\end{equation} In the present approach $N^0_{XP}$ is determined by the hot gluon gas acting as background, hence without the back-reaction it is equal to the equilibrium function, $N^0_{XP} \equiv n_F(E,T(t))$ with a time dependence due to the time dependence of the temperature. Looking at slow quarks with $E=m$, one furthermore replaces $N(X;m,\vec{p})$ by $N_t$ and neglects the energy derivative of $n_F(E,T(t))$. The resulting quantum transport equation according to (\ref{tpe1}) then is: \begin{equation}\label{tpe2} \frac{d}{d t}N_t = -2 \,\gamma_t\left( N_t - n_F(m,T(t)) \right) \;\end{equation} with $T(t)$ as defined in eq. (\ref{ss1}). This equation looks surprisingly similar to a kinetic equation in the relaxation time approach. However, this similarity is superficial: The {\em kinetic\/} equation, or Boltzmann equation, derived for this simple model system reads \begin{equation}\label{tpe3} \frac{d}{d t}N^B_t = -2 \,\Gamma_t\left( N^B_t - n_F(m,T(t)) \right) \;,\end{equation} with the imaginary part of the self energy $\Gamma_t$ from eq. (\ref{ss1}) instead of the spectral width parameter $\gamma_t$. That these differ substantially in the beginning of the relaxation process has been shown in the previous section. Fig. 2 depicts the influence of this difference on the solution of the transport equation. The result is that the {\em relaxation process\/} is slowed by the inclusion of the spectral function of the system components. Note that the curves of Fig. 2 exhibit the same behavior as seen in Fig. 1: The relaxation {\em rate\/} is similar in the quantum transport and the Boltzmann equation, but the former experiences a characteristic {\em delay time\/} with respect to the latter. This delay time is almost doubled with respect to the delay time occurring in the spectral width parameter $\gamma_t$; an asymptotic calculation is carried out in ref.\cite{h95neq}.
Therefore, to answer the question, consider the two steps which are between the equations (\ref{tpe2}) and (\ref{tpe3}): First of all a quasi-particle approximation, secondly an expansion of the operator $\exp(-{\mathrm i}\Diamond)$ to first order, i.e., replacing it by $1 - {\mathrm i}\Diamond$. The first of these steps would be in contradiction to the philosophy outlined in the introduction to this work. The second step however may be kept: To expand the diagonal as well as the off-diagonal pieces of the original matrix-valued Schwinger-Dyson equation to first order in the operator $\Diamond$\cite{h94rep,h94gl3}. The necessary differential equation for $N_{XP}$ has been derived in ref.\cite{h94gl3}, correct to first order in the gradient expansion it reads \begin{eqnarray} \nonumber &&\mbox{Tr}\left[ {\cal A}_{XP} \left\{ \vphantom{\int\limits_0^0} \left( \vphantom{\int\limits} P_\mu\gamma^\mu - m - \mbox{Re} \Sigma_{XP} \right), N_{XP} \right\}\right]\\ \nonumber &&\;\;= {\mathrm i} \mbox{Tr}\left[\vphantom{\int\limits_0^0} {\cal A}_{XP} \left( \vphantom{\int\limits} N_{XP} \Sigma^{21}_{XP} - \left( N_{XP}-1\right) \Sigma^{12}_{XP}\right) \right]\\ \nonumber &&\;\; -{\mathrm i} \int\limits_{-\infty}^0\!\!d\tau \int\!\frac{dE}{2\pi}\, \sin(\tau E)\,\mbox{Tr}\left[ \left\{ \vphantom{\int\limits_0^0} {\cal A}(X;P_0+E,\vec{P}),\right.\right.\\ \label{tpe1a} &&\left. \left.\left(\vphantom{\int\limits} N_{XP} \Sigma^{21}(t+\tau/2,\vec{X};P) - \left(N_{XP}-1\right) \Sigma^{12}(t+\tau/2,\vec{X};P)\vphantom{\int\limits}\right)\vphantom{\int\limits_0^0} \right\}_{N}\right] \;.\end{eqnarray} \begin{figure}[t] \vspace*{75mm} \special{psfile=nbngnfull.ps hscale=75 vscale=75 hoffset=-40pt voffset=-180pt} \caption{Normalized time dependent fermionic distribution function for slow quarks.\protect\newline Parameters as in Fig. 1; thin lines: left $N^B_t/n_F(m,T_f)$ from the Boltzmann equation (\ref{tpe3}), right $N_t/n_F(m,T_f)$ from the quantum transport equation (\ref{tpe2}); thick lines: $N^G_t/n_F(m,T_f)$ from the generalized kinetic equation (\ref{tpe4}); continuous lines: $\Lambda=0$, dashed lines: $\Lambda=$ 4 fm/c. } \vspace*{1mm} \hrule \end{figure} In this equation, $\left\{\vphantom{\int\limits}\cdot,\cdot\right\}$ denotes the Poisson bracket, the index $N$ means that the derivatives are not acting on $N_{XP}$. Here, as outlined before, one may use a spectral function which is the solution of an algebraic equation. For the present simple model this means to replace $\gamma_t$ by $\Gamma_t$ in the function ${\cal A}$. Note, that the above equation is strictly causal: It involves a time integral only over the past history of the system, and its derivation is based on the dispersion integral (\ref{rapf}). The problem of unphysical singularities in the propagator therefore does not occur. 
Furthermore, replacing $N_{XP}$ by the unknown function $N^G_t$ and inserting all the previous definitions, one obtains the nonlinear equation \begin{eqnarray}\nonumber \frac{d}{d t} N^G_t & = & - 2 \Gamma_t\,\left(N^G_t - n_F(m,T(t))\right) \\ \nonumber && + 2 g\left(T_f-T_i\right)\,\left[\vphantom{\int\limits_0^0} \Theta(t)\left(\frac{t}{\Lambda} + \frac{1}{2 \Gamma_t \Lambda}\right) \exp(-2 \Gamma_t t)\right.\\ \nonumber &&- \Theta(t+\Lambda)\left(\frac{t+\Lambda}{\Lambda} + \frac{1}{2 \Gamma_t \Lambda}\right)\exp(-2 \Gamma_t (t+\Lambda))\\ &&+\left.\vphantom{\int\limits_0^0} \Theta(-t)\Theta(t+\Lambda)\,\frac{1}{2\Gamma_t\Lambda}\right]\, \left(N^G_t - \frac{n_F(m,T_f) T_f - n_F(m,T_i) T_i}{T_f - T_i}\right) \label{tpe4} \;.\end{eqnarray} In the limit $\Lambda\rightarrow 0$ this may be simplified to \begin{eqnarray}\nonumber \frac{d}{d t} N^G_t & = & - 2 \Gamma_t\,\left(N^G_t - n_F(m,T(t))\right) \\ \nonumber && + 4\,t\,\Theta(t)\, \left(g\left(T_f-T_i\right)\right)^2\, \exp(-2 \Gamma_t t)\,\\ &&\;\;\;\;\; \left(N^G_t - \frac{n_F(m,T_f) T_f - n_F(m,T_i) T_i}{T_f - T_i}\right) \label{tpe5} \;.\end{eqnarray} Shown in Fig. 3 is the numerical solution for $N^G_t$ in comparison to the Boltzmann solution $N^B_t$ as well as the full quantum transport solution $N_t$. \section{Discussion and Conclusion} The comparison of the three methods to describe the relaxation problem of a quark--gluon plasma (QGP) shows that the full quantum transport equation results in a {\em much \/} slower equilibration process than the Boltzmann equation. This result is in agreement with other attempts to solve the quantum relaxation problem\cite{D84a,h93trans}: The quantum system exhibits a memory; it behaves in an essentially non-Markovian way. In particular, for the physical scenario studied here, the system ``remembers'' that it has been equilibrated some time ago. The relaxation {\em rate\/} then is very similar to the Boltzmann rate, but the system follows with a characteristic delay time. This delay time depends on the system parameters in a non-algebraic way, hence one may be subject to surprises for physical examples. In the present quantum transport example for the QGP, the time to reach 1-1/e${}^2\approx$ 86 \% of the equilibrium quark occupation number is almost doubled (14.7 fm/c as compared to 8.2 fm/c in the Boltzmann case). Thus it may be carefully stated that the question of the applicability of {\em standard\/} transport theory with quasi-particles needs further investigation: It might turn out that quantum effects (= memory as described in this contribution) substantially hinder the thermalization of a QGP over long time scales. One also finds that this result holds for instantaneous as well as fast ($\Lambda$ = 4 fm/c) heating of the bosonic background. Without elaboration at this point it may be stated that the inclusion of antiquarks into the spectral function does not change these figures substantially; it only leads to small oscillations of the relaxation rate around the value given in Fig. 1. The calculated numerical value of 14.7 fm/c for the thermalization time of slow quarks is certainly so large that the cooling of the bosonic background has to be taken into account for realistic estimates. Thus, however, one runs into the principal problem of non-equilibrium quantum field theory: the solution of time-dependent coupled equations for the Green's functions, which is hardly possible in any concrete case.
A way out of this dilemma might be offered by the generalized kinetic equation\cite{h94rep,h94gl3} (\ref{tpe1a}), which is related to the quantum transport equation as well as to the Boltzmann equation: It does not contain the convolutions over coordinate space that are hidden in the Schwinger-Dyson equation. However, it does contain the gradient approximation of standard transport theory -- and thus its applicability to the system studied here is questionable, since a step function in time certainly involves large gradients. The present comparison is therefore justified only through its results: The fact that the generalized transport equation does at least partially describe the memory effects in a quantum system (the characteristic time now is 11.4 fm/c) is encouraging. Applications of this transport equation to more complicated systems seem to be possible, at least in cases where one has previously used Boltzmann-like or Vlasov-like equations which also contain this gradient expansion to first order. As a more general remark at the end of this paper it might be added that the present results certainly demonstrate the importance of solving all three components of the matrix-valued Schwinger-Dyson equation on the same level of approximation. Using only a trivial approximation to the diagonal equations, i.e., replacing the spectral functions of the model by some ``mass-shell constraint'', is not justified for strongly interacting hot systems.
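For readers who wish to get a quick feeling for the time scales quoted above, the $\Lambda\rightarrow 0$ equations (\ref{k9c}), (\ref{tpe2}) and (\ref{tpe3}) can be integrated directly. The following is only an illustrative sketch, not the code behind Figs.\ 1--3: all function and variable names are ours, a vanishing chemical potential is assumed in $n_F$, and $\hbar c\approx 197.3$ MeV fm is used to convert rates to inverse fm/c.
\begin{verbatim}
import numpy as np

HBARC = 197.327                         # MeV fm
g, Ti, Tf, m = 0.12, 1.0, 200.0, 10.0   # parameters as in Fig. 1 (MeV)

def n_F(E, T):
    """Fermi-Dirac distribution; chemical potential set to zero."""
    return 1.0 / (np.exp(E / T) + 1.0)

def Gamma(t):
    """Imaginary part of the self energy, eq. (ss1), for instantaneous heating."""
    return g * (Ti if t < 0 else Tf)

def gamma(t):
    """Spectral width from the nonlinear eq. (k9c), by fixed-point iteration."""
    if t <= 0:
        return g * Ti
    x = g * Tf                          # start from the adiabatic value
    for _ in range(200):
        x = g * Ti + g * (Tf - Ti) * (1.0 - np.exp(-2.0 * x * t / HBARC))
    return x

def relax(width, t_max=30.0, dt=0.01):
    """Euler integration of dN/dt = -2*width(t)*(N - n_F(m, T_f)) for t > 0."""
    ts = np.arange(0.0, t_max, dt)
    N, out = n_F(m, Ti), []             # start in equilibrium with the cold gluon gas
    for t in ts:
        out.append(N)
        N += -2.0 * width(t) / HBARC * (N - n_F(m, Tf)) * dt
    return ts, np.array(out)

ts, N_B = relax(Gamma)                  # Boltzmann equation (tpe3)
ts, N_Q = relax(gamma)                  # quantum transport equation (tpe2)

target = n_F(m, Ti) + (1.0 - np.exp(-2.0)) * (n_F(m, Tf) - n_F(m, Ti))
for name, N in (("Boltzmann", N_B), ("quantum", N_Q)):
    print(name, "reaches 1-1/e^2 of equilibrium at about",
          round(float(ts[np.argmax(N >= target)]), 1), "fm/c")
\end{verbatim}
With these parameters the Boltzmann curve reaches the $1-1/{\rm e}^2$ level after roughly $\hbar c/(gT_f)\approx 8$ fm/c, while the quantum transport curve should come out delayed by several fm/c, in line with the 8.2 fm/c and 14.7 fm/c quoted in the discussion above.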
\section{Introduction} In the standard model of cosmology, the early universe is described by a homogeneous and isotropic Friedmann-Lema\^{\i}tre-Robertson-Walker model. Small linear metric perturbations are responsible both for the large-scale structure of the present-day universe and for the tiny deviations from isotropy of the cosmic microwave background. The corresponding linear perturbation theory was developed by Lifshitz \cite{Lifshitz} half a century ago, who found the solutions for the case where the energy-momentum tensor is that of a perfect fluid. (For recent reviews of further developments since then, see Refs.~\cite{Kodama} and \cite{Mukhanov}.) With more complicated forms of matter it is typically necessary to resort to numerical integrations \cite{Peebles70,McCone70,Peebles73,Bond83} of coupled Einstein-Boltzmann equations \cite{Ehlers,Stewart72}. In the case of collisionless matter, some analytic results have been obtained by Zakharov and Vishniak \cite{Zakharov}, but also there, the perturbation equations were eventually solved numerically \cite{McCone70,Bond83}. In Ref.~\cite{Kraemmer}, a novel framework for the study of cosmological perturbations has been developed which is based on thermal field theory. In this formalism the connection between the perturbed metric and the perturbed energy-momentum tensor is provided by the (thermal) gravitational polarization tensor. Concentrating on post-Planckian and post-inflationary epochs, we assume that $T\ll m_{\rm Planck}$ and therefore that the gravitational field can be treated as a classical gravitational background field. The momentum scale of cosmological perturbations is set by the inverse Hubble radius, $H\sim T^2/m_{\rm Planck}$, which is thus much smaller than the temperature. If the particles are furthermore ultrarelativistic, i.e., their masses are negligible when compared to the temperature, it is only the high-temperature limit of the gravitational polarization tensor which is needed to determine the response of the primordial plasma to metric perturbations. The leading high-temperature contributions to the gravitational polarization tensor, which were first calculated in Ref.~\cite{Rebhan91} (see also Ref.~\cite{ABFT}), describe collisionless ultrarelativistic matter. Using them to provide the right-hand side of the perturbed Einstein equations, one obtains self-consistent and manifestly gauge-invariant perturbation equations, for which exact, analytic solutions were found in Refs.~\cite{Kraemmer,Rebhan92a,Schwarz,Rebhan94}. In this case, one can show \cite{Rebhan94} that the perturbation equations are equivalent to a certain gauge-invariant reformulation of the Einstein-Vlasov equations \cite{Kasai}. In Ref.~\cite{NRS1} we have started to extend the thermal-field-theory approach to weakly self-interacting thermal matter, for which we have chosen scalar particles with quartic self-coupling. With the slight generalization to an $O(N)$-symmetric model, the Lagrangian is given by \begin{equation} \label{L} {\cal L}(x)=\sqrt{-g(x)}\left\{\textstyle{1\over2} g^{\mu\nu}\partial_\mu\phi \partial_\nu\phi - \textstyle{1\over2}\xi R \phi^2 - {3\over (N+2)}\lambda \phi^4 \right\}. \end{equation} With $\xi = 1/6$ this Lagrangian is conformally invariant. The precise value of $\xi$ does not enter our calculations, since curvature corrections to the energy-momentum tensor and its perturbations are suppressed by a factor $T^2/m^2_{\rm Planck}\ll1$. We assume that self-interactions are much more important than those curvature corrections, i.e.
$\lambda \gg T^2/m^2_{\rm Planck}$. The leading self-interaction effects show up as two-loop corrections to the gravitational polarization tensor (or, in particle physics terminology, to the thermal graviton self-energy). As is the case with its one-loop high-temperature limit, it satisfies a conformal Ward identity, which makes it possible to use momentum-space techniques to evaluate this nonlocal object completely by going first to flat space-time and then transforming to the curved background geometry of the cosmological model, which in virtually all cases of interest is conformally flat. The corrections to the perturbation equations of the collisionless (one-loop) case turn out to be such that it is still possible to solve these equations exactly in terms of rapidly converging power series. The resulting changes turned out to be perturbative on scales comparable to or larger than the Hubble horizon, whereas the large-time behaviour of subhorizon-sized perturbations becomes increasingly sensitive to formally higher order effects \cite{NRS1}. In the present paper, we complete the derivation of the effects proportional to the scalar self-coupling $\lambda$ and go on to include the next-to-next-to-leading terms, which are of order $\lambda^{3/2}$. To this order, we have calculated the gravitational polarization tensor in Ref.~\cite{NRS2}, which required the use of a resummed perturbation theory. Besides the necessity to resum the induced thermal masses ($\sim \sqrt{\lambda} T$) acquired by soft excitations in the scalar plasma, this also requires a resummation of nonlocal graviton-scalar vertices. These effects can be included systematically in our quantum-field-theoretic framework, while it is unclear how they could be taken into account in a kinetic-theory approach. Again, it turned out that the results for the gravitational polarization tensor which are proportional to $\lambda^{3/2}$ satisfy the conformal Ward identity, which allows us to transform them to curved space. Although the results are much more complicated than the previous ones at order $\lambda^1$, their structure is again such that the perturbation equations can be solved exactly. Since the gravitational polarization tensor also satisfies the Ward identities required by diffeomorphism invariance, we can continue to work with manifestly gauge invariant variables for metric perturbations. Following largely the notation of Bardeen \cite{Bardeen}, our gauge invariant set-up is laid down in Sect.~2 for the background geometry of the radiation-dominated spatially flat Einstein-de Sitter model. In Sect.~3, the solutions for scalar, vector, and tensor perturbations are constructed after allowing also for an arbitrary admixture of a perfect radiation fluid, with some of the calculational details relegated to the Appendix. As found already in Ref.~\cite{NRS1}, the corrections to the solutions caused by the scalar self-interactions are perturbative except when the horizon has grown to become much larger than the wavelength (of a Fourier mode) of the perturbation. In Sect.~4 we show that the behaviour of the perturbative series can be greatly improved by rewriting it in terms of a Pad\'e approximant. This can be tested in the limit $N\to\infty$ of our model (\ref{L}). Encouraged by these findings, we consider also the late-time behaviour of our solutions in Sect.~5. Sect.~6 summarizes our results.
\section{Gauge-invariant setup} We will consider a radiation dominated Einstein-de Sitter background \begin{equation} ds^2 = S^2(\tau) \left( -d\tau^2 + \delta_{ij}dx^i dx^j \right) \label{ds2} \end{equation} with $\tau$ being the conformal time which measures the size of the horizon in comoving coordinates. The overall evolution is determined by the cosmic scale factor $S(\tau )$ which satisfies the Friedmann equation \begin{equation} H^2 := \left({1\over S^2}{d S \over d \tau}\right)^2 = {8\pi G\over 3} \rho \ . \end{equation} The background energy density is related to the pressure by the equation of state $\rho = 3 P.$ In flat space-time the ring-resummed pressure of the ultrarelativistic matter described by (\ref{L}) reads \cite{Kapusta} \begin{equation} P = N {\pi^2 T^4 \over 90} \left(1 - {15\over8}{\lambda\over\pi^2} + {15\over 2} \left(\lambda\over\pi^2\right)^{\frac32} + O(\lambda^2) \right) \ . \end{equation} In the Einstein-de Sitter background the same expression holds true, but now with scale dependent temperature $T(S) = T(S\negthinspace =\negthinspace 1) S^{-1}$. The existence of a thermal equilibrium state for relativistic matter is guaranteed by a conformal timelike Killing vector field $ u^\mu / T$. The corresponding energy momentum tensor \begin{equation} T^{\mu}{}_{\nu} = P (4 u^{\mu}u_{\nu} + \delta^{\mu}_{\nu}) \ , \qquad u_{\mu} = S\delta_\mu^0 \ , \end{equation} is formally that of a perfect fluid and is traceless. This reflects the conformal symmetry of the effective action for ultrarelativistic plasmas. We make use of the gauge-invariant metric potentials and matter variables introduced by Bardeen \cite{Bardeen}. The linear perturbations may be split into scalar, vector, and tensor parts \cite{Lifshitz}. Since we work in spatially flat space-time all variables can be decomposed into plane waves with comoving wave number $k$. Then the linearized Einstein equations may be written in terms of the variable \begin{equation} \label{xdef} x := \tau k \ , \end{equation} which measures the number of (physical) half wave lengths inside the Hubble radius $H^{-1}$, i.e. \begin{equation} {x\over \pi} = H^{-1}\left/ \left(\lambda\over 2\right)\right. \ . \end{equation} In what follows it will act as both, a dimensionless time variable and a dimensionless measure of the size of perturbations. To model a two component universe composed of a relativistic plasma (RP) and a perfect fluid (PF), we introduce the mixing-factor \begin{equation} \alpha ={\rho_{\rm RP}\over \rho_{\rm RP} + \rho_{\rm PF}} \ . \end{equation} Gauge-invariant matter variables $X$ are then decomposed as \begin{equation} X = \alpha X_{\rm RP} + (1-\alpha) X_{\rm PF} \ , \end{equation} whereas gauge-invariant metric perturbations $Y$ are given by \begin{equation} Y = Y_{\rm RP} + Y_{\rm PF} \ . \end{equation} This split is possible since we shall consider only small linearized perturbations. \subsubsection{Scalar perturbations} The scalar or density perturbations obey the equations \begin{mathletters} \label{phipi} \begin{eqnarray} \label{em} {x^2\over 3}\Phi &=& \epsilon_m \\ \label{pit} x^2 \Pi &=& \pi_T^{(0)}\ , \end{eqnarray} \end{mathletters} with $\epsilon_m$ being the density contrast on hypersurfaces that represent everywhere the local restframe of matter, and $\pi_T^{(0)}$ is the anisotropic pressure. $\Phi$ and $\Pi$ are the metric potentials, related to Bardeen's definition by \begin{eqnarray*} \Phi_H &=& {1\over 2} \Phi \\ \Phi_A &=& - \Pi - {1\over 2} \Phi \ . 
\end{eqnarray*} For our purposes another variable for the density contrast turns out to be useful \begin{equation} \label{eg} \epsilon_g = \epsilon_m - \frac 4x v_s^{(0)} \ . \end{equation} The matter velocity $v_s^{(0)}$ (related to the amplitude of the shear \cite{Bardeen}) is given by the solution of \begin{equation} \label{scon} v_s^{\prime (0)} + {1\over x} v_s^{(0)} = {1\over 4} \left( \epsilon_m + \eta - {2\over 3} \pi_T^{(0)} \right) - {1\over 2} \Phi - \Pi. \end{equation} The quantity $\eta$ is the entropy perturbation ($\eta = 0$ corresponds to isentropic perturbations). It appears as source term in the trace of the perturbed Einstein equations \begin{mathletters} \label{scalar} \begin{equation} \label{trace} x^2\left( \Phi^{\prime\prime} + {4\over x} \Phi^{\prime} + {1\over 3}\Phi + {2\over x} \Pi^{\prime} - {2\over 3} \Pi \right) = - \eta \ , \end{equation} whereas in the 0-0 components we encounter the density contrast $\epsilon_g$ \begin{equation} \label{00} (x^2 + 3)\Phi + 3x\Phi^{\prime} + 6\Pi = 3\epsilon_g \ . \end{equation} \end{mathletters} Here and in the following a prime denotes the derivative with respect to $x$. \subsubsection{Vector perturbations} For vector (rotational) perturbations \begin{equation} \label{vector} {x^2\over 8} \Psi = v_c \end{equation} relates the metric (frame dragging) potential $\Psi$ to the matter velocity $v_c$ (relative to the normal of constant-$\tau$ hypersurfaces), which is proportional to the amplitude of the vorticity \cite{Bardeen}. The anisotropic pressure is given through \begin{equation} \label{vcon} v_c^{\prime} = - {1\over 8} \pi_T^{(1)}. \end{equation} \subsubsection{Tensor perturbations} Tensor perturbations describe the propagation of gravitational waves, their evolution equation has the anisotropic pressure as source term \begin{equation} \label{tensor} x^2 \left( H^{\prime\prime} + {2\over x} H^{\prime} + H \right) = \pi_T^{(2)} \ . \end{equation} \section{Perturbative thermal field theory} \subsection{Fluctuations from thermal $\lambda \phi^4$ theory} To obtain the expressions for the matter variables we use the thermal-field-theoretic approach \cite{Kraemmer} instead of the usual kinetic approach. The perturbation of the energy momentum tensor \begin{eqnarray} \delta T_{\ \nu}^{\mu}(x) &=& \int_{x^\prime} {\delta T^{\mu}_{\ \nu}(x)\over \delta g_{\alpha\beta}(x^{\prime})} \delta g_{\alpha\beta}(x^{\prime}) \nonumber \\ \label{delT} &=& {2\over \sqrt{-g(x)}} \int_{x^\prime} \Pi^{\mu\ \alpha\beta}_{\ \nu}(x,x^\prime) \delta g_{\alpha\beta}(x^{\prime}) - \left[\frac 12 T_{\ \nu}^{\mu} g^{\alpha\beta} + T^{\mu\alpha} \delta^{\beta}_{\nu} \right](x) \delta g_{\alpha\beta}(x) \end{eqnarray} is related to the gravitational polarization tensor \begin{equation} \Pi^{\mu\nu\alpha\beta}(x,x^\prime)\equiv {\delta^2\Gamma\over\delta g_{\mu\nu}(x)\delta g_{\alpha\beta}(x^\prime)} = \frac 12 {\delta \left( \sqrt{-g(x)}T^{\mu\nu}(x)\right)\over \delta g_{\alpha\beta}(x^\prime)} \ . \end{equation} In the high-temperature domain the effective action is conformally invariant \cite{NRS2,Rebhan91}, i.e. $\Gamma[S^2 g_{\mu\nu}] = \Gamma[g_{\mu\nu}]$, therefore the polarization tensor in the conformally flat Einstein-de Sitter background reads \begin{equation} \left. \Pi^{\mu\nu\rho\sigma}(x,x^{\prime}) \right|_{g=S^2\eta} = S^{-2}(\tau) \int {d^4 k \over (2\pi)^4} e^{\imath k (x - x^{\prime})} \left. \tilde{\Pi}^{\mu\nu\rho\sigma}(k) \right|_{\eta} S^{-2}(\tau^{\prime}) \ . 
\end{equation} Additionally we have performed a Fourier transformation to momentum space, where standard thermal-field-theoretic methods apply. Due to the conformal invariance and the invariance under general coordinate transformations, the gravitational polarization tensor has only three independent components. We choose them to read \begin{equation} A(Q)\equiv\tilde\Pi^{0000}(Q)/\rho,\quad B(Q)\equiv\tilde\Pi^{0\mu}{}_\mu{}^0(Q)/\rho,\quad C(Q)\equiv\tilde\Pi^{\mu\nu}{}_{\mu\nu}(Q)/\rho. \end{equation} The explicit expressions for $A, B$, and $C$ have been calculated in \cite{NRS2} to ${\cal O}(\lambda^{\frac32}) $. The contributions through order $\lambda$ consist of ``hard thermal loops'', i.e., diagrams that are dominated by hard loop momenta. These correspond to thermal fluctuations and could have been calculated by kinetic theory as well (apart from the collision term, which cannot be derived from first principles in that framework). The order $\lambda^{\frac32}$ stems from the necessary resummation \`a la Braaten-Pisarski \cite{BP}, where propagators and vertices of the scalar particles have to be dressed in order to avoid infra-red singularities. The resummation involves a particular (infinite) subclass of Feynman diagrams of ordinary perturbation theory, which presumably lies far beyond the scope of kinetic theory. $A,B,C$ and their Fourier transforms are listed in Appendix A. \subsubsection{Scalar perturbations} From (\ref{delT}) and the definition of the gauge-invariant matter variables \cite{Bardeen} it follows that \begin{mathletters} \label{mv} \begin{eqnarray} \eta_{\rm RP} &=& 0 \;,\\ \epsilon_{g\ \rm RP} &=& - 2\Phi + 4 {\cal F}[A - \frac14]*(\Phi + \Pi) \;,\\ v_{s\ \rm RP}^{(0)} &=& 3\imath {\cal F}[\omega (A - \frac14)]*(\Phi + \Pi)\;, \\ \pi_{T\ \rm RP}^{(0)} &=& - 18 {\cal F}[(\omega^2 - \frac13) (A - \frac14) + \frac13]*(\Phi + \Pi) \;. \end{eqnarray} \end{mathletters} The $*$ denotes the convolution $(g * f) (x) = \int^x dx^\prime g(x-x^\prime) f(x^\prime)$ and the operator ${\cal F}$ defines the Fourier transformation \begin{equation} {\cal F}[g](x) = \lim_{\gamma \to 0^+} {1\over 2\pi} \int_{-\infty +\imath \gamma}^{\infty + \imath \gamma} d \omega e^{-\imath \omega x } g(\omega) \ . \end{equation} The particular choice of the integration contour corresponds to retarded boundary conditions. After performing the Fourier transformations (Appendix A) the matter variables are written in terms of the integral kernel \begin{eqnarray} K^{(0)}(x) = &\biggl[& j_0(x) + {5 \lambda \over 8 \pi^2} \left(2\kappa^\prime - j_0 -\cos\right)(x) \nonumber \\ && + {15 \over 8} \left(\lambda\over \pi^2\right)^{\frac32} \left(\cos + \frac x3 J_{-1} + 2 J_0 + \frac 1x J_1 + \frac 43 j_2 - \frac 23 j_0 \right. \nonumber \\ \label{K0} &&\qquad \left. - 2 \nu^{\prime\prime} - 4 \nu - \kappa^\prime - \kappa^{\prime\prime\prime} - \frac 16 \xi \right)(x) \biggr] \ , \end{eqnarray} according to \begin{mathletters} \begin{eqnarray} \epsilon_{g\ \rm RP} &=& 2\Phi+4\Pi-4 K^{(0)}*(\Phi + \Pi)^\prime \\ \label{vsk} v_{s\ \rm RP}^{(0)} &=& 3 K^{(0)\,\prime}*(\Phi + \Pi)^\prime \\ \label{pitk} \pi_{T\ \rm RP}^{(0)} &=& - 6 (K^{(0)} + 3 K^{(0)\,\prime\prime})*(\Phi + \Pi)^\prime \end{eqnarray} \end{mathletters} At the origin this kernel behaves like \begin{equation} K^{(0)}(0^+) = 1, \qquad K^{(0)\prime}(0^+) = 0 \ . \end{equation} The scalar matter variables (\ref{mv}) couple only via the sum of the metric potentials $\Phi + \Pi \equiv \Phi_N$ to the nontrivial component $A$ of the polarization tensor.
$\Phi_N$ is the only scalar contribution to the electric part of the Weyl curvature tensor \cite{Ellis} and can be interpreted as the generalization of the Newtonian gravitational potential. There is no scalar contribution to the magnetic part of the Weyl tensor. Were it not for the conformal symmetry of the effective action, the additional components of the polarization tensor would also couple to $\Pi$, the potential for the anisotropic pressure. For vanishing entropy perturbations, Eq. (\ref{trace}) can be integrated for $\Phi$, \begin{eqnarray} \Phi (x) &=& - \frac{2}{3 x} \int^x dx^{\prime} \left(x^\prime \cos ( x-x^{\prime} ) + 4 \sin (x - x^{\prime} ) \right) \Phi_N^{\prime} (x^{\prime} ) \nonumber \\ \label{intPhi} & & + \left. \frac{2}{3x} \left(x^\prime \cos (x-x^{\prime}) + \sin (x - x^{\prime}) \right) \Phi_N (x^{\prime} )\right|^x + C_1 \frac{\sin (x)}{x} + C_2 \frac{\cos (x)}{x} \end{eqnarray} where $C_1,C_2$ are integration constants. Together with Eqs.\ (\ref{pit}) and (\ref{pitk}) we can also obtain a single equation determining $\Phi_N$ (see Eq.~(\ref{PhiN})). \subsubsection{Vector perturbations} In a similar way \begin{equation} \label{vect} v_{c\ \rm RP} = -\Psi + \frac32 {\cal F}[(\omega^2 -1)(A - \frac14) - B]*\Psi \end{equation} is derived. With \begin{equation} \label{K1} K^{(1)}(x) = \biggl[ - \frac13 (1+\frac{15 \lambda}{8 \pi^2 } - \frac{15 \lambda^{3/2} }{2 \pi^3 }) (j_0 + j_2)(x) + {5 \lambda \over 8 \pi^2} j_0(x) - {15\over 8}\left(\lambda\over \pi^2\right)^{\frac 32} \left( \frac13 J_0 + j_0 \right) (x) \biggr] \end{equation} and \begin{equation} K^{(1)}(0^+) = - \frac13 \ , \qquad K^{(1)\prime}(0^+) = 0 \ , \end{equation} the velocity amplitude is given by \begin{equation} \label{vk} v_{c\ \rm RP} = - 3 ( K^{(1)\,\prime}) * \Psi \ . \end{equation} \subsubsection{Tensor perturbations} The evaluation of (\ref{delT}) for the anisotropic pressure leads to \begin{eqnarray} \label{tk} \pi_{T\ \rm RP}^{(2)} &=& 3 {\cal F}[(\omega^2 - 1)^2 (A - \frac14) - 4 (\omega^2 -1) B + 2 C - {11\omega^2\over 3} + 3] * H \nonumber\\ &=& 3 ( K^{(2)}) * H^\prime \end{eqnarray} with the kernel \begin{equation} \label{K2} K^{(2)}(x) = \biggl[ - 8 {j_2(x)\over x^2} (1+\frac{15 \lambda}{8 \pi^2 } - \frac{15 \lambda^{3/2} }{2 \pi^3 } ) + {5 \lambda \over \pi^2}{j_1(x)\over x} - 5 \left(\lambda\over \pi^2\right)^{\frac 32} \left(\frac{J_1}{x} + j_0 + j_2\right)(x) \biggr] . \end{equation} \subsection{Initial conditions} To specify the dynamical equations for cosmological perturbations completely, initial conditions for the metric potentials are needed. We fix them in the limit $x_0\to 0$, because this will allow us to derive exact solutions of the integro-differential equations in terms of generalized power series \cite{Rebhan94}. The initial conditions are given by a set of numbers $\gamma^{(a)}_n$. These are related to the $n$-th moments of the particle distribution function \cite{Rebhan94}. They show up in the convolution integrals because the lower integration boundary $x_0$ truncates the support of the metric potentials to a half space; this truncation is implemented in the convolution integrals by the replacement \begin{equation} \label{ic} Y^\prime(x) \to Y^\prime(x)\theta(x-x_0) + \sum_{n=0}^{\infty} \gamma^{(a)}_n \delta^{(n)}(x-x_0) \ , \end{equation} where $Y$ stands for $\Phi_N , \Psi$, and $H$, respectively.
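For later use, note how a single term of the sum in (\ref{ic}) acts under the convolutions defined above (with the $\theta$-function setting the lower integration limit to $x_0$):
\[
\int dx^\prime \, K^{(a)}(x-x^\prime)\,\gamma^{(a)}_n\,\delta^{(n)}(x^\prime-x_0)
= \gamma^{(a)}_n\,(-1)^n\,\frac{\partial^n}{\partial x^{\prime\,n}}\,K^{(a)}(x-x^\prime)\bigg|_{x^\prime=x_0}
= \gamma^{(a)}_n\,K^{(a)\,(n)}(x-x_0)\ ,
\]
so that in the limit $x\to x_0^+$ each convolution produces the kernel derivatives $K^{(a)(n)}(0^+)$ that enter the regularity conditions derived below.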
As is well known from simple forms of matter like perfect fluids \cite{Lifshitz} and from work by Zakharov and Vishniac \cite{Zakharov} on collisionless matter, there are two branches of solutions, regular and singular ones. This is also the case for self-interacting plasmas. We will concentrate on regular solutions in our evaluations, but will nevertheless sketch the behaviour of the singular solutions below. The singular solutions necessarily violate the assumption of a Friedmannian singularity, but they are relevant when fitting the evolution of cosmological perturbations to a previous epoch at some small nonvanishing value of $\tau$ \cite{Grishchuk,Deruelle}. A detailed discussion of singular solutions for collisionless matter is given in \cite{Rebhan94}. Due to geometrical effects \cite{Zakharov}, the singular solutions permit superhorizon oscillations. This effect depends on the ratio $\alpha$ only, and is not at all sensitive to the $\gamma_n^{(a)}$. Only the normalization of the singular part of the solutions has to be fixed by an initial condition. The regular solutions are determined by the $\gamma_n^{(a)}$. We will restrict our attention to isentropic (adiabatic) perturbations, which may be left over from an earlier inflationary epoch. We will neglect all $\gamma_n^{(a)}$ with $n>2$. This is motivated by the findings of \cite{Rebhan94} that these quantities are related to the higher moments of the kinetic distribution function. The restriction to $n\le2$ means that we fix the moments which directly occur in the energy-momentum tensor and set all others to zero initially. \subsubsection{Scalar perturbations} From (\ref{phipi}) and (\ref{scon}) the small $x$ behaviour of the matter variables \begin{eqnarray} \epsilon_m &\sim& x^2 \Phi(0)\nonumber \\ v_s^{(0)} &\sim& x (\Phi(0)+2\Pi(0))\nonumber \\ \pi_T^{(0)} &\sim& x^2 \Pi(0)\nonumber \end{eqnarray} follows. Therefore, $v_s^{(0)}(0^+) = 0$ and $\pi_T^{(0)}(0^+) = 0$ for regular solutions. This means that the right-hand sides of (\ref{vsk}) and (\ref{pitk}) have to vanish there as well after performing the replacement (\ref{ic}), i.e., \begin{eqnarray} \sum_{n=0}^\infty \gamma_n^{(0)} K^{(0)(n+1)}(0^+) &=& 0 \\ \sum_{n=0}^\infty \gamma_n^{(0)} (K^{(0)} + 3 K^{(0)\prime\prime})^{(n)}(0^+) &=& 0 \ . \end{eqnarray} Due to \begin{equation} \label{intc} K^{(0) \prime}(0^+) = (K^{(0)} + 3 K^{(0)\prime\prime})(0^+) = 0 \end{equation} we are free to choose any $\gamma_0^{(0)}$. It is remarkable that these relations hold for collisionless matter and are not changed by the addition of weak self-interactions. A completely arbitrary choice of the $\gamma_n^{(0)}$ is not possible. In the following we choose $\gamma_0^{(0)} \neq 0$ and all higher $\gamma_n^{(0)}$ vanishing. This corresponds to the usual choice for isentropic initial conditions, e.g. \cite{Schaefer}. In the above-mentioned equation for $\Phi_N$ the arbitrary constants $C_1$ and $C_2$ remain to be fixed. Initial conditions are included by the replacement (\ref{ic}) which yields, together with Eqs.
(\ref{intPhi},\ref{pit},\ref{pitk}), the integral equation for $\Phi_N$ \begin{eqnarray} \Phi_N(x)= &&- \frac{2}{x} \int_0^{x} dx^{\prime} \left( x^\prime \cos (x-x^{\prime} ) + 4 \sin (x - x^{\prime} ) \right) \Phi_N^{\prime} (x^{\prime} ) \nonumber \\ \label{PhiN} &&- \frac{18 \alpha}{x^2} \int_0^x d x^\prime ( K^{(0)} + 3 K^{(0)\,\prime\prime})(x-x^\prime) \Phi_N^\prime (x^\prime) + \mbox{i.c.} \end{eqnarray} with the initial conditions \begin{eqnarray} \mbox{i.c.} = && \frac{ \sin x}{ x} \left(3 C_1 - 2 \Phi_N(0) + 4 \sum_{n=0}^\infty \gamma_{2n} (-)^n (n-2) \right) + \frac{ \cos x}{x} \left(3 C_2 +2 \sum_{n=0}^\infty \gamma_{2n+1} (-)^n (2 n-3) \right) \nonumber \\ &&- \frac{18 \alpha}{x^2} \sum_{n=0}^\infty \gamma_n \left( K^{(0)} + 3 K^{(0)\,\prime\prime} \right)^{(n)}(x). \end{eqnarray} For regular solutions, $C_2$ is determined by requiring that the term in the second round brackets vanishes, and $C_1$ is related to the initial value $\Phi_N(0)$ by the constant term in the series in $x$. The metric perturbation $\Phi_{\rm RP}$ is given by \begin{equation} \label{phiRP} \Phi_{\rm RP} = {3 \alpha \over x^2} \left( \epsilon_{g\ \rm RP} + \frac 4x v_{s\ \rm RP}^{(0)}\right) \ . \end{equation} The regularity requirement establishes a relation between $\Phi_N(0), \Pi(0)$ and the $\gamma_{2n}^{(0)}$ coefficients. Using this relation together with the expressions for $\Phi_N(0)$ and $\Pi(0)$, $C_1$ may be expressed as a sum of coefficients $\gamma_{2n}^{(0)}$ (see Appendix B). \subsubsection{Vector perturbations} From (\ref{vcon}) a constant $v_{c \rm\ PF}$ follows, which is a consequence of the Kelvin-Helmholtz theorem. For a perfect fluid alone, this forbids regular solutions altogether, because it entails a singular frame-dragging potential $\Psi$. This situation changes when collisionless or weakly self-interacting matter is added (a discussion of potential cosmological consequences like primordial magnetic fields can be found in \cite{Rebhan92b}). One can have a regular vector perturbation sustained by the relativistic plasma alone, or one can compensate a primordial vorticity in the perfect fluid component, which has constant $v_{c \rm\ PF}$, by a nonvanishing vorticity of opposite sign in the relativistic plasma; a growing net vorticity is then generated by the nontrivial evolution of the latter. Eq.~(\ref{vector}) implies for small $x$ \begin{equation} v_c = \alpha v_{c\rm\ RP} + (1-\alpha)v_{c \rm\ PF} \sim x^2 \ . \end{equation} Therefore, (\ref{vk}) together with (\ref{ic}) leads to \begin{equation} \label{vPF} v_{c \rm\ PF} = \frac{ 3 \alpha}{1-\alpha} \sum_{n=0}^\infty \gamma_n^{(1)} K^{(1)(n)}(0^+) \ . \end{equation} As can be seen from the above formula, in the absence of a primordial perfect fluid vorticity one has to have nonvanishing coefficients $\gamma_n^{(1)}$, $n\ge1$, for nontrivial regular solutions. In the presence of a perfect fluid component, we shall restrict ourselves to nonzero $\gamma_0^{(1)}$ and vanishing higher coefficients. \subsubsection{Tensor perturbations} Eqs. (\ref{tensor}) and (\ref{tk}) with (\ref{ic}) lead to \begin{equation} \sum_{n=0}^\infty \gamma_n^{(2)} K^{(2)(n)}(0^+) = 0 \ . \end{equation} Because tensor perturbations correspond to gravitational waves, there are nontrivial solutions even when all $\gamma_n^{(2)}$ vanish. Then the initial conditions are specified by the amplitude $H(0)$ and its derivative $H^\prime(0)$. This will be our choice in what follows.
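The statement that the relations (\ref{intc}) hold already for collisionless matter can be checked directly: in the limit $\lambda\to0$ the scalar kernel reduces to $K^{(0)}(x)=j_0(x)$, the leading term of Eq.~(\ref{K0}). A minimal numerical sketch (Python with NumPy assumed; the test source in the last line is purely hypothetical) verifying the values quoted at the origin and illustrating the retarded convolution:
\begin{verbatim}
# Sketch: collisionless limit K0(x) = j0(x) of the scalar kernel, checked
# against K0(0+) = 1, K0'(0+) = 0 and (K0 + 3 K0'')(0+) = 0, plus the
# retarded convolution (g*f)(x) = int_0^x dx' g(x-x') f(x')  (x0 = 0 here).
import numpy as np

def j0(x):
    # spherical Bessel function j0(x) = sin(x)/x, with the x -> 0 limit built in
    return np.sinc(np.asarray(x, dtype=float) / np.pi)

def deriv(f, x, h=1e-4, order=1):
    # simple central finite differences, good enough for this check
    if order == 1:
        return (f(x + h) - f(x - h)) / (2.0 * h)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def convolve_retarded(g, f, x, n=4001):
    # trapezoidal rule for (g*f)(x) with lower limit 0
    xp = np.linspace(0.0, x, n)
    w = np.ones(n); w[0] = w[-1] = 0.5
    return (x / (n - 1)) * np.sum(w * g(x - xp) * f(xp))

print(j0(0.0))                                   # K0(0+)  = 1
print(deriv(j0, 0.0, order=1))                   # K0'(0+) = 0
print(j0(0.0) + 3.0 * deriv(j0, 0.0, order=2))   # (K0 + 3 K0'')(0+) ~ 0

# example: convolve the kernel with a hypothetical source f(x') = x'
print(convolve_retarded(j0, lambda xp: xp, 5.0))
\end{verbatim}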
\subsection{Solutions through $O(\lambda^{3/2})$} The singular solutions to the equations for the dynamics of cosmological perturbations are obtained with the Ansatz \begin{equation} Y^{(a)}(x) = x^\sigma \bar{Y}^{(a)}(x) \end{equation} with $\bar{Y}^{(a)}(0^+) \neq 0$ and finite ($a=0,1,2$). This yields \begin{equation} \sigma = -\frac52 + a \pm \frac12 \sqrt{1 - {\alpha\over \alpha_{\rm crit.}(\lambda)}} \ . \end{equation} Therefore, for $\alpha$ greater than \begin{equation}\label{acrit32} \alpha_{\rm crit.} (\lambda) = \frac5{32} \left( 1 - \frac54 {\lambda\over \pi^2} + {105\over 16} \left(\lambda\over \pi^2\right)^{\frac32} \right)^{-1} \end{equation} $\sigma$ takes complex values, and thus gives rise to superhorizon oscillations $\sim\cos([{\mathrm Im}\sigma]\ln x)$. For small $\lambda > 0$ the value of $\alpha_{\rm crit.}$ is increased. This is as one may expect, since in the collision-dominated case of a perfect fluid superhorizon oscillations do not occur. For larger values of $\lambda$, $\lambda \ge 8\pi^2/63\approx 1.25$, this trend is reversed, but there the perturbative series can no longer be trusted, because the $\lambda^{3/2}$ correction begins to dominate over the $\lambda^1$-term. Turning now to the regular solutions, the power series Ansatz detailed in Appendix B is made. This leads to recursion relations for its coefficients, which can be solved as also described in the Appendix. These have been evaluated with a {\em Mathematica} code \cite{Wolfram}. On superhorizon scales the effects of the self-interactions on the regular solutions are small. E.g., for scalar perturbations with $\alpha = 1$ and the above initial conditions, \begin{equation}\label{pioeps} {\pi_T^{(0)}\over \epsilon_m^{(0)}}(0) = - \frac 37 \left( 1 - {25\over 28} {\lambda\over\pi^2} + {75\over 16} \left(\lambda\over \pi^2\right)^{\frac32} + O(\lambda^2)\right) \end{equation} i.e., the amplitude of the anisotropic pressure is reduced by the collisions for values of $\lambda$ smaller than $ (4\pi/21)^2 \sim 0.36$. Since for tightly coupled plasmas we expect the behaviour of a perfect fluid, i.e. the above ratio should be zero, we conclude that the perturbative result through order $\lambda^{3/2}$ can be trusted only for $\lambda < 0.36$. Some solutions for the density contrast in the subhorizon region are shown in Fig.~\ref{f1}. The three lines show the behaviour of $|\epsilon_{m\ {\rm RP}}(x)|$ for an ultrarelativistic plasma alone. We plotted solutions for $\lambda = 0,1/2$, and $1$. The collisionless solution (full line) decays due to directional dispersion \cite{Boerner}. The phase velocity of the oscillations is the speed of light. For very small values of $\lambda$ no visible effect occurs in the plotted region, whereas for bigger values a considerable change in the damping behaviour and in the phase velocity is observed. For $\lambda =1$ (dotted line) the density contrast starts to grow again at $x/\pi \sim 6$, which is in fact associated with rather spurious beats. Also for the smaller value of $\lambda=1/2$ such a behaviour arises for large enough $x$. This will be analysed further in the next section. \begin{figure} \centerline{ \epsfbox{f1.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|\epsilon_{m\ {\rm RP}}(x)|$} \put(8,0){$x/\pi$} \end{picture} \\[30pt] \caption{\label{f1} The full line shows the density contrast $|\epsilon_{m\ {\rm RP}}|$ for collisionless matter in the subhorizon region.
The dashed and dotted solutions show the effect of collisional matter through $O(\lambda^{3/2})$ with $\lambda = 1/2$ and $1$ respectively. } \end{figure} \section{Pad\'e improvement} As we have seen, in order to have sizable effects from the self-interactions of the thermal matter, we have to adopt sizable values of $\lambda$. However, the perturbative results quickly become unreliable with increasing $\lambda$. In order to get a somewhat more quantitative idea of the problem and of ways to improve this situation, we first inspect a solvable case. \subsection{The thermal mass in the limit $N\to\infty$} It is well-known\cite{DolanJackiw} that in the limit $N\to\infty$ the model of Eq.~(\ref{L}) becomes exactly solvable. In this limit, the thermal mass of the scalars is exactly given by the resummed one-loop gap equation \begin{equation} \label{mgap} m^2/T^2={6\lambda\over\pi^2}\int_{m/T}^\infty dx \, { \sqrt{x^2-m^2/T^2} \over e^x-1} . \end{equation} The first two terms of the perturbative series \begin{equation} \label{mpert} m^2/T^2=\lambda-\frac3\pi \lambda^{3/2}+O(\lambda^2\ln\lambda) \end{equation} are naturally a good approximation for very small $\lambda$, but apparently breaks down when $\lambda\approx1$. Indeed, for $\lambda=1$, Eq.~(\ref{mpert}) gives $m^2/T^2\approx 0.05$, which is an order of magnitude too small when compared with the result following from Eq.~(\ref{mgap}), which yields $m^2/T^2\approx 0.53$. However, rewriting Eq.~(\ref{mpert}) in the perturbatively equivalent way of \begin{equation} \label{mpade} m^2/T^2= {\lambda\over1+ \frac3\pi \lambda^{1/2} }+O(\lambda^2\ln\lambda) \end{equation} considerably extends the range of $\lambda$ over which the first two terms of the perturbative series give a faithful picture of the actual behaviour. For $\lambda=1$, Eq.~(\ref{mpade}) yields $m^2/T^2\approx 0.51$, which is only a few percent too small. \begin{figure} \centerline{ \epsfxsize=4in \epsfbox[68 240 540 560]{lam2.ps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(1,5.3){$m^2/T^2$} \put(13.5,-0.5){$\lambda$} \end{picture} \\[30pt] \caption{\label{lam2} $m^2/T^2$ as a function of $\lambda$ in the exactly solvable case of $N\to\infty$ (full line). The long-dashed curve is the perturbative result (4.2); the short-dashed one the Pad\'e-improved (4.3). } \end{figure} Of course, for still larger values of $\lambda$, also Eq.~(\ref{mpade}) becomes increasingly imprecise, but the improvement over Eq.~(\ref{mpert}) remains striking. What we have done by going from Eq.~(\ref{mpert}) to Eq.~(\ref{mpade}) can be viewed as replacing the first terms of a power series in $\sqrt{\lambda}$ \begin{equation} X=a+b \lambda^{1/2}+c\lambda+d\lambda^{3/2}+\ldots \end{equation} by its (2,1)-Pad\'e approximant (again in powers of $\sqrt{\lambda}$)\cite{Pade} \begin{equation} X={\alpha+\beta\lambda^{1/2}+\gamma\lambda \over 1+\delta\lambda^{1/2}} +\ldots \end{equation} In the case of the thermal mass, $a=0$ since we have no tree-level mass to start with, and $b=0$ since the plasmon effect is down by one-half order with respect to the perturbative one-loop one. Extending this procedure to other quantities, we thus still have $b=0$, so that the results through order $\lambda^{3/2}$ determine the four parameters of the corresponding (2,1)-Pad\'e approximants, \begin{equation} \label{abcd} \alpha=a,\quad \beta=-ad/c,\quad \gamma=c,\quad \delta=-d/c . 
\end{equation} \subsection{The resummed 1-loop kernel} In our applications, the accuracy of the perturbative result will not only deteriorate when $\lambda$ is increased, but also when $x$ becomes too large, as we have seen in Fig.~\ref{f1}. This is so because the asymptotic behaviour of the Fourier transforms (\ref{K0},\ref{K1},\ref{K2}) with $x\to\infty$ is such that for any small but finite value of $\lambda$ the correction of order $\lambda^{3/2}$ in $K^{(a)}$ eventually overtakes the $\lambda^1$ term which in turn overtakes the lowest order term. Apparently, this signals a breakdown of perturbation theory at large $x$. The origin of the difference in the asymptotic behaviour in $x$ comes from an increasingly singular behaviour of the discontinuity of the functions $A$, $B$, and $C$ at higher orders in $\lambda$. For example, from (\ref{Ar}) one notices that at lowest order the discontinuity of $A$ across the branch cut between $\omega=\pm1$ is a constant; at order $\lambda^1$ it is logarithmically singular at $\omega=\pm1$ and in addition, there are now simple poles at these points; at order $\lambda^{3/2}$ the singularities are still worse. Since these singularities occur at the end points of the integration region contributing to the Fourier transform, they become dominant for the large $x$ behaviour of the latter. On the other hand, these singularities at the light-cone should not exist at all since the originally massless scalars have acquired thermal masses. Indeed, keeping the thermal masses in the integrals without expanding them on account of them being proportional to $\lambda$, shows that the complete discontinuity is smooth at $\omega=\pm1$ and that also the simple poles there are spurious \cite{NRS1}. For instance, the one-loop contribution to $K^{(0)}(x)=j_0(x)$ is modified by including the thermal mass in the scalar propagators according to \begin{eqnarray}\label{R18} K_1^{\rm res.}(x)= - && \int_{-1}^1 d\omega e^{-i\omega x}\\ \nonumber \times \int_{m/\sqrt{1-\omega^2}}^\infty && dp\,p^4 {d\over dp}\left({ 1\over \exp (p/T) -1 } \right) \left/ \left( {8\pi^4 T^4\over15} \right) \right. \end{eqnarray} up to terms whose amplitude is suppressed by explicit powers of $\lambda$. With a non-zero $m=\sqrt{\lambda}T$, the integrand is now seen to vanish at $\omega=\pm1$. Instead of being a constant, it vanishes at the endpoints of the branch cut together with all its derivatives, but rapidly recovers the bare one-loop value away from the light-cone. This is a negligible effect for small $x$, but the large $x$ behavior is changed completely. Eq.\ (\ref{R18}) can be evaluated by a Mellin transform\cite{Dav} which yields \begin{eqnarray}\label{R19} &&K_1^{\rm res.}(x)= {\sqrt\pi\over2}{15\over 8 \pi^4} \times\\ \nonumber &&\sum_{k=0}^\infty \lambda^{k/2} {(-1)^{1+k}(k-4)\zeta(k-3)\over (2\pi)^{k-4} \Gamma(1+k/2)} \left(2\over x\right)^{{( 1-k)/ 2}} J_{{ (1-k )/ 2}}(x) , \end{eqnarray} where in the term with $k=4$ one has to substitute $(k-4)\zeta(k-3)\to1$. For small $x$, \begin{equation}\label{K1rexp1} K_1^{\rm res.}(x) = \left( j_0(x)-{5\lambda\over 8\pi^2}\cos(x) \right) +O(\lambda^{3/2}) \end{equation} is a good approximation; for $x\gg1$, on the other hand, the complete function $K_1^{\rm res.}(x)$ turns out to decay even faster than $j_0(x)$, oscillating with a reduced phase velocity \begin{equation} v=1-{5\lambda\over 8\pi^2}+O(\lambda^{3/2}). 
\end{equation} A better approximation is thus obtained by modifying $$ K^{(0)}_{1}(x) \propto j_0(x) \to j_0((1-\frac{5\lambda}{8\pi^2})x). $$ This indeed resums and thus removes the terms of order $\lambda$ in $K^{(0)}$ which are the most dominant for large $x$, leaving only terms which are merely logarithmically larger than $K^{(0)}_1$. However, the kernel $K^{(0)}$ has to satisfy (\ref{intc}) in order that the equation for scalar cosmological perturbations remains integrable. In Ref.~\cite{NRS1} we have found that this can be fulfilled by adjusting the order $\lambda$ term in $K^{(0)}$ so that it contains the same phase velocity as the modified $K^{(0)}_1$ and correcting the prefactors without modifying the lowest orders in $\lambda$. However, it is difficult to see how the effect of the higher terms in Eq.~(\ref{R19}) can be accounted for by similarly simple modifications. The functions that come with higher powers of $\lambda$ are in fact more and more {\em divergent} with $x$, and only the infinite sum is a decaying and regularly oscillating function --- any truncation results in a function that oscillates with interferences before exploding eventually. Unfortunately, we are able to compute only the first few terms of the contributions to the kernel, Eq.~(\ref{R18}) being only one particularly simple contribution. In order to improve the behaviour of our results at large values of $x$, we propose to use the same Pad\'e approximation that worked rather well above. The most obvious way of doing this would be to turn the coefficients in Eq.~(\ref{abcd}) into functions of $x$. However, this turns out to violate the integrability constraints on $K^{(a)}$. A method which manifestly respects the latter is to perform a Pad\'e improvement on each of the Taylor coefficients of $K^{(a)}$ as a function of $x$. In order to test this procedure, we have applied it to the first three terms of the expanded kernel (\ref{R19}). With $\lambda=1$, the result is shown in Fig.~\ref{padek}. There the full line gives the complete function (\ref{R18}) and the long-dashed line shows the truncated series through $\lambda^{3/2}$. The latter deviates quickly from the complete function and after a few oscillations is completely off. The Pad\'e improved function, where each term in the power series in $x$ is replaced by a (2,1)-Pad\'e approximant, behaves instead much more regularly and is quite close to the complete function for all values of $x$. \begin{figure} \centerline{ \epsfxsize=5in \epsfbox[68 230 540 560]{padek.ps} } \vspace{-6cm} \unitlength1cm \begin{picture}(16,5) \put(0,7.2){$K_1^{\rm res.}(x)$} \put(14.5,3){$x/\pi$} \end{picture} \\[30pt] \caption{\label{padek} The function $K_1^{\rm res.}(x)$ (full line) for $\lambda=1$ and two perturbative approximations: The long-dashed line gives the first three terms of its perturbative expansion; the short-dashed one its Pad{\'e} improved version. } \end{figure} We therefore expect that the analogous procedure for improving the kernels we have obtained perturbatively up to and including order $\lambda^{3/2}$ allows us to extend the range in both $\lambda$ and $x$ where the solutions to the corresponding equations for cosmological perturbations can be trusted, to about $\lambda\lesssim1$ and $x/\pi\lesssim10$.
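To make the numbers quoted for the $N\to\infty$ thermal mass easy to reproduce, the following sketch (Python with NumPy assumed; the fixed-point iteration is simply one convenient way of solving the gap equation) evaluates Eq.~(\ref{mgap}) together with the truncated series (\ref{mpert}) and its Pad\'e improvement (\ref{mpade}), the latter built from the coefficients of Eq.~(\ref{abcd}); at $\lambda=1$ it reproduces the values $\approx0.53$, $\approx0.05$ and $\approx0.51$ quoted above:
\begin{verbatim}
# Thermal mass of the O(N) model at N -> infinity: gap equation vs. the
# two-term perturbative series and its (2,1) Pade improvement.
import numpy as np

def gap_rhs(m2, lam, cutoff=60.0, n=20001):
    # (6 lam/pi^2) int_{mhat}^{inf} dx sqrt(x^2 - mhat^2)/(e^x - 1), mhat^2 = m^2/T^2,
    # with the integral truncated at x = mhat + cutoff (trapezoidal rule)
    mhat = np.sqrt(m2)
    x = np.linspace(mhat, mhat + cutoff, n)
    f = np.sqrt(np.maximum(x**2 - m2, 0.0)) / np.expm1(x)
    w = np.ones(n); w[0] = w[-1] = 0.5
    return 6.0 * lam / np.pi**2 * (cutoff / (n - 1)) * np.sum(w * f)

def solve_gap(lam, iters=100):
    m2 = lam                      # start from the leading-order value
    for _ in range(iters):
        m2 = gap_rhs(m2, lam)
    return m2

def pade21(a, c, d, lam):
    # (2,1) Pade approximant in sqrt(lam) built from a + c*lam + d*lam**1.5,
    # using alpha = a, beta = -a*d/c, gamma = c, delta = -d/c
    s = np.sqrt(lam)
    return (a - a * d / c * s + c * lam) / (1.0 - d / c * s)

lam = 1.0
print(solve_gap(lam))                         # ~ 0.53  (exact, N -> infinity)
print(lam - 3.0 / np.pi * lam**1.5)           # ~ 0.05  (two-term perturbative series)
print(pade21(0.0, 1.0, -3.0 / np.pi, lam))    # ~ 0.51  (Pade improved)
\end{verbatim}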
\section{Pad\'e-improved solutions} On superhorizon scales the perturbative results became obviously unreliable already at moderate values of $\lambda$ because, among other things, the $O(\lambda^{3/2})$-contributions in Eq.~(\ref{acrit32}) stopped $\alpha_{\rm crit.}$ from increasing beyond $\lambda\approx1.25$. With the Pad\'e-improved results we have \begin{equation} \alpha_{\rm crit.}(\lambda) = \frac5{32}{4 + 21{\lambda^{\frac12}\over\pi} \over 4 + 21 {\lambda^{\frac12}\over\pi} - 5 {\lambda\over \pi^2}}, \end{equation} which shows an ever-increasing behaviour up to very large $\lambda$. Likewise the Pad\'e-improved version of Eq.~(\ref{pioeps}) is a monotonic function. In both cases, the results for the weakly interacting plasma remain far from the perfect-fluid ones for $\lambda\sim1$; only for $\lambda\gtrsim 10^2$ would these be reached, where a perturbative treatment is certainly inadequate. In the following we shall inspect our Pad\'e-improved solutions for $\lambda=1$ and $x/\pi\le10$. \subsection{Scalar perturbations} In Fig.~\ref{f4}, the density contrast associated with a scalar perturbation in a pure relativistic plasma is given for the interacting and the collisionless case. The difference turns out to be moderate so that a perturbative treatment seems justified. The main effects turn out to be a somewhat decreased phase velocity and a somewhat diminished exponent in the power-law decay. This is exactly what one would expect in view of the behaviour of density perturbations in the presence of a perfect fluid, where the phase velocity equals $1/\sqrt3$ and where damping through directional dispersion is inoperative. \begin{figure} \centerline{ \epsfbox{f4.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|\epsilon_{m\ {\rm RP}}(x)|$} \put(8,0.2){$x/\pi$} \end{picture} \\[5pt] \caption{\label{f4} The density contrast is shown for collisionless matter (full line) and for an ultrarelativistic plasma with $\lambda = 1$ (dotted line), which is the Pad\'e-improved solution. } \end{figure} In Fig.~\ref{f5} the density perturbations are shown for a two-component system with an equal amount of perfect fluid and relativistic plasma. There is little difference from the case considered in Ref.~\cite{Rebhan92a}, where the relativistic plasma was collisionless. The main effect is again a diminished phase velocity, which is exhibited in the magnified Fig.~\ref{f6}. There is no longer a simple decay law for the density perturbations in the plasma component, because it is strongly influenced by the comparatively large over- and underdensities created by the acoustic waves propagating in the perfect fluid component. \begin{figure} \centerline{ \epsfbox{f5.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,5){$|\epsilon_{m\ {\rm RP}}(x)|$} \put(0,4){$|\epsilon_{m\ {\rm PF}}(x)|$} \put(8,-1.3){$x/\pi$} \end{picture} \\[40pt] \caption{\label{f5} Density perturbations for a mixture of a perfect fluid (dashed line) and an ultrarelativistic plasma (dotted line) with $\alpha = 1/2$ and $\lambda = 1$. } \end{figure} \begin{figure} \vspace{-1cm} \centerline{\epsfbox{f6.eps}} \nopagebreak \vspace{-4cm} \nopagebreak \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|\epsilon_{m\ {\rm RP}}(x)|$} \put(8,0.2){$x/\pi$} \end{picture} \\[5pt] \nopagebreak \caption{\label{f6} For the same mixture as in Fig.~5 the subhorizon perturbations of the plasma component are shown for $\lambda = 0$ (full line) and $\lambda = 1$ (dashed line).
} \end{figure} In Fig.~\ref{f7}, the anisotropic pressure associated with the scalar perturbations in the two-component case is given, compared with the collisionless version. Except for the third peak, there is remarkably little difference between $\lambda=0$ and $\lambda=1$, although one might have expected that the anisotropic pressure would be the most sensitive quantity to self-interactions in the plasma, since a collision-dominated perfect fluid forbids anisotropic pressure completely. \begin{figure} \centerline{ \epsfbox{f7.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|\pi^{(0)}_T(x)|$} \put(8,-1.3){$x/\pi$} \end{picture} \\[40pt] \caption{\label{f7} The anisotropic pressure is plotted for $\alpha = 1/2$. The full line shows the behaviour for a plasma with $\lambda = 0$, whereas the dotted line shows the solution for $\lambda =1$. } \end{figure} \subsection{Vector perturbations} As we have mentioned, the presence of a relativistic plasma opens the possibility of having regular vector perturbations. In a perfect-fluid alone, the velocity amplitude is constant in the radiation-dominated epoch by virtue of the Kelvin-Helmholtz circulation theorem \cite{HKth}, which leads to a singular behaviour of the frame dragging potential. Adding in a plasma component with initially compensating vorticity, however, allows nontrivial regular solutions. These solutions were first found within the thermal-field-theoretical treatment and it was pointed out in Ref.~\cite{Rebhan92b} that they might have interesting applications in the open issue of primordial magnetic fields. In Fig.~\ref{f8}, such a solution exhibiting the generation of a net vorticity which approaches a constant velocity amplitude is given for $\lambda=1$. In this case there is even only a relatively small deviation from the collisionless scenario without the Pad\'e improvement (Fig.~\ref{f9}). With it, however, the difference becomes even rather tiny. This is also somewhat unexpected, since the very existence of these solutions hinges on having a plasma component that is approximately collisionless. The effect of self-interactions are found to give only a small increase of the period of the wiggles in the vorticity of the plasma component while it dies from directional dispersions. \begin{figure} \centerline{ \epsfbox{f8.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,5){$|v_{c\ {\rm RP}}(x)|$} \put(0,4){$|v_c (x)|$} \put(8,-1.3){$x/\pi$} \end{picture} \\[40pt] \caption{\label{f8} In a universe containing a mixture ($\alpha = 1/2$) of a perfect fluid and an ultrarelativistic plasma rotational perturbations $|v_c (x)|$ may survive in the subhorizon region (full line). The dotted line shows the rotational perturbation of the plasma component $|v_{c\ {\rm RP}}(x)|$ with $\lambda = 1$. } \end{figure} \begin{figure} \centerline{ \epsfbox{f9.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|v_{c\ {\rm RP}}(x)|$} \put(8,0.2){$x/\pi$} \end{picture} \\[5pt] \caption{\label{f9} For the same mixture as in Fig.~8, the rotational perturbation $|v_{c\ {\rm RP}}(x)|$ is plotted for $\lambda = 0$ (full line) and for $\lambda = 1$. In the latter case the dashed line shows the solution through order $\lambda^{3/2}$; the Pad\'e improved solution is shown by the dotted line. 
} \end{figure} Without a perfect-fluid component, one can have a regular solution which has a growing velocity amplitude on superhorizon scales and which decays from directional dispersion after horizon crossing, which is shown in Fig.~\ref{f10}. In Fig.~\ref{f11}, a magnified picture of the subhorizon behaviour is given, which shows both a small decrease of the phase velocity of the oscillations and a small reduction of the damping. While this is similar to the subhorizon behaviour encountered in the scalar case, it could hardly be anticipated by comparison with the perfect fluid case, because in the limit of tight coupling these perturbations are forbidden entirely. \begin{figure} \centerline{ \epsfbox{f10.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|v_{c\ {\rm RP}}(x)|$} \put(8,-1.3){$x/\pi$} \end{picture} \\[40pt] \caption{\label{f10} Without perfect fluid all rotational perturbations decay on subhorizon scales. The full line shows the behaviour of collisionless plasmas ($\lambda = 0$), whereas the dotted line shows the Pad\'e improved solution for $\lambda = 1$. } \end{figure} \begin{figure} \vspace{-1cm} \centerline{ \epsfbox{f11.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|v_{c\ {\rm RP}}(x)|$} \put(8,0.2){$x/\pi$} \end{picture} \\[5pt] \caption{\label{f11} The same as in Fig.~10, but just the subhorizon region. } \end{figure} \subsection{Tensor perturbations} Tensor perturbations correspond to primordial gravitational waves. Their large-time behaviour is expected to be rather independent of the medium, since it is dictated by energy conservation. Indeed, there is only some difference in the behaviour at the time of horizon crossing which implies that a relativistic plasma requires stronger initial tensor perturbations in order to have equal amplitude in the gravitational waves at late times. There is, however, extremely little difference in the behaviour of the solutions for the plasma case for $\lambda=0$ and $\lambda=1$, see Fig.~\ref{f12}. The self-interacting case is closer to the perfect-fluid case, but only very little so. As Fig.~\ref{f13} shows, the perturbative result is moreover rather insensitive to the Pad\'e-improvement. \begin{figure} \centerline{ \epsfbox{f12.eps} } \nopagebreak \vspace{-4cm} \nopagebreak \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|H(x)|$} \put(8,-1.3){$x/\pi$} \end{picture} \\[40pt] \caption{\label{f12} The amplitude of a gravitational wave $|H(x)|$ for a a perfect fluid (dashed line) and an ultrarelativistic plasma. The full line solution corresponds to $\lambda = 0$ and the dotted line to the Pad\'e improved solution for $\lambda = 1$. } \end{figure} \begin{figure} \centerline{ \epsfbox{f13.eps} } \vspace{-4cm} \unitlength1cm \begin{picture}(16,3) \put(0,4.5){$|H(x)|$} \put(8,-1.3){$x/\pi$} \end{picture} \\[40pt] \caption{\label{f13} As in Fig.~12, but only for $\alpha = 1$ with $\lambda = 1$. The full line shows the solution through order $\lambda^{3/2}$ and the dotted line the Pad\'e improved solution. } \end{figure} \section{Conclusion} We have studied the effects of weak self-interactions in an ultrarelativistic plasma on cosmological perturbations through order $\lambda^{3/2}$ in $\lambda\phi^4$-theory. At this order it turns out that perturbation theory requires a resummation of an infinite set of higher-order diagrams. 
This is natural to include in the thermal-field-theory approach to cosmological perturbations, which in the collisionless case is equivalent to the usual approach based on classical kinetic theory, but now leaves the latter clearly behind. While it still turned out to be possible to exactly solve the perturbation equations by means of a power series ansatz, we have found that the relatively large coefficients of the order-$\lambda^{3/2}$ corrections make the results reliable only for rather small values of $\lambda$, and even then there is a breakdown of perturbation theory in the asymptotic late-time behaviour. The latter comes from increasingly singular contributions to the gravitational polarization tensor for light-like momenta and could be cured by a further resummation procedure. However, the effects of the latter turned out to be well approximated by a (2,1)-Pad\'e-improvement of the perturbative results, which also drastically improves the apparent convergence of the results for smaller times. The concrete results obtained showed a tendency toward perfect-fluid behaviour, but with $\lambda=1$ all of them are still (perhaps surprisingly) close to the collisionless case. The main effects turned out to be a small increase of the critical mixing factor of a perfect-fluid component with a relativistic plasma above which one can have singular solutions exhibiting superhorizon oscillations. Concerning the regular solutions, we have found a decrease of the phase velocity of scalar perturbations and a reduction of its damping. In the case of vector perturbations (corresponding to large-scale vorticity), which are only possible in the presence of a relativistic plasma, similar effects were found, but quantitatively much smaller. Very little effects from self-interactions were finally observed in the case of tensor perturbations, which correspond to primordial gravitational waves. {}From this one may conclude that a description of a primordial plasma built from weakly interacting elementary particles through perfect-fluid models is in general applicable only for scales far below the Hubble radius. At the scale of the horizon and beyond, self-interactions can be treated perturbatively, at least in the model considered here, and there can be significant differences from the perfect-fluid behaviour, in particular in the case of rotational perturbations. \acknowledgments This work was supported partially by the Austrian ``Fonds zur F\"orderung der wissenschaftlichen Forschung (FWF)'' under projects no.\ P9005-PHY and P10063-PHY, and by the EEC Programme ``Human Capital and Mobility'', contract CHRX-CT93-0357 (DG 12 COMA).
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} Models have been widely used to study the properties of the hadron spectrum owing to the impossibility of solving Quantum Chromodynamics (QCD) at present. In particular, the quark potential models incorporate the perturbative one-gluon exchange quark-quark ($qq$) potential ($V_{\rm OGE}$) derived from QCD as well as a parametrization of some nonperturbative effects through a $qq$ confining potential ($V_{\rm con}$) \cite{RUJ}. Such an {\em effective theory} [the quark-gluon coupling constant is taken as an effective one and the constituent quark masses ($m_q$) are parameters fitted from the baryon magnetic moments] provides a reasonable understanding of the baryon spectrum and the hadron static properties \cite{YAO}. The idea of an interquark potential has also been used to study baryon-baryon interactions. More explicitly, the repulsive core of the nucleon-nucleon (NN) force has been shown to arise from the color-spin structure of the $V_{\rm OGE}$ \cite{FAE}. Nevertheless, the same scheme has been proven incapable of describing the long range part of the NN interaction unless pion exchange between quarks is introduced. From the basic theory, the origin of the one-pion exchange potential ($V_{\rm OPE}$) is associated with the spontaneous chiral symmetry breaking of QCD. Moreover, the inclusion of the $qq$ sigma potential ($V_{\rm OSE}$) consistently with its chiral partner (the pion) makes it possible to reproduce the intermediate range $NN$ interaction as well as the deuteron properties \cite{FER}. In this paper we examine the consistency of the two scenarios. We determine a $qq$ interaction by studying two-nucleon properties and charge exchange reactions and proceed to analyze the baryon spectrum for this interaction. Since we shall discuss the predictions for the baryon masses and the baryon wave functions, this is not only a stringent test of the potential but also a consistency test of the formalism used to generate the NN interaction [the resonating group method (RGM)]. \section{The Quark-Quark Potential} The starting point of our description is a quark-quark interaction of the following form: \begin{equation} V_{\rm qq} (\vec{r}_{ij}) = V_{\rm con} (\vec{r}_{ij}) + V_{\rm OGE} (\vec{r}_{ij}) + V_{\rm OPE} (\vec{r}_{ij}) + V_{\rm OSE} (\vec{r}_{ij}) \, , \end{equation} \noindent where $\vec{r}_{ij}$ is the interquark distance. The confinement is chosen to be linear as suggested by the meson spectrum and lattice calculations: \begin{equation} V_{\rm con} (\vec{r}_{ij}) = - a_c \, ( \vec{\lambda}_i \cdot \vec{\lambda}_j ) \, r_{ij} \, , \end{equation} \noindent where the $\lambda$'s are the SU(3) color matrices. $V_{\rm OGE}$, $V_{\rm OPE}$ and $V_{\rm OSE}$ have been derived in detail elsewhere \cite{RUJ,FER}, and here we limit ourselves to writing the final expressions: \begin{equation} V_{\rm OGE} ({\vec r}_{ij}) = {1 \over 4} \, \alpha_s \, {\vec \lambda}_i \cdot {\vec \lambda}_j \Biggl \lbrace {1 \over r_{ij}} - {1 \over {4 \, m^2_q}} \, \biggl [ 1 + {2 \over 3} {\vec \sigma}_i \cdot {\vec \sigma}_j \biggr ] \,\, {{e^{-r_{ij}/r_0}} \over {r_0^2 \,\,r_{ij}}} - {1 \over {4 m^2_q \, r^3_{ij}}} \, S_{ij} \Biggr \rbrace \, , \end{equation} \noindent where $\alpha_s$ is the effective quark-quark-gluon coupling constant, $r_0$ is the range of a smeared $\delta$ function introduced in order to avoid an unbounded spectrum \cite{BHA}, the $\sigma$'s stand for the Pauli spin matrices and $S_{ij}$ is the quark tensor operator $S_{ij} = 3 (\vec{\sigma}_i \, . \, \hat{r}_{ij}) (\vec{\sigma}_j \, .
\, \hat{r}_{ij}) - \vec{\sigma}_i \, . \vec{\sigma}_j $. \begin{eqnarray} V_{\rm OPE} ({\vec r}_{ij}) & = & {1 \over 3} \, \alpha_{ch} {\Lambda^2 \over \Lambda^2 - m_\pi^2} \, m_\pi \, \Biggr\{ \left[ \, Y (m_\pi \, r_{ij}) - { \Lambda^3 \over m_{\pi}^3} \, Y (\Lambda \, r_{ij}) \right] {\vec \sigma}_i \cdot {\vec \sigma}_j + \nonumber \\ & & \left[ H( m_\pi \, r_{ij}) - { \Lambda^3 \over m_\pi^3} \, H( \Lambda \, r_{ij}) \right] S_{ij} \Biggr\} \, {\vec \tau}_i \cdot {\vec \tau}_j \, , \end{eqnarray} \begin{equation} V_{\rm OSE} ({\vec r}_{ij}) = - \alpha_{ch} \, {4 \, m_q^2 \over m_{\pi}^2} {\Lambda^2 \over \Lambda^2 - m_{\sigma}^2} \, m_{\sigma} \, \left[ Y (m_{\sigma} \, r_{ij})- {\Lambda \over {m_{\sigma}}} \, Y (\Lambda \, r_{ij}) \right] \, , \end{equation} \noindent where $m_\pi$ ($m_\sigma$) is the pion (sigma) mass, $\alpha_{ch}$ is the chiral coupling constant (related to the $\pi NN$ coupling constant), $\Lambda$ is a cutoff parameter and Y(x), H(x) are the Yukawa functions defined as: \begin{equation} Y(x) \, = \, {e^{-x} \over x} \,\,\,\, , \,\,\,\, H(x) \, = \, \Bigl( 1 + {3 \over x} + { 3 \over {x^2}} \Bigr) Y(x) \, , \end{equation} With this interaction, the two baryon system has been studied in the RGM framework assuming for the spatial part of the wave function of the quarks a harmonic oscillator ground state, \begin{equation} \eta_{\rm os} (\vec{r}_i - \vec{R} \, ) = \left( {1 \over {\pi b^2}} \right)^{3/4} e^{-(\vec{r}_i - \vec{R}\,)^2/ 2b^2} \, , \end{equation} \noindent where the parameter $\vec{R}$ determines the position of the baryon and $b$ is the harmonic oscillator constant. \begin{figure}[t] \vbox{ \vspace*{-2.75cm} \centerline{\epsfig{file=espe.eps,height=5.2in}} \vspace*{-3.5cm} \caption{Relative energy $N$ and $\Delta$ spectrum up to 0.7 GeV excitation energy. The solid line corresponds to the results of our model. The boxes represent the experimental data with the corresponding uncertainties.} \label{espe} } \end{figure} \begin{table}[hbt] \caption[]{Value of the potential parameters} \label{param} \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline \tabstrut $m_q$ & $b$ & $\alpha_s$ & $\alpha_{ch}$ & $a_c$ & $m_\pi$ & $m_\sigma$ & $\Lambda$ \\ MeV & fm & & & ${\rm MeV} \cdot {\rm fm}^{-1}$ & ${\rm fm}^{-1}$ & ${\rm fm}^{-1}$ & ${\rm fm}^{-1}$ \\ \hline 313 & 0.518 & 0.485 & 0.027 & 91.488 & 0.7 & 3.42 & 4.2 \\ \hline \end{tabular} \end{table} Using the same set of parameters given in Table \ref{param}, the $NN$ scattering phase shifts, the static and electromagnetic properties of the deuteron \cite{FER} and reactions that take place with the excitation of the $\Delta$ resonance \cite{FER2} are reasonably reproduced. \section{Results} To study the baryonic spectrum with the potential just described (parameters as in Table \ref{param}) we solve the Schr\"{o}dinger equation in the hyperspherical harmonic approach \cite{BAL}. The low energy $N$ and $\Delta$ spectrum obtained, a part of which is shown in Fig. \ref{espe}, is quite reasonable though some small discrepancy remains concerning the relative energy position of the Roper resonance [$N^* (1440)$] and the first negative parity state excitation [$N^- (1535)$], as it is common in two body potential models. \begin{figure}[t] \vbox{ \vspace*{-3.0cm} \centerline{\epsfig{file=pot.eps,height=4.0in}} \vspace*{-0.1cm} \caption{Adiabatic $NN$ potential for different nucleon wave functions for the channel (S,T)=(1,0). The solid line corresponds to the solution of the Schr\"odinger equation. 
The others correspond to a Gaussian ansatz with the values of $b$ given in the figure.} \label{pot} } \end{figure} Regarding the wave function, and in order to check the internal RGM consistency, it makes sense to compare the $NN$ potential one obtains from the wave function that solves the Schr\"odinger equation with the potential obtained from the RGM ansatz wave function for different values of the parameter $b$. For the sake of technical simplicity we do this in the Born-Oppenheimer approximation. The results appear in Fig.~\ref{pot}, where we can see that it is precisely the value of $b$ giving the best overall RGM fit to the $NN$ data which provides the best approximation. This confers on $b$ a self-consistent character and settles the controversy about its possible values. Certainly, more work and further refinements are needed in this direction. Meanwhile, the proposed model points out the plausibility of a unified description of the baryon structure and the baryon-baryon interaction. \begin{acknowledge} This work has been partially funded by Direcci\'on General de Investigaci\'on Cient\'{\i}fica y T\'ecnica (DGICYT) under Contract No.\ PB91-0119. \end{acknowledge} \catcode`\@=11 \if@amssymbols% \clearpage \else\relax\fi\catcode`\@=12
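For orientation, the Yukawa functions and the central (spin-spin) parts of the chiral potentials defined above are easy to evaluate with the parameter values of Table~\ref{param}. A minimal sketch follows (Python assumed; masses are taken in ${\rm fm}^{-1}$, and both the conversion factor $\hbar c\simeq 197.3\;{\rm MeV\,fm}$ and the sample distance $r=1$ fm are our own choices for illustration):
\begin{verbatim}
# Sketch: Yukawa functions and the central parts of V_OPE and V_OSE between
# two quarks, with the parameters quoted in the table above (masses in fm^-1;
# hbar*c = 197.3 MeV fm, assumed here, converts the result to MeV).
import numpy as np

HBARC = 197.3                     # MeV fm

def Y(x):
    return np.exp(-x) / x

def H(x):
    return (1.0 + 3.0 / x + 3.0 / x**2) * Y(x)

alpha_ch = 0.027                  # chiral coupling constant
m_q   = 313.0 / HBARC             # constituent quark mass in fm^-1
m_pi  = 0.7                       # fm^-1
m_sig = 3.42                      # fm^-1
Lam   = 4.2                       # cutoff, fm^-1

def V_OPE_central(r, sig_dot_sig=1.0, tau_dot_tau=1.0):
    # spin-spin (non-tensor) piece of the one-pion exchange potential, in fm^-1
    pref = alpha_ch / 3.0 * Lam**2 / (Lam**2 - m_pi**2) * m_pi
    return pref * (Y(m_pi * r) - (Lam / m_pi)**3 * Y(Lam * r)) * sig_dot_sig * tau_dot_tau

def V_OSE(r):
    # one-sigma exchange potential, in fm^-1
    pref = -alpha_ch * 4.0 * m_q**2 / m_pi**2 * Lam**2 / (Lam**2 - m_sig**2) * m_sig
    return pref * (Y(m_sig * r) - (Lam / m_sig) * Y(Lam * r))

r = 1.0                           # interquark distance in fm
print(V_OPE_central(r) * HBARC, V_OSE(r) * HBARC)   # rough values in MeV
\end{verbatim}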
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} Over the past decade there has been enormous interest in reaction-diffusion systems (see [1--12] and references therein), with particular emphasis on the effects of fluctuations in low spatial dimensions. Most attention has been paid to reactions of the form $A+A\rightarrow\emptyset$ and $A+B\rightarrow\emptyset$ with a variety of different initial/boundary conditions. At or below an upper critical dimension $d_c$, these systems exhibit fluctuation induced anomalous kinetics, and the straightforward application of traditional approaches, such as mean field rate equations, breaks down. Attempts to understand the role played by fluctuations for $d\leq d_c$ have involved several techniques, including Smoluchowski type approximations \cite{KBR} and field theoretic methods \cite{L,LC,HC}. In this paper we set out to study these fluctuation effects in a system with three competing reactions: $$ A+A\rightarrow\emptyset \qquad B+B\rightarrow\emptyset \qquad A+B\rightarrow\emptyset. $$ The reactions are irreversible, and we choose homogeneous, though not necessarily equal, initial densities for the two species at $t=0$. Our goal is to calculate density decay exponents and amplitudes, taking into account fluctuation effects. In pursuit of this aim, we analyse the system using both the Smoluchowski approximation and the field theory approach, and we show that the two methods are closely related. However, whereas it is unclear how the Smoluchowski approach may be improved, the field theory provides a systematic way to obtain successively more accurate values for the asymptotic density decay exponents and amplitudes. We shall concentrate on situations where one of the two species is greatly in the majority (as is almost always the case asymptotically) - so, for example, if the A species is predominant, then we can safely neglect the reaction $B+B\rightarrow\emptyset$. This kind of assumption will lead to a considerable simplification in our analysis. Previous work on this problem includes use of the Smoluchowski approximation \cite{KBR}, as well as exact $1d$ results obtained by Derrida {\it et al.} \cite{D1,D2,D3} for the special case of {\it immobile} minority particles. Derrida {\it et al.} were, in fact, studying a different problem, namely the probability that a given spin has never flipped in the zero temperature Glauber dynamics of the q-state Potts model in one dimension. By solving that model exactly \cite{D2,D3} they showed that this probability decreased as a power law: $t^{-3/8}$ for the Ising ($q=2$) case. However, in one dimension, the Ising spin flip problem and the decay rate for the immobile impurity in our reaction-diffusion system are exactly equivalent problems, and hence this exact decay rate also holds in our case. We also mention one other previous result for the immobile impurity problem, due to Cardy \cite{C}. Using renormalisation group methods similar to those employed in this paper, it was shown that the density of the minority species decays away as a universal power law: $t^{-\beta}$ for $d<2$, where $\beta={1\over 2}+O(\epsilon)$ and $\epsilon=2-d$. The case where the {\it majority} species is immobile has also been solved (see \cite{PG}). In this case the decay rate for the minority species is dominated by minority impurity particles existing in regions where there happen to be very few of the majority particles. 
Since these majority particles are strictly stationary, this situation is not describable using a rate equation approach, and it turns out that the minority species decays away as $\exp\left(-t^{d/(d+2)}\right)$, a result which is not accessible by perturbative methods. In this paper, using a field theory formalism and techniques from the renormalisation group, we will obtain decay rates and amplitudes for the general case of arbitrary diffusivities - a regime previously only accessible using the Smoluchowski approximation. Our basic plan is to map the microscopic dynamics, as described by a master equation, onto a quantum field theory. This theory is then renormalised (for $d\leq 2$), and the couplings (reaction rates) are shown to have $O(\epsilon)$ fixed points, whose values depend only on the ratio of the species' diffusion constants. Note that this system (with irreversible reactions) is particularly simple in that only the couplings (and not the diffusivities) are renormalised. The next step is to group together Feynman diagrams which are of the same order in the renormalised couplings - i.e. diagrams with the same number of loops. These diagrams are then evaluated and a Callan-Symanzik equation used to obtain improved asymptotic $\epsilon$ expansions for the densities. In this fashion, quantities of interest may be systematically calculated by successively including higher order sets of diagrams (with more loops) in the perturbative sum. One consequence of the theory is that the asymptotic decay rates and amplitudes for $d<d_c$ will be independent of the reaction rates - a result which is in accordance with the Smoluchowski approach. In fact, all physical quantities below the upper critical dimension asymptotically depend only on the diffusivities and the initial densities, and in this sense they display universality. We now present a summary of our results for the density decay rates. In what follows we define $n_A$, $n_B$ to be the initial density of A, B particles, and $\delta=(D_B/D_A)\leq 1$ to be the ratio of the diffusion constants. For $d<d_c=2$, $n_A\gg n_B$ and $n_A^{-2/d}D_A^{-1}\ll t\ll t_{1}$ (where $t_{1}$ is a crossover time derived in section 4), we have (as in \cite{L}): \begin{equation} \langle a\rangle\sim\left({1\over 4\pi\epsilon}+{2\ln 8\pi-5\over 16\pi}+O(\epsilon)\right) (D_At)^{-d/2}. \end{equation} For the minority species, we find, from the RG improved tree level approximation in the field theory: \begin{equation} \langle b\rangle\sim F(D_At)^{-\beta} \end{equation} where \begin{equation} \beta\approx{d\over 2}\left({\delta +1\over 2}\right)^{d/2} \qquad F\approx n_B\left({\Gamma(\epsilon/2)\over n_A(8\pi)^{d/2}} \right)^{\left({\delta+1\over 2}\right)^{d/2}}. \end{equation} These decay exponents are identical with the Smoluchowski results. Performing a strict $\epsilon$ expansion on this RG improved tree level result gives an exponent $\beta={1\over 2}+O(\epsilon)$ for the immobile impurity case ($\delta=0$). This is in agreement with previous RG calculations by Cardy \cite{C}. If we now go beyond the tree level calculation by including one loop diagrams, then we obtain an improved value for the exponent $\beta$ using an $\epsilon$ expansion: $$ \beta=\left({1+\delta\over 2}\right)\left(1-{\epsilon\over 2}\left[{3\over 2}+\ln\left({1+\delta\over 2}\right)- {\delta(1+\delta)\over 4}\left[1+2\ln\left({1+\delta\over 2}\right)\right] \right.\right. $$ \begin{equation} \left.\left. 
-{1\over 4}(\delta^2-1)\left(1+(1+\delta)\left[f\left\{{2\over 1+\delta}\right\}-{\pi^2\over 6}\right]\right)\right]\right) +O(\epsilon^2), \end{equation} where \begin{equation} f\{x\}=-\int_1^x{\ln u\over u-1}du \end{equation} is the dilogarithmic function \cite{AS}. This exponent is found to be in good agreement with simulations \cite{KBR} and exact results \cite{D2} in $d=1$. However, for $\delta<1$, the system crosses over to a second regime where $\langle b\rangle\gg \langle a\rangle$. This situation is similar to the case where we begin with $n_B\gg n_A$. In that regime, at times $D_Bt\gg n_B^{-2/d}$, and for $n_B\gg n_A$, $\delta\neq 0$ and $d<2$, we have: \begin{equation} \langle b\rangle\sim\left({1\over 4\pi\epsilon}+{2\ln 8\pi-5\over 16\pi}+O(\epsilon)\right)(D_Bt)^{-d/2}, \end{equation} for the majority species. Using the RG improved tree level result for the minority species, we obtain: \begin{equation} \langle a\rangle\sim E(D_Bt)^{-\alpha}, \end{equation} with \begin{equation} \alpha\approx{d\over 2}\left({1+\delta^{-1}\over 2}\right)^{d/2} \qquad E\approx n_A \left({\Gamma(\epsilon/2)\over n_B(8\pi)^{d/2}}\right)^{\left({1+\delta^{-1}\over 2}\right)^{d/2}}. \end{equation} \def>\mkern 4mu\sim{>\mkern 4mu\sim} The exponent is again in agreement with the Smoluchowski result. If we attempt to improve this calculation to one loop accuracy, then we obtain: $$ \alpha=\left({1+\delta^{-1}\over 2}\right)\left(1-{\epsilon\over 2}\left[{3\over 2}+\ln\left({1+\delta^{-1}\over 2}\right)- {\delta^{-1}(1+\delta^{-1})\over 4}\left[1+2\ln\left({1+\delta^{-1}\over 2}\right)\right] \right.\right. $$ \begin{equation} \left.\left. -{1\over 4}(\delta^{-2}-1)\left(1+(1+\delta^{-1})\left[f\left\{{2\over 1+\delta^{-1}}\right\}-{\pi^2\over 6}\right]\right)\right]\right) +O(\epsilon^2). \end{equation} This exponent is only valid for $\delta$ quite close to unity, and even in this region it may be less accurate than the (non $\epsilon$-expanded) RG improved tree level result given above. This point will be discussed further in section 4.2. We next give results valid for $d=2$, where we find extra logarithmic factors multiplying the power law decay rates. Treating first the case $\langle a\rangle\gg \langle b\rangle$ and $\delta\leq 1$, we have, from the RG improved tree level, an initial regime with: \begin{equation} \langle a\rangle\sim{\ln t\over 8\pi D_At} \end{equation} \begin{equation} \langle b\rangle =O\left(\left({\ln t\over t}\right)^{\left({1+\delta\over 2}\right)}\right). \end{equation} However, for $\delta<1$, the system again crosses over to a second regime where $\langle b\rangle\gg\langle a\rangle$. In this second regime the density decay exponents (though not the amplitudes) are the same as for the case where we begin with $n_B\gg n_A$. In that case we have, for $\delta\neq 0$: \begin{equation} \langle b\rangle\sim{\ln t\over 8\pi D_Bt} \end{equation} \begin{equation} \langle a\rangle =O\left(\left({\ln t\over t}\right)^{\left({1+\delta^{-1}\over 2}\right)}\right). \end{equation} Crossover times for these cases are given in section 4.3. We now give a brief description of the layout of this paper. In the next section we analyse the system using the mean field/Smoluchowski approach. We then set up the necessary formalism for our field theory in section 3, and use it to perturbatively calculate values for the density exponents and amplitudes in section 4. Finally, we give some conclusions and prospects for future work in section 5. 
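For reference, the one-loop exponent quoted above is straightforward to evaluate numerically. The following is a minimal sketch, assuming a Python/SciPy environment; the function names are illustrative, and the dilogarithm $f\{x\}$ is obtained by direct numerical integration of its definition given above.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def f_dilog(x):
    # f{x} = -int_1^x ln(u)/(u-1) du, as defined in the text
    g = lambda u: 1.0 if abs(u - 1.0) < 1e-12 else np.log(u) / (u - 1.0)
    val, _ = quad(g, 1.0, x)
    return -val

def beta_one_loop(delta, eps):
    # One-loop minority exponent beta(delta) quoted above; d = 2 - eps
    s = 0.5 * (1.0 + delta)
    bracket = (1.5 + np.log(s)
               - 0.25 * delta * (1.0 + delta) * (1.0 + 2.0 * np.log(s))
               - 0.25 * (delta**2 - 1.0)
               * (1.0 + (1.0 + delta)
                  * (f_dilog(2.0 / (1.0 + delta)) - np.pi**2 / 6.0)))
    return s * (1.0 - 0.5 * eps * bracket)

print(beta_one_loop(0.0, 1.0))  # immobile impurity, d=1: ~0.39 (exact: 3/8)
print(beta_one_loop(1.0, 1.0))  # equal diffusivities, d=1: 0.5 = d/2
\end{verbatim}
The exponent $\alpha$ for the opposite regime follows from the same expression with $\delta\rightarrow\delta^{-1}$, as given above.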
\section{The Mean Field and Smoluchowski Approach} The simplest description of a reaction-diffusion process is provided by the mean field rate equations. For the system we are considering with densities $a$ and $b$, they take the form: \begin{eqnarray} & & {da\over dt}=-2\lambda_{AA}a^2-\lambda_{AB}ab \label{ratea} \\ & & {db\over dt}=-2\lambda_{BB}b^2-\lambda_{AB}ab, \label{rateb} \end{eqnarray} where $\lambda_{AA}$, $\lambda_{BB}$, and $\lambda_{AB}$ are the reaction rates, and where we impose initial conditions of the form $a|_{t=0}=n_A$ and $b|_{t=0}=n_B$. In this approach we have completely neglected the effects of fluctuations - in other words we have made assumptions of the form $\langle ab\rangle\propto\langle a\rangle\langle b\rangle$ etc., where the angular brackets denote averages over the noise. Below the critical dimension, where fluctuations become relevant, this sort of approximation will break down. Nevertheless, even at the mean field level, the complete solution set for these rate equations is quite complicated. In what follows we shall restrict our analysis to the case where $2\lambda_{BB}<\lambda_{AB}<2\lambda_{AA}$. The solution for this particular parameter set will be required for our later field theoretic analysis. Following \cite{KBR}, it is easy to show (by forming a rate equation for the concentration ratio) that $(a/b)\rightarrow 0$ as $t\rightarrow\infty$. Thus if we begin with initial conditions where $n_A\gg n_B$, we can identify two distinct regimes - an early time regime where $a\gg b$ and, after a crossover, a late time (true asymptotic) regime where $b\gg a$. Treating the early time regime first, we find (after some algebra): \begin{eqnarray} & & a\sim(2\lambda_{AA}t)^{-1} \label{mf1a}\\ & & b\sim{n_B\over(2n_A\lambda_{AA}t)^{\lambda_{AB}/2\lambda_{AA}}}. \label{mf1b} \end{eqnarray} Note that the A particles are decaying away more quickly than the B's, so eventually we crossover to a second regime: \begin{eqnarray} & & b\sim(2\lambda_{BB}t)^{-1} \label{mf2a} \\ & & a\sim{n_A\over (2n_B\lambda_{BB}t)^{\lambda_{AB}/2\lambda_{BB}}} \left(1+{(\lambda_{AB}-2\lambda_{AA})\over (2\lambda_{BB}-\lambda_{AB})} {n_A\over n_B}\right)^{-1-{\lambda_{AB}(\lambda_{AB}-2\lambda_{BB}) \over 2\lambda_{BB}(\lambda_{AB}-2\lambda_{AA})}}. \label{mf2b} \end{eqnarray} Alternatively, if we begin with $n_B\gg n_A$, then we have a single asymptotic regime: \begin{eqnarray} & & b\sim(2\lambda_{BB}t)^{-1} \label{mf3a}\\ & & a\sim{n_A\over (2n_B\lambda_{BB}t)^{\lambda_{AB}/2\lambda_{BB}}}. \label{mf3b} \end{eqnarray} However, if we now wish to extend our results at or below the upper critical dimension, we must attempt to include some of the fluctuation effects. The simplest way in which this can be done is to employ the Smoluchowski approximation \cite{SM,CH,KBR}. The essential idea of this approach is to relate the effective reaction rates $\lambda^{eff}_{\{ij\}}$ to the diffusion constants $D_A,D_B$. Suppose we want to calculate the reaction rate $\lambda^{eff}_{AB}$. We begin by choosing a (fixed) A species target ``trap'', which is surrounded by B particles. When a B particle approaches within a distance $R$ of the target, a reaction is deemed to have occurred. Consequently, the reaction rate may be obtained by solving a diffusion equation with boundary conditions of fixed density as $r\rightarrow\infty$, and absorption at $r=R$. The flux of B particles across the $d$ dimensional sphere of radius $R$ is then proportional to an effective microscopic reaction rate. 
If we now generalise to the case where both the A and B species are mobile, then we find (in dimension $d<2$ and in the large time limit): \begin{equation} \lambda^{eff}_{AB}\sim ({\rm const.})(D_A+D_B)^{d/2}t^{d/2-1}. \end{equation} For $d=2$ we obtain logarithmic corrections: \begin{equation} \lambda^{eff}_{AB}\sim {({\rm const.})(D_A+D_B)\over\ln((D_A+D_B)t)}. \end{equation} The Smoluchowski reaction rates for $\lambda^{eff}_{AA}$ and $\lambda^{eff}_{BB}$ are obtained in a similar fashion. Note that above $d=2$ the reaction rate approaches a limiting (constant) value, and we see that the Smoluchowski approach predicts a critical dimension of $d_c=2$ for this system. This is simply related to the reentrancy property of random walks in $d\leq 2$. It is the inclusion of this effect which accounts for the improvement introduced by the Smoluchowski approach. If we now substitute these modified reaction rates into the rate equations, we can obtain the Smoluchowski improved density exponents. For the case where $n_A\gg n_B$, we find an initial regime with \begin{eqnarray} & & a=O(t^{-d/2}) \\ & & b=O\left(t^{-{d\over 2}\left({1+\delta\over 2}\right)^{d/2}}\right). \end{eqnarray} Once again, since the A particles are decaying away faster than the B's, we cross over to a second regime, where (for $0<\delta<1$) \begin{eqnarray} & & b=O(t^{-d/2}) \\ & & a=O\left(t^{-{d\over 2}\left({1+\delta^{-1}\over 2}\right)^{d/2}}\right). \end{eqnarray} This second set of exponents is the same as for the case where we begin with $n_B\gg n_A$ and $\delta\neq 0$. In this situation no crossover occurs and the exponents are valid for all asymptotic times. These exponents can be compared favourably with both simulations \cite{KBR}, and exact results \cite{D3}. For example, the decay rate for an immobile minority impurity is given by Smoluchowski to be $\approx t^{-0.354}$. This compares well with the exact decay rate of $t^{-0.375}$. Turning to the case $d=d_c=2$ and $n_A\gg n_B$, we obtain, for the initial regime: \begin{eqnarray} & & a=O\left({\ln t\over t}\right) \\ & & b=O\left(\left({\ln t\over t}\right)^{\left({1+\delta\over 2}\right)}(\ln t)^{\left({1+\delta\over 2}\right)\ln\left({1+\delta\over 2}\right)}\right). \end{eqnarray} We again eventually crossover to a second regime, where (for $0<\delta<1$): \begin{eqnarray} & & b=O\left({\ln t\over t}\right) \\ & & a=O\left(\left({\ln t\over t}\right)^{\left({1+\delta^{-1}\over 2}\right)}(\ln t)^{\left({1+\delta^{-1}\over 2}\right)\ln\left({1+\delta^{-1}\over 2}\right)}\right). \end{eqnarray} This second set of exponents is again valid (for all asymptotic times) in the case where we begin with $n_B\gg n_A$ and $\delta\neq 0$. Note that the Smoluchowski approach can also be employed for $d>d_c$, where again we will find (time independent) reaction rates which depend on the diffusion constants. However, in the general case, our later field theoretic analysis shows that there is no real justification for this procedure. One exception to this occurs in the case where we have heterogeneous single species annihilation, as considered in \cite{KBR}. In this situation we have only one fundamental reaction process, but different reaction rates may still arise, for example, by having two or more different particle masses (and hence two or more different diffusion constants). In this case it is physically reasonable to suppose that the exponents (which are ratios of reaction rates) may depend only on the diffusivity ratios, with any other parameters canceling out. 
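As a numerical illustration of this section (a sketch only: the Python/SciPy implementation, parameter values and function names are ours and purely illustrative), one can integrate the bare rate equations (\ref{ratea})--(\ref{rateb}) directly, read off the mean field exponents of (\ref{mf1a}) and (\ref{mf1b}), and then evaluate the Smoluchowski-modified minority exponent for comparison with the exact immobile-impurity decay $t^{-0.375}$ mentioned above.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters with 2*lBB < lAB < 2*lAA and n_A >> n_B
lAA, lBB, lAB = 1.0, 0.1, 0.5
nA, nB = 10.0, 1e-4

def rates(t, y):
    a, b = y
    return [-2.0 * lAA * a**2 - lAB * a * b,
            -2.0 * lBB * b**2 - lAB * a * b]

sol = solve_ivp(rates, [0.0, 1e5], [nA, nB],
                t_eval=np.logspace(-2, 5, 300), rtol=1e-9, atol=1e-12)
t, a, b = sol.t, sol.y[0], sol.y[1]

# Effective exponents -dln(n)/dln(t) in the early-time regime a >> b
win = (t > 1e2) & (t < 1e4)
print(-np.gradient(np.log(a[win]), np.log(t[win])).mean())  # ~1
print(-np.gradient(np.log(b[win]), np.log(t[win])).mean())  # ~0.25 = lAB/(2 lAA)

# Smoluchowski-modified minority exponent for d < 2
beta_smol = lambda delta, d: 0.5 * d * (0.5 * (1.0 + delta))**(0.5 * d)
print(beta_smol(0.0, 1))  # ~0.354, immobile impurity in d=1 (exact: 0.375)
print(beta_smol(1.0, 1))  # 0.5 = d/2, the single-species value
\end{verbatim}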
However, in the general case, where the reaction processes are genuinely distinct this will not be the case. Overall, we have seen that the Smoluchowski approach is a simple way to incorporate some fluctuation effects into the rate equation approach. Unfortunately, it is not at all clear how these methods may be systematically improved. It is for this reason that we turn to the main purpose of this paper - the development of an alternative field theoretic framework. \section{The Field Theory Approach} Fluctuation effects in reaction-diffusion systems have previously been successfully tackled using techniques borrowed from quantum field theory and also from the renormalisation group. Examples include studies of the diffusion limited reactions $A+A\rightarrow\emptyset$ \cite{L} and $A+B\rightarrow\emptyset$ \cite{LC,HC}. The first step in this analysis is to write down a Master Equation, which exactly describes the microscopic time evolution of the system. Using methods developed by Doi \cite{Doi} and Peliti \cite{Pel}, this can be mapped onto a Schroedinger-like equation, with the introduction of a second quantised Hamiltonian, and then onto a field theory, with an action $S$. These steps have been described in detail elsewhere \cite{Doi,Pel,L,LC,HC}, and consequently we shall simply give the resulting action appropriate for our theory: \begin{eqnarray} & & S=\int d^dx \left( \int dt \left[\bar a (\partial_t-\nabla^2)a+\bar b (\partial_t-\delta\nabla^2)b +2\lambda_{AA}\bar aa^2+\lambda_{AA}\bar a^2a^2 \right.\right. \label{action} \\ & & \left.\left. +2\lambda_{BB}\bar bb^2+\lambda_{BB}\bar b^2b^2+\lambda_{AB}\bar aab+\lambda_{AB}\bar bab+\lambda_{AB}\bar a\bar bab \right] -\bar an_A -\bar bn_B\right). \nonumber \end{eqnarray} Here we have defined $\delta=(D_B/D_A)\leq 1$ and also introduced the response fields $\bar a$ and $\bar b$. In addition time $t$, together with the reaction rates $\lambda_{\{ij\}}$ have been rescaled to absorb the diffusion constant $D_A$. Averaged quantities are then calculated according to \begin{equation} \langle X(t)\rangle={\cal N}^{-1}\int {\cal D}a\,{\cal D} \bar a\,{\cal D}b\,{\cal D}\bar b\,X(t)\, e^{-S}, \end{equation} where \begin{equation} {\cal N}= \int{\cal D}a\,{\cal D}\bar a\,{\cal D}b\,{\cal D}\bar b\, e^{-S}. \end{equation} Notice that in the path integral \begin{equation} \int{\cal D}a\,{\cal D}\bar a\,{\cal D}b\,{\cal D}\bar b\, e^{-S} \end{equation} integration over the fields $\bar a$, $a$ and $\bar b$, $b$ whilst neglecting the quartic terms, leads to a recovery of the mean field rate equations. Performing power counting on the action $S$, we can now give the natural canonical dimensions for the various parameters appearing in the action: \begin{equation} [t]\sim k^{-2}\qquad [a],[b],[n_A],[n_B]\sim k^d \qquad [\bar a],[\bar b]\sim k^0 \qquad [\lambda_{\{ij\}}]\sim k^{2-d}. \end{equation} Notice that the reaction rates become dimensionless in $d=2$, which we therefore postulate as the upper critical dimension for the system, in agreement with the Smoluchowski prediction. {}From the action $S$, we can see that the propagators for the theory are given by \begin{eqnarray} & & G_{a\bar a}(k,t-t')=\cases{e^{-k^2(t-t')} & for $t>t'$\cr 0 & for $t<t'$\cr} \\ & & G_{b\bar b}(k,t-t')=\cases{e^{-k^2(t-t')\delta} & for $t>t'$\cr 0 & for $t<t'$.\cr} \end{eqnarray} Diagrammatically, we represent $G_{a\bar a}$ by a thin solid line and $G_{b\bar b}$ by a thin dotted line. The vertices for the theory are given in figure 1. 
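That the mean field rate equations are recovered in this way can be made explicit with a short symbolic sketch. Assuming a Python/SymPy environment (the symbol names are illustrative), one convenient route is to drop the gradient and initial-condition terms, impose stationarity of $S$ with respect to $\bar a$ and $\bar b$, and then set the response fields to zero, which discards the quartic vertices:
\begin{verbatim}
import sympy as sp

a, b, abar, bbar = sp.symbols('a b abar bbar')
lAA, lBB, lAB = sp.symbols('lAA lBB lAB', positive=True)

# Interaction part of the action density for homogeneous fields
# (gradient and initial-condition terms omitted)
S_int = (2*lAA*abar*a**2 + lAA*abar**2*a**2
         + 2*lBB*bbar*b**2 + lBB*bbar**2*b**2
         + lAB*abar*a*b + lAB*bbar*a*b + lAB*abar*bbar*a*b)

# Vary with respect to the response fields, then set abar = bbar = 0
da_dt = -sp.diff(S_int, abar).subs({abar: 0, bbar: 0})
db_dt = -sp.diff(S_int, bbar).subs({abar: 0, bbar: 0})
print(da_dt)  # -2*lAA*a**2 - lAB*a*b, the rate equation for a
print(db_dt)  # -2*lBB*b**2 - lAB*a*b, the rate equation for b
\end{verbatim}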
\subsection{Renormalisation} One of the most important features of this theory, as mentioned in the introduction, is the relative simplicity of its renormalisation. Examination of the vertices given in figure 1 reveals that it is not possible to draw diagrams which dress the propagators. Hence the bare propagators are the full propagators for the theory. Consequently, the only renormalisation needed involves the reaction rates $\lambda_{\{ij\}}$, and in particular the diffusion constants (or $\delta$) are {\it not} renormalised. The temporally extended vertex functions for the reaction rates are given by the diagrammatic sums given in figure 2. As is the case in similar theories \cite{L,LC,HC}, these sums may be evaluated exactly, using Laplace transforms: \begin{eqnarray} & & \lambda_{AA}(k,s)={\lambda_{AA}\over 1+\lambda_{AA}C\Gamma(\epsilon/2)(s+{1\over 2}k^2)^{-\epsilon/2}} \label{tevf1} \\ & & \lambda_{BB}(k,s)={\lambda_{BB}\over 1+\lambda_{BB}C\Gamma(\epsilon/2)\delta^{-1}(s/\delta +{1\over 2}k^2)^{-\epsilon/2}} \label{tevf2} \\ & & \lambda_{AB}(k,s)={\lambda_{AB}\over 1+\lambda_{AB}2^{-\epsilon/2}C\Gamma(\epsilon/2)(1+\delta)^{-d/2}(s+ k^2\delta/(1+\delta))^{-\epsilon/2}} \label{tevf3}, \end{eqnarray} where $C=2/(8\pi)^{d/2}$ and $s$ is the Laplace transformed time variable. We can now use these vertex functions to define the three dimensionless renormalised and bare couplings, with $s=\kappa^2$, $k=0$ as the normalisation point: \begin{equation} g_{R_{\{ij\}}}=\kappa^{-\epsilon}\lambda_{\{ij\}} (k,s)|_{s=\kappa^2,k=0}\qquad\quad g_{0_{\{ij\}}}=\kappa^{-\epsilon}\lambda_{\{ij\}}. \end{equation} Consequently, we can define three $\beta$ functions: \begin{eqnarray} & & \beta(g_{R_{AA}})=\kappa{\partial\over\partial\kappa}g_{R_{AA}} = -\epsilon g_{R_{AA}}+\epsilon C\Gamma(\epsilon/2)g_{R_{AA}}^2 \label{b1} \\ & & \beta(g_{R_{BB}})=\kappa{\partial\over\partial\kappa}g_{R_{BB}} = -\epsilon g_{R_{BB}}+\epsilon C\Gamma(\epsilon/2)\delta^{-d/2} g_{R_{BB}}^2 \label{b2} \\ & & \beta(g_{R_{AB}})=\kappa{\partial\over\partial\kappa}g_{R_{AB}} = -\epsilon g_{R_{AB}}+2^{-\epsilon/2} \epsilon C\Gamma(\epsilon/2)(1+\delta)^{-d/2} g_{R_{AB}}^2, \label{b3} \end{eqnarray} and three fixed points $\beta(g_{R_{\{ij\}}}^*)=0$: \begin{eqnarray} & & g_{R_{AA}}^*=(C\Gamma(\epsilon/2))^{-1} \\ & & g_{R_{BB}}^*=(C\Gamma(\epsilon/2)\delta^{-d/2})^{-1} \\ & & g_{R_{AB}}^*=\left(C\Gamma(\epsilon/2){1\over 2} \left({1+\delta\over 2}\right)^{-d/2}\right)^{-1}. \end{eqnarray} Finally, we see from (\ref{tevf1}), (\ref{tevf2}), and (\ref{tevf3}) that the expansion of $g_{0_{\{ij\}}}$ in powers of $g_{R_{\{ij\}}}$ is given by: \begin{equation} g_{0_{\{ij\}}}=g_{R_{\{ij\}}}+{g_{R_{\{ij\}}}^2\over g_{R_{\{ij\}}}^*} +\ldots \label{gexp} \end{equation} \subsection{Callan-Symanzik Equation} We now exploit the fact that physical quantities calculated using the field theory must be independent of the choice of normalisation point. This leads us to a Callan-Symanzik equation: \begin{equation} \left[\kappa{\partial\over\partial\kappa}+\beta(g_{R_{AA}}){\partial \over\partial g_{R_{AA}}}+\beta(g_{R_{BB}}){\partial \over\partial g_{R_{BB}}}+\beta(g_{R_{AB}}){\partial \over\partial g_{R_{AB}}}\right]\langle a\rangle_R=0. \end{equation} However dimensional analysis implies \begin{equation} \left[\kappa{\partial\over\partial\kappa}-2t{\partial\over\partial t}+dn_A{\partial\over\partial n_A}+dn_B{\partial\over\partial n_B}-d\right]\langle a\rangle_R(t,n_A,n_B,g_{R_{\{ij\}}},\delta,\kappa)=0. 
\end{equation} Exactly similar equations hold for $\langle b\rangle_R$. Eliminating the terms involving $\kappa$ and solving by the method of characteristics, we find: \begin{equation} \langle a\rangle_R(t,n_A,n_B,g_{R_{\{ij\}}},\delta,\kappa)=(\kappa^2 t)^{-d/2}\langle a\rangle_R(\kappa^{-2},\tilde n_A(\kappa^{-2}),\tilde n_B(\kappa^{-2}),\tilde g_{R_{\{ij\}}}(\kappa^{-2}),\delta,\kappa), \label{CSS} \end{equation} with the characteristic equations: \begin{equation} 2t{\partial\tilde n_A\over\partial t}=-d\tilde n_A \qquad 2t{\partial\tilde n_B\over\partial t}=-d\tilde n_B \qquad 2t{\partial\tilde g_{R_{\{ij\}}}\over\partial t}=\beta(\tilde g_{R_{\{ij\}}}), \label{gchar} \end{equation} and initial conditions: \begin{equation} \tilde n_A(t)=n_A \quad \tilde n_B(t)=n_B \end{equation} \begin{equation} \tilde g_{R_{AA}}(t)=g_{R_{AA}} \quad \tilde g_{R_{BB}}(t)=g_{R_{BB}} \quad \tilde g_{R_{AB}}(t)=g_{R_{AB}}. \end{equation} These equations have the exact solutions: \begin{equation} \tilde n_A(t')=\left({t\over t'}\right)^{d/2}n_A \qquad \tilde n_B(t')=\left({t\over t'}\right)^{d/2}n_B, \label{RD} \end{equation} and \begin{equation} \tilde g_{R_{\{ij\}}}(t')=g^*_{R_{\{ij\}}}\left(1+{g^*_{R_{\{ij\}}}- g_{R_{\{ij\}}}\over g_{R_{\{ij\}}}(t/t')^{\epsilon/2}}\right)^{-1}. \label{RC} \end{equation} In the large $t$ limit $\tilde g_{R_{\{ij\}}}\rightarrow g^*_{R_{\{ij\}}}$, a relationship which will allow us to relate an expansion in powers of the renormalised couplings $g_{R_{\{ij\}}}$ to an $\epsilon$ expansion using (\ref{CSS}). In our later density calculations we will assume that this asymptotic regime has been reached. \subsection{Tree Diagrams} In order to perform systematic $\epsilon$ expansion calculations we now need to identify the leading and subleading terms in an expansion in powers of $g_{0_{\{ij\}}}$. In calculating $\langle a\rangle$ and $\langle b\rangle$, contributions from tree diagrams are of order $g_{0_{\{ij\}}}^q n_{\{i\}}^{1+q}$, for integer $q$, and densities $n_{\{i\}}=\{n_A,n_B\}$. However, diagrams with $l$ loops will be of order $g_{0_{\{ij\}}}^{q+l} n_{\{i\}}^{1+q}$. The addition of loops makes the power $g_{0_{\{ij\}}}$ higher relative to the power of the densities - so we conclude that the number of loops gives the order of the diagram. The lowest order diagrams contributing to $\langle a\rangle$ and $\langle b\rangle$ are the tree diagrams shown in figure 3. We represent the classical (tree level) density $\langle a\rangle_{cl}$ by a wavy solid line, and $\langle b\rangle_{cl}$ by a wavy dotted line. These sets of diagrams are equivalent to the mean field rate equations, as may be seen by acting on each by their respective inverse Green functions. The second tree level quantities appearing in the theory are the response functions: \begin{eqnarray} & & L(k,t_2,t_1)=\langle a(-k,t_2)\bar a(k,t_1)\rangle \\ & & M(k,t_2,t_1)=\langle b(-k,t_2)\bar a(k,t_1)\rangle \\ & & N(k,t_2,t_1)=\langle b(-k,t_2)\bar b(k,t_1)\rangle \\ & & P(k,t_2,t_1)=\langle a(-k,t_2)\bar b(k,t_1)\rangle \end{eqnarray} which we represent diagrammatically by the thick lines shown in figure 4. These functions can be evaluated analytically, but only in the limit $\langle a\rangle\gg \langle b\rangle$, or $\langle b\rangle\gg \langle a\rangle$. 
The details of this calculation are presented in appendix A, where the following results are derived (for $\langle a\rangle\gg \langle b\rangle$): \begin{eqnarray} & & L(k,t_2,t_1)=\left({1+2\lambda_{AA}n_At_1\over 1+2\lambda_{AA}n_At_2}\right)^2\exp{(-k^2(t_2-t_1))}\label{R1L}\\ & & N(k,t_2,t_1)=\left({1+2\lambda_{AA}n_At_1\over 1+2\lambda_{AA}n_At_2}\right)^{\lambda_{AB}/2\lambda_{AA}} \exp{(-k^2(t_2-t_1)\delta)}\label{R1N}\\ & & P(k,t_2,t_1)=-\lambda_{AB}n_A{(1+2\lambda_{AA}n_At_1)^{ \lambda_{AB}/2\lambda_{AA}}\over(1+2\lambda_{AA}n_At_2)^2}\exp{(-k^2 (t_2-t_1\delta))}\label{R1P} \nonumber \\ & & \qquad \qquad\qquad\qquad\times\int_{t_1}^{t_2}{\exp{(k^2(1-\delta)t')}\over (1+2\lambda_{AA}n_At')^{-1+\lambda_{AB}/2\lambda_{AA}}}dt' \\ & & M(k,t_2,t_1)=-\lambda_{AB}n_B{(1+2\lambda_{AA}n_At_1)^2 \over(1+2\lambda_{AA}n_At_2)^{\lambda_{AB}/2\lambda_{AA}}}\exp{(-k^2 (t_2\delta-t_1))} \nonumber \\ & & \qquad \qquad\qquad\qquad\times\int_{t_1}^{t_2}{\exp{(-k^2(1-\delta)t')}\over (1+2\lambda_{AA}n_At')^{2}}dt'. \label{R1M} \end{eqnarray} An extra check on validity of these response functions is provided by the relations: \begin{eqnarray} & & L(0,t,0)={\partial\langle a(t)\rangle\over\partial n_A} \qquad N(0,t,0)={\partial\langle b(t)\rangle\over\partial n_B} \\ & & P(0,t,0)={\partial\langle a(t)\rangle\over\partial n_B} \qquad M(0,t,0)={\partial\langle b(t)\rangle\over\partial n_A}, \end{eqnarray} which follow from the definition of the response functions, and from the initial condition terms in the action $S$. It is easy to check that the above response functions do indeed satisfy these relations. For the opposite situation where $n_B\gg n_A$ (and hence $\langle b\rangle\gg\langle a\rangle$), we could use a formalism similar to the above for the density calculations. However, it is much simpler to map this case onto the $\langle a\rangle\gg\langle b\rangle$ regime by swapping the labels on the A and B particles, and then relabeling: $$ n_A\leftrightarrow n_B \quad \lambda_{AA}\leftrightarrow\lambda_{BB} \quad D_A\leftrightarrow D_B. $$ We can then obtain the exponents and amplitudes for this second regime with no extra work. This concludes our discussion of the field theory formalism. The framework we have built up allows (in principle) the systematic calculation of fluctuation effects in all circumstances. However, it is only in the case where one of the species is greatly in the majority where the equations (for the tree level densities and response functions) are sufficiently simple for analytic progress to be made. We now turn to use of the field theory in calculating the fluctuation modified densities. \section{Density Calculations} \subsection{Tree Level} The first step in using our field theory to include fluctuation effects is to insert the mean field (tree level) solution into the Callan-Symanzik solution (\ref{CSS}), using the results for the running densities/couplings (\ref{RD}), (\ref{RC}). Since the fixed points for the couplings obey $2g^*_{R_{BB}}<g^*_{R_{AB}}<2g^*_{R_{AA}}$ (when $\delta<1$) it is appropriate to use the mean field solutions derived in section 2. 
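Before doing so, a small numerical sketch (Python/SciPy; the chosen values of $\epsilon$ and $\delta$ are illustrative) confirms the ordering of the fixed points just quoted and the flow of the running coupling (\ref{RC}) onto its fixed point:
\begin{verbatim}
import numpy as np
from scipy.special import gamma

eps, delta = 1.0, 0.5                    # illustrative: d = 1, D_B/D_A = 0.5
d = 2.0 - eps
CG = 2.0 / (8.0 * np.pi)**(d / 2.0) * gamma(eps / 2.0)   # C * Gamma(eps/2)

# Fixed points of the three beta functions
g_AA = 1.0 / CG
g_BB = delta**(d / 2.0) / CG
g_AB = 2.0 * (0.5 * (1.0 + delta))**(d / 2.0) / CG
print(2.0 * g_BB < g_AB < 2.0 * g_AA)                    # True for delta < 1
print(abs(-eps * g_AA + eps * CG * g_AA**2) < 1e-10)     # g_AA is a zero of (b1)

# Running coupling (RC) as a function of t/t': flows onto the fixed point
g_run = lambda r, g0, gs: gs / (1.0 + (gs - g0) / (g0 * r**(eps / 2.0)))
for r in [1.0, 1e2, 1e4, 1e8]:
    print(r, g_run(r, 0.3 * g_AA, g_AA) / g_AA)          # -> 1 as t/t' grows
\end{verbatim}
With these fixed points in hand, we now insert the tree-level densities into the Callan-Symanzik solution (\ref{CSS}).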
For the case where $n_A\gg n_B$, this gives: \begin{equation} \langle a\rangle\sim\left({\Gamma(\epsilon/2)\over (8\pi)^{d/2}}\right)(D_At)^{-d/2}, \end{equation} and \begin{equation} \langle b\rangle\sim F(D_At)^{-\beta}, \end{equation} with \begin{equation} \beta\approx {d\over 2}\left({1+\delta \over 2}\right)^{d/2} \qquad F\approx n_B\left({\Gamma(\epsilon/2)\over n_A(8\pi)^{d/2}}\right)^{\left({ 1+\delta\over 2}\right)^{d/2}}, \end{equation} valid for $n_A^{-2/d}D_A^{-1}\ll t\ll t_{1}$, where \begin{equation} D_At_{1}\approx\left({n_B\over n_A^{\left({1+\delta \over 2}\right)^{d/2}}}\right)^{{2\over d}\left(\left({1+\delta\over 2}\right)^{d/2}-1\right)^{-1}}. \end{equation} These modified crossover times are obtained by using the expressions for the running couplings/densities in the mean field crossovers. Notice that the density decay exponents derived here are the same as those obtained from the Smoluchowski approach. However, as we are performing an $\epsilon$ expansion, we are only strictly justified in retaining leading order $\epsilon$ terms. Consequently we find, for the minority species density decay exponent and amplitude: \begin{equation} \beta=\left({1+\delta\over 2}\right)+O(\epsilon) \qquad F=n_B\left({1\over 4\pi\epsilon n_A}+O(\epsilon^0)\right)^{\left({1+\delta \over 2}\right)+O(\epsilon)}. \end{equation} Eventually, however, as the A particles are decaying away more quickly than the B particles (due to their greater diffusivity when $\delta<1$), we crossover to a second regime where $\langle b\rangle\gg\langle a\rangle$. For $0<\delta<1$, we have: \begin{equation} \langle b\rangle\sim\left({\Gamma(\epsilon/2)\over (8\pi)^{d/2}}\right) (D_Bt)^{-d/2} \end{equation} \begin{equation} \langle a\rangle\sim E(D_Bt)^{-\alpha}, \end{equation} with \begin{eqnarray} & & \alpha\approx {d\over 2}\left({1+\delta^{-1}\over 2}\right)^{d/2}= \left({1+\delta^{-1}\over 2}\right)+O(\epsilon) \\ & & E\approx n_A f(d)\left({\Gamma(\epsilon/2)\over n_B(8\pi)^{d/2}}\right)^{\left({1+\delta^{-1}\over 2}\right)^{d/2}} =n_A f(2)\left({1\over 4\pi\epsilon n_B}+O(\epsilon^0)\right)^{\left({1+\delta^{-1}\over 2}\right) +O(\epsilon)} \end{eqnarray} where \begin{equation} f(d)=\left(1+{[((1+\delta)/2)^{d/2}-1]n_A\over [\delta^{d/2}-((1+\delta)/2)^{d/2}]n_B}\right)^{-1-\left({1+ \delta^{-1}\over 2}\right)^{d/2}\left({((1+\delta)/2)^{d/2}- \delta^{d/2}\over ((1+\delta)/2)^{d/2}-1}\right)}. \end{equation} This result is valid for $t\gg t_{2}$, where \begin{equation} D_Bt_{2}\approx\left({n_Af(d)(1+\delta^{-1})^{d/2}\over n_B^{((1+\delta^{-1})/2)^{d/2}}}\right)^{{2\over d}\left(\left({1+\delta^{-1}\over 2}\right)^{d/2}-1\right)^{-1}}. \end{equation} Note that for $\delta=1$ the first crossover time $t_{1}\rightarrow\infty$ - in this case the two species decay away at the same rate, and so no further crossover occurs. Alternatively if $\delta=0$, then the first regime is left, but the second crossover time $t_{2}\rightarrow\infty$. In that case the minority species finally decays away in the exponential fashion predicted in \cite{PG}. For the intermediate case where $\delta$ is small, but nonzero, the decay exponent for the minority species becomes large in the final regime. The explanation for this result lies in the relatively large diffusivity of the minority A species (if $D_A$ is large) and/or the increased density amplitude for the majority B particles (if $D_B$ is small). Both these effects will lead to an increased rate of decay for the A species. 
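The growth of the final-regime minority exponent as $\delta$ becomes small is easy to see at tree level; a short numerical sketch (Python, $d=1$, illustrative names):
\begin{verbatim}
# Tree-level final-regime exponent of the minority A species at d = 1
alpha_tree = lambda delta, d=1: 0.5 * d * (0.5 * (1.0 + 1.0 / delta))**(0.5 * d)
for delta in [0.5, 0.1, 0.01]:
    print(delta, alpha_tree(delta))   # 0.61, 1.17, 3.55: grows as delta -> 0
\end{verbatim}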
Finally, if the initial conditions are changed such that now $n_B\gg n_A$, with $\delta\neq 0$, then we obtain the same results as for the second of the above regimes for $D_Bt\gg n_B^{-2/d}$, with $f\approx 1$. \subsection{One Loop Results} We now describe the one loop improvements to the tree level result. In the regime $\langle a\rangle\gg\langle b\rangle$, the dominant diagrams will be those where the minimum possible number of $\langle b\rangle_{cl}$ insertions are made. For the majority A species the appropriate diagram is shown in figure 5, where there are no $\langle b\rangle_{cl}$ insertions. This is identical to the one loop diagram for $A+A\rightarrow\emptyset$ evaluated in \cite{L}, which gives, in conjunction with the subleading terms from the tree level: \begin{equation} \langle a\rangle\sim\left({1\over 4\pi\epsilon}+{2\ln 8\pi-5\over 16\pi}+O(\epsilon)\right) (D_At)^{-d/2}. \label{maxexp} \end{equation} In addition, for that subset of diagrams with no $\langle b\rangle_{cl}$ insertions, the decay exponent is exact. More details of this calculation, including a demonstration of the cancellation of divergences, can be found in \cite{L}. Turning now to the one loop calculation for the minority species, the appropriate diagrams are the three shown in figure 6, each of which contains just one $\langle b\rangle_{cl}$ insertion: \begin{eqnarray} (i) & & \qquad {-4\lambda_{AB}\lambda_{AA}^2n_A^2n_B\over (2\lambda_{AA}n_At)^{1+\lambda_{AB}/2\lambda_{AA}}}\int{d^dk\over (2\pi)^d} \int_0^t dt_2\int_0^{t_2} dt_1 (t-t_2) \nonumber \\ & & \qquad\qquad\qquad\qquad\qquad \times{(1+2\lambda_{AA}n_At_1)^2\over (1+2\lambda_{AA}n_At_2)^3}\exp{[-2k^2(t_2-t_1)]} \label{la1}\\ (ii) & & \qquad {-2\lambda_{AB}^2\lambda_{AA}n_A^2n_B\over (2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}}\int{d^dk\over (2\pi)^d} \int_0^t dt_2\int_0^{t_2} dt_1\int_{t_1}^{t_2} dt'{(1+2\lambda_{AA}n_At_1)^2\over (1+2\lambda_{AA}n_At_2)^2} \nonumber \\ & & \qquad\qquad\times{1\over (1+2\lambda_{AA}n_At')^2} \exp{[-k^2(t_2(1+\delta)-2t_1+(1-\delta)t')]} \label{la2} \\ (iii) & & \qquad {\lambda_{AB}^2n_An_B\over (2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}}\int{d^dk\over (2\pi)^d}\int_0^t dt_2\int_0^{t_2} dt_1 {(1+2\lambda_{AA}n_At_1)\over (1+2\lambda_{AA}n_At_2)^2} \nonumber \\ & & \qquad\qquad\qquad\qquad \qquad\qquad\qquad \times\exp{[-k^2(1+\delta)(t_2-t_1)]}. \label{la3} \end{eqnarray} The detail of the evaluation of these diagrams is rather subtle. Essentially we are interested in extracting the most divergent parts of these integrals, which will turn out to be pieces of $O(\epsilon^{-1})$ and $O(\epsilon^0)$. However, we must be careful not to confuse genuine bare divergences (of $O(\epsilon^{-1})$ which must be removed by the renormalisation of the theory), with logarithmic pieces, which we must retain. The divergences arise in diagrams (i) and (iii) as the difference in time $t_2-t_1$ between the beginning and end of the loops tends to zero (in $d=2$). After the process of renormalisation we find corrections of the form: \begin{equation} 1+({\rm constant})\epsilon\ln(({\rm constant})t^{d/2})+O(\epsilon^2). \end{equation} If this series is identified as the expansion of an exponential, then we find that our one loop diagrams (together with subleading components from the tree level) have provided $O(\epsilon)$ corrections to the exponents. Diagrams (i) and (iii) are relatively straightforward to evaluate. 
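The momentum integrations involved here are ordinary Gaussian ones; as a reference, the following SymPy sketch (illustrative names) checks the standard result $\int d^dk/(2\pi)^d\,e^{-\alpha k^2}=(4\pi\alpha)^{-d/2}$ that is used repeatedly in what follows.
\begin{verbatim}
import sympy as sp

k, alpha = sp.symbols('k alpha', positive=True)

# One Cartesian component; the d-dimensional integral is d identical copies,
# so int d^dk/(2 pi)^d exp(-alpha k^2) = (4 pi alpha)^(-d/2)
one_dim = sp.integrate(sp.exp(-alpha * k**2), (k, -sp.oo, sp.oo)) / (2 * sp.pi)
print(sp.simplify(one_dim - 1 / sp.sqrt(4 * sp.pi * alpha)))   # 0

# Diagram (i):   alpha = 2*(t2 - t1)           -> (8 pi (t2 - t1))^(-d/2)
# Diagram (iii): alpha = (1 + delta)*(t2 - t1) -> (4 pi (1+delta)(t2-t1))^(-d/2)
\end{verbatim}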
The $k$ and $t_1$ integrals are elementary, and the final $t_2$ integrals can be done by parts to extract the necessary most divergent pieces (up to $O(\epsilon^0)$). The second diagram of figure 6 is more complicated, and we perform its evaluation in appendix B - although we are only able to extract the logarithmic piece of $O( t^{-\lambda_{AB}/2\lambda_{AA}}\,t^{\epsilon/2}\ln t)$. There will be corrections to this of $O( t^{-\lambda_{AB}/2\lambda_{AA}}\,t^{\epsilon/2})$ (contributing to a modified amplitude) which we have been unable to calculate. We find asymptotically: $$ \;(i)\;{-\lambda_{AB}n_B\over 8\pi(2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}}\left({2t^{ \epsilon/2}(\ln(2\lambda_{AA}n_At)-1)\over\epsilon}+t^{\epsilon/2} (\ln(2\lambda_{AA}n_At)-1)\ln(8\pi)\right. $$ \begin{equation} \left.\quad +{15t^{\epsilon/2}\over 4}-{3\over 2}t^{\epsilon/2} \ln(2\lambda_{AA}n_At)-\int_0^t t_2^{-1+\epsilon/2}\ln(1+ 2\lambda_{AA}n_At_2)dt_2+O(\epsilon)\right)\label{q1} \end{equation} \begin{equation} \;(ii)\;{-\lambda_{AB}^2n_B\over 32\pi\lambda_{AA}(2\lambda_{AA}n_At) ^{\lambda_{AB} /2\lambda_{AA}}}\left(\delta+{1\over 2}(\delta^2-1)\left[\ln\left( {1-\delta\over1+\delta}\right)\qquad\qquad\qquad\qquad\quad \right.\right. \end{equation} $$ \left.\left. \quad\qquad\qquad\qquad\qquad\qquad -\int_{-1}^{1-\delta\over 1+\delta} dv {(1+v)^2\over v^2}\ln(1+v)\right]+O(\epsilon)\right) t^{\epsilon/2}\ln(2\lambda_{AA}n_At) $$ \begin{equation} \;(iii)\;{\lambda_{AB}^2n_B (4\pi(1+\delta))^{-1}\over 2\lambda_{AA}(2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}} \left({2t^{\epsilon/2}\ln(2\lambda_{AA}n_At)\over \epsilon}+t^{\epsilon/2}\ln(2\lambda_{AA}n_At) \ln(4\pi(1+\delta))\right.\label{q2} \end{equation} $$ \left.\quad -t^{\epsilon/2}(\ln(2\lambda_{AA}n_At)-1)-\int_0^t t_2^{-1+\epsilon/2}\ln(1+2\lambda_{AA}n_At_2)dt_2 +O(\epsilon)\right). $$ To one loop accuracy we can make the replacement: $\lambda_{\{ij\}}=\kappa^{\epsilon}g_{0_{\{ij\}}}\rightarrow \kappa^{\epsilon} g_{R_{\{ij\}}}$. These results must now be combined with the subleading terms from the tree level. Using (\ref{gexp}), we find \begin{eqnarray} & &\langle b\rangle\sim {n_B\over (2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}}={n_B\over (2\kappa^{\epsilon}g_{R_{AA}}n_At)^{g_{R_{AB}}/2g_{R_{AA}}}}\left(1 -{g_{R_{AB}}\over 2g^*_{R_{AA}}} \right.\nonumber \\ & & \left. \quad -{g^2_{R_{AB}}\over 2g_{R_{AA}}g^*_{R_{AB}}}\ln(2\kappa^{\epsilon}g_{R_{AA}}n_At)+ {g_{R_{AB}}\over 2g^*_{R_{AA}}}\ln(2\kappa^{\epsilon}g_{R_{AA}}n_At)+O(g_R^2)\right). \label{q3} \end{eqnarray} If we now insert explicit $\epsilon$ expanded values for the fixed points $g^*_{R_{\{ij\}}}$, then we discover that the bare divergences cancel between (\ref{q1}), (\ref{q2}), and (\ref{q3}). With insertion into the Callan-Symanzik solution (\ref{CSS}), we also find that the pieces we have left as integrals in (i) and (iii) (which are $O( t^{\epsilon/2}(\ln t)^2)$) also mutually cancel. Eventually we find: $$ \langle b\rangle\sim({\rm const.}) t^{-{d\over 2}\left({1+\delta\over 2}\right)^{d/2}}\left(1+{\epsilon(1+\delta)\over 8}\left[1-2(1+\delta)\left({\delta\over 4}+{\delta^2-1\over 8}\left(\ln\left({1-\delta\over1+\delta}\right) \right.\right.\right.\right. 
$$ \begin{equation} \left.\left.\left.\left.\qquad\qquad \qquad\qquad -\int_{-1}^{1-\delta\over 1+\delta}dv{(1+v)^2\over v^2}\ln(1+v)\right) \right)\right]\ln(({\rm const.})t^{d/2})+O(\epsilon^2)\right), \label{bfirst} \end{equation} where we have neglected $O(\epsilon)$ pieces which, aside from the prefactor, are time {\it independent}. These terms contribute only to the density amplitude. We now evaluate the integral in (\ref{bfirst}), using $$ \int_{-1}^{1-\delta\over 1+\delta}{\ln(1+v)\over v}dv=\int_0^{2 \over 1+\delta}{\ln u\over u-1}du=\int_0^1{\ln u\over u-1}du+\int_1^ {2\over 1+\delta}{\ln u\over u-1}du $$ \begin{equation} \qquad\qquad\qquad\qquad\qquad ={\pi^2\over 6}-f\left\{{2\over 1+\delta}\right\}, \label{minexp} \end{equation} where $f\{x\}$ is the dilogarithm function \cite{AS}. The other parts of the integral are elementary. The next step is to $\epsilon$ expand the RG improved tree level result: \begin{equation} {d\over 2}\left({1+\delta\over 2}\right)^{d/2}=\left({1+\delta\over 2}\right)\left(1- {\epsilon\over 2}\left(1+\ln\left({1+\delta\over 2}\right)\right)+ O(\epsilon^2)\right). \label{tei} \end{equation} Then, exponentiating the $\epsilon$ expansion in (\ref{bfirst}), we find $\langle b\rangle=O(t^{-\beta})$, where \begin{eqnarray} & & \beta=\left({1+\delta\over 2}\right)\left(1-{\epsilon\over 2}\left[{3\over 2}+\ln\left({1+\delta\over 2}\right)- {\delta(1+\delta)\over 4}\left[1+2\ln\left({1+\delta\over 2}\right)\right] \right.\right. \nonumber \\ & & \left.\left.\qquad\qquad -{1\over 4}(\delta^2-1)\left(1+(1+\delta)\left[f\left\{{2\over 1+\delta}\right\}-{\pi^2\over 6}\right]\right)\right]\right) +O(\epsilon^2). \label{bsecond} \end{eqnarray} $\beta$ is plotted as a function of $\delta$ for $\epsilon=1$ ($d=1$) in figure 9. For the case where $\delta=1$, we recover the decay rate $\langle b\rangle=O(t^{-d/2})$. This is to be expected, as when $\delta=1$ we are effectively again dealing with a single species reaction-diffusion system (at least for $d<2$). In that case the density decay exponent is known to all orders in perturbation theory \cite{L}, and is in agreement with our result. For the case where $\delta=0$ and $d=1$, the decay exponent is also known exactly to be $\langle b\rangle=O( t^{-0.375})$ \cite{D3}. This can be compared with our result, where we find \begin{equation} \beta={1\over 16}+{1\over 4}\ln 2 +{\pi^2\over 64}\approx 0.39 \qquad (\delta=0). \end{equation} Consequently, this answer is a modest improvement over the Smoluchowski result derived in section 2, and also in \cite{KBR}. For the case $n_B\gg n_A$ (and hence $\langle b\rangle\gg\langle a\rangle$), we could follow the same route as described above, by evaluating the one loop diagrams shown in figures 7 and 8. However, as we mentioned in the last section we can much more easily obtain these corrections by swapping the labels on the A and B particles, and then relabeling: $$ n_A\leftrightarrow n_B \quad \lambda_{AA}\leftrightarrow\lambda_{BB} \quad D_A\leftrightarrow D_B. $$ Following this procedure, the majority species amplitude/exponent can be found by taking $D_A\rightarrow D_B$ in equation (\ref{maxexp}): \begin{equation} \langle b\rangle\sim\left({1\over 4\pi\epsilon}+{2\ln 8\pi-5\over 16\pi}+O(\epsilon)\right)(D_Bt)^{-d/2}. 
\label{ola} \end{equation} We can obtain the one loop minority species exponent by substituting $\delta\rightarrow \delta^{-1}$ in equation (\ref{bsecond}): \begin{equation} \langle a\rangle=O(t^{-\alpha}), \end{equation} where $$ \alpha=\left({1+\delta^{-1}\over 2}\right)\left(1-{\epsilon\over 2}\left[{3\over 2}+\ln\left({1+\delta^{-1}\over 2}\right)- {\delta^{-1}(1+\delta^{-1})\over 4}\left[1+2\ln\left({1+\delta^{-1}\over 2}\right)\right] \right.\right. $$ \begin{equation} \left.\left. -{1\over 4}(\delta^{-2}-1)\left(1+(1+\delta^{-1})\left[f\left\{{2\over 1+\delta^{-1}}\right\}-{\pi^2\over 6}\right]\right)\right]\right) +O(\epsilon^2). \label{olalph} \end{equation} Notice, however, that in forming the one loop corrections for the minority species exponent, we have had to expand the RG improved tree level result: \begin{equation} {d\over 2}\left({1+\delta^{-1}\over 2}\right)^{d/2}= \left({1+\delta^{-1}\over 2}\right)\left(1-{\epsilon\over 2}\left(1+\ln\left({1+\delta^{-1}\over 2}\right)\right)+O(\epsilon^2) \right). \label{treexp1} \end{equation} The error arising from this expansion will become large as $\delta$ becomes small. Eventually this inaccuracy will cause the exponent to reach a maximum and then {\it decrease} as $\delta$ is further reduced - behaviour which is clearly unphysical. In order to reduce the error, and to ensure that the expansion in equation (\ref{treexp1}) is qualitatively correct, we need to retain the $O(\epsilon^2)$ terms. Hence the one loop exponent in equation (\ref{olalph}) should be treated with some caution - terms of order $O(\epsilon^2)$ will probably be required for precise results. Consequently the (non $\epsilon$ expanded) RG improved tree level result given in the last section may be more accurate in this regime. In figure 10 we have plotted the one loop exponent $\alpha$ as a function of $\delta$, for $d=1$ ($\epsilon=1$), in the region $0.7\leq\delta\leq 1$, where the exponent is still {\it increasing} for decreasing $\delta$. In principle, calculations can also be made for the case with $n_A\gg n_B$, but where we have crossed over to the regime $\langle b\rangle\gg\langle a\rangle$ (for $0<\delta<1$ and times $t\gg t_{2}$). However, a rigorous evaluation of the one loop diagrams is now much more difficult, as the functional forms for the densities and response functions will change over time. Nevertheless, since the above corrections to the exponents come from asymptotic logarithmic terms, it is plausible to suppose that the new exponent corrections will be dominated by contributions from the final asymptotic regime. If this is indeed the case, then the one loop exponents (though {\it not} the amplitudes) will be unchanged from the previous results (equations (\ref{ola}) to (\ref{olalph})). This calculation will, however, suffer from the same problem as described above. \subsection{$d=d_c$} For the case $d=d_c=2$ we expect logarithmic corrections to the decay exponents, as the reaction rates $\lambda_{\{ij\}}$ are marginal parameters at the critical dimension. 
We can find the running couplings from the characteristic equation (\ref{gchar}) by taking the limit $\epsilon\rightarrow 0$ in equations (\ref{b1}), (\ref{b2}), and (\ref{b3}): \begin{equation} \tilde g_{R_{AA}}(\kappa^{-2})={g_{R_{AA}}\over 1+g_{R_{AA}}C\ln(\kappa^2t)}\sim (C\ln t)^{-1} \end{equation} \begin{equation} \tilde g_{R_{BB}}(\kappa^{-2})={g_{R_{BB}}\over 1+g_{R_{BB}}C\delta^{-1} \ln (\kappa^2t)}\sim (C\delta^{-1}\ln t)^{-1} \end{equation} \begin{equation} \tilde g_{R_{AB}}(\kappa^{-2})={g_{R_{AB}}\over 1+g_{R_{AB}}C(1+\delta)^{-1}\ln(\kappa^2t)}\sim (C(1+\delta)^{-1}\ln t)^{-1}, \end{equation} where we have taken the asymptotic limits. Corrections to the asymptotic running couplings will be an order $(\ln t)^{-1}$ smaller, and consequently these asymptotic expressions will only be correct at very large times. Hence our expressions for the densities will only be valid when both this condition, and the crossover time constraints given below, are satisfied. In what follows we shall assume the validity of the first of these two conditions. Notice that the asymptotic running couplings are still ordered $2\tilde g_{R_{BB}}<\tilde g_{R_{AB}}<2\tilde g_{R_{AA}}$ for $\delta<1$, so we can use the mean field solutions derived in section 2 as the basis for the RG improved tree level exponents and amplitudes. Making use of the Callan-Symanzik solution (\ref{CSS}) and the above running couplings, we find for $\langle a\rangle\gg\langle b\rangle$: \begin{eqnarray} & & \langle a\rangle\sim {\ln t\over 8\pi D_At} \\ & & \langle b\rangle\sim {n_B\over (8\pi n_AGD_At/\ln t)^{(1+\delta)/2}}, \end{eqnarray} where $G=\exp\left({4\pi\over g_{R_{AA}}}\left(1-{(1+\delta)g_{R_{AA}}\over g_{R_{AB}}}\right)\right)$ is a non-universal amplitude correction. Note that the next order terms for the minority species are suppressed by a factor of only $(\ln \ln t)/(\ln t)$. Using our expressions for the running couplings/densities in the mean field crossovers, we find that these expressions are valid for times $D_A^{-1}n_A^{-1}\ln t\ll t\ll T_1$, where \begin{equation} (D_AT_1/\ln T_1)\approx\left({(Gn_A)^{(1+\delta)/2}\over n_B}\right)^{2\over 1-\delta}. \end{equation} For the case $\delta<1$ the system will eventually enter a second regime, where now the B species will be in the majority. We have (for $\delta\neq 0$): \begin{eqnarray} & & \langle b\rangle\sim{\ln t\over 8\pi D_Bt} \\ & & \langle a\rangle\sim{n_A K\over (8\pi n_BHD_Bt/\ln t)^{(1+\delta^{-1})/2}}, \end{eqnarray} with \begin{equation} H=\exp\left({4\pi\delta\over g_{R_{BB}}}\left(1-{(1+\delta^{-1})g_{R_{BB}}\over g_{R_{AB}}}\right)\right) \qquad K=\left(1+{n_A\over n_B}\right)^{\delta^{-1}-1\over 2}. \end{equation} This is valid for times when $t\gg T_2$, where \begin{equation} (D_BT_2/\ln T_2)\approx\left({(Hn_B)^{(1+\delta^{-1})/2}\over n_A(1+\delta^{-1})K}\right)^{2\over 1-\delta^{-1}}. \end{equation} Alternatively, if we begin with $n_B\gg n_A$, then for $\delta\neq 0$ and $(D_Bt/\ln t)\gg n_B^{-1}$, we have the same results as for the second of the above cases, with $K\approx 1$. Interestingly, the logarithmic corrections we have derived in this section using the RG approach differ slightly from the Smoluchowski results given in section 2. \section{Conclusion} In this paper we have made a comparison of two methods for treating fluctuation effects in a reaction-diffusion system. 
We have found that the Smoluchowski and field theory approaches are rather similar - the Smoluchowski approximation, for $d<2$, giving the same exponents as the renormalisation group improved tree level in the field theory. In addition, we have gone on to calculate the field theoretic one loop corrections, which have yielded improved values for the exponents. The advantage of the field theory is that it provides a systematic way to calculate these corrections - a procedure which is lacking in the Smoluchowski approach. Furthermore the use of renormalisation group techniques has demonstrated universality in the asymptotic amplitudes and exponents, in that, for $d<2$, they only depend on the diffusivities and the initial densities, and not on the reaction rates. The theory we have developed in this paper can easily be extended to slightly different situations. Consider first an annihilation/coagulation reaction-diffusion system, where the following reactions occur: $$ A+A\rightarrow A\qquad B+B\rightarrow B\qquad A+B\rightarrow\emptyset. $$ The Smoluchowski approach differs from before only in the absence of factors of $2$ in the rate equation terms describing the same species reactions. Consequently, if we begin with $n_A\gg n_B$ then the minority species will decay as \begin{equation} b=O\left(t^{-d\left({1+\delta\over 2}\right)^{d/2}}\right) \qquad (d<2). \end{equation} On the other hand, the field theory description lacks only the factors of $2$ in the action (\ref{action}). If this difference is followed through then the decay exponent in the RG improved tree level is seen to be the same as in the Smoluchowski approach. However, this difference of a factor of $2$ has a major effect on the response functions (where this factor appears as a power), and as a result the new one loop corrections will be different from those calculated in section 4.2. These results should be compared with the exact solution \cite{F1,F2,BAV} for the minority species decay rate $b=O(t^{-\gamma})$, where: \begin{equation} \gamma={\pi\over 2\cos^{-1}(\delta/(1+\delta))}. \end{equation} Note that in this case, although the Smoluchowski answer is qualitatively correct, it deviates considerably from the exact answer. Hence we can see that application of the Smoluchowski approach does not always lead to accurate exponents. Another possible extension is to consider reaction-diffusion systems with more than two species of particle. For example, examining a three species system, we could have the reactions: $$ A+A\rightarrow\emptyset \qquad A+B\rightarrow\emptyset \qquad A+C\rightarrow\emptyset $$ $$ B+B\rightarrow\emptyset \qquad B+C\rightarrow\emptyset \qquad C+C\rightarrow\emptyset. $$ Analysis of this situation is very similar to before, and we merely remark that in the appropriate asymptotic regimes the Smoluchowski and RG improved tree level exponents (consisting of ratios of diffusion constants) are once again identical. Hence the convergence between the Smoluchowski exponents and those obtained from the RG improved tree level is fairly robust, and is not simply confined to the two species systems we have previously been considering. A further possibility is to analyse the case where we have a continuous distribution of diffusivities, but with only a {\it single} reaction channel. This has been studied from the Smoluchowski point of view by Krapivsky {\it et al.} \cite{KBR}, and it would be interesting to extend our RG methods to include this situation. 
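A rough numerical version of this comparison (a sketch: we take the exact result to apply in $d=1$ and evaluate the Smoluchowski exponent there; the function names are illustrative):
\begin{verbatim}
import numpy as np

def gamma_exact(delta):
    # exact minority exponent quoted above
    return np.pi / (2.0 * np.arccos(delta / (1.0 + delta)))

def gamma_smol(delta, d=1):
    # Smoluchowski estimate d*((1+delta)/2)**(d/2), evaluated at d = 1
    return d * (0.5 * (1.0 + delta))**(0.5 * d)

for delta in [0.0, 0.5, 1.0]:
    print(delta, gamma_smol(delta), gamma_exact(delta))
# delta = 1: Smoluchowski gives 1, while the exact exponent is 3/2
\end{verbatim}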
Our theory could also be employed to consider clustered immobile reactants - a generalisation of the $\delta=0$ case included in our calculations. This situation has been analysed by Ben-Naim \cite{B}, using the Smoluchowski approach, where the dimension of the cluster $d_I$ was found to substantially affect the kinetics. Specifically, for codimensionality $d-d_I<2$ (in a space of dimension $d$) a finite fraction of the impurities was found to survive, whereas for $d-d_I\geq 2$ the clusters decayed away indefinitely. The formalism we have presented in this paper could be adapted to study this clustered impurity problem, where calculations could be made without reliance on the Smoluchowski approach. \ \noindent{\bf Acknowledgments.} \noindent The author thanks John Cardy for suggesting this problem and for many useful discussions. Financial support from the EPSRC is also acknowledged. \section{Appendix A: Response Functions} Obtaining an exact analytic expression for the response functions is, in general, very hard. Suppose we define the ``trunk'' to be the line of propagators onto which the density lines are attached, as shown at the bottom of figure 4. Difficulties arise from diagrams where the ``trunk'' changes from one propagator into the other, and then back again, as shown in last of the diagrams for the $L$ response function in figure 4. If diagrams of this type are initially excluded then progress can be made. Consider first the two subseries shown in figure 11, for the functions $\xi(k,t_2,t_1)$ and $\theta(k,t_2,t_1)$, where diagrams of the above kind have been excluded. These series can be summed exactly (using the same technique as described in \cite{L}), giving: \begin{equation} \xi(k,t_2,t_1)=\exp{\left(-k^2(t_2-t_1)\right)}\exp{\left(-\int_{t_1}^ {t_2}(4\lambda_{AA}a+\lambda_{AB}b)dt\right)} \end{equation} \begin{equation} \theta(k,t_2,t_1)=\exp{\left(-k^2(t_2-t_1)\delta\right)}\exp{\left( -\int_{t_1}^{t_2}(4\lambda_{BB}b+\lambda_{AB}a)dt\right)}. \end{equation} The full response functions are now given by the diagrammatic equations shown in figure 12, where all possible diagrams are included. Written out explicitly these give: \begin{eqnarray} & & L(k,t_2,t_1)=\xi(k,t_2,t_1)-\lambda_{AB}\int_{t_1}^{t_2} \xi(k,t_2,\tau) a(\tau) M(k,\tau,t_1)d\tau \\ & & M(k,t_2,t_1)=-\lambda_{AB}\int_{t_1}^{t_2}\theta(k,t_2,\tau) b(\tau)L(k,\tau,t_1)d\tau \\ & & N(k,t_2,t_1)=\theta(k,t_2,t_1)-\lambda_{AB}\int_{t_1}^{t_2} \theta(k,t_2,\tau) b(\tau) P(k,\tau,t_1)d\tau \\ & & P(k,t_2,t_1)=-\lambda_{AB}\int_{t_1}^{t_2}\xi(k,t_2,\tau) a(\tau)N(k,\tau,t_1)d\tau. \end{eqnarray} In general this set of coupled integral equations is intractable - however we can make progress in the limit where $\langle a\rangle\gg \langle b\rangle$, or $\langle b\rangle\gg \langle a\rangle$. Considering the case where $\langle a\rangle\gg \langle b\rangle$, the dominant contributions to the response functions come from diagrams with the minimum possible number of $\langle b\rangle_{cl}$ density line insertions. Accordingly, we can now truncate the full diagrammatic equations, as shown in figure 13. Notice that to this order $L$, $N$, and $P$ contain no $\langle b\rangle_{cl}$ density insertions, whereas $M$ must contain one such insertion. 
In this approximation we can now perform the integrals inside the $\xi$ and $\theta$ functions, using the appropriate mean field density: \begin{equation} \int_{t_1}^{t_2}(4\lambda_{AA}a+\lambda_{AB}b)dt\approx\int_{t_1} ^{t_2}{4\lambda_{AA}n_A\over 1+2\lambda_{AA}n_At}dt=\ln\left({ 1+2\lambda_{AA}n_At_2\over 1+2\lambda_{AA}n_At_1}\right)^2 \end{equation} \begin{equation} \int_{t_1}^{t_2}(4\lambda_{BB}b+\lambda_{AB}a)dt\approx\int_{t_1} ^{t_2}{\lambda_{AB}n_A\over 1+2\lambda_{AA}n_At}dt=\ln\left({ 1+2\lambda_{AA}n_At_2\over 1+2\lambda_{AA}n_At_1}\right)^{ \lambda_{AB}/2\lambda_{AA}}, \end{equation} and therefore \begin{equation} \xi(k,t_2,t_1)=\left({1+2\lambda_{AA}n_At_1\over 1+2\lambda_{AA} n_At_2}\right)^2\exp{(-k^2(t_2-t_1))} \end{equation} \begin{equation} \theta(k,t_2,t_1)=\left({1+2 \lambda_{AA}n_At_1\over 1+2\lambda_{AA}n_At_2}\right)^{\lambda_{AB}/ 2\lambda_{AA}}\exp{(-k^2(t_2-t_1)\delta)}. \end{equation} Using these expressions, it is now straightforward to derive the response functions given in equations (\ref{R1L}), (\ref{R1N}), (\ref{R1P}), and (\ref{R1M}). \section{Appendix B: A One Loop Integral} For the case where $\langle a\rangle\gg\langle b\rangle$ the hardest of the three diagrams of figure 6 to evaluate is (ii) - see equation (\ref{la2}). We shall evaluate it first in $d=2$, and then deduce its form in $d=2-\epsilon$. Notice that the extra integration resulting from the $\langle b\rangle_{cl}$ insertion in the loop ensures that this diagram is not divergent. Taking the asymptotic part of the $t_1$ and $t'$ pieces, we find: $$ {-\lambda_{AB}^2n_B\over 2\lambda_{AA}(2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}}\int {d^2k\over (2\pi)^2}\int_0^t dt_2\int_0^{t_2}dt_1\int_{t_1}^{t_2} dt'{(2\lambda_{AA}n_A)^2 t_1^2\over (1+2\lambda_{AA}n_At_2)^2\,t'^2} $$ \begin{equation} \qquad\qquad\qquad\qquad\qquad\qquad\times\exp{(-k^2(t_2(1+\delta) -2t_1+(1-\delta)t'))}. \end{equation} The $k$ and $t'$ integrals are elementary, giving \begin{equation} {-\lambda_{AB}^2n_B\over 8\pi\lambda_{AA}(2\lambda_{AA}n_At)^{\lambda_{AB}/2\lambda_{AA}}} \int_0^t{dt_2 (2\lambda_{AA}n_A)^2\over (1+2\lambda_{AA}n_At_2)^2} \int_0^{t_2} dt_1 t_1^2 \end{equation} $$ \times\left({1\over (t_2(1+\delta)-2t_1)}\left[{1\over t_1}-{1\over t_2}\right]+{1-\delta\over (t_2(1+\delta)-2t_1)^2}\ln\left({2t_1\over (1+\delta)t_2}\right)\right). $$ Although the first part of the $t_1$ integral is straightforward, the second piece involving the logarithm is more difficult. However, if we make the transformation \begin{equation} v={2t_1\over (1+\delta)t_2}-1, \end{equation} we find: \begin{equation} \int_0^{t_2}dt_1 {(1-\delta)t_1^2 \over (t_2(1+\delta)-2t_1)^2}\ln\left({2t_1\over (1+\delta)t_2}\right)= {1\over 8}(1-\delta^2)t_2\int_{-1}^{1-\delta\over 1+\delta}dv{(1+v)^2\over v^2}\ln(1+v), \end{equation} where all time dependency has been removed from the integral limits. The final $t_2$ integral is then easy to perform, and we end up with: \begin{equation} {-\lambda_{AB}^2n_B\over 32\pi\lambda_{AA}(2\lambda_{AA}n_At)^{\lambda_{AB} /2\lambda_{AA}}}\left(\delta+{1\over 2}(\delta^2-1)\left[\ln\left( {1-\delta\over1+\delta}\right)\qquad\qquad\qquad\qquad\qquad\quad \right.\right. \end{equation} $$ \left.\left. \quad\qquad\qquad\qquad\qquad\qquad\qquad\qquad -\int_{-1}^{1-\delta\over 1+\delta} dv {(1+v)^2\over v^2}\ln(1+v)\right]\right) \ln(2\lambda_{AA}n_At). $$ However, we now need to extend this analysis to determine the behaviour of the integral in $d=2-\epsilon$. 
If we take the asymptotic part of all the pieces inside the integral, and perform power counting, we find that it should scale as $t^{-\lambda_{AB}/2\lambda_{AA}}\,t^{\epsilon/2}$. However, this procedure is not strictly valid, as in moving to the asymptotic version a false $t_2=0$ divergence is created. Nevertheless, the integral is dominated by contributions from late times where arguments based on power counting should be valid. Hence in $d=2-\epsilon$ we find: \begin{equation} {-\lambda_{AB}^2n_B\over 32\pi\lambda_{AA}(2\lambda_{AA}n_At)^{\lambda_{AB} /2\lambda_{AA}}}\left(\delta+{1\over 2}(\delta^2-1)\left[\ln\left( {1-\delta\over1+\delta}\right)\qquad\qquad\qquad\qquad\qquad\quad \right.\right. \end{equation} $$ \left.\left. \quad\qquad\qquad\qquad\qquad\qquad -\int_{-1}^{1-\delta\over 1+\delta} dv {(1+v)^2\over v^2}\ln(1+v) \right]+O(\epsilon)\right)t^{\epsilon/2}\ln(2\lambda_{AA}n_At). $$ Further subleading corrections (in time), which we have not calculated, will lack the logarithm factor, and so will contribute to the {\it amplitude} for the minority species density. \newpage
\section*{Introduction} Vertex systems, which were originally introduced for two-dimensional ferroelectric models with the so-called ``ice rule'', have rapidly evolved in the last quarter of a century and have led to a tremendous development in the statistical physics of soluble models. New concepts and new mathematical structures have been discovered, essentially due to the introduction of the method of quantum inverse scattering. Recently the solubility conditions for vertex systems, known as the Yang-Baxter equations, which generalize the star-triangle relations for spin systems, have been shown to have an expression in terms of quantum groups. These new objects, introduced by Drinfeld \cite{drinf} and Jimbo \cite{jimbo}, are essentially matrix groups with noncommutative elements. In this paper we shall be concerned with the $SL_q$(2) group, which is a $q$-deformation of the standard $SL$(2) group \cite{bm}.\\
Interestingly, when one seeks a representation of the $SL_q$(2) group by the Weyl operators associated with a quantum degree of freedom $Q$ and its canonical conjugate momentum $P$, one obtains a new soluble vertex system having on horizontal bonds random variables taking only two values ($\pm$1/2), and on vertical bonds random variables taking an infinite number of values (a kind of infinite spin). Physically one may view the system as a two-dimensional array of vertical quantized springs coupled to binary horizontal devices. Alternatively, the discrete vertical variables are analogous to ``heights'' in face models (or SOS models) of statistical mechanics, for which there exist a Yang-Baxter algebra and Bethe-Ansatz solutions \cite{hv}, whereas the horizontal variables remain standard arrow variables of Lieb's six-vertex model. In this respect, the new soluble model may be viewed as having a mixed face-vertex nature.\\
Section 1 is devoted to the description of the system. We show how the construction of the vertex operator $L$ is performed using $Q$ and $P$. We also indicate how a local vacuum may be chosen in order to be able to apply the method of quantum inverse scattering and to obtain the Bethe Ansatz equations.\\
In section 2, we diagonalize the transfer matrix of the model using this conventional technique and obtain the free energy per site. In analogy to the six-vertex model, we derive the tensor representation of the $SL_q$(2) group, the generators of which have a remarkable structure which is parallel to that of the six-vertex model. At the value $q = i$, there is also a ``free fermion'' limit, and fermion-like operators may be expressed in terms of the Bose degrees of freedom $Q_{j}$ and $P_{j}$ for $j = 1, 2, \cdots, N.$ The generators of the tensor representation of $SL_q$(2) in this model are close to the ``screening'' operators of Dotsenko and Fateev \cite{dotko} in their treatment of Conformal Field theory using the Coulomb gas picture. If a Hamiltonian operator for a chain with degrees of freedom $Q_{j}$ and $P_{j}$ can be found so that it commutes with the transfer matrix of the model, one may be able to establish a connection with a lattice Coulomb gas. Since the critical behavior of these models is the same, there is strong evidence that such a connection may exist. We conclude by comparing our model with those arising from the lattice version of the quantum Sine-Gordon or the quantum Non-linear Schr\"odinger equation (Faddeev, Korepin, Kulish, Sklyanin et al.
\cite{fada}, \cite{kul}, \cite{kora}), and those arising from the bosonisation of the six-vertex model using the Holstein-Primakoff transformation (Y. K. Zhou \cite{zhou}). Finally we give some future directions of investigation. \newpage \section{The vertex system on a square lattice} \subsection{Formulation} The statistical system we consider is made up of elementary vertices consisting of a pair of Ising variables $\sigma$ and $\sigma^{\prime}$ on horizontal bonds and other random variables $\xi$ and $\xi^{\prime}$ on vertical bonds. For each set of values of the 4 random variables $\sigma$, $\sigma^{\prime}$ and $\xi$, $\xi^{'}$, a Boltzmann weight $W(\xi,\xi^{'};\sigma,\sigma^{'})$ is assigned (see fig.1). \input{dessin} In the 1970's, R. J. Baxter \cite{baxa} showed that if vertices satisfy the triangle relations (nowadays called the Yang-Baxter equations), then their horizontal row transfer matrices form a commuting set of operators with respect to a ``spectral parameter'' introduced by the Russian school \cite{fada}. The triangle equations are the analog of the star-triangle relations for spin systems, which then yield the same property of commuting horizontal row transfer matrices. The triangle relations state that the partition functions of the following two triangles are the same for every configuration of the random variables on the open external bonds (i.e. $\sigma_1$,$\sigma_2$,$\sigma_3$,$\sigma_4$,$\xi_1$,$\xi_2$) (see fig.2).\\
\begin{figure}[thp] \unitlength=1cm \begin{picture}(10,4.5) \put(0.8,3){\line(5,-1){4.5}} \put(0.75,2.5){\line(5,1){4.5}} \put(4.7,4){\line(0,-1){2.5}} \put(4.5,4){\line(0,-1){2.5}} \put(7.5,3.3){\line(5,-1){4.5}} \put(8,2){\line(5,1){4.5}} \put(8.9,4){\line(0,-1){2.5}} \put(8.7,4){\line(0,-1){2.5}} \put(5.9,3.5){\shortstack{$\sigma_1$}} \put(7.5,2){\shortstack{$\sigma_3$}} \put(8.5,1.2){\shortstack{$\xi_2$}} \put(4.55,1.2){\shortstack{$\xi_2$}} \put(4.9,2.5){\shortstack{$\xi$}} \put(8.2,2.5){\shortstack{$\xi$}} \put(4.55,4.3){\shortstack{$\xi_1$}} \put(8.5,4.3){\shortstack{$\xi_1$}} \put(12.8,3){\shortstack{$\sigma_1$}} \put(6,2){\shortstack{$\sigma_2$}} \put(12.8,2.3){\shortstack{$\sigma_2$}} \put(6.6,2.6){\shortstack{=}} \put(10,2){\shortstack{$\sigma$}} \put(10,3.1){\shortstack{$\sigma'$}} \put(3.5,2){\shortstack{$\sigma'$}} \put(3.5,3.1){\shortstack{$\sigma$}} \end{picture} \caption[]{Summation is performed on $\sigma,\; \sigma'$ and $\xi$} \label{crossing} \end{figure} \newpage Here, as it stands, we note the necessity of a third vertex having only Ising variables on the left and right sides. The triangle relations are the necessary conditions for the calculation of the partition function of the model by Bethe Ansatz techniques and consequently for the thermodynamics of the system. Note that when $\xi=\sigma=\pm1$, we recover the standard triangle relations of the six- or eight-vertex models solved by E. H. Lieb \cite{lieb} and R. J. Baxter \cite{baxa}. A particular vertex system having unequal numbers of random variables on horizontal and vertical bonds but verifying the triangle relations was solved by R. Z. Bariev and Yu. V. Kozhinov \cite{bara}, and a general discussion of such vertices is presented by H. J. de Vega \cite{vega}.\\
\indent An appropriate way of handling the star-triangle relations (or Yang-Baxter equations) consists of using an operator formulation.
We associate to a vertex with Ising variables on horizontal bonds a 2$\times$2 matrix whose matrix elements $L_{\sigma\sigma^{'}}$ are labelled by $\sigma$ and $\sigma^{\prime}$ (see fig.1): \begin{displaymath} L=\left(\begin{array}{ccccccc}L_{11} & = & \alpha & \hspace{1cm} & L_{1-1} & = &\beta^-\\ L_{-11} & = & \beta^{+} & \hspace{1cm} & L_{-1-1} & = & \delta \end{array} \right) \end{displaymath}\\ $\alpha$, $\beta^\pm$, $\delta$ are themselves operators in a ``vertical'' Hilbert space with matrix elements labelled by $\xi$ and $\xi^{'}$. Anticipating on the existence of a ``spectral parameter'' \cite{fadb}, the triangle-relations take up the following compact form:\\ \begin{equation} R(\frac{u}{v})L(u)\otimes L(v)=L(v)\otimes L(u)R(\frac{u}{v}), \end{equation}\\ where $R$ is associated with the standard vertex with only Ising variables on the bonds.\\ Eq(1) is rather general, one may consider the set of all $L$ satisfying (1) with the same $R$; in particular one may choose $R$ to be that of a standard symmetric six-vertex model. In this case the matrix elements $\alpha$(u), $\delta$(u), $\beta^\pm$ of $L$ are shown to be expressible in terms of the generators of the quantum group $SL_q$(2): $L_z$, $L^{+}$, $L^{-}$ which obey the defining relations (see \cite{wieg}):\\ $$[L_z,L^\pm ] = \pm L^\pm \hspace{0.5cm}{\mbox or}\hspace{0.5cm} q^{L_{z}}L^{\pm} = q^{\pm1}L^{\pm}q^{L_{z}},$$ \begin{equation} [ L^+ ,L^- ] = \frac{q^{2L_z} - q^{-2L_z}}{q - q^{-1}}. \end{equation} Then one has the expressions ( see Wiegmann and Zabrodin \cite{wieg} ) \begin{equation} \beta^{\pm} = L^{\pm} , \end{equation} $$\alpha(u) = \frac{uq^{L_z} - u^{-1}q^{-L_z}}{q - q^{-1}} , $$ \begin{equation} \delta(u) = \frac{uq^{-L_z} - u^{-1}q^{L_z}}{q - q^{-1}} . \end{equation} Thus the problem is reduced to finding the appropriate representations of $SL_q(2)$ which corresponds to the definition of the vertex. We observe that the standard six vertex model is recovered if one considers the spin 1/2 representation of $SL_q(2)$ which is generated by the Pauli matrices, higher spin representations are possible (see Saleur and Pasquier \cite{sala}, \cite{pas} ).\\ But elaborating on Wiegmann and Zabrodin \cite{wieg} representation of $SL_q(2)$ by 2-dimensional magnetic translation operators, we shall consider next, the representation of $SL_q(2)$ by the Weyl operators of the canonical commutation relation of one degree of freedom in quantum mechanics. \subsection{Algebraic tools} In his treatement of quantum mechanics H. Weyl had proposed to replace the canonical commutator between a dynamical variable $Q$ and its conjugate $P$\\ \begin{equation} [Q,P]=iI , \end{equation}\\ by the commutation relation between $e^{ipQ}$ and $e^{ixP}$:\\ \begin{equation} e^{ixP}e^{ipQ} = e^{ixp}e^{ipQ}e^{ixP} . \end{equation} \\ In this paper we shall concentrate on the case $q=e^{i\eta}$, which corresponds to a physical phase of the system. Using appropriate scaling, (6) may be rewritten under the form:\\ \begin{equation}\\ e^{iP}q^Q = qq^Qe^{iP} . \end{equation} \\ This relation allows us to represent the generators of $SL_q(2)$ as:\\ $$L_z = Q , $$ $$L^+ = \frac{q^{Q-1/2}-q^{-Q+1/2}}{q-q^{-1}}e^{-iP} ,$$ \begin{equation} L^- = -e^{iP}\frac{q^{Q-1/2}-q^{-Q+1/2}}{q-q^{-1}} . 
\end{equation} \\ From (8), we recover in the limit q$\longrightarrow$1, the following representation of $SL(2)$ generators: $$S^+ = (Q-1/2)\exp(-iP),$$ $$S^- = -\exp(iP)(Q-1/2),$$ $$S^z = Q.$$ We observe that $L^{+}$ and $L^{-}$ are each other antihermitian when we require that the limit $q\longrightarrow 1$ of (8) obeys the commutators of $SL$(2).\\ One may check that (8) fulfills automatically (2), using the shift property of $e^{ixP}$,\\ \begin{equation} e^{ixP}Qe^{-ixP} = Q+x . \end{equation}\\ The new representation of the vertex operator $L$ is now: $$\beta^{+} = \frac{q^{Q-1/2}-q^{-Q+1/2}}{q-q^{-1}}e^{-iP} ,\hspace{1cm} \beta^{-} = -e^{iP}\frac{q^{Q-1/2}-q^{-Q+1/2}}{q-q^{-1}} ,$$ \begin{equation} \alpha = \frac{uq^{Q}-u^{-1}q^{-Q}}{q-q^{-1}},\hspace{1cm} \delta = \frac{uq^{-Q}-u^{-1}q^{Q}}{q-q^{-1}} . \end{equation}\\ Physically we have a quantum mechanical degree of freedom on ``vertical'' space coupled to Ising spins on horizontal bonds. Such a system obeys the triangle-relations(Yang-Baxter equations) and may under specified conditions be solved by Bethe Ansatz techniques. Since it has the $R$-matrix of a six-vertex model, one expects its critical behavior to be the same as in the six-vertex case. The attractive point is that the critical universality class may be in fact defined by the choice of the $R$-matrix.\\ For comparison let us recall the standard spin 1/2 representation of $SL_q(2)$, for which we have: $$\frac{q^{\sigma^z}-q^{-\sigma^z}}{q-q^{-1}} =\sigma^z .$$\\ In this case with the standard Pauli matrices $\sigma^x$, $\sigma^y$, $\sigma^z$, one has:\\ $$\beta^\pm = \sigma^\pm =1/2(\sigma^x \pm i\sigma^y) ,$$ $$\alpha = \frac{uq^{\sigma^{z}/2} - u^{-1}q^{-\sigma^{z}/2}}{q - q^{-1}} ,$$ \begin{equation} \delta = \frac{uq^{-\sigma^{z}/2} - u^{-1}q^{\sigma^{z}/2}}{q - q^{-1}} . \end{equation} \\ Here the vertical space is two-dimensional, whereas it is infinite dimensional when one uses $P$ and $Q$.\\ \subsection{Schr\"odinger representation and the local vacuum.} In order to apply the method of quantum inverse scattering (Faddeev \cite{fadb}) to construct the explicit solution of the problem, one needs to construct a local vacuum. This is fairly evident in the case of the standard six-vertex model where one may choose for example, the state $\left(\begin{array}{c}0\\1\end{array}\right)$ which is annihilated by the $\beta^-$ = $\sigma^-$ operator: \begin{equation} \sigma^{-}\left(\begin{array}{c}0\\1\end{array}\right) = 0 . \end{equation} Thus the local vacuum is nothing else as the ``spin down'' state on the vertical direction. Moreover\\ $$\alpha\left(\begin{array}{c}0\\1\end{array}\right) = \frac{uq^{-1/2} - u^{-1}q^{1/2}}{q - q^{-1}} \left(\begin{array}{c}0\\1\end{array}\right) = a \left(\begin{array}{c}0\\1\end{array}\right) ,$$ \begin{equation} \delta \left(\begin{array}{c}0\\1\end{array}\right) = \frac{uq^{1/2} - u^{-1}q^{-1/2}}{q - q^{-1}}\left( \begin{array}{c}0\\1\end{array}\right) = b \left( \begin{array}{c}0\\1\end{array}\right) . \end{equation}\\ Or in a more usual parametrization : \, $q = e^{i\eta } ,\; \; u = e^{\theta} ;$ we recognize: $$a = \frac{\sinh{(\theta - i\eta/2)}}{\sinh{i\eta}} , \hspace{2cm} b = \frac{\sinh{(\theta + i\eta/2)}}{\sinh{i\eta}} .$$ \indent With the use of $Q$ and $P$ it is necessary to find a local vacuum. 
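As a quick consistency check on the representation (8), the defining relations (2) can be verified numerically on a truncated ladder of states $\vert n+1/2\rangle$, $n = 0,1,2,\ldots$, on which $Q$ is diagonal and $e^{\mp iP}$ acts as a unit shift (this is just the shift property used above). The minimal sketch below (in Python; the value of $\eta$ and the truncation size are arbitrary) builds the corresponding finite matrices; the topmost state is left out of the commutator test because $L^{+}$ pushes it out of the truncated space.

\begin{verbatim}
import numpy as np

eta, N = 0.7, 12                    # illustrative deformation parameter and truncation
q = np.exp(1j*eta)                  # q = e^{i eta}, as in the text
qnum = lambda m: (q**m - q**(-m)) / (q - 1/q)          # q-number [m]_q

n = np.arange(N)                    # basis |1/2>, |3/2>, ..., |N-1/2>
qLz = np.diag(q**(n + 0.5))
Lp = np.diag([qnum(m + 1) for m in range(N - 1)], -1)  # L^+ |n+1/2> = [n+1]_q |n+3/2>
Lm = np.diag([-qnum(m) for m in range(1, N)], 1)       # L^- |n+1/2> = -[n]_q |n-1/2>

# Relations (2): q^{Lz} L^{+-} = q^{+-1} L^{+-} q^{Lz} hold exactly ...
assert np.allclose(qLz @ Lp, q * Lp @ qLz)
assert np.allclose(qLz @ Lm, (1/q) * Lm @ qLz)
# ... and [L^+, L^-] = (q^{2Lz} - q^{-2Lz})/(q - q^{-1}) on all but the last state.
comm = np.diag(Lp @ Lm - Lm @ Lp)
rhs = (q**(2*(n + 0.5)) - q**(-2*(n + 0.5))) / (q - 1/q)
assert np.allclose(comm[:-1], rhs[:-1])
\end{verbatim}

With the representation checked at the level of matrix elements, a local vacuum is still required; it is constructed next.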
We shall do so by using the position Schr\"odinger representation, in which the $Q$ operator is diagonal and has continuous spectrum:\\ $$Q \vert\xi\rangle = \xi\vert\xi\rangle ,$$ \begin{equation} e^{ixP} \vert\xi\rangle = \vert\xi - x\rangle . \end{equation} The local vacuum $|\omega\rangle$, in analogy with eq.(12), is defined by the annihilation property \\ $$\beta^{-} \vert\omega\rangle = L^{-} \vert\omega\rangle = 0 $$ \begin{equation} e^{iP}\frac{q^{{\omega}-{1/2}} - q^{-{\omega}+1/2}}{q - q^{-1}} \vert\omega\rangle = 0 . \end{equation} Hence one must choose \begin{equation} \omega = 1/2 . \end{equation} \indent The existence of this local vacuum is directly related to the construction of the representation (see eq.(8)) with a proper limit $q\longrightarrow1$. Demanding from the start that $L^{+}$ and $L^{-}$ be hermitian conjugates of each other will not lead to the correct $q\longrightarrow 1$ limit, nor yield a local vacuum for one vertex operator $L$, as in the Sine-Gordon theory \cite{kora}. Application of $\beta^+ = L^{+}$ on the local vacuum $\vert \omega\rangle = \vert1/2\rangle$ yields\\ $$L^{+}\vert 1/2\rangle = \vert 3/2\rangle .$$\\ More generally we have:\\ \begin{equation} (L^{+})^n \vert 1/2\rangle = \frac{q^n -q^{-n}}{q - q^{-1}} \times \frac{q^{n-1} - q^{-n+1}}{q - q^{-1}} \times \ldots \times \frac{q - q^{-1}}{q - q^{-1}} \vert n+1/2\rangle . \end{equation}\\ The sequence of states $\vert n+1/2\rangle$ reminds us of the ladder of states of the harmonic oscillator with unit frequency. In this sense it is reasonable to think of the vertices as vertical springs coupled to horizontal Ising variables.\\ Since $Q$ is diagonal, we may also evaluate the action of $\alpha(u)$ and $\delta(u)$ on $\vert 1/2\rangle$ \\ $$\alpha ( u ) \vert 1/2\rangle =\frac{uq^{1/2} - u^{-1}q^{-1/2}}{q - q^{-1}} \vert 1/2\rangle ,$$ \begin{equation} \delta ( u ) \vert 1/2\rangle = \frac{uq^{-1/2} - u^{-1}q^{1/2}}{q - q^{-1}} \vert 1/2\rangle . \end{equation} This will be used later in constructing the solution.
\section{Properties of the vertex system} \subsection{Quantum inverse scattering and diagonalisation of the transfer matrix} Following standard procedure we construct the monodromy operator $T(u)$ for one row of $N$ vertices (see fig.5).\\ \begin{figure}[thp] \input{dessin5} \end{figure} \paragraph*{} As it is known, $T(u)$ is a $2\times 2$ matrix of the type \begin{equation} T(u) = \left( \begin{array}{ccc} A(u)& &C(u)\\B(u)& &D(u) \end{array} \right) \end{equation} and obeys the triangle equations (Yang-Baxter equations), which yield the commutation relations between $A(u), B(u), C(u)$ and $D(u)$ necessary to construct the eigenstates of the transfer matrix \begin{equation} t(u) = \mbox{Tr}\;T(u) = A(u) + D(u). \end{equation} But before doing so, let us recall that due to our choice of local vacuum $\vert 1/2\rangle$ at each site, the local vertex system fulfills an extended ``ice-rule'' on which the conservation laws rely. The bare vacuum of the row is thus: \begin{equation} \vert \Omega \rangle = \vert 1/2 \rangle_{1} \otimes \vert 1/2 \rangle_{2} \otimes \ldots \otimes \vert 1/2 \rangle_{N}. \end{equation} Excitations are created in a standard way by repeated applications of the $B(u)$ operator on $\vert \Omega \rangle$, namely: \begin{equation} B(u_{1})B(u_{2})\ldots B(u_{m}) \vert \Omega \rangle = \vert u_{1},u_{2}, \ldots ,u_{m} \rangle.
\end{equation} The requirement that $\vert u_{1},u_{2},\ldots ,u_{m}\rangle$ should be an eigenstate of $t(u)$ forces the $u_{j}, j = 1,2,\ldots, m$, to obey a system of nonlinear equations, the so-called Bethe Ansatz equations:\\ \begin{equation} \frac{\sinh({\theta_{j} + i{\eta\over2}})}{\sinh({\theta_{j} - i{\eta\over2}})} = -\prod_{k=1}^{m}\frac{\sinh({\theta_{j} - \theta_{k} + i\eta})}{\sinh({\theta_{j} -\theta_{k} - i\eta})}, \end{equation} where $u_{j} = \exp{\theta_{j}}$.\\ These equations may be solved exactly in the limit $N\longrightarrow \infty$ by standard Fourier techniques, since they are formally the same equations as in the six-vertex model \cite{vega}. In the thermodynamic limit, the free energy per site is then, in the antiferroelectric regime for example \cite{vega}, \begin{equation} f(\theta,\eta) \approx \int\limits_{0}^{+\infty}\frac{\sinh{2x\theta} \sinh{[x(\pi-\eta)]}}{\cosh(x\eta)\sinh(x\pi)}\;dx + const, \end{equation} and has the same critical behavior as the six-vertex model or model I of \cite{hv}. This is not new, since the lattice version of the Sine-Gordon model (Faddeev et al. \cite{fadb}, Korepin et al. \cite{kora}) also shares the same behavior. One may say that as long as one has the same $R$-matrix in the Yang-Baxter equations, one necessarily obtains the same critical behavior, and conjecture that a possible classification of universality classes of critical behavior may be made according to a classification of $R$-matrices. \subsection{Alternative Quantum Group structure and representation} \indent In this section we discuss some aspects of $SL_q$(2) which are relevant for the so called ``Free Fermion'' limit: q$\longrightarrow i$ (or $\eta \longrightarrow {\pi\over 2}$), which is well known in the Bethe Ansatz equations (26) (see for example \cite{baxlivre}).\\ Consider the representation of one vertex operator $L$ in terms of the generators $L_{z}$, $L^{\pm}$ of $SL_q$(2). We may form two new operators $a^{+}$ and $a^{-}$ by defining: \begin{equation} a^{\pm} = q^{L_{z}}L^{\pm}. \end{equation} Then $SL_{q}$(2) may be characterized by $L_{z}$, $a^{\pm}$ with the following relations: $$q^{L_{z}}a^{\pm} = q^{\pm1}a^{\pm}q^{L_{z}},$$ \begin{equation} \displaystyle a^{+}a^{-} - q^{-2}a^{-}a^{+} = \frac{q^{4L_{z}} - 1}{q^{2} - 1}. \end{equation} Curiously this last equation appears as a $q$-commutator for a $q$-deformed oscillator, although not identical to the relation proposed by Biedenharn and Macfarlane \cite{bm}. The limit $q\longrightarrow 1$ of (29) is again $SL$(2) and the oscillator algebra can be recovered after a Wigner contraction is performed \cite{gil}.\\ Moreover we see that for any integer $m$, \begin{equation} \displaystyle (a^{+})^{m} = q^{-m(m-1)\over 2}q^{mL_{z}}(L^{+})^{m}. \end{equation} There exists a similar equation for $(a^{-})^{m}$. Note that $a^{+}$ and $a^{-}$ are not hermitian conjugates of each other.\\ As it is known \cite{vega}, one may obtain a tensor representation of $SL_q$(2) by repeated application of the coproduct (\cite{jimbo}) on the generators for one vertex. This is what one obtains physically when the whole row of $N$ vertices is considered.
The N-fold tensor representation generators are: $$J_{\pm} = \sum_{j=1}^{N}q^{(L_{z})_{1}}\otimes \cdots \otimes q^{(L_{z})_{j-1}} \otimes L_{j}^{\pm}\otimes q^{-(L_{z})_{j+1}}\otimes \cdots q^{-(L_{z})_{N}},$$ \begin{equation} J_{z} = \sum_{j=1}^{N} (L_{z})_{j}, \end{equation} they fulfill the defining relations (2): $$[J_{+}, J_{-}] = \frac{q^{2J_{z}} - q^{-2J_{z}}}{q - q^{-1}},$$ \begin{equation} q^{J_{z}}J_{\pm} = q^{\pm1}J_{\pm}q^{J_{z}}. \end{equation} These relations may be obtained by hand starting from the standard commutation relations of $A(u)$, $B(u)$, $C(u)$ and $D(u)$, obtained in the quantum inverse scattering method, in the ferroelectric regime ($q$ real and positive), and taking the appropriate limits \cite{vega}.\\ Similarly the tensor representation may be characterized by the set of operators $J_{z}$, $\eta^{\pm}$ where, \begin{equation} \eta^{\pm} = q^{J_{z}}J_{\pm} = \sum_{j=1}^{N}q^{2(L_{z})_{1} + \cdots + 2(L_{z})_{j-1}}a_{j}^{\pm}. \end{equation} Of course $\eta^{\pm}$, $J_{z}$ obey the relation (29) satisfied by $a^{\pm}$, $L_{z}$. But here, we have the freedom of introducing new local operators $\eta_{j}^{\pm}$ defined by: \begin{equation} \displaystyle \eta_{j}^{\pm} = q^{2(L_{z})_{1} + \cdots + 2(L_{z})_{j-1}}a_{j}^{\pm}, \end{equation} which in turns obey the following commutation relations: $$\eta_{j}^{+}\eta_{i}^{+} = q^{2sgn(j-i)}\eta_{i}^{+}\eta_{j}^{+},$$ $$\eta_{j}^{-}\eta_{i}^{-} = q^{-2sgn(j-i)}\eta_{i}^{-}\eta_{j}^{-},$$ $$\eta_{j}^{+}\eta_{i}^{-} = q^{-2}\eta_{i}^{-}\eta_{j}^{+}\;\;i\not=j,$$ and \begin{equation} \eta_{j}^{+}\eta_{j}^{-} - q^{-2}\eta_{j}^{-}\eta_{j}^{+} = q^{4(L_{z})_{1} + \cdots + 4(L_{z})_{j-1}}\frac{q^{4(L_{z})_{j}} - 1}{q^{2} - 1}. \end{equation} Note that $(\eta_{j}^{+})^{m}$ or $(\eta_{j}^{-})^{m}$ for $m$ integer, is not necessarily zero, because of (30).\\ Consequently in this section, we have shown that the structure of $SL_{q}$(2), presents aspects of the so called $q$-deformed oscillator of a special type and that, its tensor representation leads naturally to local operators which exhibit ``anyonic'' commutation relations as well as behaves the $q$-deformed oscillator \cite{bm}, \cite{wz}. \subsection{Free Fermion limit} As can be seen from eq (26), the Bethe Ansatz equations decouple at $\eta = {\pi\over 2}$ (or $q = i$): the scattering of pseudo-particles created by the $B(u)$ operators becomes trivial, the corresponding phase shifts equal -1, and the multiparticle wave function is simply represented by a Slater determinant.\\ We can now make some interesting observations in two different situations: the six-vertex case and the vertex studied in this paper. \subsubsection{The six-vertex case} The $\eta^{\pm}$ operators are closely related to usual lattice Fermion operators. Since $L_{z} = {1\over 2}\sigma^{z},$ \begin{equation} \eta_{j}^{\pm} = e^{i{\pi\over 2}(\sigma_{1}^{z} + \cdots + \sigma_{j-1}^{z})}e^{i{\pi\over 4} \sigma_{j}^{z}}\sigma_{j}^{\pm}, \end{equation} and they anticommute: $$\eta_{j}^{\pm}\eta_{i}^{\pm} = -\eta_{i}^{\pm}\eta_{j}^{\pm},$$ \begin{equation} \{\eta_{j}^{+}, \eta_{i}^{-}\} = \delta_{ij}e^{i\pi(\sigma_{1}^{z} + \cdots + \sigma_{j-1}^{z})}\frac{e^{i\pi\sigma_{j}^{z}} - 1}{-2} = \delta_{ij} (-1)^{j-1}. 
\end{equation} Moreover $(\eta_{j}^{\pm})^{2} = 0.$ Thus we may define Fermion operators by: \begin{equation} \displaystyle c_{j}^{\pm} = (i)^{j-1}\eta_{j}^{\pm} = \displaystyle \exp\{{i{\pi\over 2}(\displaystyle \sum_{l=1}^{j-1}(\sigma^{z} +1)_{l})}\}\displaystyle \exp\{{i{\pi\over 4}\sigma_{j}^{z}}\}\sigma_{j}^{\pm}, \end{equation} which are not exactly identical to the ones obtained by the Jordan-Wigner transformation, since the $c_{j}^{\pm}$ are not hermitian adjoints of each other \cite{lieb}, but they anticommute and square to zero. \subsubsection{The new vertex case} The $\eta_{j}^{\pm}$ operators do have a fermion-like behavior. Here $L_{z} = Q$, \begin{equation} \displaystyle \eta_{j}^{\pm} = e^{i\pi(Q_{1} + \cdots + Q_{j-1})}e^{i{\pi\over 2}Q_{j}}\beta_{j}^{\pm}. \end{equation} They also anticommute: $$\eta_{j}^{\pm}\eta_{i}^{\pm} = -\eta_{i}^{\pm}\eta_{j}^{\pm},$$ \begin{equation} \{\eta_{j}^{+}, \eta_{i}^{-}\} = \delta_{ij}e^{2i\pi(Q_{1} + \cdots + Q_{j-1})}\frac{e^{2i\pi Q_{j}} - 1}{-2}. \end{equation} Now the r.h.s. of the second equation of (40) must be applied to the N-fold tensor product states of the type \begin{equation} \displaystyle \otimes_{l=1}^{N} \vert n_{l} + 1/2\rangle \hspace{1cm} n_{l} = 0, 1, 2, \cdots. \end{equation} Since all $Q_{j}$ are diagonal in this representation, we obtain a factor $(-1)^{j-1}$ as in the second equation of (37): \begin{equation} e^{2i\pi\{(n_{1} + \cdots + n_{j-1}) +{(j-1)\over 2}\}}\frac{e^{2i\pi(n_{j} + {1\over 2})} - 1}{-2} = (-1)^{j-1}. \end{equation} Hence, in analogy with the six-vertex case, we may define the fermion-like operators: \begin{equation} d_{j}^{\pm} = (i)^{j-1}\eta_{j}^{\pm}, \end{equation} which anticommute properly, as do the $c_{j}^{\pm}$ in (37), but their square (or any power of it) is not zero because: \begin{equation} (a_{j}^{+})^{m} = e^{-i{\pi\over 4}m(m-1)}e^{i{\pi\over 2}mQ_{j}}(\beta_{j}^{+})^{m}, \end{equation} is not zero at site $j$. This feature already manifests itself in the Bethe-Ansatz wave function, which differs markedly from that of the six-vertex model. However, since the fermionic character prevails at the point $q = i\; (\eta = {\pi\over 2})$, there is a simplifying decoupling in the Bethe-Ansatz equations and perhaps a simple direct evaluation of the free energy is possible. In any case, this study suggests that the old duality in one dimension between Fermions and Bosons discovered in the continuum has unexpected features on the lattice which are brought to light by the algebraic structure of quantum groups. \section{Conclusion and outlook} We have presented here a study of an integrable vertex system made of Ising variables on horizontal bonds and ``oscillator-like'' variables on vertical bonds. The model is integrable in the sense that it fulfills the Yang-Baxter equations with an $R$-matrix identical to that of the six-vertex model. This fact leads to the same critical behavior of the free energy per site as in the six-vertex case \cite{baxlivre}. The detailed algebraic structure is however different. The Bethe Ansatz wave function is not in general antisymmetric, and the free fermion limit reveals a fermionic behavior of the creation/annihilation operators for the excitations.\\ In the past, there have been several studies of this type of vertex. Faddeev et al. and Korepin et al.
\cite{fada}, \cite{kora}, \cite{fadb}, in the Sine-Gordon model on a lattice (or its non-relativistic limit, the non-linear Schr\"odinger model), have considered an integrability based on two sites in order to achieve solubility of the model. Zhou \cite{zhou} has performed the Holstein-Primakoff transformation on the six-vertex model. Here we have directly obtained the ``vertical Bose variables'' by the technique of quantum groups, and have arrived at an integrability based on only one site.\\ The present study opens up new lines of investigation. First, it would be interesting to find the Hamiltonian of the system, since the Hamiltonian of the six-vertex model is the XXZ chain. Presumably, such a Hamiltonian, under an appropriate continuum limit, may yield the Sine-Gordon Hamiltonian. Second, the free Fermion limit may also provide new insight into the old equivalence between Fermions and Bosons in one dimension. Finally, one may use formal perturbation theory to generate, as is known in the continuum, a classical Coulomb gas of ``electric charges'' in the plane. But on a lattice, the spacing between sites would then be a parameter controlling the divergences which one has to introduce by hand in a standard bosonisation procedure. We hope to tackle these problems in the future.\\ \medskip \noindent{\bf Acknowledgment} One of us (S.C.L) would like to thank the ``Conseil R\'egional de la R\'egion Centre'' for financial support. We thank the referee for his comments leading to the improvement of the paper.
\section{\label{sec1}Introduction} Slow light with a remarkably low group velocity is a fascinating physical effect, which outperforms fast light in many aspects \cite{baba2008nphotslow,krauss2008we,thevenaz2008slow}. For example, it can be used for buffering and time-domain processing of optical signals \cite{tucker2005slow,okawachi2005tunable,beggs2008ultracompact}. Furthermore, slow light can also promote stronger light-matter interaction, offering new possibilities for miniaturization and improvement of photonic devices \cite{rao2007singleprb,rao2007single,lund2008experimental,monat2009slow,ek2014slow,liu2014random,colman2010temporal}. Photonic crystal structures are particularly attractive for generating slow light due to the great potential applications in photonic integrated circuits. In photonic crystal waveguide, slow light has been demonstrated in the vicinity of the photonic band edge \cite{notomi2001extremely,letartre2001group}, which can be used to improve lasing characteristics \cite{ek2014slow,liu2014random} and realize efficient on-chip single-photon gun \cite{rao2007singleprb,rao2007single}. However, in the slow light regime, the extrinsic scattering loss has a profound impact on the propagation of light, limiting the further application of slow light devices. Topological photonics provide a robust way to manipulate light. Especially, the topological edge state occurred at the interface between two topologically distinct structures exhibits robust unidirectional transport against defects and perturbations, which can be used as one-way waveguide \cite{haldane2008possible,wang2009observation,fang2012realizing,hafezi2011robust,khanikaev2013photonic}. Moreover, the topological resonators can be formed due to the robustness against sharp bends, which can be used to realize topological lasers \cite{bahari2017nonreciprocal,harari2018topological,bandres2018topological,yang2020spin,zeng2020electrically,gong2020topological,noh2020experimental} and chiral quantum optical interface \cite{barik2020chiral,mehrabad2020chiral}. Introducing slow light into topological structures offers a promising solution for taking the advantage of slow light and also topological protection, such as the enhancement of light-matter interaction and robustness against extrinsic scattering loss. So far, a few novel ideas have been proposed to realize slow light topological waveguides in different structures, such as gyromagnetic photonic crystal \cite{yang2013experimental,chen2019strong}, bianisotropic metamaterials \cite{chen2015manipulating} and topological valley photonic crystal (VPC) \cite{yoshimi2020slow,arregui2021quantifying}. In contrast to other structures, topological VPC is all-dielectric structure, which can be easily implemented in optical regime \cite{he2019silicon,shalaev2019robust}. Furthermore, it possesses good compatibility with embedded quantum emitters such as quantum dots (QDs), which can be used to realize chiral quantum interface \cite{barik2020chiral,mehrabad2020chiral}. In VPC, the slow light topological valley edge state can be realized at a bearded interface, which exhibits high group index as well as robust transport \cite{yoshimi2020slow,arregui2021quantifying}. The slow light mode provides a new platform to design topological cavity for further enhancing the light-matter interaction and exploring novel phenomena. Here, we propose a topological cavity based on slow light topological edge modes for broadband Purcell enhancement. 
The topological cavity is formed using super-triangle with a bearded interface between two topologically distinct VPCs. Topological edge modes with large group indices over 100 can be realized at the bearded interface. The Purcell factor in an infinite topological waveguide is enhanced with the increase of group index. In the slow light regime, the topological cavity supports much denser cavity modes with high quality factor (Q) than that in the fast light regime, which is demonstrated both theoretically and experimentally. In such a topological cavity, we realize broadband Purcell enhancement with substantial Purcell factor of single quantum emitter, due to the existence of dense cavity modes with high Q in a wide spectral range. It relaxes the demand on spectral match between cavity modes and quantum emitters, having great potential in the development of highly efficient on-chip single-photon sources and entangled-photon sources. This work paves a way for exploring novel topological slow light devices with diverse functionalities in integrated photonic circuit platform. \section{\label{sec2}Topological slow light edge mode} The topological VPC investigated is composed of a honeycomb lattice with two inverted equilateral triangular airholes (pointing-up and pointing-down). Different sizes between the two triangular airholes break the spatial inversion symmetry, resulting in the formation of a band gap. The VPC features non-zero Berry curvatures, with opposite signs at K and K’ valleys. Although the Berry curvature integrated over the entire Brillouin zone (BZ) is zero due to the time-reversal symmetry, the valley Chern number $C_{K/K'}$, an integration of Berry curvature over the half of the BZ around K/K’ points, gives rise to a non-zero value \cite{he2019silicon,shalaev2019robust}. Figure \ref{f1}(a) shows the rhombic units of two topologically distinct VPCs, VPC1 (blue) and VPC2 (orange), with different orientations of the larger triangular airholes. By interchanging the sizes of two triangular airholes, the sign of the Berry curvature at each valley will be flipped, leading to opposite valley Chern numbers for VPC1 and VPC2. At the interface between the two topologically distinct VPCs, topological edge states are formed. Detailed discussions are shown in Supplementary Information. \begin{figure} \centering \includegraphics[scale=0.43]{f1_v4} \caption{ (a) Schematics of rhombic units of two topologically distinct VPCs, VPC1 (blue) and VPC2 (orange). They have different orientations of the larger triangle. In VPC1, $L_\bigtriangleup$ is smaller than $L_\bigtriangledown$, while in VPC2, $L_\bigtriangleup$ is larger than $L_\bigtriangledown$. (b) Schematics of the 2D topological VPC waveguide with zigzag (left) and bearded interface (right). The blue and orange regions represent two topologically distinct VPCs, VPC1 and VPC2. (c) Dispersion curves for the edge states formed at zigzag (blue line) and bearded (red lines) interface with $a=$ 340 nm, $L_l=253$ and $L_s=$ 138 nm, where $L_l$ ($L_s$) is the size of larger (smaller) triangular airholes. The edge states at the bearded interface have the slow light regime near the BZ edge. (d) Calculated group indices of the edge states as a function of wavelength at the zigzag (blue line) and bearded (red line) interface. The blue and orange regions correspond to the fast light and slow light regime at the bearded interface, respectively. 
(e) Calculated Purcell factor of $x$-polarized dipole source in an infinite waveguide with zigzag (blue dots) and bearded (red dots) interface as a function of wavelength. Insets show examples of electric field profile of $x$ component ($|E_x|^2$) of edge modes at the zigzag (left) and bearded (right) interface. The $x$-polarized dipole source was placed at the antinode of corresponding edge modes, as illustrated by the black stars in the insets. (f) Magnetic field distribution ($H_z$) of several edge modes propagating through a Z-shaped bearded interface. The boundary is indicated by the black dashed lines. The scale bar is 2 $\mu$m. From left to right, the wavelengths of edge states decrease and the group indices increase. These simulations were performed by FDTD method.} \label{f1} \end{figure} Figure \ref{f1}(b) shows two types of interfaces, in which the left panel shows a zigzag interface, and the right panel shows a bearded interface. Both interfaces are formed between two topologically distinct VPCs, VPC1 (upper half) and VPC2 (lower half), supporting topological edge states. The zigzag interface faces larger or smaller airholes at the boundary. This type of interface has been predominantly investigated for topological waveguides \cite{he2019silicon,shalaev2019robust} and cavities \cite{barik2020chiral,noh2020experimental,gong2020topological}. It supports fast light modes by crossing the bandgap with near-linear dispersion (shown as blue line in Fig. \ref{f1}(c)). In contrast, the bearded interface joints smaller airholes. The decrease of airhole size adjacent to interface leads to the local increase of refractive index. As a result, the frequency of topological edge mode decreases, and a trivial mode near the upper bulk band edge emerges (shown as red lines in Fig. \ref{f1}(c)). The two modes are degenerate at the BZ edge due to glide plane symmetry of the interface. Therefore, a heavily dispersive characteristics within the bandgap is realized at such an interface, leading to the formation of topological slow light mode. The dispersion curves were calculated for transverse electric field mode by the three-dimensional finite-difference time-domain (FDTD) method. The simulation parameters for the structure were set as $a=340$ nm, $L_l=253$ nm, $L_s=138$ nm, $n$ = 3.43, where $a$ is the period of unit cell, $n$ is the refractive index and $L_l$ ($L_s$) is the size of larger (smaller) triangular airholes. Compared to circular airholes \cite{mehrabad2020chiral}, the design using triangular airholes supports a single-mode topological slow light, as well as a direct and larger bandgap at K (K’) point for the same degree of asymmetry. Figure \ref{f1}(d) shows the calculated group indices of the edge states at the bearded (red) and zigzag (blue) interface. Compared to zigzag interface, the bearded interface exhibits both fast light and slow light topological modes, as shown blue and orange regions in Fig. \ref{f1}(d) respectively. At the bearded interface, the group index shows a sharp increase near the BZ edge and reaches its highest value of about 106 at the wavelength of 1086.23 nm. This value is about 22 times higher than that of the topological edge mode at the zigzag interface. The increase of group index leads to an increase of local density of states (LDOS) in waveguide, therefore resulting in an enhancement of light-matter interaction. In particular, the spontaneous emission rate can be greatly enhanced as the Purcell factor scales with the group index \cite{rao2007singleprb}. 
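The group index itself is obtained from the calculated dispersion by a simple numerical derivative, $n_g = c\,(d\omega/dk)^{-1}$. The sketch below (in Python) illustrates this post-processing step; the sampled band is a synthetic stand-in that merely mimics the flattening of the edge state towards the BZ edge, since the actual samples come from the FDTD calculation, and all numerical values are illustrative.

\begin{verbatim}
import numpy as np

c, a = 2.998e8, 340e-9                      # speed of light; lattice period from the text
# Toy dispersion omega(k) that flattens towards k = pi/a (stand-in for the FDTD data)
k = np.linspace(0.30, 0.49, 400) * 2*np.pi/a
w0 = 2*np.pi*c / 1.09e-6                    # illustrative band-centre frequency
omega = w0 - 0.1*w0*np.cos(k*a)             # d(omega)/dk -> 0 as k -> pi/a

v_g = np.gradient(omega, k)                 # group velocity by central differences
n_g = c / np.abs(v_g)                       # group index
F_rel = n_g / n_g[0]                        # waveguide Purcell factor relative to the fast-light end
print(f"n_g from {n_g.min():.1f} to {n_g.max():.1f}")
\end{verbatim}

Since the waveguide Purcell factor scales with $n_g$, the relative enhancement follows directly from such data.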
Therefore, the topological slow light mode can support much larger Purcell factor than the fast light mode. Figure \ref{f1}(e) shows the calculated Purcell factor in an infinite waveguide with zigzag (blue) and bearded (red) interface as a function of wavelength. The simulations were performed by placing the $x$-polarized dipole at antinode of waveguide mode, as shown in the insets. As expected, the Purcell factor is barely changed with wavelength at the zigzag interface. In contrast, a significant enhancement in Purcell factor is observed at the bearded interface, demonstrating the great potential of the topological slow light mode in enhancing light-matter interaction. The topological edge states at the zigzag interface have been demonstrated to be robust against sharp bends of $60^{\circ}$ and $120^{\circ}$ in previous reports \cite{chen2017valley,he2019silicon,shalaev2019robust}. To visualize the transport property of the topological edge states at bearded interface, we calculated the field distribution of topological edge state in the Z-shaped waveguide with different wavelengths, as shown in Fig. \ref{f1}(f). The edge states were excited by an $x$-polarized dipole source. In the fast light regime, the field distribution does not show any beating in field intensity, confirming the robust light transmission. In the slow light regime, the modes with relatively small group index still exhibit robust transport. Therefore, the bearded interface supports topological slow light edge modes exhibiting robustness against sharp bends. However, with the decrease of wavelength (from left to right in Fig. \ref{f1}(f)), the group index increases, the reflection from sharp corner increases, leading to the formation of standing waves in the waveguide. \section{\label{sec3} Topological cavity} \begin{figure}[b] \centering \includegraphics[scale=0.43]{f2_v3} \caption{ (a) Schematic of topological cavity shaped in the form of a super-triangle. (b) Simulated modes of the topological cavity with zigzag (blue line) and bearded (red line) interface. The spectra are shifted for clarity. (c) An example of electric field profile ($|E|$) of cavity mode in topological cavity with zigzag interface as indicated by the red star in (b). Examples of electric field profile of (d) topological modes with small group index, (e) trivial modes and (f-h) topological modes with high group index in topological cavity with bearded interface, corresponding to the blue, green and orange regions in (b). The modes in (d)(e) are indicated by the blue and green stars in (b). From (f) to (h), the wavelength decreases and the group index increase. The simulations were performed by putting a dipole source at the position indicated by the red star in (a). The scale bars in (c-h) are 2 $\mu$m.} \label{f2} \end{figure} At the bearded interface, the topological edge modes still exhibit robust transport against sharp bends, except the modes with extremely high group index. This robustness enables them to form whispering gallery mode using a super-triangle. Figure \ref{f2}(a) shows the schematic of the topological cavity using a super-triangle. The inside (VPC2) and outside (VPC1) of the super-triangle correspond to two topologically distinct VPCs. Figure \ref{f2}(b) shows the calculated spectra from super-triangle cavity with zigzag (blue) and bearded (red) interface. The perimeters of the super-triangle cavity for zigzag and bearded interface are 44 unit cells (15 $\mu$m) and 50 unit cells (17 $\mu$m), respectively. 
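For a travelling-wave cavity the mode spacing can be estimated as $\Delta\lambda \approx \lambda^{2}/(n_g L)$, with $L$ the round-trip length, here taken to be the interface perimeter. The small sketch below (in Python) gives the orders of magnitude; the zigzag group index of roughly 5 is only inferred from the $\sim$22-fold ratio quoted in the previous section, and the wavelength is taken near the experimentally observed modes.

\begin{verbatim}
def fsr_nm(wavelength_nm, n_g, perimeter_um):
    # Mode spacing (nm) of a travelling-wave cavity: FSR = lambda^2 / (n_g * L)
    lam, L = wavelength_nm * 1e-9, perimeter_um * 1e-6
    return lam**2 / (n_g * L) * 1e9

print(fsr_nm(1180, 5, 15))    # zigzag, fast-light mode:  ~19 nm
print(fsr_nm(1180, 100, 17))  # bearded, slow-light mode: ~0.8 nm
\end{verbatim}

These estimates are consistent with the mode spacings discussed below.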
With the zigzag interface, only the fast light mode can be used to form cavity mode, leading to a large free spectral range (FSR) about a dozen nanometers. Figure \ref{f2}(c) shows an example of electric field profile of cavity mode in the cavity with zigzag interface. The electric field distribution exhibits strong confinement at the topological interface and extends over the super-triangle. The large FSR results in a narrow-band Purcell enhancement, leading to the strict restriction on the spectral alignment between cavity mode and quantum emitters. While, with the bearded interface, both fast light and slow light modes are supported, which can be used to form cavity modes. In the fast light regime, the cavity modes show a similar behavior, including the large FSR and similar electric field profile. As the wavelength goes down, entering the slow light regime, the cavity modes become denser due to the increase of group index. Figure \ref{f2}(d-h) show the electric field profile of the cavity modes in the three regions as shown in Fig. \ref{f2}(b). The electric field was excited by the dipole source located at the position indicated by the red star in Fig. \ref{f2}(a). In the blue region (in Fig. \ref{f2}(b)), corresponding to fast light mode and the slow light mode with relatively small group index, the electric field extends over the whole interface of the super-triangle, as shown in Fig. 2(d), demonstrating the robustness against sharp corners. However, in orange region for the slow light modes with high group index, they show different standing-wave patterns, as illustrated in Fig. \ref{f2}(f-h). With increasing group index (from Fig. \ref{f2}(f) to (h)), the cavity modes tend to localize at one side of the super-triangle, which may result from the increase of reflection from the sharp corners \cite{yoshimi2020slow,gong2020topological}. As for the green region, corresponding to the trivial mode, they are denoted as Fabry-Perot mode due to the lack of robustness against sharp corners, as shown in Fig. \ref{f2}(e). Note that the distinction between trivial mode and topological mode mainly depends on the edge dispersion in Fig. \ref{f1}(c). \begin{figure}[t] \centering \includegraphics[scale=0.43]{f3_v1} \caption{ (a) SEM image of a fabricated topological cavity with a scale bar of 2 $\mu$m. The topological cavity is formed by super-triangle with bearded interface, as indicated by the red dashed line. (b) PL spectra from topological cavity with zigzag (blue line) and bearded (red line) interface. The spectra are shifted for clarity. (c-d) Wavelength (black squares) and corresponding Q (red squares) of the cavity modes in topological cavity with (c) Zigzag and (d) bearded interface. The wavelength and Q were extracted by Lorentz-fitting of high-resolution spectra. In the orange region in (d), much denser cavity modes with high Q are supported in a wide spectral range about 30 nm. (e-f) Polar plots of linear-polarization dependence of several cavity modes in topological cavity with (e) zigzag and (f) bearded interface. } \label{f3} \end{figure} Then we fabricated the topological cavity into a GaAs slab with a thickness of 170 nm with an embedded layer of InAs QDs as light sources to probe the mode of the device. The structures were patterned using electron beam lithography followed by inductively coupled plasma etching. The sacrificial layer was removed with HF etching to form air bridge. 
Furthermore, the surface passivation treatment using Na$_2$S solution was performed to reduce the surface-state-related absorption losses and improve the Q \cite{kuruma2020surface}. Figure \ref{f3}(a) shows the scanning electron microscope (SEM) image of a fabricated cavity. The optical properties of the topological cavity were characterized by the confocal micro-photoluminescence (PL) measurement at low temperature (5 K) using a liquid helium flow cryostat. Figure \ref{f3}(b) shows the PL spectra from cavities with zigzag (blue line) and bearded (red line) interface. In the cavity with a zigzag interface, three peaks are observed with an FSR about a dozen nanometers. The Qs are about 2000-3000, as shown in Fig. \ref{f3}(c). Whereas in the cavity with a bearded interface, much denser cavity modes with higher Q (up to 8000) are observed, especially in the wavelength range from 1169-1190 nm, corresponding to the slow light regime, as shown in Fig. \ref{f3}(d). The mode spectral spacing can be even below 1 nm. Higher Q in the slow light regime compared to those in the fast light regime is related to the increased LDOS together with topological protection. The Q reduces with the wavelength further decreasing, which may result from the topological protection becoming weaker, leading to an increased effect of scattering losses. Additionally, we measured the linear-polarization dependence of these cavity modes with zigzag (Fig. \ref{f3}(e)) and bearded (Fig. \ref{f3}(f)) interface using half-wave plate (HWP) followed by a polarizer in the detection path. The cavity modes with zigzag interface have the same polarization direction. While in the cavity modes with bearded interface they are different, especially in the slow light regime. In fast light regime, the electric field distribution extending over the whole interface of super-triangle leads to a very close polarization property. In contrast, in slow light regime, the different polarizations may result from the different electric field distributions of the cavity modes excited by random QDs at different sides. \section{\label{sec4}Discussion} The cavity with bearded interface investigated here features much denser cavity modes with high Q in a broad spectral range. The dense cavity modes make it much easier to get resonance with quantum emitters located in a wide spectral range, enabling broadband Purcell enhancement. Besides broadband Purcell enhancement, the cavity modes in this regime also enable stronger light-matter interaction due to the increase of LDOS. Figure \ref{f4} shows the calculated Purcell factor of topological cavity as a function of wavelength with zigzag (Fig. \ref{f4}(a)) and bearded (Fig. \ref{f4}(b)) interface. As expected, the topological cavity with a zigzag interface exhibits a narrow-band Purcell enhancement with small Purcell factor about 10. In the fast light regime, the topological cavity with bearded interface shows a similar behavior to the topological cavity with zigzag interface. Whereas, in the slow light regime, it exhibits much stronger Purcell enhancement in a broad spectral range with large Purcell factor up to 170 due to the increased LDOS. \begin{figure}[t] \centering \includegraphics[scale=0.43]{f4_v1} \caption{ Calculated Purcell factor of topological cavity as function of wavelength with (a) zigzag and (b) bearded interface. } \label{f4} \end{figure} Compatibility with QDs, such a cavity has outstanding advantages in the development of single-photon sources and entangled-photon sources. 
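The quoted Q values are obtained by fitting each high-resolution peak with a Lorentzian and taking $Q = \lambda_0/\Delta\lambda_{\rm FWHM}$. A minimal version of such a fit is sketched below (in Python); the spectrum generated here is synthetic and merely stands in for the measured PL data.

\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(lam, A, lam0, fwhm, bg):
    return A * (fwhm/2)**2 / ((lam - lam0)**2 + (fwhm/2)**2) + bg

rng = np.random.default_rng(0)
lam = np.linspace(1179.0, 1181.0, 400)                           # wavelength axis (nm)
counts = rng.poisson(lorentzian(lam, 1000.0, 1180.0, 0.15, 50.0)).astype(float)

p0 = [counts.max(), lam[np.argmax(counts)], 0.2, np.median(counts)]
(A, lam0, fwhm, bg), _ = curve_fit(lorentzian, lam, counts, p0=p0)
print(f"lam0 = {lam0:.3f} nm, FWHM = {abs(fwhm)*1e3:.0f} pm, Q = {lam0/abs(fwhm):.0f}")
\end{verbatim}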
It is well known that self-assembled QDs have been demonstrated to be a promising candidate to realize single-photon sources \cite{michler2000quantum,xu2007,senellart2017high} and entangled-photon sources \cite{benson2000regulated,stevenson2006semiconductor,akopian2006entangled}. By coupling to cavity, the spontaneous emission rate can be enhanced and the efficiency can be greatly improved \cite{purcell1946resonance}. To gain a strong enhancement, it is essential to get a good spectral and spatial match between cavity mode and quantum emitters, as well as a cavity mode with small mode volume and high Q. However, it is usually challenging due to the random distribution of QD both in position and emission wavelength, especially in the cavities with smaller mode volume or length with large FSR, exhibiting a narrow-band Purcell enhancement \cite{englund2005controlling,liu2018high,qian2019enhanced,xie2020cavity}. By lowing the Q, the broadband enhancement can be realized, which has been used to realize entangled-photon sources \cite{liu2019solid}. However, the lower Q reduces the Purcell factor. In contrast, for a large cavity, although the FSR is small and Q is high, the mode volume is large, leading to a weak light-matter interaction strength. Therefore, a cavity which can support broadband enhancement as well as large Purcell factor is necessary for the realization of high-efficient single-photon sources and especially, entangled-photon sources which need both enhancement of biexciton and exciton with different emission wavelengths. In the topological cavity proposed here, the dense cavity modes in a wide spectral range make it much easier to tune quantum emitters into resonance with cavity mode. Thus the Purcell enhancement of quantum emitters located in the wide spectral range can be realized with less restriction on spectral match. Meanwhile, the dense cavity modes also enable simultaneous enhancement of biexciton and exciton with different emission wavelengths, which can be used to realize entangled-photon sources. Furthermore, the increased LDOS resulting from the slow light effect will give rise to strong Purcell enhancement. Therefore, this topological cavity enabling broadband Purcell enhancement with substantial Purcell factor provides an ideal platform to realize highly efficient single-photon sources and entangled-photon sources with less restriction on spectral match. \section{\label{sec5} Conclusion and outlook} In summary, we have proposed a topological cavity based on the topological slow light mode in VPC for broadband Purcell enhancement. Topological edge state with high group index over 100 can be realized at the bearded interface of VPC. Substantial enhancement in Purcell factor with the slow light is demonstrated. By comparing the topological cavity with zigzag and bearded interface both in theory and experiment, we found the slow light regime exhibits much denser cavity modes with high Q. Experimentally a Q up to 8000 is demonstrated with bearded interface. With numerical simulations, we have demonstrated that broadband Purcell enhancement with large Purcell factor can be realized in such a cavity. This topological cavity, enabling broadband enhancement, provides a versatile platform to realize high-efficiency single-photon sources and entangled-photon sources. On the one hand, the broadband enhancement is beneficial to the realization of highly efficient single-photon sources with less restriction on spectral match. 
On the other hand, it can also be used to realize efficient entangled-photon sources, which needs both enhancement in biexciton and exciton with different emission wavelengths. Moreover, the existence of slow light mode in the cavity allows for the realization of photon-photon interaction due to the slow-down of light. By integrating with topological waveguides, a chiral quantum interface can be formed, forming the basis of scalable chiral quantum optical circuits. Exciting prospects can be predicted, including the development of complex nanophotonic circuits for quantum information processing and studying novel quantum many-body dynamics. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China (Grants No. 62025507, No. 11934019, No. 11721404, No. 61775232 and No. 11874419), the Key-Area Research and Development Program of Guangdong Province (Grant No.2018B030329001), and the Strategic Priority Research Program (Grant No. XDB28000000) of the Chinese Academy of Sciences. \end{acknowledgments}
\section{Analysis} \subsection{Inference Speed} We compare AdaLM's parameters' size and inference speed with the BERT model in the biomedical domain in Table~\ref{speed-compare}. \begin{table}[ht] \centering \scalebox{1}{ \begin{tabular}{llcc} \bottomrule \textbf{Type} &\textbf{Model} & \textbf{\#Params} & \textbf{Speedup} \\ \hline \multirow{3}{*}{Large} &BERT & 109M & $\times 1.0$ \\ &PubMedBERT & 109M & $\times 1.0$ \\ &AdaLM vocab & 132M & $\times 1.07$ \\ \hline \multirow{2}{*}{Small} & BERT vocab & 22M & $\times 5.0 $ \\ &AdaLM & 34M & $\times 5.1$\\ \bottomrule \end{tabular} } \caption{Comparison of model's parameter size and the inference speed. The inference speedup is computed by the classification task ChemProt and evaluated on a single NVIDIA P100 GPU.} \label{speed-compare} \end{table} First we can find that the vocabulary expansion yields marginal improvements on the model's inference speed. We added about 20M parameters in the embedding weights in the large model using AdaLM vocabulary, but its inference speed is slightly faster than BERT and PubMedBERT. Since most domain-specific terms are shattered into fragmented subwords, the length of the token sequence we get by using the incremental vocabulary is shorter than the length of the sequence got by the original vocabulary, which reduces the computation load. We list the change of the sequence length of the downstream tasks in Appendix~\ref{appen:seqlen}. Meanwhile, in the embedding layers, the model just needs to map the sub-words' id to their dense representations, which is little affected by the parameters' size. The small model shows the same trend. In addition, the small model AdaLM shows great potential. Compared with the 12-layer model of 768 hidden dimensions, the 6-layer model of 384 hidden dimensions is 3.3x smaller and 5.1x faster in the model efficiency, while performing similarly to or even better than $\text{BERT}_\text{BASE}$. \subsection{Impact of Training Time} Pre-training often demands lots of time. In this section, we examine the adapted model's performance as a function of training time. Here we use the biomedical domain since its unlabelled texts are abundant and compare the large domain-specific adapted model with BioBERT. For every 24 hrs of continual pre-training, we fine-tuned the adapted model on the downstream tasks. For comparison, we convert the training time of BioBERT to the time it may take with the same computing resource of this work (16 V100 GPUs). \begin{table}[h] \centering \begin{tabular}{lcc} \bottomrule \textbf{Model} & \textbf{Training Time} & \textbf{Average} \\ \hline \multirow{4}{*}{AdaLM} & 0 hrs & 74.25 \\ & 24 hrs & 76.80\\ & 48 hrs & 77.36\\ & 72 hrs & 77.74\\ \hline BERT & 0 hrs & 74.28\\ \hline BioBERT & 120 hrs & 76.22\\ \bottomrule \end{tabular} \caption{Results with different pre-training time. In the table, AdaLM is the adapted large model without compressing.} \label{time_compare} \end{table} We list the results in Table~\ref{time_compare}, we denote the large adapted model as AdaLM in the table. AdaLM at 0 hrs means that we fine-tune the initialized model directly without any continual pre-training. We find that BERT is slightly better than 0hr AdaLM and after 24 hrs, AdaLM outperforms BioBERT, which demonstrates that domain-specific vocabulary is very critical for domain adaption of pre-trained model. Our experiments demonstrate promising results in the biomedical domain. Under constrained computation, AdaLM achieves better performance compared to BioBERT. 
More details can be found in Appendix~\ref{sec:time}.
\subsection{Impact of Vocabulary Size}
\label{sec: impact-of-size}
To understand the impact of the vocabulary size, we conduct experiments with different vocabulary sizes in the biomedical domain. We use the large biomedical AdaLM model and, to reduce the computational load, set the batch size to 256 and the number of training steps to 250K in these ablation studies. We show the performance of the model with different vocabulary sizes in Table~\ref{perform_diff_vocab}.
\begin{table}[ht]
\centering
\scalebox{0.9}{
\begin{tabular}{lccccc}
\toprule
& 40k & 50k & 60k & 70k & 80k \\
\hline
JNLPBA & 78.84 & \textbf{79.02} & 78.91 & 78.94 & 79.01 \\
\hline
PICO & \textbf{75.09} & 74.81 & 74.99 & 74.58 & 75.00 \\
\hline
ChemProt & 76.10 & 76.80 & \textbf{77.21} & 76.40 & 76.85 \\
\hline
Average & 76.67 & 76.87 & \textbf{77.03} & 76.64 & 76.95 \\
\bottomrule
\end{tabular}
}
\caption{The performance of different vocabulary sizes.}
\label{perform_diff_vocab}
\end{table}
We observe that the 60k model achieves the best results in our ablation studies. The result is somewhat surprising: despite having larger vocabularies, the 70k and 80k models do not show stronger performance. A possible explanation is that a larger vocabulary contains more complicated but less frequent words, which cannot be learnt well through continual pre-training. For example, the word \emph{ferrocytochrome} exists in the 70k and 80k vocabularies but is split into (\emph{`ferrocy', `\#\#tochrom', `\#\#e'}) in the 60k vocabulary. In our sampled data (about 550k sentences), `\emph{ferrocytochrome}' appears fewer than 100 times, while the subword `\emph{\#\#tochrom}' appears more than 10k times and `\emph{ferrocy}' appears more than 200 times. The representations of such rare words cannot be learnt well due to the sparsity problem.
\subsection{Vocabulary Visualization}
The main motivation for using an expanded vocabulary is to better leverage domain knowledge. Compared to PubMedBERT, which uses only a domain-specific vocabulary and initializes the model randomly, keeping the general vocabulary and the general language model's weights may help us make good use of the existing knowledge and word embeddings. To assess the importance of the expanded vocabulary, we compute the $L2$-distance between the embedding weights before and after pre-training of our biomedical AdaLM model in Figure~\ref{fig:vocab}.
\begin{figure}[h]
\centering
\includegraphics[scale=0.5]{vocab_use.pdf}
\caption{The $L2$-distance of the embedding layer. The deeper the color, the larger the distance.}
\label{fig:vocab}
\end{figure}
We observe that the domain-specific part of the vocabulary changes substantially during pre-training, which indicates that our model learns much information about these domain-specific terms. We also observe little change in many of the original sub-words' embedding weights, which indicates that much of the general vocabulary can be reused directly in continual training.
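For reference, the embedding-drift analysis behind Figure~\ref{fig:vocab} can be reproduced with a few lines of NumPy. The sketch below is purely illustrative and is not the implementation used in this work: the function name \texttt{embedding\_drift}, the matrices \texttt{E\_before} and \texttt{E\_after} (the embedding weights extracted from the checkpoints before and after continual pre-training), and the random stand-in data are our own assumptions.
\begin{verbatim}
import numpy as np

def embedding_drift(E_before, E_after):
    # Per-token L2 distance between embedding rows
    # before and after continual pre-training.
    return np.linalg.norm(E_after - E_before, axis=1)

# Random stand-ins for the two checkpoints (illustration only).
rng = np.random.default_rng(0)
E_before = rng.normal(size=(60000, 768))
E_after = E_before + rng.normal(scale=0.01, size=E_before.shape)

drift = embedding_drift(E_before, E_after)
# Rows beyond the original general vocabulary correspond to the
# expanded, domain-specific part and can be compared separately.
print(drift[:30000].mean(), drift[30000:].mean())
\end{verbatim}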
\section{Occurrence probability of different vocabulary sizes}
\label{sec:probability}
\begin{table}[ht]
\centering
\begin{tabular}{lc}
\hline
\textbf{Vocabulary} & \textbf{P(D)} \\
\hline
\textbf{BERT} & -255.92\\
\textbf{PubMed} & -218.49\\
\hline
\textbf{40k vocab} & -220.06\\
\textbf{50k vocab} & -214.40\\
\textbf{60k vocab} & -211.88\\
\textbf{70k vocab}& -210.44\\
\textbf{80k vocab} & -209.57\\
\textbf{90k vocab}& -208.86\\
\textbf{100k vocab}& -208.42\\
\hline
\end{tabular}
\caption{The $P(D)$ of different vocabularies in the biomedical domain.}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{lc}
\hline
\textbf{Vocabulary} & \textbf{P(D)} \\
\hline
\textbf{BERT} & -211.14 \\
\hline
\textbf{40k vocab} & -194.08 \\
\textbf{50k vocab} & -192.56 \\
\textbf{60k vocab} & -191.87 \\
\textbf{70k vocab} & -191.45 \\
\textbf{80k vocab} & -191.09 \\
\textbf{90k vocab}& -190.76 \\
\textbf{100k vocab}& -190.53 \\
\hline
\end{tabular}
\caption{The $P(D)$ of different vocabularies in the computer science domain.}
\end{table}
\section{Fine-tuning hyperparameters for downstream tasks}
\label{sec:hyparameter}
\begin{table}[ht]
\centering
\begin{tabular}{llcc}
\hline
\textbf{Hyperparameter} & \multicolumn{3}{c}{\textbf{Assignment}} \\
\hline
& \textbf{NER} & \textbf{PICO} &\textbf{RE}\\
\hline
Batch size & 32 & \{16,32\} & 32 \\
Learning rate &\multicolumn{3}{c}{\{1e-5,3e-5,5e-5\}} \\
Epoch &\{30-40\} & \{10,15\} & \{40-50\} \\
Dropout & \multicolumn{3}{c}{0.1}\\
\hline
\end{tabular}
\caption{Hyperparameters used to fine-tune on biomedical tasks.}
\label{bio:hyparameter-finetune}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{llc}
\hline
\textbf{Hyperparameter} & \multicolumn{2}{c}{\textbf{Assignment}} \\
\hline
& \textbf{ACL-ARC} &\textbf{SCIERC}\\
\hline
Batch size &\multicolumn{2}{c}{16}\\
Learning rate&\multicolumn{2}{c}{2e-5} \\
Epoch &\multicolumn{2}{c}{20}\\
Dropout & \multicolumn{2}{c}{0.1}\\
\hline
\end{tabular}
\caption{Hyperparameters used to fine-tune on computer science tasks.}
\label{cs: hyparameter-finetune}
\end{table}
\section{Sequence Length}
\label{appen:seqlen}
After vocabulary expansion, token sequences may become shorter. We compute the average tokenized sentence length on the downstream tasks and list the results in Table~\ref{append:lenght}.
\begin{table}[ht]
\begin{tabular}{lcc}
\toprule
\textbf{Dataset} & \textbf{Original Vocab} & \textbf{Incr. Vocab} \\
\midrule
ChemProt & 66 & 53 \\
EBM PICO & 36 & 31 \\
JNLPBA & 41 & 32 \\
\midrule
ACL-ARC & 53 & 50 \\
SCIERC & 45 & 42 \\
\bottomrule
\end{tabular}
\caption{Average sequence length when tokenized with the original vocabulary and with the expanded vocabulary.}
\label{append:lenght}
\end{table}
\section{Results of different training time}
\label{sec:time}
We list the results on the biomedical tasks for each pre-training time in the following table.
\begin{table}[ht]
\begin{tabular}{lcccc}
\hline
& \textbf{0h} & \textbf{24h} & \textbf{48h} & \textbf{72h} \\
\hline
JNLPBA& 77.56 & 79.14 & 79.11 & 79.46 \\
\hline
PICO& 73.29 & 74.22 & 75.28 & 75.36 \\
\hline
ChemProt& 71.91 & 77.06 & 77.69 & 78.42 \\
\hline
Average& 74.25 & 76.80 & 77.36 & 77.74 \\
\hline
\end{tabular}
\caption{The performance of models with different pre-training times.}
\end{table}
\section{Conclusion}
In this paper, we investigate several strategies for compressing general BERT models into specific domains.
Our experiments reveal that the best strategy to obtain a task-agnostic domain-specific pretrained model is to adapt large and small models into specific domains separately and then compress the adapted large model with the adapted small model as initialization. We show that the adapted 6-layer model of 384 hidden dimensions outperforms the $\text{BERT}_{\text{BASE}}$ model while being 3.3× smaller and 5.1× faster than $\text{BERT}_{\text{BASE}}$. Our findings suggest that the domain-specific vocabulary and the general-domain language model both play vital roles in domain adaptation of a pretrained model. In the future, we will investigate more directions in domain adaptation, such as data selection and efficient adaptation.
\section{Experiment Details}
We conduct our experiments in two domains: biomedical and computer science.
\subsection{Datasets}
\paragraph{Domain corpus:}
For the biomedical domain, we collect a 16GB corpus from PubMed\footnote{https://pubmed.ncbi.nlm.nih.gov/} abstracts to adapt our model. We use the latest collection and pre-process the corpus with the same procedure as PubMedBERT (we omit any abstracts with fewer than 128 words to reduce noise). For the computer science domain, we use the abstract text from the arXiv\footnote{https://www.kaggle.com/Cornell-University/arxiv} Dataset. We select abstracts in computer science categories, collecting 300M entries for the corpus.
\paragraph{Fine-tuning tasks:}
For the biomedical domain, we choose three tasks: named entity recognition (NER), evidence-based medical information extraction (PICO), and relation extraction (RE). We report entity-level F1 for the NER task and word-level macro-F1 for the PICO task. The RE task is evaluated with micro-F1 over the positive classes. The JNLPBA \citep{collier-kim-2004-introduction} NER dataset contains 6,892 disease mentions, which are mapped to 790 unique disease concepts with BIO tagging \cite{ramshaw1999text}. The EBM PICO~\citep{nye2018corpus} dataset annotates text spans with four tags: Participants, Intervention, Comparator, and Outcome. The ChemProt~\cite{krallinger2017overview} dataset covers five interaction types between chemical and protein entities. We list the statistics of these tasks in Table~\ref{bio-task-info}. We fine-tune two downstream tasks in the computer science domain; both are classification tasks. The ACL-ARC~\cite{jurgens-etal-2018-measuring} dataset focuses on analyzing how scientific works frame their contributions through different types of citations. The SCIERC~\cite{luan-etal-2018-multi} dataset includes annotations for scientific entities, their relations, and coreference clusters. The statistics are available in Table~\ref{cs-task-info}.
\begin{table}[h]
\centering
\begin{tabular}{lrrr}
\toprule
\textbf{Dataset} & \textbf{Train} & \textbf{Dev} &\textbf{Test} \\
\hline
JNLPBA & 46,750 & 4,551 & 8,662 \\
EBM PICO & 339,167 & 85,321 & 16,364 \\
ChemProt & 18,035 & 11,268 & 15,745 \\
\bottomrule
\end{tabular}
\caption{Biomedical datasets used in our experiments. All are selected from BLURB\protect\footnotemark}
\label{bio-task-info}
\end{table}
\begin{table}[ht]
\centering
\begin{tabular}{lcccc}
\toprule
\textbf{Dataset} & \textbf{Train} & \textbf{Dev} &\textbf{Test}& \textbf{Classes}\\
\hline
ACL-ARC & 1,688 & 114 & 139 & 6 \\
SCIERC & 3,219 & 455 & 974 & 7\\
\bottomrule
\end{tabular}
\caption{Computer science datasets used in our experiments.
We use the same train, development, and test splits as \citet{dontstoppretraining2020}} \label{cs-task-info} \end{table} \footnotetext{https://microsoft.github.io/BLURB/} \subsection{Implementation} We use the uncased version of $\text{BERT}_{\text{BASE}}$ ($12$ layers, $768$ hidden size) as the large model and the $\text{MiniLM}$ ($6$ layers, $384$ hidden size) as the small model. To adapt the large model, we set the batch size at 8192 and the training step at 30,000. The peak learning rate was set to 6e-4. To adapt the small model, we set the batch size as 256 and the training step as 200,000. The learning rate is set to 1e-4. The maximum length of the input sequence was 512 and the token masking probability was 15\% for both the large model and the small model. We implement MiniLM to compress large models and follow the setting of MiniLM, where the batch size was set to 256 and peak learning rate as 4e-4. We set the training step as 200,000. For biomedical tasks, we follow the setting of PubMedBERT~\cite{2020Domain} to fine-tune these three tasks. For computer science tasks, we use the same setting as \citet{dontstoppretraining2020}. The concrete parameters are shown in Appendix~\ref{sec:hyparameter}. \section{Introduction} Pre-trained language models, such as GPT~\cite{radford2018improving}, BERT~\cite{devlin-etal-2019-bert}, RoBERTa~\cite{liu2019roberta} and UniLM~\cite{dong2019unified} have achieved impressive success in many natural language processing tasks. These models usually have hundreds of millions of parameters. They are pre-trained on a large corpus of general domain and fine-tuned on target domain tasks. However, it is not optimal to deploy these models directly to edge devices in specific domains. First, heavy model size and high latency makes it difficult to deploy on resource-limited edge devices such as mobile phone. Second, directly fine-tuning a general pre-trained model on a domain-specific task may not be optimal when the target domain varies substantially from the general domain. Thirdly, many specialized domains contain their own specific terms, which are not included in pre-trained language model vocabulary. \begin{figure}[t] \centering \includegraphics[scale=0.41]{overview6.pdf} \caption{The four alternatives when distilling BERT into specific domains. All strategies are task-agnostic.} \label{fig:overview} \end{figure} In this paper, we introduce AdaLM, a framework that aims to develop small, fast and effective pre-trained language models for specific domains. To address domain shift problem, recent studies~\cite{lee2020biobert,dontstoppretraining2020} conduct continual pre-training to adapt a general domain pre-trained model to specific domains. However, specific domains contain many common in-domain terms, which may be divided into bite-sized pieces (e.g., \textit{lymphoma} is tokenized into [\textit{l, \#\#ym, \#\#ph, \#\#oma}]). \citeauthor{2020Domain}(\citeyear{2020Domain}) mentions that domain-specific vocabularies play a vital role in domain adaptation of pre-trained models. Specifically, we propose a domain-specific vocabulary expansion in the adaptation stage, which augments in-domain terms or subword units automatically given in-domain text. Also, it is critical to decide the size of incremental vocabulary. Motivated by subword regularization~\cite{kudo2018subword}, AdaLM introduces a corpus occurrence probability as a metric to optimize the size of incremental vocabulary automatically. 
We systematically explore different strategies to compress general BERT models to specific domains (Figure~\ref{fig:overview}): (a) From scratch: pre-training domain-specific small model from scratch with domain corpus; (b) Distill-then-Adapt: first distilling large model into small model, then adapting it into a specific domain; (c) Adapt-then-Distill: first adapting BERT into a specific domain, then distilling model into small size; (d) Adapt-and-Distill: adapting both the large and small models, then distilling with these two models initializing the teacher and student models respectively. We conduct experiments in both biomedical and computer science domain and fine-tune the domain-specific small models on different downstream tasks. Experiments demonstrate that Adapt-and-Distill achieves state-of-the-art results for domain-specific tasks. Specifically, the 6-layer model of 384 hidden dimensions outperforms the $\text{BERT}_{\text{BASE}}$ model while 3.3× smaller and 5.1× faster than $\text{BERT}_{\text{BASE}}$. \section{Methods} \subsection{Overview} We systematically explore different strategies to achieve an effective and efficient small model in specific domains. We summarize them into four strategies: from scratch, distill-then-adapt, adapt-then-distill and adapt-and-distill. \paragraph{Pretrain-from-scratch} Domain-specific pretraining from scratch employs a random initialization of a pretrained model and pretrains a small model directly on domain-specific corpus. In this work, we conduct pretraining from scratch on different vocabularies including BERT original vocabulary, from scratch vocabulary, and expanded vocabulary. \paragraph{Distill-then-adapt} These approaches first distill the large general pretrained model which pretrained on Wikipedia and BookCorpus. Then it continues the pretraining process using a domain-specific corpus. In this work, we first distill the BERT model into a small model using task-agnostic knowledge distillation in MiniLM~\cite{wang2020minilm}. Then we initialize the small model with it and conduct continual training with both the BERT original vocabulary and the expanded vocabulary. \paragraph{Adapt-then-distill} In this work, we select different large models as teacher models such as BERT and large models with different vocabularies. We first adapt these models into domain-specific models and then implement MiniLM to compress them to small models. \paragraph{Adapt-and-distill} In the previous part, when doing knowledge distill, we initialized the student model randomly. In order to get a better domain-specific small model, we try to explore the impact of the initialization of the student model. In this part, we adapt large and small models into specific domains separately, then use these two models to initialize the teacher and student model respectively. \subsection{Domain Adaptation} \label{sec:adalm} AdaLM contains a simple yet effective domain adaptation framework for a pretrained language model. As shown in Figure~\ref{fig:pipeline}, it takes a general pretrained language model, original vocabulary and a domain specific corpus as input. Through vocabulary expansion and continual pretraining, AdaLM adapts general models into specific domains. \begin{figure}[t] \centering \includegraphics[scale=0.5]{pipeline_v2.pdf} \caption{The pipeline of domain adaptation. 
Here we adapt the BERT model into the biomedical domain with the PubMed dataset.}
\label{fig:pipeline}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.43]{model.init.v2.pdf}
\caption{Concatenation of the original embedding with the expanded embedding.}
\label{fig:model_init}
\end{figure}
The core pipeline of domain adaptation consists of the three steps described below:
\begin{enumerate}
\item Given the original vocabulary and a domain-specific corpus, the vocabulary expansion module augments the original vocabulary with domain-specific subword units or terms. We add domain-specific vocabulary from the target domain while keeping the original BERT vocabulary unchanged. We describe this in more detail in Section~\ref{sec:vocab}.
\item Because the vocabulary size has changed, we cannot initialize our model with BERT directly. As illustrated in Figure~\ref{fig:model_init}, we initialize the original embeddings and the Transformer encoder with weights from BERT (the green part in Figure~\ref{fig:model_init}). For each word in the incremental vocabulary, we first tokenize it into sub-words with the original vocabulary and then initialize its embedding with the average of its sub-word embeddings. As shown in Figure~\ref{fig:model_init}, the word `\textit{lymphoma}' is not included in the BERT vocabulary. We tokenize it into three sub-words (\textit{lym}, \textit{\#\#pho}, \textit{\#\#ma}), and the embedding vector of `\textit{lymphoma}' is initialized as the average of the embedding vectors of `\textit{lym}', `\textit{\#\#pho}' and `\textit{\#\#ma}'.
\item After model initialization and data preprocessing, we continually pretrain our model on the domain-specific corpus using the masked language model loss. Following BERT, we randomly replace 15\% of the tokens with a special token (e.g., [MASK]) and ask the language model to predict them during continual pretraining.
\end{enumerate}
\subsection{Vocabulary Expansion}\label{sec:vocab}
Vocabulary expansion is the core module of AdaLM. It adds domain-specific terms or subword units to better leverage domain knowledge. The size of the incremental vocabulary is a vital parameter for vocabulary expansion. Considering that unigram language modeling~\cite{kudo2018subword} aligns more closely with morphology and avoids problems stemming from BPE's greedy construction procedure, as argued in \cite{bostrom-durrett-2020-byte}, we follow \citet{kudo2018subword} and introduce a corpus occurrence probability as a metric to optimize the size of the incremental vocabulary automatically. We assume that each subword occurs independently and assign to each subword in the corpus a probability $p(x_i)$ equal to its relative frequency in the corpus:
\begin{eqnarray}
\forall i\,\, x_i \in \mathcal{V},\,\,\, \sum_{x_i \in \mathcal{V}} p(x_i) = 1,
\label{prob of sentences}
\end{eqnarray}
where $\mathcal{V}$ is a pre-determined vocabulary. The probability of a subword sequence $\mathbf{x} = (x_1,\ldots,x_M)$ can be computed as the product of the subword appearance probabilities $p(x_i)$. We convert it to logarithmic form:
\begin{eqnarray}
P(\mathbf{x}) = \sum_{i=1}^{M} \log p(x_i).
\end{eqnarray}
Given a domain-specific corpus $D$, the occurrence probability of corpus $D$ is then formulated as:
\begin{eqnarray}
P(D) = \sum_{\mathbf{x} \in D} P(\mathbf{x}),
\end{eqnarray}
where $\mathbf{x}$ ranges over the tokenized sentences in corpus $D$. We sample 550k sentences from the PubMed corpus and compute the occurrence probability $P(D)$ with different vocabulary sizes. The results are shown in Figure~\ref{fig:vocab_size_copare}.
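For concreteness, the corpus occurrence probability above can be computed with a short script. The sketch below is a simplified illustration rather than our actual implementation: \texttt{tokenize} stands for any tokenizer built from a candidate vocabulary, and the unigram probabilities are taken to be the empirical subword frequencies of the tokenized corpus, as assumed in the text.
\begin{verbatim}
import math
from collections import Counter

def corpus_log_probability(sentences, tokenize):
    # P(D): sum over sentences of P(x), where
    # P(x) = sum_i log p(x_i) and p is the empirical
    # subword frequency in the tokenized corpus.
    tokenized = [tokenize(s) for s in sentences]
    counts = Counter(t for sent in tokenized for t in sent)
    total = sum(counts.values())
    log_p = {t: math.log(c / total) for t, c in counts.items()}
    return sum(sum(log_p[t] for t in sent) for sent in tokenized)
\end{verbatim}
Vocabularies of increasing size can then be compared by evaluating this quantity on the same sampled sentences.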
We compare the occurrence probability with BERT and PubMedBERT vocabularies. We observe that $P(D)$ reveals a logarithmic trend with substantial increases at the beginning and little influence after vocabulary size of 70k in the biomedical domain. The PubMedBERT vocabulary performs similarly to the 40k size vocabulary. We present the occurrence probability of different vocabulary sizes in Appendix~\ref{sec:probability}. \begin{figure}[h] \centering \includegraphics[scale=0.52]{vocab_size_compare.pdf} \caption{The $P(D)$ of different vocab sizes under biomedical domain. We use the BERT's vocabulary as the 30k vocabulary without vocabulary expanding. The PubMedBERT vocabulary is also 30k.} \label{fig:vocab_size_copare} \end{figure} We propose a simple method to decide the size of the incremental vocabulary. Assume the probability at the time step $i-1$ is $P_{i-1}(D)$ and at the time step $i$ is $P_{i}(D)$. If the rise $\frac{P_{i}(D) - P_{i-1}(D)}{P_{i-1}(D)}$ is lower than a threshold $\delta$, we regard the vocabulary size at the time step $i$ as the final size. \begin{algorithm} \caption{Vocabulary Expansion} \label{algo} \KwIn{Original vocabulary $raw\_vocab$, domain corpora $D$, threshold $\delta$ and vocabulary size step $V_\Delta$} \KwOut{$vocab_{final}$} $token\_count \leftarrow$ whitespace split from $D$\; $P_0 \leftarrow$ computed from $raw\_vocab$\; $V_0 \leftarrow |raw\_vocab|$\; \Do { $\frac{P_{i} - P_{i-1}}{P_{i-1}}>\delta$ } { vocabulary size $V_i \leftarrow V_{i-1} + V_\Delta$\; $sub\_count \leftarrow$ split token to subwords\; Sort $sub\_count$ by frequency\; $incr\_vocab \leftarrow $keep $(V_i - V_0)$ subwords\; $vocab_{i} \leftarrow$ $raw\_vocab + incr\_vocab$\; $P_i \leftarrow$ computed from $vocab_i$ } return $vocab_{final} \leftarrow vocab_{i}$ \; \end{algorithm} We expand the domain-specific vocabulary with the process shown in Algorithm~\ref{algo}. We implement our vocabulary expansion algorithm referring to SubwordTextBuilder in tensor2tensor\footnote{https://github.com/tensorflow/tensor2tensor}. In experiments, we set the threshold $\delta$ as $1\%$ and vocabulary size step $V_\Delta$ as 10k. Finally, we obtain the expanded vocabulary size of biomedical as 60k and computer science domain as 50k. \section{Related Work} \paragraph{Domain adaptation of pre-trained model} Most previous work on the domain-adaptation of pre-trained models targets large models. \citeauthor{lee2020biobert}~(\citeyear{lee2020biobert}) conduct continual pre-training to adapt the BERT model to the biomedical domain using the PubMed abstracts and the PMC full text. \citeauthor{dontstoppretraining2020}~(\citeyear{dontstoppretraining2020}) also employ continual pre-training to adapt pre-trained models into different domains including biomedical, computer science and news. However, many specialized domains contain their own specific words that are not included in pre-trained language model vocabulary. \citeauthor{2020Domain}(\citeyear{2020Domain}) propose a biomedical pre-trained model PubMedBERT, where the vocabulary was created from scratch and the model is pre-trained from scratch. Furthermore, in many specialized domains, large enough corpora may not be available to support pre-training from scratch. \citet{zhang2020multi} and \citet{tai2020exbert} extend the open-domain vocabulary with top frequent in-domain words to resolve this out-of-vocabulary issue. This approach ignores domain-specific sub-word units (e.g., \textit{blasto-}, \textit{germin-} in biomedical domain). 
These subword units help generalize domain knowledge and avoid unseen words.
\paragraph{Task-agnostic knowledge distillation}
In recent years, tremendous progress has been made in model compression~\cite{cheng2017survey}. Knowledge distillation has proven to be a promising way to compress large models while maintaining accuracy~\cite{sanh2019distilbert,jiao2019tinybert,sun2020mobilebert,wang2020minilm}. In this paper, we focus on task-agnostic knowledge distillation approaches, where a distilled small pre-trained model can be directly fine-tuned on downstream tasks. DistilBERT~\cite{sanh2019distilbert} employs soft labels and embedding outputs to supervise the student. TinyBERT~\cite{jiao2019tinybert} and MobileBERT~\cite{sun2020mobilebert} introduce self-attention distributions and hidden states to train the student model. MiniLM~\cite{wang2020minilm} avoids restrictions on the number of student layers and employs the self-attention distributions and value relation of the teacher's last transformer layer to supervise the student model. Because this method is more flexible, we implement MiniLM to compress large models in this work. No previous work systematically explores different strategies to achieve an effective and efficient smaller model in specific domains.
\section{Results}
\begin{table*}[ht]
\centering
\scalebox{0.83}{
\begin{tabular}{clllcccc}
\toprule
\textbf{Config} & \multicolumn{1}{c}{\textbf{Type}} & \textbf{Model} & \textbf{Teacher} & \textbf{JNLPBA} & \textbf{PICO} & \textbf{ChemProt} & \textbf{Average} \\
\hline
\multirow{4}{*}{$L$=12; $d$=768} & \multirow{4}{*}{Large model} & BERT$^\dagger$ & - & 78.63 & 72.34 & 71.86 & 74.28 \\
&& BioBERT$^\dagger$ & - & 79.35 & 73.18 & 76.14 & 76.22 \\
&& PubMedBERT$^\dagger$ & - & \textbf{80.06} & 73.38 & 77.24 & 76.89\\
&& AdaLM$^\diamondsuit$ & - & 79.46 & \textbf{75.47} & \textbf{78.41} & \textbf{77.74} \\
\hline
\multirow{11}{*}{$L$=6; $d$=384} & \multirow{1}{*}{Small model} & MiniLM & - & 77.44 & 71.69 & 68.08 & 72.40\\
\cmidrule{2-8}
&\multirow{3}{*}{From scratch} & BERT vocab (a) & - & 77.89 & 72.97 & 70.21 & 73.69\\
&& PubMed vocab (b) & - & 77.82 & 73.82 & 70.32 & 73.99\\
&& AdaLM vocab (c) & - & 77.80 & 73.39 & 70.86 & 74.02\\
\cmidrule{2-8}
&\multirow{3}{*}{Distill-then-Adapt} & BERT vocab (d) & - & 78.63 & 74.00 & 72.28 & \underline{74.97} \\
&& PubMed vocab (e) & - & 78.36 & 73.91 & 71.33 & \underline{74.53}\\
&& AdaLM vocab (f) & - & 78.77 & 74.23 & \textbf{72.29} & \underline{75.10}\\
\cmidrule{2-8}
&\multirow{3}{*}{Adapt-then-Distill} & Random initial (g) & BERT & 77.98 & 72.38 & 68.86 & 73.07 \\
&& Random initial (h) & PubMedBERT & 78.78 & 74.20 & 70.89 & \underline{74.62} \\
&& Random initial (i) & AdaLM$^\diamondsuit$ & 78.98 & 74.78 & 71.51 & \underline{75.09} \\
\cmidrule{2-8}
& Adapt-and-Distill & Model (f) initial (j) & AdaLM$^\diamondsuit$ & \textbf{79.04} & \textbf{74.91} & 72.06 & \underline{\textbf{75.34}} \\
\bottomrule
\end{tabular}
}
\caption{Comparison between different strategies on biomedical tasks. AdaLM$^\diamondsuit$ denotes the adapted large model without distillation. Scores of the methods marked with $^\dagger$ are taken from~\cite{2020Domain}. Underlined numbers mark small models whose performance surpasses that of the BERT model.
$L$ and $d$ indicate the number of layers and the hidden dimension of the model.}
\label{table:biomedical}
\end{table*}
\begin{table*}[ht]
\centering
\scalebox{0.92}{
\begin{tabular}{clllccc}
\toprule
\textbf{Config} & \multicolumn{1}{c}{\textbf{Type}} & \textbf{Model} & \textbf{Teacher} & \textbf{ACL-ARC} & \textbf{SCIERC} & \textbf{Average} \\
\hline
\multirow{2}{*}{$L$=12; $d$=768} &\multirow{2}{*}{Large model} & BERT & - & 64.92 & 81.14 & 73.03 \\
& & AdaLM$^\diamondsuit$ & - & \textbf{73.61} & \textbf{81.91} & \textbf{77.76} \\
\hline
\multirow{8}{*}{$L$=6; $d$=384} & Small model & MiniLM & - & 61.5 & 72.88 & 67.19 \\
\cmidrule{2-7}
&\multirow{2}{*}{From scratch} & BERT vocab (a) & - & 62.48 & 74.93 & 68.70 \\
& & AdaLM vocab (b) & - & 59.57 & 74.93 & 67.25 \\
\cmidrule{2-7}
&\multirow{2}{*}{Distill-then-Adapt} & BERT vocab (c) & - & 65.75 & 79.13 & 72.44 \\
& & AdaLM vocab (d) & - & 65.93 & \textbf{79.88} & 72.91 \\
\cmidrule{2-7}
&\multirow{2}{*}{Adapt-then-Distill} & Random initial (e) & BERT & 63.12 & 77.89 & 70.50 \\
& & Random initial (f) & AdaLM$^\diamondsuit$ & 66.21 & 77.04 & 71.62 \\
\cmidrule{2-7}
& Adapt-and-Distill & Model (d) initial (g) & AdaLM$^\diamondsuit$ & \textbf{68.74} & 78.88 & \underline{\textbf{73.81}} \\
\bottomrule
\end{tabular}
}
\caption{Comparison between different strategies on computer science tasks. AdaLM$^\diamondsuit$ is the adapted large model without compression. We report averages across five random seeds. Underlined numbers are results of small models that outperform the BERT model. $L$ and $d$ indicate the number of layers and the hidden dimension of the model.}
\label{table:cs}
\end{table*}
The results are shown in Tables~\ref{table:biomedical} and \ref{table:cs}. We structure our evaluation by stepping through each of our three findings: (1) Domain-specific vocabulary plays a significant role in domain-specific tasks, and expanding the general vocabulary is better than using only a domain-specific vocabulary. We observe improved results with the expanded vocabulary for both the large and small models. For the large models, AdaLM achieves the best results in each domain: 77.74 on the biomedical tasks, beating BioBERT and PubMedBERT, and 77.76 on the computer science tasks. For the small models in the biomedical domain, whether we train from scratch or distill-then-adapt, models with the incremental vocabulary always perform better than those with the general vocabulary or with only the domain-specific vocabulary. (For distill-then-adapt with the PubMed vocabulary, we initialize the word embeddings in the same way as described in Section~\ref{sec:adalm}.) In addition, with distill-then-adapt, model (f) (75.10) surpasses the BERT model (74.28). In the computer science domain, distill-then-adapt models with the incremental vocabulary also perform well. Model (d) achieves 72.91, comparable to BERT, and outperforms BERT on the ACL-ARC dataset with 65.93 (+1.01 F1). We also observe that when training from scratch, the results of model (b) with the incremental vocabulary are 1.45 points lower than those of model (a). This may be because, after vocabulary expansion, a from-scratch model needs to be pretrained on more unlabeled data. (2) Continual pretraining on domain-specific texts from general language models is better than pretraining from scratch.
\citet{2020Domain} find that for domains with abundant unlabeled text, pretraining language models from scratch outperforms continual pretraining of general-domain language models. However, in our experiments we find that the general-domain model helps our model learn the target domain better. In the biomedical domain, we use the MiniLM model to initialize models (d), (e) and (f) in the distill-then-adapt setting. No matter which vocabulary is used, continual pretraining on domain-specific texts from a general language model is better than pretraining from scratch. With the AdaLM vocabulary, model (f) reaches 75.10, outperforming model (c), which was trained from scratch with the same vocabulary, by 1.08. On the other hand, for domains without enormous amounts of unlabeled text, such as the computer science domain in our experiments, continual pretraining also shows better results: model (d) exceeds both model (b) (+5.66 F1) and model (c) (+0.47 F1). (3) Adapt-and-Distill is the best strategy to develop a task-agnostic domain-specific small pretrained model. In the Adapt-then-Distill part, our findings support previous observations~\citep{wang2020minilm} that a better teacher model leads to a better student model. Using AdaLM, which performs best among the large models, as the teacher yields good results: 75.09 in the biomedical domain and 71.62 in the computer science domain, better than with the other large teacher models. Furthermore, we find that a better student initialization also helps to obtain a better small model. In the Adapt-and-Distill part, we adapt the large and small models to the specific domain separately and then compress the adapted large model as the teacher with the adapted small model as the student initialization. In the biomedical domain, model (j), initialized from model (f), achieves the best result of 75.34 among the small models. It also outperforms the BERT model (+1.06 F1). In the computer science domain, model (g), initialized from model (d), is the only small model that outperforms BERT (+0.78 F1).
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction}
It has been observed that classifiers built on deep learning architectures are prone to misclassification given tiny perturbations of their inputs \citep{szegedy2013intriguing}. Because these perturbations are typically imperceptible, they are commonly thought of as adversarial \citep{papernot2016limitations}. The existence of small perturbations that alter classifier decisions has motivated a plethora of research into how such perturbations may be engineered or prevented \citep{moosavi2016deepfool, madry2017towards, ma2018secure, tramer2020adaptive, machado2021adversarial}. While adversarial perturbations are defined to be imperceptible to humans, the concept of imperceptibility is difficult to define formally. Therefore, the size of a perturbation is often implicitly adopted as a surrogate for perceptibility \citep{fawzi2018adversarial}. Here we demonstrate that susceptibility to small perturbations is a fundamental property of any algorithm that partitions an image space into distinct classes. Specifically, we show that on any image space consisting of images with $n$-by-$n$ pixels and finite bit depth, there exists some universal constant $c$ (parametrized by the number of channels) such that most images in all but one class can have their classes changed with $cn$ pixel changes, a vanishingly small number compared to the $n^2$ pixels within the entire image for sufficiently large $n$. Similarly, we show that a perturbation with a $p$-norm of size $c'n^{1/p}$ suffices as well, for some $c'$ dependent on $p$, the number of channels, and the bit depth. Thus, the creation of a classifier that is robust to perturbations of the sizes described above is impossible. Conversely, we also demonstrate that an upper bound on classifier robustness that applies universally to all image classifiers cannot be smaller than ours by more than a constant factor (parametrized by bit depth). Finally, we show how increasing the bit depth of our image space decreases classifier robustness under certain definitions of perturbation size. Our bounds are unconditional; therefore, they apply to classifiers based on human perception as well. We discuss the possible interpretations of this fact and its potential implications for designing computer vision systems.
\subsection{Related work}
The sensitivity of neural networks to small perturbations was discovered in \citep{szegedy2013intriguing}, where the authors remarked that perhaps adversarial examples form a dense, low-measure set analogous to the rationals. A serious effort to explain adversarial examples was undertaken in \citep{goodfellow2014explaining}, which suggests that adversarial examples are a consequence of high dimensional dot products between weights and inputs. However, their argument is not formal, and it has been shown that high dimensional dot products are neither necessary nor sufficient to explain adversarial images \citep{tanay2016boundary}. Formal arguments bounding adversarial perturbations and robustness have been proven for specific instances \citep{gilmer2018adversarial, tsipras2018robustness}. However, the settings under which these theoretical results hold are usually highly idealized, and these arguments do not hold under more general settings. The most general results for explaining adversarial examples come from universal non-robustness bounds achieved through the use of isoperimetric bounds. This is the approach we take in this work.
Isoperimetric results bound the surface area of any given volume in some space, so they are highly generalizable. The work presented in \citep{fawzi2018adversarial} uses an isoperimetric bound to bound the fraction of the space of natural images that is susceptible to changing classes under a small perturbation for any arbitrary classifier. However, they only consider perturbations measured by the Euclidean distance (2-norm), while our analysis encompasses perturbations measured by any $p$-norm. Furthermore, our bound is of a different nature as it considers the space of all images and is therefore unconditional and universal, while their bounds focus on image manifolds defined by generative functions and therefore are parametrized by the generator. Isoperimetric bounds are also applied to understanding adversarial perturbations in \citep{diochnos2018adversarial}, where it is shown that for arbitrary classifiers over boolean inputs, most inputs can be pushed into a misclassification region with a small perturbation as long as the region occupies an asymptotically finite fraction of the input space. This work has since been extended to apply to a more general class of spaces in \citep{mahloujifar2019curse} using concentration bounds. Our work instead focuses on pushing images into different classification regions, rather than into a specific misclassification region, and is therefore of a slightly different nature. Also, unlike our analysis, their analysis does not preclude the existence of asymptotically infinitesimal classes of images that are robust to perturbations. Our work also explores how these bounds apply to the human visual system due to their universality in contrast to prior work. Studies attempting to understand adversarial perturbations in the human visual system usually do so by showing people adversarial images. This line of work has revealed that imperceptible adversarial perturbations may in fact be perceptible and influence human classifications \citep{elsayed2018adversarial, zhou2019humans}. This line of work is very different from the work presented here: our approach is more theoretical, and our subsequent interpretations focus on perturbations that are clearly visible to humans despite being small. In the remainder of this paper we provide a precise exposition of all our results as well as our terminology (Section 2), interpret these results (Section 3), and provide concluding remarks (Section 4). Proofs are mostly omitted and can be found in Appendix \ref{appdx-all-proofs}. \section{Results} In this section we state universal non-robustness results for classifiers over images that can be encoded with finite bit strings. We then state how these non-robustness results are asymptotically the best we can achieve up to a constant factor, and we conclude by stating some results on how bit depth influences some of these bounds. Intuitively, our results are a consequence of the high dimensional geometric phenomenon where measure concentrates near the boundary of sets in high dimensions. \subsection{Preliminaries} Images consist of pixels on a two dimensional grid, with each pixel consisting of a set of channels (for example R, G, and B) of varying intensity. Therefore, we define an \emph{$h$-channel image of size $n\times n$} to be a real valued tensor of shape $(n,n,h)$, where each entry is restricted to the interval $[0,1]$. The first two dimensions index the pixel, while the third indexes the channel. We use $\imgspace{n}{h}{\infty}$ to denote the set of all such images. 
Only a finite subset of these images can be represented with a finite number of bits. Therefore, we define the set of all \emph{$h$-channel images of size $n\times n$ with bit depth $b$}, denoted $\bitspace{n}{h}{b}$, as the set of all bit valued tensors with shape $(n,n,h,b)$. The additional fourth dimension indexes the positions of a bit string that encodes the intensity of a channel. We map elements of $\bitspace{n}{h}{b}$ to $\imgspace{n}{h}{\infty}$ by mapping each length $b$ bit string to equally spaced values in $[0,1]$ with the largest value being 1 and the smallest being 0. We will use $\imgspace{n}{h}{b}$ to denote the image of $\bitspace{n}{h}{b}$ under this map. We will sometimes refer to $\imgspace{n}{h}{b}$ as \emph{discrete image spaces} to disambiguate them from $\imgspace{n}{h}{\infty}$, which we will refer to as the \emph{continuous image space}. \subsubsection{Classifiers and Classes} A classifier $\mathcal{C}$ is a function $\imgspace{n}{h}{b}\rightarrow \mathcal{Y}$, where $\mathcal{Y}$ is some finite set of labels. For each $y \in \mathcal{Y}$, we define the class of $y$ as the preimage of $y$, denoted as the set of images $\mathcal{C}^{-1}(y)$. We say that such a class is induced by $\mathcal{C}$. If a class takes up a large part of the image space, then it contains a lot of images that look like randomly sampled noise, since randomly sampling channel values from a uniform distribution yields a uniform distribution over the image space. Therefore, many images in these classes tend to be uninteresting, which motivates the following definition: \begin{definition} \label{def-interestingclass} A class $C \subseteq \imgspace{n}{h}{b}$ is \emph{interesting} if it is not empty, and if it contains no more than half of the total number of images in $\imgspace{n}{h}{b}$. \end{definition} Note that if no class is empty, then no more than 1 class can be uninteresting. This is because classes are disjoint and so at most 1 class can contain more than half the total number of images. \subsubsection{Perturbations and Robustness} In order to discuss perturbations, we define addition and subtraction over tensors that are of the same shape to be element-wise, and we define the $p$-norm of a tensor $A$, denoted $\|A\|_p$, to be the $p$th root of the sum of the absolute values of the entries of $A$ raised to the $p$th power. $p$ is assumed to be a non-negative integer, and for the special case of $p=0$ we let $\|A\|_0$ be the number of non-zero entries in $A$. We can then define what it means for an image to be robust to perturbations: \begin{definition} Let $\mathcal{C}:\imgspace{n}{h}{b}\rightarrow \mathcal{Y}$ be a classifier. We say an image $I \in \imgspace{n}{h}{b}$ is robust to $\elp{p}$-perturbations of size $d$ if for all $I' \in \imgspace{n}{h}{b}$, $\|I-I'\|_p \leq d$ implies $\mathcal{C}(I) = \mathcal{C}(I')$. \end{definition} We can then define what it means for a class to be robust to perturbations. Note that unless a class occupies the entire image space, it must contain some non-robust images, so the best we can hope for is to attain robustness for a large fraction of the images within a class. This is reflected in the following definition. \begin{definition} Let $\mathcal{C}:\imgspace{n}{h}{b}\rightarrow \mathcal{Y}$ be a classifier, and let $C$ be a class induced by it. 
Then we say that a class $C$ is $r$-robust to $\elp{p}$-perturbations of size $d$ if it is not empty, and the number of images $I \in C$ that are robust to $\elp{p}$-perturbations of size $d$ is at least $r|C|$, where $|C|$ is the number of images in $C$.
\end{definition}
\subsection{Universal upper bound on classifier robustness}
We can now state a universal non-robustness result that applies to all classifiers over discrete image spaces $\imgspace{n}{h}{b}$.
\begin{theorem}
\label{thm-main}
Let $\mathcal{C}: \imgspace{n}{h}{b} \rightarrow \mathcal{Y}$ be any classifier. Then for all real values $c > 0$, no interesting class is $2e^{-2c^2}$-robust to $\elp{p}$-perturbations of size $(2+c\sqrt{h}*n)^{1/\max(p,1)}$.
\end{theorem}
\begin{proofsketch}
We can use the images in $\imgspace{n}{h}{b}$ to form a graph where images are the vertices, and two images are connected if and only if they differ at exactly one channel. In other words, the image tensors must differ at precisely one entry. Figure \ref{fig-proof1}a illustrates the construction of this graph. Note that the graph distance between vertices coincides with the Hamming distance between the images represented by the vertices. Such graphs are known as Hamming graphs, and they have a vertex expansion (or isoperimetry) property~\citep{harper1999isoperimetric} which implies that for any sufficiently small set, if we add all vertices that are within a graph distance of $\mathcal{O}(n)$ to that set, then the size of that set increases by at least some given factor (see Figure \ref{fig-proof1}b for an example). We can then show that an interesting class $C$ cannot be too robust in the following way: suppose for contradiction that it is. Then there must be some set $C' \subseteq C$ that is fairly large and has the property that all vertices within some graph distance of $C'$ are in $C$. We can then use the vertex expansion property to show that adding these vertices to $C'$ gives a set larger than $C$, which contradicts the assumption that all vertices within that graph distance of $C'$ are contained in $C$. Plugging explicit values into this argument yields the statement of the theorem. We can then generalize to $\elp{p}$-perturbations for arbitrary $p$ since each coordinate varies by at most 1 unit. The full proof can be found in Appendix \ref{appdx-proof-main}.\qed
\end{proofsketch}
\input{figures/figure_proof1}
Intuitively, the above result states that we can change the class of most images in any interesting class with small perturbations on the order of $\mathcal{O}(n)$ pixel changes. The implications of this are considered in the discussion.
\subsubsection{The universal non-robustness results are asymptotically optimal up to a constant factor}
Up to a constant factor, the bounds in Theorem \ref{thm-main} are the best possible for a universal non-robustness result that applies to arbitrary predictors if we only consider $n$ and hold the number of channels per pixel $h$ and the bit depth $b$ constant. In other words, there exists no bound on robustness that applies \emph{universally to all classifiers} and grows much more slowly in $n$ than the ones given in Theorem \ref{thm-main}. Therefore, if we wish to show that the classes induced by some classifier are not robust to, for instance, $\elp{0}$-perturbations of size $\mathcal{O}(\log(n))$, more specific properties of that classifier would need to be considered. To prove this, consider the classifier defined by Algorithm \ref{alg-allcord}.
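In code, this classifier simply thresholds the total channel mass of an image. The following sketch (ours, for illustration only, assuming images are represented as arrays of shape $(n,n,h)$ with entries in $[0,1]$) is equivalent to the pseudocode below.
\begin{verbatim}
import numpy as np

def threshold_classifier(image):
    # image: array of shape (n, n, h) with entries in [0, 1].
    # Return 0 if the sum of all channel values is below half
    # of its maximum possible value n*n*h, and 1 otherwise.
    n, _, h = image.shape
    return 0 if image.sum() < n * n * h / 2 else 1
\end{verbatim}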
\begin{algorithm}
\SetKwInOut{Input}{Input}
\Input{An image $I \in \imgspace{n}{h}{b}$}
\KwResult{A label belonging to $\{0,1\}$}
$S \gets 0$\;
\For{$x \gets 1$ \KwTo $n$}{
\For{$y \gets 1$ \KwTo $n$}{
\For{$a \gets 1$ \KwTo $h$}{
$S \gets S+I_{x,y,a}$\;
}
}
}
\uIf{ $S < n^2h/2$ }{
\Return 0\;
}
\uElse{
\Return 1\;
}
\caption{Robust Classifier}
\label{alg-allcord}
\end{algorithm}
\begin{theorem}
\label{thm-robust}
Let $\mathcal{C}: \imgspace{n}{h}{b} \rightarrow \{0,1\}$ be the classifier described by Algorithm \ref{alg-allcord}. Then there exists an interesting class $C$ induced by $\mathcal{C}$ such that for all $c > 0$:
\begin{enumerate}
\item $C$ is $(1-4c)$-robust to $\elp{p}$-perturbations of size $c\sqrt{h}*n - 2$ for all $p \leq 1$.
\item $C$ is $(1-4c)$-robust to $\elp{p}$-perturbations of size $\frac{(c\sqrt{h}*n - 2)^{1/p}}{2^b-1}$ for all $p \geq 2$.
\end{enumerate}
\end{theorem}
\begin{proofsketch}
Given an image $I$, let $S(I)$ be the sum of all its channel values minus $n^2h/2$. Then $I$ being robust to $\elp{1}$-perturbations of size $x$ is approximately equivalent to $S(I) \notin [-x, x]$. By the central limit theorem, the fraction of images $I$ such that $S(I) \notin [-cn\sqrt{h}, cn\sqrt{h}]$ is some monotonic function of $c$ independent of $n$ and $h$ if $n^2h$ is sufficiently large, which is our desired result. Appendix \ref{appdx-proof-robust} provides a more careful analysis that does not rely on limiting behaviour and extends the result to all $p$-norms.\qed
\end{proofsketch}
Combining this statement with Theorem \ref{thm-main} immediately yields the following corollary, which implies that the bounds in Theorem \ref{thm-main} are asymptotically optimal up to a constant factor:
\begin{corollary}
\label{cor-main}
For all integers $h,b \geq 1$, $p \geq 0$, and $r \in (0,1)$, there exist constants $c_1 \geq c_2 > 0$ and $n_0$ such that for any $n\geq n_0$ and labels $\mathcal{Y}$:
\begin{enumerate}
\item No classifier $\mathcal{C}:\imgspace{n}{h}{b}\rightarrow \mathcal{Y}$ induces an interesting class that is $r$-robust to $\elp{p}$-perturbations of size $c_1n^{1/\max(p,1)}$.
\item There exists a classifier $\mathcal{C}:\imgspace{n}{h}{b}\rightarrow \mathcal{Y}$ which induces an interesting class that is $r$-robust to $\elp{p}$-perturbations of size $c_2n^{1/\max(p,1)}$.
\end{enumerate}
\end{corollary}
We remark that the constant factor by which Theorem \ref{thm-main} misses optimality depends on the bit depth $b$ for $p$-norms with $p \geq 2$, so significant improvements in the bound may still be possible when $b$ is taken into account. We make some progress towards this in Theorem \ref{thm-cont-discr}.
\subsubsection{Classifier robustness to \texorpdfstring{$\elp{2}$}{Lg}-perturbations decreases with increasing bit depth}
In this section we investigate the role played by the bit depth $b$. Theorem \ref{thm-robust} has a dependency on $b$ when considering $\elp{p}$-perturbations for $p \geq 2$. Is there an alternative construction by which we can remove any such dependency altogether to close the gap between Theorems \ref{thm-main} and \ref{thm-robust}? We demonstrate that we cannot. Specifically, we can derive a universal upper bound on robustness that is dependent on $b$, such that as $b$ grows without bound, this bound approaches some constant independent of the number of pixels in the image.
\begin{theorem}
\label{thm-cont-discr}
Let $\mathcal{C}: \imgspace{n}{h}{b} \rightarrow \mathcal{Y}$ be any classifier.
Then for all real values $c > 0$ and $p \geq 2$, no interesting class is $2e^{-c^2/2}$-robust to $\elp{p}$-perturbations of size $\big( c + 2\frac{n\sqrt{h}}{2^{b}} \big)^{2/p}$. \end{theorem} \begin{proofsketch} We will focus on the 2-norm. Extension to higher $p$-norms is straightforward and is given as part of the full proof found in Appendix \ref{appdx-cont-discr-proof}. The main idea of the proof rests on the fact that if we extend the classifier to the continuous image space with something like a nearest neighbour approach, the measure of the images that are robust to perturbations of a constant size is small (the statement and proof may be found in Appendix \ref{appdx-continuous-results}). Therefore, if we randomly jump from an image in the discrete image space to an image in the continuous image space, with high probability we will be within a constant distance of an image of a different class. The size of this random jump can be controlled with a factor that shrinks with increasing bit depth. Summing up the budget required for this jump, the perturbation required on the continuous image space, and the jump back to the discrete image space yields the desired bound.\qed \end{proofsketch} We remark that this suggests that the bounds in Theorem \ref{thm-main} pertaining to $\elp{p}$-perturbations for $p \geq 2$ can be improved to reflect its dependency on the bit depth $b$. However, additional work would need to be done to show that the component that shrinks with $b$ scales with $n^{1/p}$ rather than $n^{2/p}$. \subsection{Summary of bounds and their relation to average image distances} We conclude this section by recapitulating the bounds we derived and compare them to the average distances between images for context. We summarize the bounds we derived in Table \ref{table-summary}. For parsimony, we have reparametrized the bounds in terms of the robustness $r$ in Table \ref{table-summary}, although the equations look more complex as a result. In terms of image size $n$, the bounds stated for the $0$-norm and $1$-norm are asymptotically optimal up to a constant factor. The bounds for the other $p$-norms are also asymptotically optimal up to a constant factor, although the constant is parametrized by the bit depth $b$. We showed that the presence of $b$ in our lower bound is not an artifact of our construction: robustness really does drop as $b$ increases (Theorem \ref{thm-cont-discr}). \begin{table*}[t] \caption{Bounds for attainable robustness. Rather than leaving the robustness and bound parametrized by a separate constant $c$, the bounds have been reparametrized in terms of the robustness $r$. 
The upper bound should be understood as ``no classifier induces an interesting class that is $r$-robust to perturbations of these sizes'' and the lower bound should be interpreted as ``there exists a classifier that induces an interesting class that is $r$-robust to perturbations of these sizes''.} \label{table-summary} \vskip 0.15in \begin{center} \begin{small} \begin{sc} \begin{tabular}{lll} \toprule Perturbation & Upper bound & Lower bound \\ \midrule \shortstack{$\elp{0}$-perturbation\\ $\elp{1}$-perturbation} & $2 + \sqrt{\dfrac{h}{2}ln(\dfrac{2}{r})} * n$ & $-2 + \big( \dfrac{1-r}{4} \big) \sqrt{h}*n$ \\ % \\ % \shortstack{$\elp{p}$-perturbation,\\ $p \geq 2$} & \shortstack{$\min\bigg( \big( 2 + \sqrt{\dfrac{h}{2}ln(\dfrac{2}{r})} * n \big) ^ {1/p} ,% \big( \sqrt{2 ln( \dfrac{2}{r} )} + \dfrac{2\sqrt{h}}{2^{b}}*n \big) ^ {2/p} \bigg)$ } & $\dfrac{\bigg( -2 + \big( \dfrac{1-r}{4} \big) \sqrt{h}*n \bigg)^{1/p} } {2^b - 1}$ \\ \bottomrule \end{tabular} \end{sc} \end{small} \end{center} \vskip -0.1in \end{table*} To conclude this section, we contextualize the bounds derived in this section by comparing them to typical distances between random elements of the image space. We can show that for a pair of images $I, I' \in \imgspace{n}{h}{b}$ that are sampled independently and uniformly, we have: \begin{align} \mathbb{E}[\|I-I'\|_p] \geq k_{h,b,p}n^{2/\max(1,p)} \end{align} Where $k_{h,b,p}$ is some constant parametrized by $h$, $b$, and $p$. See Appendix \ref{appdx-avg-distance} for additional details. Combining this with Corollary \ref{cor-main} shows that if $n$ is sufficiently large, for 99\% (or some arbitrarily high percentage) of images $I''$ within some interesting class, there exists some $c_{h,b,p}$ parametrized by $h$, $b$, and $p$ such that: \begin{align} \dfrac{\min_{X \in \imgspace{n}{h}{b}, \mathcal{C}(I'') \neq \mathcal{C}(X)} \|I''-X\|_p }{ \mathbb{E}[\|I-I'\|_p] } \leq c_{h,b,p}n^{-\frac{1}{\max(p,1)}} \end{align} The right hand side approaches $0$ as $n$ grows without bound, so compared to typical distances one finds in an image space, the distance of an image to an image outside of its class is vanishingly small in any $p$-norm it is measured in. \section{Human classification decisions are subject to universal bounds on robustness} Since the bounds in Table \ref{table-summary} apply universally to any image classifier, they must also apply to the human visual system. Although there are many nuances to consider when interpreting the human visual system as a classifier, we can abstract most of them out by considering the following system for classifying images: we imagine a room containing a person and a monitor that displays images of size $n$-by-$n$. The person then has access to a selection of labels to label images with. To classify an image, the image is first fed into a memoization subroutine that checks if the image has been seen before and returns the label it was previously labelled with if it has. If the image has not been seen before, it is then displayed on the monitor, and the person is allowed to select a single label (or no label at all) to apply to the image. We remark that this classifier can be \emph{concretely realized}, so we cannot dismiss it as simply an abstract construction. This system acts as a classifier which partitions the set of all images into disjoint classes, therefore the bounds in Table \ref{table-summary} must apply. 
To simplify the discussion, we make an assumption about the human based classifier: at least half the images in the image space are unlabelled. This condition is met if there is no label applicable to images that look like random static. Intuitively, we can interpret labelled images as ones that are ``meaningful'' and unlabelled images as ones that are ``meaningless'' if the label set is sufficiently large. If the unlabelled images occupy at least half the image space, then the labelled images form an interesting class (as defined by Definition \ref{def-interestingclass}). Therefore, the bounds in Table \ref{table-summary} apply, which means that a large fraction of labelled images can be turned into unlabelled images with a small perturbation. If we return to the intuition that labels formalize the notion of ``meaning'', this means that for most ``meaningful'' images, the meaning can be erased with only a tiny fluctuation. Conversely, the ``meaning'' present in most ``meaningful'' images arises from tiny fluctuations. The bounds in Table \ref{table-summary} then state that such ``meaning'' can fit in a perturbation of size $\mathcal{O}(n)$ when measured using the 1-norm or via the Hamming distance. This can be interpreted as a statement about the saliency of line drawings. Figure \ref{fig-crossWrite} gives a demonstration of how line drawings are small perturbations that contain ``meaning''. \input{figures/figure_crosswrite} Table \ref{table-summary} also states that when we raise the bit depth of the image space to be arbitrarily high, ``meaning'' can fit in a perturbation of size $\mathcal{O}(1)$ when measured using a $p$-norm with $p \geq 2$. Line drawings do not necessarily fulfill this criterion, so the interpretation of this fact is more difficult. The human visual system is known to be particularly sensitive to certain small cues \citep{liu2014seeing}, but a unified understanding remains elusive. Understanding the nature of the small perturbations that humans are sensitive to is not merely of academic curiosity. The results summarized in Table \ref{table-summary} show that no computer vision system can be robust to small perturbations. However, a computer vision system that is aligned to the human visual system \emph{ought} not be robust to small perturbations, since the human visual system is not robust either. Over the past decade we have learned that standard machine learning methodology does not automatically produce vision systems that are aligned to the human visual system with respect to small perturbations ~\citep{szegedy2013intriguing}, and methodologies that seek to produce such vision systems still contain misalignments ~\citep{tramer2020adaptive}. A deeper understanding of how small perturbations affect the human visual system may inform the development of such methodologies (for example we may wish to explicitly train computer vision systems on human sensitive small perturbations), which is becoming increasingly necessary as computer vision systems become increasingly deployed in safety and security critical applications, where the trustworthiness of the system is essential~\citep{pereira2020challenges, ma2018secure}. \section{Conclusion} We have derived universal non-robustness bounds that apply to any arbitrary image classifier. We have further demonstrated that up to a constant factor, these are the best bounds attainable. 
These bounds reveal that most images in any interesting class can have their class changed with a perturbation that is asymptotically infinitesimal when compared to the average distance between images. We then discuss how these universal properties of classifiers relate to the human visual system. We show that part of our results can be interpreted as the sensitivity of the human visual system to line drawings, which are tiny signals when measured using the 1-norm or 0-norm. However, line drawings can still be ``large'' when measured using the 2-norm, so a full understanding remains the subject of future work. Our results focus on image \emph{classifiers}, which make hard decisions when labelling images. However, vision models underlying the classifiers can make soft decisions, which are then further processed into hard decisions. The applicability of our results to such underlying vision models will be the subject of future investigation. \section{Proofs of Statements} \label{appdx-all-proofs} \subsection{Proof of Theorem \ref{thm-main}} \label{appdx-proof-main} \subsubsection{Properties of binomial coefficients} We will work with binomial coefficients extensively. To simplify some of our statements, we will extend the definition of a binomial coefficient to work with any $n > 0$ and arbitrary integer $k$: \begin{align} \binom{n}{k} = \begin{cases} \dfrac{n!}{k!(n-k)!} & \text{if $0 \leq k \leq n$}\\ 0 & \text{otherwise} \end{cases} \end{align} Binomial coefficients can be bounded in the following way: \begin{lemma} $\binom{n}{k} < \dfrac{2^n}{\sqrt{n}}$ when $n \geq 1$. \label{lemma-binomial-mode} \end{lemma} \begin{proof} We first note that $n!$ is bounded by the following for all $n \geq 1$~\citep{robbins1955remark}: \begin{equation} \sqrt{n}\dfrac{n^n}{e^n} < \dfrac{n!}{\sqrt{2\pi}} < \sqrt{n}\dfrac{n^n}{e^n}e^{1/(12n)} \end{equation} Applying the appropriate inequalities for the numerator and denominator yields the following when $n$ is even: \begin{align} \label{eqn-helper-17} \binom{n}{k} \leq \binom{n}{n/2} = \dfrac{n!}{((n/2)!)^2} < 2\dfrac{2^n}{\sqrt{n}}\dfrac{e^{1/(12n)}}{\sqrt{2\pi}} \end{align} When $n$ is odd, we have: \begin{align} \binom{n}{k} &\leq \binom{n}{\flor{n/2}} \\ &= \dfrac{1}{2}\binom{n+1}{(n+1)/2} \\ &< 2\dfrac{2^n}{\sqrt{n+1}}\dfrac{e^{1/(12(n+1))}}{\sqrt{2\pi}}\\ &< 2\dfrac{2^n}{\sqrt{n}}\dfrac{e^{1/(12n)}}{\sqrt{2\pi}} \end{align} Where the third comparison is an application of Equation \ref{eqn-helper-17}. If $n \geq 1$, we have $\frac{e^{1/(12n)}}{\sqrt{2\pi}} < 0.5$, which proves the claim. \qed \end{proof} It will also be useful to define the following cumulative sums (which are also the lower tails of binomial distributions): \begin{align} \bincdfp{n}{p}{k} &= \begin{cases} \sum_{i=0}^k \binom{n}{i}p^i(1-p)^{n-i} & \text{if $k \geq 0$}\\ 0 & \text{otherwise} \end{cases} \end{align} We can show that the ratio of these cumulative sums is monotonic increasing: \begin{lemma} \label{lemma-monotonic-ratio} Let $p \in (0,1)$. Then $\frac{ \bincdfp{n}{p}{x-k} }{ \bincdfp{n}{p}{x} }$ is monotonic increasing in $x$, where $0 \leq x \leq n$ and $k$ is any positive integer. \end{lemma} \begin{proof} First, we note that the ratio $\binom{n}{x-k}/\binom{n}{x}$ is monotonic increasing in $x$ when $x \geq 0$. This holds by definition if $x-k < 0$.
Otherwise, we have the following: \begin{align} \dfrac{ \binom{n}{x-k}/\binom{n}{x} }{ \binom{n}{x-k+1}/\binom{n}{x+1} } = \dfrac{ (n-x) }{ (n-x+k) } * \dfrac{ (x-k+1) }{ (x+1) } \leq 1 \end{align} We then claim the following holds for all $x$ where $0 \leq x \leq n-1$: \begin{align} \dfrac{ \bincdfp{n}{p}{x-k} }{ \bincdfp{n}{p}{x} } \leq \dfrac{ \bincdfp{n}{p}{x-k+1} }{ \bincdfp{n}{p}{x+1} } \leq \dfrac{ \binom{n}{x-k+1}(1-p)^{k} }{ \binom{n}{x+1}p^{k} } \end{align} The above holds with equality when $x-k+1 < 0$. If $x-k+1 = 0$, the above also holds: the leftmost ratio is 0. For the other two ratios, if we multiply the numerator and denominator of the rightmost ratio by $(1-p)^{n-k}$ we can see that the numerators are equal while the denominator of the rightmost ratio is smaller. Otherwise, by induction on $x$ we have: \begin{align} \dfrac{ \bincdfp{n}{p}{x-k} }{ \bincdfp{n}{p}{x} } &\leq \dfrac{ \binom{n}{x-k}(1-p)^k }{ \binom{n}{x}p^k }\\ &\leq \dfrac{ \binom{n}{x-k+1}(1-p)^k }{ \binom{n}{x+1}p^k }\\ &= \dfrac{ \binom{n}{x-k+1}p^{x-k+1}(1-p)^{n-x+k-1} }{ \binom{n}{x+1}p^{x+1}(1-p)^{n-x-1} } \end{align} Where the first inequality follows by induction, and the second inequality follows because $\binom{n}{x-k}/\binom{n}{x}$ is monotonic increasing in $x$. For any positive numbers $a$, $c$ and strictly positive numbers $b$, $d$, where $\frac{a}{b} \leq \frac{c}{d}$, we have $\frac{a}{b} \leq \frac{a+c}{b+d} \leq \frac{c}{d}$ because: \begin{align} \dfrac{d}{d\lambda} \bigg( \dfrac{a+\lambda c}{b +\lambda d} \bigg) = \dfrac{bc -ad}{ (b + \lambda d)^2 } \geq 0 \end{align} Therefore, we have: \begin{align}\nonumber &\dfrac{ \bincdfp{n}{p}{x-k} }{ \bincdfp{n}{p}{x} }\\ &\leq \dfrac{ \bincdfp{n}{p}{x-k} + \binom{n}{x-k+1}p^{x-k+1}(1-p)^{n-x+k-1} } { \bincdfp{n}{p}{x} + \binom{n}{x+1}p^{x+1}(1-p)^{n-x-1} }\\ &= \dfrac{ \bincdfp{n}{p}{x-k+1} }{ \bincdfp{n}{p}{x+1} }\\ &\leq \dfrac{ \binom{n}{x-k+1}(1-p)^k }{ \binom{n}{x+1}p^k } \end{align} As claimed. Carrying on the induction up to $x=n-1$ yields the statement.\qed \end{proof} \subsubsection{Bounding the interior of a set over a Hamming graph} We will prove our main results by an application of isoperimetry bounds over a Hamming graph. Let $Q$ be a set of $q$ symbols. Then we define the $n$ dimensional Hamming graph over $q$ letters, denoted $\hamgraph{n}{q}$, as the graph with a vertex set $Q^n$ and an edge set containing all edges between vertices that differ at precisely one coordinate. For example, $\hamgraph{n}{2}$ is isomorphic to the Boolean hypercube. We will use $V(\hamgraph{n}{q})$ to denote the vertex set of the Hamming graph. Let $S \subseteq V(\hamgraph{n}{q})$. We define the expansion of $S$, denoted $\expansion{S}$, as the set of vertices that are either in $S$ or have a neighbour in $S$. Since $\expansion{.}$ inputs and outputs sets of vertices, we can iterate it. We will use $\expk{k}{.}$ to denote $k$ applications of $\expansion{.}$. We now adapt a result from \citep{harper1999isoperimetric} (Theorem 3 in the paper). \begin{lemma}[Isoperimetric Theorem on Hamming graphs] \label{lemma-iso-hamgraph} Let $S \subsetneq V(\hamgraph{n}{q})$. Then: \begin{align}\nonumber \dfrac{|\expk{k}{S}|}{|V(\hamgraph{n}{q})|} \geq \min\{& \bincdfp{n}{p}{r+k} \\ \nonumber |& \bincdfp{n}{p}{r} = \dfrac{|S|}{|V(\hamgraph{n}{q})|},\\ & p \in (0,1),r\in[0,n-k) \} \end{align} \end{lemma} To work with this we first obtain bounds for the expression on the right hand side of Lemma \ref{lemma-iso-hamgraph}. \begin{lemma} \label{lemma-bound-any-binomial} Let $p$ be any value in $(0,1)$.
Let $n > r \geq k$ such that $\bincdfp{n}{p}{r} \leq \frac{1}{2}$. Then $\frac{\bincdfp{n}{p}{r-k}}{\bincdfp{n}{p}{r}} \leq 2e^{-2(k-1)^2/n}$. \end{lemma} \begin{proof} Let $X$ be a binomially distributed random variable with $n$ trials and probability of success $p$. First, suppose that $r$ is the median of $X$. We have $r \leq np+1$ because the median and mean differ by at most 1 ~\citep{kaas1980mean}. $\bincdfp{n}{p}{r-k}$ can be interpreted as $\prob{X \leq r-k}$. We can then apply Hoeffding's inequality~\citep{hoeffding1994probability}: \begin{align} \prob{X \leq r-k} &\leq \prob{X \leq np+1-k}\\ &\leq e^{-2(k-1)^2/n} \end{align} Since $r$ is the median of $X$, we also have $\bincdfp{n}{p}{r} \geq \frac{1}{2}$. Combining this with the above equation gives: \begin{align} \frac{\bincdfp{n}{p}{r-k}}{\bincdfp{n}{p}{r}} \leq 2e^{-2(k-1)^2/n} \end{align} Since $\frac{\bincdfp{n}{p}{x-k}}{\bincdfp{n}{p}{x}}$ is monotonically increasing in $x$ via Lemma \ref{lemma-monotonic-ratio}, this also implies that the above relation holds for all smaller $r$. This completes the proof.\qed \end{proof} We can then plug this into Lemma \ref{lemma-iso-hamgraph} to obtain a non-robustness result on Hamming graphs, which we will then apply to image spaces. \begin{theorem} \label{thm-hamgraph-bound} Let $S \subsetneq V(\hamgraph{n}{q})$ such that $|S| \leq |V(\hamgraph{n}{q})|/2$, and $c > 0$ be any number. Let $S' \subseteq S$ be the set of vertices for which no path with $c\sqrt{n}+2$ edges or fewer leads to a vertex not in $S$. Then $\frac{|S'|}{|S|} < 2e^{-2c^2}$. \end{theorem} \begin{proof} Suppose for contradiction that $|S'| \geq 2e^{-2c^2}|S|$. Since for any vertex in $S'$ no path with $c\sqrt{n}+2$ edges or fewer leads to a vertex outside of $S$, we have $\expk{c\sqrt{n}+2}{S'} \subseteq S$. Then: \begin{align}\nonumber |\expk{c\sqrt{n}+2}{S'}| \geq& |V(\hamgraph{n}{q})|\min\{ \bincdfp{n}{p}{r+c\sqrt{n}+2} \\ \nonumber &| \bincdfp{n}{p}{r} = \dfrac{|S'|}{|V(\hamgraph{n}{q})|},\\ & p \in (0,1),r\in[0,n-c\sqrt{n}-2) \}\\ \geq& \dfrac{1}{2}e^{2(c\sqrt{n} + 1)^2/n} |S'|\\ >& \dfrac{1}{2}e^{2c^2} |S'| \end{align} The first relation follows from Lemma \ref{lemma-iso-hamgraph} and the second follows from Lemma \ref{lemma-bound-any-binomial}. Lemma \ref{lemma-bound-any-binomial} applies since $\expk{c\sqrt{n}+2}{S'} \subseteq S$, so for the minimizing $p$ and $r$ we have $\bincdfp{n}{p}{r+c\sqrt{n}+2} \leq |S|/|V(\hamgraph{n}{q})| \leq \frac{1}{2}$. But then $|\expk{c\sqrt{n}+2}{S'}| > \dfrac{1}{2}e^{2c^2} |S'| \geq |S|$, which implies that $\expk{c\sqrt{n}+2}{S'} \nsubseteq S$. This is a contradiction, so we obtain our desired statement.\qed \end{proof} \subsubsection{Proving Theorem \ref{thm-main}} Let $\mathcal{C}: \imgspace{n}{h}{b} \rightarrow \mathcal{Y}$ be a classifier and let $C \subseteq \imgspace{n}{h}{b}$ be any interesting class induced by $\mathcal{C}$. \begin{lemma} \label{lemma-l0-discr-main} $C$ is not $2e^{-2c^2}$-robust to $\elp{0}$-perturbations of size $c\sqrt{h}*n+2$. \end{lemma} \begin{proof} Let $\mathcal{M}: V(\hamgraph{n^2h}{2^b}) \rightarrow \imgspace{n}{h}{b}$ be the following bijection: first let $Q$ be a set of $2^b$ equally spaced values between 0 and 1, where the smallest value is 0 and the largest is 1. Then the elements of $V(\hamgraph{n^2h}{2^b})$ can be viewed as $Q^{n^2h}$. We then map elements from $Q^{n^2h}$ to $\imgspace{n}{h}{b}$ such that the inverse operation is a flattening of the image tensor. Note that such a mapping preserves graph distance on $V(\hamgraph{n^2h}{2^b})$ as Hamming distance on $\imgspace{n}{h}{b}$.
Let $C' \subseteq C$ be the set of images that are robust to $\elp{0}$-perturbations of size $c\sqrt{h}*n+2$. Let $S = \mathcal{M}^{-1}(C)$ and $S' = \mathcal{M}^{-1}(C')$. $S'$ is then the set of vertices for which no path with $c\sqrt{h}*n+2$ edges or fewer leads to a vertex outside of $S$. $C$ is an interesting class and $\mathcal{M}(.)$ preserves cardinality due to it being a bijection. Therefore $|S| = |C| \leq |V(\hamgraph{n^2h}{2^b})|/2$, so by Theorem \ref{thm-hamgraph-bound} we have $|S'|/|S| < 2e^{-2c^2}$. Again, since $\mathcal{M}(.)$ preserves cardinality, this implies that $|C'|/|C| < 2e^{-2c^2}$, which means that $C$ is not $2e^{-2c^2}$-robust to $\elp{0}$-perturbations of size $c\sqrt{h}*n+2$.\qed \end{proof} We remark that if the domain of $\mathcal{M}(.)$ is changed to $\hamgraph{n^2}{2^{hb}}$, the above argument also shows that $C$ is not $2e^{-2c^2}$-robust to $cn+2$ pixel changes. It is straightforward to generalize this to $p$-norms with larger $p$. \begin{lemma} \label{lemma-thm1-higherp} $C$ is not $2e^{-2c^2}$-robust to $\elp{p}$-perturbations of size $(c\sqrt{h}*n+2)^{1/p}$. \end{lemma} \begin{proof} Let $S_1$ be the set of images that are $r$-robust to $\elp{0}$-perturbations of size $d$, and let $S_2$ be the set of images that are $r$-robust to $\elp{p}$-perturbations of size $d^{1/p}$. Suppose $I \notin S_1$. Then there exists some image $I'$ in a different class from $I$ such that $\|I-I'\|_{0} \leq d$. Therefore, for all $p > 0$, we have: \begin{align} d &\geq \|I-I'\|_{0}\\ &= \sum_{x,y,c} \cil{ |I_{x,y,c} - I'_{x,y,c}| }\\ &\geq \sum_{x,y,c} |I_{x,y,c} - I'_{x,y,c}|^{p}\\ &= (\|I-I'\|_{p})^{p} \end{align} Where the second and third relations follow from the fact that channel values are contained in $[0,1]$. Therefore, $I \notin S_2$ either since $\|I-I'\|_{p} \leq d^{1/p}$. Taking the contraposition yields $S_2 \subseteq S_1$. Setting $d = c\sqrt{h}*n+2$ and applying Lemma \ref{lemma-l0-discr-main} gives the desired result.\qed \end{proof} \subsection{Proof of Theorem \ref{thm-robust}} \label{appdx-proof-robust} \subsubsection{Anti-concentration inequalities} We first prove an anti-concentration lemma concerning the binomial distribution. \begin{lemma} \label{lemma-binomial-spread} Let $X$ be a random variable following the binomial distribution with $n$ trials and a probability of success of 0.5. Let $Y$ be a discrete random variable independent of $X$ whose distribution is symmetric about the origin. Then for any $t$ where $t < \mathbb{E}[X]$ and $t - \flor{t} = 1/2$, we have: \begin{align} \prob{X+Y \leq t} \geq \prob{X < t} \end{align} \end{lemma} \begin{proof} We have the following: \begin{align} \prob{X+Y \leq t} =& \prob{X+Y \leq t, X < t} \\ \nonumber &+ \prob{X+Y \leq t, X > t}\\ \prob{X < t} =& \prob{X+Y \leq t, X < t} \\ \nonumber &+ \prob{X+Y > t, X < t} \end{align} Therefore it suffices to show that $\prob{X+Y \leq t, X > t} \geq \prob{X+Y > t, X < t}$. We have for any $r \geq 0$: \begin{align} \label{eqn-helper-43} \prob{X+Y \leq t, X = t+r} &= \prob{Y \leq -r}\prob{X = t+r}\\ \label{eqn-helper-44} &\geq \prob{Y > r}\prob{X = t+r}\\ \label{eqn-helper-45} &\geq \prob{Y > r}\prob{X = t-r}\\ &= \prob{X+Y > t, X = t-r} \end{align} Where Equation \ref{eqn-helper-43} follows from the independence of $X$ and $Y$, Equation \ref{eqn-helper-44} follows from the symmetry of the distribution of $Y$, and Equation \ref{eqn-helper-45} follows from our assumption that $t < \mathbb{E}[X]$ and $t - \flor{t} = 1/2$, together with the fact that the binomial distribution with $p=0.5$ is symmetric about its mean and unimodal, so that $\prob{X = t+r} \geq \prob{X = t-r}$.
Summing over all positive $r$ for which $\prob{X = t+r} > 0$ yields the desired result.\qed \end{proof} \begin{lemma} \label{lemma-anti-concentration} Let $X_1, X_2, ..., X_n$ be independently and identically distributed random variables such that each $X_i$ is uniformly distributed on $2k$ evenly spaced real numbers $a=r_1 < r_2 < ... < r_{2k}=b$. Then for $t > 0$, we have: \begin{align} \prob{\sum_{i=1}^n X_i \leq (\sum_{i=1}^n\mathbb{E}[X_i]) -t+(b-a)} > \dfrac{1}{2} - \dfrac{2t}{\sqrt{n}(b-a)} \end{align} \end{lemma} \begin{proof} Let $Y_1, Y_2, ... ,Y_n$ be independently and identically distributed Bernoulli random variables with $p=0.5$. Let $Z_1, Z_2,...,Z_n$ be a set of independently and identically distributed random variables uniformly distributed over the integers between $1$ and $k$ inclusive. If the $Y$s and $Z$s are independent of each other as well, we have: \begin{align} \sum_{i=1}^n (X_i - \mathbb{E}[X_i]) =& \dfrac{b-a}{2k-1} \sum_{i=1}^n (kY_i+Z_i - \mathbb{E}[kY_i+Z_i])\\ \nonumber =& k\dfrac{b-a}{2k-1} \big( (\sum_{i=1}^n Y_i) + (\sum_{i=1}^n \dfrac{Z_i - \mathbb{E}[Z_i]}{k})\\ &- (\sum_{i=1}^n \mathbb{E}[Y_i]) \big) \end{align} Let $\sum_{i=1}^n Y_i = B$, $\sum_{i=1}^n \dfrac{Z_i - \mathbb{E}[Z_i]}{k} = D$, and $k\frac{b-a}{2k-1} = c$. Then for any $t > 0$, we have: \begin{align} \prob{ \sum_{i=1}^n (X_i - \mathbb{E}[X_i]) \leq -t } &= \prob{ B+D \leq -\dfrac{t}{c} + \mathbb{E}[B] }\\ &\geq \prob{ B+D \leq -\dfrac{t}{c} + \mathbb{E}[B] - u }\\ \label{eqn-helper-52} &\geq \prob{ B < -\dfrac{t}{c} + \mathbb{E}[B] - 1 }\\ &\geq \prob{ B-\mathbb{E}[B] < -\dfrac{2t}{b-a} - 1 }\\ &\geq \dfrac{1}{2} - \prob{ B-\mathbb{E}[B] \in \big[-\dfrac{2t}{b-a} - 1, 0\big] }\\ \label{eqn-helper-56} &\geq \dfrac{1}{2} - \binom{n}{\flor{n/2}}2^{-n}\big(\dfrac{2t}{b-a} + 2\big) \end{align} Where $0 \leq u \leq 1$ is chosen such that $-\frac{t}{c} + \mathbb{E}[B] - u$ is the average of two adjacent integers. Equation \ref{eqn-helper-52} is then an application of Lemma \ref{lemma-binomial-spread} since $B$ is binomially distributed with $p=0.5$ and $D$ has a distribution that is symmetric about the origin, and Equation \ref{eqn-helper-56} follows from the fact that no more than $x+1$ values are supported on an interval of length $x$, and no supported value has probability greater than $\binom{n}{\flor{n/2}}2^{-n}$. Observing that $\binom{n}{\flor{n/2}}2^{-n} < \frac{1}{\sqrt{n}}$ due to Lemma \ref{lemma-binomial-mode} and substituting $t$ with $t-(b-a)$ yields the desired result.\qed \end{proof} \subsubsection{Proving Theorem \ref{thm-robust}} Let $A: \imgspace{n}{h}{b} \rightarrow \{0,1\}$ be described by Algorithm \ref{alg-allcord}. In other words, it is the classifier that inputs an image, sums all of its channels, and outputs $0$ if the sum is less than $n^2h/2$ and $1$ otherwise. Let $Z$ be the class of images that $A$ outputs $0$ on. Note that $Z$ is an interesting class since it cannot be larger than its complement, so it suffices to prove that $Z$ is robust. \begin{lemma} \label{lemma-robust-l1} $Z$ is $(1-4c)$-robust to $\elp{1}$-perturbations of size $c\sqrt{h}*n - 2$. \end{lemma} \begin{proof} Let $Z' \subseteq Z$ be the set of images in $Z$ that are robust to $\elp{1}$-perturbations of size $c\sqrt{h}*n - 2$. Let $I$ be a random image sampled uniformly. Then $|Z'| = \prob{I \in Z'}\,2^{n^2hb}$.
We then have the following: \begin{align} \prob{I \in Z'} &\geq \prob{ \sum_{x,y,a}I_{x,y,a} + c\sqrt{h}*n - 2 < n^2h/2 }\\ &\geq \prob{ \sum_{x,y,a}I_{x,y,a} \leq n^2h/2 - c\sqrt{h}*n + 1 }\\ &> \dfrac{1}{2} - 2c \end{align} Where the first inequality holds because any image whose channel sum plus the full budget $c\sqrt{h}*n - 2$ stays below $n^2h/2$ remains in $Z$ under every $\elp{1}$-perturbation of that size, and is therefore in $Z'$, and the last inequality follows from Lemma \ref{lemma-anti-concentration} since each channel is sampled from a uniform distribution over a set of $2^b$ evenly spaced values between $0$ and $1$. Noting that $|Z| \leq 2^{(n^2hb)-1}$ since it cannot be larger than its complement yields $\frac{|Z'|}{|Z|} \geq 1-4c$. Therefore, $Z$ is $(1-4c)$-robust to $\elp{1}$-perturbations of size $c\sqrt{h}*n - 2$.\qed \end{proof} \begin{lemma} \label{lemma-robust-l2} $Z$ is $(1-4c)$-robust to $\elp{0}$-perturbations of size $c\sqrt{h}*n - 2$. \end{lemma} \begin{proof} It suffices to show that an image that is robust to $\elp{1}$-perturbations of size $d$ is also robust to $\elp{0}$-perturbations of size $d$, since the statement then follows directly from Lemma \ref{lemma-robust-l1}. Let $I$ be an image that is not robust to $\elp{0}$-perturbations of size $d$, so there exists some $I'$ in a different class such that $\|I-I'\|_0 \leq d$. Then: \begin{align} d &\geq \|I-I'\|_0\\ &= \sum_{(x,y,a)}\cil{|I_{x,y,a} - I'_{x,y,a}|}\\ &\geq \sum_{(x,y,a)}|I_{x,y,a}-I'_{x,y,a}|\\ &= \|I-I'\|_1 \end{align} Where the second and third relations hold since channel values lie in $[0,1]$. This implies that $I$ is not robust to $\elp{1}$-perturbations of size $d$. Therefore any image that is not robust to $\elp{0}$-perturbations of size $d$ is also not robust to $\elp{1}$-perturbations of size $d$. The contraposition yields the desired statement.\qed \end{proof} \begin{lemma} $Z$ is $(1-4c)$-robust to $\elp{p}$-perturbations of size $\frac{(c\sqrt{h}*n - 2)^{1/p}}{2^b-1}$ for $p \geq 2$. \end{lemma} \begin{proof} It suffices to show that any image that is robust to $\elp{0}$-perturbations of size $d$ is also robust to $\elp{p}$-perturbations of size $\frac{d^{1/p}}{2^b-1}$ for any $p \geq 2$, since the statement then follows directly from Lemma \ref{lemma-robust-l2}. Let $I$ be an image that is robust to $\elp{0}$-perturbations of size $d$. Let $I'$ be any image in a different class, so $\|I-I'\|_0 > d$. Then for any $p \geq 1$: \begin{align} \|I-I'\|_p^p &= \sum_{(x,y,a)}|I_{x,y,a}-I'_{x,y,a}|^p\\ &\geq \sum_{(x,y,a)} \dfrac{\cil{|I_{x,y,a} - I'_{x,y,a}|}}{(2^b-1)^p}\\ &= \dfrac{\|I-I'\|_0}{(2^b-1)^p}\\ &> \dfrac{d}{(2^b-1)^p} \end{align} Where the second relation follows from the fact that if two channel values differ, they must differ by at least $\frac{1}{2^b-1}$. Therefore, $\|I-I'\|_p > \frac{d^{1/p}}{2^b-1}$ for any $I'$ whose class is different from $I$, so $I$ is robust to $\elp{p}$-perturbations of size $\frac{d^{1/p}}{2^b-1}$ for $p \geq 2$.\qed \end{proof} \subsection{Proof of Theorem \ref{thm-cont-discr}} \label{appdx-cont-discr-proof} Let $\mathcal{C}:\imgspace{n}{h}{b}\rightarrow \mathcal{Y}$ be any classifier, and let $C$ be any interesting class induced by $\mathcal{C}$. Our objective is to show that $C$ is not robust to various perturbations. Let $T = \{[x*2^{-b}, (x+1)*2^{-b}) | x \in \mathbb{Z} \cap [0, 2^b-2] \} \cup \{[1-2^{-b}, 1]\}$ be a set of $2^b$ equal length intervals whose union is the interval $[0,1]$. Let $\discreizedh{n^2h}{2^b} = T^{n^2h}$ be their Cartesian power. Then the elements of $\discreizedh{n^2h}{2^b}$ are disjoint, and their union is precisely the hypercube $[0,1]^{n^2h}$.
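As a small illustration of this discretization (our sketch, not part of the proof; the function name is arbitrary), the index of the interval of $T$ that contains a given channel value $v \in [0,1]$ can be computed as follows:
\begin{verbatim}
def interval_index(v, b):
    """Index in T of the interval containing the channel value v in [0, 1].

    The last interval [1 - 2**-b, 1] is closed on the right, so v = 1
    maps to index 2**b - 1 rather than overflowing.
    """
    return min(int(v * (2 ** b)), 2 ** b - 1)
\end{verbatim}
Applying this index computation coordinate-wise to a flattened image identifies the element of $\discreizedh{n^2h}{2^b}$ that the image falls in, which is the association described next.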
We can associate each element of $\imgspace{n}{h}{b}$ with an element of $\discreizedh{n^2h}{2^b}$ by first mapping $\imgspace{n}{h}{b}$ to $[0,1]^{n^2h}$, which can be done by flattening the image tensor (which we denote by $\flat(I)$ for an image $I \in \imgspace{n}{h}{b}$). We then map that point to the element of $\discreizedh{n^2h}{2^b}$ the point falls within. The overall mapping is bijective, and we will denote it by $F$. Let $\mathcal{A}:[0,1]^{n^2h} \times \mathbb{R} \rightarrow [0,1]^{n^2h} \cup \{\bot \}$ be a partial function that maps a point $p_1$ and a real value $c$ to a point $p_2$ such that the following hold: \begin{enumerate} \item $\|p_1-p_2\|_2 \leq c$. \item Let $I_1, I_2 \in \imgspace{n}{h}{b}$ such that $p_1 \in F(I_1)$ and $p_2 \in F(I_2)$. Then we require that $\mathcal{C}(I_1) \neq \mathcal{C}(I_2)$. \end{enumerate} $\mathcal{A}(.)$ returns $\bot$ if and only if no such $p_2$ exists. We can then define a procedure $\textsc{FindPerturbation}$ for finding a perturbation given an image $I$, which is outlined in Algorithm \ref{alg-perturb}. \begin{algorithm} \SetKwInOut{Input}{Input} \Input{An image $I \in \imgspace{n}{h}{b}$ and a real value $c$.} \KwResult{An image $I' \in \imgspace{n}{h}{b}$ such that $\mathcal{C}(I) \neq \mathcal{C}(I')$, or $\bot$.} Sample $p_1$ from $F(I)$ uniformly at random\; $p_2 \gets \mathcal{A}(p_1, c)$\; \uIf{ $p_2 = \bot$ }{ \Return $\bot$\; } \uElse{ Find $I_2$ such that $p_2 \in F(I_2)$\; \Return $I_2$\; } \caption{Find Perturbation} \label{alg-perturb} \end{algorithm} Our proof strategy is to show that the perturbations found by $\textsc{FindPerturbation}$ are guaranteed to be small, and that the probability of failure is low. This must then imply that most images are not robust. \begin{lemma} \label{lemma-discc-len} If $I' = \textsc{FindPerturbation}(I, c)$ is not $\bot$, then $\|I-I'\|_2 \leq c + 2\frac{n\sqrt{h}}{2^{b}}$. \end{lemma} \begin{proof} Each element of $\discreizedh{n^2h}{2^b}$ has a diameter of $\frac{\sqrt{n^2h}}{2^b}$, thus $p_1$ differs from $\flat(I)$ by at most that distance. Similarly, $p_2$ differs from $\flat(I_2) = \flat(I')$ by at most that distance. We also must have $\|p_1-p_2\|_2 \leq c$ since $I' \neq \bot$. Putting it all together with the triangle inequality, we get $\|\flat(I) - \flat(I')\|_2 \leq c + 2\frac{n\sqrt{h}}{2^b}$. Since $\flat(.)$ preserves distances, we get the desired statement.\qed \end{proof} \begin{lemma} \label{lemma-discc-prob} If $I$ is drawn uniformly from $C$, then $\prob{\textsc{FindPerturbation}(I, c) = \bot} < 2e^{-c^2/2}$. \end{lemma} \begin{proof} Let $F(C)$ denote the image of $C$ under $F$. Let $\bigcup F(C)$ denote the union of all elements in $F(C)$. If the input $I$ is drawn uniformly from $C$, then $p_1$ is distributed uniformly over $\bigcup F(C)$. The procedure fails if and only if $\mathcal{A}(p_1, c) = \bot$, which happens if and only if all points within a radius of $c$ from $p_1$ belong to $\bigcup F(C)$. Let $C'$ denote the set of all such points $p_1$. \begin{align} \prob{\mathcal{A}(p_1,c) = \bot} &= \dfrac{ \mu (C') }{\mu( \bigcup F(C) )}\\ &< 2e^{-c^2/2} \end{align} Where $\mu(.)$ denotes the Lebesgue measure. The last inequality comes from Theorem \ref{thm-continuous-weak}, which is given in the next section. The statement applies for any set $S$ formed from a union of elements of $\discreizedh{n^2h}{2^b}$ whose measure is no larger than $1/2$.
$\bigcup F(C)$ satisfies these criteria since $C$ is an interesting class, so we attain the desired statement.\qed \end{proof} \begin{lemma} \label{lemma-discrization-result} $C$ is not $2e^{-c^2/2}$-robust to $\elp{2}$-perturbations of size $c + 2\frac{n\sqrt{h}}{2^{b}}$. \end{lemma} \begin{proof} Let $I$ be drawn uniformly from $C$. Let $C_r$ be the set of images in $C$ that are robust to $\elp{2}$-perturbations of size $c + 2\frac{n\sqrt{h}}{2^{b}}$. Let $I'=\textsc{FindPerturbation}(I,c)$. Then $I'$ is randomly distributed over $\imgspace{n}{h}{b}\cup\{ \bot \}$. By Lemma \ref{lemma-discc-len}, if $I' \in \imgspace{n}{h}{b}$, then $\|I-I'\|_2 \leq c + 2\frac{n\sqrt{h}}{2^{b}}$, which implies that $I \notin C_r$. By contraposition, $I \in C_r$ implies that $\textsc{FindPerturbation}(I,c) = \bot$. Therefore: \begin{align} \prob{I' = \bot} &= \prob{ I \in C_r } + \prob{ I \notin C_r, I' = \bot }\\ &\geq \prob{I \in C_r}\\ &= \frac{|C_r|}{|C|} \end{align} By Lemma \ref{lemma-discc-prob}, $\prob{I' = \bot} < 2e^{-c^2/2}$. Thus, $\frac{|C_r|}{|C|} < 2e^{-c^2/2}$, which yields the desired statement.\qed \end{proof} \begin{lemma} \label{lemma-thm3-higherp} $C$ is not $2e^{-c^2/2}$-robust to $\elp{p}$-perturbations of size $\big( c + 2\frac{n\sqrt{h}}{2^{b}} \big)^{2/p}$ for $p \geq 2$. \end{lemma} \begin{proof} We use the identical argument from Lemma \ref{lemma-thm1-higherp}. Let $S_1$ be the set of images that are $r$-robust to $\elp{2}$-perturbations of size $d$, and let $S_2$ be the set of images that are $r$-robust to $\elp{p}$-perturbations of size $d^{2/p}$, where $p \geq 2$. Suppose $I \notin S_1$. Then there exists some image $I'$ in a different class from $I$ such that $\|I-I'\|_{2} \leq d$. Therefore, for $p \geq 2$, we have: \begin{align} d^2 &\geq \|I-I'\|_{2}^2\\ &= \sum_{x,y,c} |I_{x,y,c} - I'_{x,y,c}|^2\\ &\geq \sum_{x,y,c} |I_{x,y,c} - I'_{x,y,c}|^{p}\\ &= (\|I-I'\|_{p})^{p} \end{align} Where the third relation follows from the fact that channel values are contained in $[0,1]$. Therefore, $I \notin S_2$ either since $\|I-I'\|_{p} \leq d^{2/p}$. Taking the contraposition yields $S_2 \subseteq S_1$. Setting $d = c + 2\frac{n\sqrt{h}}{2^{b}}$ and applying Lemma \ref{lemma-discrization-result} gives the desired result.\qed \end{proof} \subsection{Proof of Theorem \ref{thm-continuous-weak}} \label{appdx-continuous-results} Our objective in this section is to complete the proof of Theorem \ref{thm-cont-discr} by proving Theorem \ref{thm-continuous-weak}, stated below. We will use $\mu(.)$ to denote Lebesgue measure throughout this section. \begin{definition} We say a set $S \subseteq [0,1]^n$ is a regular set if there is some $q$ and $T \subseteq \discreizedh{n}{q}$ such that $S = \bigcup_{t \in T}t$. \end{definition} \begin{theorem} \label{thm-continuous-weak} Let $S \subseteq [0,1]^n$ be a regular set such that $\mu(S) \leq 1/2$. Let $S_r \subseteq S$ contain all the points $x$ in $S$ such that for all $y \in [0,1]^n$, $\|x-y\|_2 \leq r \implies y \in S$. Then $\frac{\mu(S_r)}{\mu(S)} < 2e^{-r^2/2}$. \end{theorem} \subsubsection{Properties of the standard normal distribution} First, we define the cumulative distribution function for the standard normal distribution and its derivative. \begin{align} \Phi(x) &= \int_{-\infty}^{x} \dfrac{1}{\sqrt{2\pi}}e^{-t^2/2} dt\\ \Phi'(x) &= \dfrac{1}{\sqrt{2\pi}}e^{-x^2/2} \end{align} Similarly to the discrete case, the ratio of the cumulative distribution functions is monotonic increasing.
\begin{lemma} \label{lemma-cont-monotonic} $\frac{\Phi(x-k)}{\Phi(x)}$ is monotonic increasing in $x$ for all $k \geq 0$. \end{lemma} \begin{proof} Let $f(x) = \frac{e^{-x^2/2}}{\int_{-\infty}^{x} e^{-t^2/2}dt}$. Then: \begin{align} \dfrac{d}{dx}f(x) = \dfrac{ -e^{-x^2/2}x\int_{-\infty}^{x} e^{-t^2/2}dt -e^{-x^2/2}e^{-x^2/2} } {\big( \int_{-\infty}^{x} e^{-t^2/2} dt \big)^2} \end{align} When $x \geq 0$, this derivative is negative since the first term in the numerator is non-positive and the second term is strictly negative. If $x < 0$, we have the following: \begin{align} -x\int_{-\infty}^{x} e^{-t^2/2}dt &< -x\int_{-\infty}^{x} \big( e^{-t^2/2} + \dfrac{1}{t^2}e^{-t^2/2} \big) dt\\ &= -x\big( -\dfrac{1}{t}e^{-t^2/2} \bigg\rvert_{-\infty}^{x} \big)\\ &= e^{-x^2/2} \end{align} So the numerator is strictly smaller than $(e^{-x^2/2})^2-(e^{-x^2/2})^2 = 0$. Therefore, the derivative is everywhere negative, so $f(x)$ is strictly decreasing. Therefore, we have the following for any non-negative $k$: \begin{align} \dfrac{d}{dx}\ln\bigg(\dfrac{\Phi(x-k)}{\Phi(x)}\bigg) = f(x-k) - f(x) \geq 0 \end{align} Since $\ln(.)$ is a monotonic increasing function, $\frac{\Phi(x-k)}{\Phi(x)}$ must also be monotonic increasing.\qed \end{proof} \subsubsection{Proving Theorem \ref{thm-continuous-weak}} Similarly to the discrete case, our main result relies on an isoperimetry statement, this time on the unit hypercube~\citep{barthe2000some}. \begin{lemma}[Isoperimetric Theorem on the Unit Hypercube] \label{lemma-cont-isoperim} For any $n$, let $A \subset [0,1]^n$ be a Borel set. Let $A_{\epsilon} = \{x \in [0,1]^n \big| \exists x' \in A: \|x-x'\| \leq \epsilon \}$. Then we have the following: \begin{equation} \liminf_{\epsilon \rightarrow 0^+} \dfrac{\mu(A_{\epsilon}) - \mu(A)}{\epsilon} \geq \sqrt{2\pi} \Phi'(\Phi^{-1}(\mu(A))) \end{equation} \end{lemma} Let $C \subseteq [0,1]^n$ be a regular set such that $0 < \mu(C) \leq 1/2$. Let $C_r \subseteq C$ denote the points $p_1$ in $C$ such that for any point $p_2 \in [0,1]^n$, $\|p_1-p_2\|_2 \leq r \implies p_2 \in C$. \begin{lemma} $\mu(C_r) \leq \Phi( \Phi^{-1}(\mu(C))-r )$ \label{lemma-cont-tailbound} \end{lemma} \begin{proof} Let $z=\Phi^{-1}(\mu(C))$ and let $f(x) = \Phi( x+z )$. Let $v(.)$ be a Lebesgue integrable function such that the following holds: \begin{align} V(r) = \int_{(-\infty,r)} v(t)dt &= \begin{cases} \mu(C_{-r}) & \text{if $r \leq 0$}\\ \mu(C_0) & \text{otherwise} \end{cases} \end{align} This exists since $C$ is a regular set. Since $V(x)$ results from integration, it is also a continuous function. It then suffices to show that $V(x) \leq f(x)$ for all $x$, since for $x = -r \leq 0$ the quantity $V(x) = \mu(C_r)$ is the left hand side of the lemma statement and $f(x) = \Phi(z-r)$ is the right hand side. Suppose this is not the case. We know that $V(x) \leq f(x)$ for all $x \geq 0$, since there $V(x) = \mu(C)$ and $f(x) = \Phi(x+z) \geq \Phi(z) = \mu(C)$, so if the claim is violated it must happen when $x < 0$. Since $V(x)$ and $f(x)$ are both continuous, by the intermediate value theorem there must exist some interval $[a,b)$ where $V(x) > f(x)$ if $x \in [a,b)$, $V(b)=f(b)$, and $a < b \leq 0$.
This gives us the following: \begin{align} V(b)-V(a) &= \int_{[a,b)}v(t)dt\\ \label{eqn-helper-20} &= \int_{[a,b)\setminus Z}\lim_{\epsilon \rightarrow 0^+} \dfrac{V(t+\epsilon)-V(t)}{\epsilon}dt\\ &= \int_{[a,b)\setminus Z}\liminf_{\epsilon \rightarrow 0^+} \dfrac{\mu(C_{-t-\epsilon})-\mu(C_{-t})}{\epsilon}dt\\ \label{eqn-helper-22} &\geq \int_{[a,b)} \sqrt{2\pi} \Phi'(\Phi^{-1}(\mu(C_{-t}))) dt\\ \label{eqn-helper-23} &\geq \int_{[a,b)} \sqrt{2\pi} \Phi'(\Phi^{-1}(f(t))) dt\\ &\geq f(b)-f(a) \end{align} Where $Z$ is the set of values where the limit in Equation \ref{eqn-helper-20} is not equal to $v(t)$, which by the Lebesgue differentiation theorem is a set of measure 0. Equation \ref{eqn-helper-22} is an application of Lemma \ref{lemma-cont-isoperim}, which is applicable since $C_{-t}$ is a Borel set due to $C$ being a regular set. Equation \ref{eqn-helper-23} follows from the fact that $f(x) \leq V(x)$ for all $x \in [a,b]$ and the fact that $\Phi'(\Phi^{-1}(.))$ is monotonically increasing if the input is no greater than $1/2$. We also have $V(a) > f(a)$ and $V(b) = f(b)$, so it must be the case that $V(b)-V(a) < f(b)-f(a)$. This contradicts the above, so it must be the case that $V(x) \leq f(x)$ for all $x$.\qed \end{proof} \begin{lemma} $\mu(C_c) < 2e^{-c^2/2}\mu(C)$ \label{lemma-cont-main} \end{lemma} \begin{proof} Let $z = \Phi^{-1}(\mu(C))$. Then for any $c \geq 0$, \begin{equation} \dfrac{\mu(C_c)}{\mu(C)} \leq \dfrac{\Phi(z-c)}{\Phi(z)} \leq \dfrac{\Phi(-c)}{\Phi(0)} = 2\Phi(-c) < 2e^{-c^2/2} \end{equation} Where the first inequality follows from Lemma \ref{lemma-cont-tailbound}, the second inequality follows from Lemma \ref{lemma-cont-monotonic} together with the fact that $\mu(C) \leq 1/2$ implies $z \leq 0$, and the third inequality follows from the Gaussian tail bound $\Phi(-c) < e^{-c^2/2}$ for all $c \geq 0$.\qed \end{proof} \subsection{Average distance between images} \label{appdx-avg-distance} We wish to show that for a pair of images $I, I' \in \imgspace{n}{h}{b}$ that are sampled independently and uniformly, there exists a $k_{h,b,p}$ such that: \begin{align} \mathbb{E}[\|I-I'\|_p] \geq k_{h,b,p}n^{2/\max(1,p)} \end{align} First, we note that we have: \begin{align} \mathbb{E}[\|I-I'\|_p^{\max(1,p)}] &= n^2h * \mathbb{E}[ |X-Y|^{\max(1,p)} ] \end{align} Where $X$ and $Y$ are independent random variables that are both drawn uniformly from a set of $2^b$ equally spaced values, where the largest is 1 and the smallest is 0. For simplicity, we denote $\mathbb{E}[ |X-Y|^{\max(1,p)} ]$ with $k_{b,p}$. $\|I-I'\|_p^{\max(1,p)}$ is non-negative and cannot be larger than $n^2h$. Therefore, the probability that $\|I-I'\|_p^{\max(1,p)} \geq n^2hk_{b,p}/2$ is at least $\frac{k_{b,p}}{2-k_{b,p}}$: writing $q$ for this probability and $W = \|I-I'\|_p^{\max(1,p)}$, this follows from $n^2hk_{b,p} = \mathbb{E}[W] \leq \frac{n^2hk_{b,p}}{2}(1-q) + n^2h\,q$. Via a monotonicity argument we can deduce that the probability that $\|I-I'\|_p \geq (hk_{b,p}/2)^{1/\max(p,1)}n^{2/\max(p,1)}$ is at least $\frac{k_{b,p}}{2-k_{b,p}}$ as well. We can then apply Markov's inequality to get the following: \begin{align} \mathbb{E}[\|I-I'\|_p] \geq \frac{k_{b,p}}{2-k_{b,p}}(hk_{b,p}/2)^{1/\max(p,1)}n^{2/\max(p,1)} \end{align} By setting $k_{h,b,p}$ to be $\frac{k_{b,p}}{2-k_{b,p}}(hk_{b,p}/2)^{1/\max(p,1)}$ we attain our desired result.
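As an informal numerical sanity check of this scaling (ours; the parameter values, sample sizes and helper names below are arbitrary assumptions rather than anything used in the paper), one can estimate $\mathbb{E}[\|I-I'\|_p]$ for small image spaces by direct sampling:
\begin{verbatim}
# Rough Monte Carlo estimate of E[||I - I'||_p] for uniformly sampled
# image pairs, to be compared against the n^{2/max(1,p)} scaling above.
import numpy as np

def mean_distance(n, h=3, b=8, p=1.0, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    levels = np.linspace(0.0, 1.0, 2 ** b)   # 2^b equally spaced channel values
    shape = (trials, n, n, h)
    I1 = rng.choice(levels, size=shape)
    I2 = rng.choice(levels, size=shape)
    diff = np.abs(I1 - I2).reshape(trials, -1)
    return np.mean(np.sum(diff ** p, axis=1) ** (1.0 / p))

for n in (8, 16, 32):
    print(n, mean_distance(n) / n ** 2)       # roughly constant for p = 1
\end{verbatim}
For $p = 1$ the printed ratio stabilises near $h\,\mathbb{E}[|X-Y|]$, consistent with the expression for $\mathbb{E}[\|I-I'\|_p^{\max(1,p)}]$ above.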
\section{Introduction} Emotions play an important role in the decision-making process of humans and thus, in recent years, extracting emotion from text has seen increased interest due to its possible application in a large number of domains such as business, politics, psychology and social media. However, identifying and extracting emotion is a very complicated task due to the innate subtlety and complexity of emotional expressions and language. It is also a multiclass classification problem that combines both core machine learning techniques in addition to natural language. In many cases, there is also a need to identify the cause of the observed emotions as well, and this has given rise to research on emotion-cause extraction techniques. In their study, \citet{9478079} reviewed the most recent research in this field, detailing the corpus, methodology and evaluation. An interesting application of emotion-cause extraction is its importance in customer reviews \citet{gupta-etal-2010-emotion}. From household items, books or movies to hotels and restaurants, reviews are a great way for a consumer to familiarize themselves with the merits and drawbacks of said commodity. However, in many instances, the volume of reviews is so high that it would be impractical to read them all. Emotion-cause extraction would help in understanding why the emotion expressed in the review came to be, and can help pinpoint key issues. In this project, we use Emotion-Cause Extraction along with Clustering techniques to provide an overview of a product based on the reviews and star-ratings. Firstly, we find the Word2Vec word embeddings for the words in the reviews, and enhance them with emotional context. (Mao et al.) Next, we use clause extraction, a RNN emotion extraction model and a RNN cause extraction model to get cause clause for each review. Then, we vectorize the cause clauses and prepare the data for Agglomerative clustering, clustering reviews for each (product, emotion) pair based on its vector representation. As the final step, for each cluster, we find a head clause and provide a list of the most important or prevalent causes for each (product, emotion) pair. The goal of this project was to provide a nlp-based method to analyze customer reviews and extract key issues, thereby enabling manufacturers, retailers and service providers to foster a better understanding of the performance of their product or improve their relationship with their customer. It will also help customers to get a better idea of the product without going through the large number of reviews. We achieve this by combining ECPE techniques with other machine learning techniques to build a pipeline for processing reviews in a way that makes them easy to analyze and understand. \section {Related Work} \citet{lee-etal-2010-text} first presented the task of emotion cause extraction (ECE) and defined this task as extracting the word-level causes that lead to the given emotions in text. They constructed a small scale Chinese emotion cause corpus in which the spans of both emotion and cause were annotated. Based on the above task, there were some other studies that conducted ECE research on their own corpus using rule based methods or machine learning methods. \citet{Chen10emotioncause} suggested that a clause may be the most appropriate unit to detect causes based on the analysis of the corpus in \citet{lee-etal-2010-text} and transformed the task from word-level to clause-level. 
They proposed a multi-label approach that detects multi-clause causes and captures the long-distance information. There was a lot of work done based on this task setting. \citet{russo-etal-2011-emocause} introduced a method based on the linguistic patterns and common sense knowledge for the identification of Italian sentences which contain a cause phrase. \citet{GuiYXLLZ14} used 25 manually compiled rules as features, and chose machine learning models, such as SVM and CRFs, to detect causes. \citet{gui-etal-2016-event} and \citet{8195347} released a Chinese emotion-cause dataset using SINA city news. This corpus has received much attention and has become a benchmark dataset for ECE research. Based on this corpus, several traditional machine learning methods and deep learning methods were proposed. All of the above work attempts to extract word-level or clause-level causes given the emotion annotations. \citet{xia-ding-2019-emotion} proposed a two-step process to extract emotion-cause pairs from documents without the need for annotation. This process first performs individual emotion extraction and cause extraction via multi-task learning, and then conducts emotion-cause pairing and filtering. The experimental results for this work on a benchmark emotion cause corpus \citep{gui-etal-2016-event} were quite high. More recently, end-to-end networks such as that proposed by \citet{singh2021endtoend} have shown to provide further improvements over multi-stage approaches by leveraging the mutual dependence between the extracted emotion clauses and cause clauses. \cite{li-etal-2018-co} also proposed a context-aware co-attention model for emotion cause pair extraction. \citet{cheng-etal-2020-symmetric} detailed a methodology that uses local search to pair emotions and causes simultaneously, reducing the search space and improving efficiency. In the domain of Sentiment-Aware Word Embeddings, \citet{app9071334} details a thorough algorithm to add emotional context to Word2Vec embeddings, as well as shows quantifiable improvement in results when used in emotional classification tasks. \section {Architecture and Implementation} The pipeline implemented in the project to achieve our objective can be split into the following steps: data annotation, creation of emotion aware word embeddings, extraction of clauses, extraction of emotions, extraction of causes of said emotions, and clustering of causes to summarize the reviews. Each of these steps are detailed in the following sections. \subsection{Data Annotation} Since there is no benchmark dataset for the ECPE task in the English language and no review dataset annotated with emotions, we have annotated the data manually. We picked 1000 diverse data points equally across 50 products and for each review, we manually annotated the emotion and cause clause pairs. \subsection{Creation of Emotion Aware Word Embeddings} The process for creating emotion-aware word embeddings centered around the idea that words have and use emotional context in addition to syntactic and structural context. To add this to our pipeline, we first start with Word2Vec embeddings for all words in the corpus. This is done using the gensim library and its inbuilt implementation. Next, we use an emotion lexicon (in our case we used \url{http://saifmohammad.com/WebPages/lexicons.html}) to identify emotion words in the English language as well as the intensity associated with them. 
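The two steps just described might look as follows in code; this is a rough sketch of ours, where the lexicon file name and its word/emotion/score column format are assumptions about the downloaded lexicon rather than a documented schema, and gensim 4 style keyword arguments are used:
\begin{verbatim}
# Step 1: train Word2Vec embeddings on the tokenized review corpus (gensim).
from gensim.models import Word2Vec

tokenized_reviews = [["the", "battery", "died", "after", "a", "week"],
                     ["great", "value", "for", "the", "price"]]
w2v = Word2Vec(sentences=tokenized_reviews, vector_size=100,
               window=5, min_count=1)

# Step 2: load emotion words and their intensities from the lexicon file.
emotion_intensity = {}   # (word, emotion) -> intensity score
with open("NRC-Emotion-Intensity-Lexicon.txt") as f:
    for line in f:
        word, emotion, score = line.strip().split("\t")
        emotion_intensity[(word, emotion)] = float(score)
\end{verbatim}
The vocabulary of \texttt{w2v} and the emotion words that appear in it are the inputs to the similarity computation described next.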
To identify the emotional context of a word, we calculate the cosine similarity of that word with every emotion word in the lexicon that also occurs in our vocabulary. Doing this for all words and all emotion words generates a similarity matrix between the vocabulary of our corpus and the emotional words present in it. Adding the emotional context is a slightly more complicated task since any one word can be similar to multiple emotion words. One way to deal with this is to impose a limit on the number of emotions to add as context. In our specific use case, we decided to set that limit to 2. This is because in the domain of reviews, it is very rare to see more than two main emotions in any one given document. For each word in the vocab, we then calculated the average emotion embedding by combining the Word2Vec embeddings of the 2 most similar emotion words as a weighted average. The weight of each individual emotion in the final embedding is a function of both the similarity to the original word as well as the intensity of the emotion expressed. Finally, we overlay this emotion embedding with the original Word2Vec embedding of the word to obtain an emotion-aware word-embedding. We repeat this for all words in the vocabulary. \begin{figure*} \includegraphics[width=\textwidth]{EmotionExtractionModel.png} \caption{Model to Extract Emotions} \label{fig:emoExt} \end{figure*} \subsection{Extraction of Clauses} To prepare the reviews for processing, we used nltk inbuilt sentence tokenizer to tokenize the sentences from each review. For clause generation, we used the spaCy NLP package (specifically utilizing the en-web-core-trf module for accuracy) to parse the clauses , using each sentence’s predicted dependency structure. A dependency structure is determined by the relation between a word (a head) and its dependents, wherein each word is connected to each other by directed links or dependency. The spaCy engine will determine a ROOT word (usually a finite verb), which forms the structural center of our clause. The roots form the starting point from which the directed links are generated to every other word in the sentence’s clauses. We can now predict the clause by finding dependencies of words present before and after the root. Each sentence was processed using the spacy engine, which created the dependency structure for each sentence as well as predicted the parts of speech for each word. To find the ROOT word of the intended clause (as explained above), we call findRootOfSentence function, which returns the token that has a dependency tag of ROOT as well as the predicted dependency structure for the sentence. Next, we determine the VERB tokens present in our sentence and their direct dependency on the ROOT word. For this, we used the findOtherVerbs function, to look for tokens that have the VERB part-of-speech tag and has the root token as its only ancestor. In doing this we are finding clausal heads within the sentence that form separate clauses. Using the getClauseTokenSpanForVerb function, we find the beginning and ending index for the verbs found previously. The function goes through all the verb's children; the leftmost child's index is the beginning index, while the rightmost child's index is the ending index for this verb's clause. Next we found the clause indices for each verb. The tokenSpans variable contains the list of tuples, where the first tuple element is the beginning clause index and the second tuple element is the ending clause index. 
Finally, we create the token Span objects (using the above tokenSpan variables for each clause) in the sentence. We get the Span object by slicing the Doc object and then appending the resulting Span objects to a list. As a final step, we sort the list to make sure that the clauses in the list are in the same order as in the sentence. The final result is a list of documents tokenized into sentences, which are further split into clauses and then words within. \subsection{Extraction of Emotions} \begin{figure*} \centering \includegraphics[width=\textwidth]{CauseExtractionModel.png} \caption{Model to Extract Causes} \label{fig:causeExt} \end{figure*} For emotion extraction, we are using review text as input. Firstly, we tokenize the review text into words. Then we convert each word into their emotion-aware embedding. These emotion aware embeddings are then passed into an RNN-classifier (consisting of a BRNN followed by a classifier) which classifies the review text into 8 emotion classes. The output of this model will be used as a signal to the cause extraction model. The word embeddings are used as a time-step input to the Bi-LSTM. The Bi-LSTM outputs a 2 * 256 size vector for each time-step. The output of the last time step of Bi-LSTM is passed through a dropout with probability 0.5. We use the output of the last layer as input to a linear layer. The linear layer outputs a vector of size 80 which is passed through the elu nonlinearity function. The output of this layer is further passed to a linear layer which outputs a vector of size 8. This output is further passed to a log softmax function which provides the log-probability for each of the 8 emotions. The training of the model is carried out with the help of the NLL loss function. We are running a stochastic-gradient descent and hence, are using a batch size of 1. The model was trained for 100 epochs with the help of SGD optimizer with learning rate of 0.003 and momentum of 0.9. The trained model will be used to calculate the probability of each emotion. Sample results can be seen in Figures \ref{fig:resultsE1} and \ref{fig:resultsE2}. \subsection{Extraction of Causes} For finding the cause clause, we create a model which predicts whether a given clause is a cause clause or not. For each of the clause, we first tokenize the clause into words and convert each word into embeddings. Now, we create 8 embeddings for each word - one for each emotion. This is created by multiplying each dimension of embedding with the probability of emotion that is outputted from the emotion extraction model. It’s possible that a review involves multiple emotions and this step takes into account this subtlety. The 8 embeddings are passed as an input time step to Bi-LSTM. The Bi-LSTM outputs a 2 * 1024 size vector for each time-step. The output of the last time step of Bi-LSTM is passed through a dropout with probability 0.5. We use the output of the last layer as input to a linear layer. The linear layer outputs a vector of size 80 which is passed through the elu nonlinearity function. The output of this layer is further passed to a linear layer which outputs a vector of size 8. This output is further passed to a sigmoid function which provides the probability of the clause being a cause clause. The training of the model is carried out with the help of the BCE loss function. We are running a stochastic-gradient descent and hence, are using a batch size of 1. The clause having the highest probability is selected as the cause clause. 
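A rough PyTorch sketch of the clause scorer just described is given below; it reflects our reading of the text rather than the authors' code, and in particular the concatenation of the eight emotion-scaled embeddings into a single input vector per time step, as well as all unspecified shapes, are assumptions:
\begin{verbatim}
# Sketch of the cause-extraction clause scorer (layer sizes follow the text).
import torch
import torch.nn as nn
import torch.nn.functional as F

class CauseClauseScorer(nn.Module):
    def __init__(self, emb_dim, hidden=1024, n_emotions=8):
        super().__init__()
        # Bi-LSTM outputs a 2 * 1024 vector per time step.
        self.bilstm = nn.LSTM(emb_dim * n_emotions, hidden,
                              batch_first=True, bidirectional=True)
        self.drop = nn.Dropout(0.5)
        self.fc1 = nn.Linear(2 * hidden, 80)
        self.fc2 = nn.Linear(80, n_emotions)

    def forward(self, word_embs, emotion_probs):
        # word_embs: (1, T, emb_dim); emotion_probs: (n_emotions,)
        scaled = torch.cat([p * word_embs for p in emotion_probs], dim=-1)
        out, _ = self.bilstm(scaled)              # (1, T, 2 * hidden)
        h = self.drop(out[:, -1, :])              # last time step only
        h = F.elu(self.fc1(h))
        return torch.sigmoid(self.fc2(h))         # cause-clause probability
\end{verbatim}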
The model was trained for 50 epochs with the help of SGD optimizer with learning rate of 0.003 and momentum of 0.9. Sample results can be seen in Figures \ref{fig:resultsC1} and \ref{fig:resultsC2}. \subsection{Clustering of Causes} Once we get a cause clause for each review, we cluster the reviews to cover most concepts. For example: there might be two reviews stating that the product is cheap. Such reviews will be covered in the same cluster and will be represented by one cause. To cluster the clauses, we first represent each clause with a vector. Firstly we convert each word of the clause into their corresponding word2vec embedding and emotion-aware embedding. We concatenate these embeddings for each word and then the clause embedding is obtained by max-pooling over these words. Once we get the clause embedding, we use agglomerative clustering to cluster the clauses. We use cosine similarity as the distance function. We are using complete linkage as we want all the clauses in a cluster to be similar to each other. In complete linkage, we calculate the distance between each node of the two clusters and merge the clusters only if the max distance is less than 0.13. We further remove clusters with size < 2 as those clusters are not important. Once we get a cluster of clauses, we need to represent each cluster with a head clause. The head clause is a clause which is close to every other clause in the cluster. We calculate the largest distance of a clause in a cluster with each clause in a cluster and pick the clause with the smallest max distance as the head clause. The clustering is done for each (product, emotion) pair. Hence, we get a list of clauses representing each emotion for the product. \begin{figure} \includegraphics[scale=0.35]{ClustersShown.png} \caption{Clusters of Cause Clauses} \label{fig:clusters} \end{figure} \section{Results} The results for a product and emotion can be seen in Figure \ref{fig:results}. In our dataset, the emotion and cause clause predictions looks to generate some meaningful results. The affinity of above clauses can be seen in the graph Figure \ref{fig:clusters}. We have used t-SNE algorithm to reduce the vectors to 2-D. Since there are less reviews, the reduction was not that efficient, but it gives an idea about the affinity. A quick overview of the project in the form of a video can be found here: \url{https://www.youtube.com/watch?v=YEUB32NEZmU}. Access to the code and all related files and resources can be found here: \url{https://github.com/ArpitMittalUSC/Emotion-Cause-Extraction} \section {Future Work} The popular field of emotion-related tasks in Natural Language Processing sees no shortage in opportunities to innovate. Although we achieved our initial objective in this project, there is a lot of potential for future work. Due to time and resource constraints, we have only managed to run this pipeline on a small dataset. However, training and testing this methodology on a much larger scale would leverage the true power of the pipeline. Another issue was the lack of a benchmark annotated dataset of emotion-cause pairs in the English language. With improved annotations, the results would improve as well, and if a benchmark dataset is created, it would make it much easier to compare our pipeline with other methodologies that exist for ECPE. Future work can also include using more complex and computationally intensive models such as Transformers and contextual embeddings to improve performance. \bibliographystyle{acl_natbib}
\section{Introduction} Quantum technologies face many challenges, often arising due to the unavoidable coupling of any system to its environment. The prediction of their dynamics requires open quantum system methods that include such coupling effects, for example the Caldeira-Leggett model \cite{Breuer02} and the spin-boson model \cite{Weiss12}. These methods are successfully employed in many physical contexts, e.g., quantum optics \cite{Krinner18, Maniscalco04, Stewart17}, condensed matter \cite{Ronzani18, Semin20, Deffner13, Wilhelm04, Hanson08, Luschen17}, quantum computation \cite{Verstraete09, Kliesch11, Thorwart01}, nuclear physics \cite{Brambilla21} and quantum chemistry \cite{Teh19}. For instance, modelling circuit quantum electrodynamics with the spin-boson model shows that the heat transport of a superconducting qubit within a hybrid environment changes significantly, depending on the qubit-resonator and resonator-reservoir couplings \cite{Ronzani18}. In the mathematical treatment of an open quantum system, a coupling function $\tens{{\cal C}}_{\omega}$ is typically introduced that describes how strongly the system interacts with bath degrees of freedom (DoF). Its functional form determines the temporal memory of the bath and whether the noise is coloured or not \cite{Breuer02,Weiss12,Anders20}, critically affecting the system dynamics \cite{Deffner13, Liu16, Zou20}. A large body of theoretical results exist for various toy models that make specific assumptions on the coupling function $\tens{{\cal C}}_{\omega}$ \cite{Weiss12, Breuer02, Shabani05}. However, a major drawback is a somewhat lacking connection to system- or material-specific characteristics to which these methods could be applied: for a given DoF, in a given material, which coupling function $\tens{{\cal C}}_{\omega}$ should one choose to model its dynamics? An alternative approach is taken in the condensed matter literature, where open quantum systems are usually characterized by the density of states (DOS) $D_{\omega}$ of their environment \cite{Chen05}, and modes in the environment are often assumed to couple to the system with the same strength $g$ \cite{Tritt05}. Measurement of, for example, the phonon DOS is well-established using different inelastic scattering techniques \cite{Ament11,Bayle14}. In this paper, we present a useful relation that translates the coupling function $\tens{{\cal C}}_{\omega}$ of an open quantum system into an experimentally measurable DOS $D_{\omega}$, and vice versa. Our relation paves the way to parametrizing realistic coupling functions for a range of applications, for example, for spins in a magnetic material that experience damping through the coupling to the crystal lattice \cite{Anders20, Reppert20} or for nitrogen vacancy centers, a solid-state analogue of trapped atoms, whose coherence lifetime in optical transitions is also limited by interaction with phonons \cite{Bar-Gill12,Fuchs10}. The link is explicitly established for a generic quantum system that couples locally to a bosonic environment. Extensions to other environments, such as fermionic environments, will be possible using similar arguments. The paper is organised as follows: we first introduce the two approaches involving $D_{\omega}$ and $\tens{{\cal C}}_{\omega}$, respectively. Setting up the dynamics of the environment, we evaluate its memory kernel and establish the link between $D_{\omega}$ and $\tens{{\cal C}}_{\omega}$. 
We then demonstrate why the widely used Debye approximation is equivalent to the well-known Ohmic coupling function. While these approximations suffice at low frequencies, experimental DOS show peaks at higher frequencies. By approximating a given DOS with a series of Lorentzians, we show how the frequency dependence of the coupling may be obtained, leading to non-trivial dissipation regimes. As an illustration of the power of the derived link, we parametrize two experimentally measured phonon DOS, those of gold and iron (see Supplementary Material (SM) for the latter) and one theoretically computed phonon DOS of yttrium iron garnet (YIG), and extract key parameters for the corresponding coupling functions. These give direct insight into the impact of memory for any phonon-damped dynamics in these materials. \begin{figure} \includegraphics[width=0.47\textwidth]{fig1-a.jpg} \includegraphics[width=0.44\textwidth]{fig1-b.jpg} \caption{Schematic picture of two equivalent approaches to modelling the open quantum systems. (a) Wave vector approach: Each bath frequency $\omega$ includes several wave vectors $\{{\vect{k}}\}$. The interaction of each bath wave vector $\{{\vect{k}}\}$ with the system is often taken to have the same coupling strength $g$. (b) Frequency approach: Every bath frequency $\omega$ couples to the system with a strength given by $\tens{\cal C}_{\omega}$.} \label{fig1} \end{figure} \section{Two approaches}\label{two-appraoches} The Hamiltonian of a quantum system in contact with a bath is \begin{eqnarray} \hat{{\cal H}}_{tot} = \hat{{\cal H}}_S+ \hat{{\cal H}}_B +\hat{{\cal H}}_{SB}\,, \label{eq:H-tot} \end{eqnarray} where the bath Hamiltonian $\hat{{\cal H}}_B$ and the system Hamiltonian $\hat{{\cal H}}_S$ may contain the internal interactions among their own components. The system-bath interaction is assumed to be of product form, \begin{eqnarray} \hat{{\cal H}}_{SB} = - \hat{\vect{S}} \cdot \hat{\vect{B}}\,, \label{eq:system-bath-H} \end{eqnarray} where $\hat{\vect{S}}$ is a (Hermitian) system operator and $\hat{\vect{B}}$ is a bath operator, each with $d_s$ components. The form of the bath Hamiltonian $\hat{{\cal H}}_{B}$ and of the bath operator $\hat{\vect{B}}$ depends on the context. We consider here a bosonic bath, i.e. an infinite set of harmonic oscillators. In the literature, one can broadly distinguish two representations of the bath, working either in wave vector (WV) or frequency (F) \ space, as illustrated in Fig.\,\ref{fig1}. The wave vector approach\xspace is common in condensed matter physics \cite{Chen05, Weiss12} where the bath Hamiltonian is expressed as a sum over all possible modes ${\vect{k}}$ \begin{eqnarray} \hat{{\cal H}}_B^{WV} = \sum_{{\vect{k}}} \hbar \omega_{\vect{k}} \left(\hat{b}_{\vect{k}}^\dag \hat{b}_{\vect{k}}+\frac{1}{2}\right)\,. \label{eq:bath-H} \end{eqnarray} Here $\omega = \omega_{\vect{k}}$ gives the dispersion relation of a normal mode with wave vector ${\vect{k}}$ and $\hat{b}_{{\vect{k}}}$ ($\hat{b}_{{\vect{k}}}^{\dag}$) are bosonic annihilation (creation) operators of a mode excitation with commutation relations $[ \hat{b}_{{\vect{k}}},\hat{b}_{{\vect{k}}'}^{\dag}] = \delta_{{\vect{k}} {\vect{k}}'}$. Usually one considers a three-dimensional ($3$D) structure with wave vectors ${\vect{k}} = (k_x, k_y, k_z)$. 
For example, in a cubic $3$D lattice with number of lattice sites $N$, lattice constant $a$ and volume $V = Na^3$, each component of ${\vect{k}}$ runs through the range $\left(-\frac{\sqrt[3]{N}-1}{2}, \ldots, 0, \ldots, \frac{\sqrt[3]{N}-1}{2} \right) \, \frac{2 \pi}{\sqrt[3]{N} a}$. For large $N$ and $V$, and for any function $f(\omega_{{\vect{k}}})$ that only depends on the frequency $\omega_{{\vect{k}}}$, one can approximate sums over the wave vectors as \begin{align} \frac{1}{V} \sum_{ {\vect{k}}} f(\omega_{\vect{k}}) \cong \, \int \! \frac{\upd^3k}{(2\pi)^3} \, f(\omega_{\vect{k}}) =: \int \!\upd\omega \, D_{\omega} \, f(\omega)\,. \label{eq:l-omega} \end{align} This equation defines $D_{\omega}$ as the DOS per unit volume of bath modes at frequency $\omega$ \cite{Chen05}. For bosonic baths, we choose the standard interaction \cite{Weiss12} where the bath operator $\hat{\vect{B}}$ is linear in the bosonic mode operators (single phonon processes), \begin{eqnarray} \hat{\vect{B}}^{WV} = \frac{1}{\sqrt{V}}\sum_{{\vect{k}}} \vect{ \xi}_{{\vect{k}}} \, \hat{b}_{\vect{k}} + \text{h.c.}\,, \label{eq:phonon-field-2} \end{eqnarray} where $\vect{ \xi}_{{\vect{k}}} = \left( \hbar g^2/ (2 \omega_{{\vect{k}}}) \right)^{1/2}\!\vect{ \epsilon}_{\vect{k}}$ with $\vect{ \epsilon}_{\vect{k}}$ a $d_s$-dimensional unit polarisation vector \cite{Breuer02}. The coupling constant $g$ is assumed to be mode-independent for simplicity \cite{Tritt05}. Eq.\,\eqref{eq:system-bath-H} may be generalized to the situation that several system components $\hat{\vect{S}}_m$ are located at different positions $\vect{ R}_m$, and sum over interaction terms, i.e. $\hat{{\cal H}}_{SB} = -\sum_{m}\hat{\vect{S}}_m \cdot \hat{\vect{B}}( \vect{ R}_m )$. The field operators would then be $\vect{ R}$-dependent, i.e. $\hat{\vect{B}}^{WV}(\vect{ R}) = \frac{1}{\sqrt{V}}\sum_{{\vect{k}}} \vect{ \xi}_{{\vect{k}}} \, \hat{b}_{\vect{k}} \, {\rm e}^{ {\rm i} {\vect{k}} \cdot \vect{ R} } + \text{h.c.}$. For simplicity, we will concentrate in the following on just one system site and drop summation over $m$ again. Another approach to setting up bath Hamiltonian $\hat{{\cal H}}_B$ and interaction $\hat{{\cal H}}_{SB}$ is based on a frequency expansion often employed in the open quantum systems literature \cite{Weiss12, Breuer02}. In contrast to Eq.\,\eqref{eq:bath-H}, here $\hat{{\cal H}}_B$ is written directly as a sum or integral over frequencies, \begin{eqnarray} \hat{{\cal H}}_B^{F} = \frac{1}{2}\int_{0}^{\infty} \!\!\!\!\upd\omega \left( \hat{\vect{ P }}_{\omega}^2 + \omega^2\hat{\vect{ X}}_{\omega}^2\right), \label{eq:HB-omega} \end{eqnarray} where $\hat{\vect{ P }}_{\omega}$ and $\hat{\vect{X}}_{\omega}$ are $3$D [in general $d$-dimensional ($d$D)] momentum and position operators, respectively, for the bath oscillator with frequency $\omega$. Their components obey $[\hat{X}_{\omega,j}, \hat{P}_{\omega',l}] = {\rm i}\hbar \,\delta_{jl}\,\delta(\omega - \omega')$. In this approach, the bath operator in Eq.\,\eqref{eq:system-bath-H} is often chosen as \cite{ Anders20} \begin{eqnarray} \hat{\vect{B}}^{F} = \int_{0}^{\infty} \!\!\!\!\upd\omega \,\tens{{\cal C}}_{\omega} \hat{\vect{ X}}_{\omega}\,, \label{eq:B-omega} \end{eqnarray} where the coupling function $\tens{{\cal C}}_{\omega}$ (in general a $d_s\times d$ tensor) is weighting the system-bath coupling at frequency $\omega$. {The system operators couple isotropically to the bath if $\tens{{\cal C}}_{\omega}\tens{{\cal C}}_{\omega}^{T} = \mathbbm{1}_{d_s}\,C_{\omega}^2$. 
The scalar coupling function $C_{\omega}$} is related to the bath spectral density $J_{\omega}$, which alternatively quantifies the effect of the environment on the system as $J_{\omega} \propto C^2_{\omega}/\omega$ \cite{Breuer02,Weiss12}. The bath dynamics can be categorised \cite{Weiss12} based on the low-$\omega$ exponent of the spectral density, $J_{\omega}\propto \omega^s$, into three different classes, called Ohmic ($s = 1$), sub-Ohmic ($s < 1$), and super-Ohmic ($s > 1$). The difference between wave vector approach\xspace and frequency approach\xspace is that at a fixed frequency $\omega$, there is in Eq.\,\eqref{eq:B-omega} just one bath operator $\hat{\vect{X}}_\omega$ that couples to the system, while according to Eq.\,\eqref{eq:phonon-field-2}, the interaction is distributed over several wave vector modes ${\vect{k}}$ with weighting factors $\vect{ \xi}_{\vect{k}}$, their number being set by the DOS $D_{\omega}$ (see Fig.\,\ref{fig1}). We now want to address the question of the connection between the DOS $D_{\omega}$ and the coupling function $\tens{{\cal C}}_{\omega}$. To achieve this we consider one relevant quantity in both approaches and equate the corresponding formulas. In the following, we choose the memory kernel $\tens{\cal K}$ which encodes the response of the bath to the system operator $\hat{\vect{S}}$. Note that the choice of $\hat{\vect{B}}$ in Eq.\,\eqref{eq:phonon-field-2} restricts the discussion to the linear response of the bath, as is reasonable for a bath that is thermodynamically large \cite{Breuer02, Weiss12}. \section{Memory kernel in both approaches} To find an explicit relation in the wave vector approach\xspace for the dynamics of the bath operator $\hat{{\vect{B}}}^{WV}$ in Eq.\,\eqref{eq:phonon-field-2}, the starting point is the equation of motion for $\hat{b}_{\vect{k}}$, \begin{eqnarray} \frac{ d \hat{b}_{\vect{k}} }{ dt } = - {\rm i} \omega_{{\vect{k}}} \hat{b}_{\vect{k}} + \frac{{\rm i} }{\hbar \sqrt{V}} \vect{ \xi}_{{\vect{k}}}^{\dag} \cdot \hat{{\vect{S}}} \, , \label{eq:equation-of-motion} \end{eqnarray} whose retarded solution contains two terms \begin{eqnarray} \hat{b}_{{\vect{k}}}(t) & = & \hat{b}_{{\vect{k}}}(0) \, {\rm e}^{- {\rm i} \omega_{{\vect{k}}} t} \\ && {} + \frac{{\rm i} }{\hbar \sqrt{V}} \vect{ \xi}_{{\vect{k}}}^{\dag} \cdot \int_{0}^{t}\!\!\upd t'\, \hat{{\vect{S}}}(t') \, {\rm e}^{- {\rm i} \omega_{{\vect{k}}} (t-t')}\,. \nonumber \label{eq:solution-equation-of-motion} \end{eqnarray} Therefore the time evolution of the bath operator can be written as $\hat{{\vect{B}}}^{WV}(t) = \hat{{\vect{B}}}_{\rm induced}^{WV}(t) + \hat{{\vect{B}}}_{ \rm response}^{WV}(t)$. The first term represents the internally evolving bath which is given by $\hat{{\vect{B}}}_{\rm induced}^{WV}(t) = \frac{1}{\sqrt{V}}\sum_{{\vect{k}}}\hat{b}_{{\vect{k}}}(0) e^{-i\omega t}\vect{ \xi}_{{\vect{k}}} + \text{h.c.}$, while ${\hat{\vect{B}}}_{\rm response}^{WV}(t)$ contains information about the system's past trajectory, \begin{eqnarray} \hat{{\vect{B}}}_{\rm response}^{WV}(t) = \int_{0}^{\infty}\!\!\!\!\upd t'\, \tens{\cal K}^{WV}(t-t') \,\hat{{\vect{S}}}(t')\,, \label{eq:response-field} \end{eqnarray} where $\tens{\cal K}^{WV}(t-t')$ is the memory kernel (a tensor), \begin{eqnarray} \tens{\cal K}^{WV}(t-t') = \Theta(t-t') \frac{g^2}{ V}\sum_{{\vect{k}}} \vect{ \epsilon}_{{\vect{k}}} \vect{ \epsilon}_{{\vect{k}}}^{\dag} \frac{\sin \omega_{{\vect{k}}}(t-t')}{\omega_{{\vect{k}}}}\,. 
\label{eq:kernel-1} \end{eqnarray} Here, the $\vect{ \xi}_{{\vect{k}}}$ have been expressed by the unit polarisation vectors $\vect{ \epsilon}_{{\vect{k}}}$ [see after Eq.\,\eqref{eq:phonon-field-2}] and $\Theta(t-t')$ is the Heaviside function, which ensures that the bath responds only to the past state of the system, i.e. $t' < t$. For large volume $V$ and in the continuum limit, the summation over ${\vect{k}}$ in Eq.\,\eqref{eq:kernel-1} can be transformed into a frequency integration as in Eq.\,\eqref{eq:l-omega}. The projection on polarization vectors, averaged over an isofrequency surface $\Omega$, is taken into account by a ($d_s \times d_s$) positive Hermitian matrix $\tens{{\cal M}}_{\omega} = (\Omega)^{-1}\int \upd \Omega \; \vect{ \epsilon}_{{\vect{k}}} \vect{ \epsilon}^{\dag}_{{\vect{k}}}$, normalized to unit trace. With this notation, the memory tensor in the wave vector approach\xspace is \begin{eqnarray} \tens{\cal K}^{WV}(t-t') = \Theta(t-t') g^2\!\!\!\int_{0}^{\infty}\!\!\!\!\upd\omega\, \tens{{\cal M}}_{\omega} D_\omega \frac{\sin \omega(t-t')}{\omega} \,. \label{eq:kernel1} \end{eqnarray} Turning now to the frequency approach\xspace, the dynamics of the bath operator $\hat{\vect{X}}_{\omega}$ in Eq.\,\eqref{eq:B-omega} follows a driven oscillator equation \begin{eqnarray} \frac{d^2\hat{\vect{X}}_{\omega}}{dt^2}+\omega^2\hat{\vect{X}}_{\omega} = \tens{{\cal C}}_{\omega}^{T}\, \hat{\vect{S}} \, . \label{eq:equation-of-motion-X} \end{eqnarray} Its exact solution is \begin{eqnarray} \hat{\vect{X}}_{\omega}(t)&=& \hat{\vect{X}}_{\omega}(0)\cos{\omega t} + \hat{\vect{ P }}_{\omega}(0)\sin{\omega t}\nonumber\\ &&{} + \int_{-\infty}^{\infty}\!\!\!\!\upd t'G_{\omega}(t-t')\, \tens{{\cal C}}_{\omega}^{T}\, \hat{\vect{S}}(t')\,, \label{eq:X-solution} \end{eqnarray} where $G_{\omega}(t-t') = \Theta(t-t') \sin \omega(t-t')/\omega$ is the retarded Green's function. Inserting this solution in Eq.\,\eqref{eq:B-omega} leads again to induced and response evolution parts given, respectively, by $\hat{\vect{B}}_{\rm induced}^{F}(t)= \int_{0}^{\infty} \!\!\upd\omega\left(\hat{\vect{X}}_{\omega}(0)\cos{\omega t} + \hat{\vect{ P }}_{\omega}(0)\sin{\omega t}\right)$ and \begin{eqnarray} \hat{\vect{B}}_{\rm response}^{F}(t)= \int_{0}^{\infty} \!\!\!\!\upd\omega\int_{0}^{\infty}\!\!\!\!\upd t'\,G_{\omega}(t-t') \,\tens{{\cal C}}_{\omega}\tens{{\cal C}}_{\omega}^{T} \,\hat{\vect{S}}(t')\,. \label{eq:B-1-omega} \end{eqnarray} Comparing with Eq.\,\eqref{eq:response-field} one can identify the memory kernel tensor in the frequency approach\xspace as \begin{eqnarray} \tens{\cal K}^{F}(t-t') = \int_{0}^{\infty} \!\!\!\!\upd\omega \; \tens{{\cal C}}_{\omega} \tens{{\cal C}}_{\omega}^{T} \,G_{\omega}(t-t')\,. \label{eq:kernel2} \end{eqnarray} \section{Coupling function $\tens{{\cal C}}_{\omega} $ versus DOS $D_{\omega}$} Since Eqs.\,\eqref{eq:kernel1} and \eqref{eq:kernel2} describe the same memory effects, we may set them equal, leading to \begin{eqnarray} \tens{{\cal C}}_{\omega} \tens{{\cal C}}_{\omega}^{T} = g^2\tens{\cal M}_{\omega} D_{\omega} \,. \label{eq:Dw-Cw} \end{eqnarray} This relation links the system-bath couplings in the two approaches, i.e. the DOS $D_{\omega}$ is proportional to the Hermitian "square" of the coupling function $\tens{{\cal C}}_{\omega}$. This is the first result of the paper. 
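As a quick practical illustration of this relation (not part of the derivation above), the following Python sketch converts a tabulated DOS into the corresponding scalar coupling function, anticipating the isotropic reduction $C_{\omega}^2=g^2 D_{\omega}/d_s$ discussed below; the toy DOS, the value of $g$ and the frequency grid are placeholders standing in for real data.
\begin{verbatim}
import numpy as np

# Hedged sketch: translate a tabulated DOS D(w) into a scalar coupling
# function via C_w^2 = g^2 D_w / d_s (isotropic case), and read off the
# low-frequency exponent s of the spectral density J_w ~ C_w^2 / w.
g, d_s = 1.0, 3                          # placeholder coupling strength, system dimension
w = np.linspace(0.01, 10.0, 1000)        # frequency grid (arbitrary units)
D = w**2 * np.exp(-(w / 3.0)**2)         # toy DOS standing in for measured data

C = np.sqrt(g**2 * D / d_s)              # coupling function implied by the relation above
J = C**2 / w                             # spectral density, up to a constant prefactor

low = slice(0, 10)                       # fit J ~ w^s on the lowest frequencies
s = np.polyfit(np.log(w[low]), np.log(J[low]), 1)[0]
print("low-frequency exponent s =", round(s, 2))   # close to 1, i.e. Ohmic here
\end{verbatim}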
Relation\,\eqref{eq:Dw-Cw} holds under two assumptions: First, we assumed the polarization matrix $\tens{\cal M}_{\omega}$ is sufficient to carry all information about the isofrequency surface in ${\vect{k}}$-space. Second, we considered identical coupling strengths $g$ to all modes of the reservoir (as sketched in Fig.\,\ref{fig1}) -- however, an extension to a $g$ that depends on frequency $\omega$ is straightforward. The result\,\eqref{eq:Dw-Cw} may be applied to any quantum system that interacts linearly with a bosonic bath. For instance, magnetic materials in which spins $\hat{\boldsymbol S}$ relax in contact with a phonon reservoir have been studied extensively \cite{Costi03, Anders20,Thorwart01}. An impurity in a condensate described by a Caldeira-Leggett model in Ref.\cite{Lampo17} is another example. For isotropic coupling to an isotropic bath, we may set $\tens{{\cal C}}_{\omega} = \mathbbm{1}_{d_s}\,C_{\omega}$ with scalar $C_{\omega}$, and $\tens{\cal M}_{\omega} = \mathbbm{1}_{d_s}/d_s$ because the trace of $\tens{\cal M}_{\omega}$ averages the norm square of the unit polarization vectors $\vect{ \epsilon}_{{\vect{k}}}$. With these assumptions, Eq.\,\eqref{eq:Dw-Cw} reduces to a scalar equation \begin{eqnarray} C_{\omega}^2 = \frac{g^2}{d_s} D_{\omega}\,. \label{eq:scalar-Dw-Cw} \end{eqnarray} Note that in general $d_s \leq d $. An example where $d_s = d$ is a $3$D spin vector that couples to a $3$D phononic environment \cite{Anders20}. A rectangular ($d_s \times d$) coupling matrix $\tens{{\cal C}}_{\omega}$ may model a graphene-on-substrate structure, where the electronic system ($d_s = 2$) is in contact with a $3$D phononic bath \cite{Cusati17}. \section{Debye approximation} In condensed-matter physics, the Debye model is used to describe the phonon contribution to a crystal's thermodynamic properties. It assumes an acoustic dispersion, i.e. $\omega = c \vert {\vect{k}}\vert$ with an averaged sound speed $c$, resulting in $3$D in \cite{Chen05} \begin{eqnarray} D_{\omega}^{\rm Deb} = \frac{3\,\omega^2}{2\pi^2 c^3}\, \Theta(\omega_{\rm D} - \omega)\,. \label{eq:Debye-DOS} \end{eqnarray} Here $\omega_D$ is the Debye frequency, i.e. the maximum bath frequency, which in practice is taken to be near the edge of the Brillouin zone. For example, for gold, see Fig.\,\ref{fig2} (a), the Debye model fits the DOS data reasonably well in frequency region~$I$ up to $\approx 1.4 \un{THz}$. For the Debye DOS, our relation Eq.\,\eqref{eq:scalar-Dw-Cw} implies the coupling function (setting $d_s = d = 3$) \begin{eqnarray} C_{\omega}^{\rm Deb} = \frac{g\,\omega}{\sqrt{2\pi^2 c^3}} \, \Theta(\omega_{\rm D} - \omega)\,. \label{eq:Debye-Cw} \end{eqnarray} The scaling of $C^{\rm Deb}_{\omega}$ implies that the spectral density $J(\omega) \propto C_{\omega}^2/\omega$ is Ohmic, i.e. $J(\omega) \propto \omega$. Hence, the Debye model with constant coupling $g$ in the wave vector approach captures the same relaxation dynamics as an Ohmic bath in the frequency approach. \begin{figure}[t] \begin{center} \includegraphics[width=0.4\textwidth]{fig2.pdf} \caption[]{ (a) Debye DOS (pink solid line, Eq.\,\eqref{eq:Debye-DOS}) and two-peak Lorentzian DOS (blue solid line, Eq.\,\eqref{eq:Dw-Lorentzian-sum}) fitted to a measured phonon DOS for gold (red dots) reported as in Ref.\cite{Munoz13}. The Debye frequency for gold is $\omega_D/2\pi = 3.54\un{THz}$ given in Ref.\cite{Chen05}. 
Fit specified peak frequencies $\omega_{0,j}$, widths $\Gamma_{j}$ and peak ratios $A_j/A_1$ are given in Table~\ref{tab:fit-to-Au}. The grey dashed lines separate three frequency regimes discussed in the main text. (b) Memory kernels ${\cal K}(t-t')$ corresponding to Debye DOS and two-peak Lorentzian DOS. } \label{fig2} \end{center} \end{figure} Beyond $3$D cubic lattices, $D_{\omega}$ will depend on the dimensionality and lattice symmetry. What happens if the lattice is effectively two- or one-dimensional? To answer this, let us imagine a $d$D isotropic lattice with volume $V=Na^d$. The volume element of such a lattice in ${\vect{k}}$-space corresponds to $\upd^{d}k = \Omega_d k^{d-1} \upd k$ where $\Omega_d = 2, 2\pi, 4\pi$ is the $d$D solid angle for $d = 1, 2, 3$, respectively. Analogously to the $3$D lattice, using the acoustic dispersion with an averaged sound speed $c$, one finds the $d$D Debye DOS \begin{eqnarray} D_{\omega}^{(d)} = \frac{ \Omega_d\, \omega^{d-1} }{ (2 \pi c )^d } \Theta(\omega_{\rm D} - \omega)\,. \label{eq:Dw-in-dD} \end{eqnarray} Via Eq.\,\eqref{eq:scalar-Dw-Cw} we obtain the power-law $C_{\omega}\propto \omega^{(d-1)/2}$ for the corresponding coupling functions which implies spectral densities $J(\omega) \propto \omega^{d-2}$. Thus, isotropic baths in $2$D or $1$D behave in a distinctly sub-Ohmic way. \section{Inferring coupling functions from DOS data} Beyond the conceptually useful Debye model, a structured DOS with several peaks is a generic feature of real materials \cite{Munoz13,Mauger14}. Sums of Lorentzian or Gaussian functions are two convenient candidates to approximate such peaky shaped densities \cite{Lemmer18}. Here, we fit experimentally measured DOS for gold \cite{Munoz13} (and iron \cite{Mauger14} in SM) and theoretically computed DOS for YIG \cite{Wang20} to a function consisting of multiple Lorentzians, \begin{eqnarray} D_{\omega}^{\rm Lor} = \frac{6\, A_1}{g^2\pi} \sum_{j=1}^{\nu} \frac{A_{j}\Gamma_{j}}{A_1}\frac{\omega^2}{(\omega_{0,j}^2 - \omega^2)^2+\Gamma_{j}^2 \omega^2}\,. \label{eq:Dw-Lorentzian-sum} \end{eqnarray} The fits, see Figs.\,\ref{fig2} (a), \ref{fig3} and figure in SM, reveal the material specific peak frequencies $\omega_{0,j}$, peak widths $\Gamma_j$ and peak ratios $A_j/A_1$, see Table\,\ref{tab:fit-to-Au} and tables in SM, while the first peak amplitude $A_1$ remains undetermined. Fixing $A_1$ would require information additional to the DOS, such as the system's relaxation rate due to interaction with the phonon bath. Note that phonon DOS are generally slightly temperature dependent \cite{Mauger14}. Hence the fit parameters in Eq.\,\eqref{eq:Dw-Lorentzian-sum} will be (usually weak) functions of temperature, a dependence that only matters when a large range of temperatures is considered. \begin{table}[bhtbp] \centering {\footnotesize \caption{Fit parameters of two-peak Lorentzian matched to the experimentally measured DOS for gold reported in Ref.\cite{Munoz13} (see Fig.~\ref{fig2} (a)).} \vspace{2mm} \begin{tabular}{c|ccc} \hline \text{peak}& \text{frequency} & \text{width} & \text{ratio}\\ $j$ & $\omega_{0,j}/2\pi\ [\un{\!THz}]$ & $\Gamma_{j}/2\pi\ [\un{\!THz}]$ & $A_j/A_1$\\ \hline 1 & 2.11 & 1.3 & 1 \\ 2 & 4.05 & 0.56 & 0.15 \\ \hline \end{tabular} \label{tab:fit-to-Au} } \end{table} The peak widths in Eq.\,\eqref{eq:Dw-Lorentzian-sum} determine a characteristic memory time $1/\Gamma_j$. 
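For concreteness, a multi-peak Lorentzian fit of the type just described can be set up with standard tools; the sketch below is illustrative only (synthetic data generated from the two-peak form itself stands in for a digitised DOS, the overall prefactor is absorbed into the amplitudes, and the initial guess is arbitrary).
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def D_lor(w, a1, w01, g1, a2, w02, g2):
    # two-peak Lorentzian DOS; each term ~ A_j Gamma_j w^2 / ((w0j^2-w^2)^2 + Gamma_j^2 w^2),
    # with the overall prefactor 6 A_1 / (g^2 pi) absorbed into the amplitudes a_j
    peak = lambda A, w0, G: A * G * w**2 / ((w0**2 - w**2)**2 + G**2 * w**2)
    return peak(a1, w01, g1) + peak(a2, w02, g2)

# synthetic stand-in for digitised DOS data (gold fit parameters, in units of 2*pi*THz)
two_pi = 2 * np.pi
w_data = np.linspace(0.1, 6.0, 300) * two_pi
D_data = D_lor(w_data, 1.0, 2.11*two_pi, 1.3*two_pi, 0.15, 4.05*two_pi, 0.56*two_pi)

p0 = [1.0, 2.0*two_pi, 1.0*two_pi, 0.2, 4.0*two_pi, 0.5*two_pi]   # arbitrary initial guess
popt, _ = curve_fit(D_lor, w_data, D_data, p0=p0)
print(popt)   # recovered amplitudes, peak frequencies and widths
\end{verbatim}
The peak widths returned by such a fit directly give the memory times $1/\Gamma_j$ referred to in the text.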
However, beyond this single timescale number, the functional dependence of the memory is fully determined by the kernel Eq.\,\eqref{eq:kernel1}, which for multi-peak Lorentzians is proportional to \begin{equation} \tens{\cal K}^{\rm Lor}(t-t') \propto \sum_{j}^{\nu} A_j e^{-\frac{\Gamma_{j}(t-t')}{2}} \frac{\sin (\omega_{1,j}(t-t'))}{\omega_{1,j}} \Theta(t-t')\,, \end{equation} with $\omega_{1,j} = \sqrt{\omega_{0,j}^2 - \Gamma_j^2/4}$. For gold, Fig.\,\ref{fig2} (a) shows the phonon DOS measured by Mu\~{n}oz {\it et al.} \cite{Munoz13}, together with our two-peak Lorentzian fit. The fit gives good agreement in all frequency regimes, with a slightly slower decay in region $III$ than the measured DOS. For a system coupled to phonons in gold, the peak widths (see Table~\ref{tab:fit-to-Au}) immediately imply a characteristic memory time in the picosecond range. The relevant kernel is shown (blue) in Fig.\,\ref{fig2} (b) for the two-peak fitted DOS of gold shown in~(a). Using the Debye model instead would give a qualitatively different behaviour: the pink curve shows a distinctly slower long-time tail, due to the sharp cutoff at the Debye frequency. Without any cutoff, it would transform into $\tens{\cal K} (t-t') \propto \partial_{t'}\delta (t - t')$, implying no memory \cite{Anders20}. In contrast, the Lorentzian fit (blue) provides a quantitatively accurate memory kernel. Our approach may provide a more realistic picture of the magnetization dynamics based on actual material data. YIG \cite{Barker20, Barker21} is a typical magnetic insulator in which the relaxation of a spin DoF $\hat{\boldsymbol S}$ is dominated by the coupling to phonons\cite{Sebastian18}, similar to magnetic alloys like Co-Fe \cite{Schoen16}, while in metallic materials, the coupling to electrons is more relevant\cite{Kormann14}. Fig.\,\ref{fig3} illustrates a theoretically computed DOS for YIG \cite{Wang20} with a fit that contains eighteen Lorentzians. (Parameters are displayed in Table~\ref{tab:fit-to-YIG} in the SM.) In this fit, a few negative amplitudes $A_j$ in Eq.\,\eqref{eq:Dw-Lorentzian-sum} are needed to reproduce the gap near $16\un{THz}$. While positive $A_j$ are easily justified as energy transfer from the system to the bath modes, we can understand negative amplitudes as energy flow in the reverse direction, i.e. from the bath to the system. \begin{figure}[t] \begin{center} \includegraphics[width=0.45\textwidth]{fig4.pdf} \caption[]{ Illustration of eighteen-peak Lorentzian DOS, Eq.\,\eqref{eq:Dw-Lorentzian-sum}, (orange curve) fitted to the theoretically predicted phonon DOS $D_{\omega}$ for YIG (cyan curve) reported in Ref.\cite{Wang20}. The fitted peak frequencies $\omega_{0,j}$, widths $\Gamma_{j}$ and amplitude ratios $A_j/A_1$ can be found in Table~\ref{tab:fit-to-YIG} in the SM.} \label{fig3} \end{center} \end{figure} Using additional information of the typical Gilbert damping parameter for this material \cite{Krysztofik17}, also the peak amplitude $A_1$ can be determined (see the SM). More generally using relation \eqref{eq:Dw-Cw}, the multi-peak DOS \eqref{eq:Dw-Lorentzian-sum} imply coupling functions of the form \begin{eqnarray} C_{\omega}^{\rm Lor} = \sqrt{A_1\sum_{j=1}^{\nu} \frac{2A_{j}\Gamma_{j}}{A_1\pi}\frac{\omega^2}{(\omega_{0,j}^2 - \omega^2)^2+\Gamma_{j}^2 \omega^2}}\,. 
\label{eq:Cw-Lorentzian-sum} \end{eqnarray} This allows us, for the first time, to specify the overall magnitude of the coupling of a system to a phononic bath using the above multi-peak Lorentzian fits to the measured DOS in real materials. This second result of our paper should be useful for modelling the Brownian motion of spins \cite{Anders20, Coffey20} and in applications such as quantum information processing with solid-state spin systems \cite{Hegde20}. \section{Conclusion} We have derived the general relation \,\eqref{eq:Dw-Cw} that translates the function ${\cal C}_{\omega}$, determining the coupling of a generic system to a bosonic bath at various frequencies, into the density of states $D_{\omega}$ of the latter. This was achieved by evaluating the memory kernel of dynamical bath variables in two equivalent approaches. Several applications of the relation were then discussed. We demonstrated how for systems damped by phonons in $3$D, Debye's quadratic DOS captures the same physics as a linear coupling function $C_{\omega}$ which corresponds to an Ohmic spectral density. Secondly, we have established how to infer $C_{\omega}$ from the measured DOS of a material, such that it reflects the specific properties of the material and goes beyond a purely mathematical choice of coupling function. Given that real materials have densities of states with multiple peaks, the typical picture which emerges from our general relation \eqref{eq:Dw-Cw} is that the coupling function is non-Ohmic and memory effects in the system dynamics become important. The corresponding time scales (in the ps range, e.g., for gold in Fig.\,\ref{fig2} (b)) can be conveniently determined by fitting multiple Lorentzians to the bath DOS. Future work could address how to extend relation \eqref{eq:Dw-Cw} to systems interacting with multiple independent baths. This should be suitable for non-equilibrium settings involving different temperatures \cite{Millen2014}, as used in heat transport \cite{Dhar08}. The impact of memory may also change the behaviour of systems like superconducting qubits or two-level systems that are in contact with two baths \cite{Senior20, Segal05}. \section{Acknowledgments} We thank Jorge A. Mu\~{n}oz, Lisa M. Mauger and Brent Fultz for sharing their experimental data. We would also like to thank Joseph Barker, Luis Correa, and Simon Horsley for illuminating discussions, and Matias Bargheer for comments on an early draft of the paper. We gratefully acknowledge funding for this research from the University of Potsdam.
\section{Introduction} The use of complex world-lines (which can be interpreted as string worldsheets) in complex Kerr spacetimes has a long history, see \cite{Newman:2004, Burinskii:2013} and references therein, and also the contribution of Penrose to the book \cite{Wiltshire:2009} based on his paper \cite{Penrose:1997}. In particular, Burinskii\cite{Burinskii:2013} devoted many articles to expose string-like structures in complex Kerr-Schild geometry.\\ A completely different line of investigation was to find holographic duals of Kerr spacetimes, stimulated by the AdS/CFT correspondence and termed Kerr/CFT correspondence. It started with extreme Kerr black holes \cite{Guica:2008} and was generalized to non-extreme cases \cite{Castro:2010, Haco:2018}. These results generated considerable interest.\\ In this paper it is argued that twistor strings similar to ones of Burinskii can be regarded as special holomorphic solutions of free twistor string models on the 2-dimensional twistor manifolds \cite{Penrose:1986, Araneda:2019} that correspond to the two principal null congruences of the Kerr spacetime. Such a twistor string model implies a 2-dimensional conformal field theory (CFT), which is the reason for calling it a Kerr/CFT correspondence, although it is not based on holographic duality, but still might spawn some interest. Dealing with a CFT one can ask whether the Cardy formula \cite{Cardy:1986} applies and agrees with the Bekenstein-Hawking area law. Obviously this depends on the specifics of how the twistor string is gauged. We examine the 4-dimensional ambitwistor string of Geyer, Lipstein, and Mason \cite{Geyer:2014} and an anomaly-free 4-dimensional twistor string found by the author \cite{Kunz:2020, Kunz_1:2020}. Both models show agreement with Bekenstein-Hawking, but the ambitwistor string is related to non-minimal conformal gravity whereas the other twistor string has more potential to actually describe the Kerr spacetime.\\ In section 2 we review the twistor string approach of Burinskii and show that strings similar to ones he considered are solutions of free twistor string models defined on a pair of holomorphic 2-dimensional twistor manifolds that belong to the principal null congruences through the Kerr theorem \cite{Penrose:1986, Araneda:2019}. In section 3 we check whether the 4-dimensional ambitwistor model \cite{Geyer:2014} agrees with Bekenstein-Hawking using the Cardy formula and come to an affirmative answer if the central charge is made zero with help of the additional current algebra. On the other hand we argue that this model is not a good candidate because it is inherently conformal and cannot describe the Kerr spacetime. In section 4 we examine the twistor string of \cite{Kunz:2020, Kunz_1:2020}. It is shown that the model agrees with Bekenstein-Hawking like the previous ambitwistor string. On the other hand, because of the gauging of the translation group it has more potential to actually describe the Kerr spacetime. Further the model leads to the correct tree-level gravitational scattering amplitudes exactly because of the presence of ghost fields that come from the gauging of the translations. The last section 5 contains summary and discussion.\\ \section{Twistor Strings in Kerr Spacetime} \label{Strings} Any analytic shear-free null congruence can be characterized as a complex 2-parameter holomorphic family of $\alpha$-surfaces (see paragraph after the twistor form of the Kerr theorem 7.4.14 in \cite{Penrose:1986}). 
When applying this to the two principal null congruences in complexified Kerr spacetime, it leads to two 2-dimensional twistor manifolds endowed with a holomorphic structure, and similarly in dual twistor space for $\beta$-surfaces \cite{Penrose:1986, Araneda:2019}\footnote{Our convention for twistors and dual twistors is interchanged from the one used in \cite{Penrose:1986} and \cite{Araneda:2019}, as typically done for more than a decade in perturbative gauge theory, as already mentioned in \cite{Mason:2009}.}.\\ More specifically the Kerr metric in terms of a tetrad in null coordinates can be written as \cite{Kerr:2008} \begin{align} &ds^2 = (d\zeta + Y dv)(d\overline{\zeta} + \overline{Y} dv) - (dv - \frac{hk}{(1 + Y \overline{Y})^2})k,\label{kerr-schild}\\ &k = du + \overline{Y} d\zeta + Y d\overline{\zeta} + Y \overline{Y} dv.\nonumber \end{align} where \begin{align*} &u = z + t, &v = z - t, &&\zeta = x + iy, \end{align*} $k$ is a principal null congruence with $Y$ being a solution of the equation \begin{align} &\overline{\zeta}Y^2 + 2(z - ia)Y - \zeta = 0, \label{congruence} \end{align} in the corresponding Kerr theorem, and the multiplicative coefficient $h$ is \begin{align} & h = 2m \text{Re}(2 Y_{\zeta}). \label{factor} \end{align} The roots of the equation are \cite{Kerr:2008} \begin{align*} &Y_1 = \frac{r\zeta}{(z+r)(r-ia)}, &2Y_{1,\zeta} = + \frac{r^3 + iarz}{r^4 + a^2 z^2},\\ &Y_2 = \frac{r\zeta}{(z-r)(r+ia)}, &2Y_{2,\zeta} = - \frac{r^3 + iarz}{r^4 + a^2 z^2}, \end{align*} where $r$ is a root of \begin{align} &\frac{x^2 + y^2}{r^2 + a^2} + \frac{z^2}{r^2} = 1. \label{radius} \end{align} Equation \eqref{congruence} can also be written in terms of a twistor $Z_a = (\mu^{\dot{\alpha}}, \lambda_{\alpha})$ as a quadratic equation in $Z_a$: \begin{align} &Z_a Q^{ab}Z_b = 0, \text{\hspace{10pt} where \hspace{3pt}} &Y = \frac{\lambda_1}{\lambda_0}, && \mu^{\dot{\alpha}} = x^{\alpha \dot{\alpha}} \lambda_\alpha, &&Q^{ab} = \begin{pmatrix} 0 & ia & 0 & \frac{1}{2}\\ ia & 0 & -\frac{1}{2} & 0\\ 0 & -\frac{1}{2} & 0 & 0\\ \frac{1}{2} & 0 & 0 & 0 \end{pmatrix}. \label{twistor-congruence} \end{align} This form of the equation was used by Penrose in \cite{Penrose:1986, Wiltshire:2009}. The solutions for $Z_a$, up to a multiplicative factor, determine the two 2-dimensional twistor manifolds. Equivalently, they are given by $Y_{1,2}$, together with the coordinates. The complex conjugate values $\overline{Y}_{1,2}$ determine the ones in the dual twistor space\footnote{Obviously, it is just a matter of convention which role twistors and dual twistor play here. They can be interchanged.}. $Y_{1,2}$ can be represented in a more suggestive manner in terms of Kerr coordinates $(u, r, \theta, \phi)$ defined by \begin{align*} &t = u \pm r , & x + iy = (r \mp ia) e^{i\phi} \sin \theta, && z = r \cos \theta, \end{align*} where $r$ still satisfies \eqref{radius}, but $u$ is different from the null coordinate $u$. In these coordinates $Y_{1,2}$ simply become \begin{align*} &Y_1= e^{i\phi} \tan \frac{\theta}{2}, &Y_2 = - e^{i\phi} \cot \frac{\theta}{2}. \end{align*} They describe, for constant $\phi$ and $\theta$, outgoing and ingoing principal null geodesics, with affine parameter $\pm r$ \footnote{In this representation, the geometric interpretation of the outgoing geodesics is more fittingly viewed as ingoing geodesics on the $r < 0$ sheet of extended Kerr spacetime\cite{Hawking:1973}.}.\\ In complex spacetime these coordinates become complex and independent. 
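Before proceeding, the algebra above admits a simple numerical sanity check; the following sketch (illustrative only, with arbitrarily chosen real coordinates and spin parameter) verifies that the quoted roots $Y_{1,2}$ indeed satisfy the Kerr congruence equation \eqref{congruence}.
\begin{verbatim}
import numpy as np

# check that Y_1, Y_2 solve  conj(zeta) Y^2 + 2 (z - i a) Y - zeta = 0,
# with r the positive root of (x^2+y^2)/(r^2+a^2) + z^2/r^2 = 1
a = 0.7                       # spin parameter (placeholder value)
x, y, z = 1.3, -0.4, 0.9      # arbitrary real spacetime point
zeta = x + 1j * y
rho2 = x*x + y*y + z*z
r = np.sqrt(0.5 * ((rho2 - a*a) + np.sqrt((rho2 - a*a)**2 + 4*a*a*z*z)))

Y1 = r * zeta / ((z + r) * (r - 1j * a))
Y2 = r * zeta / ((z - r) * (r + 1j * a))
for Y in (Y1, Y2):
    residual = np.conj(zeta) * Y**2 + 2 * (z - 1j * a) * Y - zeta
    print(abs(residual))      # both residuals vanish to machine precision
\end{verbatim}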
The principal null geodesics can be represented as families of twistor curves or strings that are holomorphic in $w$ with $\mathfrak{Re} (w) \!=\! \pm r$, parametrized by $\phi$ and $\theta$ and projected onto the space $\mathbb{P}\mathbb{N}$ of null twistors ($\sim$ real spacetime). Alternatively, bundling up some of these parameters leads to two families of closed twistor strings holomorphic in $w$, one of circular form with $\mathfrak{Re} (w) \!=\! \phi \!\in\! [-\pi, \pi]$ and $\theta$ constant, and one sort of perpendicular to it with $\mathfrak{Re} (w) \!=\! \theta \!\in\! (-\pi, \pi)$ and $\phi$ constant. The first string family is periodic and the latter antiperiodic, reminiscent of Ramond (R) and Neveu-Schwarz (NS) type strings in superstring theory. The presence of an NS sector is a reflection of the ring singularity in Kerr spacetime and its expansion to extended Kerr spacetime with 2 sheets, one for $r > 0$ and one for $r \!<\! 0$ \cite{Hawking:1973}. Burinskii \cite{Burinskii:2004, Burinskii:2013} looked at similar (although not identical) types of strings, mainly in the context of the Kerr spinning particle. Of course, there are many more holomorphic twistor strings possible on these 2-dimensional twistor manifolds.\\ All these holomorphic strings and the analogous ones in dual twistor space can be viewed as special solutions of a twistor string model on the cross product of each of the two 2-dimensional twistor manifolds in twistor space with the corresponding one in dual twistor space, with action \begin{align} S_0 = \frac{1}{2 \pi} \int\mathop{}\negthickspace{\mathrm{d}^2\mathop{}\negthickspace z} \frac{1}{2} \left( W \cdot \overline\partial{Z} - Z \cdot \overline\partial{W} \right), \label{action_0} \end{align} where $Z$ denotes a twistor and $W$ a dual twistor, with components \begin{equation*} Z = \binom{\lambda_{\alpha}}{\mu^{\dot{\alpha}}}, \,\,\,\,W = \binom{\tilde{\mu}^{\alpha}}{\tilde{\lambda}_{\dot{\alpha}}}. \end{equation*} The presence of R and NS type strings means that the twistors should be worldsheet spinors. This model defines a Virasoro algebra with a central charge which depends on which symmetries are gauged. One can ask whether the Cardy formula \cite{Cardy:1986} for this model agrees with the Bekenstein-Hawking area law: \begin{equation*} S_{\text{Cardy}} = \frac{\pi^2}{3}|c_{\text{eff}}| T \stackrel{?}{=} S_{\text{BH}} = \frac{A}{4}, \end{equation*} where $T$ is the temperature and $c_{\text{eff}} = c - 24\Delta_0$ is the effective central charge with $c$ the central charge and $\Delta_0$ the lowest eigenvalue of the $L_0$ Virasoro operator. Concerning the temperature, the question arises about its value. As the principal null geodesics generate the event horizon, we can take clues from the holographic Kerr/CFT correspondence \cite{Castro:2010, Haco:2018} and set \begin{equation} T = T_L + T_R = \frac{Mr_+}{2 \pi J}, \label{temperature} \end{equation} where the ingoing and outgoing null congruences are considered to provide left and right temperature, respectively. If we could show $c = 12$ in units of $J$ and $\Delta_0 = 0$ like in the holographic Kerr/CFT correspondence, then $S_{\text{Cardy}} = 2 \pi M r_+ = S_{\text{BH}}$, and the Cardy formula indeed would agree with Bekenstein-Hawking. 
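The arithmetic behind this statement is easily checked; the sketch below (geometric units $G=c=\hbar=k_B=1$, sample values of $M$ and $J$ chosen arbitrarily) confirms that $c_{\text{eff}}=12$ in units of $J$, together with the temperature \eqref{temperature}, reproduces $S_{\text{BH}}=2\pi M r_+$.
\begin{verbatim}
import numpy as np

# Cardy entropy with c_eff = 12 J and T = M r_+ / (2 pi J)
# versus the Bekenstein-Hawking entropy A/4 of the Kerr horizon
for M, J in [(1.0, 0.3), (2.0, 1.5), (5.0, 10.0)]:
    a = J / M
    if a > M:                              # skip over-spinning parameters (no horizon)
        continue
    r_plus = M + np.sqrt(M**2 - a**2)      # outer horizon radius
    S_BH = np.pi * (r_plus**2 + a**2)      # A/4 with A = 4 pi (r_+^2 + a^2)
    T = M * r_plus / (2 * np.pi * J)
    S_Cardy = (np.pi**2 / 3) * (12 * J) * T
    print(S_BH, S_Cardy)                   # the two values coincide
\end{verbatim}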
On the other hand, in the following sections we consider a couple of anomaly-free twistor string models with zero central charge where the agreement between Cardy and Bekenstein-Hawking is no longer obvious but still achievable as we will see.\\ Another question is whether such a twistor string defined on the 2-dimensional twistor and dual twistor manifolds determines the Kerr spacetime, i.e. whether the mapping between spacetime and twistor string model is bidirectional. The action \eqref{action_0} by itself is clearly not sufficient because it can describe a spacetime only up to a conformal factor, i.e. such a spacetime will generally not satisfy the Einstein equations. The conformal scaling invariance needs to be broken in a specific manner to satisfy these equations.\\ \section{Ambitwistor String} \label{Ambitwistor} The action for the 4-dimensional ambitwistor string with no supersymmetry\footnote{Adding supersymmetry by changing twistors Z and W to supertwistors does not alter any result in this section.} is \cite{Geyer:2014} \begin{align} S_1 = \frac{1}{4 \pi} \int\mathop{}\negthickspace{\mathrm{d}^2\mathop{}\negthickspace z}\left( W \cdot \overline\partial{Z} - Z \cdot \overline\partial{W} + a Z \cdot W \right) + S_j, \label{action_1} \end{align} where the scaling symmetry GL(1) of the twistor string model in section \ref{Strings} has been gauged, forcing the twistors and dual twistors to be ambitwistor pairs with zero GL(1) charge, and where an action for a worldsheet current algebra has been added, chosen such that the model is anomaly-free with zero central charge. In contrast to the Berkovits-Witten string\cite{Berkovits_1:2004} it is assumed that the twistor fields are worldsheet spinors as required from section \ref{Strings}. To get an idea about the spectrum one can look at the vertex operators \cite{Geyer:2014, Farrow:2018} which imply that the lowest $L_0$ eigenvalue is $\Delta_0 = 1$ in units of $J$ such that $|c_{\text{eff}}| = 24 \cdot J$. Therefore, we come to the conclusion that the Cardy formula gives twice the value of the Bekenstein-Hawking entropy. But one can argue that the model, originally defined on the full twistor space \cite{Geyer:2014}, only has one Virasoro algebra when viewed as restricted to the 2-dimensional twistor manifolds, not a \emph{left} and a \emph{right} one, such that the temperature should be averaged, and we end up with agreement between Cardy and Bekenstein-Hawking.\\ Unfortunately, this ambitwistor model cannot be relevant for the Kerr spacetime because the MHV tree-level gravitational scattering amplitudes describe conformal gravity, similar to the Berkovits-Witten model \cite{Farrow:2018, Berkovits_1:2004}.\\ \section{Alternate Anomaly-free Twistor String} \label{TwistorString} The second model we consider has the action \cite{Kunz:2020, Kunz_1:2020, Kunz:2021} \begin{multline} S_2 = \frac{1}{2 \pi} \int\mathop{}\negthickspace{\mathrm{d}^2\mathop{}\negthickspace z} \left\{ \frac{1}{2} \sum_{i=1}^2 \left( W_i \cdot \overline\partial{Z_i} - Z_i \cdot \overline\partial{W_i} + \Theta_i \cdot \overline\partial{\Psi_i} + \Psi_i \cdot \overline\partial{\Theta_i} \right) \right.\\ + \sum_{i,j=1}^2 \lambda_i \cdot a_{1 i j} \cdot \tilde{\phi}_j + \sum_{i,j=1}^2 \tilde{\lambda}_i \cdot a_{2 i j} \cdot \phi_j + \sum_{i,j=1}^2\tilde{\lambda}_i \cdot b_{i j} \cdot \lambda_j + \left(W_1 W_2 \right) \mathop{}\negthickspace \vec{c} \, \cdotp \vec{\tau} \mathop{}\negthickspace \begin{pmatrix} Z_1 \\ Z_2 \end{pmatrix} \left. 
\vphantom{\sum_i} \right\}, \label{action} \end{multline} where for $i=1,2$ $Z_i$ are twistors, $W_i$ dual twistors, $\Psi_i$ fermionic bi-spinors, and $\Theta_i$ fermionic dual bi-spinors, with components \begin{equation*} Z_i = \binom{\lambda_{i \alpha}}{\mu_i^{\dot{\alpha}}}, W_i = \binom{\tilde{\mu}_i^{\alpha}}{\tilde{\lambda}_{i \dot{\alpha}}}, \Psi_i = \binom{\phi_{i \alpha}}{\psi_i^{\dot{\alpha}}}, \Theta_i = \binom{\tilde{\psi}_i^{\alpha}}{\tilde{\phi}_{i \dot{\alpha}}}, \end{equation*} and where again the strings are considered to be worldsheet spinors, partitioning them into an NS sector and a R sector. One unusual aspect of this model is the presence of two twistors instead of just one. Assuming that they represent two intersecting $\alpha$-curves, due to the incidence relations $\mu_i^{\dot{\alpha}} = x^{\alpha \dot{\alpha}} \lambda_{i \alpha}$, then they are like two rays intersecting the celestial sphere over $x$ and are determined only up to a complex SU(2)\footnote{SU(2) $\sim$ SL(2, $\mathbb{C}$) as complex Lie algebra} symmetry operation. This is the reason for the gauged SU(2) symmetry between the two twistors in \eqref{action}, with Lagrange multiplier field $\vec{c}$ \cite{Kunz:2020}\footnote{$\vec{\tau}$ denote the Pauli matrices.}. Further, Isenberg \& Yasskin \cite{Isenberg:1986} showed that the twistor space and its dual are in natural correspondence with teleparallel spacetimes, which typically are modeled by gauging the translation group, suggesting the gauging of the translations on the worldsheet as well, with Lagrange multiplier field $b_{i j}$ \cite{Kunz:2020}. Finally, fermionic spinors have been added to the model, with worldsheet supersymmetries based on gauging of supertranslations, with Lagrange multipliers $a_{1 i j}$ and $a_{2 i j}$ \cite{Kunz:2020}. After performing BRST quantization, this model is anomaly-free (the BRST charge is nilpotent) in self-contained fashion, without need of an additional worldsheet current algebra \cite{Kunz:2020}.\\ Analysis of the spectrum \cite{Kunz_1:2020} shows that the gauging of the translation symmetry reduces the number of complex degrees of freedom for each twistor from three to two. Therefore, we can choose a gauge of the SU(2) symmetry that assigns each twistor to one of the two 2-dimensional twistor manifolds in consistent manner, and the dual twistors in analogous way. This choice will reveal itself as very convenient for going back to the original spacetime.\\ Like in the case of the ambitwistor string $\Delta_0 = 1$, and to make the Cardy formula agree with Bekenstein-Hawking, it needs to be adjusted to half the value. This makes absolute sense, considering that the single Virasoro algebra has two contributions, one from each of the two twistors, but gauged in such a way that it is to be considered a single contribution.\\ Can this model actually represent the Kerr spacetime? It has been shown that it provides the expected tree-level gravitational scattering amplitudes in the NS sector \cite{Kunz:2020, Kunz_1:2020}, and that this happens precisely because the contractions between ghost and antighost fields arising from the gauging of the translation group are ensuring that only connected trees (or equivalently trees without loops) are allowed amongst the contractions in the worldsheet correlator of gravitational scattering \cite{Adamo_1:2012}. 
Also, by being able to associate each twistor of the model with a particular one of the two 2-dimensional twistor manifolds we can keep track of these manifolds separately. And for Kerr spacetimes twistors on these manifolds fulfill an homogeneous quadratic equation of the form \eqref{twistor-congruence} which can be evaluated and solved for $Y_{1,2}$, and allows to calculate the conformal factor \eqref{factor}, up to the mass factor. So, indeed, we get back the original Kerr spacetime, up to the mass which, of course, is nowhere to be found in twistor space.\\ This might look like cheating, inserting the knowledge that the spacetime was Kerr to begin with. On the other hand, one should note that the whole construction of the two 2-dimensional twistor manifolds can be generalized to any Petrov type D spacetime with two gravitational shear-free principal null congruences, the main difference to Kerr being that $Z_a Q^{ab}Z_b$ in \eqref{twistor-congruence} is replaced by a general holomorphic homogeneous function in $Z_a$. Knowing the two twistor manifolds Araneda\cite{Araneda:2019} showed that with certain assumptions that for instance are satisfied by Kerr-(A)dS and Kerr-Newman-(A)dS spacetimes, the original Petrov type D spacetime can be recovered via a conformally K{\"a}hler structure\cite{Dunajski:2009}, but only up to a conformal factor which would need to be fixed to ensure the validity of Einstein equations with or without cosmological constant. Whether this twistor string model, with help of the gauged translation symmetry, determines the correct conformal factor in this more general case is an open issue. For more discussion on this topic, in particular on applying the Isenberg \& Yasskin programme \cite{Isenberg:1986} for general spacetimes to this model, see \cite{Kunz:2021}.\\ \section{Summary and Discussion} \label{Discussion} In this paper we presented a non-holographic correspondence between the Kerr spacetime and a CFT in twistor space together with its dual based on twistor string models that are defined on a couple of holomorphic 2-dimensional twistor manifolds, one for each principal null congruence, and have holomorphic strings of principal null geodesics as classical solutions. We examined a couple of gauged twistor string models to see whether the Cardy formula leads to the same entropy as the Bekenstein-Hawking area law and whether they are candidates for representing the Kerr spacetime. Both models could be made to agree between Cardy and Bekenstein-Hawking, but the first model, the 4-dimensional ambitwistor string \cite{Geyer:2014}, relates to non-minimal conformal gravity whereas the second twistor string model \cite{Kunz:2020, Kunz_1:2020} provides the correct Einstein gravitational scattering amplitudes by gauging the translation symmetry on the worldsheet and actually describes the Kerr spacetime with help of the congruence equations \eqref{twistor-congruence} and \eqref{factor}. In order for this model to go from a Kerr spacetime to more general Petrov type D spacetimes, the effect of the various worldsheet gauge symmetries, especially of the translation symmetry, needs to be investigated more thoroughly.\\ If the second model is actually a viable theory, it makes some interesting predictions. There are quite a few exotic particles in the spectrum never seen before \cite{Kunz_1:2020}. 
The spin 2 and $\frac{3}{2}$ excitations in the R sector can easily become massive at low energies by picking up corresponding spin 0, 1, and $\frac{1}{2}$ excitations, but in the NS sector there are no obvious lower spin excitations available to make the graviton, gravitinos, and vector particles massive. This would mean that the NS sector only contains gravitationally interacting massless particles, which should in principle be detectable in gravitational waves, though only with extreme experimental difficulty, and that the spin $\frac{1}{2}$ matter content is exclusively delegated to the R sector. This would provide an explanation of why semi-classical quantum field theory on curved spaces can be so successful \cite{Parker:2009} and why gravitation is so much weaker than, and so different in character from, the other interactions. If the low-energy limit of the theory exists, it is a modified gravity model with both hot and cold dark matter. A lot of details would need to be worked out.\\
\section{Introduction} Let (see \cite{4}, p. 79) \begin{equation} \label{1.1} \begin{split} & Z(t)=e^{i\vartheta(t)}\zeta\left(\frac{1}{2}+it\right), \\ & \vartheta(t)=-\frac t2\ln\pi+\text{Im}\ln\Gamma\left(\frac 14+i\frac t2\right), \end{split} \end{equation} and let $\{ t_\nu\}$ be the sequence defined by the equation (see \cite{4}, p. 221) \begin{equation} \label{1.2} \vartheta(t_\nu)=\pi\nu,\ \nu=1,2,\dots \end{equation} In the paper \cite{2} we obtained, with the help of the Riemann-Siegel formula (see \cite{4}, p. 220) \begin{equation} \label{1.3} Z(t)=2\sum_{n\leq\sqrt{\bar{t}}}\frac{1}{\sqrt{n}}\cos(\vartheta-t\ln n)+\mathcal{O}(t^{-1/4}),\ \bar{t}=\frac{T}{2\pi}, \end{equation} the following formulae \begin{equation} \label{1.4} \begin{split} & \sum_{T\leq t_\nu\leq T+H}Z(t_\nu)Z(t_\nu+\bar{\tau}_k)\sim -\frac{1}{(4k+3)\pi^2}H\ln^2\frac{T}{2\pi}, \\ & \sum_{T\leq t_\nu\leq T+H}Z(t_\nu)Z(t_\nu+\bar{\bar{\tau}}_k)\sim \frac{1}{(4k+1)\pi^2}H\ln^2\frac{T}{2\pi},\ T\to\infty , \end{split} \end{equation} where \begin{equation} \label{1.5} \begin{split} & \bar{\tau}_k=\frac{(4k+3)\pi}{\ln\frac{T}{2\pi}}, \quad \bar{\bar{\tau}}_k=\frac{(4k+1)\pi}{\ln\frac{T}{2\pi}}, \\ & k=0,1,\dots,K_0(T)=\mathcal{O}(1),\ H=\sqrt{T}\ln T. \end{split} \end{equation} \begin{remark} The autocorrelative sum \begin{equation} \label{1.6} \sum_{T\leq t_\nu\leq T+H}Z(t_\nu)Z(t_\nu+\tau_k) \end{equation} is oscillatory on the segment of the arithmetic sequence \begin{displaymath} \bar{\tau}_0,\bar{\bar{\tau}}_0,\bar{\tau}_1,\bar{\bar{\tau}}_1,\dots,\bar{\tau}_{K_0},\bar{\bar{\tau}}_{K_0}, \end{displaymath} as follows from (\ref{1.4}). \end{remark} In this paper: \begin{itemize} \item[(a)] we use the above mentioned oscillatory behavior of the sum (\ref{1.6}) in order to construct a new kind of quasi-orthonormal system of vectors (new in the theory of the Riemann $Z(t)$-function); \item[(b)] we obtain, on the basis of (a), a \emph{microscopic} interpolation formula for the function $Z(t)$ on a set of segments, each of which has length \begin{equation} \label{1.7} < A\frac{\{\psi(T)\}^\epsilon}{\ln T}\to 0 \quad \text{as}\quad T\to\infty, \end{equation} where $\psi(T)$ is an arbitrary fixed function of the type \begin{displaymath} \ln_3T,\ln_4T,\dots;\quad \ln_3T=\ln\ln\ln T,\dots \end{displaymath} \end{itemize} \section{Result} \subsection{} The following Theorem holds true. \begin{mydef1} The formula \begin{equation} \label{2.1} \begin{split} & \overset{*}{Z}(h_\nu)=\frac 4\pi\sum_{p=1}^L \frac{(-1)^{p+1}}{2p-1}\frac{\overset{*}{Z}(h_\nu+\bar{\tau}_p)+\overset{*}{Z}(h_\nu-\bar{\tau}_p)}{2}+ \\ & + \mathcal{O}\left\{ [\psi(T)]^{-\epsilon/4}\right\},\quad L=[(\psi(T))^\epsilon], \end{split} \end{equation} where $[\dots]$ stands for the integer part, and \begin{equation} \label{2.2} h_\nu=t_\nu+\frac{\pi}{\ln\frac{T}{2\pi}},\ \bar{\tau}_p=\frac{(2p-1)\pi}{\ln\frac{T}{2\pi}},\ \overset{*}{Z}(t)=\frac{Z(t)}{\sqrt{\ln\frac{T}{2\pi}}} \end{equation} holds true for a set of values \begin{displaymath} h_\nu\in [T,T+\bar{H}] \end{displaymath} whose number is of the order \begin{equation} \label{2.3} \sim \frac{1}{2\pi}\bar{H}\ln\frac{T}{2\pi},\quad T\to\infty;\ \bar{H}=\sqrt{T}\psi(T), \end{equation} i.e. for \emph{almost all} $h_\nu$. \end{mydef1} \begin{remark} The result of our Theorem may be expressed, from the viewpoint of the theory of interpolation, as follows. 
If we assume the values \begin{displaymath} \{\overset{*}{Z}(h_\nu\pm \bar{\tau}_p)\}_{p=1}^L \end{displaymath} are given, then the formula (\ref{2.1}) expresses an approximation to the unknown value \begin{displaymath} \overset{*}{Z}(h_\nu). \end{displaymath} \end{remark} \begin{remark} For the discrete set of arguments in (\ref{2.1}) \begin{displaymath} \{ h_\nu-\bar{\tau}_L,h_\nu-\bar{\tau}_{L-1},\dots,h_\nu,h_\nu+\bar{\tau}_1,\dots,h_\nu+\bar{\tau}_L\} \end{displaymath} we have (see (\ref{2.2})) \begin{equation} \label{2.4} h_\nu+\bar{\tau}_L-(h_\nu-\bar{\tau}_L)=2\bar{\tau}_L<A\frac{(\ln_3T)^\epsilon}{\ln T}\to 0,\ T\to\infty \end{equation} (comp. (\ref{1.7})). Consequently, (\ref{2.4}) justifies calling the interpolation formula (\ref{2.1}) \emph{microscopic}. \end{remark} \subsection{} The following Lemma is the basis for the proof of our Theorem. \begin{mydef5} If \begin{equation} \label{2.5} \tau',\tau''=\mathcal{O}\left(\frac{\psi^\epsilon}{\ln T}\right), \end{equation} then we have the following formulae \begin{equation} \label{2.6} \begin{split} & \sum_{T\leq t_\nu\leq T+\bar{H}}Z(t_\nu+\tau')Z(t_\nu+\tau'')= \\ & = \frac{1}{2\pi}\frac{\sin\{ (\tau''-\tau')\ln P_0\}}{(\tau''-\tau')\ln P_0}\bar{H}\ln^2\frac{T}{2\pi}+\mathcal{O}(\sqrt{T}\ln^2T), \ \tau'\not=\tau'', \end{split} \end{equation} and \begin{equation} \label{2.7} \begin{split} & \sum_{T\leq t_\nu\leq T+\bar{H}}Z^2(t_\nu+\tau')= \frac{1}{2\pi}\bar{H}\ln^2\frac{T}{2\pi}+\mathcal{O}(\sqrt{T}\ln^2T), \end{split} \end{equation} where \begin{displaymath} P_0=\sqrt{\frac{T}{2\pi}}, \end{displaymath} and these formulae are uniform in $\tau',\tau''$ (see the condition (\ref{2.5})). \end{mydef5} This Lemma follows from \cite{2}, (10), (11) in the case \begin{displaymath} \begin{split} & \tau\to\tau''-\tau',\ \tau=\mathcal{O}\left(\frac{1}{\ln T}\right)\to\tau',\tau''=\mathcal{O}\left(\frac{\psi^\epsilon}{\ln T}\right), \\ & H\to\bar{H}=\sqrt{T}\psi(T). \end{split} \end{displaymath} The remaining parts of this paper are organized as follows. We define: \begin{itemize} \item[(a)] a certain quasi-orthonormal system of vectors, \item[(b)] an analogue of the Fourier coefficients for this case and, consequently, the asymptotic Fourier coefficients, \item[(c)] an analogue of the trigonometric polynomial related to our vectors, \item[(d)] the corresponding mean square deviation. \end{itemize} After completing this program we prove the Theorem. \section{Quasi-orthonormal system of vectors} Since (see \cite{1}, (23)) \begin{displaymath} Q=Q(T,\bar{H})=\sum_{T\leq t_\nu\leq T+\bar{H}}1=\frac{1}{2\pi}\bar{H}\ln\frac{T}{2\pi}+\mathcal{O}\left(\frac{\bar{H}^2}{T}\right), \end{displaymath} we have (see (\ref{2.3})) \begin{equation} \label{3.1} Q\sim \frac{1}{2\pi}\bar{H}\ln\frac{T}{2\pi}; \ \bar{H}=\sqrt{T}\psi(T). \end{equation} Next, in the case \begin{equation} \label{3.2} \tau_p=\frac{2\pi}{\ln\frac{T}{2\pi}}p,\ p=-L+1,\dots,-1,0,1,\dots,L \end{equation} we have (see (\ref{2.5}), (\ref{2.6}), (\ref{3.2})) \begin{equation} \label{3.3} \sum_{T\leq t_\nu\leq T+\bar{H}} Z^2(t_\nu+\tau_p)=\frac{1}{2\pi}\bar{H}\ln^2\frac{T}{2\pi}+\mathcal{O}(\sqrt{T}\ln^2T), \end{equation} and \begin{equation} \label{3.4} \sum_{T\leq t_\nu\leq T+\bar{H}} Z(t_\nu+\tau_p)Z(t_\nu+\tau_p')=\mathcal{O}(\sqrt{T}\ln^2T),\ p\not=p'. 
\end{equation} Consequently, we obtain (see (\ref{2.2}), (\ref{3.1}) -- (\ref{3.4})) the following formula \begin{equation} \label{3.5} \begin{split} & \frac 1Q\sum_{T\leq t_\nu\leq T+\bar{H}} \overset{*}{Z}(t_\nu+\tau_p)\overset{*}{Z}(t_\nu+\tau_p')= \\ & = \left\{\begin{array}{rcl} \mathcal{O}(\frac{1}{\psi}) & , & p\not=p', \\ 1+\mathcal{O}(\frac{1}{\psi}) & , & p=p' . \end{array}\right. \end{split} \end{equation} \begin{remark} We will refer to the property (\ref{3.5}) of the system of vectors \begin{equation} \label{3.6} \{ \overset{*}{Z}(t_\nu+\tau_p)\},\ t_\nu\in [T,T+\bar{H}];\ p=-L+1,\dots,-1,0,1,\dots,L \end{equation} as \emph{quasi-orthonormality}. \end{remark} \section{Analogue of Fourier coefficients and the classical Leibniz series} We define the following numbers \begin{equation} \label{4.1} \begin{split} & A_p=\frac 1Q\sum_{T\leq t_\nu\leq T+\bar{H}}\overset{*}{Z}(t_\nu+\tau^0)\overset{*}{Z}(t_\nu+\tau_p), \\ & p=-L+1,\dots,-1,0,1,\dots,L \end{split} \end{equation} as an analogue of the Fourier coefficients of the vector \begin{equation} \label{4.2} \overset{*}{Z}(t_\nu+\tau^0),\ T\leq t_\nu\leq T+\bar{H},\ \tau^0=\frac{\pi}{\ln\frac{T}{2\pi}}. \end{equation} Since (see (\ref{3.2}), (\ref{4.2})) \begin{equation} \label{4.3} \tau_p-\tau^0=\frac{(2p-1)\pi}{\ln\frac{T}{2\pi}}=\left(\pi p-\frac \pi2\right)\frac{1}{\ln P_0}; \ P_0=\sqrt{\frac{T}{2\pi}}, \end{equation} we have (comp. (\ref{2.6})) \begin{equation} \label{4.4} \frac{\sin\{ (\tau_p-\tau^0)\ln P_0\}}{(\tau_p-\tau^0)\ln P_0}=\frac 2\pi\frac{(-1)^{p+1}}{2p-1}. \end{equation} Consequently, we have from (\ref{2.6}) by (\ref{3.1}), (\ref{4.1}), (\ref{4.4}) that \begin{equation} \label{4.5} A_p=\frac 2\pi\frac{(-1)^{p+1}}{2p-1}+\mathcal{O}\left(\frac 1\psi\right). \end{equation} \begin{remark} It is natural to call the numbers \begin{equation} \label{4.6} \bar{A}_p=\frac 2\pi\frac{(-1)^{p+1}}{2p-1},\ p=-L+1,\dots,-1,0,1,\dots,L \end{equation} the asymptotic Fourier coefficients of the vector (\ref{4.2}). \end{remark} \begin{remark} Let us point out the presence of the members of the classical Leibniz series \begin{displaymath} \sum_{p=1}^\infty \frac 2\pi\frac{(-1)^{p+1}}{2p-1} = \frac 12 \end{displaymath} in the notion of the asymptotic Fourier coefficients. \end{remark} \section{Analogue of the trigonometric polynomial} Finally, we define an analogue of the trigonometric polynomial, $P_{2L}$, corresponding to our quasi-orthonormal system (\ref{3.6}), in the following way: \begin{equation} \label{5.1} P_{2L}[\overset{*}{Z}(t_\nu+\tau^0)]=\sum_{p=-L+1}^L \bar{A}_p\overset{*}{Z}(t_\nu+\tau_p). \end{equation} Since from (\ref{4.6}) it follows that \begin{equation} \label{5.2} \bar{A}_p=\bar{A}_{-p+1}, \end{equation} we have from (\ref{5.1}), by (\ref{4.6}) and (\ref{5.2}), \begin{equation} \label{5.3} \begin{split} & P_{2L}=\sum_{p=1}^L \bar{A}_p\overset{*}{Z}(t_\nu+\tau_p)+\sum_{p=1}^L \bar{A}_{1-p}\overset{*}{Z}(t_\nu+\tau_{1-p})= \\ & = \sum_{p=1}^L \bar{A}_p\{\overset{*}{Z}(t_\nu+\tau_p)+\overset{*}{Z}(t_\nu+\tau_{1-p})\} = \\ & = \frac 2\pi\sum_{p=1}^L \frac{(-1)^{p+1}}{2p-1}\{\overset{*}{Z}(t_\nu+\tau_p)+\overset{*}{Z}(t_\nu+\tau_{1-p})\}. 
\end{split} \end{equation} \section{Mean square deviation and the classical Euler series} \subsection{} First of all, from the Euler series, together with the choice of $L$ in (\ref{2.1}), we obtain \begin{equation} \label{6.1} \begin{split} & \frac{\pi^2}{8}=\sum_{p=1}^\infty \frac{1}{(2p-1)^2}=\sum_{p=1}^L\frac{1}{(2p-1)^2}+\sum_{p=L+1}^\infty\frac{1}{(2p-1)^2}= \\ & = \sum_{p=1}^L\frac{1}{(2p-1)^2}+\mathcal{O}\left(\frac 1L\right)=\sum_{p=1}^L\frac{1}{(2p-1)^2}+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right). \end{split} \end{equation} Since (see (\ref{4.6})) \begin{equation} \label{6.2} \bar{A}_p^2=\frac{4}{\pi^2}\frac{1}{(2p-1)^2}, \end{equation} we have (see (\ref{5.2}), (\ref{6.1}), (\ref{6.2})) \begin{equation} \label{6.3} \begin{split} & \sum_{p=-L+1}^L\bar{A}_p^2=\frac{4}{\pi^2}\sum_{p=-L+1}^L\frac{1}{(2p-1)^2}=\frac{8}{\pi^2}\sum_{p=1}^L\frac{1}{(2p-1)^2}= \\ & = \frac{8}{\pi^2}\left\{ \frac{\pi^2}{8}+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right)\right\}= 1+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right), \end{split} \end{equation} and, of course, \begin{equation} \label{6.4} \sum_{p,q=-L+1}^L |\bar{A}_p\bar{A}_q|=\mathcal{O}(L^2)=\mathcal{O}(\psi^{2\epsilon}). \end{equation} \subsection{} We define the discrete mean square deviation $\Delta$ as follows \begin{equation} \label{6.5} \Delta^2=\frac 1Q\sum_{T\leq t_\nu\leq T+\bar{H}}\{\overset{*}{Z}(t_\nu+\tau^0)-P_{2L}\}^2. \end{equation} Consequently, we have \begin{equation} \label{6.6} \begin{split} & \Delta^2=\frac 1Q\sum_{(t_\nu)}\overset{*}{Z}^2(t_\nu+\tau^0)-\frac 2Q\sum_{(t_\nu)}\overset{*}{Z}(t_\nu+\tau^0)P_{2L}+\\ & + \frac 1Q\sum_{(t_\nu)} P^2_{2L}=w_1+w_2+w_3. \end{split} \end{equation} From (\ref{2.7}), the case $\tau'=\tau''$ of (\ref{2.6}), we obtain immediately (see (\ref{2.2}), (\ref{2.5}), (\ref{3.1}), (\ref{4.2})) that \begin{equation} \label{6.7} w_1=1+\mathcal{O}\left(\frac{1}{\psi}\right). \end{equation} \subsection{} Next, we obtain from (\ref{5.1}), (\ref{6.6}) by (\ref{4.1}), (\ref{4.5}), (\ref{4.6}), (\ref{6.3}) \begin{equation} \label{6.8} \begin{split} & w_2=-\frac 2Q\sum_{(t_\nu)}\overset{*}{Z}(t_\nu+\tau^0)\sum_{p=-L+1}^L \bar{A}_p\overset{*}{Z}(t_\nu+\tau_p)= \\ & = -2\sum_{p=-L+1}^L \bar{A}_p\frac 1Q\sum_{(t_\nu)}\overset{*}{Z}(t_\nu+\tau^0)\overset{*}{Z}(t_\nu+\tau_p)= \\ & = -2\sum_{p=-L+1}^L \bar{A}_pA_p=-2\sum_{p=-L+1}^L \bar{A}_p^2+\mathcal{O}\left(\frac 1\psi\sum_{p=-L+1}^L 1\right)= \\ & = -2+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right)+\mathcal{O}\left(\frac{1}{\psi^{1-\epsilon}}\right)= \\ & = -2+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right). \end{split} \end{equation} \subsection{} Once again we have (see (\ref{3.5}), (\ref{5.1}), (\ref{6.3}), (\ref{6.4})) \begin{equation} \label{6.9} \begin{split} & w_3=\sum_{p,q=-L+1}^L \bar{A}_p\bar{A}_q\frac 1Q\sum_{(t_\nu)}\overset{*}{Z}(t_\nu+\tau_p)\overset{*}{Z}(t_\nu+\tau_q)= \\ & = \sum_{p=-L+1}^L \bar{A}_p^2\left\{ 1+\mathcal{O}\left(\frac{1}{\psi}\right)\right\}+ \mathcal{O}\left\{\frac 1\psi\sum_{p,q=-L+1}^L |\bar{A}_p\bar{A}_q|\right\}= \\ & = 1+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right)+\mathcal{O}\left(\frac{L^2}{\psi}\right)= \\ & = 1+\mathcal{O}\left(\frac{1}{\psi^\epsilon}\right). \end{split} \end{equation} Consequently, we obtain from (\ref{6.6}) by (\ref{6.7}) -- (\ref{6.9}) the following estimate \begin{equation} \label{6.10} \Delta^2<\frac{A}{\{\psi(T)\}^\epsilon}. 
\end{equation} \section{Finalisation of the proof of the Theorem} Let $R(T)$ denote the number of points \begin{displaymath} \bar{t}_\nu\in [T,T+\bar{H}] \end{displaymath} that fulfill the inequality \begin{equation} \label{7.1} |\overset{*}{Z}(\bar{t}_\nu+\tau^0)-P_{2L}[\overset{*}{Z}(\bar{t}_\nu+\tau^0)]|\geq \frac{1}{\psi^{\epsilon/4}}. \end{equation} Our assertion is that \begin{equation} \label{7.2} R(T)=o(\bar{H}\ln T),\quad T\to\infty. \end{equation} Namely, if \begin{equation} \label{7.3} R(T)>B\bar{H}\ln T \end{equation} for every fixed $B>0$, then we have from (\ref{6.10}) by (\ref{6.5}), (\ref{7.1}), (\ref{7.3}) that \begin{displaymath} \begin{split} & \frac{A}{\psi^\epsilon}\geq \frac 1Q\sum_{(\bar{t}_\nu)}\{\overset{*}{Z}(\bar{t}_\nu+\tau^0)-P_{2L}[\overset{*}{Z}(\bar{t}_\nu+\tau^0)]\}^2> \\ & > \frac{B\bar{H}\ln T}{Q}\frac{1}{\psi^{\epsilon/2}}>\frac{C}{\psi^{\epsilon/2}},\ T\to\infty, \end{split} \end{displaymath} i. e. we have contradiction. Hence, our assertion (\ref{7.2}) is true. Consequently, if $Q_1(T)$ denotes the number of the points \begin{displaymath} t_\nu\in [T,T+\bar{H}] \end{displaymath} that fulfill the inequality \begin{equation} \label{7.4} |\overset{*}{Z}(t_\nu+\tau^0)-P_{2L}[\overset{*}{Z}(t_\nu+\tau^0)]|<\frac{1}{\psi^{\epsilon/4}}, \end{equation} then (see (\ref{3.1}), (\ref{7.2})) \begin{equation} \label{7.5} Q_1(T,\bar{H})\sim \frac{1}{2\pi}\bar{H}\ln\frac{T}{2\pi},\ T\to\infty, \end{equation} i. e. the assertion of Theorem is true if we use the following notation \begin{displaymath} t_\nu+\tau^0=h_\nu,\ \tau_p-\tau^0=\bar{\tau}_p,\ \tau_{1-p}-\tau^0=-\bar{\tau}_p. \end{displaymath}
\section{Introduction} The Symmetric Simple Exclusion Process (to which we sometimes refer simply as \textit{the Simple Exclusion}) is one of the simplest particle systems with local interactions. It can be considered as a toy model for the relaxation of a gas of particles and was introduced by Spitzer in \cite{cf:Spitzer}. Since then, it has been the object of a large number of studies by mathematicians and theoretical physicists, who investigated many of its properties: they studied the evolution rules for the particle density, tried to derive Fick's law from microscopic dynamics, and studied the motion of an individual tagged particle (see \cite{cf:Liggett2, cf:KL} for reviews on the subject and references therein). More recently \cite{cf:DSC0, cf:DSC, cf:Lac, cf:LeeYau, cf:Quas, cf:Yau} an interest has developed in the convergence to equilibrium of the process on a finite graph in terms of mixing time, which is the object of our study. \subsection{The Process} We consider ${\ensuremath{\mathbb Z}} _N:={\ensuremath{\mathbb Z}} /N{\ensuremath{\mathbb Z}} $, the discrete circle with $N$ sites, and place $k\in\{1,\dots,N-1\}$ particles on it, with \textit{at most} one particle per site. With a slight abuse of notation, we sometimes use elements of $\{1,\dots,N\} \subset {\ensuremath{\mathbb Z}} $ to refer to elements of ${\ensuremath{\mathbb Z}} _N$. \medskip The Simple Exclusion on ${\ensuremath{\mathbb Z}} _N$ is a dynamical evolution of the particle system which can be described informally as follows: each particle independently tries to jump on its neighbors with transition rates $p(x,x+1)=p(x,x-1)=1$, but a jump is cancelled if a particle tries to jump on a site which is already occupied (see Figure \ref{partisys} in Section \ref{fluctuat} for a graphical representation). \medskip More formally, our state-space is defined by \begin{equation} \Omega= \Omega_{N,k}= \left\{ \eta \in \{0,1\}^{{\ensuremath{\mathbb Z}} _N}\ | \ \sum_{x=1}^N \eta(x)= k \right\}. \end{equation} Given $\eta\in \Omega$, we define $\eta^x$ to be the configuration obtained by exchanging the contents of sites $x$ and $x+1$ \begin{equation} \begin{cases} \eta^x(x):=\eta(x+1),\\ \eta^x(x+1):=\eta(x),\\ \eta^x(y)=\eta(y), \quad \forall y\notin\{x,x+1\}. \end{cases} \end{equation} The exclusion process on ${\ensuremath{\mathbb Z}} _N$ with $k$ particles is the continuous time Markov process on $\Omega_{N,k}$ whose generator is given by \begin{equation}\label{crading} (\mathcal L f)(\eta):=\sum_{x\in {\ensuremath{\mathbb Z}} _N} \left( f(\eta^x)-f(\eta)\right). \end{equation} The unique probability measure left invariant by $\mathcal L$ is the uniform probability measure on $\Omega$, which we denote by $\mu$. Given $\chi \in \Omega,$ we let $(\eta^{\chi}_t)_{t\ge 0}$ denote the trajectory of the Markov chain starting from $\chi$. \medskip We want to know how long we must wait to reach the equilibrium state of the particle system, for which all configurations are equally likely. The distance to equilibrium is measured in terms of the total variation distance. If $\alpha$ and $\beta$ are two probability measures on $\Omega$, the total variation distance between $\alpha$ and $\beta$ is defined to be \begin{equation}\label{tv} \| \alpha -\beta\|_{TV}:=\frac{1}{2}\sum_{\omega\in \Omega} |\alpha(\omega)-\beta(\omega)|=\sum_{\omega\in \Omega} (\alpha(\omega)-\beta(\omega))_+, \end{equation} where $x_+=\max(x,0)$ is the positive part of $x$. It measures how well one can couple two variables with law $\alpha$ and $\beta$.
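The dynamics just described is easy to simulate directly from its informal description. The following short Python sketch is purely illustrative and plays no role in the argument (all names are ours): it attempts nearest-neighbour jumps at rate one per particle and per direction, and cancels any attempt that targets an occupied site.
\begin{verbatim}
import random

def simulate_ssep(N, k, t_max, seed=0):
    """Run the exclusion process on Z_N with k particles up to time t_max.

    Each particle attempts to jump to each of its two neighbours at rate 1;
    an attempted jump onto an occupied site is cancelled.  Returns the list
    of 0/1 occupation variables at time t_max.
    """
    rng = random.Random(seed)
    eta = [1] * k + [0] * (N - k)      # an arbitrary initial condition
    positions = [x for x in range(N) if eta[x] == 1]
    t = 0.0
    while True:
        t += rng.expovariate(2 * k)    # total attempt rate is 2k
        if t > t_max:
            return eta
        i = rng.randrange(k)           # pick a particle ...
        step = rng.choice((-1, 1))     # ... and a direction, uniformly
        x, y = positions[i], (positions[i] + step) % N
        if eta[y] == 0:                # exclusion rule
            eta[x], eta[y] = 0, 1
            positions[i] = y
\end{verbatim}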
We define the worst-case distance to equilibrium at time $t>0$ as follows \begin{equation}\label{dat} d(t)=d^{N,k}(t):=\max_{\chi\in \Omega_{N,k}} \| P^\chi_t-\mu\|_{TV}. \end{equation} Similarly we define the typical distance from equilibrium at time $t>0$ as \begin{equation} {\bf d} (t)={\bf d}^{N,k}(t):= \frac{1}{\# \Omega_{N,k}} \sum_{\chi\in \Omega_{N,k}} \| P^\chi_t-\mu\|_{TV}. \end{equation} For a given $\gep>0$ we define the $\gep$-mixing-time (or simply the mixing time when $\gep=1/4$) to be the time needed for the system to be at distance $\gep$ from equilibrium \begin{equation} T_{\rm mix}^{N,k}(\gep):=\inf\{t\ge 0\ | \ d^{N,k}(t)\le \gep\}. \end{equation} Let us mention that the convergence to equilibrium has also been studied in terms of asymptotic rates: it has been known for a long time (see e.g.\ \cite[Corollary 12.6]{cf:LPW}) that for any reversible Markov chain the limit \begin{equation} \lim_{t\to \infty}t^{-1}\log d(t) = \lim_{t\to \infty} t^{-1}\log {\bf d} (t)=-\lambda_1 \end{equation} exists, where $\lambda_1>0$ is the smallest nonzero eigenvalue of $-\mathcal L$, usually referred to as the spectral gap. Note that the knowledge of the spectral gap also gives information on $d(t)$ for finite $t$, as we have (cf.\ \cite[Theorem 12.3]{cf:LPW}) \begin{equation}\label{sgapin} \frac{1}{2} e^{-\lambda_1 t} \le d(t)\le |\Omega|\, e^{-\lambda_1 t}. \end{equation} \medskip The exclusion process can in fact be defined on an arbitrary graph and its mixing properties have been the object of a large number of works. Let us mention a few of them here. Let us start with the mean-field case: in \cite{cf:DS2}, the study of the exclusion on the complete graph with $N/2$ particles is reduced to the study of a birth and death chain and a sharp asymptotic for the mixing time is given using a purely algebraic approach (see also \cite{cf:LL} for a probabilistic approach of the problem for arbitrary $k$). \medskip The problem on the lattice is much more delicate. Let us mention a few results that were obtained on the torus $({\ensuremath{\mathbb Z}} _N)^d$: in \cite{cf:Quas} (and also independently in \cite{cf:DSC0}), comparisons with the mean-field model were used to prove that $\lambda_1^{-1}=O(N^{2})$, and thus via \eqref{sgapin} that there exists a constant $C_d$ such that \begin{equation}\label{gapbound} T_{\rm mix}\le C_d N^2 \log \binom{N^d}{k}. \end{equation} \medskip In \cite{cf:DSC, cf:LeeYau, cf:Yau}, the related problem of the \textit{log-Sobolev constant} for the process was studied. In particular, in \cite{cf:Yau}, a sharp bound (up to a multiplicative constant) on the log-Sobolev constant was proved for the exclusion process on the grid, which allowed one to improve \eqref{gapbound} into \begin{equation} T_{\rm mix}\le C_d N^2 \log\log \binom{N^d}{k}. \end{equation} In \cite{cf:Mor}, using the \textit{chameleon process}, this upper bound is improved in the case of small $k$ by showing that \begin{equation} T_{\rm mix}\le C (\log d) N^2 \log k, \end{equation} (see also \cite{cf:RO} where the technique is extended to obtain estimates on the mixing time for arbitrary graphs in terms of the mixing time of a single particle). \medskip In another direction: in \cite{cf:CLR}, it is shown that the spectral gap for the simple-exclusion on any graph is equal to that of the underlying simple random walk (e.g.\ in our case $\lambda_1=2(1-\cos(2\pi/N))$).
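For very small $N$ and $k$, the quantities $d(t)$, $T_{\rm mix}$ and $\lambda_1$ defined above can be computed exactly by exponentiating the generator, which is a convenient sanity check on the definitions and on the value of $\lambda_1$ just quoted. A minimal sketch (ours, for illustration only), assuming \texttt{numpy} and \texttt{scipy} are available:
\begin{verbatim}
import itertools
import numpy as np
from scipy.linalg import expm

def worst_case_distance(N, k, t):
    """Exact d(t) for the exclusion process on Z_N with k particles (tiny N only)."""
    states = [s for s in itertools.product((0, 1), repeat=N) if sum(s) == k]
    index = {s: i for i, s in enumerate(states)}
    L = np.zeros((len(states), len(states)))
    for s in states:
        for x in range(N):               # exchange the contents of x and x+1
            swapped = list(s)
            swapped[x], swapped[(x + 1) % N] = s[(x + 1) % N], s[x]
            L[index[s], index[tuple(swapped)]] += 1.0
            L[index[s], index[s]] -= 1.0
    P = expm(t * L)                      # transition kernel at time t
    mu = 1.0 / len(states)               # uniform equilibrium measure
    return max(0.5 * np.abs(row - mu).sum() for row in P)

# L is symmetric here, so the spectral gap (smallest nonzero eigenvalue of -L)
# can be read off from np.linalg.eigvalsh(-L) and compared with
# 2 * (1 - cos(2 * pi / N)), e.g. for N = 6 and k = 3.
\end{verbatim}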
Finally concerning the case of dimension $1$: in \cite{cf:Wilson}, the mixing time of the exclusion process on the segment is proved to be larger than $(2\pi^2)^{-1}N^2 \log k$ and smaller than $(\pi^2)^{-1}N^2 \log k$, with the conjecture that the lower bound is sharp. This conjecture was proved in \cite{cf:Lac}. \subsection{The main result} The first result of this paper is a sharp asymptotic for the mixing time of the exclusion process on the circle ${\ensuremath{\mathbb Z}} _N$. For a fixed $\gep\in(0,1)$, when $N$ and $k$ goes to infinity we are able to identify the asymptotic behavior of $T_{\rm mix}(\gep)$. We obtain that when $k\le N/2$ (which by symmetry is not a restriction) $$T_{\rm mix}(\gep)=\frac{N^2}{8\pi^2}(\log k)(1+o(1)).$$ Note that here the dependence in $\gep$ is not present in the asymptotic equivalent. This means that on a time window which is $o(N^2\log k)$ the distance to equilibrium drops abruptly from $1$ to $0$. This sudden collapse to equilibrium for a Markov chain was first observed by Diaconis and Shahshahani \cite{cf:DS} in the case of the (mean-field) transposition shuffle (see also \cite{cf:Aldous} for the random walk on the hypercube). The term cutoff itself was coined in \cite{cf:AD} and the phenomenon has since been proved to hold for a diversity of Markov chains (see e.g.\ \cite{cf:LS, cf:LS3} for some recent celebrated papers proving cutoffs). It is believed that cutoff holds with some generality for reversible Markov chains as soon as the mixing time is much larger than the inverse of the spectral gap, but this remains a very challenging conjecture (see \cite[Chapter 18]{cf:LPW} for an introduction to cutoff and some counterexamples and \cite{cf:bdcutoff, cf:Chen, cf:Basu} for recent progress on that conjecture). \medskip A natural question is then of course: ``On what time scale does $d(t)$ decrease from, say, $999/1000$ to $1/1000$ ?'' . This is what is called the cutoff window. We are able to show it is equal to $N^2$. Let us mention that, this is result is, to our knowledge, the first sharp derivation of a cutoff window for a lattice interacting particle system. \begin{theorem}\label{mainres} For any sequence $k(N)$ satisfying $k(N)\le N/2$ and tending to infinity. We have for every $\gep\in (0,1)$ \begin{equation}\label{cutoff} \lim_{N\to \infty} \frac{8\pi^2T_{\rm mix}^{N,k}(\gep)}{N^2\log k}=1. \end{equation} More precisely we have \begin{equation}\label{damix}\begin{split} \lim_{s\to \infty} &\limsup_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+sN^2 \right)=0,\\ \lim_{s\to -\infty} &\liminf_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+sN^2 \right)=1 \end{split}\end{equation} and the window is optimal in the sense that for any $u\in {\ensuremath{\mathbb R}} $ \begin{equation}\label{damix2}\begin{split} &\limsup_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right)<1,\\ &\liminf_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right) >0. \end{split}\end{equation} \end{theorem} \begin{rem} The result above can be reformulated in the following manner: \begin{equation}\begin{cases} &T_{\rm mix}= T_{\rm mix}(1/4)= (8\pi^2)^{-1}N^2\log k+O(N^2),\\ &\forall \gep\in (0,1), \quad \limsup_{N\to \infty} \frac{|T_{\rm mix}(\gep)- T_{\rm mix}|}{N^2}<\infty,\\ &\lim_{\gep \to 0} \liminf_{N \to \infty} \frac{T_{\rm mix}(\gep)- T_{\rm mix}}{N^2}= \lim_{\gep \to 0} \liminf_{N \to \infty} \frac{T_{\rm mix}-T_{\rm mix}(1-\gep)}{N^2}=\infty. 
\end{cases}\end{equation} The second line states that the cutoff window is at most $N^2$ while the third implies not only that this is sharp, but also that one the time scale $N^2$, the ``cutoff profile'' has infinite support in both directions. \end{rem} \begin{rem} Our result does not cover the case of a bounded number of particles. In this case there is no cutoff and the mixing time is of order $N^2$ for every $\gep$ with a pre-factor which depends on $\gep$ (a behavior very similar to the random-walk: case $k=1$). \end{rem} We also show that on the other-hand that starting from a typical configuration, the relaxation to equilibrium is not abrupt and occurs on the time-scale $N^2$. \begin{theorem}\label{secondres} For any sequence $k(N)$ satisfying $k(N)\le N/2$ and tending to infinity, we have for all $u>0$ \begin{equation} 0< \liminf_{N\to \infty} {\bf d}^{N,k}(N^2u) \le \limsup_{N\to \infty} {\bf d}(N^2u)<1 \end{equation} and \begin{equation} \lim_{s\to \infty} \limsup_{N\to \infty} {\bf d}^{N,k}(N^2s)=0. \end{equation} \end{theorem} \begin{rem} Note that we will not prove that \begin{equation}\label{liminf} \lim_{s\to 0}\liminf_{N\to \infty} {\bf d}(N^2s)=1, \end{equation} which would complete the picture by showing that the system does not mix at all before the $N^2$ time-scale. However we point out to the interested reader that such a result can be obtained combining ingredients of Section \ref{lowerbounds} together with \cite[Lemma 3.1]{cf:Lac2} which asserts that the fluctuations of $a_1(\eta)$ defined in \eqref{defa1} are asymptotically Gaussian. The convergence of ${\bf d}(N^2\cdot)$ when $N$ tends to infinity remains an open question. \end{rem} \subsection{The cutoff for the exclusion on the segment} Let us, in this section, briefly sketch the proof or at least recall the ingredients used in \cite{cf:Lac} to derive the cutoff for the exclusion on the segment $ \llbracket 1,N \rrbracket$. The state-space of particle configuration on the segment comes with a natural order \begin{equation}\label{order} \eta\le \eta' \quad \Leftrightarrow \quad \forall x\in \llbracket 1,N \rrbracket \quad \sum_{y=1}^x \eta_y \le \sum_{y=1}^x \eta'_y. \end{equation} We set $\xi_x(\eta):= \sum_{y=1}^x \eta_y.$ It turns out that not only this order is preserved by the dynamics in a certain sense (see \cite{cf:Wilson} but also \cite{cf:Rost}), and but it has additional properties: if $\wedge$ denotes the maximal configuration for \eqref{order} then $P_t(\wedge,\cdot)$ is an increasing function (for any $t$); we also have positive correlation of increasing events (a.k.a \ the FKG inequality after \cite{cf:FKG}). \medskip These monotonicity properties are first used to show that after time of $(1+\delta)(2\pi)^{-1}N^2\log k$, starting the dynamics from $\wedge$, we can couple a finite dimensional (\textit{i.e.} whose dimension remains bounded when $N$ grows) projection of $(\xi_x(\eta_t))_{x\in \llbracket 1,N \rrbracket}$ together with the corresponding equilibrium distribution. \medskip The monotonicity is then used again to check that the Peres-Winckler censorship inequality \cite[Theorem 1.1]{cf:PW} is valid in our context. The latter result establishes that, if one starts from the maximal configuration for the order described in \eqref{order}, ignoring some of the updates in the dynamics only makes the mixing slower (as shown in \cite{cf:Hol}, this can fail to be true if there is no monotonicity). 
We use this statement to show that the system mixes in a time much smaller than $(2\pi)^{-1} N^2\log k$ once a finite projection is close to equilibrium. This method establishes a sharp upper bound (at first order) for the mixing time starting from $\wedge$ and some additional work is necessary to show that this is indeed the worse initial condition (we refer to the introduction \cite{cf:Lac} for more details). \subsection{Differences between the segment and the circle} In the view of the previous section, the proof in \cite{cf:Lac} for the mixing time for the exclusion on the segment heavily relies on monotonicity arguments in every step of the process. The drawback of this approach is that it is not very robust, and cannot be used for either higher dimension graphs (for instance $\{1,\dots,N\}^d$ with either free or periodic boundary condition). It even breaks down completely if one allows jump between site $1$ and $N$. \medskip With this in mind, our idea when studying the exclusion on the circle is also to develop an approach to the problem which is more flexible, and could provide a step towards the rigorous identification of the cutoff threshold in higher dimensions (see Section \ref{higherdef} for conjectures and rigorous lower-bounds). This goal is only partially achieved as, even if we do not require monotonicity, a part of our proof relies on the interface representation of the process (see Section \ref{fluctuat}) which is a purely one-dimensional feature. However let us mention that a $d$ dimensional generalization of Proposition \ref{smallfluctu} can be shown to remain valid for $d\ge 2$. A missing ingredient in higher dimension is thus a coupling which allows to couple particle configuration with typical fluctuation with equilibrium in a time of order $N^2$. \medskip Another positive point is that by relying much less on monotonicity, we are able to prove statements about the mixing time starting from an arbitrary position (cf. Theorem \ref{secondres}) instead of focusing only on the extremal ones. \medskip Finally note that the method developed in this paper gives more precise results than the one in \cite{cf:Lac} as we identify exactly the width of the cutoff window (and it also extends to the segment). However, we could not extract from it the asymptotic mixing time for the adjacent transposition shuffle, which seem to require novel ideas. \begin{rem} In \cite{cf:Lac2}, by combining the technique of the present paper with some aditionnal new ideas, the author improved Theorem \ref{mainres} by describing the full cutoff profile, that is, identified the limit of $d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right)$. Proposition \ref{smallfluctu} as well as the multiscale analysis used in Section \ref{multis} play a crucial role in the proof. \end{rem} \subsection{Organization of the paper} In Section \ref{lowerbounds} we prove the part of the results which corresponds to lower-bounds for the distance to equilibrium, that is to say, the first lines of \eqref{damix} and \eqref{damix2}. The proof of this statement is very similar to the one proposed by Wilson in \cite{cf:Wilson}, the only significant difference is that we have to work directly with the particle configuration instead of the height-function. Doing things in this manner underlines that the proof in fact does not rely much on the dimension (see Section \ref{higherdef}). 
While the proof does not present much novelty, we prefer to mention it in full as it is relatively short and it improves the best existing bound in the literature (see \cite{cf:Mor}). \medskip The main novelty in the paper is the strategy to prove upper-bound results (second lines of \eqref{damix} and \eqref{damix2}). In Section \ref{pubdecompo} we explain how the proof is decomposed. In Section \ref{arratia}, we use a comparison inequality of Liggett \cite{cf:Lig77} to control the (random) fluctuations of the local density of particles after time $\frac{N^2}{8\pi^2}\log k$. Finally we conclude by showing that configurations which have reasonable fluctuations couple with equilibrium within time $O(N^2)$, using the interface representation for the particle system and a coupling based on the graphical construction. The construction is detailed in Section \ref{fluctuat}, and the proof is performed using a multi-scale analysis in Section \ref{multis}. \section{Lower bound on the mixing time}\label{lowerbounds} \subsection{The statement} The aim of this Section is to prove some lower bounds on the distance to equilibrium. Following the method of \cite[Theorem 4]{cf:Wilson}, we achieve such a bound by controlling the first two moments of the first Fourier coefficient of $\eta_t$. \begin{proposition}\label{dastate} For any sequence $k(N)$ satisfying $k(N)\le N/2$ and tending to infinity, we have \begin{equation}\label{laslande} \lim_{s\to -\infty} \lim_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+sN^2 \right)=1, \end{equation} and for any $u\in {\ensuremath{\mathbb R}} $ \begin{equation}\label{wiltord} \liminf_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right) >0. \end{equation} Moreover we have for any $u>0$ \begin{equation}\label{meanstuf} \liminf_{N\to \infty} {\bf d}^{N,k}(N^2u)>0. \end{equation} \end{proposition} \subsection{Relaxation of the ``first'' Fourier coefficient} The main idea is to look at the ``first'' Fourier coefficient of $\eta_t$ (a coefficient corresponding to the smallest nonzero eigenvalue of the discrete Laplacian on ${\ensuremath{\mathbb Z}} _N$). For $\eta\in \Omega_{N,k}$, we define \begin{equation}\label{defa1} a_1(\eta):=\sum_{x\in {\ensuremath{\mathbb Z}} _N} \eta(x) \cos \big(\frac{2\pi x}{N}\big). \end{equation} It is an eigenfunction of the generator $\mathcal L$ (the reason for this being that each particle performs a diffusion for which $\cos \big(\frac{2\pi x}{N}\big)$ is an eigenfunction), associated with the eigenvalue $-\lambda_1$ where \begin{equation} \lambda_1:=2\left(1-\cos(2\pi/N)\right). \end{equation} \begin{lemma} The function $a_1$ is an eigenfunction of the generator $\mathcal L$ with eigenvalue $-\lambda_1$, and as a consequence, for any initial condition $\chi\in \Omega$ \begin{equation} M_t:=e^{\lambda_1 t}a_1(\eta_t^\chi) \end{equation} is a martingale for the filtration ${\ensuremath{\mathcal F}} $ defined by $${\ensuremath{\mathcal F}} _t:=\sigma((\eta_s)_{s\le t} ).$$ In particular we have \begin{equation} {\ensuremath{\mathbb E}} \left[ a_1(\eta_t^\chi)\right]=e^{-t\lambda_1}a_1(\chi). \end{equation} Furthermore, for all $t\ge 0$, \begin{equation}\label{lavar} {\rm Var} \left[ a_1(\eta_t^\chi)\right]\le 2k. \end{equation} \end{lemma} \begin{proof} Let us introduce the notation \begin{equation*} \nabla f(x)=f(x+1)-f(x) \quad \text{and} \quad \Delta f(x)=f(x+1)+f(x-1)-2f(x), \end{equation*} and set $\overline \cos (x):=\cos \big(\frac{2\pi x}{N}\big)$.
We have \begin{multline} \mathcal L a_1(\eta)= \sum_{x\in {\ensuremath{\mathbb Z}} _N} \left(a_1(\eta^x)-a_1(\eta)\right)= -\sum_{x\in {\ensuremath{\mathbb Z}} _N} \nabla \eta(x)\nabla \overline \cos (x) \\=\sum_{x\in {\ensuremath{\mathbb Z}} _N} \eta(x)\Delta \overline \cos (x)= -\lambda_1 a_1(\eta), \end{multline} where the second equality comes from reindexing the sum and the last one from the identity \begin{equation} \Delta \overline \cos (x) =-\lambda_1 \overline \cos (x). \end{equation} From the Markov property, we have for every positive $t$, \begin{equation} \partial_s {\ensuremath{\mathbb E}} [M_{t+s} \ | \ {\ensuremath{\mathcal F}} _t ] |_{s=0}= \lambda_1 M_{t}+ e^{t\lambda_1}({\ensuremath{\mathcal L}} a_1)(\eta^\chi_t)=0, \end{equation} which implies that $M$ is a martingale. In particular we have \begin{equation}\label{esp} {\ensuremath{\mathbb E}} \left[a_1(\eta^{\chi}_t)\right]= e^{-\lambda_1 t}a_1(\chi). \end{equation} Now let us estimate the variance of $M_t$: for the process with $k$ particles, the total jump rate is at most $2k$ (each of the $k$ particles can jump in at most $2$ directions, independently, with rate one). If a transition occurs at time $s$, the value of $M_s$ varies at most by an amount $$e^{\lambda_1 s}\max_{x\in {\ensuremath{\mathbb Z}} _N} |{\overline \cos} (x)-{\overline \cos} (x+1)|\le e^{\lambda_1 s}\frac{2\pi}{N}.$$ With this in mind we can obtain a bound on the bracket of $M$ (that is: the predictable process such that $M^2_t-\langle M\rangle_t$ is a martingale) \begin{equation} \langle M\rangle_t \le 2k\int_0^t e^{2\lambda_1 s}\left(\frac{2\pi}{N}\right)^2\dd s. \end{equation} Then using the fact that ${\rm Var}(M_t)= {\ensuremath{\mathbb E}} \left[ \langle M\rangle_t \right]$, we have, for $N$ sufficiently large, for any $\chi\in \Omega_{N,k}$ and any $t\ge 0$ \begin{equation}\label{varance} {\rm Var} \left[ a_1(\eta_t^\chi)\right]=e^{-2\lambda_1 t} {\ensuremath{\mathbb E}} \left[ \langle M\rangle_t \right] \le 2k \int_0^t e^{2\lambda_1 (s-t)} \left(\frac{2\pi}{N}\right)^2\dd s\le \frac{4\pi^2k}{N^2\lambda_1}\le 2k, \end{equation} where the last inequality comes from the fact that $\lambda_1\sim 4\pi^2 N^{-2}$. \end{proof} At equilibrium (\textit{i.e.} under the distribution $\mu$) $a_1(\eta)$ has mean zero and typical fluctuations of order $\sqrt{k}$. The equilibrium variance can either be computed directly or one can use \eqref{lavar} for $t\to \infty$ to obtain $${\rm Var}_{\mu}\left(a_1(\eta) \right)\le 2k.$$ From \eqref{lavar}, if ${\ensuremath{\mathbb E}} \left[a_1(\eta_t^\chi)\right] $ is much larger than $\sqrt{k}$ then $a_1(\eta_t^\chi)$ is much larger than $\sqrt{k}$ with large probability, which implies that $\| P^{\chi}_t-\mu\|_{TV}$ has to be large. We need to use this reasoning for a $\chi$ which maximizes $a_1$. \subsection{Proof of Proposition \ref{dastate}} Using \cite[Proposition 7.8]{cf:LPW} (obtained from the Cauchy-Schwarz inequality) and the estimates \eqref{esp}-\eqref{varance}, we have \begin{equation}\begin{split}\label{kit} \|P^{\chi}_t-\mu\|_{TV}&\ge \frac{\left({\ensuremath{\mathbb E}} \left[a_1(\eta_t^{\chi})\right]\right)^{2}}{\left({\ensuremath{\mathbb E}} \left[a_1(\eta_t^{\chi})\right]\right)^{2}+ 2\left[{\rm Var} \left( a_1(\eta_t^{\chi})\right)+{\rm Var}_{\mu} \left( a_1(\eta) \right) \right]}\\ &\ge\frac{1}{1+ 8k \exp(2\lambda_1 t) a_1(\chi)^{-2}}. \end{split}\end{equation} We take $\chi=\chi_0$ to be the configuration which maximizes $a_1$:
\begin{equation} \chi_0(x):=\begin{cases} \mathbf{1}_{\{x\in \{-p,\dots, p\}\}} \text{ if } k=2p+1,\\ \mathbf{1}_{\{x\in \{-p+1,\dots, p\}\}} \text{ if } k=2p. \end{cases} \end{equation} It is rather straightforward to check that for any $N\ge 2$, $k\le N/2$, we have $a_1(\chi_0)\ge k/2$. Thus using the above inequality for $t=t_N:= uN^2+ \frac{N^2}{8\pi^2}\log k$ ($u\in {\ensuremath{\mathbb R}} $) the reader can check that \begin{equation} \liminf_{N\to \infty} \|P^{\chi_0}_{t_N}-\mu\|_{TV}\ge \lim_{N\to \infty} \frac{1}{1+32 k^{-1} \exp(2\lambda_1 t_N)}= \frac{1}{1+32e^{8\pi^2 u}}, \end{equation} which implies both \eqref{laslande} and \eqref{wiltord}. To prove \eqref{meanstuf} we need to use \eqref{kit} for the set $$A^N_{\delta}:=\{ \chi\in \Omega_{N,k} \ | \ a_1(\chi)\ge \delta \sqrt{k}\}.$$ We have \begin{equation} {\bf d}(sN^2)\ge \frac{\mu( A^N_{\delta})}{1+ 8 \delta^{-2}\exp(2\lambda_1N^2 s) }. \end{equation} To conclude the proof, it is sufficient to prove that $\liminf_{N\to \infty} \mu(A^N_{\delta})>0$ for some small $\delta>0$. This can be done e.g. by showing that $\mu\left[ (a_1(\eta))^4\right]\le Ck^2$ (which is left as an exercise to the reader). \hfill $\quad \Box$ \bigskip \subsection{The exclusion in higher dimensions} \label{higherdef} Let us briefly present in this section a generalization of Proposition \ref{dastate} for the exclusion process in higher dimensions $d\ge 2$. \medskip For $N\in {\ensuremath{\mathbb N}} $ and $k\le N^d/2$ we define the state-space of particle configurations as \begin{equation} \Omega^d_{N,k}:=\left\{ \eta\in \{0,1\}^{{\ensuremath{\mathbb Z}} _N^d} \ | \sum_{x\in {\ensuremath{\mathbb Z}} _N^d} \eta(x)=k \right\}. \end{equation} Given $x\sim y$ a pair of neighbors on the torus ${\ensuremath{\mathbb Z}} _N^d$, we set \begin{equation} \eta^{x,y}:=\begin{cases} \eta^{x,y}(x)=\eta(y),\\ \eta^{x,y}(y)=\eta(x),\\ \eta^{x,y}(z)=\eta(z), \quad \text{ for } z\notin \{x,y\}, \end{cases} \end{equation} and define the generator by \begin{equation} \mathcal L f(\eta):= \sumtwo{x, y\in {\ensuremath{\mathbb Z}} _N^d}{x\sim y} \left( f(\eta^{x,y})-f(\eta)\right). \end{equation} We let $d^{N,k,d}(t)$ denote the distance to equilibrium at time $t$ of the chain with generator ${\ensuremath{\mathcal L}} $ (defined as in \eqref{dat}). Then we can adapt the proof of Proposition \ref{dastate} and show the following. \begin{proposition} For any sequence $k(N)$ satisfying $k(N)\le N^d/2$ and tending to infinity, we have \begin{equation} \lim_{u\to \infty} \lim_{N\to \infty} d^{N,k,d} \left( (8\pi^2)^{-1}N^2\log k-uN^2 \right)=1, \end{equation} and for any $u\in {\ensuremath{\mathbb R}} $ \begin{equation} \liminf_{N\to \infty} d^{N,k,d} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right) >0. \end{equation} \end{proposition} \begin{rem} Note that the result remains valid if the torus is replaced by the grid (\textit{i.e.} if we drop the periodic boundary condition), in which case $(8\pi^2)^{-1}$ has to be replaced by $(2\pi^2)^{-1}$. In view of this result, and of the content of the next section, it is natural to conjecture that $(8\pi^2)^{-1}N^2\log k$ is the mixing time of the exclusion process on the torus. \end{rem} \begin{proof} The proof is almost exactly the same. The eigenfunction which one has to consider is \begin{equation} a_1(\eta):=\sum_{x\in {\ensuremath{\mathbb Z}} ^d_N} \eta(x) \cos \left(\frac{2\pi x_1}{N}\right), \end{equation} where $x_1\in {\ensuremath{\mathbb Z}} _N$ is the first coordinate of $x$.
It is not difficult to check that if $\chi_0$ is a maximizer of $a_1$ over $\Omega^d_{N,k}$ (there might be many of them), then $a_1(\chi_0)$ is larger than $k/2$. \end{proof} \section{Upper bound on the mixing time} \label{pubdecompo} \subsection{Decomposition of the proof} To complete the proof of the main result, we have to prove the following. \begin{proposition}\label{dastate2} For any sequence $k(N)$ satisfying $k(N)\le N/2$ and tending to infinity, and for any $u\in {\ensuremath{\mathbb R}} $, we have \begin{itemize} \item[(i)] \begin{equation}\label{mikou} \lim_{s \to \infty} \limsup_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+sN^2 \right)=0, \end{equation} \item[(ii)] \begin{equation}\label{benarbia} \limsup_{N\to \infty} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right) <1. \end{equation} \item[(iii)] \begin{equation}\label{mikou2} \lim_{s\to \infty} \limsup_{N\to \infty} {\bf d}(sN^2)=0, \end{equation} \item[(iv)] \begin{equation}\label{benarbia2} \limsup_{N\to \infty} {\bf d}(uN^2)<1. \end{equation} \end{itemize} \end{proposition} The proof of this statement is much more involved than that of Proposition \ref{dastate} and relies on an explicit coupling of $P^{\chi}_t$ and the equilibrium measure $\mu$ for an arbitrary $\chi\in \Omega$, which requires two steps. \medskip In a first step we want to show that after time $t_0=(8\pi^2)^{-1}N^2\log k$, or even shortly before that time, the density of particles is close to $k/N$ everywhere on the torus and that the deviations from it are not larger than the equilibrium fluctuations (which are of order $\sqrt{k}$). This part of the proof relies on comparison inequalities developed by Liggett \cite{cf:Lig77}, which allow us to replace the exclusion process by $k$ independent random walks. \medskip In a second step, we construct a dynamical coupling of the process starting from a configuration $\chi$ which has fluctuations of order $\sqrt{k}$ with one starting from equilibrium, using the height-function representation. We show that the two height functions couple within a time $O(N^2)$, which is what we need to conclude. The construction of the coupling and heuristic explanations are given in Section \ref{fluctuat}, while the proof is performed in Section \ref{multis}. \subsection{Control of the fluctuation of the particle density} To present the main proposition of the first step we need to introduce some notation. Given $x\ne y$ in ${\ensuremath{\mathbb Z}} _N$, we define the interval $[x,y]$ to be the smallest (for the inclusion) subset $I$ of ${\ensuremath{\mathbb Z}} _N$ which contains $x$ and which satisfies \begin{equation}\label{theinterval} \forall z\in I\setminus\{y\},\ z+1\in I. \end{equation} If $f$ is a function defined on ${\ensuremath{\mathbb Z}} _N$ we use the notation \begin{equation} \sum_{z=x}^y f(z):=\sum_{z\in [x,y]} f(z). \end{equation} We define the \textsl{length} of the interval (which we write $\#[x,y]$) to be the number of points in it (e.g.\ it is equal to $y-x+1$ if $1\le x\le y\le N$). We will prove the following result: given $A\in {\ensuremath{\mathbb R}} $ we set \begin{equation} t_A=\frac{N^2}{8\pi^2} \log k- A N^2.
\end{equation} \begin{proposition}\label{smallfluctu} There exists a constant $c$ such that, for all $A\in {\ensuremath{\mathbb R}} $, for all $N$ sufficiently large (depending on $A$), for all initial conditions $\chi\in \Omega_{N,k}$ and for all $s\ge 0$, \begin{equation}\label{fluqueton} {\ensuremath{\mathbb P}} \left[\exists x,y \in {\ensuremath{\mathbb Z}} _N, \ \left| \sum_{z=x+1}^y \left(\eta^\chi_{t_A}(z)-\frac{k}{N}\right)\right|\ge \left(s+8e^{4\pi^2A}\right) \sqrt{k} \right] \le 2\exp\left( -c s^2\right). \end{equation} \end{proposition} \subsection{Coupling with small fluctuations} In the second step of our proof, we show that starting from a configuration with small fluctuations we can relax to equilibrium within time $O(N^2)$. Set \begin{equation}\label{etagro} \mathcal G_s := \left\{ \eta\in \Omega \ | \ \forall x, y \in {\ensuremath{\mathbb Z}} _N,\ \left|\sum_{z=x+1}^y \left(\eta(z)-\frac{k}{N}\right)\right|\le s \sqrt{k}\right\}. \end{equation} The following proposition establishes this diffusive relaxation to equilibrium in two ways: first it shows that one gets $\gep$-close to equilibrium within a time $C(s,\gep)N^2$, but also that on the scale $N^2$ the distance becomes immediately bounded away from one for positive times. \begin{proposition}\label{csdf} For any $s\ge1$, given $\gep>0$ there exists a constant $C(s,\gep)$ such that \begin{equation} \forall \chi \in \mathcal G_s,\ \| P^{\chi}_{C(s,\gep) N^2}-\mu \|_{TV}\le \gep. \end{equation} For any $s,u>0$, there exists $c(s,u)>0$ such that \begin{equation}\label{gramicho} \forall \chi \in \mathcal G_s,\ \| P^{\chi}_{uN^2}-\mu \|_{TV}\le 1-c(s,u). \end{equation} \end{proposition} Now we show that Propositions \ref{smallfluctu} and \ref{csdf} are sufficient to prove Proposition \ref{dastate2}. \begin{proof}[Proof of Proposition \ref{dastate2}] We use the semi-group property at time $t_A$. We have for any $\chi\in \Omega$ \begin{equation} P_{t_A+CN^2}^\chi(\cdot)=\sum_{\chi'\in \Omega} P_{t_A}^{\chi}(\chi')P^{\chi'}_{CN^2}\left(\cdot \right). \end{equation} Hence, using the triangle inequality, we have for any event $\mathcal G$ \begin{multline}\label{croot} \| P_{t_A+CN^2}^\chi-\mu\|\le \sum_{\chi'\in \Omega}P_{t_A}^{\chi}(\chi') \| P^{\chi'}_{CN^2}-\mu \| \le P_{t_A}^{\chi}(\mathcal G^c)+P_{t_A}^{\chi}(\mathcal G)\max_{\chi'\in \mathcal G}\| P^{\chi'}_{CN^2}-\mu \|. \end{multline} We can now start the proof of \eqref{mikou}. According to Proposition \ref{smallfluctu}, if $s$ is sufficiently large, we have \begin{equation} P_{t_0}^{\chi}((\mathcal G_s)^c)\le \gep/2. \end{equation} Fixing such an $s$ (which we denote by $s(\gep)$), according to Proposition \ref{csdf} we can find a constant $C(\gep)$ such that \begin{equation} \max_{\chi'\in \mathcal G_{s(\gep)}}\| P^{\chi'}_{C(\gep)N^2}-\mu \|\le \gep/2, \end{equation} which is enough to conclude the proof, using \eqref{croot} with $A=0$, $\mathcal G=\mathcal G_{s(\gep)}$, and $C=C(\gep)$.\\ To prove \eqref{mikou2}, we note that for any ${\ensuremath{\mathcal G}} $ \begin{equation}\label{croot2} {\bf d}^{N,k}(uN^2)\le \mu(\mathcal G^c)+ \mu(\mathcal G) \max_{\chi'\in \mathcal G}\| P^{\chi'}_{uN^2}-\mu \|, \end{equation} and we can conclude similarly, using Proposition \ref{smallfluctu} to find $s(\gep)$ such that $\mu({\ensuremath{\mathcal G}} ^c_{s(\gep)})<\gep/2$ for all $N$, and then taking $u$ large enough. \medskip We now prove \eqref{benarbia}. For a fixed $u<-1$, for $A=1-u$, we can find $s(u)$ sufficiently large such that \begin{equation} P_{t_A}^{\chi}(\mathcal G_{s(u)})\ge \frac 1 2.
\end{equation} Using \eqref{gramicho} we obtain that \begin{equation} \max_{\chi \in \mathcal G_{s(u)} }\quad \| P^{\chi}_{N^2}-\mu \|_{TV}\le 1-c(s(u),1). \end{equation} We can then conclude by using \eqref{croot} with $A=1-u$, $C=1$ and $ \mathcal G=\mathcal G_{s(u)}$, that for large $N$ \begin{equation} d^{N,k} \left( (8\pi^2)^{-1}N^2\log k+uN^2 \right) <1-\frac{c(s(u),1)}{2}. \end{equation} Concerning \eqref{benarbia2}, we choose $s_0$ such that $\mu({\ensuremath{\mathcal G}} _{s_0})\ge 1/2$, and use \eqref{gramicho} and \eqref{croot2} to show that \begin{equation} {\bf d}^{N,k} (uN^2)\le 1-\frac{c(s_0,u)}{2}. \end{equation} \end{proof} \section{Proof of Proposition \ref{smallfluctu}} \label{arratia} Let us first slightly modify the statement. \begin{proposition}\label{secondfluctu} There exists a constant $c$ such that for all $N$ sufficiently large, for all $t\ge 3N^2$, for all $\chi\in \Omega_{N,k}$ and for all $s\ge 0$ \begin{equation}\label{fluqueton2} {\ensuremath{\mathbb P}} \left[\exists x,y \in {\ensuremath{\mathbb Z}} _N, \ \left| \sum_{z=x+1}^y (\eta^\chi_t(z)-{\ensuremath{\mathbb E}} [\eta^\chi_t(z)])\right|\ge s \sqrt{k} \right] \le 2\exp\left( -c s^2\right). \end{equation} \end{proposition} Proposition \ref{smallfluctu} can be deduced from Proposition \ref{secondfluctu} using the following lemma, which relies on heat-kernel estimates. The proof is based on the diagonalization of the Laplace operator on the discrete circle. The method is classic (see e.g.\ \cite{cf:Fort}), but as we could not find a reference matching exactly our setup and result, we include the proof in the appendix for the sake of completeness. \begin{lemma}\label{bound} The following statements hold. \begin{itemize} \item [(i)] For $N$ large enough, for all $\chi$, all $x\in {\ensuremath{\mathbb Z}} _N$ and $t\ge N^2$ \begin{equation} \left| {\ensuremath{\mathbb E}} [\eta^\chi_t(x)]-\frac{k}{N} \right| \le 4kN^{-1} e^{-\lambda_1 t}. \end{equation} In particular \begin{equation}\label{labiound} {\ensuremath{\mathbb E}} [\eta^\chi_t(x)]\le \frac{2k}{N}. \end{equation} \item [(ii)] If $X_t$ is a nearest neighbor random walk starting from $x_0\in {\ensuremath{\mathbb Z}} _N$ one has for all $t\ge N^2$, all $x,y\in {\ensuremath{\mathbb Z}} _N$ \begin{equation}\label{labiound2} {\ensuremath{\mathbb P}} \big[X_t\in [x,y] \big]\le \frac{2\#[x,y]}{N}. \end{equation} \end{itemize} \end{lemma} \begin{proof}[Proof of Proposition \ref{smallfluctu}] We have for all $N$ sufficiently large and $x,y \in {\ensuremath{\mathbb Z}} _N$ \begin{equation} \sum_{z=x}^y\left| {\ensuremath{\mathbb E}} [\eta^\chi_{t_A}(z)]-\frac{k}{N} \right|\le 4\#[x,y] kN^{-1}e^{-\lambda_1 t_A}\le 8\sqrt{k} e^{4\pi^2A}. \end{equation} Hence we have \begin{equation} \left| \sum_{z=x}^y \left(\eta^\chi_{t_A}(z)-\frac{k}{N}\right) \right|\le \left|\sum_{z=x}^y\left( \eta^\chi_{t_A}(z)- {\ensuremath{\mathbb E}} [\eta^\chi_{t_A}(z)]\right) \right|+ 8\sqrt{k} e^{4\pi^2A}, \end{equation} and thus Proposition \ref{smallfluctu} follows from Proposition \ref{secondfluctu}. \end{proof} \begin{rem} Note that by letting $t$ tend to infinity in \eqref{fluqueton2}, we also have a result concerning the density fluctuations under the equilibrium measure $\mu$, which we will use during our proof: \begin{equation}\label{fluquetec} \mu\left( \exists x,y \in {\ensuremath{\mathbb Z}} _N, \ \left| \sum_{z=x+1}^y \left(\eta(z)-\frac{k}{N}\right)\right|\ge s \sqrt{k} \right) \le 2\exp\left( -c s^2\right).
\end{equation} \end{rem} The idea of the proof is to control the Laplace transform of the number of particles in each interval (and then we roughly have to sum over all intervals to conclude). To control this Laplace transform, we use a comparison inequality due to Liggett \cite{cf:Lig77}, which allows us to compare the simple exclusion with a particle system without exclusion, that is, $k$ independent random walks on the circle. With this comparison at hand, the Laplace transform can be controlled simply by using \eqref{labiound2}. \subsection{Estimate on the Laplace transform} From now on the initial condition $\chi$ is fixed and, for convenience, does not always appear in the notation. For $x\in{\ensuremath{\mathbb Z}} _N$ ($x\in \{1,\dots, N-1\}$) we set \begin{equation}\begin{split}\label{defst} S_{x,y}(t)&:= \sum_{z=x+1}^y (\eta^\chi_t(z)-{\ensuremath{\mathbb E}} [\eta^\chi_t(z)]),\\ S_{x}(t)&:=S_{0,x}(t). \end{split} \end{equation} \begin{lemma}\label{laplacetrans} For all $x\in {\ensuremath{\mathbb Z}} _N$, all $\alpha$ with $|\alpha|\le \log 2$ and all $t\ge N^2$, \begin{equation}\label{gronek} {\ensuremath{\mathbb E}} \left[e^{\alpha S_x(t)}\right] \le \exp\left(2\frac{kx}{N} \alpha^2\right). \end{equation} \end{lemma} \begin{rem} Of course the formula \eqref{gronek} remains valid for any interval of length $x$ by translation invariance. \end{rem} \begin{proof}[Proof of Proposition \ref{secondfluctu}] Note that we can always assume in the proof that $s$ is sufficiently large (as the result is obvious for $s\le ((\log 2)/c)^{1/2}$). By the triangle inequality, we have for all $x,y\in {\ensuremath{\mathbb Z}} _N$ \begin{equation} \left|S_{x,y}(t)\right| \le \left|S_x(t)\right|+\left|S_y(t)\right|. \end{equation} For convenience we decide to replace $s$ by $16s$ in \eqref{fluqueton2} (this only corresponds to changing the value of $c$ by a factor $256$). Hence it is sufficient to prove that for any $t\ge N^2$ we have \begin{equation}\label{fluctuation} {\ensuremath{\mathbb P}} \left[\exists x \in {\ensuremath{\mathbb Z}} _N, \ \left| S_x(t)\right|\ge 8s \sqrt{k} \right] \le 2\exp\left( -c s^2\right). \end{equation} Now let us show that we can replace $ x \in {\ensuremath{\mathbb Z}} _N$ by a smaller subset. Let $q_0$ be such that \begin{equation}\label{defqz} \frac{Ns}{\sqrt k} < 2^{q_0} \le \frac{2Ns}{\sqrt k}. \end{equation} For $x\in\{0,\dots, N-1\}$ (which we consider as an element of ${\ensuremath{\mathbb Z}} _N$), one can find $y\in 2^{q_0} \{0,\dots, \lceil (N-1) 2^{-q_0}\rceil \}$ (a multiple of $2^{q_0}$) such that $$y \le x \le y^+:= \min(y+2^{q_0},N).$$ We have, from the definition \eqref{defst}, \begin{equation}\label{grimberg} \begin{split} S_{x}(t)&\ge S_y(t)-\sum_{z=y+1}^x{\ensuremath{\mathbb P}} [\eta_t(z)=1],\\ S_{x}(t)&\le S_{y^+}(t)+\sum_{z=x+1}^{y^+}{\ensuremath{\mathbb P}} [\eta_t(z)=1]. \end{split} \end{equation} From \eqref{labiound} and $$(y^+-y)\le 2^{q_0}\le 2Ns/\sqrt{k},$$ the second term of both equations in \eqref{grimberg} is smaller than $4s\sqrt{k}$ and hence \begin{equation} |S_{x}(t)|\le \max\left( | S_y(t)|, |S_{y^+}(t) |\right) + 4s\sqrt{k}. \end{equation} Thus, we can reduce \eqref{fluctuation} to proving \begin{equation}\label{fluctuation2} {\ensuremath{\mathbb P}} \left[\exists y \in 2^{q_0} \{0,\dots , \lfloor N 2^{-q_0} \rfloor \},\ | S_y(t)| \ge 4s \sqrt{k} \right] \le 2\exp\left( -c s^2\right). \end{equation} The next step relies on multi-scale analysis. Let $p$ be such that $N\in (2^p,2^{p+1}]$.
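The dyadic decomposition used in this step is purely combinatorial; as a side illustration (this sketch is ours and plays no role in the argument), it can be written out as follows, with the integer $y$ a multiple of $2^{q_0}$ and its binary digits read off at the scales $2^{p-q}$.
\begin{verbatim}
def dyadic_points(y, p, q0):
    """Partial sums y_r of the expansion y = sum_q eps_q * 2**(p - q), q = 0..p-q0.

    Returns the nondecreasing sequence [y_{-1}, y_0, ..., y_{p-q0}] with
    y_{-1} = 0 and y_{p-q0} = y; consecutive differences are dyadic blocks.
    """
    points, partial = [0], 0
    for q in range(p - q0 + 1):
        eps_q = (y >> (p - q)) & 1        # binary digit of y at scale 2**(p-q)
        partial += eps_q * 2 ** (p - q)
        points.append(partial)
    return points

# Example: p = 5, q0 = 1 and y = 44 = 32 + 8 + 4 give [0, 32, 32, 40, 44, 44];
# each nonzero increment is a power of two, one per scale.
\end{verbatim}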
Given $y\in 2^{q_0} \{0,\dots , \lfloor (N-1) 2^{-q_0} \rfloor \}$, we can decompose it in base $2$ as follows \begin{equation} y=:\sum_{q=0}^{p-q_0} \gep_q 2^{p-q}, \end{equation} where $\gep_q\in \{0,1\}$. We set $y_{-1}:=0$ and $y_r:=\sum_{q=0}^{r} \gep_q 2^{p-q}$, for $r\in\{0,\dots,p-q_0\}$.\\ Using the triangle inequality again we have $|S_y(t)|\le \sum_{r=0}^{p-q_0} |S_{y_{r-1},y_r}(t)|,$ and hence \begin{equation}\label{carmniq} \left\{ |S_y(t)| \ge 4s \sqrt{k} \right\} \Rightarrow \left\{ \exists r\in\{0,\dots,p-q_0\},\ |S_{y_{r-1},y_r}(t)| \ge \left(\frac{3}{4}\right)^{r} s\sqrt{k} \right\}. \end{equation} Thus, the proof of Proposition \ref{secondfluctu} can be reduced to showing the following. \begin{lemma}\label{fixeddev} Let us define \begin{multline} \mathcal H(s,t):= \Big\{\exists q\in\{q_0,\dots,p\},\ \exists y\in \{1,\dots,\lfloor N2^{-q}\rfloor\},\\ |S_{2^{q}(y-1), 2^{q}y}(t)| \ge \left(\frac{3}{4}\right)^{p-q} s\sqrt{k}\Big\}. \end{multline} For every $t\ge N^2$ we have \begin{equation} {\ensuremath{\mathbb P}} [\mathcal H(s,t)]\le 2e^{-cs^2}. \end{equation} \end{lemma} Indeed from \eqref{carmniq} and the reasoning taking place before, one has $$ \Big\{ \exists x\in {\ensuremath{\mathbb Z}} _N, |S_{x}(t)| \ge 8s\sqrt{k}\Big\} \subset \mathcal H(s,t).$$ \end{proof} \begin{proof}[Proof of Lemma \ref{fixeddev}] We have by union bound \begin{multline}\label{croco} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal H}} (s,t)]\le \sum_{q=q_0}^p \sum_{y=1}^{\lfloor N 2^{-q} \rfloor} {\ensuremath{\mathbb P}} \left[ |S_{2^{q}(y-1), 2^{q}y}(t)| \ge \left(\frac{3}{4}\right)^{p-q} s\sqrt{k} \right] \\ \le \sum_{q=q_0}^p 2^{p+1-q} \max_{y} {\ensuremath{\mathbb P}} \left[ |S_{2^{q}(y-1), 2^{q}y}(t)| \ge\left(\frac{3}{4}\right)^{p-q} s\sqrt{k} \right]. \end{multline} Thus we have to find a bound on $${\ensuremath{\mathbb P}} \left[ |S_{2^{q}(y-1), 2^{q}y}(t)| \ge \left(\frac{3}{4}\right)^{p-q} s\sqrt{k} \right]$$ which is uniform in $y$ and is such that the sum in the second line of \eqref{croco} is smaller than $2e^{-cs^2}$. For what follows we can, without loss of generality, consider only the case $y=1$, as all the estimates we use are invariant by translation on ${\ensuremath{\mathbb Z}} _N$. \medskip Using Lemma \ref{laplacetrans} and the Markov inequality, we have for any positive $\alpha\le \log 2$ \begin{equation}\label{chernov} {\ensuremath{\mathbb P}} \left[ |S_{2^{q}}(t)|\ge \left(\frac{3}{4}\right)^{p-q} s\sqrt{k} \right] \le \exp\left( 2^{q+1}\alpha^{2}\frac{k}{N}-\alpha\sqrt{k}\left(\frac{3}{4}\right)^{p-q} s\right). \end{equation} We can check that the right-hand side is minimized for $$\alpha=\alpha_{0}:=2^{-(q+2)}\left(\frac{3}{4}\right)^{p-q} s \frac{N}{\sqrt{k}}.$$ Note that for all $q\ge q_0$ (recall \eqref{defqz}) one has \begin{equation} \alpha_{0}\le 2^{-(q_0+2)}\left(\frac{3}{4}\right)^{p-q_0} s \frac{N}{\sqrt{k}}\le \frac{1}{4}\left(\frac{3}{4}\right)^{p-q_0}\le \log 2. \end{equation} This ascertains the validity of \eqref{chernov}, and hence \begin{equation} {\ensuremath{\mathbb P}} \left[ |S_{2^{q}}(t)|\ge \left(\frac{3}{4}\right)^{p-q} s\sqrt{k} \right] \le e^{-s^{2}2^{-(q+3)}N\left(\frac{3}{4}\right)^{2(p-q)}}. \end{equation} Using the fact that $N\ge 2^p$ we have \begin{equation} {\ensuremath{\mathbb P}} \left[ |S_{2^{q}}(t)| \ge \left(\frac{3}{4}\right)^{p-q} s\sqrt{k}\right] \le e^{-\frac{s^{2}}{8}\left(\frac{9}{8}\right)^{(p-q)}}. \end{equation} Using this in \eqref{croco} allows us to conclude (choosing $c$ appropriately).
\end{proof} \subsection{Proof of Lemma \ref{laplacetrans}} We use a result of Liggett \cite{cf:Lig77} which provides a way to compare the simple exclusion with a simpler process composed of independent random walkers. If $f$ is a symmetric function on $({\ensuremath{\mathbb Z}} _N)^k$ and $\eta\in \Omega_{N,k}$ we set \begin{equation}\label{feta} f(\eta):=f(y_1,y_2,\dots,y_k) \end{equation} where $$\{y_1,\dots,y_k\}:=\{x \ | \ \eta(x)=1\}.$$ The above relation defines $(y_1,\dots,y_k)$ only modulo permutation, which is sufficient for the definition \eqref{feta}. We say that a function $f$ defined on $({\ensuremath{\mathbb Z}} _N)^2$ is positive definite if and only if for all $\beta$ such that $\sum_{x\in {\ensuremath{\mathbb Z}} _N}\beta(x)=0$, we have \begin{equation}\label{defpov} \sum_{x,y\in {\ensuremath{\mathbb Z}} _N} \beta(x)\beta(y) f(x,y)\ge 0. \end{equation} We say that a function defined on $({\ensuremath{\mathbb Z}} _N)^k$ is positive definite if all its two dimensional marginals are. Note in particular that a function which can be written in the form \begin{equation}\label{product} f(x_1,\dots, x_k)= C\prod_{i=1}^k g(x_i), \end{equation} where $C$ is a positive constant and $g$ is a non-negative function on ${\ensuremath{\mathbb Z}} _N$, is positive definite. \medskip Given $\chi\in \Omega_{N,k}$, let ${\bf X}^{\chi}_t:=(X^{\chi}_1(t),\dots, X^{\chi}_k(t))$ denote a collection of independent random walks on ${\ensuremath{\mathbb Z}} _N$, starting from the initial condition ${\bf x}^{\chi}:=(x^{\chi}_1,\dots,x^{\chi}_k)$ which satisfies \begin{equation}\label{codin} \{x^\chi_1,\dots,x^{\chi}_k\}:=\{x \ | \ \chi(x)=1\}. \end{equation} Of course \eqref{codin} defines $(x^{\chi}_1,\dots,x^{\chi}_k)$ only modulo permutation but this has no importance for what we are doing (e.g.\ we can fix $(x^{\chi}_1,\dots,x^{\chi}_k)$ to be minimal for the lexicographical order). \begin{proposition}\label{ligett} If $f$ is symmetric and positive definite then we have for all $t\ge 0$ \begin{equation} {\ensuremath{\mathbb E}} \left[f(\eta^\chi_t)\right]\le {\ensuremath{\mathbb E}} [f({\bf X}^{\chi}_t)]. \end{equation} \end{proposition} \begin{proof} The proof given in the case $k=2$ in \cite[Proof of Lemma 2.7]{cf:Lig74} adapts perfectly to the case of general $k$. We include it here for the sake of completeness. We let ${\ensuremath{\mathcal L}} '$ denote the generator of $k$ independent random walks and $Q_t$ the associated semigroup. We define the action of $P_t$ and $Q_t$ on functions as follows \begin{equation}\begin{split} P_tf(\chi)&:= \sum_{\eta \in\Omega_{N,k}} P_t(\chi,\eta)f(\eta),\\ Q_tf({\bf x})&:= \sum_{{\bf y} \in ({\ensuremath{\mathbb Z}} _N)^k} Q_t({\bf x},{\bf y})f({\bf y}). \end{split}\end{equation} As $Q_t$ is invariant by permutations of the labels, $Q_tf$ is also a symmetric function and we can also consider $Q_tf(\eta)$ for $\eta\in \Omega$ (cf.\ \eqref{feta}). \medskip Using the standard property of Markov semi-groups $\partial_t P_t={\ensuremath{\mathcal L}} P_t=P_t {\ensuremath{\mathcal L}} $, we have \begin{equation} \left[Q_t(f)-P_t(f)\right](\chi)= \int^t_0 \partial_s \left[ P_{t-s} Q_s(f)(\chi)\right] \dd s= \int_{0}^t P_{t-s} \left( {\ensuremath{\mathcal L}} '-{\ensuremath{\mathcal L}} \right) Q_{s} f(\chi)\dd s, \end{equation} and our conclusion follows if we can prove that $\left( {\ensuremath{\mathcal L}} '-{\ensuremath{\mathcal L}} \right) Q_{s} f(\chi)\ge 0$ for all $\chi$ (as composition with $P_{t-s}$ preserves positivity). First note that $g:=Q_{s}(f)$ is positive definite.
Indeed if $\sum_{x\in {\ensuremath{\mathbb Z}} _N}\beta(x)=0$ we have \begin{multline}\label{ffesd} \sum_{x_1,x_2\in {\ensuremath{\mathbb Z}} _N} \beta(x_1)\beta(x_2)Q_sf(x_1,x_2,\dots,x_k)\\ =\sum_{y_3,\dots,y_k\in {\ensuremath{\mathbb Z}} _N} \left(\prod_{j\ge 3} p_s(x_j,y_j) \right)\left( \sum_{y_1,y_2\in {\ensuremath{\mathbb Z}} _N} \beta'(y_1)\beta'(y_2) f(y_1,y_2,\dots,y_k)\right), \end{multline} where $p_s$ is the discrete heat kernel on the circle and $$\beta'(y):=\sum_{x\in {\ensuremath{\mathbb Z}} _N} p_s(x,y) \beta(x)$$ satisfies $\sum_{y\in {\ensuremath{\mathbb Z}} _N} \beta'(y)=0$. Thus the l.h.s.\ of \eqref{ffesd} is non-negative as a sum of non-negative terms. Then, notice that the generator ${\ensuremath{\mathcal L}} '$ includes all the transitions of ${\ensuremath{\mathcal L}} $ but also allows a particle to jump onto a neighboring site even if it is occupied. Hence we have, for all ${\bf x}$ with distinct coordinates \begin{equation}\label{dsasdaa} ({\ensuremath{\mathcal L}} '-{\ensuremath{\mathcal L}} )g({\bf x})= \sum_{\{i<j \ | \ x_i\sim x_j\}} g(\cdot,x_i,\cdot,x_i,\cdot)+g(\cdot,x_j,\cdot,x_j,\cdot)-2g(\cdot,x_i,\cdot,x_j,\cdot), \end{equation} where in the right-hand side, only the $i$-th and $j$-th coordinates appear in the argument of $g$. Note that each term in the r.h.s.\ of \eqref{dsasdaa} is of the form \eqref{defpov} with $\beta(x):= \mathbf{1}_{\{x=x_i\}}-\mathbf{1}_{\{x=x_j\}}$ (recall that $g$ is symmetric) and thus is non-negative. \end{proof} We want to apply Proposition \ref{ligett} to the function \begin{equation} f(x_1,\dots,x_k):=e^{\alpha \sum_{i=1}^k \left(\mathbf{1}_{\{x_i\in [1,y]\}}-{\ensuremath{\mathbb P}} \left[X_i(t)\in[1,y] \right] \right) } \end{equation} for $y\in \{1,\dots, N\}.$ Note that it is of the form \eqref{product} and thus is positive definite. \begin{lemma}\label{laplacetrans2} For all $N$ sufficiently large, for all $\alpha\in {\ensuremath{\mathbb R}} $ with $|\alpha|\le \log 2$ and for all $t\ge N^2$, we have \begin{equation}\label{criolodoido} {\ensuremath{\mathbb E}} \left[e^{\alpha \sum_{i=1}^k \left(\mathbf{1}_{\{X_i(t)\in [1,y]\}}-{\ensuremath{\mathbb P}} \left[X_i(t)\in[1,y] \right]\right)}\right] \le \exp\left( \frac{2ky}{N} \alpha^2\right). \end{equation} \end{lemma} To deduce \eqref{gronek} from \eqref{criolodoido} we just have to remark that $$\sum_{i=1}^k {\ensuremath{\mathbb P}} \left[X_i(t)\in[1,y] \right]=\sum_{z=1}^y \sum_{i=1}^k p_t(x^{\chi}_i,z)= \sum_{z=1}^y {\ensuremath{\mathbb P}} [\eta_t(z)=1].$$ \begin{proof}[Proof of Lemma \ref{laplacetrans2}] We use the inequality $$\forall |x|\le \log 2,\ e^x\le 1+x+x^2$$ for the variable $Z=\alpha \left(\mathbf{1}_{\{X_1(t)\in [1,y]\}}-{\ensuremath{\mathbb P}} \left[X_1(t)\in [1,y]\right]\right)$. As ${\ensuremath{\mathbb E}} [Z]=0$ and, from Lemma \ref{bound} $(ii)$, we have for all $t\ge N^2$ $${\ensuremath{\mathbb E}} [Z^2]\le \alpha^2\,{\ensuremath{\mathbb P}} \left[X_1(t)\in[1,y] \right] \le 2\alpha^2 y/N,$$ the integrated inequality gives \begin{equation} {\ensuremath{\mathbb E}} \left[ e^{\alpha \left(\mathbf{1}_{\{X_1(t)\in [1,y]\}}-{\ensuremath{\mathbb P}} \left[X_1(t)\in [1,y]\right]\right)} \right]\le 1+\frac{2\alpha^2y}{N} \le \exp(2\alpha^2 (y/N)). \end{equation} The independence of the $X_i$'s is then sufficient to conclude. \end{proof} \section{Coupling $P_t^\chi$ with equilibrium using the corner-flip dynamics} In this section we present the main tool which we use to prove Proposition \ref{csdf}: the corner-flip dynamics.
The idea is to associate to each $\eta\in \Omega$ a height function, to consider the dynamics associated with this height function instead of the original one, and to use monotonicity properties of this latter dynamics. This idea is already present in the seminal paper of Rost investigating the asymmetric exclusion on the line \cite{cf:Rost} and has since become a classical tool in the study of particle systems. In particular it is used e.g.\ in \cite{cf:Wilson, cf:Lac} to obtain bounds on the mixing time for the exclusion on the line. It has also been used as a powerful tool for the study of mixing of monotone surfaces starting with \cite{cf:Wilson}, and more recently in \cite{cf:CMST, cf:CMT}. \medskip Let us stress however that in \cite{cf:Wilson, cf:Lac}, the interface representation is used mostly as a graphical tool in order to have a better intuition on an order that can be defined directly on $\Omega_{N,k}$. In the present work we use the interface representation to construct a coupling which cannot be constructed considering only the original chain. In particular, note that our coupling is Markovian for the corner-flip dynamics but not for the underlying particle system. \label{fluctuat} \subsection{The $\xi$ dynamics} Let us consider the set of height functions on the circle \begin{equation} \Omega'_{N,k}:=\left\{\xi: {\ensuremath{\mathbb Z}} _N\to {\ensuremath{\mathbb R}} \ | \ \xi(x_0)\in {\ensuremath{\mathbb Z}} , \forall x\in {\ensuremath{\mathbb Z}} _N,\ \xi(x+1)-\xi(x)\in \big\{-\frac k N, 1-\frac k N \big\} \right\}. \end{equation} Given $\xi$ in $\Omega'_{N,k}$, we define $\xi^x$ as \begin{equation} \begin{cases} \xi^x(y)&=\xi(y), \quad \forall y \ne x,\\ \xi^x(x)&=\xi(x+1)+\xi(x-1)-\xi(x), \end{cases} \end{equation} and we let $\xi_t$ be the irreducible Markov chain on $\Omega'_{N,k}$ whose transition rates $p$ are given by \begin{equation}\label{cromik} \begin{cases} p(\xi,\xi^x)&=1, \quad \forall x\in {\ensuremath{\mathbb Z}} _N, \\ p(\xi,\xi')&=0, \quad \text{if } \xi'\notin \{ \xi^x \ | \ x\in {\ensuremath{\mathbb Z}} _N\}. \end{cases} \end{equation} We call this dynamics the corner-flip dynamics, as the transition $\xi\to\xi^x$ corresponds to flipping a local maximum of $\xi$ (a ``corner'' for the graph of $\xi$) into a local minimum, or vice versa. It is, of course, not positive recurrent, as the state space is infinite and the dynamics is left invariant by vertical translation. However it is irreducible and recurrent. \medskip The reader can check (see also Figure \ref{partisys}) that $\Omega'_{N,k}$ is mapped onto $\Omega_{N,k}$ by the transformation $\xi \mapsto \nabla \xi$ defined by \begin{equation}\label{projec} \nabla \xi (x):=\xi(x)-\xi(x-1)+\frac{k}{N}, \end{equation} and that the image $\nabla \xi_t$ of the corner-flip dynamics $\xi_t$ under this transformation is the simple exclusion, down-flips and up-flips corresponding to jumps $x\to x+1$ and $x\to x-1$ of the particles respectively. There is a natural order on the set $\Omega'_{N,k}$ defined by \begin{equation} \label{deforder} \xi\ge \xi' \quad \Leftrightarrow \quad \forall x\in {\ensuremath{\mathbb Z}} _N,\ \xi(x)\ge \xi'(x), \end{equation} which has the property of being preserved by the dynamics in a certain sense (see Section \ref{grafff} for more details). \begin{figure}[hlt] \epsfxsize =9.5 cm \begin{center} \epsfbox{partisys.eps} \end{center} \caption{\label{partisys} The correspondence between the exclusion process and the corner-flip dynamics.
In red, a particle jump and its corner-flip counterpart are defined. Note that this is not a one-to-one mapping as a particle configuration gives the height function only modulo translation (the height function is drawn on a cylinder whose base is the circle on which the particles are moving).} \end{figure} Given $\chi\in \Omega_{m,k}$, we define $(\xi^0_t)$ to be a process with transitions \eqref{cromik} starting from initial condition \begin{equation}\label{fryied} \xi_{0}^0(x):=\sum_{z=0}^x \chi (x)-\frac{kx}{N}, \end{equation} It follows from the above remark that fora all $t\ge 0$ we have \begin{equation}\label{laloi} {\ensuremath{\mathbb P}} \left[ \nabla \xi^0_t\in \cdot \right]=P^\chi_{t}. \end{equation} Our idea is to construct another dynamic $\xi^1_t$ which starts from a stationary condition (the gradient is distributed according to $\mu$) and to try to couple it with $\xi^0_t$ within time $O(L^2)$. The difficulty here lies in finding the right coupling. \subsection{Construction of the initial conditions $\xi^1_0$ and $\xi^2_0$} In fact we define not one but two stationary dynamics $\xi^1_t$ and $\xi^2_t$, satisfying \begin{equation}\label{equistart} {\ensuremath{\mathbb P}} \left[\nabla \xi^1_t\in \cdot \right]={\ensuremath{\mathbb P}} \left[\nabla \xi^2_t\in \cdot\right]=\mu. \end{equation} As $\mu$ is invariant for the dynamics $\nabla \xi_t$, \eqref{equistart} is satisfied for all $t$ as soon as it is satisfied for $t=0$. As we wish to use monotonicity as a tool, we want to have \begin{equation}\label{katzodue} \forall t\ge 0, \quad \xi^1_t\le \xi^0_t\le \xi^2_t, \end{equation} Then our strategy to couple $\xi^0_t$ with equilibrium is in fact to couple $\xi^1_t$ with $\xi^2_t$ and remark that if \eqref{katzodue} holds then \begin{equation}\label{paninosto} \forall t\ge 0,\ \xi^1_t=\xi^2_t \ \Rightarrow \ \xi^1_t=\xi^0_t= \xi^2_t \end{equation} \medskip We first have to construct the initial condition $\xi^1_0$ and $\xi^2_0$ which satisfies \eqref{equistart} \begin{equation}\label{katzone} \xi^1_0\le \xi^0_0\le \xi^2_0. \end{equation} \medskip Let us start with variable $\eta_0$ which has law $\mu$. We want to construct $\xi^1_0$ and $\xi^2_0$ which satisfies \begin{equation} \nabla \xi^i_0=\eta_0. \end{equation} Somehow, we also want the vertical distance between $\xi^1_0$ and $\xi^2_0$ to be as small as possible. We set for arbitrary $\eta \in \Omega_{N,k}$, or $\xi\in \Omega'_{N,k}$ \begin{equation}\begin{split}\label{defh} H(\eta)&:=\max_{x,y\in {\ensuremath{\mathbb Z}} _N} \left |\sum_{z=x+1}^y \left(\eta(z)-\frac{k}{N}\right)\right |,\\ H(\xi)&:=\max_{x,y\in {\ensuremath{\mathbb Z}} _N} \left | \xi(x)-\xi(y) \right |. \end{split}\end{equation} Finally set we set \begin{equation} \label{defh0} H_0:=\big\lceil H(\eta_0)+s\sqrt{k} \big\rceil \end{equation} and \begin{equation}\label{fryieed} \begin{split} \xi_{0}^1(x)&:=\sum_{z=1}^x \eta_0(x)-\frac{kx}{N}-H_0. \\ \xi_{0}^2(x)&:=\sum_{z=1}^x \eta_0(x)-\frac{kx}{N}+H_0. \end{split} \end{equation} Note that with this choice, \eqref{katzone} is satisfied for $\chi \in \mathcal G_s$ (see Figure \ref{highfunk}). \begin{figure}[hlt] \epsfxsize =8.5 cm \begin{center} \psfragscanon \psfrag{H0}{$2H_0$} \psfrag{eta10}{$\xi^2_0$} \psfrag{eta20}{$\xi^1_0$} \psfrag{eta00}{$\xi^0_0$} \epsfbox{highfunk.eps} \end{center} \caption{\label{highfunk} Representation of the three initial condition for the corner-flip dynamics. $\xi^1_0$ and $\xi^2_0$ are translated version of the same profile. 
The height $H_0$ is designed so that initially $\xi^0_0$ (whose variation are smaller than $s\sqrt{k}$ if $\chi\in \mathcal G_s$ is framed by $\xi^1_0$ and $\xi^2_0$). As the order is conserved by the graphical construction. $\xi^0_t$ couples with equilibrium when $\xi^1_t=\xi^2_t$.} \end{figure} \subsection{The graphical construction}\label{grafff} Now we present a coupling which satisfies \eqref{katzodue}. Note that in this case $\xi^1_t=\xi^0_t=\xi^2_t$ if and only if the area between the two paths, defined by \begin{equation}\label{paxvolumi} A(t):=\sum_{x\in {\ensuremath{\mathbb Z}} _N} \xi^2_t(x)-\xi^1_t(x), \end{equation} equals zero. \medskip The idea is then to find among the order-preserving possible one for which the ``volatility'' of $A(t)$ is the largest possible, so that it reaches zero faster. We want to make the corner-flips of $\xi^1$ and $\xi^2$ \textsl{as independent as possible} (of course making them completely independent is not an option since \eqref{katzodue} would not hold) \medskip We introduce now our coupling of the $\xi^i_t$, which is also a grand-coupling on $\Omega'_{N,k}$, in the sense that it allows us to construct $\xi_t$ starting from all initial condition on the same probability space. The evolution of the $(\xi_t)_{t\ge 0}$ is completely determined by auxiliary Poisson processes which we call clock processes. Set $$\Theta:=\left\{ (x,z) \ | \ x\in {\ensuremath{\mathbb Z}} _N \text{ and } z\in {\ensuremath{\mathbb Z}} +\frac{kx}{N} \ \right\}$$ And set ${\ensuremath{\mathcal T}} ^\uparrow$ and ${\ensuremath{\mathcal T}} ^\downarrow$ to be two independent rate-one clock processes indexed by $\Theta$ (${\ensuremath{\mathcal T}} ^\uparrow_{\theta}$ and ${\ensuremath{\mathcal T}} ^\downarrow_{\theta}$ are two independent Poisson processes of intensity one of each $\theta\in \Theta$). The trajectory of $\xi_t$ given $({\ensuremath{\mathcal T}} ^\uparrow,{\ensuremath{\mathcal T}} ^\downarrow)$ is given by the following construction \begin{itemize} \item $\xi_t$ is a c\`ad-l\`ag, and does not jump until one of the clocks indexed by $(x,\xi_t(x))$, $x\in {\ensuremath{\mathbb Z}} _N$. \item If ${\ensuremath{\mathcal T}} ^\downarrow_{(x,\xi_{t^-}(x))}$ rings at time $t$ and $x$ is a local maximum for $\xi_{t^-}$, then $\xi_{t}=\xi^x_{t^-}$. \item If ${\ensuremath{\mathcal T}} ^\uparrow_{(x,\xi_{t^-}(x))}$ rings at time $t$ and $x$ is a local minimum for $\xi_{t^-}$, then $\xi_{t}=\xi^x_{t^-}$. \end{itemize} The coupling of $\xi^0_t$, $\xi^1_t$ and $\xi^2_t$ is obtained by using the same clock process for all of them. The reader can check that with this coupling \eqref{katzodue} is a consequence of \eqref{katzone}. \begin{rem} In fact, the corner flip dynamics which is considered here, is in one to one correspondence with the zero-temperature stochastic Ising model on an infinite cylinder. With this in mind the coupling we have constructed just corresponds to the the graphical construction of this spin flip dynamics. See e.g. \cite[Section 2.3 and Figure 3]{cf:LST} for more details for the dynamics on a rectangle with mixed boundary condition. \end{rem} \begin{rem} Let us stress here that the coupling we use here is not the one the one of \cite{cf:Wilson} or \cite{cf:Rost} for which the updates of pair of neighbors are done simultaneously for the coupled chains. In particular it is not a Markovian coupling for the particle system (as the height function is not encoded in the particle configuration). 
This is a crucial point here as this is what allows the coupling time to be much shorter. Recall in particular in \cite{cf:Wilson} (see Table 1), it is shown that with the usual Markovian coupling, the coupling time is twice as large as the mixing time. \end{rem} To prove Proposition \ref{csdf} it is sufficient to prove that $\xi^1_t$ and $\xi^2_t$ typically merge within a time $O(L^2)$. More precisely, \begin{proposition}\label{nahnou} For all $s>0$, given $\gep>0$ there exists $C(\gep,s)$ such that for all sufficiently large $N$, \begin{equation}\label{nahnou1} {\ensuremath{\mathbb P}} [ \xi^1_{CN^2}\ne \xi^2_{CN^2} ]\le \gep. \end{equation} Similarly for all $s, u>0$, there exists $c(s,u)>0$ such that \begin{equation}\label{nahnou2} {\ensuremath{\mathbb P}} [ \xi^1_{uN^2}\ne \xi^2_{uN^2} ]\le 1-c(s,u). \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{csdf}] Given $s>0$ consider $(\xi^1_t)_{t\le 0}$, $(\xi^2_t)_{t\ge 0}$, constructed as above. Then given $\chi \in \mathcal G_s$, we construct $\xi^0_t$ the dynamics starting from the initial condition \eqref{fryied} and using the same clock process as $\xi^1_t$ and $\xi^2_t$. By definition of $\mathcal G_s$, \eqref{katzone} is satisfied and thus so is \eqref{katzodue} from the graphical construction. Recalling \eqref{laloi} and \eqref{equistart} we have \begin{equation} \|P^\chi_{t}-\mu\|\le {\ensuremath{\mathbb P}} \left[ \nabla \xi^0_{t}\ne \nabla \xi^1_{t}\right]\le {\ensuremath{\mathbb P}} \left[ \xi^0_{t}\ne \xi^1_{t}\right] \le {\ensuremath{\mathbb P}} \left[\xi^1_{t}\ne \xi^2_{t}\right]. \end{equation} for any $t>0$, where the last inequality is a consequence of \eqref{paninosto}. According to the above inequality, Proposition \ref{csdf} obviously is a consequence of Proposition \ref{nahnou}. \end{proof} \section{The proof of Proposition \ref{nahnou}}\label{multis} In order to facilitate the exposition of the proof, we choose to present it in the case $k=N/2$ first. We also chose to focus on \eqref{nahnou1}. The necessary modifications to prove \eqref{nahnou2} and to adapt the proofs for general $k$ are explained at the end of the section. \subsection{The randomly walking area} We are interested in bounding (recall \eqref{paxvolumi}) \begin{equation}\label{deftau} \tau :=\inf \{ t \ge 0 \ | \ A(t)=0 \}= \inf \{ t \ge 0 \ | \ \xi^1_t=\xi^2_t \} \end{equation} With our construction, $A(t)$, the area between the two curves is a ${\ensuremath{\mathbb Z}} _+$ valued martingale which only makes nearest neighbor jumps (corners flip one at a time). \medskip Hence $A(t)$ is just a time changed symmetric nearest neighbor walk on ${\ensuremath{\mathbb Z}} _+$ which is absorbed at zero. In order to get a bound for the time at which $A$ hits zero, we need to have reasonable control on the jump rate which depends on the particular configuration $(\xi^1_t,\xi^2_t)$ the system sits on. The jump rate is given by the number corners of $\xi^1_t$ and $\xi^2_t$ that can flip separately. More precisely, set \begin{multline}\label{defui} U_i(t):=\{x\in {\ensuremath{\mathbb Z}} _N \ | \ \xi^i_t \text{ has a local extremum at } x \text{ and } \\ \exists y\in \{x-1,x,x+1\}, \xi^2_t(y)> \xi^1_t(y)\}. \end{multline} The jump rate of $A(t)$ is given by \begin{equation} u(t):=\#U_1(t)+\#U_2(t). \end{equation} For $t\le \int_0^\tau u(t) \dd t$ let us define \begin{equation} \label{defjt} J(t):=\inf\left\{s \ | \ \int_0^s u(v)\dd v\ge t\right\}. 
\end{equation} By construction, the process $(X_t)_{t\ge 0}$ defined by \begin{equation}\label{defx} X_t:=A(J(t)) \end{equation} is a continuous time random walk on ${\ensuremath{\mathbb Z}} _+$ which jumps up and down with rate $1/2$. We have from the definition of the $\xi^i_0$ $$X_0=A(0)=2H_0N$$ which is of order $N^{3/2}$ and hence $X_t$ needs a time of order $N^{3}$ to reach $0$. What we used to estimate $A(0)$ is the following bound which can be derived from \eqref{fluquetec} and the definition of $H_0$ \eqref{defh0}, \begin{equation}\label{grimic} {\ensuremath{\mathbb P}} \left[A(0) \ge 2(s+r)N^{3/2} \right]={\ensuremath{\mathbb P}} \left[H_0\ge (s+r)N^{1/2} \right] \le {\ensuremath{\mathbb P}} \left[H(\eta_0)\ge rN^{1/2} \right] \le 2 e^{-cr^2}. \end{equation} \medskip If $u(t)$ were of order $N$ for all $t$ this would be sufficient to conclude that $A(t)$ reaches zero within time $O(N^2)$. This is however not the case: the closer $\xi^1_t$ and $\xi^2_t$ get, the smaller $u(t)$ becomes. A way out of this is to introduce a multi-scale analysis where the bound we require on $u$ depends on how small $A(t)$ already is. \subsection{Multi-scale analysis} We construct a sequence of intermediate stopping time $(\tau_i)_{i\ge 0}$ as follows. \begin{equation} \tau_i:=\inf\left\{t\ge 0 \ | \ A(t)\le N^{3/2}2^{-i} \right\}. \end{equation} We are interested in $\tau_i$ for $i\in\{0,\dots,K\}$ with \begin{equation}\label{defK} K_N:=\left\lceil \frac{1}{2}\log_2 N \right\rceil. \end{equation} Note that a number of $\tau_i$ can be equal to zero if $A(0)\le N^{3/2}$. We set $\tau_{-1}:=0$ for convenience. \medskip To bound the value of $\tau$, our aim is to control each increments $\Delta \tau_i=\tau_{i}-\tau_{i-1}$ for $i\le K$ as well $\tau-\tau_K$. A first step is to get estimates for the equivalent of the $\Delta \tau$ for the time rescaled process $X_t$ (recall \ref{defx}). We set for $i\in\{0,\dots,K\}$ \begin{equation}\label{freddo}\begin{split} {\ensuremath{\mathcal T}} _i&:=\int_{\tau_{i-1}}^{\tau_i} u(t) \dd t,\\ {\ensuremath{\mathcal T}} _\infty&:=\int_{\tau_K}^{\tau} u(t) \dd t. \end{split}\end{equation} As $X$ is diffusive, ${\ensuremath{\mathcal T}} _i$ is typically of order $(N^{3/2} 2^{-i})^2=N^34^{-i}$, and ${\ensuremath{\mathcal T}} _{\infty}$ is of order $N^2$. With this in mind, it is not too hard to believe that \begin{lemma}\label{cromican} Given $\gep, s>0$ there exists a constant $C(\gep,s)$ such that \begin{equation} {\ensuremath{\mathbb P}} [ \left \{\exists i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\ge C N^3 3^{-i}\right\} \cup \{{\ensuremath{\mathcal T}} _\infty\ge C N^2 \}]\le \gep. \end{equation} \end{lemma} \begin{proof} Let $Z_t$ denote a nearest neighbor walk on ${\ensuremath{\mathbb Z}} $ starting from $0$ and $T_a$ the first time $Z$ reaches $a$. It is rather standard that there exists a constant $C_1$ such that for every $a\ge 1$ and every $u\ge 0$ \begin{equation}\label{crooop} {\ensuremath{\mathbb P}} \left[ T_a \ge u a^2\right]\le C_1 u^{-1/2}. \end{equation} Note that for $i\ge 1$, ignoring the effect of integer rounding, $\tau_i$ has the same law as $T_a$ with $a=N^{3/2}2^{-{i}}$ and thus applying \eqref{crooop} we obtain that \begin{equation} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal T}} _i\ge u N^3 3^{-i}]\le C_1 u^{-1/2} (3/4)^{i/2}. \end{equation} In the same manner we have \begin{equation} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal T}} _{\infty}\ge u N^2]\le C_1 u^{-1/2}. 
\end{equation} We can then choose $u_0(\gep)$ large enough in a way that \begin{equation} C_1 u^{-1/2}\left( \sum_{i=1}^K (3/4)^{i/2}+ 1\right) \le \gep/2. \end{equation} Concerning ${\ensuremath{\mathcal T}} _0$, from \eqref{grimic}, one can find $C_2(\gep,s)$ such that \begin{equation} {\ensuremath{\mathbb P}} \left[A(0)\ge C_2 N^{3/2}\right]\le \gep/4. \end{equation} Conditionally on the event $A(0)\le C_2 N^{3/2}$, ${\ensuremath{\mathcal T}} _0$ is stochastically dominated by $T_a$ with $a=C_2N^{3/2}$ and hence using \eqref{crooop} and fixing $u_1(\gep,s)$ large enough (depending on $C_1$, $C_2$ and $\gep$) we obtain \begin{equation} {\ensuremath{\mathbb P}} \left[ {\ensuremath{\mathcal T}} _0 \ge u_1 (C_2)^2 N^{3}\right]\le {\ensuremath{\mathbb P}} \left[A(0)\ge C_2 N^{3/2}\right]+{\ensuremath{\mathbb P}} \left[ {\ensuremath{\mathcal T}} _0 \ge u A(0)^2\right]\ge \gep/2. \end{equation} Then we conclude by taking $C(\gep,s):=\max(u_0,u_1 (C_2)^2)$. \end{proof} What we have to check then is that the value of $u(t)$ is not too small in the time interval $[\tau_{i-1},\tau_i)$ for all $i\in \{0,\dots, K\}$. What we want to use is that for any $t\ge 0$, $\nabla \xi^1_t$ is at equilibrium so that $\xi^1_t$ has to present a ``density of flippable corners''. We introduce an event $\mathcal A$ which is aimed to materialize this fact. Given $x$ and $y$ in ${\ensuremath{\mathbb Z}} _N$ we set \begin{equation}\label{defj} j(x,y,\xi):=\#\{ z\in [x,y] \ | \ \xi(z) \text{ is a local extremum } \}. \end{equation} We have \begin{equation} \mu\left(j(x,y,\xi)\right)=\frac{(N-2)\#[x,y]}{2(N-1)}. \end{equation} We define \begin{equation}\label{defca} \mathcal A:= \Big\{ \forall t\le N^3, \forall (x,y) \in {\ensuremath{\mathbb Z}} _N^2,\ \#[x,y]\ge N^{1/4}\Rightarrow j(x,y,\xi^1_t)\le \frac{1}{3}\#[x,y] \Big\}, \end{equation} the event that a ``large" interval with an anomalously low density of corner does not appear before time $N^3$. \begin{lemma}\label{smalla} For all $N$ sufficiently large \begin{equation} {\ensuremath{\mathbb P}} [\mathcal A^c] \le \frac{1}{N}. \end{equation} \end{lemma} \begin{proof}[Proof of Lemma \ref{smalla}] Note that for any given time $t$, $\nabla \xi^1_t$ is distributed according to $\mu$ (because this is the case for $t=0$ and $\mu$ is the equilibrium measure for the $\nabla \xi$ dynamics). Now let us estimate the probability of $${\ensuremath{\mathcal E}} :=\left\{\eta \ | \ \forall x,y \in {\ensuremath{\mathbb Z}} _N,\ \#[x,y]\ge N^{1/4}\Rightarrow j(x,y,\eta)\le \frac{1}{3}\#[x,y]\right\},$$ under the measure $\mu$ where $j(x,y,\eta)$ is defined like its counter part for the height function \eqref{defj} replacing ``$x$ is a local extremum" by "$\eta(x)\ne \eta(x+1)$". We consider $\tilde \mu$ an alternative measure on $\{0,1\}^{{\ensuremath{\mathbb Z}} _N}$, under which the $\eta(x)$ are i.i.d.\ Bernoulli random variable with parameter $1/2$. By the local central limit Theorem for the random walk we have \begin{equation}\label{groom} \mu({\ensuremath{\mathcal E}} ):= \tilde\mu\left({\ensuremath{\mathcal E}} \ | \ \sum_{x\in {\ensuremath{\mathbb Z}} _N} \eta(x)=N/2\right)\le C_1\sqrt{N} \tilde \mu ({\ensuremath{\mathcal E}} ). \end{equation} Let us now estimate $\tilde \mu ({\ensuremath{\mathcal E}} ).$ First we remark that we can replace ``$ \#[x,y]\ge N^{1/4}$" in the definition of ${\ensuremath{\mathcal E}} $ by ``$\#[x,y] \in [N^{1/4}, 2N^{1/4}]$". 
Indeed, by dichotomy, if the proportion of local maxima is smaller than $1/3$ on a long interval, it has to be smaller than $1/3$ on a subinterval whose length belong to $[N^{1/4}, 2N^{1/4}]$. \medskip Set $x\in \{ \lceil N^{1/4} \rceil, N-1\}$, note that $\mathbf{1}_{\{\eta(z)\ne \eta(z+1)\}}$, $z\in \{1,\dots, x\}$ are IID Bernouilli variables of parameter $1/2$, and hence by standard large deviation results there exists a constant $C_2>0$ such that \begin{equation}\label{acroot} \tilde \mu\left( \sum_{z=1}^x \mathbf{1}_{\{\eta(z)\ne \eta(z+1)\}} \le N/3 \right)\le e^{-C_2x} \end{equation} By translation invariance we can deduce similar bounds for any translation of the interval $[1,x]$. Then, summing over all intervals and using \eqref{groom}, we deduce that there exists $C_3$ such that, for all $N$ sufficiently large $\mu({\ensuremath{\mathcal E}} )\le e^{-cN^{1/4}}.$ Now, we set $(T_i)_{i\ge 0}$ to be the times where the chain $\xi^2$ makes a transition. The chain $(\xi^1_{T_i})_{i\ge 0}$ is a discrete time Markov chain with equilibrium probability $\mu$ and hence by union bound \begin{equation} {\ensuremath{\mathbb P}} \left[\exists t\le T_i \ \nabla \xi^1_t(t)\notin {\ensuremath{\mathcal E}} \right] \le i e^{-C_3N^{1/4}}. \end{equation} This implies \begin{equation} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal A}} ] = {\ensuremath{\mathbb P}} \left[\exists t\le N^3 \nabla \xi^1_t(t) \notin {\ensuremath{\mathcal E}} \right] \le i e^{-C_3N^{1/4}}+{\ensuremath{\mathbb P}} \left[ T_i\le N^3\right]. \end{equation} As the transitions occur with a rate which is at most $N$, the second term is exponentially small e.g. for $i=N^5$ and this concludes the proof. \end{proof} Then when $\mathcal A$ holds, we can derive an efficient lower bound on $u(t)$ which just depend on $A(t)$. Recall \eqref{defh} \begin{lemma}\label{fromzea} When $\mathcal A$ holds we have for all $t\le N^3$ \begin{equation} u(t)\ge \frac 1 3 \min\left( N, \frac{A(t)}{\max(H(\xi^1_t)+H(\xi^2_t), N^{1/2})}\right) \end{equation} \end{lemma} \begin{proof} If $\xi^1_t$ and $\xi^2_t$ have no contact with each other, then $u(t)$ is equal to the total number of flippable corners in $\xi^2_t$ and $\xi^1_t$. If ${\ensuremath{\mathcal A}} $ holds, this number is larger than $N/3$, which, by definition of ${\ensuremath{\mathcal A}} $, is a lower bound for the number of corners on $\xi^1_t$ alone. When there exists $x$ such that $\xi^1_t(x)=\xi^2_t(x)$, we consider the set of active coordinates \begin{equation}\label{defct} C(t):=\{\exists y\in \{x-1,x,x+1\}, \xi^1_t(y)< \xi^2_t(y)\}. \end{equation} Note that when one of the $\xi^i_t$ (or both) have a local maximum at $x\in C(t)$ then when the corresponding corner flips,f it changes the value of $A(t)$. Our idea is to find a way to bound from below the number of $x$ in $C(t)$ for which $\xi^1(t)$ has a flippable corner, using the assumption that ${\ensuremath{\mathcal A}} $ holds. \medskip Let us decompose $C(t)$ into connected components (for the graph ${\ensuremath{\mathbb Z}} _N$) which are intervals as defined in \eqref{theinterval}. Assume that $[a,b]$ is a connected component of $C(t)$, it corresponds to a ``bubble'' between $\xi^1_t$ and $\xi^2_t$ (see Figure \ref{bubble}). For each bubble, we want to have a bound on the number of flippable corners and compare it to the area of the bubble. Set \begin{equation}\label{defuab} u_{[a,b]}(t):=j(a,b,\xi^1_t). \end{equation} and \begin{equation}\label{bubarea} A_{[a,b]}(t):=\sum_{x=a}^{b}\xi^2_t(x)-\xi^{1}_t(x). 
\end{equation} Note that \begin{equation}\label{lesum}\begin{split} u(t)&\ge \sum_{\text{ all bubbles}} u_{[a,b]}(t),\\ A(t)&= \sum_{\text{ all bubbles}} A_{[a,b]}(t). \end{split}\end{equation} \begin{figure}[hlt] \epsfxsize =12.5 cm \begin{center} \psfragscanon \psfrag{etw}{$\xi^2_t$} \psfrag{etav}{$\xi^1_t$} \psfrag{buble}{$[a,b]$} \epsfbox{bubbles.eps} \end{center} \caption{\label{bubble} The bubble decomposition: The interval $[a,b]$ displayed here corresponds to a bubble. The large circle dots corresponds to corner which do not flip simultaneously for $\xi^1_t$ and $\xi^2_t$, the total number of them is $u(t)$. Among those, the white circles are the one which are counted in one of the $u_{[a,b]}(t)$ (note that some of the corners on $\xi^1_t$ are not counted). The smaller circles corresponds to corner which flip together for $\xi^1_t$ and $\xi^2_t$ and thus do not contribute to $u(t)$.} \end{figure} For small bubbles (of length smaller than $N^{1/4}$), $\mathcal A$ does not give any information on the number of flippable corners. However, we can simply observe that in any bubble, there is at least one flippable corners (e.g. where $\min_{x\in [a,b]}\xi^1_x$ is attained). If $\#[a,b] \le N^{1/4}$, the area of the bubble satisfies \begin{equation} A_{[a,b]}(t)\le N^{1/2}. \end{equation} This is because $\xi^2_t-\xi^{1}_t$ is a Lipchitz function and equals zero at both ends. Hence necessarily \begin{equation}\label{esquiun} u_{[a,b]}(t)\ge \frac{A_{[a,b]}(t)}{N^{1/2}}. \end{equation} For large bubbles ($\#[a,b]\ge N^{1/4}$) we can use the fact that ${\ensuremath{\mathcal A}} $ hold. First, let us control the area: as $\xi^1_t$ and $\xi^2_t$ are in contact we have \begin{equation} \max_{x\in {\ensuremath{\mathbb Z}} _N} \xi^2_t(x)- \xi^1_t(x)\le H(\xi^1_t)+H(\xi^2_t), \end{equation} and hence \begin{equation} A_{[a,b]}(t)\le \#[a,b] (H(\xi^1_t)+H(\xi^2_t)). \end{equation} By the definition of ${\ensuremath{\mathcal A}} $ there are at least $\#[a,b]/3$ flippable corners on the path $\xi^1_t$ restricted to $[a,b]$. Thus \begin{equation} u_{[a,b]}(t)\ge \frac{A_{a,b}(t)}{3( H(\xi^1_t)+H(\xi^2_t))}. \end{equation} and we can deduce (recall \eqref{esquiun}) that for any value $\#[a,b]$ we have \begin{equation}\label{esqui2} u_{a,b}(t)\ge \frac{A_{a,b}(t)}{3\max(H(\xi^1_t)+H(\xi^2_t),\sqrt{N)})}. \end{equation} We conclude by summing \eqref{esqui2} over all bubbles and using \eqref{lesum}. \end{proof} The previous lemma gives us some control over $u(t)$ (if ${\ensuremath{\mathcal A}} $ holds) provided we can control $A(T)$ and \begin{equation}\label{defht} H(t):=H(\xi^1_t)+H(\xi^2_t). \end{equation} To control the area, we use our multi-scale construction: for $t\in [\tau_{i-1}, \tau_i)$, we have $$A(t)\ge N^{3/2} 2^{-i}.$$ To obtain a good control on $H(t)$ we use the following concentration result. \begin{lemma}\label{cromeski} There exists a constant $c$ such that for any $t\ge 0$ and $r\ge 0$. \begin{equation} {\ensuremath{\mathbb P}} \left[ H(t)\ge r \sqrt{N} \right]\le 2\exp(-c r^2). \end{equation} \end{lemma} \begin{proof} This just comes from the fact that for any $t>0$, $i=1,2$, $\nabla \xi^i_t$ are distributed according to $\mu$, and then we use \eqref{fluquetec}. \end{proof} We use it to show that most of the time $H(t)$ is of order $\sqrt{N}$. In fact we need a sightly more twisted statement that fits the multi-scale analysis. 
For the remainder of the proof set \begin{equation} \alpha:=\left(\sum_{i\ge 0} (i+1)^2 \right)^{-1} \end{equation} \begin{lemma}\label{coniduam} For any $\gep>0$, there exists a constant $C(\gep)$ such that for any $T\ge 0$ \begin{equation} {\ensuremath{\mathbb P}} \left[ \exists i\in \{0,\dots, K\}, \int^T_0 \mathbf{1}_{\{ H(t)\ge C (i+1)^2 \sqrt{N}\}}\dd t \ge (\alpha/2) (i+1)^{-2}T \right] \le \gep. \end{equation} \end{lemma} \begin{proof} For a fixed $i$ from Lemma \ref{cromeski}, we have \begin{equation} {\ensuremath{\mathbb E}} \left[ \int^T_0 \mathbf{1}_{\{H(t)\ge r(i+1)^2 \sqrt{N}\}} \dd t \right] \le 2Te^{-cr^2(i+1)^2}. \end{equation} Hence from the Markov inequality \begin{equation} {\ensuremath{\mathbb E}} \left[ \int^T_0 \mathbf{1}_{\{H(t)\ge r(i+1)^2 \sqrt{N}\}}\dd t\le (\alpha/2) (i+1)^{-2}T \right] \le 4\alpha^{-1}(i+1)^2 \exp(-cr^2(i+1)^2). \end{equation} Hence we obtain the result by choosing $C=r_0$ sufficiently large so that \begin{equation} 4 \sum_{i\ge 0} \alpha^{-1}(i+1)^2 \exp(-cr^2(i+1)^2)\le \gep. \end{equation} \end{proof} Combining Lemma \ref{cromican}, \ref{smalla}, \ref{fromzea}, and \ref{coniduam} we can now conclude. \begin{proof}[Proof of \eqref{nahnou1}] Let $\gep$ be fixed. We fix a constant $C_1(\gep)$ such that Lemma \ref{coniduam} holds for $\gep/3$ instead of $\gep$. $C_2(\gep,s)$ is chosen so that Lemma \ref{cromican} holds for $\gep/3$. We define the events \begin{equation} \begin{split} \mathcal B&:= \left\{ \forall i\in \{0,\dots, K\}, \int^{T}_0 \mathbf{1}_{\{ H(t)\ge C_1 (i+1)^2 \sqrt{N}\}}\dd t\le(\alpha/2) (i+1)^{-2}T \right\},\\ \mathcal C&:=\left \{\forall i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\le C_2 N^3 3^{-i}\right\} \cap \{{\ensuremath{\mathcal T}} _\infty\le C_2 N^2 \}, \end{split} \end{equation} where \begin{equation}\begin{split} T&:=6 N^2 C_1 C_2 \alpha^{-1} \max_{i\ge 0}\left[ (i+1)^4(2/3)^i\right],\\ T'&:=T+C_2N^2. \end{split}\end{equation} We assume also that $N$ is large enough so that ${\ensuremath{\mathcal A}} $ holds with probability larger than $1-\gep/3$ (cf. Lemma \ref{fromzea}). Hence we have $${\ensuremath{\mathbb P}} [{\ensuremath{\mathcal A}} \cap{\ensuremath{\mathcal B}} \cap {\ensuremath{\mathcal C}} ] \ge 1-\gep$$ Now what remains to prove is that \begin{equation}\label{letrucs} \{{\ensuremath{\mathcal A}} \cap{\ensuremath{\mathcal B}} \cap {\ensuremath{\mathcal C}} \}\subset \{\tau \le T'\}. \end{equation} This implies \eqref{nahnou1}, with \begin{equation} C(\gep,s)=T'/N^2=C_2 \left( 6C_1\alpha^{-1} \max_{i\ge 0}\left[ (i+1)^4(2/3)^i\right]+1\right). \end{equation} We split the proof of \eqref{letrucs} in two statements. We want to show first that on the event ${\ensuremath{\mathcal A}} \cap {\ensuremath{\mathcal B}} \cap {\ensuremath{\mathcal C}} $, \begin{equation}\label{ige1} \tau-\tau_K\le C_2 N^2. \end{equation} and then that \begin{equation}\label{ige2} \forall i\in \{0,\dots, K\}, \quad (\tau_{i}-\tau_{i-1})\le C_3(i+1)^{-2}N^2 , \end{equation} where $$C_3:=6 C_2C_1\max_{i\ge 0}\left[ (i+1)^4(2/3)^i\right].$$ Combined these inequalities, we have \begin{equation} \tau\le C_2N^2+\sum_{i=0}^K C_3(i+1)^{-2}N^2\le T'. \end{equation} Note that the \eqref{ige1} is an immediate consequence of ${\ensuremath{\mathcal C}} $ as \begin{equation} {\ensuremath{\mathcal T}} _\infty=\int_{\tau_K}^\tau u(t)\dd t\ge \tau-\tau_K. \end{equation} Let us turn to \eqref{ige2}. 
Let us assume that the statement is false and let $i_0$ denote the smallest $i$ such that $$(\tau_{i}-\tau_{i-1})> C_3 (i+1)^{-2} N^2.$$ The definition of $i_0$ (the fact that it is the smallest) implies that \begin{equation}\label{bet} \tau_{i_0-1}+ C_3(i_0+1)^{-2} N^2\le T. \end{equation} From ${\ensuremath{\mathcal B}} $ we have (using \eqref{bet} inequality for the second inequality) \begin{multline}\label{cardrive} \int_{\tau_{i_0-1}}^{\tau_{i_0}} \mathbf{1}_{\{H(t)\le C_1 (1+i_0)^2 \sqrt{N}\}}\dd t\\ \ge \int_{\tau_{i_0-1}}^{\tau_{{i_0}}+ C_3 (i_0+1)^{-2} N^2} \mathbf{1}_{\{ H(t)\le C_1 (1+i_0)^2 \sqrt{N}\}}\dd t \\ \ge C_3 (i_0+1)^{-2} N^2-\int_0^T \mathbf{1}_{\{ H(t)\ge C_1 (1+i_0)^2 \sqrt{N} \}}\dd t\\ \ge C_3 (i_0+1)^{-2} N^2-(\alpha/2) T(i_0+1)^{-2}\ge \frac{1}{2} C_3(i_0+1)^{-2} N^2. \end{multline} For all $t\le \tau_{i_0}$, we have $A(t)\ge N^{3/2} 2^{-i_0}$ by definition and thus from Lemma \ref{fromzea} and the assumption that ${\ensuremath{\mathcal A}} $ holds, we have \begin{equation} u(t)\ge \frac 1 3 \min\left(N, \frac{A(t)}{\max(H(t),N^{1/2})}\right)\ge\frac{N^{3/2} 2^{-i_0}}{3C_1 (i_0+1)^2 \sqrt{N}}\mathbf{1}_{\{H(t)\le C_1 (i_0+1)^2 \sqrt{N}\}}. \end{equation} Hence from \eqref{cardrive} \begin{multline} {\ensuremath{\mathcal T}} _{i_0}\ge \frac{N^{3/2} 2^{-i_0}}{C_1 (i_0+1)^2 \sqrt{N}} \int_{\tau_{i_0-1}}^{\tau_{i_0}} \mathbf{1}_{\{H(t)\le C_1 (i_0+1)^2 \sqrt{N}\}}\dd t\\ \ge \frac{1}{6 C_1} C_3 N^3 2^{i_0}(i_0+1)^{-4}\ge 3^{i_0}C_2 N^2, \end{multline} where the last inequality comes from the definition of $C_3$. This brings a contradiction to the fact that ${\ensuremath{\mathcal C}} $ holds and ends the proof of \eqref{ige1}. \end{proof} \subsection{Proof of \eqref{nahnou2}} We want to prove now that starting from $\chi\in \mathcal G_s$, we get significantly closer to equilibrium after a time $uN^2$. By mononicity in $u$ and $s$ we can restrict to the case where $u=2s^{-1}$ (to avoid using too many letters) and assume that $s$ is sufficiently large. The elements of the proof are essentially the same that for \eqref{nahnou1} but we have to be more careful. Instead of Lemma \ref{cromican} we have to prove the following statement \begin{lemma}\label{cromican3} There exists a constant $C$ such that for all $s$ sufficiently large \begin{equation} {\ensuremath{\mathbb P}} \left[ \left \{\forall i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\le s^{-7} N^3 3^{-i}\right\} \cap \{{\ensuremath{\mathcal T}} _\infty\le s^{-1}N^2 \}\right]\ge e^{-Cs^{9}} \end{equation} \end{lemma} \begin{proof} A first observation is that, by the Markov property for the random walk $X_t=A(J(t))$ (recall \eqref{defx}), the ${\ensuremath{\mathcal T}} _i$ are independent. To evaluate ${\ensuremath{\mathbb P}} \left[ {\ensuremath{\mathcal T}} _i\le s^{-6} N^3 3^{-i}\right]$, we are going to use a classical estimate of first hitting time of a given level for a simple symmetric random walk: the there exists a constant $C_1$ such that for all $a\ge 1$, for all $v\le a$ one has (using the notation of \eqref{cromican}) \begin{equation}\label{crooops} {\ensuremath{\mathbb P}} \left[ T_{a} \le a^{2}/v \right]\ge e^{-C_1 \max(v^{1/2},v)}, \end{equation} (it is sufficient to check the estimate when $v$ is large as for $v$ close to zero it is just equivalent to \eqref{crooop}). The time ${\ensuremath{\mathcal T}} _\infty$ is stochastically dominated by $T_N$ and thus \begin{equation}\label{triton} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal T}} _\infty\le s^{-1}N^2]\ge e^{-C_1 s}. 
\end{equation} Neglecting the effect of integer rounding, for $i\ge 0$, ${\ensuremath{\mathcal T}} _i$ is equal in law to $T_{2^{-i}N\sqrt{k}}$. Hence from \eqref{crooop} we have \begin{equation}\label{gronac} {\ensuremath{\mathbb P}} [ {\ensuremath{\mathcal T}} _i\ge s^{-7} N^3 3^{-i} ]\ge e^{-C_1 \max((4/3)^{-i/2}s^{7/2},(4/3)^{-i}s^{7}))}. \end{equation} Then note that there exists a constant $C_2$ such that for all $s\ge 1$ \begin{equation}\label{okafor} C_1\sum_{i=1}^K \max\left((4/3)^{-i/2}s^{7/2},(4/3)^{-i}s^{7}\right)\ge C_2s^7. \end{equation} For $i=0$, ${\ensuremath{\mathcal T}} _0$ depends on initial value of the area. Now let us note that from \eqref{grimic} we have for $s$ sufficiently large \begin{equation}\label{grima} {\ensuremath{\mathbb P}} \left[A(0)\le 4s N \right]\ge1- 2e^{-cs^2}\ge 1/2. \end{equation} Then conditioned on the event $\left\{ A(0)\le 4sN \right\}$, ${\ensuremath{\mathcal T}} _0$ is dominated by $T_{4sN}$ and hence \begin{equation} {\ensuremath{\mathbb P}} \left[{\ensuremath{\mathcal T}} _0\ge s^{-7} N^3 \ | \ A(0)\le 4s N \right]\ge \exp\left(-16 C_1 s^{9}\right). \end{equation} Combined with \eqref{grima}, this gives us \begin{equation}\label{gronic} {\ensuremath{\mathbb P}} \left[{\ensuremath{\mathcal T}} _0\ge s^{-4} N^3 \right]\ge e^{-16C_1 s^{9}}/2. \end{equation} Using the independence and multiplying the inequalities \eqref{gronic}, \eqref{gronac} and \eqref{triton}  (and using \eqref{okafor}) we obtain the result for some appropriate $C$. \end{proof} We also need an estimate on the probability that $H(t)$ (recall \eqref{defht}) is too large which slightly differs from Lemma \ref{coniduam} \begin{lemma}\label{coniduam2} Recall $\alpha:=(\sum_{i\ge 0}(i+1)^{-2})^{-1}$. There exists a constant $C$ such that for all $s$ sufficiently large and all $T$ \begin{equation} {\ensuremath{\mathbb P}} \left[ \exists i\in \{0,\dots, K\}, \int^{T}_0 \mathbf{1}_{\{ H(t)\ge s^{5} (i+1)^{2} \sqrt{N}\}}\dd t \ge (\alpha/2)(i+1)^{-2}T \right] \le e^{-C s^{10}}. \end{equation} \end{lemma} \begin{proof} As for Lemma \ref{coniduam} we just use Lemma \ref{cromeski} and the Markov inequality. \end{proof} \begin{proof}[Proof of \eqref{nahnou2}] Set $T:=s^{-1}N^2$ and \begin{equation} \begin{split} \mathcal B'&:= \left\{ \forall i\in \{0,\dots, K\}, \int^{T}_0 \mathbf{1}_{\{ H(t)\ge s^{5} (i+1)^{2} \sqrt{N}\}}\dd t\le (\alpha/2) (i+1)^{-2}T \right\},\\ \mathcal C'&:=\left \{\forall i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\le s^{-7} N^3 3^{-i}\right\} \cap \{{\ensuremath{\mathcal T}} _\infty\le N^2s^{-1} \}, \end{split} \end{equation} Then from Lemma \ref{cromican}, \ref{smalla}, and \ref{coniduam} we have for $s$ sufficiently large, for $N$ large enough (depending on $s$) \begin{equation} {\ensuremath{\mathbb P}} \left[{\ensuremath{\mathcal A}} \cap {\ensuremath{\mathcal B}} '\cap {\ensuremath{\mathcal C}} ' \right]\ge \exp(-C_1 s^9)/2. \end{equation} What remains to prove is that $\tau \le 2N^2$ on the event ${\ensuremath{\mathcal A}} \cap {\ensuremath{\mathcal B}} '\cap {\ensuremath{\mathcal C}} '$. First, we notice that from the definition of ${\ensuremath{\mathcal T}} _\infty$ \eqref{freddo}, ${\ensuremath{\mathcal C}} '$ readily implies that $$\tau-\tau_K\le {\ensuremath{\mathcal T}} _\infty\le N^2 s^{-1}$$ Hence to conclude we need to show that \begin{equation} \forall i\in \{0,\dots,K\}, \quad \tau_i-\tau_{i-1}\le \alpha (i+0)^{-2} T. \end{equation} Assume the statement is false and let $i_0$ be the smallest index such that it is not satisfied. 
Using Lemma \ref{fromzea} we have \begin{multline} {\ensuremath{\mathcal T}} _{i_0}=\int_{\tau_{i_0-1}}^{\tau_{i_0}} u(t)\dd t \ge \int_{\tau_{i_0-1}}^{\tau_{i_0}} \frac 1 3 \min\left( N , \frac{A(t)}{\max(H(t),N^{1/2})}\dd t\right)\\ \ge \frac{2^{-i_0} N}{3s^{5} (i+1)^{2}} \int_{\tau_{i_0-1}}^{\tau_{i_0}} \mathbf{1}_{\{H(t)\le s^{5} (i+1)^{2} \sqrt{N}\}}\dd t. \end{multline} Then one has from the definition of $i_0$ $$\tau_{i_0-1}+ \alpha (i+0)^{-2}\le N^2.$$ Hence from ${\ensuremath{\mathcal C}} '$ \begin{multline} \int_{\tau_{i_0-1}}^{\tau_{i_0}} \mathbf{1}_{\{H(t)\le s^{5} (i+1)^{2} \sqrt{N}\}}\dd t\\ \ge \int_{\tau_{i_0-1}}^{\tau_{i_0-1}+\alpha (i_0+1)^{-2} T} \mathbf{1}_{\{ H(t)\le s^5 (1+i_0)^2 \sqrt{N}\}}\dd t \\ \ge \alpha (i_0+1)^{-2} N^2-\int_0^{T} \mathbf{1}_{\{ H(t)\ge s^5 (1+i_0)^2 \sqrt{N} \}}\dd t\\ \ge (\alpha/2)(i_0+1)^{-2} T, \end{multline} and thus (recall $T=N^2s^{-1}$) \begin{equation} {\ensuremath{\mathcal T}} _{i_0}\ge (\alpha/2)2^{-i_0}(i_0+1)^{2} N^3s^{-6}\ge C_2 3^{-i_0} N^3 s^{-6}. \end{equation} for an explicit $C_2$. This brings a contradiction to ${\ensuremath{\mathcal B}} '$ if $s$ is chosen sufficiently large. \end{proof} \subsection{Proof of Proposition \ref{nahnou} for arbitrary $k$} The overall strategy is roughly the same, except that we start with an area which is of order $k^{1/2}N$. Hence most of the modifications in the proof can be performed just writing $\sqrt{k}$ instead of $N^{1/2}$. However Lemma \ref{smalla} does not hold for small values of $k$ and one needs a deeper change there. We define the $(\tau_i)_{i\ge 0}$ as follows ($\tau_{-1}:=0$) \begin{equation} \tau_i:=\inf\{t\ge 0 \ | \ A(t)\le k^{1/2}N2^{-i} \}. \end{equation} and we set \begin{equation}\label{defK2} K_N:=\left\lceil \frac{1}{2}\log_2 k \right\rceil. \end{equation} Note that a number of $\tau_i$ can be equal to zero if $A(0)\le N^{3/2}$. \medskip The time changed version ${\ensuremath{\mathcal T}} _i$, $i\in \{0,\dots,K\}\cup\{\infty\}$ of $\Delta\tau_i$ are defined as in \eqref{freddo}. We first write down how Lemma \ref{cromican}-\ref{cromican3} and \ref{coniduam}-\ref{coniduam2} can be reformulated in the context of $k$ particles (the proofs are exactly the same and thus are not included). \begin{lemma}\label{cromican2} Given $\gep$ there exists a constant $C_1(\gep,s)$ such that \begin{equation}\label{cranchk} {\ensuremath{\mathbb P}} \left[ \left \{\exists i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\ge C_1 N^2 k 3^{-i}\right\} \cup \{{\ensuremath{\mathcal T}} _\infty> C_1 N^2 \}\right]\le \gep. \end{equation} and a constant $C_2$ independent of the parameters \begin{equation}\label{cranchk2} {\ensuremath{\mathbb P}} \left[ \left \{\exists i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\ge N^2 k s^{-6}\right\} \cup \{{\ensuremath{\mathcal T}} _\infty> N^2 \}\right]\ge \exp\left(-C_2s^{8}\right). \end{equation} \end{lemma} \begin{lemma}\label{coniduam3} Recall $\alpha:=(\sum_{i\ge 0} (i_0+1)^{-2})^{-1}.$ For any $\gep>0$, exists a constant $C_1(\gep)$ such that for any $T\ge 0$ \begin{equation}\label{crimouk} {\ensuremath{\mathbb P}} \left[ \exists i\in \{0,\dots, K\}, \int^T_0 \mathbf{1}_{\{ H(t)\ge C_1 (4/3)^i \sqrt{k}\}}\dd t \ge (\alpha/4) (i+1)^{-2}T \right] \le \gep. 
\end{equation} Moreover there exist a constant $C_2$ such that for any $T\ge 0$, \begin{equation}\label{crimouk2} {\ensuremath{\mathbb P}} \left[ \exists i\in \{0,\dots, K\}, \int^{T}_0 \mathbf{1}_{\{ H(t)\ge s^{5}(4/3)^i \sqrt{k}\}}\dd t \ge (\alpha/4) (i+1)^{-2}N^2 \right]\dd t \le e^{-C_2 s^{10}}. \end{equation} \end{lemma} A significant modification is however needed for Lemma \ref{smalla}, as for small $k$, we cannot define an event similar to \eqref{defca} which holds with high probability. Set (recall \eqref{defj}) \begin{multline} \mathfrak{A}:=\left\{ \xi \in \Omega_{k,N} \ | \ \#[x,y]\ge N/k (\log k)^2 \Rightarrow j(x,y,\xi)\ge \frac{k}{10N}\#[x,y]\right\}\\ \cup \left\{ \xi \ | \ \#[x,y]\le N/k (\log k)^2 \Rightarrow |\xi(x)-\xi(y)|\le (\log k)^4\right\}=:\mathfrak{A}_1\cup \mathfrak{A}_2. \end{multline} Note that ${\ensuremath{\mathfrak A}} $ is invariant by vertical translation of $\xi$ and thus only depends on $\nabla \xi$. Hence we can (improperly) consider it as a subset of $\Omega_{N,k}$. \begin{lemma}\label{smalla2} We have \begin{equation} \mu(\mathfrak{A})\le \frac{1}{k^2}, \end{equation} as a consequence one has for every $T\ge 0$ \begin{equation} {\ensuremath{\mathbb P}} \left[ \int^T_0 \mathbf{1}_{\{ \xi_1(t)\in \mathfrak A \}}\dd t \ge (T/k) \right]\le 1/k \end{equation} \end{lemma} \begin{proof} In this proof it is somehow easier to work with the particle system, hence we let $\mu$ be the uniform measure on $\Omega_{N,k}$. We consider $\tilde \mu$ an alternative measure on $\{0,1\}^{{\ensuremath{\mathbb Z}} _N}$, under which the $\eta(x)$ are i.i.d.\ Bernoulli random variable with parameter $k/N$. From the Local Central Limit Theorem (which in this simple case can be proved using the Stirling Formula), there exists a constant $C_1$ such that for all choices of $k$ and $N$ \begin{equation} \tilde \mu\left(\sum_{x\in {\ensuremath{\mathbb Z}} _N}\eta(x)=k \right)\ge \frac{1}{C_1\sqrt k}. \end{equation} Hence for any event $A\subset \{0,1\}^{{\ensuremath{\mathbb Z}} _N}$, we have \begin{equation} \mu(A)=\tilde \mu\left(A \ | \ \sum_{x\in {\ensuremath{\mathbb Z}} _N}\eta(x)=k \right)\le C_1\sqrt k \tilde \mu(A). \end{equation} Hence to prove the result, we just have to prove a slightly stronger upper-bound for the probability $\tilde \mu({\ensuremath{\mathfrak A}} )$. We start proving that \begin{equation} \tilde \mu({\ensuremath{\mathfrak A}} _2)\le \frac{1}{k^{3}}. \end{equation} In terms of particle, ${\ensuremath{\mathfrak A}} _2$ holds any interval of length smaller than $N/k (\log k)^2$ contains at most $(\log k)^4$ particles. Set $$m_{k,N}=m:= \lceil N/k (\log k)^2 \rceil.$$ It is a standard large deviation computation (computing the Laplace transform and using the Markov inequality) to show that there exists a constant $c$ such that \begin{equation}\label{gramnic} \tilde \mu\left( \sum_{x=1}^m \eta(x)\ge \frac{(\log k)^4}{2}\right)\le \exp(- c (\log k)^6). 
\end{equation} Hence by translation invariance, the probability that there exists an interval of the form $[(i-1)m+1,im]$, $i\in \{1,\dots \lceil N/m \rceil+1\}$ which contains at most $(\log k)^4/2$ particles is smaller than $$2 k (\log k)^2 \exp(- c (\log k)^6)\le k^{-2}.$$ As every interval of length smaller than $m$ is included in in the union of at most two intervals of the type $[(i-1)m+1,im]$ we have $$\tilde \mu({\ensuremath{\mathfrak A}} _2) \le \tilde \mu\left( \exists i \in \{1,\dots \lceil N/m \rceil+1\}, \sum_{x=(m-1)i+1}^{mi} \eta(x) \ge \frac{(\log k)^4}{2}\right)\le k^{-2}.$$ We now prove \begin{equation} \tilde \mu({\ensuremath{\mathfrak A}} _1)\le \frac{1}{k^{3}}. \end{equation} In terms of particle, having a local extremum at $x$ just corresponds to $\eta(x)\ne \eta(x+1)$. Note that if ${\ensuremath{\mathcal A}} $ occurs then, by dichotomy, there exists necessarily an interval of length comprised between $m$ and $2m$ in which the density of extrema is smaller than $k/10N$ and hence the total number of extrema in it is smaller than $\frac k {5N} m$. Setting $m'=\lceil m/2 \rceil$ this interval must include an interval of the type $[(i-1)m'+1,im']$, with $i\in \{1,\dots \lceil N/m' \rceil\}$, in which there are at most $k/5N m$ extrema. Hence, noting that $$ \frac k {5N} m\le \frac{(\log k)^2}{5} \quad \text{ and } \lceil N/m' \rceil\le 3 k(\log k)^2,$$ we have \begin{multline}\label{gromido} \tilde \mu({\ensuremath{\mathfrak A}} _1)\le \tilde \mu\left( \exists i\in \{1,\dots \lceil N/m' \rceil\} \sum_{x=(i-1)m'+1}^{im'} \mathbf{1}_{\{\eta(x)\ne \eta(x+1)\}}\ge \frac{(\log k)^2}{5}\right) \\ \le 3 k(\log k)^2\tilde \mu\left( \sum_{i=1}^{m'} \mathbf{1}_{\{\eta(x)\ne \eta(x+1)\}}\ge \frac{(\log k)^2}{5}\right). \end{multline} Now we remark that $$\tilde\mu\left(\eta(x)\ne \eta(x+1)\right)=\frac{3k(N-k)}{N^2}\ge \frac{k} N.$$ As the variables $\mathbf{1}_{\{\eta(2x-1)\ne \eta(2x)\}}$ are i.i.d for $x\in \{1, \lceil m'/2 \rceil\}$ we can use standard large deviation techniques for sums of i.i.d. variables and obtain that there exists a constant $c$ for which \begin{equation} \tilde \mu\left( \sum_{i=1}^{m'} \mathbf{1}_{\{\eta(x)\ne \eta(x+1)\}}\ge \frac{(\log k)^2}{5} \right)\le \tilde \mu\left( \sum_{x=1}^{\lceil m'/2 \rceil} \mathbf{1}_{\{\eta(2x-1)\ne \eta(2x)\}}\ge \frac{(\log k)^2}{5}\right) \le e^{-c(\log k)^2}. \end{equation} This combined with \eqref{gromido} allows us to conclude. \end{proof} \begin{lemma}\label{fromzea2} When $\xi_1(t)\in {\ensuremath{\mathfrak A}} $ we have \begin{equation} u(t)\ge \frac{1}{10}\min\left(k, \frac{A(t)k}{N \max(H(t), (\log k)^6)}\right) \end{equation} \end{lemma} \begin{proof} If $\xi^1_t\in {\ensuremath{\mathfrak A}} $ and $\xi^2_t(x)>\xi^1_t(x)$, for all $x\in {\ensuremath{\mathbb Z}} _N$, then all corners of $\xi^1_t$ give a contribution to $u(t)$ and from the assumption $\xi_1(t)\in {\ensuremath{\mathfrak A}} $ there are at least $k/10$ of them. \medskip If there are some contacts between $\xi^1_t$ and $\xi^2_t$, the idea of the proof is to control the contribution to the area and to $u(t)$ of each bubble. Recall \eqref{defct} and \eqref{bubarea}. Assume that the interval $[a,b]$ with $\#[a,b]\le N/k (\log k)^2$ is a bubble. It has at least one flippable corner. 
For any $x\in [a,b]$, $i=1,2$ we have \begin{equation} \xi^1_t(a)-\frac{ k\#[a+1,x]}{N} \le \xi^i_t(x)\le \xi^1_t(b)+\frac{ k\#[x+1,b]}{N}, \end{equation} and hence \begin{equation} \max_{x\in [a,b]} (\xi^2_t-\xi^1_t)(x)\le (\xi^1_t(b)- \xi^1_t(a))+\frac{ k\#[a,b]}{N} \end{equation} From the definition of ${\ensuremath{\mathfrak A}} $, the right-hand side is smaller than $2(\log k)^4$ and hence \begin{equation} A_{[a,b]}(t)\le 2(\log k)^4\#[a,b]\le (N/k) (\log k)^6. \end{equation} And hence (recall \eqref{defuab}, \eqref{bubarea}) \begin{equation}\label{smabub} u_{[a,b]}(t)\ge 1\ge \frac{A_{[a,b]}(t) k}{N (\log k)^6}. \end{equation} For large bubbles with $\#[a,b]\ge \frac{2N} k (\log k)^2$ \begin{equation} A_{[a,b]}(t)\le H(t) \#[a,b] \end{equation} and thus from the definition of ${\ensuremath{\mathfrak A}} $ \begin{equation}\label{bigbub} u_{[a,b]}(t) \ge \frac{k}{10N} \#[a,b]\ge \frac{A_{[a,b]}(t)k}{10N H(t)}. \end{equation} The lemma is the proved by summing \eqref{smabub} and \eqref{bigbub} over all bubbles. \end{proof} We are now ready to combine the ingredients for the the of Proposition \ref{nahnou} for arbitrary $k$. We prove only \eqref{nahnou1}, as \eqref{nahnou2} can also be obtained in the same manner by adapting the technique used in the case $k=N/2$. \begin{proof}[Proof of \eqref{nahnou1} for arbitrary $k$] We fix a constant $C_1(\gep)$ such that \eqref{cranchk} holds for $\gep/3$ instead of $\gep$. $C_2(\gep,s)$ is chosen so that \eqref{crimouk} holds for $\gep/3$. \begin{equation} \begin{split} \mathcal A&:=\left\{\int^T_0 \mathbf{1}_{\{ \xi_1(t)\notin \mathfrak A \}}\dd t \le (T/k)\right\},\\ \mathcal B&:= \left\{ \forall i\in \{0,\dots, K\}, \int^{T}_0 \mathbf{1}_{\{ H(t)\ge C_1 (i+1)^2 \sqrt{k}\}}\le (\alpha/4) (i+1)^{-2}T \right\},\\ \mathcal C&:=\left \{\forall i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\le C_2 N^2k 3^{-i}\right\} \cap \{{\ensuremath{\mathcal T}} _\infty< C_2 N^2 \}, \end{split} \end{equation} where \begin{equation}\begin{split} T&:=40 N^2 C_1 C_2 \alpha^{-1} \max_{i\ge 0}\left[ (i+1)^4(2/3)^i\right],\\ T'&:=T+C_2N^2. \end{split}\end{equation} From our definitions one has \begin{equation} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal A}} \cap {\ensuremath{\mathcal B}} \cap{\ensuremath{\mathcal C}} ] \ge 1- \gep. 
\end{equation} Then we conclude by doing the same reasoning as in the case $k=N/2$ that $${\ensuremath{\mathcal A}} \cap {\ensuremath{\mathcal B}} \cap{\ensuremath{\mathcal C}} \subset \{\tau \le T'\} $$ using that for $k$ sufficiently large, on the event ${\ensuremath{\mathcal A}} \cap {\ensuremath{\mathcal B}} $ we have \begin{equation} \forall i\in \{0,\dots, K\}, \int^{T}_0 \mathbf{1}_{\{ H(t)\ge C_1 (i+1)^2 \sqrt{k} \text{ or } \xi_1(t)\notin \mathfrak A \}}\le (\alpha/2) (i+1)^{-2}T \end{equation} Concerning \eqref{nahnou2} we set $T=s^{-1}N^2$ \begin{equation} \begin{split} \mathcal A'&:=\left\{\int^{T}_0 \mathbf{1}_{\{ \xi_1(t)\notin \mathfrak A \}}\dd t \le (T/k)\right\},\\ \mathcal B'&:= \left\{ \forall i\in \{0,\dots, K\}, \int^{N^2}_0 \mathbf{1}_{\{ H(t)\ge s^{6}(i+1)^2 \sqrt{k}\}}\dd t\le (\alpha/4) (i+1)^{-2}N^2 \right\},\\ \mathcal C'&:=\left \{\forall i\in \{0,\dots, K\}, {\ensuremath{\mathcal T}} _i\le s^{-7} N^2k 3^{-i}\right\} \cap \{{\ensuremath{\mathcal T}} _\infty< s^{-1}N^2/2 \}, \end{split} \end{equation} and use \eqref{cranchk2} , \eqref{crimouk2} and \eqref{fromzea2} to have for all sufficiently large $s$, \begin{equation} {\ensuremath{\mathbb P}} [{\ensuremath{\mathcal A}} '\cap {\ensuremath{\mathcal B}} '\cap{\ensuremath{\mathcal C}} '] \ge e^{-Cs^{-6}} \end{equation} and conclude as in the proof for $k=N/2$ that $$ \{\tau \le 2N^2\}\subset {\ensuremath{\mathcal A}} '\cap {\ensuremath{\mathcal B}} '\cap{\ensuremath{\mathcal C}} ' ,$$ using that if $N$ (and thus $k$) is sufficiently large we have on the event ${\ensuremath{\mathcal A}} '\cap{\ensuremath{\mathcal B}} '$ \begin{equation} \forall i\in \{0,\dots, K\}, \int^{N^2}_0 \mathbf{1}_{\{ H(t)\ge s^{6} (i+1)^2 \sqrt{k} \text{ or } \xi_1(t)\notin \mathfrak A \}}\dd t\le (\alpha/4) (i+1)^{-2}N^2. \end{equation} \end{proof} {\bf Acknowledgement: } The author is grateful to Fran\c{c}ois Huyveneers and Simenhaus for enlightening discussions and in particular for bringing to his knowledge the existence of the comparison inequalities for the exclusion process of Lemma \ref{ligett}.
285fceffe83db98198c5af7d82d096d4fe42b8a7
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} The aim of developing models capable of quantitatively forecasting the effects of policy changes has been an objective of economic science since at least \citeasnoun{hurwicz:50} and \citeasnoun{marschak:53}. Recent years, in contrast, have seen the popularity of design-based research strategies that focus on estimating particular parameters or causal effects. The design-based approach does not typically lend itself to ex ante policy evaluations involving changes outside of historical experience. Both \citeasnoun{angrist/pischke:10} and \citeasnoun{heckman:10} attribute the growth of the design-based research program to skepticism on the value of structural modeling for counterfactual analysis given its reliance on parametric and/or behavioral assumptions. Though opinions vary greatly on the credibility of structural models, there is one area of consensus: there are relatively few systematic comparisons of the ex ante counterfactual predictions of structural models after policy changes have occurred to predictions made from simpler methods. For instance, \citeasnoun{angrist/pischke:10} lament: \begin{quote} ``\textsf{Many new empirical industrial organization studies forecast counterfactual outcomes based on models and simulations, without a clear foundation in experience. [...] At minimum, we'd expect such a judgment to be based on evidence showing that the simulation-based approach delivers reasonably accurate predictions. As it stands, proponents of this work seem to favor it as a matter of principle.''}\footnote{There is a small literature in the context of merger analysis. \citeasnoun{peters:06} examines the predictive value of airline mergers and find that structural simulation methods yield poor predictions of post-merger ticket prices. \citeasnoun{ashenfelter/hosken:08} argue that ``transparently'' identified design-based estimates of the mergers differ markedly from those produced by the structural approach. \citeasnoun{einav/levin:10} acknowledge that there should be more retrospective analysis of past mergers, but question whether information from particular mergers can be extrapolated for other ones.} \end{quote} The goal of this paper is to fill this void by evaluating the performance of predictions from discrete-choice models of demand, which underlie much of the work in the new empirical industrial organization, using a large-scale policy change taking place in Boston in 2014. Each year, thousands of Boston's families submit rank order lists of public schools in the city's school choice plan. In 2013, Boston Public Schools (BPS), the Mayor, and members of the school committee wanted to change this plan to encourage students to attend schools closer to home. BPS put forth a number of plans that redraw the boundaries of the city and modify the set of schools applicants are allowed to rank by eliminating some choices and adding other choices. After these plans were described to the public, there was widespread interest in forecasting the choices families would make and in comparing the final assignments under these alternative proposals.\footnote{For more details, see the materials available at \textsf{http://bostonschoolchoice.org} and press accounts by \citeasnoun{goldstein:12} and \citeasnoun{handy:12}.} \citeasnoun{pathak/shi:13} used historical choices to predict the choices that participants would express under these alternative plans. 
They reported estimates of models of school demand based on historical choices and used those estimates as a basis for extrapolating how schools would be ranked under different choice menus. The analysis played a significant role in evaluating trade-offs among the proposals and ultimately led to the selection of a new plan. In 2014, families throughout Boston will rank schools under new choice menus, with the first deadline to submit preferences on January 31. The methods employed to make counterfactual forecasts in Boston are based on discrete choice models of demand. These methods are commonly used to forecast from structural models and were initially designed with this aim in mind \cite{mcfadden:01}. McFadden and co-authors studied the impact of BART, a new fixed-rail rapid transit system in San Francisco Area. They collected data on the travel behavior of a sample of individuals in 1972, prior to the introduction of BART, and estimated models that were used to predict the behavior of the same individuals in 1975 after BART began. Multinomial logit models were estimated using pre-BART commuter data at the realized attributes of the alternatives (auto alone, carpool, bus). \citeasnoun{mcfadden:01} reports that \begin{quote} ``\textsf{our overall forecasts for BART were quite accurate, particularly in comparison to the official 1973 forecast, obtained from aggregate gravity models, that BART would carry 15 percent of commute trips. We were lucky to be so accurate, given the standard errors of our forecasts, but even discounting luck, our study provided strong evidence that disaggregate RUM-based models could outperform conventional methods.}'' \end{quote} Based on the BART experience, random utility models became widely adopted in travel analysis and many other areas of economics involving consumer choice \cite{mcfadden:01}. More than three decades have progressed since the BART analysis and there has been considerable progress in demand modeling during this time. For instance, \citeasnoun{mcfadden:01} states that the methods used to account for substitution between modes of transportation were inferior to the modeling methods used today. The aim of this paper is to compare discrete-choice based forecasts of changes in school demand in Boston's school choice plan to alternative forecasts that come from a statistical model not founded on utility maximization. We will study how well ex ante counterfactual predictions match actual submitted choices of participants in the first year of the new system, and also examine other outcomes related to the assignments students obtain. This paper lays out our methodology and forecasts before new choices are submitted. A companion follow-up paper will compare these forecasts to newly reported choices when they become available. There are many reasons our exercise has the potential to provide unusually compelling evidence on the forecasting performance of structural demand models. First, the forecasts are based on flexible models of demand exploiting the revealed preferences that families expressed historically. The data not only includes student's top choices, but their entire submitted rank order list of schools. Rank order list data can potentially reveal richer information about substitution patterns between choices (see, e.g., \citeasnoun{berry/levinsohn/pakes:04}). Moreover, our dataset includes a large number of observables, including information on student characteristics and exact geographic location. 
Furthermore, Boston's choice plan has been in existence for more than two decades, meaning that there is a wealth of knowledge and shared experience about the system. The current strategy-proof system, in place since 2005, eliminates the need for participants to be strategic about their choices and the advice BPS provides participants in their school guide reflects this feature.\footnote{For instance, the 2012 School Guide states: ``List your school choices in your true order of preference. If you list a popular school first, you won't hurt your chances of getting your second choice school if you don't get your first choice.} Our exercise should be particularly informative on substitution patterns since the policy change predominantly involves a change in choice menus and relatively small changes in the characteristics of choices. As preference data become more widely available, there are also a growing number of papers that estimate models of school demand, and our results will speak to their reliability as a policy planning tool for school districts (see, e.g., \citeasnoun{abdulkadiroglu/agarwal/pathak:13}, \citeasnoun{hastings/kane/staiger:05}, \citeasnoun{walters:12}). Finally, since discrete choice models of demand are a building-block for many structural models that examine more complicated situations involving dynamics and strategic interactions, the relatively simple counterfactual environment should allow us to easily compare our predictions and understand reasons for their performance abstracting away from these additional complications. On the other hand, the premise of our exercise, and other forecasts based on discrete choice models of demand is that preferences are stable, can be estimated adequately, and can be used to make predictions in different environments. In a field experiment, \citeasnoun{hastings/weinstein:08} provide evidence that choice behavior in Charlotte's school choice plan can easily be swayed by informational cues. In other contexts, there is similar evidence that information interventions that give people the same information that is already available in a simpler format change choice behavior, and are evidence of ``comparison frictions'' \cite{kling:12}. If these features of choice dominate decision-making, then they may interfere with the reliability of any forecasts based on historical revealed preferences. Indeed, if the details of how counterfactuals are presented matter more than what we can learn from past data using demand models, it may motivate a reevaluation of the use of demand models for analyzing counterfactuals in our context. Our work also has the potential to provide evidence on how particular post-BART developments in discrete choice modeling are important for accurate counterfactual prediction. There are only a small number of papers that make ex ante forecasts and compare them ex post to what actually occurs. The closest project is the study of BART and its ex post validation described above \cite{mcfadden/talvitie:77}. \citeasnoun{carrell/sacerdote/west:14} take reduced form estimates of peer effects and use these estimates to design an experiment grouping peers together to increase student achievement. Their results show that the reduced form estimates do not provide an adequate guide to predict the effects of their peer grouping experiment. Other work uses social experiments as a validation tool for structural modeling. 
\citeasnoun{wise:85} estimates a model of housing demand and compares the predicted impacts of a subsidy program to those from a randomized experiment. \citeasnoun{todd/wolpin:06} estimate a model on control households from a randomized social experiment without using post-program data and compare the model predictions about program impacts to the experimental impact estimates. They report that their model's predicted program impacts track the experimental results. This project proceeds in two steps. First, we report on the demand models used to guide the policy change in Boston in 2013 in the redesign of its school choice plan, updating the estimates in \citeasnoun{pathak/shi:13}. We build on those methods and report ex-ante forecasts of the performance of demand models. This report and our analysis plan are being released in January 2014, before the completion of preference submission in Boston. Part II of this project will use the submitted preference data to evaluate the forecasts and measure the strengths and limitations of discrete choice analysis in our context. \section{School Choice in Boston} Boston Public Schools is home to one of the nation's most iconic school choice plans, which initially evolved out of a court-ordered busing plan in the 1970s. Until 2014, the city was divided into the North, West, and East Zones for elementary school admissions. There are about 25 elementary schools in each zone. Students residing in a zone are allowed to rank any school in the zone, as well as any school within a 1-mile walk zone and a handful of city-wide schools, on their application form. At each school, students are prioritized as follows: continuing students have the highest priority, followed by students who have a sibling at the school, followed by other students. Within each group, for half of the program seats, students residing in the walk-zone obtain priority (but this priority does not apply to the other half of the school seats). A single lottery number draw serves as a tie-breaker.\footnote{\citeasnoun{dkps:12} present additional details on Boston's implementation of this algorithm.} Since 2005, after students submit their choices, they are processed through the student-proposing deferred acceptance algorithm, which works as follows: \begin{itemize} \item Round 1: Each student applies to his first choice school. School $s$ ranks applicants by their priority, rejecting the lowest-ranking students in excess of its capacity, with the rest provisionally admitted (students not rejected at this step may be rejected in later steps). \item Round $\ell>1$: Students rejected in Round $\ell-1$ apply to their next most preferred school (if any). School $s$ considers these students \textit{and} provisionally admitted students from the previous round, ranks them by their priority, rejecting the lowest-ranking students in excess of capacity, producing a new provisional admit list (again, students not rejected at this step may be rejected in later steps). \end{itemize} The algorithm terminates when either every student is matched to a school or every unmatched student has been rejected by every school he has ranked; a minimal code sketch of this procedure is given below. There have been many attempts to reform Boston's school choice plan, including community-wide task forces in 2003 and 2009. Aside from changing the assignment algorithm in 2005, however, there have been no significant changes to the three-zone plan since 1999 (see \citeasnoun{aprs:05} for more details).
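For concreteness, the following is a minimal sketch of the student-proposing deferred acceptance procedure described above. The data structures (ranked preference lists, seat counts, and a priority key combining priority group and lottery number) are illustrative assumptions; this is not the BPS production implementation.
\begin{verbatim}
from collections import defaultdict

def deferred_acceptance(preferences, capacities, priority_key):
    """Student-proposing deferred acceptance (illustrative sketch).

    preferences  : dict student -> list of schools, most preferred first
    capacities   : dict school -> number of seats
    priority_key : function (school, student) -> sortable key, lower = higher
                   priority (e.g. priority group, then lottery number)
    """
    next_choice = {s: 0 for s in preferences}   # next school each student applies to
    provisional = defaultdict(list)             # school -> provisionally admitted students
    unmatched = set(preferences)

    while True:
        applicants = defaultdict(list)
        for student in list(unmatched):
            prefs = preferences[student]
            if next_choice[student] >= len(prefs):
                unmatched.discard(student)      # list exhausted: stays unassigned
                continue
            applicants[prefs[next_choice[student]]].append(student)
            next_choice[student] += 1
        if not applicants:                      # nobody left to apply: stop
            break
        for school, pool in applicants.items():
            pool = provisional[school] + pool   # reconsider provisional admits together
            pool.sort(key=lambda stu: priority_key(school, stu))
            seats = capacities.get(school, 0)
            kept, rejected = pool[:seats], pool[seats:]
            provisional[school] = kept
            unmatched.difference_update(kept)
            unmatched.update(rejected)          # rejected students apply again next round
    return {stu: school for school, kept in provisional.items() for stu in kept}
\end{verbatim}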
In 2012, due to concerns about transportation costs and the overall merits of busing children far from home, outgoing Mayor Menino spent the last year of his administration advocating for a ``radically different school assignment process---one that puts priority on children attending schools closer to their homes'' \cite{menino:12}. In Fall 2012, BPS proposed five different plans, all of which restricted participant choice by reducing the number of schools students could rank.\footnote{The initial plans suggested dividing the city into 6, 9, 11, or 23 zones, or assignment based purely on neighborhood.} The idea behind each of these plans was to reduce competition from non-neighborhood applicants at each school. When these plans were publicly unveiled in September 2012, they were met with widespread criticism (see, e.g., \citeasnoun{seelye:12}). These plans and other proposals from the community became the center of a year-long, city-wide discussion on school choice. The plan that was eventually chosen was devised by \citeasnoun{shi:13} and became known as the Home Based plan. First, BPS categorizes each school into Tiers, computed using a combination of standardized test score growth and levels on the Massachusetts Comprehensive Assessment System (MCAS) tests over the past two years. For the 2014 admissions cycle, Tiers were finalized as of January 2013. Under the new plan, every family is allowed to choose from any school within a mile (as the crow flies), along with the two closest Tier 1 schools, the four closest Tier 1 or 2 schools, the six closest Tier 1, 2, or 3 schools, and the three closest ``option schools'' chosen by BPS using internal simulations. The menu of choices also includes the closest early learning center (ELC) and the closest school with an advanced work class (AWC) program.\footnote{There are two idiosyncratic exceptions to the choice menu composition due to Boston's unique geography. First, students residing in parts of Roxbury, Mission Hill and Dorchester are allowed to rank the Jackson Mann school. Second, students in East Boston are eligible to apply to any school in East Boston. East Boston students have priority over non-East Boston students at East Boston schools. Non-East Boston students have priority over East Boston students for non-East Boston schools. Finally, there are certain provisions in the plan for students who are limited English proficient or have special needs. Limited English proficiency students of ELD levels 1, 2, and 3 are allowed to apply to any compatible ELL program within their ELL zone, which is a specially constructed six-zone overlay of Boston. Substantially separated special education students do not apply in Round 1, the focus of our investigation.} Families can easily access their choice menu via an online portal, which shows a map of all schools in the choice menu and a summary of their attributes. As before, choices are processed through the student-proposing deferred acceptance algorithm, though the new plan also eliminates walk zone priority. Our analysis focuses on Round 1, where the vast majority of students obtain their initial placement. Students who do not obtain an assignment after the algorithm is run are allowed to participate in an administrative assignment round, where BPS places these students in remaining schools based on proximity. For the purposes of our study, we do not model the placement of administratively assigned students and only consider the assignments produced by the assignment mechanism.
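To make the menu rule concrete, the sketch below assembles a simplified Home Based menu from school Tiers and straight-line distances. The field names are assumptions for illustration, the three option schools are taken as given, and the ELC/AWC additions and the East Boston and ELL provisions in the footnote are omitted; this is not the exact BPS rule set.
\begin{verbatim}
def home_based_menu(student_location, schools, distance, option_schools):
    """Assemble a (simplified) Home Based choice menu for one student.

    schools        : list of dicts with assumed keys 'name' and 'tier' (1 = best)
    distance       : function (student_location, school) -> miles, as the crow flies
    option_schools : names of the three closest BPS-designated option schools (given)
    """
    by_distance = sorted(schools, key=lambda sch: distance(student_location, sch))

    # Any school within one mile of home is always on the menu.
    menu = {sch['name'] for sch in by_distance
            if distance(student_location, sch) <= 1.0}

    def closest(k, max_tier):
        """Names of the k closest schools whose Tier is at most max_tier."""
        return [sch['name'] for sch in by_distance if sch['tier'] <= max_tier][:k]

    menu.update(closest(2, max_tier=1))   # two closest Tier 1 schools
    menu.update(closest(4, max_tier=2))   # four closest Tier 1 or 2 schools
    menu.update(closest(6, max_tier=3))   # six closest Tier 1, 2, or 3 schools
    menu.update(option_schools)           # three closest option schools
    return menu
\end{verbatim}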
\section{Data} The main data sources for this project are BPS Round 1 choice data and enrollment data for 2010-2013. For each year, the Round 1 choice data was collected in January or February of that calendar year, for application to the school year that began in September of the same calendar year. The enrollment data is a snapshot taken in December of the same year, 11 months after the choice data and three months after the school year began. The choice data contains, for each student who participated in Round 1: his/her student identification number; grade; English proficiency status and first language; special education or disability status; geocode (a geographic partitioning of the city used by BPS); school program to which the student has guaranteed priority (a designation for continuing students); student identification numbers of the student's siblings currently enrolled in BPS; lottery number; first 10 choices and the student's priority at each; and the school program to which the student was assigned and the priority under which he/she was assigned. Using the assigned school and program codes, we infer the capacity available for Round 1 assignment for each school program. We assume that this reflects true Round 1 capacities. The enrollment data, which covers the vast majority of the students in the Round 1 choice data, contains student identification numbers; enrolled school and program; grade; geocode; address; gender; race; languages spoken at home; dates of entrance and, if applicable, withdrawal from BPS; and food service code (whether the student's socio-economic situation qualifies for free or reduced lunch). (Since BPS began offering free lunch to all students after September 2013, the food service code will not be available in the future.) In addition, we have access to a data set of school characteristics for each of the four years. The school dataset includes, for each year and each school, the school code, address, school type, \% of students of each race, \% of students who are English Language Learners (ELL), \% of students who have Special Education (SPED) requirements, and \% of students who scored Advanced or Proficient in grades 3, 4 and 5 for MCAS math and English in the previous year. Since the assignment reform is mainly for elementary school assignment, we focus on the entry grades kindergarten 0 to 2 (K0, K1, K2). K2 is the main entry grade to elementary schools in Boston. Table~\ref{tab:totalSupplyDemand} shows the total number of Round 1 applicants in each of the kindergarten grades, as well as the total Round 1 capacity. As can be seen, capacity covers only a small fraction of applicants in grade K0, while in K1 there are seats for more than half of the applicants. \begin{table}[!htbp] \centering \caption{Aggregate supply and demand in grades K0-2, in years 2010-2013.} \label{tab:totalSupplyDemand} \begin{tabular}{l c c c| c c c} & \multicolumn{3}{c}{Applicants} & \multicolumn{3}{c}{Inferred Capacity} \\ Year & K0 & K1 & K2 & K0 & K1 & K2\\ \hline 2010 & 803 & 2134 & 3473 & 148 & 1676 & 3139\\ 2011 & 704 & 2202 & 3556 & 170 & 1689 & 3328\\ 2012 & 1001 & 2660 & 3985 & 181 & 1921 & 3689\\ 2013 & 913 & 2599 & 4038 & 155 & 1890 & 3979\\ \end{tabular} \end{table} \noindent Students who are assigned to K0 or K1 in the previous year enter the assignment system the next year as continuing students. They have priority for their current seat over new students.
We define every non-continuing student as ``new.'' Figure~\ref{fig:totalTrend} plots the total number of new and continuing applicants to BPS over the four years. \begin{figure}[!htbp] \centering \caption{Number of new and continuing applicants to BPS (K0-2)} \label{fig:totalTrend} \includegraphics[width=0.8\textwidth]{Round1TotalTrend.png} \end{figure} To make use of geocode data, we take the latitude and longitude centroids of each geocode (provided by BPS) and compute a mapping from geocodes to ``neighborhoods,'' which gives a geographic partition of Boston into 14 regions.\footnote{BPS originally defines 16 neighborhoods, but we combined three small contiguous neighborhoods (Central Boston, Back Bay, and Fenway/Kenmore) into one neighborhood that we call ``Downtown.'' This is because these neighborhoods approximately make up downtown Boston, and they each have very few applicants. Combining them still yields one of the smallest neighborhoods by number of applicants.} Table~\ref{tab:applicantDemo} shows the neighborhood breakdown of the Round 1 applicants and demographic profiles for each neighborhood, including an estimate of household income (the median household income from the 2010 Census for the census block group where the centroid of the student's geocode lies\footnote{We used the 2012 ESRI demographics data set, which is available at \url{http://www.esri.com/data/esri_data/demographic-overview/demographic.}}), the percentage of applicants who are English Language Learners (ELL), and the racial composition of the applicant pool from this neighborhood. This table aggregates all four years of Round 1 choice data for all three kindergarten grades. \begin{table}[!htbp] \centering \caption{Applicants' demographics across neighborhoods.} \label{tab:applicantDemo} \footnotesize \begin{tabular}{l c c c c c c c c} Neighborhood & \% Total & Income Est. (\$K) & ELL & Black & Hispanic & White & Asian & Other\\ \hline Allston-Brighton & 5\% & 53.9 & 51\% & 8\% & 36\% & 28\% & 22\% & 5\%\\ Charlestown & 2\% & 60.7 & 24\% & 12\% & 27\% & 49\% & 10\% & 2\%\\ Downtown & 3\% & 58.6 & 41\% & 10\% & 18\% & 36\% & 30\% & 5\%\\ East Boston & 14\% & 33.8 & 76\% & 2\% & 77\% & 16\% & 3\% & 2\%\\ Hyde Park & 6\% & 53.4 & 28\% & 40\% & 42\% & 13\% & 2\% & 3\%\\ Jamaica Plain & 6\% & 47.6 & 31\% & 13\% & 44\% & 30\% & 5\% & 8\%\\ Mattapan & 7\% & 36.0 & 30\% & 56\% & 40\% & 1\% & 1\% & 2\%\\ North Dorchester & 6\% & 40.5 & 49\% & 28\% & 40\% & 11\% & 17\% & 4\%\\ Roslindale & 8\% & 54.9 & 28\% & 17\% & 42\% & 34\% & 3\% & 5\%\\ Roxbury & 14\% & 29.6 & 36\% & 45\% & 49\% & 2\% & 1\% & 2\%\\ South Boston & 3\% & 37.8 & 36\% & 16\% & 38\% & 36\% & 6\% & 3\%\\ South Dorchester & 14\% & 44.2 & 35\% & 33\% & 35\% & 13\% & 16\% & 3\%\\ South End & 4\% & 47.0 & 38\% & 29\% & 40\% & 15\% & 11\% & 4\%\\ West Roxbury & 7\% & 65.6 & 21\% & 16\% & 27\% & 47\% & 6\% & 4\%\\ \hline All Neighborhoods & 100\% & 44.3 & 40\% & 26\% & 44\% & 19\% & 8\% & 3\%\\ \end{tabular} \end{table} Our analysis also uses distance estimates between each student's home and each school. To account for geographic barriers and road availability, we use walking distances provided by the Google Maps API.
For students for whom we cannot find a valid address by matching to the enrollment data\footnote{This occurs when the student is not found in the enrollment data, when the spelling of the address does not yield a valid result on Google Maps, or when the result is more than 0.5 miles from the centroid of the applicant's geocode, indicating a data error.}, we use the centroid of the student's geocode as a proxy for home location. \section{Methodology} \label{sec:methodology} The aim of our analysis is to explore alternative approaches to forecasting outcomes and to evaluate the accuracy of each approach. We target outcomes that are important to BPS operations and decision-making. We make these predictions for 2014 Round 1, for which the deadline to submit preferences is January 31, 2014. We hoped to commit to numerical forecasts before the outcome data are collected, but because of computational challenges, we were only able to calibrate two of our three demand models by January 2014. At that time, we published the partial forecasts in an earlier version of this report, in which we also committed to the specification for the third model, which we had not yet been able to calibrate \cite{earlier-version}. In the current version, we include estimates and forecasts from all three models, and we do not depart from the previously committed specification for the third model. These steps are intended to keep our forecasts free from multiple-hypothesis testing and post-analysis bias. We forecast three assignment outcomes. The first is the number of unassigned students per neighborhood. This outcome is important for BPS because it has publicly committed to assigning each K2 student a seat within his/her Home Based choice menu. The two other assignment outcomes are average access to quality, as defined by students' chances of getting into a Tier 1 or 2 school, and average distance to the assigned school, for each neighborhood. These were the two most important metrics by which the city committee during the 2012-2013 school assignment reform made its decisions, and the numbers it examined were based on forecasts arising from demand modeling. This analysis examines the accuracy of such an approach. We also forecast market shares, as more direct measures of the choice patterns themselves, apart from interactions with the assignment system. We examine school market shares for each neighborhood, for top 1, top 2, and top 3 choices. These capture the demand and substitution patterns of families' choices across neighborhoods. Because most of the available data is for K1 and K2, and because these grades are more important for BPS strategic and operational policies, we focus on these two grades in the analysis. Given the actual choice data and lottery numbers, and given a table of school program capacities, the above moments can be computed deterministically. Program capacities are control variables that BPS often varies over the assignment cycle. Our simulation engine can thus be seen as a function mapping program capacities to outcome forecasts. To commit to specific forecasts, it is necessary to specify program capacities; for simplicity, we use Round 1 inferred capacities from the previous year. Forecasting the above moments involves forecasting the 2014 applicant pool, modeling how applicants choose schools, and simulating the BPS assignment algorithm to yield final outcomes. We describe these steps in detail in the following subsections.
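Schematically, these steps compose into a simple simulation loop. The sketch below shows only the flow; the four callables stand in for the procedures detailed in the following subsections, and all names are placeholders rather than actual project code.
\begin{verbatim}
import numpy as np

def forecast_outcomes(draw_pool, simulate_choices, run_assignment, compute_metrics,
                      capacities, n_sims=400, seed=0):
    """Schematic forecasting loop; the four callables are placeholders for the
    steps described in the following subsections."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_sims):
        applicants = draw_pool(rng)                         # forecast the applicant pool
        rankings = simulate_choices(applicants, rng)        # coefficient + preference draws
        lotteries = {s: rng.random() for s in applicants}   # iid Uniform(0,1), lower is better
        assignment = run_assignment(rankings, capacities, lotteries)
        results.append(compute_metrics(assignment, rankings, applicants))
    return results    # summarized (e.g. averaged) downstream
\end{verbatim}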
\subsection{Demand Models} The focus of our study is alternative approaches to predicting families' demand. While a full demand model should include families' decisions to apply to BPS, we do not have sufficient data about each family's outside options to estimate such a model precisely, so in this project we choose to focus on choices among BPS alternatives and forecast the application pool using an ad-hoc approach described in Section~\ref{sec:populationForecast}. We consider three types of demand models, in order of increasing complexity. \begin{itemize} \item {\em Naive Model:} This model assumes that all families choose according to a simple rule that agrees with intuition and arises naturally from how the new assignment system has been presented. \item {\em Multinomial Logit Model:} This model is one of the simplest and most widely used approaches in demand modeling, especially in industrial organization applications. The 2012-2013 Boston school assignment reform leaned heavily on an analysis based on such a model, described in \citeasnoun{pathak/shi:13}. \item {\em Mixed-Logit Model:} This model is a popular alternative to the multinomial logit model due to its greater flexibility in capturing complex substitution patterns that violate the Independence of Irrelevant Alternatives (IIA) property of pure logit models. One theoretical justification for such a model is that any Random Utility Maximization (RUM) consistent demand model can be approximated to arbitrary accuracy by a mixed-logit model \cite{mcfadden/train:00}. \end{itemize} \noindent We now describe the details and specifications of each model. \subsubsection{Naive Model} There are many possibilities for specifying statistical models of demand that are not based on the random utility framework. We chose one particular model so that the more sophisticated models can be compared against an alternative. For instance, \citeasnoun{nevo/whinston:10} point out that when evaluating the performance of simulation-based merger analysis, it is important to compare these methods to other possibilities. Even though it necessarily requires some ad hoc choices, our naive benchmark represents such an alternative. From interactions with parents and BPS staff, we learned that many people expect families simply to choose the best-Tier schools first, breaking ties by distance. For ELL students, BPS staff stated that, if given sufficient information, families would place a premium on ELL programs since they offer targeted programming, especially language-specific ELL programs in their home language. Other patterns are suggested by the choice data. For example, the vast majority of continuing students (91\%) select the next grade level of their current program first, an expected pattern since families may not like having to move schools. Furthermore, among students who have a sibling currently attending BPS, 66\% rank first a school that a sibling attends, an expected pattern if families value having siblings attend the same school for transportation or other reasons. If families' choices are not based on coherent and stable preferences, but are strongly influenced by BPS' publicity or framing efforts, the following simple model may adequately approximate families' choices.
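Concretely, the model treats the hierarchy tabulated next as a lexicographic sort key over each family's personalized menu. The sketch below illustrates one way such a rule could be implemented; the attribute names are illustrative assumptions rather than the actual data schema, and criteria that do not pertain to a student simply leave the corresponding key component constant.
\begin{verbatim}
def naive_rank(student, menu):
    """Rank a student's menu by the lexicographic hierarchy (illustrative sketch).

    'student' and the programs in 'menu' are dicts with assumed fields such as
    'current_school', 'sibling_schools', 'is_ell', 'home_language', 'distance_to'
    and 'is_current_program', 'school', 'tier', 'is_ell_program', 'ell_language'.
    """
    def key(program):
        return (
            not program['is_current_program'],                       # 1: present program
            program['school'] != student['current_school'],          # 2: same school, other program
            program['school'] not in student['sibling_schools'],     # 3: sibling's school
            not (student['is_ell'] and program['is_ell_program']),   # 4: ELL program (ELL students)
            not (student['is_ell'] and
                 program.get('ell_language') == student['home_language']),  # 5: home-language ELL
            program['tier'],                                          # 6: better (lower) Tier
            student['distance_to'][program['school']],                # 7: closer walking distance
        )
    return sorted(menu, key=key)
\end{verbatim}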
Each family ranks the school programs in their personalized menu based on the following hierarchy: \begin{center} \begin{tabular}{c l} Hierarchy & Criteria \\ \hline 1 (most important) & present school program \\ 2 & another program in current school \\ 3 & school where sibling attends \\ 4 & (for ELL students) ELL program \\ 5 & (for ELL students) ELL program in home language \\ 6 & better Tier school \\ 7 & closer walking distance \\ \end{tabular} \end{center} \noindent Students only consider the hierarchy levels that pertain to them. For example, new applicants do not consider levels 1 or 2, and non-ELL students do not consider levels 4 and 5. Outside of a random utility framework, a model of this type is a natural choice: for example, such a hierarchical model was used by the independent consulting group WXY when it was commissioned by BPS to analyze various counterfactuals. \subsubsection{Multinomial Logit} \label{sec:logit} This model assumes that the rankings of each student $i$ are induced by underlying utilities for each school program $j$, and that the utilities can be approximated by the following model: \[u_{ij} = \boldsymbol{\beta} \cdot \mathbf{F}_{ij} + \epsilon_{ij},\] \noindent where $\mathbf{F}_{ij}$ is a vector of observable characteristics pertaining to student $i$ and choice option $j$, the $\epsilon_{ij}$'s are iid random variables following a standard Gumbel distribution, and $\boldsymbol{\beta}$ is a vector of parameters to be fitted. As usual, we standardize the error term without loss of generality, since choice behavior is invariant to multiplying all utilities by a positive constant or adding a constant to them. For the same reason, we normalize one of the components of $\boldsymbol{\beta}$ to zero. A key implication of this model is that choices follow the Independence of Irrelevant Alternatives (IIA) property: the relative market shares of two programs do not depend on whether a third option is available. This means that substitution between programs follows the same proportional pattern across all choices. Although this property may be unrealistic for school choice, as two choices made by the same family may be correlated due to common, unobservable characteristics, it is plausible that the model may nevertheless provide a reasonable forecast of the moments that matter for decision making. We fit this model by Maximum Likelihood Estimation (MLE) and obtain the covariance matrix of the estimated coefficients by taking the inverse of the Hessian of the log likelihood function at the maximum. Table~\ref{tab:logitCoefficents} shows the estimated coefficients for various specifications, using each of the 2012 and 2013 Round 1 choice data sets for grades K1 and K2. There are three specifications: \textit{Simple} (which does not use students' demographics), \textit{Full} (which fully interacts students' race and income estimates with several key school characteristics), and \textit{Reduced} (which removes the insignificant terms in Full and combines terms for efficiency). The features used are the following: \begin{itemize} \item distance: walking distance from home to school. \item continuing: indicator for whether the student has guaranteed status for the school program. \item sibling: indicator for whether the student has a sibling at the school. \item ell match: indicator for the student being an English Language Learner (ELL) and the program being specialized for ELL. \item ell language match: indicator for the student being ELL and the program having a language-specific ELL program in the student's first language.
\item walk zone: indicator for whether student lives in the school's walk-zone, which is approximately a 1-mile circle around the school.\footnote{The one mile circle is only approximate because the walk zones in our data are defined by drawing a one-mile circle from the school and including all geocodes that intersect the circle. A family from a geocode on the circle's boundary may be a little further than one-mile from the school, but still in the walk zone.} \item black/hispanic: indicator for whether the student is black or hispanic. \item mcas: the proportion of students at the school who scored Advanced or Proficient in the previous year's MCAS standardized test for math, averaging the proportions for grades 3, 4 and 5. (The MCAS test begins at grade 3. Grade 5 is the highest grade in many elementary schools. We only choose math because it is highly correlated with English.\footnote{This correlation is about .84 in years 2012 and 2013.}) \item \% white/asian: the proportion of students at the school who are white or asian. \end{itemize} In each model, we include a fixed effect for each school, which captures any common propensities to choose a school, due to perceived school quality, facilities, and other unobserved characteristics. In the \textit{Full} specification, we interact the student's income estimate, along with indicators for race (black, asian, hispanic, other, or unknown), with the school's mcas, \% white/asian and distance. (This represents $6 \times 3 = 18$ terms). \begin{table}[!htbp] \centering \caption{Estimated coefficients for logit models. Each model is estimated using maximum likelihood, using the choice data for grades K1 and K2. The standard errors are estimated using the inverse of the Hessian of the log likelihood function at the maximum.} \label{tab:logitCoefficents} \scriptsize \begin{tabular}{@{\extracolsep{-5pt}}lD{.}{.}{-3} D{.}{.}{-3} D{.}{.}{-3} | D{.}{.}{-3} D{.}{.}{-3} D{.}{.}{-3} } \\[-1.8ex]\hline \hline \\[-1.8ex] \\[-1.8ex] & \multicolumn{3}{c}{2012 Data} & \multicolumn{3}{c}{2013 Data} \\ \\[-1.8ex] & \multicolumn{1}{c}{\textit{Simple}} & \multicolumn{1}{c}{\textit{Full}} & \multicolumn{1}{c|}{\textit{Reduced}} & \multicolumn{1}{c}{\textit{Simple}} & \multicolumn{1}{c}{\textit{Full}} &\multicolumn{1}{c}{\textit{Reduced}}\\ & \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c|}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)}\\ \hline \\[-1.8ex] distance & -0.395^{***} & -0.438^{***} & -0.365^{***} & -0.459^{***} & -0.499^{***} & -0.403^{***}\\ & (0.005) & (0.018) & (0.014) & (0.006) & (0.019) & (0.015)\\ & & & & & & \\ continuing & 4.070^{***} & 4.029^{***} & 4.027^{***} & 4.401^{***} & 4.347^{***} & 4.354^{***}\\ & (0.053) & (0.052) & (0.052) & (0.054) & (0.054) & (0.054)\\ & & & & & & \\ sibling & 2.143^{***} & 2.101^{***} & 2.104^{***} & 2.141^{***} & 2.107^{***} & 2.102^{***}\\ & (0.037) & (0.037) & (0.037) & (0.038) & (0.038) & (0.038)\\ & & & & & & \\ ell match & 1.548^{***} & 1.550^{***} & 1.548^{***} & 1.210^{***} & 1.202^{***} & 1.211^{***}\\ & (0.035) & (0.035) & (0.035) & (0.040) & (0.040) & (0.040)\\ & & & & & & \\ ell language match & 0.719^{***} & 0.610^{***} & 0.606^{***} & 0.791^{***} & 0.671^{***} & 0.672^{***}\\ & (0.043) & (0.044) & (0.043) & (0.049) & (0.049) & (0.049)\\ & & & & & & \\ walk zone & 0.570^{***} & 0.497^{***} & 0.500^{***} & 0.474^{***} & 0.396^{***} & 0.399^{***}\\ & (0.019) & (0.019) & (0.019) & (0.020) & (0.020) & (0.020)\\ & & & & & & \\ distance * 
black/hispanic & & & 0.115^{***} & & & 0.114^{***}\\ & & & (0.010) & & & (0.011)\\ & & & & & & \\ distance * income est. & & -0.233^{***} & -0.262^{***} & & -0.252^{***} & -0.296^{***}\\ & & (0.022) & (0.021) & & (0.024) & (0.023)\\ & & & & & & \\ mcas * black & & -0.599^{***} & -0.874^{***} & & -0.904^{***} & -1.062^{***}\\ & & (0.166) & (0.105) & & (0.175) & (0.111)\\ & & & & & & \\ mcas * income est. & & 0.506^{**} & 0.424^{*} & & 0.959^{***} & 0.906^{***}\\ & & (0.232) & (0.221) & & (0.267) & (0.252)\\ & & & & & & \\ \% white/asian * black/hispanic & & & -2.581^{***} & & & -2.666^{***}\\ & & & (0.097) & & & (0.094)\\ & & & & & & \\ \% white/asian * income est. & & 1.908^{***} & 1.982^{***} & & 1.447^{***} & 1.778^{***}\\ & & (0.218) & (0.211) & & (0.228) & (0.219)\\ & & & & & & \\ school fixed effects & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c|}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} & \multicolumn{1}{c}{Yes} \\ & & & & & & \\ full interaction & & \multicolumn{1}{c}{Yes} & & & \multicolumn{1}{c}{Yes} & \\ & & & & & & \\ \hline \\[-1.8ex] Log likelihood& \multicolumn{1}{c}{-70,969}& \multicolumn{1}{c}{-70,013}& \multicolumn{1}{c|}{-70,090}& \multicolumn{1}{c}{-65,944}& \multicolumn{1}{c}{-64,763}& \multicolumn{1}{c}{-64,829}\\ \# of Parameters& \multicolumn{1}{c}{81}& \multicolumn{1}{c}{99}& \multicolumn{1}{c|}{87}& \multicolumn{1}{c}{83}& \multicolumn{1}{c}{101}& \multicolumn{1}{c}{89}\\ \# Students & \multicolumn{1}{c}{6,644} & \multicolumn{1}{c}{6,644} & \multicolumn{1}{c|}{6,644} & \multicolumn{1}{c}{6,627} & \multicolumn{1}{c}{6,627} & \multicolumn{1}{c}{6,627} \\ \# Choices & \multicolumn{1}{c}{27,905} & \multicolumn{1}{c}{27,905} & \multicolumn{1}{c|}{27,905} & \multicolumn{1}{c}{26,991} & \multicolumn{1}{c}{26,991} & \multicolumn{1}{c}{26991} \\ \hline \\[-1.8ex] \textit{Note:} & \multicolumn{6}{l}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular} \end{table} All of the specifications in Table \ref{tab:logitCoefficents} yield a significant and negative coefficient on distance. The magnitudes of the coefficients can be interpreted as follows: for the \textit{Simple} specification fitted using 2012 data, the distance coefficient is $-0.395$, which means that for two school programs that are otherwise identical but one is one mile further from home, the student is more likely to choose the closer one $\frac{e^{0.395}}{1+e^{0.395}} \approx 60\%$ of times. All estimates yield highly significant and positive coefficients for continuing, sibling, ell match, and ell language match. To gain further intuition, one can examine the ratios of these estimates with the distance coefficient. For example, all else being equal, students in the \textit{Simple} specification in 2012 are on average willing to travel $\frac{4.070}{0.395} \approx 10.3$ extra miles to go to a continuing program, $5.4$ extra miles to go to school with a sibling, $3.9$ extra miles to go to an ELL program (if the student is ELL), and $1.8$ extra miles to go to an ELL program specialized to his/her home language. Being in the walk zone is relevant because only students who live outside the walk zone are provided busing. Moreover, it is correlated with extreme proximity. Because of these potentially conflicting influences, the positive coefficient for walk zone is difficult to interpret. Another complication is that before 2014, students in the walk zone get walk zone priority to go to the school. 
Although this should not theoretically affect choice rankings because the mechanism is strategy-proof, families may not fully appreciate this property and may rank walk-zone schools higher because they think they have better chances of getting into them. Alternatively, the significance of this variable may indicate that students perceive a significant fixed cost to attending a school that requires a bus ride. In creating the \textit{Reduced} specification, we first fit the \textit{Full} model. However, we found that the estimates for black and hispanic students are statistically indistinguishable, except for the interaction with mcas, for which the coefficient for black students is significantly negative while the coefficient for hispanic students is insignificant. Moreover, the results for the other races are unstable across years or insignificant, most likely due to lack of data: as seen in Table~\ref{tab:applicantDemo}, few students are asian (8\%) or other (3\%). Thus, in the \textit{Reduced} specification, we group black and hispanic students together, except for the interaction with mcas, and we remove the other race dummies, implicitly grouping them with whites. We include all the terms involving income estimates since they tend to be statistically significant. The coefficients suggest that black/hispanic students are willing to travel further than other students, and tend to choose schools that have a lower \% white/asian. This may be due to demographic preferences, or to preferences for unobserved characteristics that are correlated with demographics, such as school culture or environment. Black students seem to disproportionately choose schools with lower math scores, even with our controls for distance and neighborhood income. Because the \textit{Reduced} specification captures all of the significant and stable interaction terms in the \textit{Full} specification but has more precise estimates, we opt for this specification and simply refer to it as ``Logit'' in the rest of this paper. \subsubsection{Mixed Logit (MLogit)} \label{sec:mixed-logit} This model adds random coefficients to the multinomial logit. Specifically, we use the following formulation, \[u_{ij} = \boldsymbol{\beta}\cdot\mathbf{F}_{ij} + \boldsymbol{\gamma_i}\cdot \mathbf{G}_{ij} + \epsilon_{ij},\] where $\mathbf{F}_{ij}$, $\epsilon_{ij}$ and $\boldsymbol{\beta}$ are observed features, iid taste shocks, and fixed coefficients as before. In addition, we allow a subset of features $\mathbf{G}_{ij}$ to interact with random coefficients $\boldsymbol{\gamma_i}$, which we assume to be zero-mean, jointly Gaussian random variables, possibly with covariance restrictions. The assumption of zero mean is without loss of generality since the means are captured by the fixed coefficients. The assumption of a Gaussian distribution is for convenience. In terms of features, we use the fixed coefficients as in the \textit{Reduced} specification of the logit model, but add random coefficients to the following features, which we organize into ``blocks,'' assuming independence across blocks but allowing arbitrary covariance within each block.
\begin{center} \begin{tabular}{c l} Block & Features \\ \hline A & ell match \\ B & walk zone \\ C & distance, mcas, \% white/asian \\ \end{tabular} \end{center} \noindent This formulation allows students to have heterogeneous preferences for going to an ELL program (if applicable), for choosing schools in the one-mile walk zone\footnote{The walk zones are only approximately a one-mile disc because they were originally defined using geocodes.}, and for trading off distance, academics, and school demographics. We also include school fixed effects in order to capture the many unobserved school characteristics, such as safety, reputation, facilities, environment, and teaching quality. Because the model no longer has a closed-form log-likelihood function, and the log-likelihood is no longer guaranteed to be globally concave, we fit the model by Markov Chain Monte Carlo (MCMC), a commonly employed method for fitting such models in practice \cite{train:03}. One technical difficulty in our situation is that we have many school fixed effects. As far as we are aware, the state-of-the-art MCMC technique for including fixed effects in mixed-logit models is described in~\citeasnoun{train:03}; it involves adding a layer of Gibbs sampling and simulating the conditional distribution of the fixed effects using the Random Walk Metropolis-Hastings algorithm. However, in our case there are 75 schools, so this step requires simulating a 75-dimensional distribution, which is prohibitively slow using Random Walk Metropolis (RWM).\footnote{See \citeasnoun{katafygiotis/zuev:2008} for geometric insight into why RWM breaks down in high dimensions when the dimensions are correlated.} Hence, we speed up computation by using Hamiltonian Monte Carlo (HMC)\footnote{See \citeasnoun{neal:2011}.}, which incorporates the gradient of the log-likelihood function and can therefore update the 75-dimensional estimate of the fixed effects more quickly. We fit the above model using 1,000,000 iterations of MCMC sampling, throwing out the first half as burn-in. To check for convergence of the estimates, we repeated this 6 times with independent draws, sometimes with random starting values, and found the results to be nearly identical. Details of how we fit the mixed logit model are in Appendix~\ref{app:mixedlogit-mcmc}. The estimates are in Table~\ref{tab:mlogitCoefficents}. Note that besides the fixed coefficients in the \textit{Reduced} specification of the simple logit model, we also estimate the standard deviations of the random coefficients, denoted by $\sigma(\text{ell match})$, $\sigma(\text{walk zone})$, $\sigma(\text{distance})$, $\sigma(\text{mcas})$, and $\sigma(\text{\% white/asian})$. These are the square roots of the respective variances. We also estimate the correlation coefficients $\rho(\text{distance},\text{mcas})$, $\rho(\text{distance},\text{\% white/asian})$, and $\rho(\text{mcas},\text{\% white/asian})$, which are computed by dividing the respective covariance terms by the product of the standard deviations. In the rest of the paper, we refer to this model as ``MixedLogit'' or ``MLogit'' for short.
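To illustrate how this specification generates the rank-order data used in our simulations, the sketch below draws one student's utilities and ranking under assumed feature arrays; setting the random-coefficient covariance to zero recovers the plain logit of Section~\ref{sec:logit}. This is a minimal illustration, not our estimation or simulation code.
\begin{verbatim}
import numpy as np

def simulate_ranking(F, G, beta, Sigma, rng, top_k=10):
    """Draw one student's ranking from u_ij = beta.F_ij + gamma_i.G_ij + eps_ij.

    F     : (J, p) array of fixed-coefficient features over the J menu options
    G     : (J, q) array of random-coefficient features (the 'blocks' above)
    beta  : (p,) fixed coefficients
    Sigma : (q, q) covariance of the zero-mean Gaussian random coefficients
            (block-diagonal in our specification; Sigma = 0 gives plain logit)
    Returns option indices from most to least preferred, truncated to top_k.
    """
    gamma = rng.multivariate_normal(np.zeros(Sigma.shape[0]), Sigma)  # student taste draw
    eps = rng.gumbel(size=F.shape[0])                                 # iid standard Gumbel shocks
    utility = F @ beta + G @ gamma + eps
    return list(np.argsort(-utility)[:top_k])
\end{verbatim}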
\begin{table}[!htbp] \centering \caption{Coefficients for Mixed Logit (MLogit) models, compared to simple Logit.} \label{tab:mlogitCoefficents} \scriptsize \begin{tabular}{@{\extracolsep{-5pt}}lD{.}{.}{-3} D{.}{.}{-3} | D{.}{.}{-3} D{.}{.}{-3} } \\[-1.8ex]\hline \hline \\[-1.8ex] \\[-1.8ex] & \multicolumn{2}{c}{2012 Data} & \multicolumn{2}{c}{2013 Data} \\ \\[-1.8ex] & \multicolumn{1}{c}{\textit{Logit}} & \multicolumn{1}{c|}{\textit{MLogit}} & \multicolumn{1}{c}{\textit{Logit}} & \multicolumn{1}{c}{\textit{MLogit}} \\ & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c|}{(7)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(8)}\\ \hline \\[-1.8ex] distance & -0.365^{***} & -0.638^{***} & -0.403^{***} & -0.674^{***}\\ & (0.014) & (0.037) & (0.015) & (0.039)\\ & & & & \\ continuing & 4.027^{***} & 4.777^{***} & 4.354^{***} & 4.966^{***}\\ & (0.052) & (0.069) & (0.054) & (0.068)\\ & & & & \\ sibling & 2.104^{***} & 2.478^{***} & 2.102^{***} & 2.451^{***}\\ & (0.037) & (0.045) & (0.038) & (0.045)\\ & & & & \\ ell match & 1.548^{***} & 1.892^{***} & 1.211^{***} & 1.311^{***}\\ & (0.035) & (0.058) & (0.040) & (0.059)\\ & & & & \\ ell language match & 0.606^{***} & 0.610^{***} & 0.672^{***} & 0.967^{***}\\ & (0.043) & (0.052) & (0.049) & (0.060)\\ & & & & \\ walk zone & 0.500^{***} & 0.339^{***} & 0.399^{***} & 0.185^{***}\\ & (0.019) & (0.028) & (0.020) & (0.028)\\ & & & & \\ distance * black/hispanic & 0.115^{***} & 0.188^{***} & 0.114^{***} & 0.183^{***}\\ & (0.010) & (0.024) & (0.011) & (0.024)\\ & & & & \\ distance * income est. & -0.262^{***} & -0.295^{***} & -0.296^{***} & -0.343^{***}\\ & (0.021) & (0.049) & (0.023) & (0.052)\\ & & & & \\ mcas * black & -0.874^{***} & -1.100^{***} & -1.062^{***} & -1.371^{***}\\ & (0.105) & (0.153) & (0.111) & (0.144)\\ & & & & \\ mcas * income est. & 0.424^{*} & 1.065^{***} & 0.906^{***} & 0.925^{***}\\ & (0.221) & (0.299) & (0.252) & (0.313)\\ & & & & \\ \% white/asian * black/hispanic & -2.581^{***} & -3.732^{***} & -2.666^{***} & -3.861^{***}\\ & (0.097) & (0.162) & (0.094) & (0.148)\\ & & & & \\ \% white/asian * income est. & 1.982^{***} & 2.633^{***} & 1.778^{***} & 2.217^{***}\\ & (0.211) & (0.322) & (0.219) & (0.311)\\ & & & & \\ $\sigma(\text{ell match})$ & & 1.638^{***} & & 1.358^{***}\\ & & (0.058) & & (0.063)\\ & & & & \\ $\sigma(\text{walk zone})$ & & 0.981^{***} & & 0.878^{***}\\ & & (0.030) & & (0.030)\\ & & & & \\ $\sigma(\text{distance})$ & & 0.392^{***} & & 0.409^{***}\\ & & (0.011) & & (0.011)\\ & & & & \\ $\sigma(\text{mcas})$ & & 2.275^{***} & & 2.121^{***}\\ & & (0.086) & & (0.101)\\ & & & & \\ $\sigma(\text{\% white/asian})$ & & 2.672^{***} & & 2.512^{***}\\ & & (0.093) & & (0.106)\\ & & & & \\ $\rho(\text{distance} ,\text{mcas})$ & & -0.232^{***} & & -0.285^{***}\\ & & (0.041) & & (0.043)\\ & & & & \\ $\rho(\text{distance} ,\text{\% white/asian})$ & & -0.089^{**} & & -0.055\\ & & (0.039) & & (0.040)\\ & & & & \\ $\rho(\text{mcas} ,\text{\% white/asian})$ & & 0.035 & & -0.110^{*}\\ & & (0.056) & & (0.061)\\ \hline \\[-1.8ex] \textit{Note:} & \multicolumn{4}{l}{$^{*}$p$<$0.1; $^{**}$p$<$0.05; $^{***}$p$<$0.01} \\ \end{tabular} \end{table} \subsection{Forecasting the Applicant Pool} \label{sec:populationForecast} An important driver of the number of unassigned students and the average access to Tier 1 or 2 schools is the number of applicants from each neighborhood. 
Regardless of how applicants choose schools, a large influx of new applicants from a neighborhood would drive up the number of unassigned students from that neighborhood and drive down the average access to top-Tier schools from that neighborhood. If we had data on all potential applicants and their non-BPS options, we might include this aspect as part of the structural model. In the absence of such data, we still need to reflect this uncertainty and to capture any first-order trends in the neighborhood participation patterns. In forecasting the applicant pool, we consider new and continuing students separately. This is because continuing students are already in the previous year's enrollment data, while for new students we need to use the previous year's applicants' demographics as proxies. Figure~\ref{fig:totalNewTrends} plots the number of new applicants to BPS in Round 1 for grades K0-2 for years 2010-2013, as well as the regression line with respect to the year. As seen, the number of applicants is increasing at an average rate of 6\% a year, although the growth is not steady. For example, in 2012 there was an above-expected number of applicants.\footnote{The influx of applicants in 2012 Round 1 raised operational pressures for BPS, as it had to add about 10 new classrooms to accommodate them.} We model next year's total number of new applicants as a normally distributed random variable, with mean and standard deviation equal to the predicted mean and standard error from the regression. \begin{figure}[!htbp] \centering \caption{Trend in total number of applicants to BPS.} \label{fig:totalNewTrends} \includegraphics[width=0.8\textwidth]{TotalFit2014.png} \end{figure} Figure~\ref{fig:newRatio2014} shows what proportion of this total is distributed into each grade and neighborhood combination. Since we study two grades and there are 14 neighborhoods, there are 28 time series in these plots. Most of the time series do not exhibit obvious trends. We model each of the 28 proportions next year as a normally distributed random variable. To estimate the mean and standard deviation, we run a regression with respect to year for each of these 28 time series, and discard all regressions for which the slope is not significant at the 95\% level. For the neighborhood and grade combinations in which we discard the regression, we forecast next year's proportion using the previous four years' sample mean and sample standard deviation. For the neighborhood and grade combinations in which the regression slope is significant at the 95\% level, we use the predicted mean and standard error of the regression.\footnote{Note that the standard error has one fewer degree of freedom than the sample standard deviation.} The regressions we kept are for K1 Charlestown and K2 Downtown, for which we detected a steady upward trend in the number of applicants. \begin{figure} \centering \caption{Proportion of new applicants distributed into each grade, neighborhood combination.} \label{fig:newRatio2014} \includegraphics[width=0.8\textwidth]{2014K1NewRatio.png} \\ \includegraphics[width=0.8\textwidth]{2014K2NewRatio.png} \end{figure} We therefore model the total number of new applicants from each neighborhood in the next year as a product of two independent normals, one representing a BPS-wide shock and one a neighborhood shock. The common shock captures the uncertain effect that BPS publicity or policy initiatives have on the propensity for families to apply to BPS Round 1.
The neighborhood shock captures local population surges or unobserved reasons that affect participation. By using one common shock for all grades, we implicitly assume that different grades exhibit the same reactions to BPS policies and are trending in the same directions.\footnote{The means and standard deviations of these estimates are tabulated in Appendix~\ref{app:populationForecast}. After multiplying the two normals, we truncate at zero if the product is negative and round to the nearest integer.} To check this, we plot in Figure~\ref{fig:stableProportions} the proportion of new applicants in each grade over the four years. As seen, the relatively horizontal lines suggest that modeling the aggregate participation of both K1 and K2 using the same random variable may be a reasonable approximation. \begin{figure}[!htbp] \centering \caption{The proportion of new applicants of each grade.} \label{fig:stableProportions} \includegraphics[width=0.8\textwidth]{StableProportion.png} \end{figure} For continuing students, we define the Round 1 continuing ratio for a grade and neighborhood as the proportion of relevant students from the previous year's enrollment data who decide to continue in this year's Round 1. Figure~\ref{fig:contRatio2014} plots these ratios for grades K1 and K2. As seen, due to the lower number of continuing students in K1 (recall that comparatively few students enroll in K0), the estimates for K1 are highly variable, while the K2 continuing ratios are around 70\% to 90\%. \begin{figure}[!htbp] \centering \caption{For each grade and neighborhood, the proportion of potentially continuing students from the previous year who apply as continuing students in the current year.} \label{fig:contRatio2014} \includegraphics[width=0.8\textwidth]{2014K1ContRatio.png} \\ \includegraphics[width=0.8\textwidth]{2014K2ContRatio.png} \end{figure} We use the same approach to detect linear trends. However, in these data we did not find any significant trends in continuing ratios, so we model them as normally distributed random variables, with mean and standard deviation equal to the sample mean and sample standard deviation for years 2010-2013. The estimates are in Appendix~\ref{app:populationForecast}. To simulate the applicant pool for next year, we independently draw the total number of new applicants, the proportion of this total to allocate to each grade and neighborhood\footnote{Since the total includes K0 but we do not estimate proportions for K0, we do not require these draws to add up to 1.}, and the continuation ratio for each grade and neighborhood. Fixing these realizations for one sample of simulated choice data, we draw new applicants by sampling with replacement from the previous year's new applicants, treating them as if they applied in the new year, and we sample the drawn number of new applicants from each neighborhood and grade. For continuing students, we go through all potentially continuing students and decide independently whether to include each of them, with probability equal to the drawn continuation ratio for that grade and neighborhood. We use this method to generate the applicant pool for all our simulations. \subsection{Simulation} \label{sec:simulation} We use four layers of random draws for our simulations. \begin{enumerate} \item {\em Population Draw}: Draw a pool of applicants according to the steps in Section~\ref{sec:populationForecast}. This represents uncertainty in participation rates.
\item {\em Coefficient Draw}: For the logit and mixed-logit demand models, draw the coefficients as jointly normal random variables, using the estimated means and covariance matrix. This represents uncertainty in the demand model. \item {\em Preference Draw}: Having fixed a specific demand model and parameters, simulate for each student a complete ranking over his/her personalized menu of options, according to the randomness inherent in the demand model. Truncate this to the first ten choices. \item {\em Lottery Draw}: Generate iid lottery numbers for each student. The lottery numbers are distributed uniformly between zero and one, with lower lottery numbers being better. \end{enumerate} After doing these steps once, we have one set of simulated choice data, analogous to what we might receive from BPS. From this we can deterministically compute all of our outcome metrics by imitating the BPS assignment algorithm.\footnote{For the validity of this project, we do not need to replicate the BPS internal system exactly. In fact, we purposely deviate in our back testing simulations for 2013 by ignoring the walk-zone priority. This is because the year of interest, 2014, does not include such a priority, and our goal in back testing is to quantify how well we expect various models to do for 2014. Having the walk-zone priority would skew the access to quality estimates for the Naive model, since under high competition for Tier 1 and 2 schools, the walk-zone priority would give students living near such schools a huge advantage. As a result, we would not be able to distinguish how much of the result for Naive is due to choice patterns and how much is due to the obsolete priority rules.} The reason we truncate to the first 10 choices is as follows: our choice data from BPS is currently truncated to the first 10 choices, although students can rank arbitrarily many. The previous report, \citeasnoun{pathak/shi:13}, provides evidence that assuming everyone ranks 10 choices yields reasonably accurate forecasts. Moreover, our earlier report, on which the city committee based the 2012-2013 reform, assumed that families ranked 10 options, and we keep the same assumption to validate or invalidate the methodology of the earlier report. An alternative approach is to model outside options and assume that unranked programs are inferior to the outside option. However, we observe in the data that students often end up enrolling in options they did not rank but could have ranked, suggesting that this assumption is invalid. In our interactions with parents and BPS staff, it seems that many families rank few options not because they have better outside options, but because they feel confident they will get into the ones they picked and so did not bother, or because they do not understand that ranking more options does not harm their chances at their top choices. Future work is needed to better model this situation. \subsection{Evaluation} Having computed the assignment using the simulated choice data and lottery numbers, we compute the outcome measures as follows. In all of the analysis, we compute measures for grades K1 and K2 separately. \begin{itemize} \item {\em Unassigned}: Tally the number of unassigned students from each neighborhood after each round. \item {\em Access to Quality}: For each student, define his/her access to quality as the highest (worst) lottery number he/she can have and still be assigned to a Tier 1 or 2 school.
(Recall that lottery numbers are uniformly distributed between zero and one, so this can be interpreted as a probability.) We estimate this by finding, for each Tier 1 or 2 school program, the highest (worst) lottery number with which he/she could obtain an offer. More precisely, if the program is not filled to capacity, the student can get in even with the worst lottery number; if it is filled to capacity, we look at the worst lottery number with which the student could displace one admitted student and obtain an offer.\footnote{For computing this metric, we ignore the possibility that the displaced student in turn displaces someone else at his/her next choice, starting a chain reaction that cycles back to the first student, since this is unlikely to occur in markets with a large number of participants \cite{kojima/pathak:09}.} Then we take the maximum over all of his/her Tier 1 or 2 options and term this his/her ``access to quality.'' Finally, we average the access to quality of all students of a neighborhood to compute the neighborhood average access. \item {\em Distance}: For each neighborhood, take all the assigned students from this neighborhood and average their walking distances to their assigned schools. \item {\em Top $k$ Market Share}: Take all the top $k$ choices of students from a neighborhood. For each school, find the proportion of these choices that are for this school.\footnote{If a student ranks multiple programs of the same school as part of his/her top $k$ choices, he/she contributes multiple ``votes'' for that school, since we treat each top $k$ choice as one ``vote.''} \end{itemize} We compute these metrics for each generated choice data set and average across many simulations to find the mean prediction. For a given choice data set, we compute the above metrics and define the prediction error for each neighborhood as follows: for unassigned, access to quality, and distance, since we have a scalar value for each neighborhood, we simply take the absolute value of the difference between the predicted mean and the actual realization. For top $k$ market share, since for each neighborhood we have a vector of school market shares that add up to one, we define the prediction error as the total variation distance between these probability vectors. Given vectors of market shares $\mathbf{p}$ and $\mathbf{q}$, define the total variation distance between them as \[\text{total variation distance} = \frac{1}{2} \sum_{j} |p_j - q_j|.\] This metric is standard for probability distributions, and can be interpreted as the least total amount of market share that needs to be moved from one school to another to turn the predicted shares into the actual shares. Finally, we judge the overall prediction accuracy by taking the Root Mean Squared Error (RMSE) over the neighborhoods. Specifically, for each metric, we take the prediction error for each neighborhood, square these numbers, take the mean of the squared errors, and take the square root of the mean. We choose this over the Mean Absolute Deviation (MAD) because we want to penalize being far off for any specific neighborhood more heavily than being slightly off for many neighborhoods. This is because, for capacity planning and for policy evaluation, the penalties are large for not foreseeing a large shortfall of seats in a neighborhood or not foreseeing vast inequities between neighborhoods.
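The two aggregation formulas above translate directly into code; the short sketch below is only a transcription of those definitions, with the inputs assumed to be plain market-share vectors and arrays of per-neighborhood errors.
\begin{verbatim}
import numpy as np

def total_variation_distance(p, q):
    """0.5 * sum_j |p_j - q_j| between two market-share vectors that each sum to one."""
    return 0.5 * np.sum(np.abs(np.asarray(p, dtype=float) - np.asarray(q, dtype=float)))

def rmse(per_neighborhood_errors):
    """Root Mean Squared Error over the per-neighborhood prediction errors."""
    errors = np.asarray(per_neighborhood_errors, dtype=float)
    return np.sqrt(np.mean(errors ** 2))
\end{verbatim}
For unassigned, access to quality, and distance, the per-neighborhood error passed to \texttt{rmse} is the absolute difference between the predicted mean and the realization; for top $k$ market shares it is the total variation distance above.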
\section{Results} \subsection{Back Testing for 2013} \label{sec:back-testing} We first test our methodology by using it to predict 2013 outcomes from 2010-2012 data and quantifying the prediction error, since we have the actual 2013 choice data. This gives us an idea of how well we might expect to predict future outcomes in 2014 Round 1. One caveat to keep in mind is that while the assignment system is the same from 2010 through 2013, it changes significantly in 2014; the results here therefore may not reflect the results for 2014. As a uniform measure of how well the models predict each outcome, we compute the tail distribution of the Root Mean Squared Error (RMSE) for each model and each outcome. This measure gives a sense of how much we expect to be off by on average. It is worth noting that even if a model is completely correct, there are multiple levels of randomness in the simulations themselves, so there will always be some forecast error. By evaluating the tail distribution at the actual RMSE, we estimate a p-value: the probability that an error of this magnitude occurs when the model is correct. We plot the tail distributions for the first three outcome measures for 2013 K2 in Figure~\ref{tab:prediction2013K2}. For brevity, we only show results for K2 in this section; the corresponding plots for K1 are in Appendix~\ref{app:resultsK1}. All plots in our back testing exercise are computed using 400 independent simulations. Since we cannot use 2013 data for this exercise, we create population forecasts following the methodology in Section~\ref{sec:populationForecast}, but using only 3 years of data. The estimates are in Appendix~\ref{app:populationForecast}. \begin{figure}[h!] \centering \caption{Back testing with assignment outcomes for 2013 K2. Tail distribution plots. \label{tab:prediction2013K2}} \subfloat[][Unassigned (Naive)]{ \includegraphics[width=0.33\textwidth]{2013UnassignedK2Naive.png} } \subfloat[][Unassigned (Logit)]{ \includegraphics[width=0.33\textwidth]{2013UnassignedK2Logit.png} } \subfloat[][Unassigned (MLogit)]{ \includegraphics[width=0.33\textwidth]{2013UnassignedK2MixedLogit.png} } \\ \subfloat[][Access to Quality (Naive)]{ \includegraphics[width=0.33\textwidth]{2013Access_to_QualityK2Naive.png} } \subfloat[][Access to Quality (Logit)]{ \includegraphics[width=0.33\textwidth]{2013Access_to_QualityK2Logit.png} } \subfloat[][Access to Quality (MLogit)]{ \includegraphics[width=0.33\textwidth]{2013Access_to_QualityK2MixedLogit.png} } \\ \subfloat[][Distance (Naive)]{ \includegraphics[width=0.33\textwidth]{2013DistanceK2Naive.png} } \subfloat[][Distance (Logit)]{ \includegraphics[width=0.33\textwidth]{2013DistanceK2Logit.png} } \subfloat[][Distance (MLogit)]{ \includegraphics[width=0.33\textwidth]{2013DistanceK2MixedLogit.png} }\\ \end{figure} Figure~\ref{tab:prediction2013K2} shows that in 2013 the Naive model for unassigned has an expected RMSE of 13.3 students per neighborhood, while the actual data yield an RMSE of 25.8 students per neighborhood. (In other words, the model is expected to be off by about 13.3 students per neighborhood, while it is actually off by 25.8.) If the Naive model were correct, such a large deviation would occur with 2\% probability, which indicates that while the Naive model may not be the most satisfactory, it is not totally implausible in terms of its predictions for the number of unassigned students per neighborhood. The Logit model, on the other hand, produces an actual RMSE of 18.7 students, which is much smaller than that of Naive; the p-value is larger, at 6.2\%.
This implies that the Logit model's predictions for unassigned cannot be rejected at the 95\% confidence level. MixedLogit produces a smaller actual RMSE of 18.1 students and the p-value is 7.8\%. Hence, in terms of forecasting the numbers and locations of unassigned students, MixedLogit performs the best, followed closely by Logit, and Naive performs the worst. For Access to Quality, the Naive model yields almost 10 times the expected error, with a p-value of about zero, while the Logit model is reasonably on target, with a p-value of 30\%. MixedLogit yields a smaller error and a larger p-value of 32.8\%. However, for distance, none of the models seem to explain the data, with all models having near zero p-values. The Naive model is off by 0.46 miles per neighborhood, and both Logit and MixedLogit are off by 0.26 miles per neighborhood. Next, we examine the neighborhood-by-neighborhood predictions themselves. These are shown in Tables~\ref{tab:unassigned2013K2}, \ref{tab:aoq2013K2}, and \ref{tab:distance2013K2}. For each model, we show the 95\% confidence interval as estimated in the 400 simulations. The Naive model over-predicts the number of unassigned everywhere, possibly because some schools that are in reality chosen by many students are not chosen often enough under the Naive rule. (Recall that we assume students rank at most 10 options, so low Tier schools far away would not be ranked at all under the Naive rule.) Logit performs better since school popularity levels are captured in the fixed effects. For the majority of neighborhoods, the actual realization is within the 95\% confidence interval, suggesting that Logit could have been a reasonable model for capacity planning purposes, at least in 2013. MixedLogit yields very similar results to simple Logit, and the confidence intervals for all neighborhoods overlap significantly with Logit. \begin{table}[!htbp] \centering \caption{Back testing unassigned predictions for 2013 K2.} \label{tab:unassigned2013K2} \footnotesize \begin{tabular}{l c c c c c c c} Neighborhood & Naive & (95 \% C.I.) & Logit & (95 \% C.I.) & MLogit & (95 \% C.I.)
& Actual\\ \hline Allston-Brighton & 32.80 & (17.00,51.02) & 8.83 & (0.00,24.00) & 7.66 & (0.00,23.02) & 11\\ Charlestown & 49.49 & (33.00,66.00) & 25.58 & (11.00,44.00) & 28.25 & (12.00,45.00) & 18\\ Downtown & 28.58 & (16.00,43.00) & 13.12 & (3.00,25.00) & 14.04 & (4.00,27.00) & 23\\ East Boston & 118.22 & (67.00,183.03) & 88.76 & (36.95,152.03) & 91.56 & (36.00,150.05) & 81\\ Hyde Park & 41.57 & (24.00,62.00) & 5.88 & (0.00,19.00) & 5.68 & (0.00,18.02) & 13\\ Jamaica Plain & 42.52 & (25.00,61.00) & 12.55 & (2.00,29.00) & 12.50 & (2.00,27.00) & 32\\ Mattapan & 33.88 & (14.00,58.00) & 5.46 & (0.00,20.02) & 5.86 & (0.00,23.00) & 21\\ North Dorchester & 36.54 & (18.00,60.05) & 6.41 & (0.00,24.02) & 7.48 & (0.00,23.02) & 16\\ Roslindale & 45.98 & (21.98,73.05) & 18.21 & (2.00,42.02) & 19.93 & (4.00,44.00) & 51\\ Roxbury & 109.14 & (73.97,154.03) & 18.78 & (2.00,54.08) & 20.56 & (3.00,56.02) & 47\\ South Boston & 20.35 & (6.00,39.00) & 3.90 & (0.00,16.00) & 4.12 & (0.00,17.00) & 7\\ South Dorchester & 72.47 & (37.00,113.03) & 12.98 & (0.00,43.00) & 14.65 & (0.00,49.02) & 52\\ South End & 45.78 & (32.00,62.00) & 18.82 & (5.00,35.00) & 19.75 & (6.97,36.02) & 26\\ West Roxbury & 53.11 & (27.98,84.03) & 26.14 & (7.00,51.00) & 27.28 & (8.00,54.00) & 48\\ \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{Back testing access to quality predictions for 2013 K2.} \label{tab:aoq2013K2} \footnotesize \begin{tabular}{l c c c c c c c} Neighborhood & Naive & (95 \% C.I.) & Logit & (95 \% C.I.) & MLogit & (95 \% C.I.) & Actual\\ \hline Allston-Brighton & 1 & (1.00,1.00) & 0.94 & (0.82,1.00) & 0.95 & (0.85,1.00) & 1\\ Charlestown & 1 & (1.00,1.00) & 0.92 & (0.79,1.00) & 0.94 & (0.83,1.00) & 1\\ Downtown & 1 & (1.00,1.00) & 0.94 & (0.85,1.00) & 0.96 & (0.88,1.00) & 1\\ East Boston & 1 & (1.00,1.00) & 0.97 & (0.89,1.00) & 0.98 & (0.90,1.00) & 1\\ Hyde Park & 0.56 & (0.49,0.64) & 0.97 & (0.88,1.00) & 0.97 & (0.86,1.00) & 0.99\\ Jamaica Plain & 0.79 & (0.72,0.87) & 0.91 & (0.81,1.00) & 0.91 & (0.81,1.00) & 0.96\\ Mattapan & 0.52 & (0.44,0.60) & 0.97 & (0.88,1.00) & 0.97 & (0.86,1.00) & 1.00\\ North Dorchester & 0.58 & (0.50,0.66) & 0.97 & (0.87,1.00) & 0.97 & (0.86,1.00) & 1\\ Roslindale & 0.73 & (0.64,0.83) & 0.91 & (0.80,1.00) & 0.90 & (0.79,1.00) & 0.96\\ Roxbury & 0.74 & (0.66,0.81) & 0.91 & (0.82,1.00) & 0.91 & (0.81,1.00) & 0.96\\ South Boston & 0.44 & (0.36,0.53) & 0.98 & (0.87,1.00) & 0.97 & (0.85,1.00) & 1\\ South Dorchester & 0.53 & (0.46,0.61) & 0.98 & (0.89,1.00) & 0.98 & (0.87,1.00) & 1\\ South End & 1 & (1.00,1.00) & 0.93 & (0.82,1.00) & 0.95 & (0.85,1.00) & 1\\ West Roxbury & 0.73 & (0.63,0.83) & 0.90 & (0.78,1.00) & 0.89 & (0.77,1.00) & 0.96\\ \end{tabular} \end{table} For access to quality, since the assignment system in 2013 gave each family at least about 25 choices in each zone, there were likely some Tier 1 or 2 school with some left over capacity, so the marginal student could have changed choices to selecting all Tier 1 or 2 schools and selecting them first, and gotten in a Tier 1 or 2 school with near certain probability. This results in the very high actual access to quality measures shown in Table~\ref{tab:aoq2013K2}. However, Naive forecasts a much tougher level of competition for Tier 1 or 2 schools, because it assumes every non-continuing students without siblings or ELL considerations go for such schools first. 
Although the MCAS data on which the Tiers were based were released months before, the Tiers themselves were finalized only around the time of the 2013 Round 1 choice. Hence, they were not salient at the time the choices were made. Again, Logit would have been a reasonable model for access to quality, with MixedLogit producing almost identical results. \begin{table}[!htbp] \centering \caption{Back testing distance predictions for 2013 K2.} \label{tab:distance2013K2} \footnotesize \begin{tabular}{l c c c c c c c} Neighborhood & Naive & (95 \% C.I.) & Logit & (95 \% C.I.) & MLogit & (95 \% C.I.) & Actual\\ \hline Allston-Brighton & 2.09 & (1.75,2.45) & 1.47 & (1.26,1.68) & 1.52 & (1.31,1.74) & 1.45\\ Charlestown & 1.52 & (1.18,1.84) & 1.77 & (1.43,2.07) & 1.71 & (1.43,2.00) & 0.98\\ Downtown & 1.59 & (1.30,1.91) & 1.51 & (1.30,1.77) & 1.52 & (1.27,1.78) & 1.57\\ East Boston & 2.88 & (2.63,3.12) & 2.30 & (2.00,2.61) & 2.37 & (2.03,2.70) & 1.82\\ Hyde Park & 2.93 & (2.67,3.23) & 2.22 & (2.01,2.48) & 2.22 & (2.02,2.43) & 2.19\\ Jamaica Plain & 1.66 & (1.53,1.80) & 1.57 & (1.45,1.71) & 1.57 & (1.43,1.72) & 1.43\\ Mattapan & 2.28 & (2.16,2.42) & 2.22 & (2.08,2.35) & 2.16 & (2.03,2.30) & 2.23\\ North Dorchester & 1.68 & (1.44,1.93) & 1.48 & (1.32,1.67) & 1.51 & (1.30,1.72) & 1.39\\ Roslindale & 1.84 & (1.73,1.96) & 1.80 & (1.68,1.91) & 1.77 & (1.66,1.90) & 1.65\\ Roxbury & 1.89 & (1.78,2.00) & 1.58 & (1.49,1.69) & 1.58 & (1.49,1.69) & 1.50\\ South Boston & 1.59 & (1.35,1.83) & 1.33 & (1.15,1.52) & 1.36 & (1.14,1.56) & 1.26\\ South Dorchester & 1.75 & (1.65,1.83) & 1.78 & (1.69,1.88) & 1.74 & (1.64,1.84) & 1.76\\ South End & 1.79 & (1.56,2.00) & 1.62 & (1.42,1.83) & 1.61 & (1.37,1.84) & 1.39\\ West Roxbury & 2.12 & (1.95,2.28) & 2.12 & (1.93,2.33) & 2.08 & (1.91,2.26) & 2.18\\ \end{tabular} \end{table} For distance, Naive again over-predicts because of its preference on going for better Tier schools despite possibly further distances, which does not reflect the trade-off implicitly shown by most families in 2013. Logit produced more reasonable results, with the actual realization being within the 95\% confidence interval in the majority of neighborhoods. However, for the neighborhoods at the peripheries of the city, Charlestown and East Boston, which are separated by bridges from the rest of the city, Logit over-estimates distances, suggesting that it predicts families are more willing to cross the bridges than they actually are. Again, MixedLogit yields very similar results as Logit. We repeat the same exercise using market shares, for top 1, 2 and 3 choices. The tail distribution plots are in Figure~\ref{fig:tail2013K2}. However, for this metric neither models seem to explain the data, with p-values being near zero for all cases. Nevertheless, the error in terms of total variation distance is about half as large with Logit compared to with Naive, suggesting that it is a much better model for market shares in 2013. MixedLogit improves over Logit in the actual error in all cases, but the improvements are small. The tables showing market share details from each neighborhood to each school are in Appendix D. Since these tables are much longer, we put them in a separate document titled ``Supplementary Data'' to this report, available as an ancillary file. \begin{figure}[h!] \centering \caption{Back testing market share predictions for 2013 K2. Tail distribution plots. 
\label{fig:tail2013K2}} \subfloat[][Top 1 (Naive)]{ \includegraphics[width=0.33\textwidth]{2013Top_1_Market_ShareK2Naive.png} } \subfloat[][Top 1 (Logit)]{ \includegraphics[width=0.33\textwidth]{2013Top_1_Market_ShareK2Logit.png} } \subfloat[][Top 1 (MLogit)]{ \includegraphics[width=0.33\textwidth]{2013Top_1_Market_ShareK2MixedLogit.png} }\\ \subfloat[][Top 2 (Naive)]{ \includegraphics[width=0.33\textwidth]{2013Top_2_Market_ShareK2Naive.png} } \subfloat[][Top 2 (Logit)]{ \includegraphics[width=0.33\textwidth]{2013Top_2_Market_ShareK2Logit.png} } \subfloat[][Top 2 (MLogit)]{ \includegraphics[width=0.33\textwidth]{2013Top_2_Market_ShareK2MixedLogit.png} }\\ \subfloat[][Top 3 (Naive)]{ \includegraphics[width=0.33\textwidth]{2013Top_3_Market_ShareK2Naive.png} } \subfloat[][Top 3 (Logit)]{ \includegraphics[width=0.33\textwidth]{2013Top_3_Market_ShareK2Logit.png} } \subfloat[][Top 3 (MLogit)]{ \includegraphics[width=0.33\textwidth]{2013Top_3_Market_ShareK2MixedLogit.png} }\\ \end{figure} \subsection{Forecasts for 2014} \label{sec:future} We show the predicted outcomes for 2014 K2 and the associated expected error distributions. The tail distributions of aggregate errors are in Figures~\ref{fig:outcomes2014K2} and \ref{fig:marketShare2014K2}. The neighborhood-by-neighborhood estimates are in Tables~\ref{tab:unassigned2014K2}, \ref{tab:aoq2014K2}, and \ref{tab:distance2014K2}. The K1 estimates are in the appendix. All estimates for 2014 are done using 1,000 simulations. If families' choices are driven by framing instead of underlying preferences, then Naive would do better than in 2013, and might even perform the best out of the three models. If this were the case, then it would suggest that we should re-think using demand modeling for analyzing counterfactuals, because details of how counterfactuals are presented may matter more than what we can learn from past data using demand models. On the other hand, if framing, although possibly significant, is not crucial to the outcome, then we expect the Logit model to do equally well in 2014, reasonably accurately predicting the outcome measures of unassigned, access to quality, and distance for most neighborhoods. We would also expect MixedLogit to perform better than Logit, since it is more flexible in capturing substitution patterns. However, judging by the small differences in the back testing results for 2013, we expect the improvements to be small. \begin{figure}[h!] \centering \caption{Forecasts for assignment outcomes in 2014 K2. Tail distribution plots.
\label{fig:outcomes2014K2}} \subfloat[][Unassigned (Naive)]{ \includegraphics[width=0.33\textwidth]{2014UnassignedK2Naive.png} } \subfloat[][Unassigned (Logit)]{ \includegraphics[width=0.33\textwidth]{2014UnassignedK2Logit.png} } \subfloat[][Unassigned (MLogit)]{ \includegraphics[width=0.33\textwidth]{2014UnassignedK2MixedLogit.png} }\\ \subfloat[][Access to Quality (Naive)]{ \includegraphics[width=0.33\textwidth]{2014Access_to_QualityK2Naive.png} } \subfloat[][Access to Quality (Logit)]{ \includegraphics[width=0.33\textwidth]{2014Access_to_QualityK2Logit.png} } \subfloat[][Access to Quality (MLogit)]{ \includegraphics[width=0.33\textwidth]{2014Access_to_QualityK2MixedLogit.png} }\\ \subfloat[][Distance (Naive)]{ \includegraphics[width=0.33\textwidth]{2014DistanceK2Naive.png} } \subfloat[][Distance (Logit)]{ \includegraphics[width=0.33\textwidth]{2014DistanceK2Logit.png} } \subfloat[][Distance (MLogit)]{ \includegraphics[width=0.33\textwidth]{2014DistanceK2MixedLogit.png} }\\ \end{figure} \begin{figure}[h!] \centering \caption{Forecasts for market shares for 2014 K2. Tail distribution plots. \label{fig:marketShare2014K2}} \subfloat[][Top 1 (Naive)]{ \includegraphics[width=0.33\textwidth]{2014Top_1_Market_ShareK2Naive.png} } \subfloat[][Top 1 (Logit)]{ \includegraphics[width=0.33\textwidth]{2014Top_1_Market_ShareK2Logit.png} } \subfloat[][Top 1 (MLogit)]{ \includegraphics[width=0.33\textwidth]{2014Top_1_Market_ShareK2MixedLogit.png} }\\ \subfloat[][Top 2 (Naive)]{ \includegraphics[width=0.33\textwidth]{2014Top_2_Market_ShareK2Naive.png} } \subfloat[][Top 2 (Logit)]{ \includegraphics[width=0.33\textwidth]{2014Top_2_Market_ShareK2Logit.png} } \subfloat[][Top 2 (MLogit)]{ \includegraphics[width=0.33\textwidth]{2014Top_2_Market_ShareK2MixedLogit.png} }\\ \subfloat[][Top 3 (Naive)]{ \includegraphics[width=0.33\textwidth]{2014Top_3_Market_ShareK2Naive.png} } \subfloat[][Top 3 (Logit)]{ \includegraphics[width=0.33\textwidth]{2014Top_3_Market_ShareK2Logit.png} } \subfloat[][Top 3 (MLogit)]{ \includegraphics[width=0.33\textwidth]{2014Top_3_Market_ShareK2MixedLogit.png} }\\ \end{figure} \begin{table}[!htbp] \centering \caption{Unassigned predictions for 2014 K2.} \label{tab:unassigned2014K2} \footnotesize \begin{tabular}{l c c c c c c} Neighborhood & Naive & (95 \% C.I.) & Logit & (95 \% C.I.) 
& MLogit & (95 \% C.I.)\\ \hline Allston-Brighton & 4.07 & (0.00,14.00) & 1.28 & (0.00,9.00) & 1.18 & (0.00,10.00)\\ Charlestown & 0.68 & (0.00,8.00) & 1.25 & (0.00,10.00) & 1.38 & (0.00,11.00)\\ Downtown & 1.24 & (0.00,5.00) & 1.09 & (0.00,7.00) & 1.36 & (0.00,9.00)\\ East Boston & 47.22 & (2.00,96.00) & 49.22 & (5.00,107.00) & 49.00 & (6.00,111.03)\\ Hyde Park & 2.90 & (0.00,16.00) & 5.09 & (0.00,18.00) & 5.12 & (0.00,18.00)\\ Jamaica Plain & 18.15 & (6.00,34.00) & 1.68 & (0.00,7.00) & 2.07 & (0.00,8.00)\\ Mattapan & 17.73 & (3.00,37.02) & 3.86 & (0.00,18.00) & 5.17 & (0.00,20.00)\\ North Dorchester & 18.79 & (7.00,34.00) & 2.07 & (0.00,8.00) & 3.02 & (0.00,11.00)\\ Roslindale & 4.66 & (0.00,22.00) & 3.69 & (0.00,20.00) & 4.27 & (0.00,22.00)\\ Roxbury & 42.07 & (20.98,67.03) & 4.24 & (0.00,15.00) & 5.97 & (0.00,18.00)\\ South Boston & 8.93 & (1.00,20.00) & 0.35 & (0.00,4.00) & 0.63 & (0.00,5.00)\\ South Dorchester & 19.80 & (3.00,49.00) & 10.52 & (0.00,36.00) & 11.18 & (0.00,39.00)\\ South End & 9.30 & (0.00,22.00) & 4.03 & (0.00,14.00) & 4.70 & (0.00,15.00)\\ West Roxbury & 5.43 & (0.00,25.00) & 8.78 & (0.00,30.00) & 8.42 & (0.00,28.02)\\ \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{Access to quality predictions for 2014 K2.} \label{tab:aoq2014K2} \footnotesize \begin{tabular}{l c c c c c c} Neighborhood & Naive & (95 \% C.I.) & Logit & (95 \% C.I.) & MLogit & (95 \% C.I.)\\ \hline Allston-Brighton & 0.80 & (0.70,0.94) & 0.98 & (0.87,1.00) & 0.98 & (0.88,1.00)\\ Charlestown & 0.94 & (0.83,1.00) & 0.98 & (0.91,1.00) & 0.99 & (0.90,1.00)\\ Downtown & 0.85 & (0.76,0.93) & 0.92 & (0.85,0.97) & 0.91 & (0.84,0.96)\\ East Boston & 0.80 & (0.72,0.90) & 0.87 & (0.77,0.98) & 0.88 & (0.77,0.98)\\ Hyde Park & 0.69 & (0.59,0.81) & 0.86 & (0.76,0.94) & 0.86 & (0.76,0.94)\\ Jamaica Plain & 0.68 & (0.60,0.78) & 0.89 & (0.78,1.00) & 0.90 & (0.77,1.00)\\ Mattapan & 0.59 & (0.50,0.69) & 0.97 & (0.85,1.00) & 0.96 & (0.83,1.00)\\ North Dorchester & 0.50 & (0.42,0.58) & 0.77 & (0.63,0.94) & 0.77 & (0.63,0.93)\\ Roslindale & 0.74 & (0.66,0.84) & 0.98 & (0.89,1.00) & 0.98 & (0.88,1.00)\\ Roxbury & 0.56 & (0.50,0.63) & 0.83 & (0.72,0.93) & 0.83 & (0.72,0.92)\\ South Boston & 0.48 & (0.40,0.57) & 0.72 & (0.61,0.86) & 0.70 & (0.58,0.82)\\ South Dorchester & 0.60 & (0.52,0.68) & 0.87 & (0.76,0.97) & 0.86 & (0.75,0.96)\\ South End & 0.67 & (0.59,0.74) & 0.82 & (0.73,0.92) & 0.80 & (0.72,0.89)\\ West Roxbury & 0.78 & (0.70,0.88) & 0.89 & (0.81,0.95) & 0.89 & (0.81,0.95)\\ \end{tabular} \end{table} \begin{table}[!htbp] \centering \caption{Distance predictions for 2014 K2.} \label{tab:distance2014K2} \footnotesize \begin{tabular}{l c c c c c c} Neighborhood & Naive & (95 \% C.I.) & Logit & (95 \% C.I.) 
& MLogit & (95 \% C.I.)\\ \hline Allston-Brighton & 1.44 & (1.23,1.66) & 1.27 & (1.10,1.47) & 1.28 & (1.09,1.50)\\ Charlestown & 0.97 & (0.70,1.26) & 0.94 & (0.76,1.12) & 0.95 & (0.78,1.14)\\ Downtown & 1.30 & (1.06,1.64) & 1.23 & (1.08,1.40) & 1.23 & (1.08,1.39)\\ East Boston & 1.75 & (1.60,1.93) & 1.24 & (1.05,1.44) & 1.27 & (1.06,1.51)\\ Hyde Park & 2.04 & (1.89,2.20) & 1.80 & (1.67,1.94) & 1.78 & (1.64,1.92)\\ Jamaica Plain & 1.29 & (1.19,1.39) & 1.17 & (1.07,1.26) & 1.16 & (1.07,1.26)\\ Mattapan & 1.80 & (1.69,1.93) & 1.71 & (1.61,1.83) & 1.71 & (1.60,1.84)\\ North Dorchester & 1.29 & (1.17,1.42) & 1.17 & (1.08,1.27) & 1.17 & (1.07,1.27)\\ Roslindale & 1.73 & (1.62,1.82) & 1.53 & (1.44,1.63) & 1.52 & (1.43,1.62)\\ Roxbury & 1.37 & (1.28,1.46) & 1.21 & (1.15,1.28) & 1.21 & (1.15,1.28)\\ South Boston & 1.46 & (1.26,1.69) & 1.21 & (1.06,1.37) & 1.20 & (1.04,1.35)\\ South Dorchester & 1.52 & (1.43,1.61) & 1.43 & (1.35,1.51) & 1.43 & (1.35,1.51)\\ South End & 1.41 & (1.25,1.58) & 1.30 & (1.15,1.44) & 1.28 & (1.14,1.43)\\ West Roxbury & 1.89 & (1.76,2.02) & 1.70 & (1.57,1.83) & 1.69 & (1.56,1.82)\\ \end{tabular} \end{table} \clearpage \section{Conclusion} This paper reports on ex ante forecasts using a discrete choice model of demand of a large-scale change to choice menus occurring in Boston's school choice plan in 2014. It also reports on a simpler statistical forecast that is not based on an underlying random utility model. The methodology and target outcomes are described before information on new preferences is available to avoid any scope for post-analysis bias. Part II of this report will revisit these forecasts using data from the new system to assess the strengths and limitations of discrete choice models of demand in our context. \clearpage \newpage
\section{Proof of \textit{commuting conversions in} $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}$\\ (Lemma~\ref{lemma:bvtatidrulein commuting conversions}, page~\pageref{lemma:bvtatidrulein commuting conversions})} \label{section:Proof of lemma:bvtatidrulein commuting conversions} The proof is, first, by cases on $\rho$, and, then, by cases on $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread$. Fixed $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread$, the proof is by cases on $R$ which must contain a redex of $\mbox{$\mathsf{ai}\!\!\downarrow$}, \mbox{$\mathsf{q}\!\!\downarrow$}$, or $\mbox{$\mathsf{u}\!\!\downarrow$}$, that, after $\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}$, leads to the chosen $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread$. \par We start with $\rho \equiv \mbox{$\mathsf{ai}\!\!\downarrow$}$. \begin{itemize} \item Let $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}} \vlread \approx\vlsbr[\atma;\natma]$. So, $\vlstore{\vlsbr<\natma;[b;\vlne\atmb]>} \vlsbr[\atma;\vlfo{b}{\vlread}]$, and $\vlstore{\vlsbr<\natma;[b;\vlne\atmb]>} \vlsbr[\atma;\vlread]$ are the most relevant forms of $R$. Others can be $\vlstore{[b;\vlne\atmb]} \vlsbr[\atma;<\natma;\vlfo{b}{\vlread}>]$, and $\vlstore{[b;\vlne\atmb]} \vlsbr[[\atma;\natma];\vlfo{b}{\vlread}]$, and $\vlsbr<[\atma;\natma];[b;\vlne\atmb]>$, and $\vlstore{[b;\vlne\atmb]} \vlsbr<[\atma;\natma];\vlfo{b}{\vlread}>$. \par We fully develop only the first case with $R\approx \vlstore{\vlsbr<\natma;[b;\vlne\atmb]>} \vlsbr[\atma;\vlfo{b}{\vlread}]$. In it the derivation\\ $\vlderivation{ \vliq{\mathsf{ai}\downarrow ,\eqref{align:unit-seq} ,\eqref{align:alpha-intro}}{} {\vlsbr[\atma;\vlfo{b}{<\natma;[b;\vlne\atmb]>}]}{ \vlin{\mathsf{at}\downarrow\llcorner}{} {\vlsbr[\atma;\natma]}{ \vlhy{\vlone}}}}$ transforms to $ \vlderivation{ \vliq{\eqref{align:alpha-intro} ,\mathsf{u}\downarrow}{} {\vlsbr[\atma;\vlfo{b}{<\natma;[b;\vlne\atmb]>}]}{ \vliq{\eqref{align:unit-pa} ,\mathsf{q}\downarrow ,\eqref{align:unit-pa}}{} {\vlfo{b}{\vlsbr[\atma;<\natma;[b;\vlne\atmb]>]}}{ \vlin{\mathsf{at}\downarrow\llcorner}{} {\vlfo{b}{\vlsbr<[\atma;\natma];[b;\vlne\atmb]>}}{ \vliq{\mathsf{ai}\downarrow ,\eqref{align:alpha-intro}}{} {\vlfo{b}{\vlsbr[b;\vlne\atmb]}}{ \vlhy{\vlone}}}}}} $. \par If, instead, $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread \approx \vlstore{\vlsbr<\natma;[b;\vlne\atmb]>} \vlsbr[\atma;\vlread]$, then no instances of $\mbox{$\mathsf{u}\!\!\downarrow$}$ are required, but only one of $\mbox{$\mathsf{q}\!\!\downarrow$}$. \item Let $S\vlhole\approx\vlsbr[\vlholer{S'\vlhole};U']$. \begin{itemize} \item If $R\approx \vlsbr[\vlholer{S'[\atma;\natma]} ;S''[b;\vlne\atmb]]$, with $U' \approx \vlsbr{S''[b;\vlne\atmb]}$, then $\vlupsmash{ \vlderivation{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr[\vlholer{S'[\atma;\natma]} ;S''[b;\vlne\atmb]]}{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr[\vlholer{S'[\atma;\natma]} ;U'']}{ \vlhy{\vlsbr[R';U'']}}}}}$ transforms to $\vlupsmash{ \vlderivation{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr[\vlholer{S'[\atma;\natma]} ;S''[b;\vlne\atmb]]}{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr[R' ;S''[b;\vlne\atmb]] }{ \vlhy{\vlsbr[R';U'']}}}}}$, for some $R'$, and $U''$. 
\item If $R\approx \vlsbr[\vlholer{S'[\atma;\natma]} ;U'] \equiv \vlsbr[S''[b;\vlne\atmb] ;U']$, then $\vlupsmash{ \vlderivation{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr[S''[b;\vlne\atmb];U']}{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr[\vlholer{S'''[\atma;\natma]};U']}{ \vlhy{\vlsbr[R';U']}}}}}$, for some $\vlholer{S'''\vlhole}$, which is $S''\vlsbr[b;\vlne\atmb]$ with $\vlsbr[b;\vlne\atmb]$ replaced by $\vlone$, and $R'$, transforms to $ \vlderivation{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr[\vlholer{S'[\atma;\natma]} ;U']}{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr[S''''\vlsbr[b;\vlne\atmb] ;U'] }{ \vlhy{\vlsbr[R';U']}}}} $ for some $S''''\vlhole$ which is $S'\vlsbr[\atma;\natma]$, with $\vlsbr[\atma;\natma]$ replaced by $\vlone$. \end{itemize} \item Let $S\vlhole\approx\vlfo{c}{\vlholer{S'\vlhole}}$ where $c$ may also coincide with $\atma$, or $b$. This case is analogous to the last point of the previous case, because $\vlstore{\vlsbr[\atma;\natma]} \vlholer{S'\vlread} \equiv S''\vlsbr[b;\vlne\atmb]$, for some $S''\vlhole$. \item Let $S\vlhole\approx\vlsbr<\vlholer{S'\vlhole};U'>$. \begin{itemize} \item If $R\approx \vlstore{\vlholer{S'[\atma;\natma]}} \vlsbr<\vlread ;S''[b;\vlne\atmb]>$, with $U' \approx \vlsbr{S''[b;\vlne\atmb]}$, then $\vlupsmash{ \vlderivation{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr<\vlholer{S'[\atma;\natma]} ;S''[b;\vlne\atmb]>}{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr<\vlholer{S'[\atma;\natma]} ;U''>}{ \vlhy{\vlsbr<R';U''>}}}}}$ transforms to $\vlupsmash{ \vlderivation{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr<\vlholer{S'[\atma;\natma]} ;S''[b;\vlne\atmb]>}{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr<R' ;S''[b;\vlne\atmb]> }{ \vlhy{\vlsbr<R';U''>}}}}}$, for some $R'$, and $U''$. \item If $\vlstore{ \vlsbr<\vlholer{S'\vlsbr[\atma;\natma]};U'> } R\approx\vlread \equiv \vlsbr<S''[b;\vlne\atmb];U'>$, then $\vlupsmash{ \vlderivation{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr<S''[b;\vlne\atmb];U'>}{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr<\vlholer{S'''[\atma;\natma]};U'>}{ \vlhy{\vlsbr<R';U'>}}}}}$, for some $\vlholer{S'''\vlhole}$, which is $S''\vlsbr[b;\vlne\atmb]$, with $\vlsbr[b;\vlne\atmb]$ replaced by $\vlone$, and $R'$, transforms to $ \vlderivation{ \vliq{\mathsf{at}\downarrow\llcorner}{} {\vlsbr<\vlholer{S'[\atma;\natma]} ;U'>}{ \vliq{\mathsf{ai}\downarrow}{} {\vlsbr<S''''\vlsbr[b;\vlne\atmb] ;U'> }{ \vlhy{\vlsbr<R';U'>}}}} $ for some $S''''\vlhole$ which is $S'\vlsbr[\atma;\natma]$, with $\vlsbr[\atma;\natma]$ replaced by $\vlone$. \end{itemize} \end{itemize} Now we focus on the case with $\rho \equiv \mbox{$\mathsf{q}\!\!\downarrow$}$. \begin{itemize} \item Let $\vlholer{S\vlhole} \approx S'\vlsbr[<U';S''\vlhole> ;<U'';U'''>]$. Then $R\approx S'\vlsbr[<U';S''[\atma;\natma]> ;<U'';U'''>]$, and\\ $\vlderivation{ \vlin{\mathsf{ai}\downarrow}{} {S'\vlsbr[<U';S''[\atma;\natma]> ;<U'';U'''>]}{ \vlin{\mathsf{q}\downarrow}{} {S'\vlsbr[<U';S''\,\vlscn{\vlone}> ;<U'';U'''>]}{ \vlhy{S'\vlsbr<[U';U''] ;[S''\,\vlscn{\vlone};U''']>}}}}$ transforms to $\vlupsmash{ \vlderivation{ \vlin{\mathsf{q}\downarrow}{} {S'\vlsbr[<U';S''[\atma;\natma]> ;<U'';U'''>]}{ \vlin{\mathsf{ai}\downarrow}{} {S'\vlsbr<[U';U''] ;[S''[\atma;\natma];U''']>}{ \vlhy{S'\vlsbr<[U';U''] ;[S''\,\vlscn{\vlone};U''']>}}}}}$. \item Let $\vlholer{S\vlhole} \approx S'\vlsbr[<S''\vlhole;U'> ;<U'';U'''>]$. This case is analogous to the previous one. \end{itemize} Finally, let $\rho \equiv \mbox{$\mathsf{u}\!\!\downarrow$}$.
Then $\mbox{$\mathsf{u}\!\!\downarrow$}$ involves the redex of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ whenever $\vlholer{S\vlhole}$ is $\vlstore{ \vlholer{ S' \vlsbr[\vlstore{S''\vlhole} \vlfo{\atma}{\vlread} ;\vlfo{\atma}{U'}] } }\vlread$. So, $R\approx S' \vlsbr[\vlstore{S''\vlsbr[\atma;\natma]} \vlfo{\atma}{\vlread} ;\vlfo{\atma}{U'}]$, and $\vlderivation{ \vlin{\mathsf{ai}\downarrow}{} {S' \vlsbr[\vlstore{S''\vlsbr[\atma;\natma]} \vlfo{\atma}{\vlread} ;\vlfo{\atma}{U'}]}{ \vlin{\mathsf{u}\downarrow}{} {S' \vlsbr[\vlstore{S''\,\vlscn{\vlone}} \vlfo{\atma}{\vlread} ;\vlfo{\atma}{U'}]}{ \vlhy{\vlstore{ \vlsbr[S''\,\vlscn{\vlone};U']} S'\vlfo{\atma}{\vlread}}}}}$ transforms to\\ $\vlderivation{ \vlin{\mathsf{u}\downarrow}{} {S' \vlsbr[\vlstore{S''\vlsbr[\atma;\natma]} \vlfo{\atma}{\vlread} ;\vlfo{\atma}{U'}]}{ \vlin{\mathsf{ai}\downarrow}{} {\vlstore{ \vlsbr[S''[\atma;\natma];U']} S'\vlfo{\atma}{\vlread}}{ \vlhy{\vlstore{ \vlsbr[S''\,\vlscn{\vlone};U']} S'\vlfo{\atma}{\vlread}}}}}$. \section{Proof of \textit{Big-step commuting conversions in $ \BV\mathsf{Q}\xspace $} (Proposition~\ref{proposition:Big-step commuting conversions in BVT}, page~\pageref{proposition:Big-step commuting conversions in BVT})} \label{section:Proof of proposition:Big-step commuting conversions in BVT} We apply Point~\ref{enum:Splitting-seq} of Theorem~\ref{theorem:Splitting-ALT} to the part of $ \bvtPder $ that includes $ \bvtPder'$ and $ (*) $ for which we forcefully have $\vlstore{\strFN{\vlsbr<T;S''[\atma;\natma]>} \cap\strBN{\vlsbr<T;S''[\atma;\natma]>}=\emptyset} \vlread$. There are $\strK_1, \strK_2$, and, possibly, some atoms $\vec{b}$ such that: {\small $$ \vlderivation{ \vlde{\mathcal{E}}{\BV\mathsf{Q}\xspace} {S' \vlsbr<T;S''[\atma;\natma]>}{ \vlhy{\vlfo{\vec{b}} {\vlsbr[\vlsbr<T;S''[\atma;\natma]> ;\strK_1;\strK_2]}}}} \qquad \vlderivation{ \vlpd{\mathcal{Q}'}{\BV\mathsf{Q}\xspace} {\vlsbr[T;\strK_1]}} \qquad \vlderivation{ \vlpd{\mathcal{Q}''}{\BV\mathsf{Q}\xspace} {\vlsbr[S''[\atma;\natma];\strK_2]}} $$ \noindent We observe that $\mathcal{Q}''$ can always be of the form: {\small $$ \vlderivation{ \vlin{\mathsf{ai}\downarrow}{} {\vlsbr[S''[\atma;\natma];\strK_2]} { \vlpd{\mathcal{Q}'''}{\BV\mathsf{Q}\xspace} {\vlsbr[S''\,\vlscn{\vlone};\strK_2]}}} $$ } \noindent just because no constraint exists to annihilate the occurrence of $\atma$, and of $\natma$ in $S''\vlhole$. We compose all the derivations, and proof outlined so far: {\small $$ \vlderivation{ \vlde{\bvtDder'}{\BV\mathsf{Q}\xspace} {R}{ \vlde{\mathcal{E}}{\BV\mathsf{Q}\xspace} {S' \vlsbr<T;S''[\atma;\natma]>}{ \vlin{\mathsf{q}\downarrow}{} {\vlfo{\vec{b}} {\vlsbr [<T;S''[\atma;\natma]> ;\strK_1;\strK_2]}}{ \vlde{\mathcal{Q}'}{\BV\mathsf{Q}\xspace} {\vlfo{\vec{b}} {\vlsbr <[T;\strK_1] ;[S''[\atma;\natma] ;\strK_2]>} }{ \vlin{\mathsf{ai}\downarrow}{(**)} {\vlfo{\vec{b}} {\vlsbr [S''[\atma;\natma] ;\strK_2]} }{ \vlpr{\mathcal{Q}'''}{\BV\mathsf{Q}\xspace} {\vlfo{\vec{b}} {\vlsbr [S''\,\vlscn{\vlone} ;\strK_2]} }}}}}}} $$ The resulting proof contains $(**)$ whose \textnormal{\textsf{Seq}}\xspace-number is strictly smaller than the one of $(*)$. \section{Proof of \textit{\textbf{A language of \invertible\ structure s}} (proposition~\ref{proposition:Invertible structures are invertible}, page~\pageref{proposition:Invertible structures are invertible})} \label{section:Proof of proposition:Invertible structures are invertible} This proof rests on Shallow splitting of \cite{Roversi:TLCA11,Roversi:unpub2012-I} whose statement we recall here. 
\begin{proposition}[\textit{\textbf{Shallow Splitting}}] \label{proposition:Shallow Splitting} Let $R, T$, and $P$ be structures, and $\atma$ be a name, and $ \bvtPder$ be a proof of $\BV\mathsf{Q}\xspace$. \begin{enumerate} \item\label{enum:Shallow-Splitting-seq} If $\vlstore{\vlsbr[<R;T>;P]} \bvtInfer{\bvtPder} {\ \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$, then there are $\vlstore{\vlsbr<P_1;P_2>\bvtJudGen{\BV\mathsf{Q}\xspace}{} P} \bvtInfer{\bvtDder}{\vlread}$, and $\vlstore{\bvtJudGen{\BV\mathsf{Q}\xspace}{} {\vlsbr[R;P_1]}} \bvtInfer{\bvtPder_1}{\ \vlread}$, and $\vlstore{\bvtJudGen{\BV\mathsf{Q}\xspace}{} {\vlsbr[T;P_2]}} \bvtInfer{\bvtPder_2}{\ \vlread}$, for some $P_1$, and $P_2$. \item\label{enum:Shallow-Splitting-copar} If $\vlstore{\vlsbr[(R;T);P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$, then there are $\vlstore{\vlsbr[P_1;P_2] \bvtJudGen{\BV\mathsf{Q}\xspace}{} P} \bvtInfer{\bvtDder}{\vlread}$, and $\vlstore{\bvtJudGen{\BV\mathsf{Q}\xspace}{}{\vlsbr[R;P_1]}} \bvtInfer{\bvtPder_1}{\ \vlread}$, and $\vlstore{\bvtJudGen{\BV\mathsf{Q}\xspace}{}{\vlsbr[T;P_2]}} \bvtInfer{\bvtPder_2}{\ \vlread}$, for some $P_1$, and $P_2$. \item\label{enum:Shallow-Splitting-atom} Let $\vlstore{\vlsbr[R;P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$ with $R\approx\vlsbr[\atmLabL_1;\vldots;\atmLabL_m]$, such that $ i\neq j $ implies $ \atmLabL_i \neq \vlne{\atmLabL_j} $, for every $ i,j \in\Set{1,\ldots,m}$, and $ m>0 $. Then, for every structure $R_0$, and $R_1$, if $R\approx\vlsbr[R_0;R_1]$, there exists $\vlstore{\vlne{R_1} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlsbr[R_0;P]} \bvtInfer{\bvtDder} {\ \vlread}$. \item\label{enum:Shallow-Splitting-fo} If $\vlstore{\vlsbr[\vlfo{\atma}{R};P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{}{} \vlread}$, then there are $\vlstore{\vlfo{\atma}{T} \bvtJudGen{\BV\mathsf{Q}\xspace}{} P} \bvtInfer{\bvtDder}{\vlread}$, and $\vlstore{\bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlsbr[R;T]} \bvtInfer{\bvtPder'}{\ \vlread}$, for some $T$. \end{enumerate} \end{proposition} Now, we reason by induction on $\vlstore{\vlsbr[\vlneT;P]} \Size{\vlread}$, proceeding by cases on the form of $\vlneT$. \par As a \emph{first case} we assume $\vlneT\approx\vlne{\vlone}$, and we cope with a base case. The assumption becomes $\vlstore{\vlsbr[\vlne{\vlone};P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{}{} \vlread}$ which is exactly: \[ \vlderivation { \vlde{\bvtPder}{} {\vlsbr[\vlne{\vlone};P] \approx P} { \vlhy{\vlne{\vlone} \approx \vlone }}} \] \par As a \emph{second case} we assume $\vlneT\approx \vlsbr[\natma_1;\vldots;\natma_m]$, and we cope with another base case. The assumption becomes $\vlstore{\vlsbr[[\natma_1;\vldots;\natma_m];P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{}{} \vlread}$. We conclude by Point~\ref{enum:Shallow-Splitting-atom} of Shallow Splitting (Proposition~\ref{proposition:Shallow Splitting}) which implies $\vlstore{\vlsbr(\atma_1;\vldots;\atma_m) \bvtJudGen{\BV\mathsf{Q}\xspace}{} P} \vlread$. \par As a \emph{third case} we assume $\vlneT\approx \vlsbr(R_1;R_2)$. So, the assumption is $\vlstore{\vlsbr[(R_1;R_2);P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{}{} \vlread}$. 
\par Point~\ref{enum:Shallow-Splitting-copar} of Shallow Splitting (Proposition~\ref{proposition:Shallow Splitting}) implies $\vlstore{\vlsbr[P_1;P_2] \bvtJudGen{}{} P} \bvtInfer{\bvtDder}{\ \vlread}$, and $\vlstore{\bvtJudGen{}{} \vlsbr[R_1;P_1]} \bvtInfer{\mathcal{Q}_1}{\ \vlread}$, and $\vlstore{\bvtJudGen{}{} \vlsbr[R_2;P_2]} \bvtInfer{\mathcal{Q}_2}{\ \vlread}$, for some $P_1, P_2$. \par Both $R_1$, and $R_2$ are invertible, and $\vlstore{\Size{\vlsbr[R_1;P_1]}} \vlread$ $< \vlstore{\vlsbr[(R_1;R_2);P]} \Size{\vlread}$, and $\vlstore{\Size{\vlsbr[R_2;P_2]}} \vlread$ $< \vlstore{\vlsbr[(R_1;R_2);P]} \Size{\vlread}$. So, the inductive hypothesis holds on $\mathcal{Q}_1$, and $\mathcal{Q}_2$. We get $\vlstore{\vlne{R_1} \bvtJudGen{}{} P_1} \bvtInfer{\mathcal{E}_1}{\ \vlread}$, and $\vlstore{\vlne{R_2} \bvtJudGen{}{} P_2} \bvtInfer{\mathcal{E}_2}{\ \vlread}$. We conclude by: \[ \vlderivation { \vlde{\bvtDder}{} {P} { \vlde{\mathcal{E}_1}{} {\vlsbr[P_1;P_2]} { \vlde{\mathcal{E}_2}{} {\vlsbr[\vlne{R_1};P_2]} { \vliq{\eqref{align:negation-pa}}{} {\vlsbr[\vlneR_1;\vlneR_2]}{ \vlhy{\vlsbr\vlne{(R_1;R_2)}}}}}}} \] \par As a \emph{fourth case} we assume $\vlneT\approx\vlfo{\atma}{R}$ such that, without loss of generality, $\atma\in\strBN{\vlfo{\atma}{R}}$. So, the assumption is $\vlstore{\vlsbr[\vlfo{\atma}{R};P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{}{} \vlread}$. \par Point~\ref{enum:Shallow-Splitting-fo} of Shallow Splitting (Proposition~\ref{proposition:Shallow Splitting}) implies $\vlstore{P} \bvtInfer{\bvtDder}{\ \vlfo{\atma}{T} \bvtJudGen{}{} \vlread}$, and $\vlstore{\vlsbr[R;T]} \bvtInfer{\mathcal{Q}}{\ \bvtJudGen{}{} \vlread}$, for some $T$. \par Both $R$ invertible, and $\vlstore{\vlsbr[R;T]} \Size{\vlread}$ $< \vlstore{\vlsbr[\vlfo{\atma}{R};P]} \Size{\vlread}$, imply the induction holds on $\mathcal{Q}$. We get $\vlstore{\vlneR \bvtJudGen{}{} T} \bvtInfer{\mathcal{E}}{\ \vlread}$. \par So, we conclude that: \[ \vlderivation { \vlde{\bvtDder}{} {P} { \vlde{\mathcal{E}}{} {\vlfo{\atma}{T}} { \vliq{\eqref{align:negation-fo}}{} {\vlfo{\atma}{\vlneR}} { \vlhy{\vlne{\vlfo{\atma}{R}}}} }}} \] \section{Proving point~\eqref{enumerate:Trivial derivations and rightcontext s-03} of \textit{Process structure s, \trivial\ derivation s and right-context s} (Proposition~\ref{proposition:Rightcontext s preserve communication}, page~\pageref{proposition:Rightcontext s preserve communication})} \label{section:Proof of Rightcontext s can preserve external communication} The proof is by induction on the size of $ \pincE $, proceeding by cases on the form of $\vlholer{S'\vlhole}$, which, by assumption, is a process structure, so it can assume only specific forms. \begin{itemize} \item The base case is $\vlholer{S'\vlhole} \approx \vlstore{\vlsbr<\vlhole;U>} \vlread$, for some $U$. So, $\vlholer{S'\,\vlscn{\vlone}} \approx \vlstore{\vlsbr<\vlone;U>} \vlread \approxU$. Moreover, $\mapPincToDi{\pincE}=\vlsbr<b;U>$ implies that $ \pincE $ is $ \pincSec{b}{\pincE'} $ for some $\pincE'$ such that\ $\mapPincToDi{\pincE'}=U$. Since we can prove: {\small $$ \vlinf{\pincact}{} {\pincLTSJud{\pincSec{b} {\pincE'} } {\pincE' } {b}} {} $$ we are done because $\mapPincToDi{F}=\vlsbr<\vlone;U>\approxU=\mapPincToDi{\pincE'}$. \par A first remark is that we cannot have $\vlholer{S'\vlhole} \approx \vlsbr<\vlholer{\breve{S}'\vlhole};F>$ with $\vlholer{\breve{S}'\vlhole}\not\approx\vlhole$. Otherwise $\vlholer{S'\vlhole}$ would not be a process structure. 
\par A second remark is that $ U\approx \vlone $ does not pose any problem. In such a case $ \pincE $ is $ \pincSec{b}{\pincZer} $, and we can write $\pincLTSJud{\pincSec{b} {\pincZer}}{\pincZer}{b}$. \item Let $\vlholer{S'\vlhole} \approx \vlsbr[\vlholer{\breve{S}'\vlhole};U]$. The assumptions $\mapPincToDi{\pincE}=\vlsbr[\vlholer{\breve{S}'\,\vlscn{b}};U]$, and $\mapPincToDi{F}=\vlsbr[\vlholer{\breve{S}'\,\vlscn{\vlone}};U]$ imply that $\pincE$ is $\pincPar{\pincE'}{\pincE''}$, and $F$ is $\pincPar{F'}{\pincE''}$, for some $\pincE' ,\pincE''$, and $F'$ such that\ $\mapPincToDi{\pincE'}=\vlholer{\breve{S}'\,\vlscn{b}}$, and $\mapPincToDi{F'}=\vlholer{\breve{S}'\,\vlscn{\vlone}}$, and $\mapPincToDi{\pincE''}=U$. We can prove: {\small $$ \vlstore{ } \vlinf{\pinccntxp}{} {\pincLTSJud{\pincPar{\pincE'} {\pincE''} } {\pincPar{F'} {\pincE''} } {\pincLabL} } {\pincLTSJud{\pincE'} {F'} {\pincLabL} } $$ because the premise holds thanks to the inductive hypotheses, also assuring the desired constraints on $\pincLabL$. \item Let $\vlholer{S'\vlhole} \approx \vlfo{\atma}{\vlholer{\breve{S}'\vlhole}}$. The assumptions $\mapPincToDi{\pincE}=\vlfo{\atma}{\vlholer{\breve{S}'\,\vlscn{b}}}$, and $\mapPincToDi{F}=\vlfo{\atma}{\vlholer{\breve{S}'\,\vlscn{\vlone}}}$ imply that $\pincE$ is $\pincNu{\pinca}{\pincE'}$, and $F$ is $\pincNu{\pinca}{F'}$, for some $\pincE'$, and $F'$ such that\ $\mapPincToDi{\pincE'}=\vlholer{\breve{S}'\,\vlscn{b}}$, and $\mapPincToDi{F'}=\vlholer{\breve{S}'\,\vlscn{\vlone}}$. We can prove: {\small $$ \vlderivation{ \vlin{\rho}{} {\pincLTSJud{\pincNu{\pinca} {\pincE'} } {\pincNu{\pinca} {F'} } {\pincLabL'} }{ \vlhy{\pincLTSJud{\pincE'} {F'} {\pincLabL} }} } $$ \par\noindent because the premise holds thanks to the inductive argument. Of course we choose $\rho$, depending on $\atma$. If $\atma\equivb$, then $\rho$ must be $\pincpi$, and $\pincLabL'\equiv\pincLabT$. Otherwise, if $\atma\not\equivb$, then $\rho$ must be $\pincpe$, and $\pincLabL'\equiv\pincLabL$. \end{itemize} Point~\eqref{enumerate:Trivial derivations and rightcontext s-01} of this Proposition excludes any further case. \section{Proving point~\eqref{enumerate:Trivial derivations and rightcontext s-04} of \textit{Process structure s, \trivial\ derivation s and right-context s} (Proposition~\ref{proposition:Rightcontext s preserve communication}, page~\pageref{proposition:Rightcontext s preserve communication})} \label{section:Proof of Rightcontext s can preserve internal communication} The proof is by induction on the size of $ \pincPar{\pincE}{F}$, proceeding by cases on the forms of $\vlholer{S' \vlhole}$, and $\vlholer{S''\vlhole}$, which, by assumption, are process structure s, so they can assume only specific forms. \begin{itemize} \item The base case has $\vlholer{S'\vlhole} \approx \vlsbr<\vlhole;U'>$, and $\vlholer{S''\vlhole} \approx \vlsbr<\vlhole;U''>$, for some $U'$, and $U''$ every of which may well be $ \pincZer $. So, $\vlholer{S'\,\vlscn{\vlone}} \approx \vlsbr<\vlone;U'> \approx U'$, and $\vlholer{S''\,\vlscn{\vlone}} \approx \vlsbr<\vlone;U''> \approx U''$. The assumptions $\mapPincToDi{\pincE} =\vlsbr<b ;U' >$, and $\mapPincToDi{F} =\vlsbr<\vlne\atmb;U''>$, and $\mapPincToDi{\pincE'}=\vlsbr<\vlone;U' >\approxU'$, and $\mapPincToDi{F'}=\vlsbr<\vlone;U''>\approxU''$ imply that $\pincE =\pincSec{b }{\pincE' }$, and $F =\pincSec{{\overline{b}}}{\pincE'}$. 
We can write: {\small $$ \vlderivation{ \vliin{\pinccom}{} {\pincLTSJud{\pincPar{(\pincSec{b} {\pincE'}) } {(\pincSec{{\overline{b}}} {F'}) } } {\pincPar{\pincE'} {F'} } {\pincLabT} } \vlin{\pincact}{} {\pincLTSJud{\pincSec{b} {\pincE'}} {\pincE'} {b} }{\vlhy{}} } \vlin{\pincact}{} {\pincLTSJud{\pincSec{{\overline{b}}} {\pincE'}} {\pincE'} {{\overline{b}}} }{\vlhy{}} } } $$ We remark that neither $\vlholer{S'\vlhole} \approx \vlsbr<\vlholer{\breve{S}'\vlhole};U'>$ with $\vlholer{\breve{S}'\vlhole}\not\approx\vlhole$, nor $\vlholer{S'\vlhole} \approx \vlsbr<\vlholer{\breve{S}''\vlhole};U''>$ with $\vlholer{\breve{S}''\vlhole}\not\approx\vlhole$, can hold. Otherwise neither $\vlholer{S'\vlhole}$, nor neither $\vlholer{S''\vlhole}$ could be process structure s. \item Let $\vlholer{S'\vlhole} \approx \vlsbr[\vlholer{\breve{S}'\vlhole};U']$. So, $\vlholer{S'\,\vlscn{\vlone}} \approx \vlsbr[\breve{S}'\,\vlscn{\vlone};U']$. The assumptions $\mapPincToDi{\pincE } =\vlsbr[\vlholer{\breve{S}'\vlscn{b}};U']$, and $\mapPincToDi{\pincE'} =\vlsbr[\vlholer{\breve{S}'\vlscn{\vlone}};U']$ imply that $\pincE =\pincPar{G_1 }{G_2}$, and $\pincE' =\pincPar{G'_1}{G_2}$ such that\ $\mapPincToDi{G_1 } =\vlholer{\breve{S}'\vlscn{b}}$, and $\mapPincToDi{G'_1} =\vlholer{\breve{S}'\vlscn{\vlone}}$, and $\mapPincToDi{G_2} =U'$. \begin{itemize} \item Let $\vlholer{S''\vlhole} \approx \vlsbr[\vlholer{\breve{S}''\vlhole};U'']$. So, $\vlholer{S''\,\vlscn{\vlone}} \approx \vlsbr[\breve{S}''\,\vlscn{\vlone};U'']$. The assumptions $\mapPincToDi{F } =\vlsbr[\vlholer{\breve{S}''\vlscn{\vlne\atmb}};U'']$, and $\mapPincToDi{F'} =\vlsbr[\vlholer{\breve{S}''\vlscn{\vlone}};U'']$ imply that $F =\pincPar{H_1 }{H_2}$, and $F' =\pincPar{H'_1}{H_2}$ such that\ $\mapPincToDi{H_1 } =\vlholer{\breve{S}''\vlscn{\vlne\atmb}}$, and $\mapPincToDi{H'_1} =\vlholer{\breve{S}''\vlscn{\vlone}}$, and $\mapPincToDi{H_2} =U''$. We can prove: {\small $$ \vlderivation{ \vlin{\pinccntxp}{} { \pincLTSJud{\pincPar{G_1} {\pincPar{G_2} {\pincPar{H_1} {H_2}} } } {\pincPar{G'_1} {\pincPar{G_2} {\pincPar{H'_1} {H_2}} } } {\pincPreT} }{ \vlin{\pinccntxp}{} { \pincLTSJud{\pincPar{G_1} {\pincPar{H_1} {H_2}} } {\pincPar{G'_1} {\pincPar{H'_1} {H_2}} } {\pincPreT} }{ \vlhy{\pincLTSJud{\pincPar{G_1} {H_1} } {\pincPar{G'_1} {H'_1} } {\pincPreT}}}} } $$ The premise holds thanks to the inductive hypothesis because both $ \pincPar{G_1} {H_1}$ is smaller than $ \pincPar{G_1} {\pincPar{G_2} {\pincPar{H_1} {H_2}}}$. \item Let $\vlholer{S''\vlhole} \approx \vlsbr<\vlholer{\breve{S}''\vlhole};U''>$ with $\vlholer{\breve{S}''\vlhole}\approx\vlhole$. Otherwise $\vlholer{S''\vlhole}$ could not be a process structure. So, $\vlholer{S''\,\vlscn{\vlone}} \approx \vlsbr<\vlone;U''>\approxU''$. The assumptions $\mapPincToDi{F } =\vlsbr<\vlne\atmb;U''>$, and $\mapPincToDi{F'} =\vlsbr<\vlone;U''> \approx U''$ imply that $F =\pincSec{{\overline{b}}}{F'}$. We can prove: {\small $$\vlderivation{ \vlin{\pinccntxp}{} { \pincLTSJud{\pincPar{\pincPar{G_1} {G_2}} {(\pincSec{\vlne\atmb} {F'}) } } {\pincPar{\pincPar{G'_1} {G_2}} {F'} } {\pincPreT} }{ \vlhy{\pincLTSJud{\pincPar{G_1} {(\pincSec{\vlne\atmb} {F'}) } } {\pincPar{G'_1} {F'} } {\pincPreT}}} } $$ The premise holds thanks to the inductive hypothesis because $\pincPar{G_1} {(\pincSec{\vlne\atmb} {F'})}$ is smaller than $\pincPar{\pincPar{G_1} {G_2}} {(\pincSec{\vlne\atmb} {F'})}$. \item Let $\vlholer{S''\vlhole} \approx \vlfo{\atma}{\vlholer{\breve{S}''\vlhole}}$, for any $\atma$. 
So, $\vlholer{S''\,\vlscn{\vlone}} \approx \vlfo{\atma}{\vlholer{\breve{S}''\,\vlscn{\vlone}}}$. The assumptions $\mapPincToDi{F } =\vlfo{\atma}{\vlholer{\breve{S}''\vlscn{\vlne\atmb}}}$, and $\mapPincToDi{F'} =\vlfo{\atma}{\vlholer{\breve{S}''\vlscn{\vlone}}}$ imply that $F =\pincNu{b}{H }$, and $F' =\pincNu{b}{H'}$, for some $H$, and $H'$ such that $\mapPincToDi{H } =\vlholer{\breve{S}''\vlscn{\vlne\atmb}}$, and $\mapPincToDi{H'} =\vlholer{\breve{S}''\vlscn{\vlone}}$. We can prove: {\small $$ \vlderivation{ \vlin{\pinccntxp}{} {\pincLTSJud{\pincPar{\pincPar{G_1} {G_2}} {\pincNu{b}{(H)}} } {\pincPar{\pincPar{G'_1} {G_2}} {\pincNu{b}{(H')}} } {\pincPreT} }{ \vlhy{\pincLTSJud{\pincPar{G_1} {\pincNu{b}{(H)}} } {\pincPar{G'_1} {\pincNu{b}{(H')}} } {\pincPreT} }} } $$ \par\noindent The premise holds thanks to the inductive hypothesis because $\pincPar{G_1} {\pincNu{b}{(H)}}$ is smaller than $\pincPar{\pincPar{G_1} {G_2}} {\pincNu{b}{(H)}}$. \end{itemize} \item Let $\vlholer{S'\vlhole} \approx \vlsbr<\vlholer{\breve{S}'\vlhole};U'>$ with $\breve{S}'\vlhole\approx\vlhole$. Otherwise $\vlholer{S'\vlhole}$ could not be a process structure. So, $\vlholer{S'\,\vlscn{\vlone}} \approx \vlsbr<\vlone;U'>\approxU'$. The assumptions $\mapPincToDi{\pincE } =\vlsbr<b;U'>$, and $\mapPincToDi{\pincE'} =\vlsbr<\vlone;U'> \approx U''$ imply that $\pincE =\pincSec{b}{\pincE'}$. \begin{itemize} \item We already considered the case with $\vlholer{S''\vlhole} \approx \vlsbr[\vlholer{\breve{S}''\vlhole};U'']$. It is enough to switch $\vlholer{S'\vlhole}$ and $\vlholer{S''\vlhole}$. \item Letting $\vlholer{S''\vlhole} \approx \vlsbr<\vlholer{\breve{S}''\vlhole};U''>$, with $\breve{S}''\vlhole\approx\vlhole$, otherwise $\vlholer{S''\vlhole}$ could not be a process structure, becomes the base case, we started with. \item Let $\vlholer{S''\vlhole} \approx \vlfo{\atma}{\vlholer{\breve{S}''\vlhole}}$, for any $\atma$. So, $\vlholer{S''\,\vlscn{\vlone}} \approx \vlfo{\atma}{\vlholer{\breve{S}''\,\vlscn{\vlone}}}$ where, thanks to \eqref{align:PPi-structural-congruence}, we can always be in a situation such that $\atma$ is different from every element in $\strFN{\vlholer{S'\,\vlscn{b}}}$. The assumptions $\mapPincToDi{F } =\vlfo{\atma}{\vlholer{\breve{S}''\vlscn{\vlne\atmb}}}$, and $\mapPincToDi{F'} =\vlfo{\atma}{\vlholer{\breve{S}''\vlscn{\vlone}}}$ imply that $F =\pincNu{b}{H }$, and $F' =\pincNu{b}{H'}$, for some $H$, and $H'$ such that $\mapPincToDi{H } =\vlholer{\breve{S}''\vlscn{\vlne\atmb}}$, and $\mapPincToDi{H'} =\vlholer{\breve{S}''\vlscn{\vlone}}$. We can prove: {\small $$ \vlderivation{ \vlin{\rho}{} {\pincLTSJud{\pincPar{\pincNu{\atma} {(\pincSec{b} {\pincE'})} } {\pincNu{\pinca} {H} } } {\pincPar{\pincNu{\atma} {\pincE'}} {\pincNu{\pinca}{H'}} } {\pincPreT} }{ \vlhy{\pincLTSJud{\pincPar{\pincSec{b} {\pincE'} } {H} } {\pincPar{\pincE'} {H'} } {\pincPreT} }} } $$ \par\noindent where $\rho$ can be any between $\pincpi$, and $\pincpe$. The premise holds thanks to the inductive hypothesis because $\pincPar{\pincSec{b}{\pincE'}}{H}$ is smaller than $\pincPar{\pincNu{\atma}{(\pincSec{b}{\pincE'})}}{\pincNu{\pinca}{H}}$. \end{itemize} \item Let $\vlholer{S'\vlhole} \approx \vlfo{\atma}{\vlholer{\breve{S}'\vlhole}}$ for a given $\atma$. So, $\vlholer{S'\,\vlscn{\vlone}} \approx \vlfo{\atma}{\vlholer{\breve{S}'\,\vlscn{\vlone}}}$. 
The assumptions $\mapPincToDi{\pincE } =\vlfo{\atma}{\vlholer{\breve{S}'\vlscn{b}}}$, and $\mapPincToDi{\pincE'} =\vlfo{\atma}{\vlholer{\breve{S}'\vlscn{\vlone}}}$ imply that $\pincE =\pincNu{\pinca}{G }$, and $\pincE' =\pincNu{\pinca}{G'}$, for some $G$, and $G'$ such that $\mapPincToDi{G } =\vlholer{\breve{S}'\vlscn{b}}$, and $\mapPincToDi{G'} =\vlholer{\breve{S}'\vlscn{\vlone}}$. \begin{itemize} \item We already considered the case with $\vlholer{S''\vlhole} \approx \vlsbr[\vlholer{\breve{S}''\vlhole};U'']$. It is enough to switch $\vlholer{S'\vlhole}$ and $\vlholer{S''\vlhole}$. \item We already considered the case with $\vlholer{S''\vlhole} \approx \vlsbr<\vlholer{\breve{S}''\vlhole};U''>$. It is enough to switch $\vlholer{S'\vlhole}$ and $\vlholer{S''\vlhole}$. \item Let $\vlholer{S''\vlhole} \approx \vlfo{c}{\vlholer{\breve{S}''\vlhole}}$, for any $c$. So, $\vlholer{S''\,\vlscn{\vlone}} \approx \vlfo{c}{\vlholer{\breve{S}''\,\vlscn{\vlone}}}$. The assumptions $\mapPincToDi{F } =\vlfo{c}{\vlholer{\breve{S}''\vlscn{\vlne\atmb}}}$, and $\mapPincToDi{F'} =\vlfo{c}{\vlholer{\breve{S}''\vlscn{\vlone}}}$ imply that $F =\pincNu{c}{H }$, and $F' =\pincNu{c}{H'}$, for some $H$, and $H'$ such that $\mapPincToDi{H } =\vlholer{\breve{S}''\vlscn{\vlne\atmb}}$, and $\mapPincToDi{H'} =\vlholer{\breve{S}''\vlscn{\vlone}}$. We need to consider the following cases where (i) $\rho$ can be $\pincpi$, or $\pincpe$, and (ii) the premise of all the given derivations exists thanks to the inductive arguments we have used so far in this proof. \begin{itemize} \item As a first case let $\atma\equivc$, and $\atma,c\not\equivb$. We can prove: {\small $$ \vlderivation{ \vlin{\rho}{} { \pincLTSJud{\pincPar{\pincNu{\pinca} {G} } {\pincNu{\pinca} {H} } } {\pincPar{\pincNu{\pinca} {G'} } {\pincNu{\pinca} {H'} } } {\pincLabT} }{ \vlhy{ \pincLTSJud{\pincPar{G } {H } } {\pincPar{G' } {H' } } {\pincLabT} }}} $$ We can proceed in the same way also when $\atma,c\equivb$, the derivation becoming: {\small $$ \vlderivation{ \vlin{\rho}{} { \pincLTSJud{\pincPar{\pincNu{b} {G} } {\pincNu{b} {H} } } {\pincPar{\pincNu{b} {G'} } {\pincNu{b} {H'} } } {\pincLabT} }{ \vlhy{ \pincLTSJud{\pincPar{G } {H } } {\pincPar{G' } {H' } } {\pincLabT} }}} $$ \item As a third case let $\atma\equivb$, and $c\not\equivb$. we can prove: {\small $$ \vlderivation{ \vlin{\rho}{} {\pincLTSJud{\pincPar{\pincNu{b} {G}} {\pincNu{c} {H}} \pincCong \pincPar{\pincNu{d} {G\subst{d}{b}}} {\pincNu{d} {\pincNu{c} {H}}} } {\pincPar{\pincNu{d} {G'\subst{d}{b}}} {\pincNu{d} {\pincNu{c} {H'}}} \pincCong \pincPar{\pincNu{b} {G'}} {\pincNu{c} {H'}} } {\pincLabT} }{ \vlhy {\pincLTSJud{\pincPar{G\subst{d}{b}} {\pincNu{c}{H}} } {\pincPar{G'\subst{d}{b}} {\pincNu{c} {H'}} } {\pincLabT} }} } $$ \par\noindent where $ d $ neither occurs in $ G $, nor it occurs in $\pincNu{c}{H} $ so that we can apply \eqref{align:PPi-structural-congruence}. \end{itemize} \end{itemize} \end{itemize} \section{Proof of \textit{\textbf{Soundness w.r.t. external communication}} (Theorem~\ref{theorem:Soundness w.r.t. external communication}, page~\pageref{theorem:Soundness w.r.t. external communication})} \label{section:Proof of theorem:Soundness w.r.t. external communication} We proceed on the possible forms that $\mapPincToDi{\pincE}$ can assume, in relation with the form of $R$. Point~\eqref{enumerate:Trivial derivations and rightcontext s-03} of Proposition~\ref{proposition:Rightcontext s preserve communication} will help concluding. 
\begin{itemize} \item[\textbf{First case.}] We focus on $\bvtDder$ concluding with $\vlstore{\vlsbr<\vlne\atmb;R>} \vlsbr[\mapPincToDi{\pincE};\vlfo{b}{\vlread}]$. In the simplest case, Points~\eqref{enumerate:Trivial derivations and rightcontext s-01}, and \eqref{enumerate:Trivial derivations and rightcontext s-02} of Proposition~\ref{proposition:Rightcontext s preserve communication} imply that either $\vlstore{\vlholer{S'\,\vlscn{b}}} \mapPincToDi{\pincE}\approx \vlsbr[\vlfo{b}{\vlread};\mapPincToDi{\pincE''}]$, or $\vlstore{\vlsbr<b;\mapPincToDi{\pincE''}>} \mapPincToDi{\pincE}\approx \vlfo{b}{\vlread}$, for some $\pincE''$, and $\vlholer{S'\vlhole}$, such that $b\in\strFN{\vlholer{S'\,\vlscn{b}}}$. \begin{enumerate} \item Let $\vlstore{\vlsbr<b;\mapPincToDi{\pincE''}>} \mapPincToDi{\pincE}\approx\vlfo{b}{\vlread}$. So, $\pincE$ is $\pincNu{b}{(\pincSec{b}{\pincE''})}$. We can take $ G $ coinciding to $\pincE''$, because $\vlstore{\vlsbr<\vlone;\mapPincToDi{\pincE''}>} \vlfo{b}{\vlread} \approx \vlfo{b}{\mapPincToDi{\pincE''}}$. We can prove: {\small $$ \vlderivation { \vlin{\pincpi}{} {\pincLTSJud{\pincNu{b} {(\pincSec{b} {\pincE''})} } {\pincNu{b} {\pincE''} } {\pincLabT}} { \vlin{\pincact}{} {\pincLTSJud{\pincSec{b}{\pincE''}} {\pincE''} {b}}{ \vlhy{}}}} $$ \item Let $\vlstore{\vlholer{S'\,\vlscn{b}}} \mapPincToDi{\pincE}\approx \vlsbr[\vlfo{b}{\vlread};\mapPincToDi{\pincE''}]$. So, $ \pincE $ is $ \pincPar{\pincNu{b}{\pincE'}} {\pincE''} $ where $ \mapPincToDi{\pincE'} \approx \vlholer{S'\,\vlscn{b}}$. We can take $G$ as $\pincPar{\pincNu{b}{G'}}{\pincE''}$ where $\vlstore{\vlholer{S'\,\vlscn{\vlone}}} \mapPincToDi{G'} \approx \vlfo{b}{\vlread}$. We can prove: {\small $$ \vlderivation{ \vlin{\pinccntxp}{} {\pincLTSJud{\pincPar{\pincNu{b} {\pincE'}} {\pincE''} } {\pincPar{\pincNu{b} {G'}} {\pincE''} } {\pincPreT}} { \vlin{\pincpi}{} {\pincLTSJud{\pincNu{b}{\pincE'} } {\pincNu{b}{G'} } {\pincPreT}} { \vlhy{\pincLTSJud{\pincE' } {G' } {b}}}}} $$ Point~\eqref{enumerate:Trivial derivations and rightcontext s-03} of Proposition~\ref{proposition:Rightcontext s preserve communication} implies that the premise holds. \end{enumerate} In fact, the most general situations that Points~\eqref{enumerate:Trivial derivations and rightcontext s-01}, and \eqref{enumerate:Trivial derivations and rightcontext s-02} of Proposition~\ref{proposition:Rightcontext s preserve communication} imply are: {\small $$ \vlstore{ \vlsbr[\vlfo{\atma_1} {\vldots \vlfo{\atma_m} {\vlholer{S' \,\vlscn{b }}} \vldots} ;\mapPincToDi{\pincE'}] } \mapPincToDi{\pincE}\approx\vlread \qquad\qquad\qquad\qquad \vlstore{ \vlsbr<b ;\mapPincToDi{\pincE'}> } \mapPincToDi{\pincE}\approx \vlfo{\atma_1} {\vldots \vlfo{\atma_m} {\vlread} \vldots} $$ where $a_i \not\equiv a_j$, for every $1\leq i, j\leq m$, and $b\equiv a_i$, for some $1\leq i \leq m$. We can resume to the situation we have just developed in detail, by rearranging the occurrences of \textnormal{\textsf{Sdq}}\xspace, thanks to congruence \eqref{align:PPi-structural-congruence}. \item[\textbf{Second case.}] Let us assume that $\bvtDder$ concludes with $\vlstore{\vlsbr<\vlne\atmb;R'>}R\approx\vlread$. 
Points~\eqref{enumerate:Trivial derivations and rightcontext s-01}, and \eqref{enumerate:Trivial derivations and rightcontext s-02} of Proposition~\ref{proposition:Rightcontext s preserve communication} imply either $\vlstore{\vlsbr<b;\mapPincToDi{\pincE'}>} \mapPincToDi{\pincE}\approx\vlread$, or $\vlstore{\vlholer{S'\,\vlscn{b}}} \mapPincToDi{\pincE}\approx\vlsbr[\vlread;\mapPincToDi{\pincE'}]$, where $b\in\strFN{\vlholer{S'\,\vlscn{b}}}$. Both combinations are simple sub-cases of the previous ones, just developed in detail. \end{itemize} \section{Proof of \textit{\textbf{Soundness w.r.t.\xspace\ internal communication}} (Theorem~\ref{theorem:Soundness w.r.t. internal communication}, page~\pageref{theorem:Soundness w.r.t. internal communication})} \label{section:Proof of theorem:Soundness w.r.t. internal communication} \begin{itemize} \item As a base case, let $\mapPincToDi{\pincE}\approx \vlsbr[<b;\mapPincToDi{\pincE}'> ;<\vlne\atmb;\mapPincToDi{\pincE}''>]$, for some process $\pincE'$, and $\pincE''$. So, $\pincE$ is $ \pincPar{(\pincSec{b} {\pincE'})} {(\pincSec{{\overline{b}}} {\pincE''}) }$, and $\vlholer{S'\vlhole} \approx\vlsbr<\vlhole;\mapPincToDi{\pincE'}>$, and $\vlholer{S''\vlhole} \approx\vlsbr<\vlhole;\mapPincToDi{\pincE''}>$. We can take $G$ to be $ \pincPar{\pincE'}{\pincE''}$ because $ \vlsbr[<\vlone;\mapPincToDi{\pincE'}> ;<\vlone;\mapPincToDi{\pincE''}>] \approx \vlsbr[\mapPincToDi{\pincE}' ;\mapPincToDi{\pincE}'']$. We can write: {\small $$ \vlderivation{ \vliin{\pinccom}{} {\pincLTSJud{\pincPar{(\pincSec{b} {\pincE'}) } {(\pincSec{{\overline{b}}} {\pincE''}) } } {\pincPar{\pincE'} {\pincE''} } {\pincLabT} } \vlin{\pincact}{} {\pincLTSJud{\pincSec{b} {\pincE'} } {\pincE'} {b} }{\vlhy{}} } \vlin{\pincact}{} {\pincLTSJud{\pincSec{{\overline{b}}} {\pincE'}} {\pincE'} {{\overline{b}}} }{\vlhy{}} } } $$ \item Let $\vlstore{ \vlsbr[\vlfo{c} {\vlholer{S' \,\vlscn{b}}} ;\vlfo{c} {\vlholer{S''\,\vlscn{\vlne\atmb}}} ;\mapPincToDi{\pincE'''}] } \mapPincToDi{\pincE}\approx \vlread$, for some $\pincE'''$, and $c$. We remark that $c$ is either different from $b$ in both $\vlfo{c}{\vlholer{S' \,\vlscn{b}}}$, and $\vlfo{c}{\vlholer{S''\,\vlscn{\vlne\atmb}}}$, or it is equal to $ b $ in both of them. Otherwise, we could not get to the premise of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ in $\bvtDder'$. So, $\pincE$ is $ \pincPar{\pincNu{c}{\pincE'} } {\pincPar{\pincNu{c}{\pincE''}} {\pincE'''} }$, where $ \mapPincToDi{\pincE'} \approx \vlholer{S'\,\vlscn{b}} $, and $ \mapPincToDi{\pincE''} \approx \vlholer{S''\,\vlscn{\vlne\atmb}}$. We can take $ G $ as $ \pincPar{\pincNu{c}{G'} } {\pincPar{\pincNu{c}{G''}} {\pincE'''} }$, because $\vlstore{ \vlsbr[\vlfo{c}{\vlholer{S'\,\vlscn{\vlone}}} ;\vlfo{c}{\vlholer{S''\,\vlscn{\vlone}}} ;\mapPincToDi{\pincE'''}]} \mapPincToDi{G}\approx \vlread$, with $ \mapPincToDi{G'} \approx \vlholer{S'\,\vlscn{\vlone}} $, and $ \mapPincToDi{G''} \approx \vlholer{S''\,\vlscn{\vlone}}$. We can write: {\small $$ \vlderivation{ \vlin{\pinccntxp}{} {\pincLTSJud{\pincPar{\pincNu{c}{\pincE'} } {\pincPar{\pincNu{c} {\pincE''}} {\pincE'''} } } {\pincPar{\pincNu{c}{G'}} {\pincPar{\pincNu{c}{G''}} {\pincE'''} } } {\pincLabT} }{ \vlin{\rho}{} {\pincLTSJud{\pincPar{\pincNu{c}{\pincE'} } {\pincNu{c} {\pincE''} } } {\pincPar{\pincNu{c}{G'}} {\pincNu{c}{G''}} } {\pincLabT} }{ \vlhy{\pincLTSJud{\pincPar{\pincE' } {\pincE'' } } {\pincPar{G'} {G''} } {\pincLabT} }}} } $$ where $\rho$ can be $\pincpe$, or $\pincpi$. 
The premise follows from Point~\eqref{enumerate:Trivial derivations and rightcontext s-04} of Proposition~\ref{proposition:Rightcontext s preserve communication}. \item Let $\vlstore{ \vlsbr[\vlholer{S' \,\vlscn{b}} ;\vlholer{S''\,\vlscn{\vlne\atmb}} ;\mapPincToDi{\pincE'''}] } \mapPincToDi{\pincE}\approx \vlfo{c}{\vlread}$, for some $\pincE'''$, and $c$. So, $\pincE$ is $\pincNu{c}{(\pincPar{\pincE'} {\pincPar{\pincE''} {\pincE'''} })}$, where $ \mapPincToDi{\pincE'} \approx \vlholer{S'\,\vlscn{b}} $, and $ \mapPincToDi{\pincE''} \approx \vlholer{S''\,\vlscn{\vlne\atmb}}$. We can take $G $ as $\pincNu{c}{ (\pincPar{G'} {\pincPar{G''} {\pincE'''} })}$, because $\vlstore{ \vlsbr[\vlholer{S'\,\vlscn{\vlone}} ;\vlholer{S''\,\vlscn{\vlone}} ;\mapPincToDi{\pincE'''}]} \mapPincToDi{G}\approx \vlfo{c}{\vlread}$, with $ \mapPincToDi{G'} \approx \vlholer{S'\,\vlscn{\vlone}} $, and $ \mapPincToDi{G''} \approx \vlholer{S''\,\vlscn{\vlone}}$. We can write: {\small $$ \vlderivation{ \vlin{\rho}{} {\pincLTSJud{\pincNu{c}{(\pincPar{\pincE'} {\pincPar{\pincE''} {\pincE'''} }) } \approx \pincPar{\pincNu{c}{(\pincPar{\pincE'} {\pincPar{\pincE''} {\pincE'''} }) } } {\pincNu{c} {\pincZer}} } {\pincPar{\pincNu{c}{(\pincPar{G'} {\pincPar{G''} {\pincE'''} }) } } {\pincNu{c} {\pincZer}} \approx \pincNu{c}{(\pincPar{G'} {\pincPar{G''} {\pincE'''} }) } } {\pincLabT} }{ \vlin{\pinccntxp}{} {\pincLTSJud{\pincPar{\pincPar{\pincE'} {\pincE''}} {\pincZer} } {\pincPar{G'} {\pincPar{G'} {\pincZer}} } {\pincLabT} }{ \vlhy{\pincLTSJud{\pincPar{\pincE'} {\pincE''} } {\pincPar{G'} {G''} } {\pincLabT} }}} } $$ where $\rho$ can be $\pincpe$, or $\pincpi$. The premise follows from Point~\eqref{enumerate:Trivial derivations and rightcontext s-04} of Proposition~\ref{proposition:Rightcontext s preserve communication}. \par Of course, if $\mapPincToDi{\pincE} \approx \vlsbr[\vlholer{S' \,\vlscn{b }} ;\vlholer{S''\,\vlscn{\vlne\atmb}} ;\mapPincToDi{\pincE'''}]$, for some $\pincE'''$, we can proceed as here above, dropping $\rho$. \end{itemize} Assuming that $ (*) $ is the lowermost instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$} $ of $ \bvtDder $ excludes other cases that would impede getting to the premise of $(*)$ itself in a \trivial\ derivation\ like $ \bvtDder'$ has to be. \section{Intermezzo} \label{section:Communication core of BVT} We keep the content of this section at an intuitive level. We describe how structures of $ \BV\mathsf{Q}\xspace $ model terms in a language whose syntax is not formally identified yet, but which is related to the one of Milner $ \mathsf{CCS}\xspace $. \begin{example}[\textbf{\textit{Modeling internal communication inside $\BV\mathsf{Q}\xspace$}}] \label{example:Modeling internal communication inside BVT} Derivations of $\BV\mathsf{Q}\xspace$ model internal communication if we look at structures of $\BV\mathsf{Q}\xspace$ as they were terms of Milner $\mathsf{CCS}\xspace$, as in \cite{Brus:02:A-Purely:wd}. Let us focus on \eqref{equation:PPi-internal-interaction-example-ll} here below. 
\par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.487\textwidth} \begin{equation} \label{equation:PPi-internal-interaction-example-ll} \input{./PPi-internal-interaction-example-ll} \end{equation} \end{minipage} \begin{minipage}{.487\textwidth} \begin{equation} \label{equation:PPi-internal-interaction-example-rr} \input{./PPi-internal-interaction-example-rr} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent The instance of $\mbox{$\mathsf{q}\!\!\downarrow$}$ moves the atoms $\atma$, and $\natma$ next to each other, and $\mbox{$\mathsf{ai}\!\!\downarrow$}$ annihilates them. Annihilation can be seen as an internal communication between the two components $\vlsbr<\atmaColor{Red};\pincE>$, and $\vlsbr<\natmaColor{Blue};F>$ of the structure $\vlsbr[<\atmaColor{Red};\pincE>;<\natmaColor{Blue};F>]$. The usual way to formalize such an internal communication is \eqref{equation:PPi-internal-interaction-example-rr}, a derivation that belongs to the labeled transition system\ of Milner $\mathsf{CCS}\xspace$. Sequential composition in \eqref{equation:PPi-internal-interaction-example-rr} corresponds to \textnormal{\textsf{Seq}}\xspace, parallel composition to \textnormal{\textsf{Par}}\xspace, and both $\pincE$, and $F$ in \eqref{equation:PPi-internal-interaction-example-ll} are represented by the corresponding processes $\pincE$, and $F$ in \eqref{equation:PPi-internal-interaction-example-rr}. \end{example} \begin{example}[\textbf{\textit{Modeling external communication inside $\BV\mathsf{Q}\xspace$}}] \label{example:Modeling external communication inside BVT} Derivations of $\BV\mathsf{Q}\xspace$ model external communication if we look at structures of $\BV\mathsf{Q}\xspace$ as if they were terms of Milner $\mathsf{CCS}\xspace$, as in \cite{Brus:02:A-Purely:wd}. Let us focus on \eqref{equation:PPi-external-interaction-example-ll} here below. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.487\textwidth} \begin{equation} \label{equation:PPi-external-interaction-example-ll} \input{./PPi-external-interaction-example-ll} \end{equation} \end{minipage} \begin{minipage}{.487\textwidth} \begin{equation} \label{equation:PPi-external-interaction-example-rr} \input{./PPi-external-interaction-example-rr} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent We look at $\vlsbr[<\atmaColor{Red};\pincE>;\natmaColor{Blue}]$ as containing two sub-structures with different meanings. The structure $\vlsbr<\atmaColor{Red};\pincE>$ corresponds to the process $\pincSec{\atma}{\pincE}$. Instead, $\natmaColor{Blue}$ can be seen as an action of the context ``around'' $\vlsbr<\atmaColor{Red};\pincE>$. This means that \eqref{equation:PPi-external-interaction-example-ll} formalizes the Milner $\mathsf{CCS}\xspace$ derivation \eqref{equation:PPi-external-interaction-example-rr}. \end{example} \begin{remark}[\textbf{\textit{``Processes'', and ``contexts'' are first-class citizens}}] \label{remark:Processes and contexts are first-citizens} The structure $\vlsbr[<\atmaColor{Red};\pincE>;\natmaColor{Blue}]$ is equivalent to $\vlsbr [<\atmaColor{Red};\pincE> ;<\natmaColor{Blue};\vlone>]$ in \eqref{equation:PPi-external-interaction-example-ll}. This highlights a first difference between modeling the communication by means of (a sub-system of) $\BV\mathsf{Q}\xspace$ and modeling it with Milner $\mathsf{CCS}\xspace$. The latter constantly separates terms from the contexts they interact with.
Instead, the structures of $\BV\mathsf{Q}\xspace$ make no such distinction, and represent contexts as first-class citizens. Namely, choosing which structures are the ``real processes'', and which are the ``contexts'' is, to some extent, only a matter of taste. Specifically, in our case, we could have said that $ \vlsbr{<\natmaColor{Blue};\vlone>} $ represents the process $ \pincSec{\pincna}{\pincZer}$, rather than the context. \end{remark} \begin{example}[\textbf{\textit{Hiding communication}}] \label{example:Hiding communication} Derivations in $\BV\mathsf{Q}\xspace$ model hidden communications of Milner $\mathsf{CCS}\xspace$ thanks to \textnormal{\textsf{Sdq}}\xspace. So, we strictly extend the correspondence between a DI\xspace system and Milner $ \mathsf{CCS}\xspace $, as given in \cite{Brus:02:A-Purely:wd}. We build on Example~\ref{example:Modeling external communication inside BVT}, placing an instance of \textnormal{\textsf{Sdq}}\xspace around each of the two components of $\vlsbr[<\atmaColor{Red};\pincE>;\natmaColor{Blue}]$ in \eqref{equation:PPi-external-interaction-example-ll}. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.487\textwidth} \begin{equation} \label{equation:PPi-external2internal-interaction-example-ll} \input{./PPi-external2internal-interaction-example-ll} \end{equation} \end{minipage} \begin{minipage}{.487\textwidth} \begin{equation} \label{equation:PPi-external2internal-interaction-example-rr} \input{./PPi-external2internal-interaction-example-rr} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent We can look at \textnormal{\textsf{Sdq}}\xspace, which binds $\atmaColor{Red}$, and $\natmaColor{Blue}$, as restricting the visibility of the communication. The derivation in the labeled transition system\ of Milner $\mathsf{CCS}\xspace$ that models \eqref{equation:PPi-external2internal-interaction-example-ll} is \eqref{equation:PPi-external2internal-interaction-example-rr}. \end{example} \begin{example}[\textbf{\textit{More freedom inside $\BV\mathsf{Q}\xspace$}}] \label{example:Escaping close correspondence with CCS} Inside $\vlsbr [<\atmaColor{Red};\pincE> ;<\natmaColor{Blue};<\natmbColor{Blue};<\natmcColor{Blue};F>>> ;<\atmbColor{Red};<\atmcColor{Red};\vlneF>>]$ of \eqref{equation:PPi-invertible-interaction-example-ll}, among others, we can identify the ``processes'' $G_1\equiv \vlsbr<\atmaColor{Red};\pincE>$, $G_2\equiv \vlsbr<\natmaColor{Blue};<\natmbColor{Blue};<\natmcColor{Blue};F>>>$, $G_3\equiv \vlsbr<\natmbColor{Blue};<\natmcColor{Blue};F>>$, and $G_4\equiv \vlsbr<\atmbColor{Red};<\atmcColor{Red};\vlneF>>$: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:PPi-invertible-interaction-example-ll} \input{./PPi-invertible-interaction-example-ll} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent The lowermost instance of $\mbox{$\mathsf{q}\!\!\downarrow$}$ predisposes $G_1$, and $G_2$ to an interaction through $\atmaColor{Red}$, and $\natmaColor{Blue}$. However, only the instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ makes the interaction effective. Before that, the instance of $\mbox{$\mathsf{i}\!\!\downarrow$}$ identifies $G_4$ as the negation of $G_3$, and annihilates them as a whole.
So, \eqref{equation:PPi-invertible-interaction-example-ll} suggests that modeling process computations inside $\BV\mathsf{Q}\xspace$ may result more flexible than usual, because it introduces a notion of ``negation of a process'' which sounds as a higher-order ingredient of proof-search-as-computation. \end{example} \section{Soundness of ${\BV\mathsf{Q}\llcorner}\xspace$ w.r.t.\xspace\ $ \CCSR $} \label{section:Soundness of BVTCC} The goal is proving Soundness whose formal statement is in Theorem~\eqref{equation:PPi-soundness-example-00} below. We remark that our statement generalizes the one in \cite{Brus:02:A-Purely:wd}, and our proof pinpoints many of the details missing in \cite{Brus:02:A-Purely:wd}. \par Soundness relies on the notions ``reduction of a \nontrivial\ derivation'', and ``environment structure s that are consumed'', and needs some technical lemma. \paragraph{Reduction of non-\trivial\xspace, and \standard\ derivation s of ${\BV\mathsf{Q}\llcorner}\xspace$.} Let $R$, and $T$ be process structure s. Let $\bvtDder$ be a non-\trivial\xspace, and standard\xspace\ derivation ${\small \vlderivation{ \vlde{\bvtDder'}{{\BV\mathsf{Q}\llcorner}\xspace} {R}{ \vlin{\mathsf{at}\downarrow\llcorner}{(*)} {\vlholer{S\vlsbr[\atma;\natma]}}{ \vlde{\bvtDder''}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlholer{S\vlsbr[\vlone]}}{ \vlhy{T}}}}} }$, where $(*)$ is the lowermost occurrence of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ in $\bvtDder$. The \dfn{reduction of $\bvtDder$} is the derivation $\mathcal{E}$ of rules of ${\BV\mathsf{Q}\llcorner}\xspace$ that we get from $\bvtDder$ by (i) replacing $\vlone$ for all occurrences of $\atma$, and $\natma$ in $\bvtDder'$ that, eventually, form the redex of $(*)$, and by (ii) eliminating all the fake instances of rules that the previous step may have created. \begin{fact}[\textit{\textbf{Reduction preserves process structure s}}] \label{fact:Reduction preserve process structures} Let $R$, and $T$ be process structure s. For every non-\trivial\xspace, and standard\xspace\ derivation $\bvtInfer{\bvtDder} {\, T \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace}{} R}$, its reduction $\bvtInfer{\mathcal{E}} {\, T' \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace}{} R'}$ is such that both $R'$, and $T'$ are process structure s. Moreover, $\mathcal{E}$ may not be non-\trivial\xspace, namely, no $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ may remain in $\mathcal{E}$. However, if $\mathcal{E}$ is non-\trivial\xspace, then it is standard\xspace. \end{fact} \begin{proof} The first statement follows from the definition of process structure s. If we erase any sub-structure from a given process structure, we still get a process structure\ which, at least, is $\vlone$. Moreover, the lowermost instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ disappears, after a reduction. So, if it was the only one, none remains. Finally, reduction does not alter the order of rules in $\bvtDder$. \end{proof} \begin{fact}[\textit{\textbf{Preserving right-context s}}] \label{fact:Preserving rightcontext s} Let $\bvtDder$ be a trivial derivation $\vlstore{ \vlsbr[\pincE;R] } \bvtInfer{\bvtDder} {\, S'\,\vlscn{\atma} \bvtJudGen{\Set{\mathsf{q}\downarrow,\mathsf{u}\downarrow}}{} S\vlscn{\atma}}$, for some $S\vlhole, S'\vlhole$, and $ \atma $. \begin{enumerate} \item \label{enumerate:Preserving rightcontext s-00} If $S\vlscn{\atma}$ is not a right-context, then $S'\,\vlscn{\atma}$ cannot be a right-context\ as well. 
\item \label{enumerate:Preserving rightcontext s-01} If $S'\,\vlscn{\atma}$ is a right-context, then $S\,\vlscn{\atma}$ is a right-context\ as well. \end{enumerate} \end{fact} \begin{proof} \begin{enumerate} \item If $S\vlscn{\atma}$ is not a right-context, then it has form $S\vlscn{\atma}\approxS_0\,\vlsbr<R;S_1\,\vlscn{\atma}>$, with $R\not\approx\vlone$, for some $S_0\vlhole$, and $S_1\vlhole$. \textnormal{\textsf{Seq}}\xspace is non commutative. So, going upward in $\bvtDder$, there is no hope to transform $S_0\,\vlsbr<R;S_1\,\vlscn{\atma}>$ into some $\vlstore{S'_0\,\vlsbr<\vlholer{S'_1\,\vlscn{\atma}};R'>} \vlholer{\vlread}$ where the occurrence of $ \atma $ in the first structure is the same occurrence as $ \atma $ in the second one. Moreover, $\vlinf{}{}{\vlsbr<R;T>}{\vlsbr[R;T]}$ is not derivable in $\Set{\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}\subset \BV\mathsf{Q}\xspace $. So, $S_0\vlsbr<R;S_1\,\vlscn{\atma}>$ cannot transform into some $\vlstore{S'_0\,\vlsbr[R';\vlholer{S'_1\,\vlscn{\atma}}]} \vlholer{\vlread}$, going upward in $\bvtDder$. \item By contraposition of the previous point~\eqref{enumerate:Preserving rightcontext s-00}. \end{enumerate} \end{proof} \begin{proposition}[\textbf{\textit{Process structure s, \trivial\ derivation s, and right-context s}}] \label{proposition:Rightcontext s preserve communication} \label{fact:Trivial derivations and rightcontext s} Let $R$ be a process structure, and $\bvtDder$ be a \trivial\ derivation\ $\vlstore{ \vlholer{S\vlsbr[b;\vlne\atmb]} } \bvtInfer{\bvtDder} {\, \vlread \bvtJudGen{\Set{\mathsf{q}\downarrow,\mathsf{u}\downarrow}}{} R}$, for some $\vlholer{S\vlhole}, b$, and $\vlne\atmb$. Then: \begin{enumerate} \item \label{enumerate:Trivial derivations and rightcontext s-10} $R\not\approx\vlone$, and both $b, \vlne\atmb$ occur in it. \item \label{enumerate:Trivial derivations and rightcontext s-00} The structure $R$ is a right-context\ for both $b$, and $\vlne\atmb$. Namely, $R\approx\vlholer{S'\,\vlscn{b }}$, and $R\approx\vlholer{S''\,\vlscn{\vlne\atmb}}$ for some $\vlholer{S' \vlhole}$, and $\vlholer{S''\vlhole}$. \item \label{enumerate:Trivial derivations and rightcontext s-01} $R\not\approx\vlsbr\tilde{S}'<\pincalpha;\breve{S}' \,\vlscn{b }>$, and $R\not\approx\vlsbr\tilde{S}''<\pincalpha;\breve{S}''\,\vlscn{\vlne\atmb}>$, for any $\tilde{S}'\vlhole, \tilde{S}''\vlhole, \breve{S}'\vlhole$, and $\breve{S}''\vlhole$. \item \label{enumerate:Trivial derivations and rightcontext s-02} $\vlstore{ \vlsbr[\vlholer{S' \,\vlscn{b}} ;\vlfo{b}{\vlholer{S''\,\vlscn{\vlne\atmb}}} ;T]} R\not\approx\vlread$, with $b\in\strFN{\vlholer{S' \,\vlscn{b}}}$, and $\vlstore{ \vlsbr[\vlfo{b}{\vlholer{S' \,\vlscn{b}}} ;\vlholer{S''\,\vlscn{\vlne\atmb}} ;T]} R\not\approx\vlread$, with $\vlne\atmb\in\strFN{\vlholer{S'' \,\vlscn{\vlne\atmb}}}$, for any $\vlholer{S'\vlhole}, \vlholer{S''\vlhole}$, and process structure\ $T$. \item \label{enumerate:Trivial derivations and rightcontext s-02'} Let $\vec{\atma}$ be a, possibly empty, sequence of names. Let $T$ be a process structure, possibly such that $T\approx\vlone$. Then $\vlstore{ \vlsbr[\vlholer{S' \,\vlscn{b }} ;\vlholer{S''\,\vlscn{\vlne\atmb}} ;T] } R\approx\vlfo{\vec{\atma}}{\vlread}$ such that either (i) $b\in\strFN{\vlholer{S' \,\vlscn{b }}}$, and $\vlne\atmb\in\strFN{\vlholer{S''\,\vlscn{\vlne\atmb}}}$, or (ii) $b\in\strBN{\vlholer{S' \,\vlscn{b }}}$, and $\vlne\atmb\in\strBN{\vlholer{S''\,\vlscn{\vlne\atmb}}}$. 
\item \label{enumerate:Trivial derivations and rightcontext s-03} Let $\vlholer{S'\,\vlscn{b}}$ be the one in Point~\eqref{enumerate:Trivial derivations and rightcontext s-02'} here above. If $ \pincE $, and $ F $ are processes such that\ $\mapPincToDi{\pincE}=\vlholer{S'\,\vlscn{b}}$, and $\mapPincToDi{F}=\vlholer{S'\,\vlscn{\vlone}}$, then $\pincLTSJud{\pincE} {F} {\pincLabL}$, where $\pincLabL$ is $\pincLabT$, if $b\in\strBN{\vlholer{S'\,\vlscn{b}}}$, and $\pincLabL$ is $b$, if $b\in\strFN{\vlholer{S'\,\vlscn{b}}}$. The same holds by replacing $\vlholer{S''\vlhole}$ for $\vlholer{S'\vlhole}$, and $\vlne\atmb$ for $b$. \item \label{enumerate:Trivial derivations and rightcontext s-04} Let $\vlholer{S'\,\vlscn{b}}$, and $\vlholer{S''\,\vlscn{b}}$ be the ones in Point~\eqref{enumerate:Trivial derivations and rightcontext s-02'} here above. If $ \pincE, F, \pincE'$, and $F'$ are processes such that\ $\mapPincToDi{\pincE} =\vlholer{S'\,\vlscn{b}}, \mapPincToDi{F} =\vlholer{S''\,\vlscn{\vlne\atmb}}, \mapPincToDi{\pincE'}=\vlholer{S'\,\vlscn{\vlone}}$, and $\mapPincToDi{F'}=\vlholer{S''\,\vlscn{\vlone}}$, then $\pincLTSJud{\pincPar{\pincE} {F}} {\pincPar{\pincE'} {F'}} {\pincPreT}$. \end{enumerate} \end{proposition} \begin{proof} Concerning point~\eqref{enumerate:Trivial derivations and rightcontext s-10}, since no rule of $\bvtDder$ generates atoms both $b$, and $\vlne\atmb$ must already occur in $R$. \pa Concerning point~\eqref{enumerate:Trivial derivations and rightcontext s-00}, we start from point~\eqref{enumerate:Trivial derivations and rightcontext s-10}, and we look at $\vlstore{\vlsbr[b;\vlne\atmb]}\vlholer{S\vlread}$ by first ``hiding'' $b$, which gives $\vlholer{S_0\,\vlscn{b}} \equiv \vlstore{\vlsbr[b;\vlne\atmb]} \vlholer{S\vlread}$, for some $\vlholer{S_0\,\vlhole}$, and then ``hiding'' $\vlne\atmb$ yielding $\vlholer{S_1\,\vlscn{\vlne\atmb}} \equiv \vlstore{\vlsbr[b;\vlne\atmb]} \vlholer{S\vlread}$, for some $\vlholer{S_1\,\vlhole}$. Then, we apply point~\eqref{enumerate:Preserving rightcontext s-01} of Fact~\ref{fact:Preserving rightcontext s} to $\vlholer{S_0\,\vlscn{b}}$. It implies that $R\approxS'\,\vlscn{b }$ is a right-context, for some $S'\vlhole$. Analogously, point~\eqref{enumerate:Preserving rightcontext s-01} on Fact~\ref{fact:Preserving rightcontext s} to $\vlholer{S_1\,\vlscn{\vlne\atmb}}$ implies that $R\approxS''\,\vlscn{\vlne\atmb}$ is a right-context, for some $S''\vlhole$. \pa Point~\eqref{enumerate:Trivial derivations and rightcontext s-01}, directly follows from point~\eqref{enumerate:Trivial derivations and rightcontext s-00}. \pa Point~\eqref{enumerate:Trivial derivations and rightcontext s-02} holds because, for example, $b$ cannot enter the scope of $\vlfo{b}{\vlholer{S''\,\vlscn{\vlne\atmb}}}$. \pa Point~\eqref{enumerate:Trivial derivations and rightcontext s-02'} follows from \eqref{enumerate:Trivial derivations and rightcontext s-02}. \pa Point~\eqref{enumerate:Trivial derivations and rightcontext s-03} holds by proceeding inductively on $\Size{\pincE}$, and by cases on the form of $\vlholer{S'\vlhole}$, or $\vlholer{S''\vlhole}$, respectively. (Details, relative to $\vlholer{S'\vlhole}$, in Appendix~\ref{section:Proof of Rightcontext s can preserve external communication}.) 
\pa Point~\eqref{enumerate:Trivial derivations and rightcontext s-04} holds thanks to points~\eqref{enumerate:Trivial derivations and rightcontext s-02}, and~\eqref{enumerate:Trivial derivations and rightcontext s-03}, by proceeding inductively on $\Size{\pincPar{\pincE}{F}}$, and by cases on the form of $\vlholer{S'\vlhole}$, and $\vlholer{S''\vlhole}$. (Details in Appendix~\ref{section:Proof of Rightcontext s can preserve internal communication}.) \end{proof} \par The coming theorem says that the absence of interactions, as in a \trivial\ derivation, models non-interacting transitions inside the labeled transition system\ of $ \CCSR $. We include proof details here, and not in an Appendix, because this proof supplies the simplest technical account of what we shall do for proving soundness. \begin{theorem}[\textit{\textbf{\Trivial\ derivation s model empty computations in labeled transition system}}] \label{theorem:Soundness w.r.t. trivial deductions} Let $\pincE$, and $F$ be processes, with $F$ simple\xspace. If $\vlstore{\mapPincToDi{\pincE}} \bvtInfer{\bvtDder} {\, \mapPincToDi{F} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$ is trivial --- beware, not necessarily in ${\BV\mathsf{Q}\llcorner}\xspace$ ---, then $\pincLTSJud{\pincE} {F} {\pincPreT}$. \end{theorem} \begin{proof} Fact~\ref{fact:Trivial derivations preserve normal structures} implies that $\mapPincToDi{\pincE}$ is simple\xspace, like $\mapPincToDi{F}$ is, and that $\bvtDder$ can only contain instances of $\mbox{$\mathsf{u}\!\!\downarrow$}$, if any rule occurs. We proceed by induction on the number $n$ of instances of $\mbox{$\mathsf{u}\!\!\downarrow$}$ in $\bvtDder$. \par If $n=0$, then necessarily $\mapPincToDi{\pincE}\equiv\mapPincToDi{F}$. We conclude by $\pincrefl$, i.e.\xspace $\pincLTSJud{\pincE} {\pincE} {\pincLabT}$. Otherwise, the last rule of $\bvtDder$ is: {\small $$ \vlinf{\mathsf{u}\downarrow}{} {S\vlsbr[\vlfo{\atma}{\mapPincToDi{\pincE'}} ;\vlfo{\atma}{\mapPincToDi{\pincE''}}]} {S\vlfo{\atma}{\vlsbr[\mapPincToDi{\pincE'} ;\mapPincToDi{\pincE''}]}} $$ for some context $S\vlhole$, and processes $\pincE'$, and $\pincE''$, such that $\mapPincToDi{\pincE} \approx S\vlsbr[\vlfo{\atma}{\mapPincToDi{\pincE'}} ;\vlfo{\atma}{\mapPincToDi{\pincE''}}]$. We can proceed by cases on the form of $S\vlhole$. \begin{itemize} \item Let $S\vlhole\approx\vlhole$. So, $\pincE$ must be $\pincPar{\pincNu{\pinca}{\pincE'}} {\pincNu{\pinca}{\pincE''}}$, and we can write: {\small $$ \vlderivation{ \vliin{\pinctran}{} {\pincLTSJud{\pincPar{\pincNu{\pinca} {\pincE'}} {\pincNu{\pinca} {\pincE''}} } {F} {\pincLabT} } \vlin{\pincpi}{} {\pincLTSJud{\pincPar{\pincNu{\pinca} {\pincE'}} {\pincNu{\pinca} {\pincE''}} } {\pincPar{\pincNu{\pinca} {(\pincPar{\pincE'} {\pincE''})}} {\pincNu{\pinca} {\pincZer} } \pincCong \pincNu{\pinca} {(\pincPar{\pincE'} {\pincE''}) } } {\pincLabT} }{ \vlin{\pincrefl}{} {\pincLTSJud{\pincPar{\pincE'} {\pincE''} } {\pincPar{\pincE'} {\pincE''} \pincCong \pincPar{\pincPar{\pincE'} {\pincE''}} {\pincZer} } {\pincLabT} }{ }{}} } \vlhy{\pincLTSJud{\pincNu{\pinca} {(\pincPar{\pincE'} {\pincE''}) } } {F} {\pincLabT} } } }$$ where $\pincLTSJud{\pincNu{\pinca} {(\pincPar{\pincE'} {\pincE''})}} {F} {\pincLabT}$ holds by induction because $\vlstore{\vlfo{\atma}{\vlsbr[\mapPincToDi{\pincE'} ;\mapPincToDi{\pincE''}]}} \mapPincToDi{F}\bvtJudGen{\Set{\mathsf{u}\downarrow}}{} \vlread$ is shorter than $\bvtDder$. \item Let $S\vlhole\approx\vlsbr[\vlhole;T]$.
So, $\pincE$ must be $\pincPar{\pincPar{\pincNu{\pinca}{\pincE'}} {\pincNu{\pinca}{\pincE''}}} {F'} $, with $\mapPincToDi{F'}=T$. The case is analogous to the previous one, with the proviso that an instance of $\pinccntxp$ must precede the instance of $\pincpi$. In particular, $\pincLTSJud{\pincPar{\pincNu{\pinca} {(\pincPar{\pincE'} {\pincE''}) } } {F'} } {F} {\pincLabT}$ holds by induction because $\vlstore{\vlsbr[\vlfo{\atma}{\vlsbr[\mapPincToDi{\pincE'} ;\mapPincToDi{\pincE''}]} ;\mapPincToDi{F'}]} \mapPincToDi{F}\bvtJudGen{\Set{\mathsf{u}\downarrow}}{} \vlread$ is shorter than $\bvtDder$. \end{itemize} \par The third case $\vlstore{\vlsbr<\pincLabL;\vlhole>} S\vlhole\approx\vlread$ that we could obtain by assuming $\pincE = \pincSec{\pincLabL} {\pincE'}$ cannot occur because $\pincE$ would not be simple\xspace, against our assumptions. \end{proof} \begin{remark}[\textbf{\textit{Why do we define \simple\ structure s as such?}}] \label{remark:Why process structures include normal ones} Theorem~\ref{theorem:Soundness w.r.t. trivial deductions} would not hold if we used ``process structure s'' in place of ``\simple\ structure s''. Let us pretend, for a moment, that $ F $ is any process structure, and not necessarily a simple\xspace\ one. The bottommost rule in $ \bvtDder $ might well be: {\small $$ \vlinf{\mathsf{q}\downarrow}{} {\vlsbr[\mapPincToDi{\pincE'} ;<\mapPincToDi{\pincLabL} ;\mapPincToDi{\pincE''}>]} {\vlsbr<\mapPincToDi{\pincLabL} ;[\mapPincToDi{\pincE'} ;\mapPincToDi{\pincE''}]>} $$ for some $\pincE'$, and $\pincE''$, such that $\pincE = \pincPar{\pincE'} {(\pincSec{\pincLabL} {\pincE''})}$. By induction, $\pincLTSJud{\pincSec{\pincLabL} {(\pincPar{\pincE'} {\pincE''})} } {F} {\pincLabT}$. However, in the labeled transition system~\eqref{equation:PPi-LTS-from-BVT} of $\CCSR$ we cannot deduce $\pincLTSJud{\pincPar{\pincE'} {(\pincSec{\pincLabL} {\pincE''})} } {\pincSec{\pincLabL} {(\pincPar{\pincE'} {\pincE''})}} {\pincLabT}$ whenever $ \pincLabL $ occurs free in $ \pincE' $. So, as we did in the definition of simple process es, we must eliminate any occurrence of a \textnormal{\textsf{Seq}}\xspace structure. \end{remark} \begin{theorem}[\textbf{\textit{Soundness w.r.t.\xspace\ internal communication}}] \label{theorem:Soundness w.r.t. internal communication} Let $\pincE$, and $F$ be processes, with $F$ simple\xspace, and $\pincE\not\approx\vlone$. Let $\bvtDder$ be the derivation {\small $\vlderivation{ \vlde{\bvtDder'}{{\BV\mathsf{Q}\llcorner}\xspace} {\mapPincToDi{\pincE}} { \vlin{\mathsf{at}\downarrow\llcorner}{(*)} {\vlholer{ S\vlsbr[b;\vlne\atmb] } } { \vlde{\bvtDder''}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlholer{ S\vlscn{\vlone} } } { \vlhy{\mapPincToDi{F}}}}} } $\, which, besides being standard\xspace, we assume to be non-\trivial\xspace, and such that\ $(*)$ is its lowermost instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. If, for some process $G$, the derivation $\bvtInfer{\mathcal{E}} {\,\mapPincToDi{F} \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace}{} \mapPincToDi{G}}$ is the reduction of $\bvtDder$, then $\pincLTSJud{\pincE} {G} {\pincLabT}$.
\end{theorem} \begin{proof} The derivation $\bvtDder'$ satisfies the assumptions of Point~\eqref{enumerate:Trivial derivations and rightcontext s-00} in Proposition~\ref{fact:Trivial derivations and rightcontext s}, which implies $\mapPincToDi{\pincE}\approx\vlholer{S' \,\vlscn{b}}$, and $\mapPincToDi{\pincE}\approx\vlholer{S''\,\vlscn{\vlne\atmb}}$, for some $ \vlholer{S'\vlhole} $, and $ \vlholer{S''\vlhole} $, such that both $\vlholer{S' \,\vlscn{b}}$, and $\vlholer{S''\,\vlscn{\vlne\atmb}}$ must be process structure s. We proceed by cases on the possible forms that $\mapPincToDi{\pincE}$ can assume. Point~\eqref{enumerate:Trivial derivations and rightcontext s-04} of Proposition~\ref{proposition:Rightcontext s preserve communication} will help us conclude. (Details in Appendix~\ref{section:Proof of theorem:Soundness w.r.t. internal communication}.) \end{proof} \paragraph{Environment structure s that get consumed.} Let $T$, and $U$ be process structure s, and $R$ be an environment structure. Let $\vlstore{ \vlsbr[T;R] } \bvtInfer{\bvtDder} {\, U \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace}{} \vlread}$ which, since it belongs to ${\BV\mathsf{Q}\llcorner}\xspace$, is standard\xspace. We say that $\bvtDder$ \dfn{consumes $R$} if every atom of $R$ eventually annihilates with an atom of $T$ thanks to an instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, so that none of them occurs in $U$. \begin{example}[\textbf{\textit{Consuming environment structures}}] Derivations~\eqref{equation:tracing-sequential-interactions-01}, and \eqref{equation:tracing-sequential-interactions-00} consume the environment structure $\vlsbr<\natmaColor{Red};\atmbColor{Blue}>$ that occurs in their conclusion. If we consider only a part of \eqref{equation:tracing-sequential-interactions-01}, as here below, we get a standard derivation that does not consume $\vlsbr<\natmaColor{Red};\atmbColor{Blue}>$: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:BV2-consuming-environment-structures} \input{./BV2-consuming-environment-structures} \end{equation} \end{minipage} \end{example} \begin{theorem}[\textbf{\textit{Soundness w.r.t. external communication}}] \label{theorem:Soundness w.r.t. external communication} Let $\pincE$, and $F$ be processes, and $R$ be an environment structure. Let $F$ be simple\xspace, and $\pincE\not\approx\vlone$. Let $\bvtDder$ be a non-\trivial\xspace, and \standard\ derivation\ that takes one of the following two forms: {\small $$\vlderivation{ \vlde{\bvtDder'}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlsbr[\mapPincToDi{\pincE};<\vlne\atmb;R>]} { \vlin{\mathsf{at}\downarrow\llcorner}{(*)} {\vlholer{ S\vlsbr[b;\vlne\atmb] } } { \vlde{\bvtDder''}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlholer{ S\vlscn{\vlone} } } { \vlhy{\mapPincToDi{F}}}}}} \qquad\qquad\qquad\textrm{or}\qquad\qquad\qquad \vlderivation{ \vlde{\bvtDder'}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlsbr[\mapPincToDi{\pincE};\vlfo{b}{<\vlne\atmb;R>}]} { \vlin{\mathsf{at}\downarrow\llcorner}{(*)} {\vlholer{ S\vlsbr[b;\vlne\atmb] } } { \vlde{\bvtDder''}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlholer{ S\vlscn{\vlone} } } { \vlhy{\mapPincToDi{F}}}}}} $$ such that\ $(*)$ is its lowermost instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, and $\vlne\atmb$ in $S\vlsbr[b;\vlne\atmb]$ is the same occurrence of $\vlne\atmb$ as the one in $\vlsbr<\vlne\atmb;R>$.
If $\vlstore{\vlsbr[\mapPincToDi{G};R]} \bvtInfer{\mathcal{E}} {\, \mapPincToDi{F} \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace}{} \vlread}$ is the reduction of $\bvtDder$, then $\pincLTSJud{\pincE} {G} {\pincLabT}$ if $b\in\strBN{\pincE}$. Otherwise, if $b\in\strFN{\pincE}$, then $\pincLTSJud{\pincE} {G} {b}$. \end{theorem} \begin{proof} First, $\bvtDder$ necessarily consumes $\vlsbr<\vlne\atmb;R>$, or $\vlstore{\vlsbr<\vlne\atmb;R>} \vlfo{b}{\vlread}$ in either case. The reason is twofold. Since $\mapPincToDi{F}$ is a \simple\ structure, it cannot contain any \textnormal{\textsf{Seq}}\xspace\ structure, which, instead, is one of the operators that can compose $\vlsbr<\vlne\atmb;R>$, or $\vlstore{\vlsbr<\vlne\atmb;R>} \vlfo{b}{\vlread}$. Moreover, no occurrence of $b$ inside $R$ can annihilate with the first occurrence of $\vlne\atmb$ inside $\vlsbr<\vlne\atmb;R>$, or $\vlstore{\vlsbr<\vlne\atmb;R>} \vlfo{b}{\vlread}$. \par Second, $\bvtDder'$ satisfies the assumptions of Proposition~\ref{proposition:Rightcontext s preserve communication}. So, its Point~\eqref{enumerate:Trivial derivations and rightcontext s-00} applies to $\vlstore{\vlsbr<\vlne\atmb;R>} \vlsbr[\mapPincToDi{\pincE};\vlread]$, and $\vlstore{\vlsbr<\vlne\atmb;R>} \vlsbr[\mapPincToDi{\pincE};\vlfo{b}{\vlread}]$. Since $\vlne\atmb$ occurs in $\vlsbr<\vlne\atmb;R>$, it must be that $\mapPincToDi{\pincE}\approx\vlholer{S'\,\vlscn{b }}$, for some $\vlholer{S'\vlhole}$, in which the outlined occurrence of $ b $ is the one that annihilates the given $ \vlne\atmb $. We proceed by cases on the possible forms that $\mapPincToDi{\pincE}$ can assume, in relation to the form of $R$. Point~\eqref{enumerate:Trivial derivations and rightcontext s-03} of Proposition~\ref{proposition:Rightcontext s preserve communication} will help us conclude. (Details in Appendix~\ref{section:Proof of theorem:Soundness w.r.t. external communication}.) \end{proof} \begin{theorem}[\textbf{\textit{Soundness}}] \label{equation:PPi-soundness-example-00} Let $\pincE$, and $F $ be processes with $F $ simple\xspace. For every \standard\ derivation\ $\bvtDder $, and every environment structure\ $ R $, if $ \vlderivation{ \vlde{\bvtDder}{{\BV\mathsf{Q}\llcorner}\xspace} {\vlsbr[\mapPincToDi{\pincE};R]}{ \vlhy{\mapPincToDi{F}}}} $, and $\bvtDder$ consumes $R$, then $\pincLTSJud{\pincE}{F} {\mapDiToPinc{R}{\emptyset}}$. \end{theorem} \begin{proof} As a base case, we assume $\mapPincToDi{\pincE}\approx\vlone$. This means that $ \pincE $ is $ \pincZer $. Moreover, since $\bvtDder$ consumes $R$, and no atom exists in $\mapPincToDi{\pincE}$ to annihilate atoms of $R$, we must have $\mapPincToDi{F}\approx\vlone$, i.e.\xspace $ F \equiv \pincZer$, and $R\approx\vlone$. Since $\pincLTSJud{\pincZer}{\pincZer}{\pincPreT}$, thanks to $\pincrefl$, we are done. \par Instead, if $\mapPincToDi{\pincE}\not\approx\vlone$, in analogy with \cite{Brus:02:A-Purely:wd}, we proceed by induction on the number of rules in $\bvtDder$, in relation to the two cases where $R\approx\vlone$, or $R\not\approx\vlone$. \par Since $\bvtDder$ is non-\trivial\xspace, and standard\xspace, we can focus on its lowermost occurrence $ (*) $ of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. Let us assume that the redex of $ (*) $ is $\vlsbr[b;\vlne\atmb]$. We can have the following cases. \begin{itemize} \item Let $R\approx\vlone$, and $\bvtInfer{\mathcal{E}} {\, \mapPincToDi{F} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \mapPincToDi{G}}$ be the reduction of $\bvtDder$.
\begin{enumerate} \item The first case is with $\mathcal{E}$ non-\trivial\xspace. The inductive hypothesis holds on $\mathcal{E}$, and we get $\pincLTSEXTJud{G} {F} {\pincPreT=\mapDiToPinc{\vlone}{\emptyset}} {r}{&}$. \item The second case is with $\mathcal{E}$ trivial, so we cannot apply the inductive hypothesis on $\mathcal{E}$. However, Theorem~\ref{theorem:Soundness w.r.t. trivial deductions} holds on $\mathcal{E}$, and we get $\pincLTSJud{G} {F} {\pincPreT}$. \end{enumerate} Finally, both $\bvtDder$, and $ \mathcal{E} $ satisfy the assumptions of Theorem~\ref{theorem:Soundness w.r.t. internal communication}, so it implies $\pincLTSJud{\pincE} {G} {\pincPreT}$, and the statement we are proving holds thanks to $\pinctran$. \item Let $\vlone \not\approxR \approx\vlstore{\vlsbr<\vlne\atmb;T>} \vlfo{b}{\vlread}$, for some environment structure\ $ T $. Let $\vlstore{\vlsbr[\mapPincToDi{G};\vlfo{b}{\vlsbr<\vlone;T>}]} \bvtInfer{\mathcal{E}} {\, \mapPincToDi{F} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$ be the reduction of $\bvtDder$. Since $\vlstore{\vlsbr<\vlone;T>} \vlfo{b}{\vlread}$ is an environment structure, it is canonical, so, necessarily $\vlstore{\vlsbr<\vlone;T>} \vlfo{b}{\vlread} \approx \vlfo{b}{T} \approxT$ because $ \vlne\atmb\not\in\strFN{T} $. Hence, $\vlstore{\vlsbr[\mapPincToDi{G};T]} \bvtInfer{\mathcal{E}} {\, \mapPincToDi{F} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$. Moreover, since $\vlne\atmb$ disappears along $ \bvtDder $, we forcefully have $b\in\strBN{\mapPincToDi{\pincE}}$. \begin{enumerate} \item Let $\mathcal{E}$ be non-\trivial\xspace. The inductive hypothesis holds on $\mathcal{E}$, implying $\pincLTSJud{G} {F} {\mapDiToPinc{T}{\emptyset}}$. Moreover, $ \bvtDder $ satisfies the assumptions of Theorem~\ref{theorem:Soundness w.r.t. external communication} which implies $\pincLTSJud{\pincE} {G} {\pincLabT}$ also because, as we said, $b\in\strBN{\mapPincToDi{\pincE}}$. So, the statement holds because $\mapDiToPinc{T}{\Set{b,\vlne\atmb}} \pincCong \pincLabT;\mapDiToPinc{T}{\Set{b,\vlne\atmb}} =\mapDiToPinc{\vlne\atmb}{\Set{b,\vlne\atmb}} ;\mapDiToPinc{T}{\Set{b,\vlne\atmb}} =\vlstore{\vlsbr<\vlne\atmb;T>} \mapDiToPinc{\vlfo{b}{\vlread}}{\emptyset}$, and by $\pinctran$ we get $\vlstore{\vlsbr<\vlne\atmb;T>} \pincLTSEXTJud{\pincE} {F} {\mapDiToPinc{\vlfo{b}{\vlread}}{\emptyset}} {r}{&}$. \item The second case is with $\mathcal{E}$ trivial, so we cannot apply the inductive hypothesis on $\mathcal{E}$. However, Theorem~\ref{theorem:Soundness w.r.t. trivial deductions} holds on $\mathcal{E}$, and we get $\pincLTSJud{G} {F} {\pincPreT}$, which implies $T\approx\vlone$. Indeed, if $T\not\approx\vlone$, then $ \bvtDder' $ could not consume $T$. The reason is that being $ \mathcal{E} $ a \trivial\ derivation, it cannot contain any instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$. But a $ \bvtDder' $ not consuming $ T $, would mean $ \bvtDder $ not consuming $ R $, against assumption. Finally, Theorem~\ref{theorem:Soundness w.r.t. external communication} holds on $\bvtDder$, and implies $\pincLTSJud{\pincE} {G} {\pincLabT}$, because, as we said, $b\in\strBN{\mapPincToDi{\pincE}}$. 
So, the statement holds because $\mapDiToPinc{\vlone}{\Set{b,\vlne\atmb}} \pincCong \pincLabT;\mapDiToPinc{\vlone}{\Set{b,\vlne\atmb}} =\mapDiToPinc{\vlne\atmb}{\Set{b,\vlne\atmb}} ;\mapDiToPinc{\vlone}{\Set{b,\vlne\atmb}} =\vlstore{\vlsbr<\vlne\atmb;\vlone>} \mapDiToPinc{\vlfo{b}{\vlread}}{\emptyset}$, and by $\pinctran$ we get $\vlstore{\vlsbr<\vlne\atmb;\vlone>} \pincLTSEXTJud{\pincE} {F} {\mapDiToPinc{\vlfo{b}{\vlread}}{\emptyset}} {r}{&}$. \end{enumerate} We could proceed in the same way when $\vlone \not\approxR \approx\vlstore{\vlsbr<b;T>} \vlfo{b}{\vlread}$. \item Let $\vlone\not \approxR \approx\vlstore{\vlsbr<\vlne\atmb;T>}\vlread$. Then, both $\vlstore{\vlsbr[\mapPincToDi{G};T]} \bvtInfer{\mathcal{E}} {\, \mapPincToDi{F} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$, and $b\in\strFN{\mapPincToDi{\pincE}}$ for the reasons analogous to the ones given in the previous case. \begin{enumerate} \item The first case is with $\mathcal{E}$ non-\trivial\xspace. The inductive hypothesis holds on $\mathcal{E}$, and we get $\pincLTSJud{G} {F} {\mapDiToPinc{T}{\emptyset}} $. Moreover, Theorem~\ref{theorem:Soundness w.r.t. external communication} holds on $\bvtDder$, and implies $\pincLTSJud{\pincE} {G} {b}$, because, as we said, $b\in\strFN{\mapPincToDi{\pincE}}$. So, the statement holds because $\mapDiToPinc{\vlne\atmb}{\emptyset} ;\mapDiToPinc{T}{\emptyset} =\vlstore{\vlsbr<\vlne\atmb;T>} \mapDiToPinc{\vlread}{\emptyset}$, and by $\pinctran$ we get $\vlstore{\vlsbr<\vlne\atmb;T>} \pincLTSEXTJud{\pincE} {F} {\mapDiToPinc{\vlread}{\emptyset}} {r}{&}$. \item The second case is with $\mathcal{E}$ trivial, so we cannot apply the inductive hypothesis on $\mathcal{E}$. However, Theorem~\ref{theorem:Soundness w.r.t. trivial deductions} holds on $\mathcal{E}$, and we get $\pincLTSJud{G} {F} {\pincPreT}$, which implies $T\approx\vlone$ for reasons analogous to the ones given in the previous case. Moreover, Theorem~\ref{theorem:Soundness w.r.t. external communication} holds on $\bvtDder$, and implies $\pincLTSJud{\pincE} {G} {b}$, because, as we said, $b\in\strFN{\mapPincToDi{\pincE}}$. So, the statement holds because $\mapDiToPinc{\vlne\atmb}{\emptyset} \pincCong \mapDiToPinc{\vlne\atmb}{\emptyset} ;\pincLabT =\mapDiToPinc{\vlne\atmb}{\emptyset} ;\mapDiToPinc{\vlone}{\emptyset} =\vlstore{\vlsbr<\vlne\atmb;\vlone>} \mapDiToPinc{\vlread}{\emptyset}$, and by $\pinctran$ we get $\vlstore{\vlsbr<\vlne\atmb;\vlone>} \pincLTSEXTJud{\pincE} {F} {\mapDiToPinc{\vlread}{\emptyset}} {r}{&}$. \end{enumerate} We could proceed in the same way when $\vlone \not\approxR \approx\vlstore{\vlsbr<b;T>} \vlread$. \end{itemize} \end{proof} \subsection{An instance of the proof of Soundness} \label{subsection:An instance of Soundness proof} The derivation~\eqref{equation:example-reduction-10} is standard\xspace. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:example-reduction-10} \begin{minipage}{.47\textwidth} \input{./SBV2-example-reduction-10} \end{minipage} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent Hence, \eqref{equation:example-reduction-10} is an instance of the assumption $\vlstore{ \vlsbr[\mapPincToDi{\pincE};R] } \bvtInfer{\bvtDder} {\, \mapPincToDi{F} \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace}{} \vlread}$ in Theorem~\ref{equation:PPi-soundness-example-00} above. 
The structure $\vlstore{\vlsbr[<\atmaColor{Red};\atmbColor{Blue};\mapPincToDi{\pincE'}> ;<\natmaColor{Red};\mapPincToDi{F'}>]} \vlfo{\atma}{\vlread}$ in~\eqref{equation:example-reduction-10} plays the role of $ \mapPincToDi{\pincE} $, while $ \natmbColor{Blue} $ corresponds to $ R $. Finally $\vlstore{\vlsbr[\mapPincToDi{\pincE'} ;\mapPincToDi{F'}]} \vlfo{\atma}{\vlread}$ plays the role of $ \mapPincToDi{F}$, for some process $\pincE'$, and $F'$. By definition, $ \pincE = \pincNu{\pinca} {(\pincPar{(\pincSec{\textcolor{Red}{a}} {\pincSec{\textcolor{Blue}{b}}{\pincE'}})} {(\pincSec{\textcolor{Red}{\pincna}}{F'})})}$, and $ F = \pincNu{\pinca} {(\pincPar{\pincE'}{F'})}$. Once identified the lowermost instance $(*)$ of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, we replace $\vloneRed$ for all those occurrences of atoms that, eventually, annihilate in $(*)$. So, \eqref{equation:example-reduction-10} becomes the structure~\eqref{equation:example-reduction-00} which is not a derivation because it contains fake instances of rules. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \begin{minipage}{.45\textwidth} \label{equation:example-reduction-00} $$\input{./SBV2-example-reduction-00}$$ \end{minipage} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent Removing all the fake rules, we get to $ \mathcal{E} $ in~\eqref{equation:example-reduction-20}: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:example-reduction-20} \begin{minipage}{.47\textwidth} $$\input{./SBV2-example-reduction-20}$$ \end{minipage} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent The lowermost instance $(*)$ of $ \mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$} $ in~\eqref{equation:example-reduction-10} has disappeared from~\eqref{equation:example-reduction-20}. The inductive argument on~\eqref{equation:example-reduction-20} implies $\pincLTSJud{\pincNu{\pinca} {(\pincPar{(\pincSec{\textcolor{Blue}{b}} {\pincE'})} {F'})} } {\pincNu{\pinca} {(\pincPar{\pincE'}{F'})} } {\textcolor{Blue}{b}}$. Since we can prove: \par\vspace{\baselineskip}\noindent {\scriptsize \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.1cm} \begin{equation} \label{equation:PPi-example-LTS-02-reconstruction} \input{./PPi-example-LTS-02-reconstruction} \end{equation} \end{minipage} \vspace{\baselineskip}\par\noindent by transitivity, we conclude $\pincLTSJud{\pincNu{\pinca} {(\pincPar{(\pincSec{\textcolor{Red}{a}} {\pincSec{\textcolor{Blue}{b}} {\pincE'}}) } {(\pincSec{\textcolor{Red}{\pincna}} {F'})} )} } {\pincNu{\pinca} {(\pincPar{\pincE'}{F'})} } {\textcolor{Blue}{b}}$. \subsection{Recasting labeled transitions to proof-search} \label{subsection:Reducing the labeled transitions to proof-search steps} Once connected $\BV\mathsf{Q}\xspace$, and $\CCSR$ as in the previous subsection, we get back to our initial reachability problem. Let us assume we want to check $\pincLTSJud {\pincE} {F} {\pincLabL_1;\cdots;\pincLabL_n}$ in $ \CCSR $, where $ F$ is a simple process. The following steps recast the problem of $ \CCSR $ into a problem of searching inside $\BV\mathsf{Q}\xspace$: \begin{enumerate} \item \label{enumerate:how-soundness-works-01} First we ``compile'' both $\pincE$, and $F$ into process structure s $\mapPincToDi{\pincE}$, and $\mapPincToDi{F}$, where $\mapPincToDi{F}$ is forcefully simple\xspace. Then, we fix an $ R $ such that $ \mapDiToPinc{R}{\emptyset} = \pincLabL_1;\cdots;\pincLabL_n$. 
\item \label{enumerate:how-soundness-works-02} Second, it is sufficient to look for $\vlstore{\vlsbr[\mapPincToDi{\pincE} ;\vlne{\mapPincToDi{F}} ;R]} \bvtInfer{\bvtPder} {\ \bvtJudGen{}{} \vlread}$ inside $ \BV\mathsf{Q}\xspace $, since the up-fragment of $ \SBV\mathsf{Q}\xspace $ is admissible for $ \BV\mathsf{Q}\xspace $ (Corollary~\ref{theorem:Admissibility of the up fragment} of \cite{Roversi:unpub2012-I}). \item \label{enumerate:how-soundness-works-03} Finally, if the $ \bvtPder $ of point~\eqref{enumerate:how-soundness-works-02} here above exists, we can conclude $\pincLTSJud {\pincE} {F} {\pincLabL_1;\cdots;\pincLabL_n}$ in $ \CCSR $. \end{enumerate} Point~\ref{enumerate:how-soundness-works-03} rests on some simple observations. The structure $\vlne{\mapPincToDi{F}}$ is invertible\ thanks to Fact~\ref{fact:Basic properties of simplestructure}. So, there exists $\vlstore{\vlsbr[\mapPincToDi{\pincE};R]} \bvtInfer{\bvtDder'} {\mapPincToDi{F} \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$, where both $ \mapPincToDi{\pincE} $, and $ \mapPincToDi{F} $ are \textnormal{\textsf{Tensor}}\xspace-free because they are process structure s. The same holds for $R$, which is an environment structure. Consequently, every instance of $\bvtswirulein$ in $\bvtDder$, if any, can only be $ \vlderivation{ \vlin{\bvtswirule}{} {\vlsbr[(R;\vlone);U]}{ \vlhy{\vlsbr([R;U];\vlone)}} } $, and it can be erased. This means that $ \bvtDder $ only contains rules that belong to $\Set{\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}} $. Standardization (Theorem~\ref{theorem:Standardization in bvtatrdrulein...}), which applies to $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}$, implies that we can transform the derivation $\bvtDder$ of $\BV\mathsf{Q}\xspace$ into a \standard\ derivation\ $\mathcal{E}$ of ${\BV\mathsf{Q}\llcorner}\xspace$. The only missing step is in the coming section. It shows that proof-search in $ {\BV\mathsf{Q}\llcorner}\xspace $ is sound w.r.t.\xspace\ the computations of the labeled transition system\ defined for $ \CCSR $. \section{How to compute in $\CCSR$ by means of $\BV\mathsf{Q}\xspace$} \label{section:How computing in CCSR by means of BVTCC} Given $\BV\mathsf{Q}\xspace$, and $\CCSR$, we illustrate how to transform questions about the existence of computations of $ \CCSR $ into questions about proof-search inside the standard fragment ${\BV\mathsf{Q}\llcorner}\xspace$ of $\BV\mathsf{Q}\xspace$. Let $\pincE$, and $F$, be two processes of $\CCSR$, with $ F$ simple. Let us assume we want to check $\pincLTSJud {\pincE} {F} {\pincLabL_1;\cdots;\pincLabL_n}$. Next, we highlight the main steps that answer such a question by answering a question about proof-search inside $\BV\mathsf{Q}\xspace$, without resorting to computations in the labeled transition system\ of $\CCSR$. \par To that end, this section has two parts. The first one formalizes the notions that make the link between processes of $ \CCSR $, and structures of $ \BV\mathsf{Q}\xspace $ precise. The second part, i.e.\xspace Subsection~\ref{subsection:Reducing the labeled transitions to proof-search steps}, delineates the steps that transform one question into the other, thereby also justifying the need to prove the Soundness of ${\BV\mathsf{Q}\llcorner}\xspace$ --- not $\BV\mathsf{Q}\xspace$ --- w.r.t.\xspace\ $\CCSR$, in Section~\ref{section:Soundness of BVTCC}.
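\par Before developing those two parts, and only as an informal reading aid, we condense the recasting above into a single query to a proof-search oracle. The sketch below is not part of the formal development, and its names are ours: \texttt{encode\_process}, \texttt{encode\_labels}, and \texttt{provable\_in\_BVQ} are placeholders for the map from processes to structures, for the choice of the environment structure $R$, and for proof-search inside $\BV\mathsf{Q}\xspace$, respectively.
\begin{verbatim}
# Illustrative sketch only, with hypothetical helpers: answer the
# reachability question "does E reach F observing l1;...;ln?" by
# a single proof-search query.
def reachability_via_proof_search(E, F, labels,
                                  encode_process,    # placeholder: process encoding
                                  encode_labels,     # placeholder: choice of R
                                  provable_in_BVQ):  # placeholder: proof-search oracle
    # Step 1: "compile" E, F, and the labels.
    goal = ["par", encode_process(E),
                   ["neg", encode_process(F)],
                   encode_labels(labels)]
    # Step 2: look for a proof of [ E ; neg F ; R ] inside BVQ.
    # Step 3: such a proof certifies the transition in CCSR.
    return provable_in_BVQ(goal)
\end{verbatim}
The sketch is only a mnemonic for the three steps above; all the formal content lies in the soundness results.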
\subsection{Connecting $\CCSR$, and $ \BV\mathsf{Q}\xspace $} \label{subsubsection:Formally connecting CCSR, and BVTCC} \paragraph{Process structure s.} They belong to the language of the grammar~\eqref{align:BV2-process-structures} here below, and, clearly, they are \textnormal{\textsf{Tensor}}\xspace-free: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{align:BV2-process-structures} \input{./BV2-process-structures} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent Like at page~\ref{fig:BVT-structures}, we range over variable names of process structure s by $\atmLabL, \mathfrak{m}$, and $ \mathfrak{n} $. \begin{fact}[\textbf{\textit{Processes correspond to process structure s}}] \label{fact:From a process to a process structure} Processes, and process structure s isomorphically correspond thanks to the following isomorphism, so extending the correspondence in \cite{Brus:02:A-Purely:wd} among $\mathsf{CCS}\xspace$ terms, and $\mathsf{BV}\xspace$ structures. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:SBV2-to-PPi-map-process} \input{./SBV2-to-PPi-map-process} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent \end{fact} \paragraph{Environment structure s.} \label{paragraph:Environmentstructure s} Let us recall Example~\eqref{example:Modeling external communication inside BVT}. It shows that representing an external communication as a derivation of $\BV\mathsf{Q}\xspace$ requires to assign a specific meaning to the structures in the conclusion of the derivation. One structure represents a process. The other one encodes the labels that model the sequence of messages between the process, and an environment. So, we need to identify the \dfn{environment structure s}, namely the set of structures that can fairly represent the sequence of messages. By definition, we say that every \dfn{environment structure}\ is a \emph{canonical} structure (page~\pageref{paragraph:Structures in canonical form}) that the following grammar \eqref{equation:SBV2-environment-structures} generates: \par\vspace{\baselineskip}\noindent \fbox{ \begin{minipage}{.974\linewidth} {\small \begin{equation} \label{equation:SBV2-environment-structures} \input{./SBV2-environment-structures} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent If different from $\vlone$, we have to think of every environment structure\ as a list, possibly in the scope of some instance of \textnormal{\textsf{Sdq}}\xspace, that we can consume from its leftmost component, onward. \begin{example}[\textit{\textbf{Environment structure s}}] \label{example:Environment structures} Let $\natma,\atma_1,\natma_1,b_1,b_2 \not\approx \vlone$. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.4cm} \input{./SBV2-example-environment-structures} \vspace{-.5cm} \end{minipage} \par\vspace{\baselineskip}\noindent \eqref{eqnarray:example-environment-structure-01} is not an environment structure\ because $b_4$ does not occur in the structure. \eqref{eqnarray:example-environment-structure-02} is not an environment structure\ because $\vlone$ occurs in it. \end{example} \begin{fact}[\textbf{\textit{Environment structure s map to sequences of actions}}] \label{fact:From an environment structure to a set of actions} The map~\eqref{equation:SBV2-to-PPi-map-actions} takes both an environment structure, and a set of atoms as arguments. 
The map transforms a given environment structure\ to a sequence of actions that may work as a label of transitions in~\eqref{equation:PPi-LTS-from-BVT}. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.5cm} \begin{equation} \label{equation:SBV2-to-PPi-map-actions} \input{./SBV2-to-PPi-map-actions} \end{equation} \vspace{-.7cm} \end{minipage} \par\vspace{\baselineskip}\noindent Given an environment structure, the map yields the corresponding sequence, if its second argument is $\emptyset$. \end{fact} \begin{example}[\textbf{\textit{From an environment structure\ to actions}}] \label{example:From an environment structure to a set of actions} Both $b_1$, and $b_2$ are internal actions of $\vlstore{\vlsbr<\atma_1 ;\vlfo{b_2} {<\natma_1 ;\vlfo{b_1}{<b_2;b_1>}>}>} \mapDiToPinc{\vlread}{\emptyset} = \atma_1;\natma_1; \pincPreT;\pincPreT \pincCong \atma_1 ;\natma_1$ in~\eqref{eqnarray:example-environment-structure-00}. Intuitively, if a variable name $\pincLabL$ that occurs in a structure $\pincE$ belongs to $X$ in $\mapDiToPinc{\pincE}{X}$, then $\pincLabL$ gets mapped to $\pincLabT$. The reason why $\pincLabL$ is in $X$ is that $ \pincLabL $ is not a free name of $ \pincE $. \end{example} \paragraph{\Trivial\ derivation s.} By definition, a derivation $\bvtDder$ of $\BV\mathsf{Q}\xspace$ is \dfn{trivial} if (i) $\bvtDder$ only operates on \textnormal{\textsf{Tensor}}\xspace-free structures, and (ii) $\bvtDder$ does not contain any occurrence of $\mbox{$\mathsf{ai}\!\!\downarrow$}$. All the others are \dfn{\nontrivial\ derivation s}. \begin{example}[\textbf{\textit{A \trivial\ derivation}}] \label{example:A trivial derivation} It is in~\eqref{equation:A-trivial-derivation} here below. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:A-trivial-derivation} \input{./SBV2-trivial-derivation} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent Being trivial\ does not mean without rules. ``Trivial'' identifies a derivation where no communication, represented by instances of $\mbox{$\mathsf{ai}\!\!\downarrow$}$, occur. \end{example} \begin{fact}[\textit{\textbf{\Trivial\ derivation s on process structure s are quite simple}}] \label{fact:Trivial derivations preserve process structures} Let $R$, and $T$ be process structure s, and $\bvtInfer{\bvtDder} {\, T \bvtJudGen{\mathsf{B}\xspace}{} R}$ be trivial. Then $\mathsf{B}\xspace =\Set{\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}} $, and all the instances of $\mbox{$\mathsf{q}\!\!\downarrow$}$ in $ \bvtDder $ have form $\vlderivation{ \vliq{\mathsf{q}\downarrow}{} {\vlsbr[<\pincLabL;R'>;R'']}{ \vlhy{\vlsbr<\pincLabL;[R';R'']>}} } $, or $\vlderivation{ \vliq{\mathsf{q}\downarrow}{} {\vlsbr[R';R'']}{ \vlhy{\vlsbr<R';R''>}} }$, for some $R', R''$, and $\pincLabL\not\approx\vlone$. \end{fact} \begin{proof} By definition, no $\mbox{$\mathsf{ai}\!\!\downarrow$}$ can exist in $\bvtDder$. Let us assume an instance $ \vlinf{\bvtswirule }{} {\vlsbr[(R;T);U]} {\vlsbr([R;U];T)} $ exists in $\bvtDder$. Since $ \bvtDder $ is \textnormal{\textsf{Tensor}}\xspace-free, it must be $ T \approx \vlone$ and we can eliminate such an $ \bvtswirulein $. Let us assume one instance of $\mbox{$\mathsf{q}\!\!\downarrow$}$ exists in $\bvtDder$. 
In general it would be $\vlinf{\mathsf{q}\downarrow}{(*)} {\vlsbr[<\pincLabL;R'>;<\mathfrak{m};R''>]} {\vlsbr<[\pincLabL;\mathfrak{m}];[R';R'']>}$, for some $\pincLabL\,,\mathfrak{m}, R'$, and $R''$. So, let us assume such a $(*)$ occurs in $\bvtDder$ with $\pincLabL\,, \mathfrak{m}\not\approx\vlone$. In absence of $\mbox{$\mathsf{ai}\!\!\downarrow$}$, even though we might have $\pincLabL\approx\vlne\mathfrak{m}$, the structure $\vlsbr[\pincLabL;\mathfrak{m}]$ could not disappear from $\bvtDder$, namely from $T$. Consequently, $T$ could not be a process structure, against assumption. \end{proof} \paragraph{\Simple\ structure s.} This notion strengthens the idea that ``trivial'' stands for ``no interactions''. A structure $R$ is a \emph{\simple\ structure} if it satisfies two constraints. First, it must belong to the language of~\eqref{equation:SBV2-normal-structures}. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.1cm} \begin{equation} \label{equation:SBV2-normal-structures} \input{./SBV2-normal-structures} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent Second, if $ \pincLabL_1,\ldots,\pincLabL_n $ are all, and only, the variable names that occur in $ R $, then $ i\neq j $ implies $ \pincLabL_i\neq\pincLabL_j $, for every $ i,j\in\Set{1,\ldots, n} $. \begin{fact}[\textbf{\textit{Basic properties of \simple\ structure s}}] \label{fact:Basic properties of simplestructure} \begin{itemize} \item Trivially, by definition, \simple\ structure s are co-invertible, because every of them is the negation of an \invertible\ structure\ (Proposition~\ref{proposition:Invertible structures are invertible}.) \item \Simple\ structure s are the logical counterpart of simple process es, thanks to the isomorphism~\eqref{equation:SBV2-to-PPi-map-process}. \end{itemize} \end{fact} \begin{example}[\textbf{\textit{\Simple\ structure s}}] \label{example:Normal structures stand for simple processes} The following table shows some instances of \simple\ structure s which correspond to the simple process es\ in Example~\eqref{example:Simple processes}. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \input{SBV2-example-simple-structures} \end{minipage} \par\vspace{\baselineskip}\noindent Both the second, and the third structures are simple\xspace\ because belong to~\eqref{equation:SBV2-normal-structures}, and $\pinca,b,{\overline{c}}$ is the list of their pairwise distinct variable names. All the structures are coinvertible because negation of $\vlsbr(\natma;b)$, and $\vlsbr(\natma;\vlfo{b}{(\vlfo{d}{(\natma;c)};\vlne\atmb)};\natma)$, and $\vlsbr(\vlfo{b}{(\vlfo{c}{(\natma;c)};\vlne\atmb)};\natma)$, respectively, which all are invertible. \qed \end{example} The following fact formalizes that \trivial\ derivation s operating on \simple\ structure s only, represent computations where only instances of $\mbox{$\mathsf{u}\!\!\downarrow$}$ occur. In Section~\ref{section:Soundness of BVTCC} this will allow to see that a \trivial\ derivation\ on \simple\ structure s stands for a process that cannot communicate, neither internally, nor externally. \begin{fact}[\textbf{\textit{\Trivial\ derivation s on \simple\ structure s contain almost no rules}}] \label{fact:Trivial derivations preserve normal structures} For any simple\xspace\ $T$, if $\bvtInfer{\bvtDder} {\, T \bvtJudGen{\mathsf{B}\xspace}{} R}$ is trivial, then $\mathsf{B}\xspace=\Set{\mbox{$\mathsf{u}\!\!\downarrow$}}$, and $R$ is simple\xspace\ as well. 
\end{fact} \begin{proof} Fact~\ref{fact:Trivial derivations preserve process structures} implies that the derivation $\bvtDder$ only contains instances of $\mbox{$\mathsf{u}\!\!\downarrow$}$, and of very specific instances of $\mbox{$\mathsf{q}\!\!\downarrow$}$. Both kinds of rules neither erase, nor introduce atoms, or new occurrences of \textnormal{\textsf{Seq}}\xspace in between $R$, and $T$. Let us assume that $\bvtDder$ effectively contains an instance of $\mbox{$\mathsf{q}\!\!\downarrow$}$ with reduct $\vlsbr<\atmLabL;R'>$, for some $\atmLabL$, and $R'$. Then, the occurrence of \textnormal{\textsf{Seq}}\xspace would occur in $T$, as well, making it not simple\xspace, against our assumption. So, no occurrence of $\mbox{$\mathsf{q}\!\!\downarrow$}$ exists in $\bvtDder$. This, of course, does not prevent the existence of $\vlsbr<\atmLabL;R'>$ along $\bvtDder$, and, in particular, inside $R$. However, $\mbox{$\mathsf{u}\!\!\downarrow$}$ could not eliminate it, and an occurrence of \textnormal{\textsf{Seq}}\xspace would be inside $T$. In that case $T$ could not be simple\xspace, against assumption. But if no occurrence of $\vlsbr<\atmLabL;R'>$ is inside $\bvtDder$, then our assumptions imply that $R$ is a \simple\ structure. \qed \end{proof} \section{Communication, and concurrency with logic restriction} \label{section:Communicating, and concurrent processes with restriction} The correspondences Section~\ref{section:Communication core of BVT} highlights, justify the introduction of a calculus of processes which we identify as $\CCSR$. Specifically, $\CCSR$ is a calculus of communicating, and concurrent processes, with a logic-based restriction, whose operational semantics is driven by the logical behavior of $ \mbox{$\mathsf{u}\!\!\downarrow$} $ rule. \begin{remark}[\textbf{\textit{$\CCSR$ vs. Milner $\mathsf{CCS}\xspace$}}] \label{remark:CCSR vs Milners CCS} It will turn out that $\CCSR$ is not Milner $\mathsf{CCS}\xspace$ \cite{Miln:89:Communic:qo} . The concluding Section~\ref{section:Final discussion, and future work} will discuss on this. \end{remark} \paragraph{Actions on terms of $\CCSR$.} Let $\pinca, b, c, \ldots$ denote the elements of a countable set of \dfn{names}, and let $\pincna, {\overline{b}}, {\overline{c}}, \ldots$ denote the elements of a countable set of \dfn{co-names}. The set of \dfn{labels}, which we range over by $\pincLabL$\,, $\mathfrak{m}$, and $\mathfrak{n}$ contains both names, and co-names, and nothing else. Let $\pincLabT$ be the \dfn{silent}, or \dfn{perfect action}, different from any name, and co-name. The (set of) \dfn{sequences of actions} contains equivalence classes defined on the language that \eqref{equation:PPi-LTS-labels} yields: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:PPi-LTS-labels} \input{./PPi-LTS-labels} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent By definition, the equivalence relation~\eqref{align:PPi-labels-congruence} here below induces the congruence $\pincCong$ on~\eqref{equation:PPi-LTS-labels}. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{align:PPi-labels-congruence} \input{./PPi-labels-congruence} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent We shall use $\atmalpha, \beta$, and $\gamma$ to range over the elements in the set of actions sequences. 
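\par As a minimal, executable reading aid, the helper below normalizes a sequence of actions under a single assumption, namely that the silent action $\pincLabT$ behaves as a unit for ``;'', in the form $\pincLabL;\pincLabT\pincCong\pincLabL\pincCong\pincLabT;\pincLabL$ which the proof of Theorem~\ref{equation:PPi-soundness-example-00} and Example~\ref{example:From an environment structure to a set of actions} exploit. The concrete congruence is the one in \eqref{align:PPi-labels-congruence}; the helper, and its textual encoding of names, are ours.
\begin{verbatim}
# Minimal sketch (ours): normalize a sequence of actions assuming only that
# the silent action 'tau' is a unit for ';'.  Co-names carry a leading '~',
# which is our own encoding.  A sequence of silent actions only collapses
# to the single silent action.
def normalize(actions):
    visible = [a for a in actions if a != "tau"]
    return visible if visible else ["tau"]

assert normalize(["a1", "~a1", "tau", "tau"]) == ["a1", "~a1"]
assert normalize(["tau", "tau"]) == ["tau"]
\end{verbatim}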
\paragraph{Processes of $\CCSR$.} The terms of $\CCSR$, i.e.\xspace \dfn{processes}, belong to the language of the grammar \eqref{align:PPi-syntax} here below. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{align:PPi-syntax} \input{./PPi-syntax} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent We use $ \pincE, F, G$, and $ H $ to range over processes. The \dfn{inactive process} is $\pincZer$, while the \dfn{parallel composition} of $\pincE$, and $F$ is $\pincPar{\pincE}{F}$. The \dfn{sequential composition} $\pincSec{\pincLabL}{\pincE}$ sets the occurrence of the \dfn{action prefix} $\pincLabL$ before the occurrence of $\pincE$. \dfn{Logic restriction} $\pincNu{\pinca}{\pincE}$ hides all, and only, the occurrences of $\pinca$, and $ \pincna $, inside $ \pincE $, which become invisible outside $\pincE$. \paragraph{Size of processes.} The size $\Size{\pincE}$ of $\pincE$ is the number of symbols of $\pincE$. \paragraph{Congruence on processes of $\CCSR$.} We partition the processes of $\CCSR$ up to the smallest congruence which, by abusing notation, we keep calling $\pincCong$, and which we obtain as reflexive, transitive, and contextual closure of the relation \eqref{align:PPi-structural-congruence} here below. \par\vspace{\baselineskip}\noindent {\small \fbox{ \vspace{-.2cm} \begin{minipage}{.974\linewidth} \begin{equation} \label{align:PPi-structural-congruence} \input{./PPi-structural-congruence} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent In~\eqref{align:PPi-structural-congruence} (i) $\pincE\subst{\pinca}{b}$ denotes a standard clash-free substitution of $\pinca$ for both $b$, and ${\overline{b}}$ in $\pincE$ that we can define as usual, and (ii) $\pincFN{\cdot}$ is the set of free names of a term in $\CCSR$, whose definition, again, is the obvious one. Namely, neither $\pinca$, nor $\pincna$ belong to the set $\pincFN{\pincNu{\pinca}{\pincE}}$. \paragraph{Labeled transition system\ of $\CCSR$.} Its rules are in~\eqref{equation:PPi-LTS-from-BVT}, and they justify why $ \CCSR $ is not Milner $ \mathsf{CCS}\xspace $. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:PPi-LTS-from-BVT} \input{./PPi-LTS-from-BVT} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent In~\eqref{equation:PPi-LTS-from-BVT}, the rule $\pincact$ implements external communication, by firing the action prefix $\pincLabL$, as usual. The rule $\pinccom$ implements internal communication, annihilating two complementary actions. The rules $\pincpi$, and $\pincpe$ allow processes, side by side, to communicate, even when both are inside a logic restriction. This is a consequence of the logical nature of \textnormal{\textsf{Sdq}}\xspace, which binds names, and co-names, up to renaming. The rule $\pinccntxp$ leaves processes, side by side, to evolve independently. Finally, $\pincrefl$ makes the relation reflexive. 
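\par\vspace{\baselineskip}\noindent To give a more operational reading of the rules just described, the following Haskell fragment is a minimal, illustrative sketch of the syntax of $\CCSR$ and of a naive one-step transition function. It only covers the behavior that the prose above attributes to $\pincact$, $\pinccom$, and $\pinccntxp$, together with the hiding effect of logic restriction on observable labels; the rules $\pincpi$, and $\pincpe$, which let complementary prefixes interact even across distinct logic restrictions, are deliberately left out. All type, and function names are ours, not the paper's.
\begin{verbatim}
-- Illustrative sketch only: it does NOT model the rules p_i / p_e of
-- the labeled transition system above; names of types/functions are ours.
type Name = String

data Label = Pos Name | Neg Name deriving (Eq, Show)

complement :: Label -> Label          -- pairs a name with its co-name
complement (Pos a) = Neg a
complement (Neg a) = Pos a

nameOf :: Label -> Name
nameOf (Pos a) = a
nameOf (Neg a) = a

data Act = Vis Label | Silent deriving (Eq, Show)

data Proc
  = Nil              -- the inactive process 0
  | Pre Label Proc   -- action prefix  l.E
  | Par Proc Proc    -- parallel composition  E | F
  | Res Name Proc    -- logic restriction, hiding a name and its co-name
  deriving (Eq, Show)

-- One-step transitions: (observed action, residual process).
steps :: Proc -> [(Act, Proc)]
steps Nil       = []
steps (Pre l e) = [(Vis l, e)]                                  -- rule "act"
steps (Par e f) =
     [ (act, Par e' f) | (act, e') <- steps e ]                 -- rule "ctx"
  ++ [ (act, Par e f') | (act, f') <- steps f ]                 -- rule "ctx"
  ++ [ (Silent, Par e' f')                                      -- rule "com"
     | (Vis l, e') <- steps e, (Vis m, f') <- steps f, m == complement l ]
steps (Res a e) =
  -- actions on the restricted name are simply blocked here; the actual
  -- rules p_i / p_e of the paper are more permissive.
  [ (act, Res a e') | (act, e') <- steps e, passes act ]
  where passes (Vis l) = nameOf l /= a
        passes Silent  = True

-- First rewriting of the Example below, with E and F instantiated to 0:
-- under the restriction on a, only the internal (Silent) synchronisation
-- on a is offered first; afterwards the action b becomes observable.
exampleOne :: Proc
exampleOne = Res "a" (Par (Pre (Pos "a") (Pre (Pos "b") Nil))
                          (Pre (Neg "a") Nil))
\end{verbatim}
\noindent The sketch reproduces, under the stated simplifications, the first rewriting in Example~\ref{example:Using the labeled transition system} below, but not the second one, which needs $\pincpi$, and $\pincpe$.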
\begin{example}[\textbf{\textit{Using the labeled transition system}}] \label{example:Using the labeled transition system} As a first example, we rewrite $\pincNu{\pinca} {(\pincPar{(\pincSec{\pinca} {\pincSec{b} {\pincE})} } {\pincSec{\pincna} {F} }) }$ to $\pincNu{\pinca}{(\pincPar{\pincE}{F})}$, observing the action $b$, as follows: \par\vspace{\baselineskip}\noindent {\scriptsize \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.3cm} \begin{equation} \label{equation:Using the labeled transition system-01} \input{./PPi-example-LTS-01} \end{equation} \end{minipage} \vspace{\baselineskip}\par As a second example, we show that the labeled transition system \eqref{equation:PPi-LTS-from-BVT} allows some interaction which originates from the logical nature of \textnormal{\textsf{Sdq}}\xspace. In $ \CCSR $ we model that $\pincPar{\pincNu{\pinca} {(\pincSec{\pinca}{\pincSec{b} {\pincE}} )} } {\pincNu{\pinca} {(\pincSec{\pincna} {F })}}$ reduces to $\pincNu{\pinca}{(\pincPar{\pincE}{F})}$, observing $b$, unlike in Milner $\mathsf{CCS}\xspace$: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.4cm} \begin{equation} \label{equation:Using the labeled transition system-02} \input{./PPi-example-LTS-02} \end{equation} \end{minipage} \vspace{\baselineskip}\par\noindent \end{example} \paragraph{Simple process es.} They are the last notion we introduce in this section. They are useful for technical reasons which Section~\ref{section:Soundness of BVTCC} will make apparent. A process $\pincE$ is a \textit{simple process} whenever it satisfies two constraints. First, $ \pincE $ must belong to the language of~\eqref{equation:PPi-simple-processes}: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.1cm} \begin{equation} \label{equation:PPi-simple-processes} \input{./PPi-simple-processes} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent Second, if $\pincLabL_1,\ldots,\pincLabL_n$ are all, and only, the action prefixes that occur in $ \pincE $, then $ i\neq j $ implies $\pincLabL_i\neq\vlne{\pincLabL_j}$, for every $i,j\in\Set{1,\ldots,n}$. \begin{example}[\textbf{\textit{Simple process es}}] \label{example:Simple processes} The following table shows some of them. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \input{SBV2-example-simple-processes} \end{minipage} \par\vspace{\baselineskip}\noindent Both the second, and the third process are simple\xspace\ because they belong to~\eqref{equation:PPi-simple-processes}, and $\pinca,b,{\overline{c}}$ is the list of their pairwise distinct action prefixes. \end{example} \begin{remark}[\textbf{\textit{Aim, and nature of simple process es}}] In the coming Section~\ref{section:How computing in CCSR by means of BVTCC} we shall intuitively show that simple process es play the role of results of computations when we use derivations of $\BV\mathsf{Q}\xspace$ to compute what the labeled transition system\ in~\eqref{equation:PPi-LTS-from-BVT} can, in fact, compute by itself. \end{remark} \section{Internalizing derivability of $ \BV\mathsf{Q}\xspace$} \label{section:Internalizing the derivability in BVT} Roughly, internalizing derivability in $ \BV\mathsf{Q}\xspace $ shows when we can ``discharge assumptions''. It is another of the properties we need to recast reachability problems in a suitable calculus of communicating, and concurrent processes into proof-search inside (a fragment) of $ \BV\mathsf{Q}\xspace $. 
The internalization is linked to the notion of \invertible\ structure s. \paragraph{Invertible, and \coinvertible\ structure s.} We define them in~\eqref{equation:SBV2-invertible-structures-in-words} here below. \par\vspace{\baselineskip}\noindent { \fbox{ \begin{minipage}{.95\linewidth} \begin{equation} \label{equation:SBV2-invertible-structures-in-words} \begin{minipage}{.8\linewidth} \begin{center} \input{./SBV2-invertible-structures-in-words} \end{center} \end{minipage} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent If $T$ is invertible, then, by definition, $\vlne{T}$ is \dfn{co-invertible}. \begin{remark} Clearly, definition~\eqref{equation:SBV2-invertible-structures-in-words} here above omits the implication ``If $\bvtInfer{\bvtDder} {\vlne{T} \bvtJudGen{\BV\mathsf{Q}\xspace}{} P}$, then $\vlstore{\vlsbr[T;P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$'' on purpose. It always holds because $ \mbox{$\mathsf{i}\!\!\downarrow$} $ is derivable in $ \BV\mathsf{Q}\xspace $. Moreover, our \invertible\ structure s are inspired by the namesake concept in \cite{Stra:03:System-N:mb}. \end{remark} \par\vspace{\baselineskip}\noindent The following proposition gives sufficient conditions for a structure to be invertible. \begin{proposition}[\textit{\textbf{A language of \invertible\ structure s}}] \label{proposition:Invertible structures are invertible} The following grammar \eqref{equation:SBV2-invertible-structures} generates \invertible\ structure s. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.95\linewidth} \begin{equation} \label{equation:SBV2-invertible-structures} \input{./SBV2-invertible-structures} \end{equation} \end{minipage} \end{proposition} \begin{proof} Let $\vlstore{\vlsbr[\vlne{T};P]} \bvtInfer{\bvtPder}{\ \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread}$ be given with $\vlne{T}$ in \eqref{equation:SBV2-invertible-structures}. We reason by induction on $\vlstore{\vlsbr[\vlne{T};P]} \Size{\vlread}$, and we build $\bvtDder$ of~\eqref{equation:SBV2-invertible-structures-in-words}, proceeding by cases on $\vlne{T}$. (Details in Appendix~\ref{section:Proof of proposition:Invertible structures are invertible}.) \end{proof} \section{Standardization inside a fragment of $\BV\mathsf{Q}\xspace$} \label{section:Standardization inside BVT} Given a derivation $ \bvtDder $ of $ \BV\mathsf{Q}\xspace $, standardization reorganizes $\bvtDder$ into another derivation $\mathcal{E} $ with the same premise, and conclusion, as $ \bvtDder $. The order of application of the instances of $ \mbox{$\mathsf{ai}\!\!\downarrow$} $ in $ \mathcal{E} $ satisfies a specific, given constraint which some examples illustrate. Standardization in $ \BV\mathsf{Q}\xspace $ is one of the properties we need to recast reachability problems in a suitable calculus of communicating, and concurrent processes into proof-search inside (a fragment) of $ \BV\mathsf{Q}\xspace $. \begin{example}[\textbf{\textit{\Standard\ derivation s of $\BV\mathsf{Q}\xspace$}}] \label{example:Standard proofs of BVT} Both~\eqref{equation:tracing-sequential-interactions-01}, and~\eqref{equation:tracing-sequential-interactions-00} here below are \standard\ derivation s of the same conclusion $ \vlsbr[<\textcolor{Red}{a};R>;<\textcolor{Blue}{\pincnb};T>;<\textcolor{Red}{\pincna};\textcolor{Blue}{b}>]$ from the same premise $ \vlsbr[R;T] $. 
\par\vspace{\baselineskip}\noindent {\scriptsize \fbox{ \begin{minipage}{.482\linewidth} \begin{equation} \label{equation:tracing-sequential-interactions-01} \input{./SBV2-tracing-sequential-interactions-01} \end{equation} \end{minipage} \begin{minipage}{.482\linewidth} \begin{equation} \label{equation:tracing-sequential-interactions-00} \input{./SBV2-tracing-sequential-interactions-00} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent They are standard\xspace because no occurrence of $ \mbox{$\mathsf{ai}\!\!\downarrow$} $ \emph{appears on the right-hand side} of an instance of $ \textnormal{\textsf{Seq}}\xspace $. \end{example} \begin{remark}[\textbf{\textit{Proof-theoretical meaning of standardization}}] Standardization says that (i) any of the structures inside $R$, and $T$ of $\vlsbr<R;T>$ will never interact, and (ii) all the interactions inside $R$ must occur before the interactions inside $T$. \end{remark} \paragraph{Our goal} is to show that we can transform a \emph{sufficiently large} set of derivations in $ \BV\mathsf{Q}\xspace $ into standard ones. We start by supplying the main definitions. \paragraph{Right-context s.} We rephrase inductively, and extend to $\BV\mathsf{Q}\xspace$, the namesake definition in \cite{Brus:02:A-Purely:wd}. The following grammar generates \emph{right-context s}, which we denote as $\vlholer{S\vlhole}$. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:SBV2-right-contexts-inductive} \input{./SBV2-right-contexts-inductive} \end{equation} \end{minipage} \begin{example}[\textbf{\textit{Right-context s}}] \label{example:Rightcontexts} A right-context\ is $\vlstore{\vlsbr[\atma;\vlfo{c}{[b;<\vlhole;\vlne\atmc;d>]}]}\vlread$.\\ Instead, $\vlstore{\vlsbr[\atma;\vlfo{c}{[b;<\vlne\atmc;\vlhole;d>]}]}\vlread$ is not. \end{example} \paragraph{Left atomic interaction.} Recalling it from \cite{Brus:02:A-Purely:wd}, the \emph{left atomic interaction} is: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:SBV2-atomic-interaction-left} \input{./SBV2-right-atomic-interaction} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent \begin{example}[\textbf{\textit{Some left atomic interaction instances}}] \label{example:} Let three proofs of $\BV\mathsf{Q}\xspace$ be given: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.321\linewidth} \begin{equation} \label{equation:example-standardizable} \input{./SBV2-example-standardizable} \end{equation} \end{minipage} \begin{minipage}{.321\linewidth} \begin{equation} \label{equation:example-standardized} \input{./SBV2-example-standardized} \end{equation} \end{minipage} \begin{minipage}{.321\linewidth} \begin{equation} \label{equation:example-non-standardizable} \input{./SBV2-example-non-standardizable} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent The two occurrences of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in \eqref{equation:example-standardizable} can correctly be seen as two instances of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, as outlined by \eqref{equation:example-standardized}. 
Instead, the occurrence of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in \eqref{equation:example-non-standardizable} cannot be seen as an instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ as it occurs to the right of \textnormal{\textsf{Seq}}\xspace, namely in the context $ \vlsbr<[\atma;\natma];\vlhole>$ which is not in~\eqref{equation:SBV2-right-contexts-inductive}. \end{example} \begin{fact} \label{fact:atil is ati} By definition, every occurrence of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ is one of $\mbox{$\mathsf{ai}\!\!\downarrow$}$. The converse is false. \end{fact} \paragraph{\Standard\ derivation s of $\BV\mathsf{Q}\xspace$.} Let $ R $, and $ T $ be structures. A derivation $\bvtInfer{\bvtDder}{T \bvtJudGen{\BV\mathsf{Q}\xspace}{} R}$ is \emph{standard\xspace} whenever all the atomic interactions that $\bvtDder$ contains can be labeled as $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. We notice that nothing forbids $ T\approx\vlone $. \subsection{Full standardization} \label{subsection:Full standardization} \paragraph{\textnormal{\textsf{Seq}}\xspace-number of $ \mbox{$\mathsf{ai}\!\!\downarrow$} $.} Let $ \bvtPder $ be $ \vlderivation{ \vldd{\bvtDder'}{\BV\mathsf{Q}\xspace} {R}{ \vlin{\mathsf{ai}\downarrow}{(*)} {S\vlsbr[\atma;\natma]}{ \vlpd{\bvtPder'}{\BV\mathsf{Q}\xspace} {S\vlscn{\vlone}}}}} $. The \textnormal{\textsf{Seq}}\xspace-number of $ (*) $ counts the number of occurrences of \textnormal{\textsf{Seq}}\xspace inside $ S\vlhole $ to the left-hand side of the redex $ \vlsbr[\atma;\natma] $ of $ (*) $. Of course, the \textnormal{\textsf{Seq}}\xspace-number equals zero whenever we can relabel $ (*) $ as $ \mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$} $. Otherwise, it is greater than zero. \begin{proposition}[\textbf{\textit{Commuting conversions in $ \BV\mathsf{Q}\xspace $}}] \label{proposition:Big-step commuting conversions in BVT} Let $ \bvtPder $ be $ \vlderivation{ \vldd{\bvtDder'}{\BV\mathsf{Q}\xspace} {R}{ \vlin{\mathsf{ai}\downarrow}{(*)} {S \vlsbr<T;S'[\atma;\natma]>}{ \vlpd{\bvtPder'}{\BV\mathsf{Q}\xspace} {S \vlsbr<T;S'\,\vlscn{\vlone}>}}}} $ with $ (*) $ an instance of $ \mbox{$\mathsf{ai}\!\!\downarrow$} $ that we cannot label as $ \mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$} $. There is $\vlupsmash{ \vlderivation{ \vldd{\bvtDder''}{\BV\mathsf{Q}\xspace} {R}{ \vlin{\mathsf{ai}\downarrow}{(**)} {S''\vlsbr[\atma;\natma]}{ \vlpd{\bvtPder''}{\BV\mathsf{Q}\xspace} {S''\,\vlscn{\vlone}}}}} }$ without $ (*) $, and such that the \textnormal{\textsf{Seq}}\xspace-number of $ (**) $ is strictly smaller than the \textnormal{\textsf{Seq}}\xspace-number of $ (*) $ in $ \bvtPder $. \end{proposition} \begin{proof} Corollary of Splitting (Point~\ref{enum:Splitting-seq} of Theorem~\ref{theorem:Splitting-ALT} in \cite{Roversi:unpub2012-I}), which, we recall, proves that the up-fragment of $ \SBV\mathsf{Q}\xspace $ is admissible for $ \BV\mathsf{Q}\xspace $. (Details in Appendix~\ref{section:Proof of proposition:Big-step commuting conversions in BVT}.) \end{proof} \paragraph{The fragment ${\BV\mathsf{Q}\llcorner}\xspace$.} We define ${\BV\mathsf{Q}\llcorner}\xspace$ as the set of rules $ (\BV\mathsf{Q}\xspace\setminus\Set{\mbox{$\mathsf{ai}\!\!\downarrow$}})\cup \Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}}$. So, $ {\BV\mathsf{Q}\llcorner}\xspace $ contains \standard\ derivation s only, by definition. 
Comparing $ {\BV\mathsf{Q}\llcorner}\xspace $ with the set $ \Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}} $ of the previous subsection, whose derivations we know how to make standard, we see that we can use both $ \bvtswirulein $, and $ \mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$} $, but not $ \mbox{$\mathsf{ai}\!\!\downarrow$} $. \begin{theorem}[\textbf{\textit{Completeness of $ {\BV\mathsf{Q}\llcorner}\xspace $ w.r.t.\xspace $ \BV\mathsf{Q}\xspace $}}] \label{theorem:Partial completeness of BVTL wrt BVT} For every proof $ \bvtInfer{\bvtPder} {\ \bvtJudGen{\BV\mathsf{Q}\xspace} {} R} $, there exists a standard\xspace proof $ \bvtInfer{\mathcal{E} } {\ \bvtJudGen{{\BV\mathsf{Q}\llcorner}\xspace} {} R} $. \end{theorem} \begin{proof} By induction on the sum $ s $ of \textnormal{\textsf{Seq}}\xspace-numbers of those $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in $ \bvtPder$ which cannot be labeled as $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. Of course, if $s=0$ then $ \bvtPder $ is already in $ {\BV\mathsf{Q}\llcorner}\xspace $, because it only contains instances of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, if any. If $ s>0 $, then we stepwise decrease $s$ by iteratively applying Proposition~\ref{proposition:Big-step commuting conversions in BVT}. \end{proof} \subsection{Standardization} \label{subsection:Standardization} We reorganize derivations of $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}\subset\BV\mathsf{Q}\xspace$ which operate on \dfn{\textnormal{\textsf{Tensor}}\xspace-free structures} only. \paragraph{\textnormal{\textsf{Tensor}}\xspace-free structures.} By definition, $ R $ in $ \BV\mathsf{Q}\xspace $ is \dfn{\textnormal{\textsf{Tensor}}\xspace-free} whenever it does not contain $\vlsbr(R_1;\vldots;R_n)$, for any $R_1,\ldots,R_n$, and $n>1$. \paragraph{Our goal} is to prove the following theorem, inspired by the standardization in \cite{Brus:02:A-Purely:wd}: \begin{theorem}[\textbf{\textit{Standardization in $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}$}}] \label{theorem:Standardization in bvtatrdrulein...} Let $T$, and $R$ be \textnormal{\textsf{Tensor}}\xspace-free. For every $\bvtInfer{\bvtDder} {T \bvtJudGen{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{ai}\downarrow,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {} R}$, there is a \standard\ derivation\ $\bvtInfer{\mathcal{E}} {T \bvtJudGen{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{q}\downarrow,\mathsf{u}\downarrow}}{} R}$. \end{theorem} \par\vspace{\baselineskip}\noindent Its proof relies on the coming lemmas, and proposition. \begin{lemma}[\textbf{\textit{Existence of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$}}] \label{lemma:Existence of bvtatrdrulein} The topmost instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in a proof $\bvtInfer{\bvtPder} {\ \bvtJudGen{\BV\mathsf{Q}\xspace}{} R}$ is always an instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. 
\end{lemma} \begin{proof} Let $\bvtPder$ be $ \vlderivation{ \vlde{}{\BV\mathsf{Q}\xspace} {R}{ \vlin{\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}}{} {S\vlsbr[\atma;\natma]}{ \vlpr{\mathcal{Q}}{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {S\vlscn{\vlone}}}}} $ with $\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}$ its topmost instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ which cannot be relabeled as $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. By contradiction, let us assume that $S\vlhole$ is a non right-context, namely $S\vlhole \approx S'\vlsbr<T;S''\vlhole>$ for some $S'\vlhole,S''\vlhole$, and $T$ such that $T\not\approx\vlone$. In this case, in order for the names of $T$, and, maybe, those of $S''\,\vlscn{\vlone}$, to disappear from $ \vlderivation{ \vlin{\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}}{} {S'\vlsbr<T;S''[\atma;\natma]>}{ \vlpr{\mathcal{Q}}{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {S'\vlsbr<T;S''\,\vlscn{\vlone}>}}} $ we would have to apply at least one instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$, which would occur in $\mathcal{Q}$, against our assumption on the position of $\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}$. \end{proof} \begin{lemma}[\textbf{\textit{Commuting conversions in $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}$}}] \label{lemma:bvtatidrulein commuting conversions} Let $R, T$, and $S\vlscn{\vlone}$ be \textnormal{\textsf{Tensor}}\xspace-free. Also, let $\rho\in\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}$. Finally, let $\bvtDder$ be $\ \vlderivation{ \vlin{\mathsf{ai}\downarrow^{\bullet}}{} {R}{ \vlin{\rho}{} {\vlholer{S\vlsbr[\atma;\natma]}}{ \vlhy{T}}}} $, where $\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}$ is the topmost occurrence of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ which is not $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. Then, there is $ \vlderivation{ \vlde{\bvtDder} {\Set{\mathsf{at}\downarrow\llcorner,\mathsf{ai}\downarrow,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {R}{ \vlin{\mathsf{ai}\downarrow^{*}}{} {V}{ \vlhy{T}}}} $, where $V$, and all the structures of $\bvtDder$ are \textnormal{\textsf{Tensor}}\xspace-free, and $\mbox{$\mathsf{ai}\!\!\downarrow$}^{*}$ may be an instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. \end{lemma} \begin{proof} The proof is, first, by cases on $\rho$, and, then, by cases on $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread$. Once $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread$ is fixed, the proof is by cases on $R$, which must contain a redex of $\mbox{$\mathsf{ai}\!\!\downarrow$}, \mbox{$\mathsf{q}\!\!\downarrow$}$, or $\mbox{$\mathsf{u}\!\!\downarrow$}$ that, after $\mbox{$\mathsf{ai}\!\!\downarrow$}^{\bullet}$, leads to the chosen $\vlstore{\vlholer{S\vlsbr[\atma;\natma]}}\vlread$. (Appendix~\ref{section:Proof of lemma:bvtatidrulein commuting conversions}.) 
\end{proof} \begin{proposition}[\textbf{\textit{One-step standardization in $ \Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}} $}}] \label{proposition:One-step standardization of BVTCC} Let $ \vlderivation{ \vlde{\bvtDder'}{} {R}{ \vlin{\mathsf{ai}\downarrow^\bullet}{} {U}{ \vlde{\bvtDder''}{} {V}{ \vlhy{T}}}}} $ be a derivation in $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{ai}\!\!\downarrow$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}$ such that $\mbox{$\mathsf{ai}\!\!\downarrow$}^\bullet$ is the topmost instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$. There exists a derivation $\bvtInfer{\mathcal{E}} {T \bvtJudGen{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{ai}\downarrow,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {} R}$ where $\mbox{$\mathsf{ai}\!\!\downarrow$}^\bullet$ has been moved upward so as to transform it into an instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. \end{proposition} \begin{proof} Let $n$ be the number of rules in $\bvtDder''$. If $U\approx \vlstore{\vlsbr[\atma;\natma]}\vlholer{S\vlread}$, with $ \vlsbr[\atma;\natma] $ the redex of $\mbox{$\mathsf{ai}\!\!\downarrow$}^\bullet$, then $\mbox{$\mathsf{ai}\!\!\downarrow$}^\bullet$ is already an instance of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, and we are done. Otherwise, we can apply Lemma~\ref{lemma:bvtatidrulein commuting conversions} moving $\mbox{$\mathsf{ai}\!\!\downarrow$}^\bullet$ one step upward, getting to $\bvtInfer{\mathcal{E}} {T \bvtJudGen{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{ai}\downarrow,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {} R}$, where $\mbox{$\mathsf{ai}\!\!\downarrow$}^\bullet$ is no more than $n-1$ rules away from $T$. An obvious inductive argument allows us to conclude thanks to Lemma~\ref{lemma:Existence of bvtatrdrulein}. \end{proof} \paragraph{Proof of Theorem~\ref{theorem:Standardization in bvtatrdrulein...}.} Let $X_{\bvtDder}$ be the set of all instances of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in $\bvtDder$ that can be directly seen as instances of $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$, and $Y_{\bvtDder}$ the set of all other instances of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in $\bvtDder$. If $Y_{\bvtDder}=\emptyset$ we are done because $\mathcal{E}$ is $\bvtDder$ where every instance of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in $X_{\bvtDder}$, if any, can be directly relabeled as $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$. Otherwise, let us pick the topmost occurrence of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ in $\bvtDder$ out of $Y_{\bvtDder}$, and apply Proposition~\ref{proposition:One-step standardization of BVTCC} to it. We get $\bvtInfer{\mathcal{E}} {T \bvtJudGen{\Set{\mathsf{at}\downarrow\llcorner,\mathsf{ai}\downarrow,\mathsf{q}\downarrow,\mathsf{u}\downarrow}} {} R}$, whose set $Y_{\mathcal{E}}$ is strictly smaller than $Y_{\bvtDder}$. An obvious inductive argument allows us to conclude. \paragraph{Standard fragment ${\BV\mathsf{Q}\llcorner}\xspace$ of $\BV\mathsf{Q}\xspace$.} After Theorem~\ref{theorem:Standardization in bvtatrdrulein...} it is sensible to define ${\BV\mathsf{Q}\llcorner}\xspace$ as $\Set{\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$},\mbox{$\mathsf{q}\!\!\downarrow$},\mbox{$\mathsf{u}\!\!\downarrow$}}\subset\BV\mathsf{Q}\xspace$ whose derivations contain \textnormal{\textsf{Tensor}}\xspace-free structures only. 
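\par\vspace{\baselineskip}\noindent The inductive arguments behind Theorem~\ref{theorem:Partial completeness of BVTL wrt BVT}, and Theorem~\ref{theorem:Standardization in bvtatrdrulein...} share the same shape: repeatedly apply a one-step transformation that strictly decreases a natural-number measure, namely the sum of \textnormal{\textsf{Seq}}\xspace-numbers in the first case, and the number of instances of $\mbox{$\mathsf{ai}\!\!\downarrow$}$ not yet labeled as $\mbox{$\mathsf{at}\!\!\downarrow\!\!\llcorner$}$ in the second. The following Haskell fragment is only a sketch of that shape, under the assumption that derivations, the measure, and the one-step transformation are given abstractly; none of these names come from the paper.
\begin{verbatim}
-- Sketch of the shape of the inductive arguments above: iterate a
-- one-step transformation that strictly decreases a natural measure.
-- 'deriv' is an abstract type of derivations; the measure (e.g. the sum
-- of Seq-numbers of the offending ai-down instances) and the step
-- (e.g. the one-step standardization of the Proposition above) are
-- parameters of the sketch, not definitions taken from the paper.
standardizeBy :: (deriv -> Int)    -- measure: 0 means "already standard"
              -> (deriv -> deriv)  -- one step, assumed strictly decreasing
              -> deriv
              -> deriv
standardizeBy measure oneStep = go
  where
    go d
      | measure d == 0 = d                -- nothing left to relabel
      | otherwise      = go (oneStep d)   -- the measure strictly decreases,
                                          -- so the iteration terminates
\end{verbatim}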
\section{Recalling the systems $\SBV\mathsf{Q}\xspace$ and $\BV\mathsf{Q}\xspace$} \label{section:Systems SBVT and BVT} We briefly recall $ \SBV\mathsf{Q}\xspace $, and $\BV\mathsf{Q}\xspace $ from \cite{Roversi:unpub2012-I}. \paragraph{Structures.} Let $\atma, b, c, \ldots$ denote the elements of a countable set of \dfn{positive propositional variables}. Let $\natma, \vlne\atmb, \vlne\atmc, \ldots$ denote the elements of a countable set of \dfn{negative propositional variables}. The set of \dfn{names}, which we range over by $\atmLabL, \mathfrak{m}$, and $\mathfrak{n}$, contains both positive, and negative propositional variables, and nothing else. Let $\vlone$ be a constant, different from any name, which we call \dfn{unit}. The set of \dfn{atoms} contains both names and the unit, while the set of \dfn{structures} identifies formulas of $\mathsf{SBV}\xspace$. Structures belong to the language of the grammar in~\eqref{fig:BVT-structures}. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{fig:BVT-structures} \input{./BV2-structures} \end{equation} \end{minipage} \par\vspace{\baselineskip}\noindent We use $ R, T, U, V$ to range over structures, in which $\vlne{R}$ is a \textnormal{\textsf{Not}}\xspace, $\vlrobrlR\vlteT\vlrobrr$ is a \textnormal{\textsf{CoPar}}\xspace, $\vlsbr<R;T>$ is a \textnormal{\textsf{Seq}}\xspace, $\vlsqbrlR\vlpaT\vlsqbrr$ is a \textnormal{\textsf{Par}}\xspace, and $\vlfo{\atma}{R}$ is a self-dual quantifier \textnormal{\textsf{Sdq}}\xspace, which comes with the proviso that $\atma$ must be a positive atom. Namely, $\vlfo{\natma}{R}$ is not in the syntax. \textnormal{\textsf{Sdq}}\xspace\ induces obvious notions of \dfn{free}, and \dfn{bound names} \cite{Roversi:unpub2012-I}. \paragraph{Size of the structures.} The \dfn{size} $\Size{R}$ of $R$ is the number of occurrences of atoms in $R$ plus the number of occurrences of \textnormal{\textsf{Sdq}}\xspace\ that effectively bind an atom. For example, $\vlstore{\vlsbr[\atma;\natma]}\Size{\vlread} = \vlstore{\vlfo{b}{\vlsbr[\atma;\natma]}}\Size{\vlread}=2$, while $\vlstore{\vlfo{\atma}{\vlsbr[\atma;\natma]}} \Size{\vlread}=3$. \paragraph{(Structure) Contexts.} We denote them by $S\vlhole$. A context is a structure with a single hole $\vlhole$ in it. If $S\vlscn{R}$, then $R$ is a \dfn{substructure} of $S$. We shall tend to shorten $\vlstore{\vlsbr[R;U]}S\vlscn{\vlread}$ as $S{\vlsbr[R;U]}$ when ${\vlsbr[R;U]}$ fills the hole $\vlhole$ of $S\vlhole$ exactly. \paragraph{Congruence $\approx$ on structures.} Structures are partitioned by the smallest congruence $\approx$ we obtain as reflexive, symmetric, transitive and contextual closure of the relation $\sim$ whose defining clauses are \eqref{align:negation-atom}, through \eqref{align:alpha-symm} here below. \par\vspace{\baselineskip}\noindent \fbox{ \begin{minipage}{.974\linewidth} {\small \input{./BV2-structure-equivalences} \vspace{-.5cm} \end{minipage} \par\vspace{\baselineskip}\noindent \emph{Contextual closure} means that ${S{\vlscn{R}}} \approx {S{\vlscn{T}}}$ whenever $R \approx T$. Thanks to \eqref{align:alpha-symm}, we abbreviate $ \vlfo{\atma_n}{\vldots\vlfo{\atma_1}{R}\vldots} $ as $ \vlfo{\vec{\atma}}{R}$, where we may also interpret $ \vec{\atma} $ as one of the permutations of $ \atma_1, \ldots, \atma_n $. 
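\par\vspace{\baselineskip}\noindent As a concrete, executable reading of the grammar in~\eqref{fig:BVT-structures}, and of the size function defined above, the following Haskell sketch represents structures as a datatype, and computes $\Size{R}$ as the number of atom occurrences plus the number of occurrences of \textnormal{\textsf{Sdq}}\xspace that effectively bind an atom. It is an illustration only: the constructor, and function names are ours, the congruence $\approx$ is not modeled, and we read ``effectively binds'' as ``the bound name, or its negation, occurs free in the scope of the quantifier''.
\begin{verbatim}
-- Illustrative sketch of structures and of their size (names are ours).
type Name = String

data Str
  = One              -- the unit
  | PosAtm Name      -- a positive propositional variable  a
  | NegAtm Name      -- a negative propositional variable  a-bar
  | Not Str          -- Not
  | Seq Str Str      -- Seq   <R;T>
  | Par Str Str      -- Par   [R;T]
  | CoPar Str Str    -- CoPar (R;T)
  | Sdq Name Str     -- self-dual quantifier, binding a positive name
  deriving (Eq, Show)

-- Names occurring free in a structure.
freeNames :: Str -> [Name]
freeNames One         = []
freeNames (PosAtm a)  = [a]
freeNames (NegAtm a)  = [a]
freeNames (Not r)     = freeNames r
freeNames (Seq r t)   = freeNames r ++ freeNames t
freeNames (Par r t)   = freeNames r ++ freeNames t
freeNames (CoPar r t) = freeNames r ++ freeNames t
freeNames (Sdq a r)   = filter (/= a) (freeNames r)

-- |R| = atom occurrences + effectively binding occurrences of Sdq.
size :: Str -> Int
size One         = 1   -- the unit is an atom by the definition above
size (PosAtm _)  = 1
size (NegAtm _)  = 1
size (Not r)     = size r
size (Seq r t)   = size r + size t
size (Par r t)   = size r + size t
size (CoPar r t) = size r + size t
size (Sdq a r)   = size r + (if a `elem` freeNames r then 1 else 0)

-- The examples from the text:
--   size (Par (PosAtm "a") (NegAtm "a"))           == 2
--   size (Sdq "b" (Par (PosAtm "a") (NegAtm "a"))) == 2
--   size (Sdq "a" (Par (PosAtm "a") (NegAtm "a"))) == 3
\end{verbatim}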
\paragraph{\Canonical\ structure s.} \label{paragraph:Structures in canonical form} We draw inspiration from the normal forms of \cite{Gugl:06:A-System:kl} to define structures in \dfn{canonical form} inside $\SBV\mathsf{Q}\xspace$. \Canonical\ structure s will be used to define environment structure s (Section~\ref{section:How computing in CCSR by means of BVTCC}, page~\pageref{paragraph:Environmentstructure s}.) A structure $R$ is \emph{canonical} when either it is the unit $\vlone$, or the following four conditions hold: (i) the only negated structures appearing in $R$ are negative propositional variables, (ii) no unit $\vlone$ appears in $R$, but at least one name occurs in it, (iii) the nesting of occurrences of \textnormal{\textsf{Par}}\xspace, \textnormal{\textsf{Tensor}}\xspace, \textnormal{\textsf{Seq}}\xspace, and \textnormal{\textsf{Sdq}}\xspace builds a right-recursive syntax tree of $R$, and (iv) no occurrence of \textnormal{\textsf{Sdq}}\xspace can be eliminated from $R$ while maintaining the equivalence. \begin{example}[\textbf{\textit{\Canonical\ structure s}}] \label{example:Canonical structures} The structure $\vlstore{\vlsbr[(\natma;\vlne\atmb);\vlfo{c}{\vlne\atmc}]}\vlread$ is not canonical, but it is equivalent to the canonical\ one $\vlstore{\vlsbr[\natma;(\vlne\atmb;\vlfo{c}{\vlne\atmc})]}\vlread$ whose syntax tree is right-recursive. Other non \canonical\ structure s are $\vlstore{\vlsbr[\atma;(\vlone;b)]}\vlne{\vlread}$, and $\vlsbr(\vlne{\vlread};<\vlone;\vlne\atmb>)$, and $\vlstore{\vlsbr[\natma;(\vlne\atmb;\vlfo{d}{\vlne\atmc})]}\vlread$. The first two are equivalent to $\vlsbr(\natma;\vlne\atmb)$ which, instead, is canonical. Finally, $\vlstore{\vlsbr[\atma;\vlone]}\vlread$ is not canonical either; it is equivalent to the canonical\ one $\atma$. \end{example} \begin{fact}[\textbf{\textit{Normalization to \canonical\ structure s}}] \label{fact:Structures normalize to canonical forms} Given a structure $R$: (i) negations can move inward to atoms, and, possibly, disappear, thanks to \eqref{align:negation-atom}, \ldots, \eqref{align:negation-fo}, (ii) units can be removed thanks to \eqref{align:unit-co}, \ldots, \eqref{align:unit-pa}, and (iii) brackets can move rightward by \eqref{align:assoc-co}, \ldots, \eqref{align:assoc-pa}. \par So, for every $R$ we can take the equivalent \canonical\ structure\ which is either $\vlone$, or different from $\vlone$. \end{fact} \paragraph{The system $\SBV\mathsf{Q}\xspace$.} It contains the set of inference rules in \eqref{fig:SBVT} here below. Every rule has the form $\vldownsmash{\vlinf{\rho}{}{R}{T}}$, with \dfn{name} $\rho$, \dfn{premise} $T$, and \dfn{conclusion} $R$. \par\vspace{\baselineskip}\noindent \fbox{ \begin{minipage}{.974\linewidth} {\small \begin{equation} \label{fig:SBVT} \input{./SBV2-system} \end{equation} \end{minipage} \paragraph{Derivations vs. proofs.} A \dfn{derivation} in $\SBV\mathsf{Q}\xspace$ is either a structure, or an instance of the above rules, or a sequence of two derivations. Both $\bvtDder$, and $\mathcal{E}$ will range over derivations. The topmost structure in a derivation is its \dfn{premise}. The bottommost is its \dfn{conclusion}. The \dfn{length} $\Size{\bvtDder}$ of a derivation $\bvtDder$ is the number of rule instances in $\bvtDder$. A derivation $\bvtDder$ of a structure $R$ in $\SBV\mathsf{Q}\xspace$ from a structure $T$ in $\SBV\mathsf{Q}\xspace$, only using a subset $\mathsf{B}\xspace\subseteq\SBV\mathsf{Q}\xspace$, is $\vlderivation { \vlde{\bvtDder}{\mathsf{B}\xspace}{R}{ \vlhy {T}}} $. 
The equivalent \emph{space-saving} form is $\bvtInfer{\bvtDder}{T \bvtJudGen{\mathsf{B}\xspace}{}R}$. The derivation $\vlupsmash{ \vlderivation { \vlde{\bvtDder}{\mathsf{B}\xspace}{R}{ \vlhy {T}}}}$ is a \dfn{proof} whenever $T\approx \vlone$. We denote it as $\vlupsmash{ \vlderivation { \vlde{\bvtPder}{\mathsf{B}\xspace}{R}{ \vlhy {\vlone}}}}$, or $\vlupsmash{\vlproof{\bvtPder}{\mathsf{B}\xspace}{R}}$, or $\bvtInfer{\bvtPder}{\ \bvtJudGen{\mathsf{B}\xspace}{}R}$. Both $\bvtPder$, and $\mathcal{Q}$ will range over proofs. In general, we shall drop $\mathsf{B}\xspace$ when clear from the context. In a derivation, we write $ \vliqf{\rho_1,\ldots,\rho_m,n_1,\ldots,n_p}{}{R}{T} $, whenever we use the rules $\rho_1,\ldots,\rho_m$ to derive $R$ from $T$ with the help of $n_1,\ldots,n_p$ instances of \eqref{align:negation-atom}, \ldots, \eqref{align:symm-co}. To avoid cluttering derivations, whenever possible, we shall tend to omit the use of negation axioms \eqref{align:negation-atom}, \ldots, \eqref{align:negation-fo}, associativity axioms \eqref{align:assoc-co}, \eqref{align:assoc-se}, \eqref{align:assoc-pa}, and symmetry axioms \eqref{align:symm-pa}, \eqref{align:symm-co}. This means we avoid writing all brackets, as in $\vlsbr[R;[T;U]]$, in favor of $\vlsbr[R;T;U]$, for example. Finally if, for example, $q>1$ instances of some axiom $(n)$ of \eqref{align:negation-atom}, \ldots, \eqref{align:alpha-symm} occur among $n_1,\ldots,n_p$, then we write $(n)^q$. \paragraph{\dfn{Up} and \dfn{down} fragments of $\SBV\mathsf{Q}\xspace$.} The set $\Set{\mbox{$\mathsf{ai}\!\!\downarrow$}, \bvtswirulein, \mbox{$\mathsf{q}\!\!\downarrow$}, \mbox{$\mathsf{u}\!\!\downarrow$}}$ is the \dfn{down fragment} $\BV\mathsf{Q}\xspace$ of $\SBV\mathsf{Q}\xspace$. The \dfn{up fragment} is $\Set{\mbox{$\mathsf{ai}\!\!\uparrow$},\bvtswirulein, \mbox{$\mathsf{q}\!\!\uparrow$}, \mbox{$\mathsf{u}\!\!\uparrow$}}$. So $\bvtswirulein$ belongs to both. \begin{corollary}[\cite{Roversi:TLCA11,Roversi:unpub2012-I}] \label{theorem:Admissibility of the up fragment} The up-fragment $\Set{\mbox{$\mathsf{ai}\!\!\uparrow$}, \mbox{$\mathsf{q}\!\!\uparrow$}, \mbox{$\mathsf{u}\!\!\uparrow$}}$ of $\SBV\mathsf{Q}\xspace$ is admissible for $\BV\mathsf{Q}\xspace$. This means that we can transform any proof $\bvtInfer{\bvtPder}{\ \bvtJudGen{\SBV\mathsf{Q}\xspace}{}R}$ into a proof $\bvtInfer{\mathcal{Q}}{\ \bvtJudGen{\BV\mathsf{Q}\xspace}{}R}$ free of every occurrence of rules that belong to the up-fragment of $\SBV\mathsf{Q}\xspace$. \end{corollary} \begin{remark} Thanks to Corollary~\ref{theorem:Admissibility of the up fragment}, we shall always focus on the down-fragment $\BV\mathsf{Q}\xspace$ of $\SBV\mathsf{Q}\xspace$. \end{remark} \section{Final discussion, and future work} \label{section:Final discussion, and future work} This work shows that $\BV\mathsf{Q}\xspace$ \cite{Roversi:2010-LLCexDI,Roversi:TLCA11,Roversi:unpub2012-I}, which we can consider as a minimal extension of $\mathsf{BV}\xspace$ \cite{Gugl:06:A-System:kl}, is expressive enough to model concurrent and communicating computations, as expressed by the language $\CCSR$, whose logic-based restriction can hide actions from the environment in an unusually flexible way, as compared to the restriction of Milner $ \mathsf{CCS}\xspace $. The reason why, at various points, we have kept relating $\CCSR$ to a fragment of Milner $\mathsf{CCS}\xspace$ is twofold. 
First, we start from the programme of~\cite{Brus:02:A-Purely:wd}, which shows the connections between $\mathsf{BV}\xspace$ and the smallest meaningful fragment of Milner $\mathsf{CCS}\xspace$. Second, it is evident that we can define ${\BV\mathsf{Q}^{-}}\xspace$ as follows. We take $\BV\mathsf{Q}\xspace\setminus\Set{\mbox{$\mathsf{u}\!\!\downarrow$}}$ and we forbid clauses \eqref{align:alpha-intro}, and~\eqref{align:alpha-varsub} on its structures. So defined, ${\BV\mathsf{Q}^{-}}\xspace$ would be very close to the fragment of Milner $ \mathsf{CCS}\xspace $, which we have called $\CCSRM$, and which only contains restriction, and both sequential, and parallel composition. The reason is that ${\BV\mathsf{Q}^{-}}\xspace$ could simulate the two standard rules for restriction: {\small \[ \vlinf{} {\pincLabL\not\in\Set{\pinca,\pincna}} {\pincLTSJud{\pincNu{\pinca}{\pincE}} {\pincNu{\pinca}{\pincE'}} {\pincLabL}} {\pincLTSJud{\pincE} {\pincE'} {\pincLabL}} \qquad\qquad \vlinf{} {\pincLabL\in\Set{\pinca,\pincna}} {\pincLTSJud{\pincNu{\pinca}{\pincE}} {\pincNu{\pinca}{\pincE'}} {\pincLabT}} {\pincLTSJud{\pincE} {\pincE'} {\pincLabL}} \] } but not the rules $\pincpi$, and $\pincpe$ in~\eqref{equation:PPi-LTS-from-BVT}. In fact, \textnormal{\textsf{Sdq}}\xspace looks much closer to the hiding operator $\pincNuPi{\pinca}{\pincE}$ of $\pi$-calculus \cite{SangiorgiWalker01}. Clause~\eqref{align:alpha-symm} ``is'' $\pincNuPi{\pinca}{\pincNuPi{b}{\pincE}}\approx\pincNuPi{b}{\pincNuPi{\pinca}{\pincE}}$. Clause~\eqref{align:alpha-intro} generalizes $\pincNuPi{\pinca}{\pincZer}\approx\pincZer$. The instance: {\small \vlstore{ \vlderivation{ \vliq{\eqref{align:alpha-intro},\mathsf{u}\downarrow}{} {\vlsbr[\vlfo{\pinca}{\pincE};F]}{ \vlhy{\vlfo{\pinca}{\vlsbr[\pincE;F]}}}} } \begin{equation} \label{equation:monodirectional-scope-extrusion} \vlread \end{equation} } weakly corresponds to scope extrusion $\pincNuPi{\pinca}{(\pincPar{\pincE}{F})}\approx\pincPar{\pincNuPi{\pinca}{\pincE}}{F}$ which holds, in both directions, whenever $\pinca$ is not free in $F$. We postpone the study of semantics and of the relation between $\CCSR$, and the corresponding fragment of $\pi$-calculus, to future work. \par Further future work that we see as interesting concerns the generalization of Soundness. We believe that a version of Soundness with no restriction to simple process es holds. The reason is twofold. First, thanks to the Splitting theorem of $\BV\mathsf{Q}\xspace$ \cite{Roversi:2010-LLCexDI,Roversi:TLCA11,Roversi:unpub2012-I} it is possible to prove that every \emph{proof} of $\BV\mathsf{Q}\xspace$ can be transformed into a standard\xspace\ proof of $\BV\mathsf{Q}\xspace$. So, there is no need to restrict to \textnormal{\textsf{Tensor}}\xspace-free derivations of $ \BV\mathsf{Q}\xspace $ in order to have standard\xspace\ proofs. Second, the reduction process seems to work on standard\xspace\ proofs as well, and no obstacle seems to exist to the application of inductive arguments analogous to those we have used to prove our current Soundness. \par We conclude with a remark on the ``missing'' Completeness. Our readers may have noticed the lack of any reference to a Completeness of $\BV\mathsf{Q}\xspace$ w.r.t.\xspace $\CCSR$. Completeness would say that $\BV\mathsf{Q}\xspace$ has enough derivations to represent any computation in the labeled transition system\ of $ \CCSR $. 
Formally, it would amount to: \begin{theorem}[\textbf{\textit{Completeness of $\BV\mathsf{Q}\xspace$}}] \label{theorem:Completeness of BVT} For every process structure\ $\pincE$, and $ F $, if $\pincLTSJud{\mapPincToDi{\pincE}} {\mapPincToDi{F}} {\mapDiToPinc{R}{\emptyset}}$, then $\vlstore{ \vlsbr[\pincE;R] }\bvtInfer{\bvtDder} { F \bvtJudGen{\BV\mathsf{Q}\xspace}{} \vlread }$. \end{theorem} We leave the proof of Theorem~\ref{theorem:Completeness of BVT} as an exercise. The system $\BV\mathsf{Q}\xspace$ is so flexible that proving it complete amounts to showing that every rule of $\CCSR$ is derivable in $\BV\mathsf{Q}\xspace$. \section{Introduction} \label{section:Introduction} This is a work in structural proof-theory which builds on \cite{Brus:02:A-Purely:wd,Roversi:2010-LLCexDI,Roversi:TLCA11,Roversi:unpub2012-I}. Broadly speaking, we aim at using structural proof theory to study primitives of paradigmatic programming languages, and to give evidence that some are the natural ones, while others, which we might be used to think of as ``given once and for all'', can, in fact, be refined or generalized. In our case, this means continuing to develop the programme in \cite{Brus:02:A-Purely:wd}, whose goal is establishing a correspondence between proof-search of a logical system, and computations in a process algebra. From \cite{Brus:02:A-Purely:wd}, we already know that both (i) sequential composition of Milner $\mathsf{CCS}\xspace$ \cite{Miln:89:Communic:qo} gets modeled by the non-commutative logical operator \textnormal{\textsf{Seq}}\xspace of $\mathsf{BV}\xspace$ \cite{Gugl:06:A-System:kl}, which is the paradigmatic calculus of structures in Deep Inference\xspace, and (ii) parallel composition of Milner $\mathsf{CCS}\xspace$ gets modeled by the commutative logical operator \textnormal{\textsf{Par}}\xspace of $\mathsf{BV}\xspace$ so that communication becomes logical annihilation. This is done under a logic-programming analogy. It says that the terms of a calculus $\mathcal{C}$ --- which is a fragment of Milner $ \mathsf{CCS}\xspace $ in the case of \cite{Brus:02:A-Purely:wd} --- correspond to formulas of a logical system $ \mathcal{L} $ --- which is $ \mathsf{BV}\xspace $ in the case of \cite{Brus:02:A-Purely:wd} ---, and that computations inside $\mathcal{C}$ are recast as searching for cut-free\xspace proofs in $\mathcal{L}$, as summarized in~\eqref{equation:introduction-non-curry-howard-correspondence} here below. \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \begin{equation} \label{equation:introduction-non-curry-howard-correspondence} \input{./introduction-non-curry-howard-correspondence} \end{equation} \end{minipage} \vspace{\baselineskip}\par\noindent \paragraph{Contributions.} We show that in~\eqref{equation:introduction-non-curry-howard-correspondence} we can take $\BV\mathsf{Q}\xspace$ \cite{Roversi:2010-LLCexDI,Roversi:TLCA11,Roversi:unpub2012-I} for $\mathcal L$, and $ \CCSR $ for $\mathcal C$. The system $\BV\mathsf{Q}\xspace$ extends $\mathsf{BV}\xspace$ with a self-dual quantifier, while $\CCSR$ is introduced by this work (Section~\ref{section:Communicating, and concurrent processes with restriction}). The distinguishing aspect of $\CCSR$ is its operational semantics, which subsumes that of the fragment of Milner $\mathsf{CCS}\xspace$ that contains sequential, parallel, and restriction operators, and which we identify as $\CCSRM$. 
Specifically, the self-dual quantifier of $\CCSR$ allows us to relax the operational semantics of the restriction operator in $\CCSRM$ without leading to an inconsistent calculus of processes. This is a direct consequence of (the analogue of) the cut-elimination\xspace property for $\BV\mathsf{Q}\xspace$ \cite{Roversi:2010-LLCexDI,Roversi:TLCA11,Roversi:unpub2012-I}. \par The main step that allows us to take $\BV\mathsf{Q}\xspace$ for $\mathcal L$, and $\CCSR$ for $\mathcal C$ is proving Soundness of $ \BV\mathsf{Q}\xspace $ with respect to $ \CCSR $ (Section~\ref{section:Soundness of BVTCC}). The following example helps explain what Soundness amounts to. Let us suppose we want to observe what the following judgment describes: \par\vspace{\baselineskip}\noindent {\small \fbox{ \begin{minipage}{.974\linewidth} \vspace{-.1cm} \begin{equation} \label{equation:intro-example-00-term-to-compute} \input{./intro-example-00-term-to-compute} \end{equation} \end{minipage} \vspace{\baselineskip}\par\noindent The process $\pincSec{\pinca}{\pincSec{b}{\pincE}} $ can perform actions $\pinca$, and $b$, in this order, before entering $ \pincE $. The other process can perform $\pincna$ before entering $F$. In particular, $ \pincSec{\pinca}{\pincSec{b}{\pincE}} $, and $ \pincSec{\pincna}{F} $ internally communicate when simultaneously firing $\pinca$, and $\pincna$. In any case, firing on $\pinca$, or $\pincna$, would remain private because of the outermost restriction $\pincNu{\pinca}{\,\cdot\,} $ which hides both $\pinca$, and $\pincna$ from the environment\footnote{ We write something related to Milner $\mathsf{CCS}\xspace$. Indeed, hiding both $\pinca$, and $\pincna$ in Milner $\mathsf{CCS}\xspace$ is $\pincNu{\Set{\pinca,\pincna}}{\,\cdot\,}$.}. The action $b$ is always observable because $b$ differs from $\pinca$. Of course, we might describe one of the possible dynamic evolutions of~\eqref{equation:intro-example-00-term-to-compute} thanks to a suitable labeled transition system\ able to develop a derivation like~\eqref{equation:intro-example-00}: \par\vspace{\baselineskip}\noindent \fbox{ \begin{minipage}{.974\linewidth} {\small \vspace{-.3cm} \begin{equation} \label{equation:intro-example-00} \input{./intro-example-00} \end{equation} \end{minipage} \vspace{\baselineskip}\par\noindent Soundness says that instead of rewriting $\pincSec{\pinca}{\pincSec{b}{\pincE}}$ to $\pincSec{\pincna}{F}$, as in~\eqref{equation:intro-example-00}, we can (i) compile the whole judgment $\pincLTSJud{\pincNu{\pinca} {(\pincPar{(\pincSec{\pinca} {\pincSec{b} {\pincE} }) } {\pincSec{\pincna} {F} }) } } {\pincNu{\pinca}{(\pincPar{\pincE}{F})}} {b}$ to a structure, say $R$, of $\BV\mathsf{Q}\xspace$, and (ii) search for a cut-free\xspace proof, say $\bvtPder$, of $R$, and (iii) if $\bvtPder$ exists, then Soundness assures that~\eqref{equation:intro-example-00-term-to-compute} holds. So, in general, Soundness recasts the reachability problem ``Is it true that $\pincLTSJudShort{\pincE}{F}{\pincalpha}$?'' as a problem of proof search. Noticeably, the Soundness we prove poses weaker constraints on the form of $F$ than those we find in the Soundness of~\cite{Brus:02:A-Purely:wd}. Specifically, only the silent process $\pincZer$ can be the target of the reachability problem in~\cite{Brus:02:A-Purely:wd}. Here, $F$ can belong to the set of \emph{simple process es} which contains $\pincZer$. 
Intuitively, every simple process\ different from $\pincZer$ is normal with respect to internal communication, but is still alive with respect to external communications. Finally, from a technical standpoint, our proof of Soundness is neatly decomposed into steps that make it reusable for further extensions of both $\BV\mathsf{Q}\xspace$, and $\CCSRM$. \paragraph{Road map.} Section~\ref{section:Systems SBVT and BVT} recalls $\BV\mathsf{Q}\xspace$ and its symmetric version $ \SBV\mathsf{Q}\xspace $ mainly from \cite{Roversi:unpub2012-I}. Section~\ref{section:Standardization inside BVT} is about two proof-theoretical properties of $\BV\mathsf{Q}\xspace$ which were not proved in \cite{Roversi:2010-LLCexDI,Roversi:TLCA11,Roversi:unpub2012-I} but which Soundness relies on. The first one says that every \textnormal{\textsf{Tensor}}\xspace-free derivation of $\BV\mathsf{Q}\xspace$ has at least one corresponding \emph{standard\xspace} one. The second one supplies sufficient conditions for a structure of $\BV\mathsf{Q}\xspace$ to be invertible, somewhat internalizing derivability of $\BV\mathsf{Q}\xspace$. Section~\ref{section:Communication core of BVT} has the pedagogical aim of showing, with many examples, why the derivations of $ \BV\mathsf{Q}\xspace $ embody a computational meaning. Section~\ref{section:Communicating, and concurrent processes with restriction} introduces $\CCSR$, namely the process calculus that $\BV\mathsf{Q}\xspace$ embodies. Section~\ref{section:How computing in CCSR by means of BVTCC} first formalizes the connections between $\BV\mathsf{Q}\xspace$, and $\CCSR$. Then it shows how computations inside the labeled transition system\ of $\CCSR$ are recast as proof-search inside $\BV\mathsf{Q}\xspace$, justifying the need to prove Soundness. Section~\ref{section:Soundness of BVTCC} proves Soundness, starting with a pedagogical overview of what proving it means. Section~\ref{section:Final discussion, and future work} points to future work, mainly focused on $\CCSR$. 
\newcommand{\mbox{$\mathsf{i}\!\!\uparrow$}}{\mbox{$\mathsf{i}\!\!\uparrow$}} \newcommand{\mbox{$\mathsf{q}\!\!\downarrow$}}{\mbox{$\mathsf{q}\!\!\downarrow$}} \newcommand{\bvtseqintin}[2]{[#1\vlpa#2]} \newcommand{\mbox{$\mathsf{q}\!\!\uparrow$}}{\mbox{$\mathsf{q}\!\!\uparrow$}} \newcommand{\mbox{$\mathsf{ai}\!\!\downarrow$}}{\mbox{$\mathsf{ai}\!\!\downarrow$}} \newcommand{\mbox{$\mathsf{ai}\!\!\uparrow$}}{\mbox{$\mathsf{ai}\!\!\uparrow$}} \newcommand{\bvtswirulein }{\mathsf{s}} \newcommand{\mbox{$\mathsf{p}\!\!\downarrow$}}{\mbox{$\mathsf{p}\!\!\downarrow$}} \newcommand{\mbox{$\mathsf{p}\!\!\uparrow$}}{\mbox{$\mathsf{p}\!\!\uparrow$}} \newcommand{\mbox{$\mathsf{w}\!\!\downarrow$}}{\mbox{$\mathsf{w}\!\!\downarrow$}} \newcommand{\mbox{$\mathsf{w}\!\!\uparrow$}}{\mbox{$\mathsf{w}\!\!\uparrow$}} \newcommand{\mathsf{sub}}{\mathsf{sub}} \newcommand{\bvtsubsRL}{\bvtorerule\mathsf{RL}} \newcommand{\bvtsubsLL}{\bvtorerule\mathsf{LL}} \newcommand{\bvtsubsLR}{\bvtorerule\mathsf{LR}} \newcommand{\mathsf{beta}}{\mathsf{beta}} \newcommand{\bvtslrulein }{\mathsf{s-}\!\lambda} \newcommand{\mathsf{s\!-\!@l}}{\mathsf{s\!-\!@l}} \newcommand{\mathsf{s\!-\!@r}}{\mathsf{s\!-\!@r}} \newcommand{\mathsf{s\!-\!var}}{\mathsf{s\!-\!var}} \newcommand{\bvtDder}{\mathcal{D}} \newcommand{\mathcal{E}}{\mathcal{E}} \newcommand{\bvtPder}{\mathcal{P}} \newcommand{\mathcal{Q}}{\mathcal{Q}} \newcommand{\bvtInfer}[2]{#1: #2} \newcommand{\bvtJudGen}[2]{\vdash_{#1 }^{#2 }} \newcommand{\bvtJud}[1]{\vdash_{\BV\mathsf{Q}\xspace}^{#1 }} \newcommand{\vdash}{\vdash} \newcommand{\mapCCSToDI}[1]{ \llbracket #1\rrbracket} \newcommand{o}{o} \newcommand{p}{p} \newcommand{q}{q} \newcommand{r}{r} \newcommand{s}{s} \newcommand{t}{t} \newcommand{\vlne\atmo}{\vlneo} \newcommand{\vlne\atmp}{\vlnep} \newcommand{\vlne\atmq}{\vlneq} \newcommand{\vlne\atmr}{\vlner} \newcommand{\vlne\atms}{\vlnes} \newcommand{\vlne\atmt}{\vlnet}
\section{Introduction}\label{s:introduction} Let $M$ be an $n$-dimensional polytope in $\R^n$, triangulated by a simplicial complex $\T_h$ of maximal simplex diameter $h$, which we orient by fixing an order for the vertices. (Although we restrict ourselves to polytopes for simplicity, several of the results below can easily be extended to triangulated Riemannian manifolds.) We denote by $\Lambda^k=\Lambda^k(M)$ the space of smooth differential $k$-forms on $M$. The Euclidean inner product restricted to $M$ determines the Hodge star operator $\Lambda^k\to \Lambda^{n-k}$, and the inner product on $\Lambda^k$ given by $\< u,v\>=\int u\wedge \star v$. The space $L^2\Lambda^k$ is the completion of $\Lambda^k$ with respect to this norm, i.e., the space of differential $k$-forms with coefficients in $L^2$. We then define $H\Lambda^k$ to be the space of forms $u$ in $L^2\Lambda^k$ whose exterior derivative $d u$, which may be understood in the sense of distributions, belongs to $L^2\Lambda^{k+1}$. These spaces combine to form the $L^2$ de~Rham complex $$ 0\to H\Lambda^0\xrightarrow{d} H\Lambda^1\xrightarrow{d}\cdots\xrightarrow{d} H\Lambda^n\to 0. $$ Viewing the exterior derivative $d$ as an unbounded operator $L^2\Lambda^k$ to $L^2\Lambda^{k+1}$ with domain $H\Lambda^k$, we may define its adjoint $d^*$. Thus a differential $k$-form $u$ belongs to the domain of $d^*$ if the operator $v\mapsto \<u,d v\>_{L^2\Lambda^k}$ is bounded on $L^2\Lambda^{k-1}$, and then $$ \<d^* u,v\>_{L^2\Lambda^{k-1}} = \<u,d v\>_{L^2\Lambda^k}, \quad v\in H\Lambda^{k-1}. $$ In particular, every $u$ which is smooth and supported in the interior of $M$ belongs to the domain of $d^*$ and $d^* u= (-1)^{k(n-k+1)}\star d\star u$. Let $\Delta_k(\T_h)$ denote the set of $k$-dimensional simplices of $\T_h$. We denote by $C_k(\T_h)$ the space of formal linear combinations of elements of $\Delta_k(\T_h)$ with real coefficients, the space of $k$-chains, and by $C^k(\T_h)=C_k(\T_h)^*$ the space of $k$-cochains. The coboundary maps $d^c:C^k(\T_h)\to C^{k+1}(\T_h)$ then determine the cochain complex. The \emph{de~Rham map} $R_h$ maps $\Lambda^k$ onto $C^k(\T_h)$ taking a differential $k$-form $u$ to the cochain \begin{equation}\label{dRm} R_hu: C_k(\T_h)\to \R, \quad c\mapsto \int_c u. \end{equation} The canonical basis for $C^k(\T_h)$ consists of the cochains $a_\tau$, $\tau\in\Delta_k(\T_h)$, where $a_\tau$ takes the value $1$ on $\tau$ and zero on the other elements of $\Delta_k(\T_h)$. The associated \emph{Whitney form} is given by $$ W_h a_\tau = k!\sum_{i=0}^k (-1)^i \lambda_i\, d\lambda_0\wedge\cdots\wedge\widehat{d\lambda_i}\wedge\cdots\wedge d\lambda_k, $$ where $\lambda_0,\ldots,\lambda_k$ are the piecewise linear basis functions associated to the vertices of the simplex listed, i.e., $\lambda_i$ is the continuous piecewise linear function equal to $1$ at the $i$th vertex of $\tau$ and vanishing at all the other vertices of the triangulation. The span of $W_ha_\tau$, $\tau\in\Delta_k(\T_h)$, defines the space of $\Lambda^k_h$ of Whitney $k$-forms. Its elements are piecewise affine differential $k$-forms which belong to $H\Lambda^k$ and satisfy $d \Lambda^k_h\subset \Lambda^{k+1}_h$. Thus the Whitney forms comprise a finite-dimensional subcomplex of the $L^2$ de~Rham complex called the Whitney complex: $$ 0\to \Lambda^0_h\xrightarrow{d} \Lambda^1_h\xrightarrow{d}\cdots\xrightarrow{d} \Lambda^n_h\to 0. 
$$
The \emph{Whitney map} $W_h$ maps $C^k(\T_h)$ isomorphically onto $\Lambda^k_h$ and satisfies
\begin{equation}\label{ccm}
W_hd^c c = d W_h c, \quad c\in C^k(\T_h),
\end{equation}
i.e., is a cochain isomorphism of the cochain complex onto the Whitney complex. Although Whitney $k$-forms need not be continuous, each has a well-defined trace on the simplices in $\Delta_k(\T_h)$, so the de~Rham map \eqref{dRm} is defined for $u\in\Lambda^k_h$. The Whitney map is a one-sided inverse of the de~Rham map: $R_hW_h c = c$ for $c\in C^k(\T_h)$. The reverse composition $\pi_h=W_hR_h:\Lambda^k\to \Lambda^k_h$ defines the \emph{canonical projection} into $\Lambda^k_h$.
In \cite{dodziuk} and \cite{dodziuk-patodi}, Dodziuk and Patodi defined an inner product on cochains by declaring the Whitney map to be an isometry:
\begin{equation}\label{isom}
\< a,b\> = \<W_h a, W_h b\>_{L^2\Lambda^k}, \quad a,b\in C^k(\T_h).
\end{equation}
They then used this inner product to define the adjoint $\delta^c$ of the coboundary:
\begin{equation}\label{cochainadj}
\<\delta^c a,b\> = \<a, d^c b\>, \quad a,b\in C^k(\T_h).
\end{equation}
Since the coboundary operator $d^c$ may be viewed as a combinatorial version of the differential operator of the de~Rham complex, its adjoint $\delta^c$ may be viewed as a combinatorial codifferential, and together they define the combinatorial Laplacian on cochains given by
$$
\Delta^c= d^c\delta^c+\delta^c d^c:C^k(\T_h)\to C^k(\T_h).
$$
The work of Dodziuk and Patodi concerned the relation between the eigenvalues of this combinatorial Laplacian and those of the Hodge Laplacian.
Dodziuk and Patodi asked whether the combinatorial codifferential $\delta^c$ is a consistent approximation of $d^*$ in the sense that if we have a sequence of triangulations $\T_h$ with maximum simplex diameter tending to zero and satisfying some regularity restrictions, then
\begin{equation}\label{cochainconsist}
\lim_h \|W_h \delta^c R_hu - d^* u\|=0,
\end{equation}
for sufficiently smooth $u\in \Lambda^k$ belonging to the domain of $d^*$. Here and henceforth the norm $\|\,\cdot\,\|$ denotes the $L^2$ norm.
Since $C^k(\T_h)$ and $\Lambda^k_h$ are isometric, we may state this question in terms of Whitney forms, without invoking cochains. Define the Whitney codifferential $d^*_h:\Lambda^k_h\to \Lambda^{k-1}_h$ by
\begin{equation}\label{adj}
\<d^*_h u, v\>_{L^2\Lambda^{k-1}} = \<u,d v\>_{L^2\Lambda^k}, \quad u \in \Lambda^k_h,\ v\in \Lambda^{k-1}_h.
\end{equation}
Combining \eqref{ccm}, \eqref{isom}, and \eqref{cochainadj}, we see that $d^*_h = W_h\delta^c W_h^{-1}$. Therefore, $W_h\delta^c R_h = d^*_h\pi_h$, and the question of consistency becomes whether
\begin{equation}\label{consist}
\lim_h \|d^*_h\pi_hu - d^* u\|=0,
\end{equation}
for smooth $u$ in the domain of $d^*$.
In Appendix~II of \cite{dodziuk-patodi}, the authors suggest a counterexample to \eqref{consist} for $1$-forms (i.e., $k=1$) on a two-dimensional manifold, but, as pointed out by Smits \cite{smits}, the example is not valid, and the question has remained open. Smits himself considered the question, remaining in the specific case of $1$-forms on a two-dimensional manifold, and restricting himself to a sequence of triangulations obtained by \emph{regular standard subdivision}, meaning that the triangulation is refined by dividing each triangle into four similar triangles by connecting the midpoints of the edges, resulting in a piecewise uniform sequence of triangulations. See Figure~\ref{f:puniform} for an example.
In this case, Smits proved that \eqref{cochainconsist} or, equivalently, \eqref{consist} holds. Smits's result leaves open various questions. Does the consistency of the $1$-form codifferential on regular meshes in two dimensions extend to
\begin{itemize}
\item Mesh sequences which are not obtained by regular standard subdivision?
\item More than two dimensions?
\item The combinatorial codifferential on $k$-forms with $k>1$?
\end{itemize}
In this paper we show that the answer to the second question is affirmative, but the answers to the first and third are negative. More precisely, in Section~\ref{s:counterexample} we present a simple counterexample to consistency for a quadratic $1$-form on the sequence of triangulations shown in Figure~\ref{f:crisscross}. While these meshes are not obtained by regular standard subdivision, they may be obtained by another systematic subdivision process, \emph{standard subdivision}, as defined by Whitney in \cite[Appendix II, \S~4]{whitney}. Next, in Section~\ref{s:superconv}, we recall a definition of \emph{uniform} triangulations in $n$-dimensions which was formulated in the study of superconvergence of finite element methods, and we use the superconvergence theory to extend Smits's result on the consistency of the combinatorial codifferential on $1$-forms to $n$-dimensions, for triangulations that are uniform or piecewise uniform. In Section~\ref{s:experiments}, we provide computational confirmation of these results, both positive and negative. Finally, in Section~\ref{s:2-forms}, we numerically explore the case of $2$-forms in three dimensions and find that the combinatorial codifferential is inconsistent, even for completely uniform mesh sequences.
\section{A counterexample to consistency}\label{s:counterexample}
We take as our domain $M$ the square $(-1,1)\x (-1,1)\subset \R^2$, and as initial triangulation the division into four triangles obtained via drawing the two diagonals. We refine a triangulation by subdividing each triangle into four using standard subdivision. In this way we obtain the sequence of \emph{crisscross triangulations} shown in Figure~\ref{f:crisscross}, with the $m$th triangulation consisting of $4^m$ isosceles right triangles. We index the triangulation by the diameter of its elements, so we denote the $m$th triangulation by $\T_h$ where $h=4/2^m$. Using this triangulation, the authors of \cite{DMR91} showed that superconvergence does not hold for piecewise linear Lagrange elements.
\begin{figure}[htb]
\centerline{%
\begin{tabular}{cc}
\includegraphics[width=1.2in]{figures/unit_square_4_triangles_standard_subdivision/mesh0.png} &
\includegraphics[width=1.2in]{figures/unit_square_4_triangles_standard_subdivision/mesh1.png} \\
\includegraphics[width=1.2in]{figures/unit_square_4_triangles_standard_subdivision/mesh2.png} &
\includegraphics[width=1.2in]{figures/unit_square_4_triangles_standard_subdivision/mesh3.png}
\end{tabular}}
\caption{$\T_2$, $\T_1$, $\T_{1/2}$, $\T_{1/4}$, the first four crisscross triangulations.}\label{f:crisscross}
\end{figure}
Define $p:M\to\R$ by $p(x,y)=x-x^3/3$ and let $u=dp=(1-x^2)dx\in \Lambda^1(M)$. Now for $q\in H\Lambda^0(M)$ (i.e., the Sobolev space $H^1(M)$), we have
$$
\star dq = \star\left(\frac{\partial q}{\partial x} dx + \frac{\partial q}{\partial y} dy\right) = \frac{\partial q}{\partial x} dy - \frac{\partial q}{\partial y} dx,
$$
so
$$
\< u, dq\>_{L^2\Lambda^1}=\int_M u\wedge\star dq = \int_M (1-x^2)\frac{\partial q}{\partial x}\,dx\,dy = \int_M 2x q\,dx \,dy = \<2x,q\>_{L^2\Lambda^0}.
$$
Thus $u$ belongs to the domain of $d^*$ and $d^*u = 2x$. As an alternative verification, we may identify $1$-forms and vector fields. Then $u$ corresponds to the vector field $(1-x^2,0)$ which has vanishing normal component on $\partial M$, and so belongs to the domain of $d^*=-\div$ and $d^*u = -\div(1-x^2,0)=2x$.
Set $w_h =d_h^*\pi_hu.$ Now $w_h\in\Lambda^0_h$, i.e., it is a continuous piecewise linear function. The projections $\pi_h$ into the Whitney forms form a cochain map, so $\pi_hu = \pi_h dp = d \pi_hp=\grad \pi_hp$, where $\pi_h p$ is the piecewise linear interpolant of $p$. Thus $w_h\in \Lambda^0_h$ is determined by the equations
\begin{equation}\label{dlap}
\int_M w_h q\,dx\,dy = \int_M \grad \pi_h p\cdot \grad q\, dx\,dy, \quad q\in \Lambda^0_h.
\end{equation}
It turns out that we can give the solution to this problem explicitly. Since $w_h$ is a continuous piecewise linear function, it is determined by its values at the vertices of the triangulation $\T_h$. The coordinates of the vertices are integer multiples of $h/2$. In fact the value of $w_h$ at a vertex $(x,y)$ depends only on $x$ and for $h\le 1$ is given by
$$
w_h(x,y) = \begin{cases} -h, & x=-1, \\ 0, & -1<x<1, \ \text{$x$ a multiple of $h$},\\ h, & x = 1,\\ -6+2h, & x=-1+h/2,\\ 6x, & -1 + h/2 < x < 1-h/2, \ \text{$x$ an odd multiple of $h/2$},\\ 6-2h, & x = 1-h/2. \end{cases}
$$
A plot of the piecewise linear function $w_h$ is shown in Figure~\ref{f:cplot} for $h = 1/2$. To verify the formula it suffices to check \eqref{dlap} for all piecewise linear functions $q$ that vanish on all vertices except one. There are several cases depending on how close the vertex is to the boundary, and the computation is tedious, but elementary. Here we only give the details when the vertex is $(x,y)$ with $-1+h/2 <x<1-h/2$ and $x$ is an odd multiple of $h/2$.
To this end, let $q$ be the piecewise linear function that is one at the vertex $(x,y)$ and vanishes on all the remaining vertices. In this case, the support of $q$ is the union of the four triangles $T_1, T_2, T_3, T_4$ that have $(x,y)$ as a vertex (see Figure \ref{f:triangles}). According to the formula, in the support of $q$, one has $w_h= 6\, x \, q$. A simple calculation then shows that the left-hand side of \eqref{dlap} is
\begin{equation*}
\int_M w_h q\,dx\,dy = 6\, x \sum_{i=1}^4 \int_{T_i} q^2 dx\, dy= 4\,x\,m,
\end{equation*}
where $m=h^2/4=|T_i|$ for any $i$. To calculate the right-hand side of \eqref{dlap} for this $q$, we calculate that
$$
\grad q = \frac{2}{h} \begin{cases} (1, 0), & \text{ on } T_1, \\ (0,1), & \text{ on } T_2,\\ (-1, 0), & \text{ on } T_3,\\ (0,-1), & \text{ on } T_4, \end{cases}
$$
and
$$
\grad \pi_h p= \frac{2}{h} \begin{cases} (p(x)-p(x-\frac{h}2), 0), & \text{ on } T_1, \\ (\frac{1}{2}[p(x+\frac{h}{2})-p(x-\frac{h}{2})],p(x)-\frac{1}{2}[p(x+\frac{h}{2})+p(x-\frac{h}{2})]), & \text{ on } T_2,\\ (p(x+\frac{h}{2})-p(x), 0), & \text{ on } T_3,\\ (\frac{1}{2}[p(x+\frac{h}{2})-p(x-\frac{h}{2})],\frac{1}{2}[p(x+\frac{h}{2})+p(x-\frac{h}{2})]-p(x)), & \text{ on } T_4. \end{cases}
$$
Hence,
\begin{equation*}
\begin{split}
\int_M \grad \pi_h p\cdot \grad q\, dx\,dy &= \sum_{i=1}^4 \int_{T_i} \grad \pi_h p\cdot \grad q\, dx\,dy \\
&= \frac{16} {h^2} (p(x)-\frac{1}{2}[p(x-\frac{h}{2})+p(x+\frac{h}{2})]) m =4 \, x m.
\end{split}
\end{equation*}
This verifies \eqref{dlap} for this piecewise linear function $q$.
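For readers who wish to double-check the algebra, the elementary identity underlying this verification can be reproduced symbolically. The following SymPy snippet is an illustration added here, not part of the argument; it checks that $p(x)-\frac{1}{2}[p(x+\frac{h}{2})+p(x-\frac{h}{2})]=xh^2/4$ for $p(x)=x-x^3/3$, and hence that both sides of \eqref{dlap} evaluate to $4xm$ for the hat function $q$ considered above (using $\int_{T_i}q^2\,dx\,dy=|T_i|/6$ for such a $q$).
\begin{verbatim}
# Symbolic check of the identity behind (dlap): for p(x) = x - x^3/3,
#   p(x) - [p(x+h/2) + p(x-h/2)]/2 = x h^2 / 4 = x m,
# so both sides of (dlap) equal 4 x m for the hat function q above.
import sympy as sp

x, h = sp.symbols('x h', real=True)
p = lambda s: s - s**3 / 3
m = h**2 / 4                                # area |T_i| of each triangle

second_diff = p(x) - (p(x + h/2) + p(x - h/2)) / 2
rhs = 16 / h**2 * second_diff * m           # right-hand side of (dlap)
lhs = 6 * x * 4 * (m / 6)                   # left-hand side: 6x * sum_i int_{T_i} q^2

print(sp.simplify(second_diff - x*h**2/4))  # prints 0
print(sp.simplify(rhs - 4*x*m))             # prints 0
print(sp.simplify(lhs - 4*x*m))             # prints 0
\end{verbatim}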
\begin{figure}[htb] \centerline{% \setlength{\unitlength}{.8in} \begin{picture}(2.4,2.4) \linethickness{.5pt} \put(0, 0){\line(1, 0){2}} \put(0, 0){\line(0, 1){2}} \qbezier(0, 0)(1,1)(2,2) \qbezier(2, 0)(1,1)(0,2) \put(2, 0){\line(0, 1){2}} \put(0, 2){\line(1, 0){2}} \put(1.1, .95){$(x,y)$} \put(-.8,-.2){$(x-h/2,y-h/2)$} \put(1.2,2.1){$(x+h/2,y+h/2)$} \put(.3, .9){$T_1$} \put(.9, .3){$T_2$} \put(1.7, .9){$T_3$} \put(.9, 1.6){$T_4$} \end{picture}} \vspace{10pt} \caption{The support of the piecewise linear function $q$.}\label{f:triangles} \end{figure} \begin{figure}[htb] \centerline{% \begin{tabular}{cccc} \includegraphics[width=3in]{figures/counterexample.png} \end{tabular}} \caption{The spiked surface is the graph of the piecewise linear function $w_h=d^*_h \pi_h f$ for $h=1/4$. The plane is the graph of the linear function $d^*u$.}\label{f:cplot} \end{figure} Finally, we note that, since $w_h$ essentially oscillates between $6x$ and $0$, it does not converge in $L^2$ to $d^*u$ (or to anything else) as $h$ tends to zero. \section{Consistency for $1$-forms on piecewise uniform meshes}\label{s:superconv} We continue to consider a sequence of triangulations $\T_h$ indexed by a positive parameter $h$ tending to $0$. We take $h$ to be equivalent to the maximal simplex diameter $$ c h \le \max_{T\in\Delta_n(\T_h)}\diam T \le C h, $$ for some positive constants $C,c$ independent of $h$ (throughout we denote by $C$ and $c$ generic constants, not necessarily the same in different occurrences). We also assume that the sequence of triangulations is \emph{shape regular} in the sense that there exists $c>0$ such that $$ \rho(T) \ge c \diam T, $$ for all $T\in\T_h$ and all $h$, where $\rho(T)$ is the diameter of the ball inscribed in $T$. We begin with some estimates for the approximation of a $k$-form by an element of $\Lambda^k_h$. For this we need to introduce the spaces of differential forms with coefficients in a Sobolev space. Let $m$ be a non-negative integer and $u$ a $k$-form defined on a domain $M\subset\R^n$, which we may expand as \begin{equation}\label{kform} u= \sum_{1\le i_1<\cdots<i_k\le n} u_{i_1\cdots i_k}\,dx^{i_1}\wedge\cdots\wedge dx^{i_k}. \end{equation} Using multi-index notation for partial derivatives of the coefficients $u_{i_1\cdots i_k}$, we define the $m$th Sobolev norm and seminorm by \begin{align*} \|u\|_{H^m\Lambda^k}^2 &= \sum_{1\le i_1<\cdots<i_k\le n}\ \sum_{|\alpha|\le m} \|D^\alpha u_{i_1\cdots i_k}\|_{L^2(M)}^2, \\ |u|_{H^m\Lambda^k}^2 &= \sum_{1\le i_1<\cdots<i_k\le n}\ \sum_{|\alpha|= m} \|D^\alpha u_{i_1\cdots i_k}\|_{L^2(M)}^2, \end{align*} and define the space $H^m\Lambda^k(M)$ to consist of all $k$-forms in $M$ for which the Sobolev norm $\|u\|_{H^m\Lambda^k}$ is finite. With this notation, we can state the basic approximation result that for any shape regular sequence of triangulations there is a constant $C$ such that \begin{equation}\label{e:H1approx} \inf_{v\in \Lambda^k_h} \|u-v\| \le C h \|u\|_{H^1\Lambda^k}, \quad u\in H^1\Lambda^k(M). \end{equation} For a proof, see \cite[Theorem~5.8]{afw-bull}. Since $H^1\Lambda^k$ is dense in $L^2\Lambda^k$, this implies that \begin{equation}\label{approx} \dist(f,\Lambda^k_h) := \inf_{v\in \Lambda^k_h}\|f-v\| \to 0 \text{ as $h\to0$}, \quad f\in L^2\Lambda^k(M). \end{equation} In addition to the best approximation estimate \eqref{e:H1approx}, we also need an $O(h)$ estimate on the projection error $\|u-\pi_h u\|$. 
For this we require more regularity of $u$, since $\pi_h u$ is defined in terms of traces of $u$ on $k$-dimensional faces, which need not be defined on $H^1\Lambda^k$. \begin{lem} Let $\{\T_h\}$ be a shape regular sequence of triangulations of $M\subset\R^n$ and $k$ an integer between $0$ and $n$. Let $\ell$ be the smallest integer so that $\ell>(n-k)/2$. Then there exists a constant $C$, depending only on $n$ and the shape regularity constant, such that \begin{equation}\label{proj-estimate} \|\pi_h u-u\|_{L^2\Lambda^k} \le C \, \sum_{m=1}^{\ell} h^m|u|_{H^m\Lambda^k}, \quad u\in H^\ell\Lambda^k(M). \end{equation} \end{lem} \begin{proof} First we note that the canonical projection is defined simplex by simplex, as $$ (\pi_h u)|_T = \pi_T(u|_T), $$ where, for $v$ a $k$-form on $T$, $\pi_T v$ is its interpolant into the space of Whitney forms on the single simplex $T$. Therefore, it is enough to prove that \begin{equation}\label{etp} \|u-\pi_T u\|_{L^2\Lambda^k(T)} \le C \, \sum_{m=1}^{\ell} h^m|u|_{H^m\Lambda^k(T)}, \quad u\in H^\ell\Lambda^k(T), \end{equation} with the constant $C$ depending on $T$ only through its shape constant. We prove this first for the unit right simplex in $\R^n$, $\hat T$, with vertices at the origin and the $n$ points $(1,0,\ldots,0)$, $(0,1,0,\ldots)$, \dots. Since $\ell>(n-k)/2$, we obtain, by the Sobolev embedding theorem, that $\|\pi_{\hat T} u\|_{L^2\Lambda^k(\hat T)}\le C\|u\|_{H^\ell\Lambda^k(\hat T)}$, and so, by the triangle inequality, $$ \|u-\pi_{\hat T} u\|_{L^2\Lambda^k(\hat T)}\le C\|u\|_{H^\ell\Lambda^k(\hat T)}. $$ Now let $\bar u = n!\int_{\hat T}u$, a constant $k$-form on $\hat T$ equal to the average of $u$. Then $\pi_{\hat T}\bar u = \bar u$, so \begin{multline*} \|u-\pi_{\hat T} u\|_{L^2\Lambda^k(\hat T)} = \|(u-\bar u)-\pi_{\hat T} (u-\bar u)\|_{L^2\Lambda^k(\hat T)} \\ \le C \|u-\bar u\|_{H^\ell\Lambda^k(\hat T)} \le C(\|u-\bar u\|_{L^2\Lambda^k(\hat T)} +\sum_{m=1}^\ell |u|_{H^m\Lambda^k(\hat T)}), \end{multline*} where we have used the fact that $\bar u$ is a constant form, so its $m$th Sobolev seminorm vanishes for $m\ge 1$. Now we invoke Poincar\'e's inequality $$ \|u-\bar u\|_{L^2\Lambda^k(\hat T)}\le C|u|_{H^1\Lambda^k(\hat T)}. $$ Putting things together, and writing $\hat u$ instead of $u$, we have shown that \begin{equation}\label{hat} \|\hat u-\pi_{\hat T} \hat u\|_{L^2\Lambda^k(\hat T)}\le C\sum_{m=1}^\ell |\hat u|_{H^m\Lambda^k(\hat T)}, \quad \hat u \in H^\ell\Lambda^k(\hat T). \end{equation} This is the desired result \eqref{etp} in the case $T=\hat T$. To obtain the result for a general simplex, we scale via an affine diffeomorphism $F:\hat T\to T$. If $u$ is the $k$-form on $T$ given by \eqref{kform}, then \begin{equation}\label{pullback} F^* u = \sum_{\{1\le i_1<\cdots <i_k\le n\}}\ \sum_{j_1,\ldots,j_k=1}^n (u_{i_1\cdots i_k}\circ F) \pd{F^{i_1}}{\hat x^{j_1}}\cdots\pd{F^{i_k}}{\hat x^{j_k}}\, d\hat x^{j_1}\wedge\cdots\wedge d\hat x^{j_k}. \end{equation} Each of the partial derivatives $\partial F^{i_p}/\partial \hat x^{j_q}$ is a constant bounded by $h$. Using the chain rule and change of variables in the integration, we find that \begin{equation}\label{scaling} c|F^*u|_{H^m\Lambda^k(\hat T)} \le (\operatorname{vol}{T})^{-1/2} \, h^{m+k} |u|_{H^m\Lambda^k(T)} \le C|F^*u|_{H^m\Lambda^k(\hat T)}, \end{equation} where the constants $c$ and $C$ depend only on $m$ and $n$ and the shape regularity constant of $T$. 
Combining \eqref{hat} and \eqref{scaling} we get \begin{multline*} \|u-\pi_T u\|_{L^2\Lambda^k(T)} \le C(\operatorname{vol}{T})^{1/2} h^{-k}\|\hat u - \pi_{\hat T}\hat u\|_{L^2\Lambda^k(\hat T)} \\ \le C (\operatorname{vol}{T})^{1/2}h^{-k}\sum_{m=1}^\ell |\hat u|_{H^m\Lambda^k(\hat T)} \le C\sum_{m=1}^\ell h^m|u|_{H^m\Lambda^k(T)}, \end{multline*} which establishes \eqref{etp}. \end{proof} Our approach to bounding the norm of the consistency error is to relate it to another quantity which has been studied in the finite element literature, namely \begin{equation}\label{defA} A_h(u):= \sup_{v_h\in \Lambda_h^{k-1}}\frac{\langle u-\pi_hu, d v_h \rangle}{\|v_h\|}. \end{equation} \begin{thm}\label{t:equiv} Assume the approximation property \eqref{approx}. Then, for any smooth $u\in L^2\Lambda^k$ belonging to the domain of $d^*$ we have $$ \lim_h \|d^*u-d^*_h\pi_hu\| = 0 \iff \lim_h A_h(u) =0. $$ \end{thm} This follows immediately from Lemma~\ref{t:eq}. \begin{lem}\label{t:eq} Let $1\leq k\leq n$, and let $u\in L^2\Lambda^k$ be smooth and in the domain of $d^*$. Then \begin{equation}\label{e:eq} A_h(u) \leq \|d^*u-d^*_h\pi_hu\| \leq \dist(d^*u,\Lambda_h^{k-1})+A_h(u). \end{equation} \end{lem} \begin{proof} The first inequality is straightforward. For any $v_h\in \Lambda^{k-1}_h$, \begin{equation*} \frac{\langle u-\pi_hu, d v_h \rangle}{\|v_h\|} =\frac{\langle d^*u-d^*_h\pi_hu,v_h\rangle}{\|v_h\|} \le\|d^*u-d^*_h\pi_hu\|. \end{equation*} For the second inequality, we introduce the $L^2$-orthogonal projection $P_h:L^2\Lambda^{k-1}\to \Lambda_h^{k-1}$ and invoke the triangle inequality to get \begin{equation}\label{e:triangle} \|d^*u-d^*_h\pi_hu\| \leq \|d^*u-P_hd^*u\|+\|P_hd^*u-d^*_h\pi_hu\| =\dist(d^*u,\Lambda^{k-1}_h)+\|w\|, \end{equation} where $w=P_hd^*u-d^*_h\pi_hu\in\Lambda^k_h$. Now \begin{equation} \|w\|^2 = \langle P_hd^*u-d^*_h\pi_hu,w\rangle=\langle u-\pi_hu,d w\rangle, \end{equation} and hence \begin{equation} \|w\| = \frac{\langle u-\pi_hu,d w\rangle}{\|w\|} \leq\sup_{v_h\in \Lambda_h^{k-1}}\frac{\langle u-\pi_hu, d v_h \rangle}{\|v_h\|}=A_h(u), \end{equation} which completes the proof. \end{proof} Thus we wish to bound $\langle u-\pi_h u,d v_h\rangle/\|v_h\|$ for smooth $u$ in the domain of $d^*$ and $v_h\in\Lambda^k_h$. An obvious approach is to apply the Cauchy--Schwarz inequality and then use the approximation estimate \eqref{proj-estimate} to obtain \begin{equation}\label{cs} |\langle u-\pi_h u,d v_h\rangle|\le \|u-\pi_h u\|\|d v_h\| \le Ch\|u\|_{H^\ell \Lambda^k}\|d v_h\|. \end{equation} To continue, we need to bound $\|d v_h\|/\|v_h\|$ for $v_h$ an arbitrary non-zero element of $\Lambda^k_h$. Because $\Lambda^k_h$ consists of piecewise polynomials, it is possible to bound its derivative in terms of its value using a Bernstein type inequality or inverse estimate. This gives that \begin{equation}\label{e:inv} \|d v_h\|\le C\underline h^{-1} \|v_h\|, \quad v_h\in\Lambda^k_h, \end{equation} where $\underline h = \min_{T\in\Delta_n(\T_h)} \diam T$. Unfortunately, even if we assume that our triangulations are \emph{quasiuniform}, i.e., that $\underline h \ge c h$ for some fixed $c>0$, this just leads to the bound $$ A_h(u) \le C\|u\|_{H^\ell\Lambda^k}, $$ which does not tend to zero with $h$. In fact, we cannot hope to get a bound which tends to zero without further hypotheses, since, as we have seen, even for the nice mesh sequence and form $u$ considered in the previous section, $d^*_h$ is not consistent, and so $A_h(u)$ does not tend to zero. 
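As a side observation (a direct reformulation of the definitions, recorded here because it is convenient when evaluating $A_h(u)$ numerically), note that $v_h\mapsto\langle u-\pi_hu, d v_h\rangle$ is a linear functional on the finite-dimensional space $\Lambda^{k-1}_h$, so the supremum in \eqref{defA} equals the norm of its Riesz representative:
$$
A_h(u) = \|z_h\|, \qquad\text{where } z_h\in\Lambda^{k-1}_h \text{ satisfies } \langle z_h, v_h\rangle = \langle u-\pi_hu, d v_h\rangle \text{ for all } v_h\in\Lambda^{k-1}_h.
$$
The computation in the proof of Lemma~\ref{t:eq}, applied with an arbitrary $v_h$ in place of $w$, identifies $z_h = P_hd^*u-d^*_h\pi_hu$, so that in fact $A_h(u)=\|P_hd^*u-d^*_h\pi_hu\|$.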
Nonetheless, for very special mesh sequences it is possible to improve the bound \eqref{cs} from first to second order in $h$. This was established by Brandts and K\v r\'\i\v zek in their work on gradient superconvergence \cite{BK03}. The mesh condition is embodied by the following concept. \begin{defn}[\cite{BK03}] A triangulation $\T$ on $M$ is called \emph{uniform} if there exist $n$ linearly independent vectors $e_1,\ldots,e_n$, such that \begin{enumerate} \item Every simplex in $\T$ contains an edge parallel to each $e_j$. \item If an edge $e$ is parallel to one of the $e_j$ and is not contained in $\partial M$, then the union $P_e$ of simplices containing $e$ is invariant under reflection through the midpoint $m_e$ of $e$, i.e., $2m_e-x\in P_e$ for all $x\in P_e$. \end{enumerate} \end{defn} The crisscross triangulations shown in Figure~\ref{f:crisscross} satisfy the first condition of the definition, but not the second, and so are not uniform. On the other hand, the mesh sequence that is obtained by starting from a single triangle, or from a division of a square into two triangles and applying regular standard subdivision, is uniform. See the first two rows of Figure~\ref{f:uniform}. A uniform triangulation of the cube in $n$ dimensions is obtained by subdividing it into $m^n$ subcubes, and dividing each of these into $n!$ simplices sharing a common diagonal, with all the diagonals of the subcubes chosen to be parallel. The 3D case is shown in Figure~\ref{f:uniform}. We refer to \cite{BK03} for more details. \begin{figure}[htb] \centerline{% \begin{tabular}{cccc} \includegraphics[width=1.1in]{figures/one_nonisoceles_triangle_regular_subdivision/mesh0.png} & \includegraphics[width=1.1in]{figures/one_nonisoceles_triangle_regular_subdivision/mesh1.png} & \includegraphics[width=1.1in]{figures/one_nonisoceles_triangle_regular_subdivision/mesh2.png} & \includegraphics[width=1.1in]{figures/one_nonisoceles_triangle_regular_subdivision/mesh3.png} \\[1ex] \includegraphics[width=1.1in]{figures/unit_square_2_triangles_regular_subdivision/mesh0.png} & \includegraphics[width=1.1in]{figures/unit_square_2_triangles_regular_subdivision/mesh1.png} & \includegraphics[width=1.1in]{figures/unit_square_2_triangles_regular_subdivision/mesh2.png} & \includegraphics[width=1.1in]{figures/unit_square_2_triangles_regular_subdivision/mesh3.png} \\[1ex] \includegraphics[width=1.1in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh0.png} & \includegraphics[width=1.1in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh1.png} & \includegraphics[width=1.1in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh2.png} & \includegraphics[width=1.1in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh3.png} \end{tabular}} \caption{Uniform triangulations.}\label{f:uniform} \end{figure} \begin{figure}[htb] \centerline{% \begin{tabular}{cc} \includegraphics[width=1.2in]{figures/unstructured_mesh_of_square_5_triangles_regular_subdivision/mesh0.png} & \includegraphics[width=1.2in]{figures/unstructured_mesh_of_square_5_triangles_regular_subdivision/mesh1.png} \\ \includegraphics[width=1.2in]{figures/unstructured_mesh_of_square_5_triangles_regular_subdivision/mesh2.png} & \includegraphics[width=1.2in]{figures/unstructured_mesh_of_square_5_triangles_regular_subdivision/mesh3.png} \end{tabular}} \caption{A piecewise uniform sequence of triangulations.}\label{f:puniform} \end{figure} Theorem 3.4 of \cite{BK03} claims that if $\{\T_h\}$ is a shape regular family of uniform triangulations of $M$, and if $u$ is a 
smooth $1$-form, then there exists a constant $C>0$ such that
\begin{equation*}
|\langle \pi_hu-u, d v_h \rangle| \leq C h^2\|u\|_{H^2\Lambda^1}\|d v_h\|,
\end{equation*}
for all $v_h\in \Lambda_h^0\cap \ring H^1(M)$ and $h>0$. Here $\ring H^1(M)$ denotes the space of $H^1(M)$ functions with vanishing trace on $\partial M$. However, their proof uses the inequality (cf. (1.5) of \cite{BK03})
\begin{equation*}
\|\pi_hu-u\|_{L^2\Lambda^1} \le C \, h \|u\|_{H^1\Lambda^1},
\end{equation*}
where $C$ is a constant independent of $u$. This would imply that $\pi_h$ can be continuously extended to $H^1\Lambda^1$, which is impossible for $n\geq3$. Fortunately, the proof in \cite{BK03} works verbatim if the above inequality is replaced by \eqref{proj-estimate}. Hence, the following result is essentially proved in \cite{BK03}.
\begin{thm} \label{t:sup-con}
Let $\{\T_h\}$ be a shape regular family of uniform triangulations of $M$, and let $u$ be a smooth $1$-form. Furthermore, let $\ell$ be the smallest integer so that $\ell>(n-1)/2$. Then there exists a constant $C>0$ such that
\begin{equation}\label{e:h2-bd}
|\langle \pi_hu-u, d v_h \rangle| \leq C h^2\|u\|_{H^\ell\Lambda^1}\|d v_h\|,
\end{equation}
for all $v_h\in \Lambda_h^0\cap \ring H^1(M)$ and $1>h>0$.
\end{thm}
Next we consider piecewise uniform sequences of triangulations.
\begin{defn}
A family $\T_h$ of triangulations of the polytope $M$ is called \emph{piecewise uniform} if there is a triangulation $\T$ of $M$ such that for each $h$, $\T_h$ is a refinement of $\T$, and for each $T\in\T$ and each $h$, the restriction of $\T_h$ to $T$ is uniform.
\end{defn}
If, as in \cite{smits}, we start with an arbitrary triangulation of a polygon and refine it by regular standard subdivision, the resulting sequence of triangulations is piecewise uniform. This is illustrated in Figure~\ref{f:puniform}. The following theorem shows that $d^*_h$ is consistent for $1$-forms on piecewise uniform meshes, thus generalizing the main result of \cite{smits} from $2$ to $n$ dimensions.
\begin{thm}\label{t:puniform}
Assume that the family of triangulations $\{\T_h\}$ is shape regular, quasiuniform, and piecewise uniform. Let $u\in H^\ell \Lambda^1(M)$ be a $1$-form in the domain of $d^*$, where $\ell$ is the smallest integer satisfying $\ell >(n-1)/2$. Then we have
\begin{equation}
\lim_{h\to 0}\|d^*u-d^*_h\pi_h u\| = 0.
\end{equation}
\end{thm}
\begin{proof}
Let $\T$ denote the triangulation of $M$ with respect to which the triangulations $\T_h$ are piecewise uniform. We will apply Theorem \ref{t:sup-con} to the uniform mesh sequences obtained by restricting $\T_h$ to each $T\in\T$. To this end, let $K=\bigcup_{T\in\T}\partial T$ denote the skeleton of $\T$, and set
$$
\Sigma_h = \bigcup \{\,T\in \T_h\,|\,T\cap K\ne\emptyset\,\}.
$$
We can decompose an arbitrary function $v_h\in \Lambda_h^0$ as
\begin{equation}\label{e:decomp}
v_h=w_h+\sum_{T\in\T} v_h^T,
\end{equation}
where $w_h\in\Lambda^0_h$ is supported in $\Sigma_h$ and $v^T_h\in\Lambda^0_h$ is supported in $T$. Indeed, we just take $w_h$ to coincide with $v_h$ at the vertices of the triangulation contained in $K$ and to vanish at the other vertices, while $v^T_h=v_h$ at the vertices in the interior of $T$ and vanishes at the other vertices.
Because the mesh family is shape regular and quasiuniform, there exist positive constants $C,c$ such that
$$
c\|v\|^2 \le h^n \sum_{x\in\Delta_0(\T_h)} |v(x)|^2 \le C\|v\|^2, \quad v\in\Lambda^0_h,
$$
from which we obtain the stability bound
\begin{equation}\label{e:l2-stab}
\|w_h\| + \sum_{T\in\T}\|v^T_h\| \le C\|v_h\|.
\end{equation}
Using the decomposition \eqref{e:decomp} of $v_h$ we get
\begin{equation}
\begin{split}
|\langle \pi_hu-u, d v_h \rangle| &\leq |\langle \pi_hu-u, d w_h \rangle| + \sum_{T\in\T}|\langle \pi_h u-u, d v_h^T \rangle|\\
&\leq C h\|u\|_{H^\ell\Lambda^1(\Sigma_h)}\|d w_h\| + Ch^2\sum_{T\in\T}\|u\|_{H^\ell \Lambda^1(M)}\|d v_h^T\|\\
&\leq C \|u\|_{H^\ell\Lambda^1(\Sigma_h)}\|w_h\|_{L^2(\Sigma_h)} + Ch\sum_{T\in\T}\|u\|_{H^\ell \Lambda^1(M)}\|v_h^T\|\\
&\leq C \left( \|u\|_{H^\ell \Lambda^1(\Sigma_h)} + h\|u\|_{H^\ell \Lambda^1(M)} \right) \|v_h\|,
\end{split}
\end{equation}
where we have used the Cauchy--Schwarz inequality, the projection error estimate \eqref{proj-estimate}, the second order estimate \eqref{e:h2-bd} (which holds on the uniform meshes on each $T$), the inverse estimate \eqref{e:inv}, and the $L^2$-stability bound \eqref{e:l2-stab}. Since the volume of $\Sigma_h$ goes to $0$ as $h\to0$, so does $\|u\|_{H^\ell \Lambda^1(\Sigma_h)}$. Thus $A_h(u)$ vanishes with $h$, and the desired result is a consequence of Theorem~\ref{t:equiv}.
\end{proof}
\begin{remark}
The preceding proof shows that as long as the triangulation is mostly uniform, in the sense that the volume of the defective region goes to $0$ as $h\to0$, we obtain consistency. One can also extract information on the convergence rate. For instance, using the fact that the volume of $\Sigma_h$ is $O(h)$, we obtain $\|u\|_{H^\ell \Lambda^1(\Sigma_h)}\leq C\sqrt{h}\|u\|_{C^\ell \Lambda^1}$ for $u\in C^\ell \Lambda^1(M)$.
\end{remark}
\section{Computational experiments for $1$-forms}\label{s:experiments}
In this section, we present numerical computations confirming the consistency of $d^*_h$ for $1$-forms on uniform and piecewise uniform meshes in $2$ and $3$ dimensions, and other computations confirming its inconsistency on more general meshes. The four tables in this section display the results of computations with various mesh sequences. In each case we show the maximal simplex diameter $h$, the number of simplices in the mesh, the consistency error $\|d^*_h\pi_h u - d^*u\|$, and the apparent order inferred from the ratio of consecutive errors. All computations were performed using the FEniCS finite element software library \cite{LoggMardalEtAl2012a}.
The first two tables concern the problem on the square described in Section~\ref{s:counterexample}, i.e., the approximation of $d^*u$ where $u=(1-x^2)dx$. Table~\ref{tb:unstruct-2d} shows the results when the piecewise uniform mesh sequence shown in Figure~\ref{f:puniform} is used for the discretization. Notice that the consistency error clearly tends to zero as $O(h)$.
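For concreteness, the following snippet sketches how consistency errors of this kind can be computed. It assumes the legacy DOLFIN (FEniCS) Python interface (FEniCS is the library used for the computations reported here); the particular mesh constructor, the use of a ``crossed'' \texttt{RectangleMesh} to reproduce the crisscross family of Figure~\ref{f:crisscross}, and the helper function name are assumptions of the sketch rather than a record of the code behind the tables.
\begin{verbatim}
# Minimal sketch (legacy DOLFIN/FEniCS API assumed): compute w_h = d*_h pi_h u
# for u = (1 - x^2) dx on the m-th crisscross mesh of (-1,1)^2 and return the
# L2 consistency error ||w_h - d*u|| with d*u = 2x.
from dolfin import *

def consistency_error(m):
    n = 2**(m - 1)                           # crossed n x n grid ~ m-th crisscross mesh
    mesh = RectangleMesh(Point(-1, -1), Point(1, 1), n, n, "crossed")
    V1 = FunctionSpace(mesh, "N1curl", 1)    # Whitney 1-forms
    V0 = FunctionSpace(mesh, "Lagrange", 1)  # Whitney 0-forms

    u = Expression(("1.0 - x[0]*x[0]", "0.0"), degree=2)
    u_h = interpolate(u, V1)                 # canonical projection pi_h u

    # Whitney codifferential: <w_h, q> = <pi_h u, d q> for all q in Lambda^0_h
    w, q = TrialFunction(V0), TestFunction(V0)
    w_h = Function(V0)
    solve(w*q*dx == inner(u_h, grad(q))*dx, w_h)

    return errornorm(Expression("2.0*x[0]", degree=1), w_h, "L2")

for m in range(1, 7):
    print(m, consistency_error(m))
\end{verbatim}
Replacing the mesh constructor by any other triangulation of the square (for instance a mesh read from a file) gives the corresponding column of errors for that mesh sequence.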
\begin{table}[ht] \caption{When computed using the $2$-dimensional piecewise uniform mesh sequence of Figure~\ref{f:puniform}, the consistency error tends to $0$.} \label{tb:unstruct-2d} \begin{tabular}{rrrrr} \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{triangles} & \multicolumn{1}{c}{error} & \multicolumn{1}{c}{order} \\ \hline 1 & 5.00e$-$01 & 20 & 6.25e$-$01 & \\ 2 & 2.50e$-$01 & 80 & 3.08e$-$01 & 1.02 \\ 3 & 1.25e$-$01 & 320 & 1.56e$-$01 & 0.98 \\ 4 & 6.25e$-$02 & 1,280 & 7.85e$-$02 & 0.99 \\ 5 & 3.12e$-$02 & 5,120 & 3.94e$-$02 & 1.00 \\ 6 & 1.56e$-$02 & 20,480 & 1.97e$-$02 & 1.00 \\ \hline \end{tabular} \end{table} By contrast, Table~\ref{tb:standard-2d} shows the counterexample described analytically in Section~\ref{s:counterexample}, using the mesh sequence of Figure~\ref{f:crisscross}, obtained by standard subdivision. In this case, the consistency error does not converge to zero, as is clear from the computations. \begin{table}[ht] \caption{With the mesh sequence of Figure~\ref{f:crisscross}, the consistency error does not tend to $0$.} \label{tb:standard-2d} \begin{tabular}{rrrrr} \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{triangles} & \multicolumn{1}{c}{error} & \multicolumn{1}{c}{order} \\ \hline 1 & 5.00e$-$01 & 16 & 1.15 & \\ 2 & 2.50e$-$01 & 64 & 1.50 & $-$0.38 \\ 3 & 1.25e$-$01 & 256 & 1.60 & $-$0.09 \\ 4 & 6.25e$-$02 & 1,024 & 1.62 & $-$0.02 \\ 5 & 3.12e$-$02 & 4,096 & 1.63 & $-$0.01 \\ 6 & 1.56e$-$02 & 16,384 & 1.63 & $-$0.00 \\ \hline \end{tabular} \end{table} Similar results hold in $3$ dimensions. We computed the error in $d^*_hu$ on the cube $(-1,1)^3$, where again $u$ is given by $(1-x^2)dx$. We calculated with two mesh sequences, both starting from a partition of the cube into six congruent tetrahedra, all sharing a common edge along the diagonal from $(-1,-1,-1)$ to $(1,1,1)$. We constructed the first mesh sequence by regular subdivision, yielding the meshes shown in Figure~\ref{f:meshes-3d-reg}. These are uniform meshes, and the numerical results given in Table~\ref{tb:regular-3d} clearly demonstrate consistency. For the second mesh sequence we applied standard subdivision, obtaining the sequence of structured but non-uniform triangulations shown in Figure~\ref{f:meshes-3d-std}. In this case $d^*_h$ is inconsistent. See Table~\ref{tb:standard-3d}. 
\begin{figure}[htb] \centerline{% \begin{tabular}{cc} \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh1.png} & \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh2.png} \\ \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh3.png} & \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_regular_subdivision/mesh4.png} \end{tabular}} \caption{Uniform mesh sequence in 3D, obtained by regular subdivision.} \label{f:meshes-3d-reg} \end{figure} \begin{figure}[htb] \centerline{% \begin{tabular}{cc} \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_standard_subdivision/mesh1.png} & \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_standard_subdivision/mesh2.png} \\ \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_standard_subdivision/mesh3.png} & \includegraphics[width=1.2in]{figures/unit_cube_6_tetrahedra_standard_subdivision/mesh4.png} \end{tabular}} \caption{As in 2D, the mesh sequence in 3D obtained by standard subdivision is not uniform.} \label{f:meshes-3d-std} \end{figure} \begin{table}[ht] \caption{The consistency error for $d^*_h$ on $1$-forms in $3$D tends to zero when using the uniform mesh sequence of Figure~\ref{f:meshes-3d-reg}.} \label{tb:regular-3d} \begin{tabular}{rrrrr} \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{tetrahedra} & \multicolumn{1}{c}{error} & \multicolumn{1}{c}{order} \\ \hline 1 & 1.00e+00 & 48 & 1.69e+00 &\\ 2 & 5.00e$-$01 & 384 & 9.70e$-$01 & 0.80\\ 3 & 2.50e$-$01 & 3,072 & 5.13e$-$01 & 0.92\\ 4 & 1.25e$-$01 & 24,576 & 2.63e$-$01 & 0.96\\ 5 & 6.25e$-$02 & 196,608 & 1.33e$-$01 & 0.98\\ 6 & 3.12e$-$02 & 1,572,864 & 6.69e$-$02 & 0.99\\ \hline \end{tabular} \end{table} \begin{table}[ht] \caption{The consistency error for $d^*_h$ on $1$-forms in $3$D, using the non-uniform mesh sequence of Figure \ref{f:meshes-3d-std}, does not tend to zero.} \label{tb:standard-3d} \begin{tabular}{rrrrr} \multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{tetrahedra} & \multicolumn{1}{c}{error} & \multicolumn{1}{c}{order} \\ \hline 0 & 1.00e+00 & 48 & 1.81e+00 &\\ 1 & 5.00e$-$01 & 384 & 2.71e+00 & $-$0.58\\ 2 & 2.50e$-$01 & 3,072 & 3.02e+00 & $-$0.16\\ 3 & 1.25e$-$01 & 24,576 & 3.11e+00 & $-$0.04\\ 4 & 6.25e$-$02 & 196,608 & 3.13e+00 & $-$0.01\\ \hline \end{tabular} \end{table} \section{Inconsistency for $2$-forms in $3$ dimensions}\label{s:2-forms} We have seen that for $1$-forms, $d^*_h$ is consistent if computed using piecewise uniform mesh sequences, but not with general mesh sequences. It is also easy to see that consistency holds for $n$-forms in $n$-dimensions for \emph{any} mesh sequence. This is because the canonical projection $\pi_h$ onto the Whitney $n$-forms (which are just the piecewise constant forms) is the $L^2$ orthogonal projection. Now if $v_h$ is a Whitney $(n-1)$-form, then $dv_h$ is a Whitney $n$-form, so the inner product $\<u-\pi_h u,dv_h\>=0$. Thus $A_h(u)$, defined in \eqref{defA}, vanishes identically, and so $d^*_h$ is consistent by Theorem~\ref{t:equiv}. Having understood the situation for $1$-forms and $n$-forms, this leaves open the question of whether consistency holds for $k$-forms with $k$ strictly between $1$ and $n$. In this section we study $2$-forms in $3$ dimensions and give numerical results indicating that $d^*_h$ is not consistent, even for uniform meshes. Let $u=(1-x^2)(1-y^2)dx\wedge dy$, a $2$-form on the cube $M=(-1,1)^3$. 
The corresponding vector field is $(0,0,(1-x^2)(1-y^2))$ which has vanishing tangential components on $\partial M$. Therefore $u$ belongs to the domain of $d^*$ and $d^*u$ is the $1$-form corresponding to the curl of this vector field, i.e., $d^*u = -2(1-x^2)ydx+2x(1-y^2)dy$. Table~\ref{tb:2-forms-3d} shows the consistency error $\|d^*_h\pi_h u - d^*u\|_{L^2\Lambda^1}$ computed using the sequence of uniform meshes displayed in Figure~\ref{f:meshes-3d-reg}. This mesh sequence yields a consistent approximation of $d^*$ for $1$-forms, but the experiments clearly indicate that this is not so for $2$-forms.
\begin{table}[ht]
\caption{The consistency error does not tend to zero for $2$-forms, even on a uniform mesh sequence.}
\label{tb:2-forms-3d}
\begin{tabular}{rrrrr}
\multicolumn{1}{c}{$n$} & \multicolumn{1}{c}{$h$} & \multicolumn{1}{c}{tetrahedra} & \multicolumn{1}{c}{error} & \multicolumn{1}{c}{order} \\
\hline
1 & 1.00e+00 & 48 & 1.59e+00 & \\
2 & 5.00e$-$01 & 384 & 1.18e+00 & 0.43 \\
3 & 2.50e$-$01 & 3072 & 1.00e+00 & 0.24 \\
4 & 1.25e$-$01 & 24576 & 9.47e$-$01 & 0.08 \\
5 & 6.25e$-$02 & 196608 & 3.37e+00 & $-$1.83 \\
\hline
\end{tabular}
\end{table}
\bibliographystyle{amsplain}
\section{Concluding remarks}\label{S:end}
\subsection{Immediate results}
Writing this extended year report has achieved at least three goals:
\begin{description}
\item[Streamlining new ideas.] Re-explaining (renarrating?) research ideas and putting them in perspective has helped to crystallise them into publishable, achievable objectives.
\item[Knowledge dissemination.] This document can serve both as a scientific report for my colleagues and superiors, and as an entrance point for people who want to get acquainted with my results for other reasons.
\item[Case study in self-archiving.] As I have said in the introduction, this report can be seen as an advanced form of self-archiving. It was a relatively big effort, compared to the traditional ``just put the PDF online'' thing. Together with the open notebook initiative, it stressed the paradigm and raised some questions yet to be answered (e.g.: How to properly break one atomic SKO --- essentially, a publication --- into subatomic ones to distinguish ``I've done this for the tool that was later described in this paper'' from ``I've done this for the particular version of this paper, which was later rejected''? What are all possible stages in the SKO lifecycle?).
\end{description}
\subsection{Special features}
\begin{description}
\item[The presence of an open notebook.] A lot of claims about dates, continuations and amounts of effort, made on the pages of this report, can be reformulated into queries on the open notebook, and formally validated as such. For now these claims were intentionally done in plain text because no reliable or traditionally acceptable infrastructure exists for them (yet).
\item[Open access window.] All the papers mentioned here were put online immediately after their submission (unless prohibited explicitly by the submission rules), and taken down immediately after their rejection (if any). At this stage, I do not know any better way to expose your research results to the public: official acceptance can take months and years, during which one could have profited from sharing the contents around.
\item[Rejected material.] Not all rejected papers are rejected because they are inherently, irreparably bad: some turn out to be out of scope, lacking some essential results or simply not mature enough to be published (yet). With this report, I have exposed most of the dark data concerning my rejected material.
\item[Unfruitful attempts.] Also classified as dark data by \cite{DarkData2007}, but of an entirely different nature: these are failed experiments: prototypes that have never made it to the point of being ready to be described in a paper. There can be traces of such unfruitful attempts in presentations and other subatomic SKOs before their futility becomes apparent.
\item[Venues.] Knowledge about workshops, conferences and journals seems to float around in the academic community and is usually distributed as folklore, if at all. There are many reasons for doing so, ranging from the lack of incentive to the fear of occasional offence.
\end{description}
\subsection{Acknowledgements}
I am most grateful to the following people, without whom 2012 would not have been the same:
\begin{denselist}
\item Ronald de Wolf, Bas de Lange, Tijs van der Storm, Jean-Marie Favre --- for inviting me to present at various events.
\item Ralf L{\"a}mmel --- for inviting me for a working visit to Koblenz and hosting me there.
\item Felienne Hermans --- for referring me to the matrix grammars, picture calculus and adjacent topics.
\item Paul Klint, Jurgen Vinju, Tijs van der Storm, Sander Bohte --- for contributing to organising the PEM Colloquium. \item T.~B.~Dinesh --- for introducing me to the notion of renarration. \item Frans Grijzenhout and Sandra Rientjes --- for coorganising a conference with me. \item Lodewijk Gelauff --- for introducing me to the topic of structured data extraction. \item Mark van den Brand --- for appreciating the input in the discussion on the future of LDTA, and for putting trust in me for chairing its programme next year. \item Jurgen Vinju, Tijs van der Storm --- for collaborating on a grammar-related topic. \item Ralf L{\"a}mmel, Andrei Varanovich, Jean-Marie Favre --- for collaborating on megamodelling topics. \item Zinovy Diskin --- for insightful discussions during ETAPS'12 and MoDELS'12 on category theory and grammars in a broad sense. \item Anonymous reviewers of BX'12, SAC'12, LDTA'12, ECMFA'12, ICSM'12, SCAM'12, NordiCloud'12, FSE NIER'12, XM'12, MPM'12, NWPT'12, POPL'13, JUCS, EMSE and NWO Veni programme --- for the effort that they have put into considering, assessing and reviewing my work. \item All presenters at PEM and WCN --- for their contributions, the only essential component of the final product. \item All unmentioned colleagues and uncounted conference contacts --- for fruitful discussions. \item All people who tried to contact me through email --- for patience and unreasonable waiting times. \end{denselist} \section{Topics overview}\label{S:topics} \subsection{Guided grammar convergence}\label{S:guided} Let us consider two grammars in a broad sense \cite{KlintLV05}. We say that they represent one \emph{intended} software language, if there exists a complete bidirectional mapping between language instances that commits to grammatical structure of different grammars. For example, if a parser produces parse trees that can always be converted to abstract syntax trees expected by a static analysis tool and back, it means that they represent the same intended language. As another example, consider an object model used in a tool that stores its objects in an external database (XML or relational): the existence of a bidirectional mapping between entries (trees or tables) in the database and the objects in memory, means that they represent the same intended language, even though they use very different ways to describe it. An equivalence class spawned by this definition (i.e., a set of different grammars of the same intended language) effectively forms a \emph{grammarware product line} of products that perform different tasks on instances of the same intended language: in that sense, for example, all Java-based tools form a product line, if they agree on a language version and do not employ any highly permissive methods that would shift them into a broader class. For the sake of simplicity, let us focus on \emph{grammar product lines}: collections of grammars of the same intended language. The relation between a grammar product line and a grammarware product line is justified by research on automated derivation of grammar-based tools like parsers, environments, documentation, formatters and renovators from grammars~\cite{ASFSDF-Klint,DeYacc,PrettyPrint3,KlintLV05,CamachoJournal,LDF2011}. Suppose that we have two grammars: one that we call a \emph{master grammar} (a specially pre-constructed abstract grammar of the intended language) and one that we call a \emph{servant grammar} (a grammar derived from a particular language implementation). 
In general, there are four phases of guided grammar convergence, and they are presented in this section in \emph{reverse} order. First, we consider the simplest scenario, when all mismatches are of a \emph{structural} nature. Then, we move on to a more complicated situation, when a \emph{nominal} matching between the sets of nonterminals is unknown. Since this is rather uncommon (most methods used in practice for imploding parse trees to abstract syntax trees, from Popart~\cite{Wile1997} to Rascal~\cite{Rascal}, heavily rely on equality of names), a new method for matching nonterminals has been developed. In short, it comprises the construction of production signatures for each production rule in both grammars, and a search for equivalent and weakly equivalent production rules with respect to those signatures. Once a name resolution relation has been successfully built, the previously discussed structural matching can be applied. We will also discuss \emph{normalisations} that can transform any arbitrary grammar into a form easily consumable by our nominal and structural matching algorithms. Finally, I will list additional problems that stem from \emph{grammar design} decisions and are therefore not affected by normalisations. However, I will describe how to automatically detect such issues and address them with grammar mutations. \subsubsection*{Structural matching} Let us assume the simplest scenario: the two input grammars have the same set of nonterminals; neither of them has terminals; the starting nonterminal is the same; and the sets of production rules are different but have the same cardinality. These would be typical circumstances if, for example, the grammars define two alternative abstract syntaxes for the same intended language. We can start from the roots of both grammars and traverse them synchronously top-down, encountering only the following four circumstances: \begin{description} \item[Perfect match.] Convergence is trivially achieved. \item[Nonterminal vs.\ value.] By ``values'' I mean nonterminals that are built into the underlying framework (e.g., ``string''). \item[Sequence element permutations] can be automatically detected and converged. \item[Lists of symbols.] Many frameworks that have components with grammatical knowledge have a notion of a list or a repetition of symbols in their metalanguage. \end{description} It can be shown that these four are the only possibilities, and that each of them can be resolved automatically. \subsubsection*{Nominal resolution} In a more complicated scenario, let us consider the case of different nonterminal sets in the two input grammars, and for simplicity we assume that all production rules are vertical (non-flat) and chained (if there is more than one production rule for the same nonterminal, all of them are chain productions --- i.e., have one nonterminal as their right hand side).
Next, we define a \emph{footprint} of a nonterminal in an expression as follows: $$ \pi_n(x) = \begin{cases} \{1\} & \text{if } x=n\\ \{?\} & \text{if } x=n?\\ \{+\} & \text{if } x=n^{+}\\ \{*\} & \text{if } x=n^{*}\\ \bigcup\limits_{e\in L} \pi_n(e) & \text{if } x\text{ is a sequence }L\\ \varnothing & \text{otherwise} \end{cases} $$ By extension, we define the footprint of a nonterminal in a production rule as its footprint in the right hand side: $$ \pi_n(m \to e) = \pi_n(e) $$ Based on that, we define a \emph{production signature}, or a prodsig, of a production rule by collecting all footprints of all nonterminals encountered in its right hand side: $$\sigma(p) = \{\langle n, \pi_n(e) \rangle\: |\: n\in\mathbb{N},\:\pi_n(e) \not= \varnothing \}$$ \begin{table}\footnotesize \centerline{\begin{tabular}{|l|c|}\hline \multicolumn{1}{|>{\columncolor[gray]{.9}}c|}{\footnotesize \textbf{Production rule}} & \multicolumn{1}{>{\columncolor[gray]{.9}}c|}{\footnotesize \textbf{Production signature}} \\\hline $p_1$=(\emph{program} $\to$ \emph{function}$^{+}$) & $\{\langle \textit{function}, \ensuremath{\raisebox{0.1em}{$\scriptscriptstyle\mathord{+}$}} \rangle\}$ \\ $p_2$=(\emph{function} $\to$ $str$ $str^{+}$ \emph{expr}) & $\{\langle \textit{expr}, {1} \rangle, \langle str, {1}\ensuremath{\raisebox{0.1em}{$\scriptscriptstyle\mathord{+}$}} \rangle\}$ \\ $p_3$=(\emph{expr} $\to$ $str$) & $\{\langle str, {1} \rangle\}$ \\ $p_4$=(\emph{expr} $\to$ $int$) & $\{\langle int, {1} \rangle\}$ \\ $p_5$=(\emph{expr} $\to$ \emph{apply}) & $\{\langle \mathit{apply}, {1} \rangle\}$ \\ $p_6$=(\emph{expr} $\to$ \emph{binary}) & $\{\langle \mathit{binary}, {1} \rangle\}$ \\ $p_7$=(\emph{expr} $\to$ \emph{cond}) & $\{\langle \mathit{cond}, {1} \rangle\}$ \\ $p_8$=(\emph{apply} $\to$ $str$ \emph{expr}$^{+}$) & $\{\langle \textit{expr}, \ensuremath{\raisebox{0.1em}{$\scriptscriptstyle\mathord{+}$}} \rangle, \langle str, {1} \rangle\}$ \\ $p_9$=(\emph{binary} $\to$ \emph{expr} \emph{operator} \emph{expr}) & $\{\langle \textit{expr}, {1}{1} \rangle, \langle \textit{operator}, {1} \rangle\}$ \\ $p_{10}$=(\emph{cond} $\to$ \emph{expr} \emph{expr} \emph{expr}) & $\{\langle \textit{expr}, {1}{1}{1} \rangle\}$ \\ \hline \end{tabular}} \caption{Production rules of the master grammar for FL, with their production signatures.} \label{F:prodsigs} \end{table} A good example of what production signatures look like can be found in \autoref{F:prodsigs}. We say that two production rules are \emph{prodsig-equivalent} if and only if there is a unique match between the tuple ranges of their signatures: \vspace{-.5em} $$p \bumpeq q \: \Longleftrightarrow \: \forall\langle n,\pi\rangle\in\sigma(p),\: \exists! \langle m,\xi\rangle\in\sigma(q),\: \pi=\xi$$ Similarly, a weak prodsig-equivalence $p \Bumpeq q$ is defined by dropping the uniqueness constraint and weakening the equality constraint in the last definition to footprint equivalence, which disregards repetition kinds ($\ensuremath{\raisebox{0.1em}{$\scriptscriptstyle\mathord{+}$}}$ is equivalent to $*$). Then it can be proven that for any two strongly prodsig-equivalent production rules $p$ and $q$, $p\bumpeq q$, a \emph{nominal resolution} relationship has the form: $$p \diamond q = \sigma(p) \circ \overline{\sigma(q)} $$ where $\rho_1 \circ \rho_2$ is a composition of two relations in the classic sense and $\overline{\rho}$ is the classic inverse of a relation.
Moreover, for any two weakly prodsig-equivalent production rules $p$ and $q$, $p\Bumpeq q$, there is (at least one) nominal resolution relationship $p \diamond q$ that satisfies the following: \vspace{-1em} \begin{gather*} \forall \langle a,b\rangle \in p \diamond q: a = \omega \vee b = \omega \:\vee\\ \exists \pi, \exists \xi, \pi\approx\xi, \langle a,\pi\rangle\in\sigma(p), \langle b,\xi\rangle\in\sigma(q) \end{gather*} and \vspace{-1em} $$ \forall \langle a,b\rangle \in p \diamond q, \forall \langle c,d\rangle \in p \diamond q: a = c \Rightarrow b = d $$ where $\omega$ is used to explicitly denote unmatched nonterminals. \subsubsection*{Abstract Normal Form} In order to fit any grammar into the conditions required by the previously described matching techniques, we normalise it so that it satisfies the following requirements: \begin{enumerate} \item lack of labels for production rules \item lack of named subexpressions \item lack of terminal symbols \item maximal outward factoring of inner choices \item lack of horizontal production rules \item lack of separator lists \item lack of trivially defined nonterminals (with $\alpha$, $\varepsilon$ or $\varphi$) \item no mixing of chain and non-chain production rules \item the nonterminal call graph is connected, and its top nonterminals are the starting symbols of the grammar \end{enumerate} It can be shown that transforming any grammar into its Abstract Normal Form is in fact a grammar mutation (see \S\ref{S:mutation}). In the prototype, I have implemented it so that it effectively generates bidirectional grammar transformation steps; thus, the normalisation preserves any information that it needs to abstract from. \subsubsection*{Grammar design mutation} Some grammar design smells (terminology per \cite{PEM7}) like yaccification (per \cite{DeYacc,Harmful}) or layered expressions (per~\cite{Convergence2009}) have proven persistent enough to survive all normalisations and cause problems for establishing nominal and structural mappings. They can be identified and dealt with by automated analyses and mutations, but so far I have no proof that they are the only possible obstacles, and no guarantees can be given about other smells that may be problematic for guided grammar convergence. \subsubsection{Generalisation of production signatures} The method of establishing nonterminal mappings between different grammars of the same intended language can be generalised as follows. Suppose that we have a metalanguage. Without loss of generality, let us assume that each grammar definition construct present in it can be referred to by a single symbol (``\raisebox{2pt}{\textbf{,}}'', ``\textbf{?}'', ``\textbf{*}'', etc.) and uses prefix notation. This metasyntactic alphabet $\Lambda$ will form the foundation of our footprints and signatures. Let us also assume that all metasymbols are unary or are encoded as unary, except for two composition constructs: a sequential ``\raisebox{2pt}{\textbf{,}}'' and an alternative ``$\mathbf{|}$'', which take a list of symbols. Then, a footprint of any nonterminal $n$ in an expression $x$ is a multiset of metasymbols that are used for occurrences of $n$ within $x$: $$ \pi_n(x) = \begin{cases} \{1\} & \text{if } x=n\\ \{\mu\} & \text{if } x=\mu(n), \mu\in\Lambda\\ \bigcup\limits_{e\in L} \pi_n(e) & \text{if } x=\raisebox{2pt}{\textbf{,}}(L)\\ \varnothing & \text{otherwise, also if } x=\mathbf{|}(L)\\ \end{cases} $$ Our previously given definition of a production signature can still be used with this generalised definition of footprints.
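
To make these definitions a bit more tangible, the following minimal Python sketch computes footprints, production signatures and the two prodsig-equivalence checks. It is only an illustration, not the actual Python/Rascal prototype: right hand sides are flattened into lists of (nonterminal, metasymbol) pairs, terminals and built-in value nonterminals are left out, and only the metasymbols $1$, \texttt{?}, \texttt{+} and \texttt{*} are supported.
\begin{verbatim}
# A toy model of footprints and prodsigs; not the actual prototype.
from collections import Counter

# A right hand side is a flat list of (nonterminal, metasymbol) pairs, e.g.
# "function -> str str+ expr" becomes p2 below (cf. the FL master grammar).
p2 = [("str", "1"), ("str", "+"), ("expr", "1")]        # function
p8 = [("str", "1"), ("expr", "+")]                      # apply
p9 = [("expr", "1"), ("operator", "1"), ("expr", "1")]  # binary

def footprint(n, rhs):
    """Multiset of metasymbols used for the occurrences of n in rhs."""
    return Counter(m for (nt, m) in rhs if nt == n)

def prodsig(rhs):
    """Production signature: footprints of all nonterminals occurring in rhs."""
    return {nt: footprint(nt, rhs) for (nt, _) in rhs}

def blur(fp):
    """Footprint comparison that disregards repetition kinds (+ vs *)."""
    return Counter("*" if m == "+" else m for m in fp.elements())

def prodsig_equivalent(p, q):
    """Strong prodsig-equivalence: a unique exact match for every footprint of p."""
    footprints_q = list(prodsig(q).values())
    return all(footprints_q.count(pi) == 1 for pi in prodsig(p).values())

def weakly_equivalent(p, q):
    """Weak variant: drop uniqueness and compare blurred footprints."""
    blurred_q = [blur(xi) for xi in prodsig(q).values()]
    return all(blur(pi) in blurred_q for pi in prodsig(p).values())

print(prodsig(p9))                 # expr occurs twice plainly, operator once
print(prodsig_equivalent(p2, p8))  # False: the {1,+} footprint of str is unmatched
print(weakly_equivalent(p8, p2))   # False: {+} of expr does not blur to {1}
\end{verbatim}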
It is well known that language equivalence is undecidable. Any formulation of the grammar equivalence problem that is based on language equivalence is thus also undecidable. Grammar convergence~\cite{Convergence2009,JLS-SQJ2011} is a practically reformulated grammar equivalence problem that uses automated grammar transformation steps programmed by a human expert. By using these generalised metasyntactic signatures, we can \emph{infer converging transformation steps} automatically, thus eliminating the weakest link of the present methodology. However, this is not the only application of the generalisation. The most trivial use of metasyntactic footprints and signatures would lie in \emph{grammarware metrics}. Research on software metrics applied to context-free grammars has never been an extremely popular topic, but it did receive some attention in the 1970s~\cite{Gruska}, 1980s~\cite{Kelemenova1981} and even recently~\cite{PowerMalloy,CrepinsekMernik}. Using quantitative aspects of metasyntactic footprints and signatures (numbers of different footprints within the grammar, statistics on them, etc.) is possible and conceptually akin to using micropatterns~\cite{Gil:2005:MPJ:1094811.1094819} and nanopatterns~\cite{Batarseh:2010:JNP:1900008.1900089}, but nothing of this kind has ever been done for grammars (in a broad sense or otherwise). A different, more advanced application of metasyntactic footprints and signatures is the analysis of their usage by mining existing grammar repositories like Grammar Zoo~\cite{SLPS}. This can lead not only to improving the quality of the grammars by increasing their utilisation of the metalanguage functionality, but also to \emph{validation of metalanguage design}. The whole programming language community uses dialects and variations of BNF~\cite{BNF} and EBNF~\cite{EBNF}, but their design has never been formally verified. However, one may expect that introducing EBNF elements like symbol repetition to BNF can be justified by analysing plain BNF grammars and finding many occurrences of such features being encoded (``yaccification'', etc.). It will also be interesting to see which features EBNF still lacks in practice --- none of the existing proposals (ABNF~\cite{ABNF}, TBNF~\cite{TBNF}, etc.) has ever been formally validated. \subsubsection{History of attempted publication} Initially, the idea of guided grammar convergence emerged as a contribution for ECMFA~\cite{Guided-ECMFA2012}. The level of contribution was praised by the reviewers, but the paper itself was deemed inappropriate for a heavily model-related venue. A bit later it was resubmitted after minor revision to ICSM~\cite{Guided-ICSM2012}, where it was received even more coldly, presumably because the reviewers were looking for a practical side that was not demonstrated well enough. After much more effort put into experiments, prototypes, auxiliary material~\cite{Guided2012} and a complete rewrite of the paper itself, the method was submitted to POPL~\cite{Guided-POPL2013}. It was unanimously rejected, but with very constructive and encouraging reviews. In 2013, they will be taken into account when the paper is submitted again (for the last time as a conference paper --- if that fails, I will admit that it is impossible for me to explain this method within the usual space limitations and go for a much longer self-contained journal submission).
In \cite{Incremental2012}, I attempted to sell the very act of validating the new method of guided grammar convergence by letting it cover the older case study done with contemporary grammar convergence, as some sort of experimental replication in a broad sense. The reviewers praised the nonconformism and originality of the approach, and rejected the paper. The generalisation of the method was proposed as an extended abstract to NWPT~\cite{Metasyntactic2012}, where the reviewers did not see any merit in it (which I personally found strange, since both ICSM and POPL reviewers insisted that various components of the method like ANF and prodsigs must be treasured as standalone contributions whose applicability is much wider than the automated convergence of grammars). Either my way of explaining it was bad enough to obscure this point, or I terribly misunderstood their call for papers. \subsection{Grammar transformation languages}\label{S:trafo} \subsubsection{XBGF}\label{S:XBGF} XBGF, standing for Transformation of BNF-like Grammar Format, is a domain-specific language for automated programmable operator-based transformations of grammars in a broad sense. It had previously been implemented in Prolog (mostly by Ralf L{\"a}mmel) and published as part of a journal article~\cite[\S4]{JLS-SQJ2011}, as well as a separate online manual~\cite[XBGF Manual]{SLPS} --- in fact, just a byproduct of the research on language documentation~\cite{LDF2011}. XBGF is essentially finished work: it is working, it is useful for experiments, it has documentation, it has a test suite, etc. The only thing that was added in the course of 2012 is the reimplementation of XBGF in Rascal~\cite{Rascal}. Besides some metaprogramming, this reimplementation led to streamlining some of the applicability preconditions and postconditions, which could be viewed as a very minor scientific contribution. \subsubsection{$\Upxi$BGF}\label{S:CBGF} If XBGF is read as ``iks bee gee eff'', then $\Upxi$BGF\ is ``ksee bee gee eff'', its bidirectional counterpart. Inspired by the call for papers of BX'12 (The First Workshop on Bidirectional Transformations, see \S\ref{S:yesvenues}), I was experimenting with bidirectionality in the grammarware technological space, and this language is what came out of it. About 80\% of the work of creating it involved trivial coupling of grammar transformation operators like \emph{chain} and \emph{unchain}, but the remaining 20\% provided a lot of fuel for thinking about what had seemed to be a polished and finished product. $\Upxi$BGF\ was published as part of the online pre-proceedings \cite{Metasyntactically-BX2012}, and then, after a second round of reviews, as a journal article~\cite{Metasyntactically2012}. The only problem was that the BX paper took off on its own, so the bidirectional grammar transformation operator suite seems like one of many byproducts there. There was a failed attempt to craft a paper that would be more focused on $\Upxi$BGF\ (and other aspects of grammar transformation not covered sufficiently by the BX submission), but the wrong venue was targeted, which resulted in a desk rejection~\cite{Trends2012}.
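
As an illustration of the coupling idea (and nothing more: the grammar representation, the operator name and the \texttt{Step} class below are made up for this sketch, they are not the actual XBGF or $\Upxi$BGF\ API), a bidirectional step can be viewed as a pair of a forward and a backward operator application, where renaming is its own inverse with swapped arguments:
\begin{verbatim}
# A toy sketch of coupled bidirectional grammar transformation steps;
# the representation and names are illustrative only.

def rename(g, old, new):
    """Forward operator: rename nonterminal `old` to `new` everywhere in g,
    where g maps nonterminals to lists of alternatives (lists of symbols)."""
    assert old in g and new not in g, "applicability precondition violated"
    sub = lambda s: new if s == old else s
    return {sub(nt): [[sub(s) for s in alt] for alt in alts]
            for nt, alts in g.items()}

class Step:
    """A coupled pair of a forward and a backward transformation."""
    def __init__(self, fwd, bwd):
        self.fwd, self.bwd = fwd, bwd
    def inverse(self):
        return Step(self.bwd, self.fwd)

def rename_step(old, new):
    # Renaming is trivially bidirectionalisable: the inverse renames back.
    return Step(lambda g: rename(g, old, new),
                lambda g: rename(g, new, old))

# Applying a step and then its inverse restores the original grammar.
g = {"expr": [["term"], ["expr", "+", "term"]], "term": [["int"]]}
step = rename_step("term", "factor")
assert step.inverse().fwd(step.fwd(g)) == g
\end{verbatim}
Pairs like \emph{chain} and \emph{unchain} can be coupled in the same fashion; the harder part is presumably formed by the operators for which no such obvious inverse exists.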
\subsubsection{NXBGF?}\label{S:NBGF} Another property of programmable grammar transformations that always bothered me was their rigidity: once written, they are hard to maintain and adapt, and one little change in the original grammar (for example, when the extractor is changed) can unexpectedly and unpredictably break (make defunct) transformation steps much later in the chain, with no method available to detect the change impact. Analysing this problem led to an idea that was originally in preparation for the FM+AM workshop (see \S\ref{S:novenues}), but was not ready before the deadline, so it went to the Extreme Modelling Workshop instead, where it received a surprisingly warm reaction. The idea is: negotiations. Whenever an error arises (usually because an applicability condition is not met), instead of failing the whole chain, try to recover by negotiating the outcome, using the data about the near-failure and some external entity (usually an oracle or a human operator). For example, when we want to rename a nonterminal that does not exist, the transformation engine may seek nonterminals with names similar to the required one, and try renaming them. The idea of negotiated grammar transformations was published in the online proceedings~\cite{Negotiated-XM2012} and then in the ACM Digital Library~\cite{Negotiated2012}, after which I was invited to submit an extended version to a journal. This will soon lead to a prototype implementation of such a system and perhaps to some interesting experiments with it. If this advancement yields yet another grammar transformation operator suite, it may or may not be named ``NXBGF''. \subsubsection{EXBGF}\label{S:EXBGF} \begin{table*} \begin{center} \begin{tabular}{|l|r|r|r|r|r|r|r|r|} \hline ~ & \textbf{jls1} & \textbf{jls2} & \textbf{jls3} & \textbf{jls12} & \textbf{jls123} & \textbf{r12} & \textbf{r123} & \textbf{Total} \\ \hline XBGF, LOC & 682 & 6774 & 10721 & 5114 & 2847 & 1639 & 3082 & 30859 \\ EXBGF, LOC & 399 & 5509 & 7524 & 3835 & 2532 & 1195 & 2750 & 23744 \\ ~ & $-$42\% & $-$19\% & $-$30\% & $-$25\% & $-$11\% & $-$27\% & $-$11\% & $-$23\% \\ genXBGF, LOC & 516 & 5851 & 9317 & 4548 & 2596 & 1331 & 2667 & 26826 \\ ~ & $-$24\% & $-$14\% & $-$13\% & $-$11\% & $-$9\% & $-$19\% & $-$13\% & $-$13\% \\ \hline XBGF, nodes & 309 & 3433 & 5478 & 2699 & 1540 & 786 & 1606 & 15851 \\ EXBGF, nodes & 177 & 2726 & 3648 & 1962 & 1377 & 558 & 1446 & 11894 \\ ~ & $-$43\% & $-$21\% & $-$33\% & $-$27\% & $-$11\% & $-$29\% & $-$10\% & $-$25\% \\ genXBGF, nodes& 326 & 3502 & 5576 & 2726 & 1542 & 798 & 1610 & 16080 \\ ~ & +6\% & +2\% & +2\% & +1\% & +0.1\% & +2\% & +0.3\% & +1\% \\ \hline XBGF, steps & 67 & 387 & 544 & 290 & 111 & 77 & 135 & 1611 \\ EXBGF, steps & 42 & 275 & 398 & 214 & 98 & 50 & 120 & 1197 \\ ...pure EXBGF & 27 & 104 & 162 & 80 & 30 & 34 & 44 & ~ \\ ...just XBGF & 15 & 171 & 236 & 134 & 68 & 16 & 76 & ~ \\ ~ & $-$37\% & $-$29\% & $-$27\% & $-$26\% & $-$12\% & $-$35\% & $-$11\% & $-$26\% \\ genXBGF, steps & 73 & 390 & 555 & 296 & 112 & 83 & 139 & 1648 \\ ~ & +9\% & +1\% & +2\% & +2\% & +1\% & +8\% & +2\% & +2\% \\ \hline \end{tabular} \end{center} \caption{Size measurements of the Java grammar convergence case study, done in XBGF and in EXBGF. In the table, XBGF refers to the original transformation scripts, EXBGF to the transformations in Extended XBGF, and genXBGF to the XBGF scripts generated from EXBGF.
LOC means lines of code, calculated with \texttt{wc -l}; nodes represent the number of nodes in the XML tree, calculated with XPath; steps are nodes that correspond to transformation operators and not to their arguments. Percentages are calculated against the XBGF scripts of the original study.} \label{T:EXBGF} \end{table*} Considerations about the state of XBGF led me to start a cursory reexamination of the available transformation scripts. The Java case study undertaken in 2009--2010 and published as a conference paper~\cite{JLS-SCAM2009}, a journal paper~\cite{JLS-SQJ2011} and an open source repository~\cite{SLPS}, provided me with plenty of them. Manual ad hoc pattern recognition resulted in the development of a new operator suite, with higher order operators such as \emph{exbgf:pull-out}, which would be equivalent to a superposition of \emph{xbgf:horizontal}, \emph{xbgf:factor}, \emph{xbgf:extract} and \emph{xbgf:vertical}. As shown in \autoref{T:EXBGF}, size metrics show a drop of {23--26\%} in Extended XBGF with respect to XBGF, and the complexity was visibly reduced as well. However, the results were not extremely convincing and lacked real strength, since only a few uses per high level operator were found, and the new EXBGF language was not designed systematically. Besides all that, the case study I have done is, strictly speaking, about \emph{refactoring} XBGF scripts to Extended XBGF, so claims about the usefulness of EXBGF for \emph{creating} new transformation scripts should be stated with caution. EXBGF was first described as an idea as a part of \cite{Trends2012}. After its rejection, it was developed further and laid out in much more detail in a journal submission, which was also eventually rejected~\cite{Incremental2012}. The fact that I presented Extended XBGF first as a ``trend'' and then as an ``experiment'' perfectly reflects my point of view that it is not a solid contribution on its own. \subsubsection{$\Delta$BGF?}\label{S:3BGF} If there was one good outcome of getting a grammar transformation paper~\cite{Trends2012} rejected at a functional programming conference, then this is it: I started contemplating how to specify grammar transformations in a not-so-functional way. Having recently been to a bidirectional transformations workshop helped, and I started researching \emph{tridirectional} transformations (in fact, they quickly turned multidirectional). The idea was clean and simple: do not specify grammar changes as functions; instead, specify them as predicates. Such a predicate would, for example, introduce a nominal binding between nonterminals in different grammars --- after which, the actual renaming steps can easily be inferred from such a binding predicate. Unfortunately, this idea, so beautiful in theory, proved nearly impossible in practice (or in detailed theory, for that matter). The main problem lies with the order of execution: a functional grammar transformation script specifies that order naturally, while a list of predicates does not. As I found out the hard way, my prototypes were still clean and beautiful when they dealt with one transformation step; reasonable tricks and extensions could let me go up to three steps; beyond that, some serious redesign was needed, and so far I have not figured out how to overcome this.
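
To give a flavour of the declarative idea (and only that: the binding representation and the \texttt{infer\_renames} helper below are hypothetical, not a reconstruction of any prototype), the simplest kind of predicate is a nominal binding between nonterminals of two grammars, from which concrete renaming steps can be inferred; the comments point at exactly the ordering problem mentioned above.
\begin{verbatim}
# A toy sketch of predicate-style grammar changes: a nominal binding between
# two grammars is stated declaratively, and rename steps are inferred from it.

def infer_renames(binding):
    """Turn a binding {(name_in_master, name_in_servant), ...} into rename steps."""
    steps = []
    for master, servant in sorted(binding):  # NB: sorting imposes an arbitrary order;
        if master != servant:                # in general, a set of predicates does not
            steps.append(("rename", servant, master))  # determine the execution order.
    return steps

# Hypothetical binding between a master grammar and a servant grammar:
binding = {("expression", "expr"), ("function", "fun"), ("program", "program")}
print(infer_renames(binding))
# [('rename', 'expr', 'expression'), ('rename', 'fun', 'function')]
\end{verbatim}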
\subsection{Metasyntax}\label{S:meta} Whenever we have a software language, we can speak of its \emph{syntax} as the way it allows and disallows structural combinations of elements: programming languages rely on keywords and possibly layout conventions; spreadsheets have ways of distinguishing between cells and referring to one from another; markup languages have symbol sequences of special meaning; musical notes are arranged on a grid; graphs must have uniquely identifiable nodes and edges, each connecting exactly two of them; etc. Then, a \emph{metasyntax} is a way of specifying this syntax. In classic programming language theory, languages are textual and can be processed as sequences of lexemes, and the metasyntax is Backus Normal Form~\cite{BNF}, also called Backus Naur Form~\cite{BNFvsEBNF}, or its enhanced variant Extended Backus Naur Form~\cite{EBNF}. Despite the fact that EBNF has been standardised by ISO~\cite{ISO-EBNF}, there is no agreement in the software language engineering community on the exact variant of EBNF: some people just prefer using ``$:\equiv$'' or ``$\triangleq$'' instead of ``='' for aesthetic reasons, or prefer separating production rules with double newlines for readability reasons and for the sake of easy processing. The idea of treating metasyntax as a first class entity was hinted at in my PhD thesis in 2010~\cite{Zaytsev-Thesis2010}, completely worked out in 2011, and put to several good uses in 2012. These are listed in the following subsections. \subsubsection{Notation specification}\label{S:EDD} The first step in treating metalanguages as first class entities is, of course, encapsulating a particular metalanguage with a specification that defines it. By extending the list of possible metasymbols from the ISO EBNF standard~\cite{ISO-EBNF} and by reusing the empirically constructed Table 6.1 from my thesis~\cite[p.135]{Zaytsev-Thesis2010}, I was able to construct such a specification, which was subsequently named EDD, for EBNF Dialect Definition. It was then turned into a small, nicely packaged paper for the PL track of SAC~\cite{BNF-WAS-HERE2012} --- the very fact that it was published separately gave me a lot of freedom later, since I did not need to introduce all the metasymbols all over again in each work that followed. \subsubsection{Transforming metasyntaxes}\label{S:XEDD} \begin{figure*} \centering \includegraphics[width=\textwidth]{bx-two.pdf} \caption{Components of a notation evolution: $\sigma$, a bidirectional \emph{notation specification transformation} that changes the notation itself; $\delta$, a \emph{convergence relationship} that can transform the notation grammars; $\gamma$, a bidirectional \emph{grammar adaptation} that prepares a beautified readable version of $N'$; $\mu$, a unidirectional \emph{coupled grammar mutation} that migrates the grammarbase according to the notation changes; possibly $\mu'$, a unidirectional \emph{coupled grammar mutation} that migrates the grammarbase according to the inverse of the intended notation changes.} \label{F:bx} \end{figure*} Once you have a notation specification as a first class entity, you can define transformations on it. This was probably the first transformation language I have designed where the main complexity was not in defining the transformation operators as such, but rather in coupling them with the grammar transformation steps that they imply.
The transformation suite consisted of just three operators: \begin{description} \item[rename-metasymbol$(s,v_1,v_2)$] {\small where $s$ is the metasymbol and values $v_1$ and $v_2$ are strings} \\ For example, we can decide to update the notation specification from using ``\texttt{:}'' as a defining metasymbol to using ``\texttt{::=}''. This is the most trivial transformation, but also bidirectional by nature. \item[introduce-metasymbol$(s,v)$] {\small where $s$ is the metasymbol and $v$ is its desired string value} \\ For example, a syntactic notation can exist without a terminator metasymbol, and we may want to introduce one. \item[eliminate-metasymbol$(s,v)$] {\small where $s$ is the metasymbol and $v$ is its current string value} \\ Naturally, eliminate and introduce together form a bidirectional pair. Specifying the current value of a metasymbol is not necessary, but it enables extra validation, as well as trivial bidirectionalisation. \end{description} Yet the final megamodel of the infrastructure, which did not even consider language instances (only grammars and metasyntaxes), looked as complex as \autoref{F:bx}. The paper about the evolution of metalanguages had a bidirectionality flavour and was conditionally accepted at the BX workshop \cite{Metasyntactically-BX2012}, and then also for the journal special issue \cite{Metasyntactically2012}. \subsubsection{Notation-parametric grammar recovery}\label{S:EDDrec} In all previously published grammar recovery initiatives~\cite{Recovery-COBOL,Browsable,Recovery-MSC-SSL,Recovery-PLEX,Recovery-SPE,GRK,Too-Sharp2005,Convergence2009,Zaytsev-Thesis2010,JLS-SQJ2011,MediaWiki2011,MediaWiki2012}, the step of transforming the raw grammar-containing text obtained from the language manual was either not automated (the grammar was re-typed from scratch in the notation required by the target grammarware framework), or semi-automated (it comprised many rounds of test-driven improvement), or automated with a throwaway tool (one that cannot be reused unless the replication deploys exactly the same EBNF dialect). Having a notation specification as a first class entity, we can step up from throwaway tools to throwaway notation specifications: at least they take minutes to create, not days. Notation-parametric grammar recovery~\cite{NPGR-LDTA2012,NPGR2012} was my best result of 2011, and this year it was officially published and put to several good uses. These uses are not exactly publishable, simply because grammar recovery from (nearly) well-formed sources has become a trivial process in itself, but there was one story that was enabled by this triviality. The grammar of MediaWiki syntax, whose recovery I had previously described in an exposed preprint~\cite{MediaWiki2011}, is a unique case of multiple notations being used within one community-created grammar. With any other recovery method, it would have been easier to just retype the grammar again in a uniform fashion, but notation-parametric grammar recovery allowed me to treat all six different incoherent metalanguages with relative ease and derive the final grammar from the inconsistent input. A continuation of this topic was intended to be a published closure of the MediaWiki grammar recovery case, but it was unfortunately rejected in the end~\cite{MediaWiki2012}. \subsubsection{Notation-driven grammar convergence}\label{S:EDDguided} Grammar convergence was originally a lightweight verification method not intended for full automation~\cite{Convergence2009}.
However, seeing how many of the transformations that were in fact converging grammars could be inferred automatically for the metalanguage evolution case study~\cite{Metasyntactic2012} (see also \S\ref{S:XEDD}), I could not help but wonder whether and to what extent it was possible to drive the automated convergence process by notation properties. The result of that was the methodology of guided grammar convergence, which has already been covered in \S\ref{S:guided}. \subsection{Tolerance in parsing}\label{S:tolerant} Originally, the ``parsing in the cloud'' paper \cite{Islands-SCAM2012,Islands-NordiCloud2012} was intended to present a useful crossing of the in-the-cloud and as-a-service paradigms with the engineering discipline for grammarware. However, the related work digging quickly got out of hand and turned into a contribution of its own. The overview of the many grammar-based techniques that show some level of tolerance towards their input data and its weak commitment to grammatical structure was presented at the PEM Colloquium~\cite{Tolerance2012-talk} (see also \S\ref{S:PEM}), where it received a very warm reception and led to many useful insights. Both reviewers and colleagues advised me to put more effort into demonstrative prototypes and to publish the overview together with them, separately from the parsing algorithm itself (see \S\ref{S:parsing}). This is one of the planned activities for 2013. So far, at least the following tolerant parsing methods have been identified: ad hoc lexical analysis~\cite{LexicalApproachSucks,AM}, hierarchical lexical analysis~\cite{Murphy:1995:LSM:222124.222147}, iterative lexical analysis~\cite{Cox03syntacticapproximation}, fuzzy parsing~\cite{Fuzzy}, parsing incomplete sentences~\cite{Lang:1988:PIS:991635.991710}, island grammars~\cite{DocGen}, lake grammars~\cite{Moonen01}, robust multilingual parsing~\cite{SynytskyyCD03}, gap parsing~\cite{GapParsing}, bridge grammars~\cite{Nilsson-Nyman:2009:PSR:1532448.1532458}, skeleton grammars~\cite{Tolerant}, breadth-first parsing~\cite{FortranCompiler,Ophel_breadth-firstparsing}, grammar relaxation~\cite{DragonBook}, agile parsing~\cite{DeanCMS03}, permissive grammars~\cite{KJNV09}, hierarchical error repair~\cite{SyntaxErrorRepair}, panic mode~\cite{DragonBook}, noncorrecting error recovery~\cite{Richter:1985:NSE:3916.4019}, precise parsing~\cite{Aho:1972:TPT:578789}. It remains to be seen whether they form a straight spectrum from lexical analysis to strict syntactic analysis. \subsection{Megamodelling}\label{S:mega} In computer science, \emph{modelling} happens when a real artefact is represented by its abstraction, which is then called a model; \emph{metamodelling} happens when the structure of such models is analysed and expressed as a model for models, or a metamodel; and \emph{megamodelling} happens when the infrastructure itself, involving multiple models and metamodels, is modelled. The need for megamodels has been advocated since at least 2004~\cite{Need4Mega,MegaModel}. The current state of the art is: in the simplest cases, people do not need a special formalism to state that, for example, ``models A and B conform to the metamodel C''; in somewhat more complicated scenarios, scientists and engineers tend to develop their own domain-specific \emph{ad hoc megamodelling} methodologies and employ them in narrow domains; and in truly complex situations, any existing approach only adds to complexity, overwhelming stakeholders with yet another view of the system architecture.
However, at least one solid business case was found for megamodelling: the problem of comparing different technological spaces~\cite{KurtevBA02}, for example, comparing the relations between XML documents, schemata, data models and validators with the relations between object models, source code and compilers. At the University of Koblenz-Landau, the Software Languages Team is dedicated to developing a general purpose megamodelling language called MegaL~\cite{MegaL}. After attending presentations about MegaL on several occasions, I paid a working visit to them in July. The consequences of that visit: I tried to use MegaL for my own megamodelling needs on several occasions~\cite{Negotiated-XM2012,Negotiated2012,Guided2012}, I presented an extensive overview of currently existing ad hoc megamodelling techniques (see \S\ref{S:adhocmega}), and I proposed my own method of dealing with overly complex megamodels (see \S\ref{S:renarration}). \subsubsection{MegaL dissection}\label{S:adhocmega} So far, at least the following previously existing ad hoc megamodelling approaches have been spotted: ATL~\cite{Jouault200831}, UNCOL~\cite{Bratman61,Conway58}, tombstone~\cite{McKeeman70}, grammarware megamodelling~\cite{KlintLV05}, software evolution megamodelling~\cite{MegaModel}, evolution of software architectures~\cite{Graaf07a,Graaf07b}, MEGAF~\cite{MEGAF}, global model management~\cite{VignagaJBB09}, grammar convergence~\cite{Convergence2009,Zaytsev-Thesis2010,LCI2011}, software language engineering~\cite{Zaytsev-Thesis2010,SLPS}, modelling language evolution framework~\cite{Meyers20111223}, metasyntactic evolution~\cite{Metasyntactically-BX2012,Metasyntactically2012}. My superficial overview of them, comparing them with MegaL, was presented to the MegaL designers in July~\cite{MegaL2012-talk}, and my current research activities include active collaboration with them, with a paper presenting a unified model for megamodelling in mind. \subsubsection{Renarrating megamodels}\label{S:renarration} Having seen enough presentations on megamodelling, I realised that such presentations are very easy to follow even for untrained people, unlike the resulting megamodels, which contain far too much detail and are very intimidating. So, my take on this problem was introducing two operations: slicing (to make megamodels smaller) and narrating (to traverse the elements in the megamodel). With these two operations, we can take the baseline megamodel that only experts can hope to understand and cut it into consumable chunks, bundled with a story that introduces the remaining elements one by one and explains each step. The resulting paper was sent to a workshop on Multiparadigm Modelling, where it was presented as a poster~\cite{Renarration-MPM2012-talk}, published in the online pre-proceedings~\cite{Renarration-MPM2012}, and is currently on its way to the post-proceedings in the ACM DL~\cite{Renarration2012}. \subsection{Grammar repository}\label{S:repo} My first project proposal ever, titled ``Automated Reuse-driven Grammar Restructuring'', was sent to the NWO Veni programme in January, passed a rebuttal phase in May, and was finally rejected in July, after I was informed that it had ended up in the category ``very good''~\cite{NWO-Veni2012}. The idea described there was small and elegant: mining grammarware repositories. While repository mining techniques receive quite some attention nowadays, very few people actually have entire repositories filled with grammars: let's face it, grammars are omnipresent yet at the same time scarce.
However, I already have an initiative called Grammar Zoo~\cite{SLPS}, which contains many grammars of languages big and small, and which, armed with the arsenal of extraction tools developed during my PhD, can grow even more. The goal of such mining is, of course, to reverse engineer reusable grammar fragments and to forward engineer the discipline of their composition. A paper advocating the need for and the usefulness of the repository itself was written and submitted to a journal in November~\cite{Zoo2013}. The outcome will only become known in 2013. \subsection{(Open) Notebook Science}\label{S:ONS} Open Notebook Science is an open science paradigm of doing research in a transparent way~\cite{Lloyd2008}. It involves keeping a lab notebook that collects all data and metadata on experiments, hypotheses, results, details and other observations that occur during the research phase, so that after the final objective is reached (or deemed unreachable), the complete path towards it can be exposed and made publicly available for inspection, replication and reuse. The open notebook approach is fairly well-known and somewhat popular in fields like biology and chemistry~\cite{DoD2008,Singh2008201}, which thrive on experimental frameworks and traditionally involve lab notebooks, so in practice exercising this approach there has the sole consequence of sharing the already existing notebook and systematically referring to it from the papers. In computer science, however, there are few to no adopters of this approach, mainly due to the seeming complexity of the method, the amount of extra effort that is needed to set up and maintain such a lab notebook, and the lack of positive feedback from it in the form of community encouragement and peer acknowledgement. During the Software Freedom Day, I gave a presentation explaining one feasible way for computer science and software engineering researchers to start practicing open notebook science, with myself as the case study~\cite{Open2012-talk}. A couple of days later, the SL(E)BOK organisers heard about it and asked me to record a keynote presentation~\cite{Subatomic2012-talk} on that topic, linking open access ideas with the existing research on ``scientific knowledge objects'' (SKO) and on a ``body of knowledge''~\cite{Giunchiglia:2010:SKO:2328909.2328928,Liquid2011}. In short, open notebook science strives to enable open access to atomic SKOs; to expose all the dark data~\cite{DarkData2007} from failed experiments and unpublished results; and to self-archive~\cite{SelfArchiving} subatomic SKOs, which are relevant for the final result but smaller than a ``publon''. Examples of subatomic SKOs include: \begin{denselista} \item Commits to an open source repository; \item Tweets on work-related subjects; \item Quora answers on work-related topics; \item Papers: preprints, reports, drafts, etc; \item Presentations: slides, screencasts, etc; \item Blog posts; \item Wiki edits; \item Exposed tools; \item Documentation; \item Shared raw data; \item Auxiliary material. \end{denselista} As was pointed out to me by some of the attendees of both talks, the topic of subatomic SKOs is bigger than just open notebook computer science: even if I can show the usefulness of keeping a notebook of actions for a researcher, it does not necessarily mean that the notebook must be public for one to profit from its traceability.
The first comprehensive paper on this topic is still in the process of being designed, but will hopefully be submitted somewhere during the next year or two. \section{Introduction} The purpose of this report is to document my personal research results of the year 2012 in a form primarily intended for assessment of their scientific merit as a foundation for future work, not for quantitative assessment of the resulting publication record. This can be considered an aggressive form of self-archiving~\cite{SelfArchiving}, where scientific and engineering contributions are not only logged, but also put in perspective by a separate first class atomic scientific knowledge object. This report is mostly meant for my SWAT colleagues. However, it is open to a broad audience and meant to be readable by any researcher with a reasonable degree of familiarity with computer science. It can be consumed as a self-contained document, but many details are not pulled in from the available referenced sources. We start right away with an overview of the field (\S\ref{S:bg}), followed by brief descriptions of major (\S\ref{S:major}) and minor (\S\ref{S:minor}) contributions, followed by a more elaborate motivation for the creation of this document (\S\ref{S:why}). Next, all research topics are laid out in detail one by one (\S\ref{S:topics}). For the sake of completeness, a separate overview of all involved venues (\S\ref{S:venues}) is included. \S\ref{S:end} concludes the report. \section{Preliminaries} \subsection{Background notions}\label{S:bg} \emph{Software language} is a concept that generalises over programming languages, markup languages, database schemata, data structures, abstract data types, data types, modelling languages, ontologies, etc. Whenever we observe some degree of \emph{commitment to structure}, we can identify it with a language, whose elements (symbols) can be separately defined and whose allowed combinations can be somehow specified. Studying software language engineering is important because of the insights it can give into the relations between the ways such languages are defined and used in different technological spaces (e.g., we can study data binding as a way to map a relational database to an object model, or language convergence as a way to compare an XML schema with a syntax definition).
\newpage \emph{Formal grammars} is a long-existing approach of dealing with languages --- originally context free grammars~\cite{Chomsky56} were mainly aimed at textual programming languages~\cite{DragonBook}, but later other variants of grammars were proposed, including keyword grammars~\cite{Meertens77}, indexed grammars~\cite{Aho:1968:IGE:321479.321488}, lexicalised grammars~\cite{Schabes:1988:PSL:991719.991757}, object grammars~\cite{ObjectGrammars2012}, pattern grammars~\cite{grenander1996elements}, array grammars~\cite{SSK72}, puzzle grammars~\cite{NSSSD91}, picture grammars~\cite{MS67}, picture processing grammars~\cite{Chang:1970:ATP:800161.805166}, tile grammars~\cite{Reghizzi:2005:TRG:1103398.1103405}, grid grammars~\cite{YuPaun01}, motion picture grammars~\cite{springerlink:10.1007/3-540-63931-4_228}, pair grammars~\cite{Pratt:1971:PGG:1739929.1740026}, triple graph grammars~\cite{TGG94}, deterministic graph grammars~\cite{Caucal07}, string adjunct grammars~\cite{JKY69}, head grammars~\cite{Pollard84}, tree adjunct grammars~\cite{JLT75}, tree description grammars~\cite{springerlink:10.1023/A:1011431526022}, description tree grammars~\cite{Rambow:1995:DG:981658.981679}, description tree substitution grammars~\cite{Rambow:2001:DSG:972778.972782}, functional grammars~\cite{DBLP:journals/ipl/Lukaszewicz77}, {\L}ukaszewicz universal grammars~\cite{Lukaszewicz198276}, two level grammars~\cite{Wijngaarden74}, van Wijngaarden grammars~\cite{Wijngaarden65}, metamorphosis grammars~\cite{springerlink:10.1007/BFb0031371}, affix grammars~\cite{Koster91}, extended affix grammars~\cite{Meijer90}, attribute grammars~\cite{AG-Genesis}, extended attribute grammars~\cite{WM83}, definite clause grammars~\cite{PW86}, minimalist grammars~\cite{Lecomte:2001:ELG:1073012.1073059}, categorial grammars~\cite{Ajdukiewicz35}, type grammars~\cite{Lambek58}, pregroup grammars~\cite{Lambek08}, Montague universal grammars~\cite{Montague70}, logic grammars~\cite{DBLP:books/daglib/0067304}, assumption grammars~\cite{Dahl97assumptiongrammars}, constraint handling grammars~\cite{Christiansen05}, abductive logic grammars~\cite{springerlink:10.1007/978-3-642-02261-6_14}, simple transduction grammars~\cite{Lewis:1968:ST:321466.321477}, inversion transduction grammars~\cite{Wu:1997:SIT:972705.972707}, range concatenation grammars~\cite{boullier:inria-00073347}, island grammars~\cite{DocGen}, bridge grammars~\cite{Nilsson-Nyman:2009:PSR:1532448.1532458}, skeleton grammars~\cite{Tolerant}, permissive grammars~\cite{KJNV09}, conjunctive grammars~\cite{Okhotin01}, Boolean grammars~\cite{Okhotin200419}, Peirce grammars~\cite{springerlink:10.1023/A:1011403527615}, transformational grammars~\cite{springerlink:10.1007/3540069585_50}, probabilistic grammars~\cite{springerlink:10.1007/s10849-011-9135-z}, notional grammars~\cite{Anderson91}, analytic grammars~\cite{PEG}, parsing schemata~\cite{DBLP:books/daglib/0085473}, cooperating string grammar systems~\cite{CVDKP94}, cooperating array grammar systems~\cite{DFP95}, cooperating puzzle grammar systems~\cite{SRC06}, etc\footnote{The earliest possible reference is given for each variant, preferably from the programming language research field.}. A grammar of a software language, which specifies commitment to grammatical structure, is called a \emph{grammar in a broad sense}~\cite{KlintLV05}, even if in practice it defines a metamodel or an API, thus not officially being a grammar at all. 
The grammarware technological space is commonly perceived as mature and drained of any scientific challenge, but it still provides many unsolved problems for researchers who are active in that field. Over the last years, and specifically in 2012, I have focused my efforts on using grammar-based techniques in the broad field of software language engineering. \subsection{Major contributions in a nutshell}\label{S:major} This section contains brief descriptions of the contributions of 2012 and some statements about their usability and/or importance. Sections that contain extended descriptions of the contributions with some level of technical detail are referenced in parentheses. \begin{description} \item[Guided grammar convergence] (\S\ref{S:guided}).\\ Grammar convergence is a lightweight verification method for establishing and maintaining the correspondence between grammar knowledge ingrained in various software artifacts~\cite{Convergence2009}. The method entails programming grammar transformation steps with a general purpose grammar transformation operator suite. It was acknowledged in \cite[p.34]{CamachoThesis} as ``a product-line approach to provide [...] an organised software structure''. Yet the method had some weaknesses that inspired further investigation. One of the biggest issues is the maintenance of the grammar relationships. Once they have been established by programming grammar transformation steps, it becomes very hard to coevolve these steps with subsequent changes in the source grammars. An ideal solution would be a way to automatically re-establish grammar relationships based on declarative constraints. Such a solution is \emph{guided grammar convergence}: instead of programming the transformations, we construct an idealised ``master grammar'' that captures the most essential properties of all grammars that are to be converged, and the transformation steps are then derived automatically, guided by the structure of the master grammar. The transformation inference algorithm relies on the source grammars and their metasyntax. This method was prototyped twice, in Python and in Rascal, and tested successfully on 12 grammars in a broad sense obtained from different technological spaces. It has not yet been properly published, having been rejected three times~\cite{Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013}, but it received encouraging feedback from some of those venues and from one presentation~\cite{Guided-Convergence2012-talk}. \item[Grammar transformation] (\S\ref{S:trafo})\\ Grammar convergence, evolution, maintenance and any other activity that deals with changes can profit from expressing such changes in a functional way: every step is represented as a function application, where a function is a transformation operator such as \emph{rename} or \emph{add}. The latest such operator suite was developed in 2010~\cite[XBGF Manual]{SLPS} and shown to be superior to its alternatives~\cite[\S4]{JLS-SQJ2011}. During 2012, XBGF has been: \begin{denselist} \item reimplemented in Rascal, which led to extensive testing and a more systematic specification of operator semantics (\S\ref{S:XBGF}); \item extended for bidirectionality by pairing operators, introducing lacking ones and abandoning unsuitable ones (\S\ref{S:CBGF}); \item experimentally extended for adaptability (\S\ref{S:NBGF}); \item extended by mining patterns of its usage (\S\ref{S:EXBGF}); \item investigated for migration from the functional paradigm to the declarative one (\S\ref{S:3BGF}).
\end{denselist} Each of these initiatives is a nontrivial project complete with conceptual motivation, programmed prototypes and obtained results (positive for the first three, controversial for the fourth and decisively negative for the last one). \item[Metasyntactic experiments] (\S\ref{S:meta})\\ Metasyntax, as a language in which grammars are specified, was a topic briefly touched upon in my PhD thesis~\cite{Zaytsev-Thesis2010}, but never officially published. In 2012, I finally dedicated enough time and attention to engineer a proper prototype for metasyntax specifications (\S\ref{S:EDD}) and their transformations (\S\ref{S:XEDD}), as well as to perform a series of experiments on metasyntax-driven grammar recovery (\S\ref{S:EDDrec}) and convergence (\S\ref{S:EDDguided}). This area has now been exhaustively covered, and the only possible future extensions must rely on going way beyond textually specified context-free grammars. To be completely frank, it should be noted here that most of the experiments with metasyntax were done in the course of 2011 and were only polished, presented and published in 2012 (which still required considerable effort). \item[Tolerant parsing overview] (\S\ref{S:tolerant})\\ Just like the grammar recovery paper came with an extensive related work section which listed all grammar recovery initiatives of the last decade or two~\cite[\S2]{NPGR2012}, a new parsing algorithm that I tried to propose (\S\ref{S:parsing}) came with an extensive overview of all methods of tolerant parsing known to grammarware engineers to date (\S\ref{S:tolerant}). While the iterative parsing method was novel yet ultimately dull and uninteresting, the overview itself was received very warmly during the presentation on it~\cite{Tolerance2012-talk}. One of the reviewers of \cite{Islands-SCAM2012} also advised me to throw away what I thought was the main contribution of the paper and to extend what I thought of as a byproduct into a longer journal article. While surprising at first, this seems indeed like a reasonable course of action. \end{description} \subsection{Selected minor contributions}\label{S:minor} \begin{figure*} \centering \includegraphics[width=.8\textwidth]{dblp.png} \caption{The results of 2012, according to DBLP.} \label{F:dblp} \end{figure*} In the following sections, I will present a detailed overview of major (\S\S\ref{S:guided}--\ref{S:ONS}) and minor (\S\ref{S:minortopics}) contributions, but the border between them is naturally flexible. Thus, the previous section introduced only four of the best major ones, and this section will introduce several middleweight contributions (``less major'' mixed with ``not so minor'' ones). \begin{description} \item[Grammar mutation] (\S\ref{S:mutation})\\ It has been noted in \cite{Metasyntactically-BX2012,Metasyntactically2012} that there is a separate group of grammar changes that reside between traditional grammar transformations (``rename X to Y'') and the grammar transformation operators (``rename''), a group which was labelled grammar \emph{mutation} and formalised differently from both. While the only truly important property of grammar mutations in the context of \cite{Metasyntactically-BX2012,Metasyntactically2012} was that they are considerably harder to bidirectionalise, a lot of useful grammar manipulations like ``rename all uppercase nonterminals to lowercase'' or ``eliminate all nonterminals unreachable from the root'' belong to the class of mutations, so it deserves closer study.
In \cite{Trends2012}, I composed a list of 16 mutations identified in already published academic papers or in publicly available grammarware source code, but the paper was not accepted, so the topic remains only marginally explored. \item[Iterative parsing] (\S\ref{S:parsing})\\ Starting from the fresh yet weird topic of what ``the cloud'' can mean for grammarware engineering, I ended up proposing an algorithm for \emph{parsing in the cloud}, which was \textbf{not} based on parallel parsing~\cite{Alblas:1994:BPP:181577.181586}, but rather on island grammars~\cite{Moonen01,Tolerant}. The whole topic is questionable and only suitable for a ``wild ideas workshop'', as was nicely put by one of the reviewers, but it is still potentially of some interest. The paper containing the algorithm has been rejected twice so far~\cite{Islands-SCAM2012,Islands-NordiCloud2012}, and at the very least requires investing more time in empirical validation in order to increase the chances of acceptance. \item[Unparsing in a broad sense] (\S\ref{S:unparsing})\\ I could not help noticing that parsing (i.e., mapping strings to graphs) receives much more research attention than the reverse process of unparsing (i.e., mapping graphs to strings). However, the only thing I did accomplish this year was to collect a couple of references on existing research and make a ``new ideas'' extended abstract~\cite{Unparsing-Techniques2012}, which was classified as a ``request for discussion'' and rejected. I am already prepared to give a discussion-provoking presentation on this topic, but much more effort needs to be invested before more tangible results are obtained. \item[Megamodelling] (\S\ref{S:mega})\\ Megamodelling is a form of modelling at a higher level of abstraction, concerned with software languages and technologies and the relations between them. This year I have published some papers with megamodels in them~\cite{Negotiated-XM2012,Negotiated2012,Renarration-MPM2012,Renarration2012,Guided2012} and touched upon the topic in a range of presentations~\cite{MegaL2012-talk,Renarration-SLT2012-talk,Renarration-MPM2012-talk,Negotiated-XM2012-talk}. Much more work on this topic is planned for 2013. \item[Open notebook computer science] (\S\ref{S:ONS})\\ Open notebook science is an open science paradigm of doing research in a transparent way. It is already a fairly widely accepted methodology in areas like chemistry~\cite{DoD2008} and drug discovery~\cite{Singh2008201} and is generally perceived as the next big step after open access~\cite{Lloyd2008}. However, in computer science and software engineering it has never been a tradition to keep a lab notebook, and it takes quite some time to maintain one, with few immediately visible benefits. I have been experimenting quite a lot with this idea, but finally decided to come out to a bigger public with two presentations in 2012~\cite{Open2012-talk,Subatomic2012-talk}. In general, I believe this is a reasonable idea, and I will keep practicing open notebook science myself, but it will take quite some effort to put it carefully into words in order to publish, so I am not even sure it is feasible to expect a publication in 2013. \end{description} \subsection{Motivation for this report}\label{S:why} \begin{figure*} \centering \includegraphics[width=.7\textwidth]{counter.png} \caption{The results of 2012, according to the self-archiver.} \label{F:final} \end{figure*} The progress of a scientist is traditionally measured by outsiders through the papers that the scientist produces.
According to DBLP, currently the main supplier of bibliography lists, the year 2012 yielded the following results for me (see the screen capture in \autoref{F:dblp}): one journal paper~\cite{Metasyntactically2012}, one conference proceedings paper~\cite{BNF-WAS-HERE2012} and one preprint~\cite{Guided2012}. However, the first one is an only slightly extended version of a workshop paper~\cite{Metasyntactically-BX2012} written mostly in 2011; the second one was written and accepted in 2011; and the third one was intended to be supplementary material for another paper that has not yet been accepted anywhere. Additionally, there are three more post-proceedings papers in print~\cite{NPGR2012,Renarration2012,Negotiated2012}, which are already finished and submitted and will eventually appear in the ACM Digital Library --- when they do, they will also be listed at DBLP under 2012, but at that time it will be too late to write a year report. What about the self-archiving initiative~\cite{SelfArchiving}? Luckily, I disclose relatively large amounts of dark data~\cite{DarkData2007} about my research activities, having an extensively linked, daily updated website with an open notebook (see \S\ref{S:ONS}) and many generated lists, including the current publishing progress, as seen in the screen capture in \autoref{F:final}. Even judging by the bare numbers, one can already tell that this list contains much more information than the DBLP list. However, it also has its problems: the ``published'' column contains works of previous years that happened to be delayed enough for the post-proceedings to appear in January 2012~\cite{TestMatch2012}, as well as mentions of drafts planned for future publication (easily spotted in the last column). It also contains editorial work for non-mainstream venues~\cite{WCN2012,WCNe2012}, which is of much lesser relevance because there is no scientific value to it. What it does not contain is the relations between all these papers: obviously, some papers are enhanced versions of previously rejected drafts, but in order to figure them out, one needs to read the open notebook at \url{http://grammarware.net/opens} or analyse it automatically (no readily available tools are provided). Personally, I can state that guided grammar convergence (see \S\ref{S:guided}) is my top result of the year. However, it has not (yet) been properly published. After being rejected at ECMFA~\cite{Guided-ECMFA2012} and ICSM~\cite{Guided-ICSM2012}, it received very positive reactions from POPL~\cite{Guided-POPL2013}, yet was also deemed not mature enough for publication. Still, having to figure out what the limits of the proposed methodology are and how to describe it well does not change the fact that this is my best contribution of the year 2012. Grammar transformation operator suites like XBGF (see \S\ref{S:XBGF}), $\Upxi$BGF\ (\S\ref{S:CBGF}), EXBGF (\S\ref{S:EXBGF}), $\Delta$BGF (\S\ref{S:3BGF}) and NXBGF (\S\ref{S:NBGF}) represent massive amounts of work, but they are not publishable by themselves, if at all. Still, each of them represents a milestone enabling further advances. Engineering work that supports scientific research has rarely been explicitly noted and appreciated. Quoting \cite{DoD2008}: \emph{``The notebook is about publishing data as quickly as possible.
The paper is about synthesizing knowledge from all those results.''} Hence, this report is aimed at synthesizing knowledge about the experiments and achievements undertaken during the course of 2012 by me (possibly in collaboration with someone else) within the NWO project 612.001.007, ``Foundations for a Grammar Laboratory''. It holds the most value for myself and my project colleagues, but is also available for anyone interested in the topics discussed: unlike open notebook entries, this report is a proper atomic scientific knowledge object~\cite{Giunchiglia:2010:SKO:2328909.2328928,Liquid2011}. Only two topics directly relevant to the project are not included: one must remain hidden according to the rules of the target venue, and for the other one the context and consequences are not yet understood enough even for such a lightweight presentation. \subsection{Minor topics}\label{S:minortopics} In addition to the topics and achievements I consider major for 2012, there are several lesser contributions: they are either topics that did not receive enough attention to yield a solid major contribution (yet not so insignificant as to be omitted from the report completely), or work not traditionally considered worthy of mentioning (programming, engineering, organising effort). One topic is intentionally hidden from this section, in order to prevent jeopardising an upcoming submission to a strictly double blind peer reviewed venue. \subsubsection{Grammar mutation}\label{S:mutation} In the paradigm of programmable grammar transformations, the semantics of each of the transformation operators is bound to the operator itself, and may require arguments to be provided before the actual input grammar. Such partially evaluated operators (with all arguments provided, but no input grammar yet) are treated as transformation steps, and their applicability constraints only depend on the grammar: if they hold, the change takes place; if they do not, an error occurs instead. In other words, the exact change made by a transformation step is determined by its operands, not by the grammar. However, those applicability constraints can also be processed as filters: whatever part of the grammar satisfies them will be transformed --- that way, the exact change in the grammar depends on the grammar, not on the operands. As an example, consider renaming grammatical symbols: ``rename nonterminal'' itself is an operator. Its semantics can be expressed easily on the classic definition of a grammar. If the input grammar is $G = \langle\mathbb{N},\mathbb{T},\mathbb{P},S\rangle$, then the output must be $$G' = \langle(\mathbb{N}\setminus\{x\})\cup\{y\},\mathbb{T},\mathbb{P}|_{x\to y},S'\rangle$$ where $x$ and $y$ are operands; $S'$ is $y$ if $S=x$ and $S$ otherwise; and $A|_{x\to y}$ means substitution (for example, by term rewriting). When $x$ and $y$ are provided, then $G'$ above becomes fully defined and yields meaningful results when applicability conditions (e.g., $x\in\mathbb{N}$ and $y\not\in\mathbb{N}$) are satisfied. Renaming a terminal symbol is specified similarly. However, ``renaming all lowercase nonterminals to uppercase'' is not an operator (or at least, even if it is made one, it will be of a much higher level than the simple ``rename''), and it is not an atomic transformation step either: in fact, it can lead to any number of changes in the grammar from $0$ to $|\mathbb{N}|$, depending on $G$. This number cannot possibly be known before $G$ is provided. 
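To make the operator/mutation distinction concrete, here is a minimal sketch in Python over an assumed toy grammar representation (it is not the actual XBGF/SLPS implementation): the operator fails when its applicability conditions do not hold, while the mutation applies the underlying operator wherever the filter matches, so the number of changes depends on the grammar itself.
\begin{verbatim}
class Grammar:
    def __init__(self, nonterminals, terminals, productions, start):
        self.nonterminals = set(nonterminals)
        self.terminals = set(terminals)
        # each production is a pair (lhs, rhs) with rhs a list of symbols
        self.productions = [(lhs, list(rhs)) for lhs, rhs in productions]
        self.start = start

def rename_nonterminal(g, x, y):
    """Operator: applicability conditions must hold, otherwise it fails."""
    if x not in g.nonterminals:
        raise ValueError("undefined nonterminal: " + x)
    if y in g.nonterminals:
        raise ValueError("name clash: " + y)
    sub = lambda s: y if s == x else s
    return Grammar((g.nonterminals - {x}) | {y},      # N' = (N \ {x}) U {y}
                   g.terminals,
                   [(sub(lhs), [sub(s) for s in rhs])
                    for lhs, rhs in g.productions],
                   sub(g.start))

def uppercase_all_nonterminals(g):
    """Mutation: the set of affected nonterminals depends on g itself."""
    for x in sorted(g.nonterminals):
        if x != x.upper() and x.upper() not in g.nonterminals:
            g = rename_nonterminal(g, x, x.upper())   # 0..|N| steps
    return g

if __name__ == "__main__":
    g = Grammar({"expr", "Term"}, {"+", "n"},
                [("expr", ["expr", "+", "Term"]),
                 ("expr", ["Term"]),
                 ("Term", ["n"])],
                "expr")
    g2 = uppercase_all_nonterminals(g)
    print(g2.nonterminals)     # {'EXPR', 'TERM'}
    print(g2.productions)
\end{verbatim}
The same operator is reused in both cases; only the way its applicability condition is interpreted (precondition versus filter) differs.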
This kind of grammar manipulation was first identified as a part of research on bidirectional transformations~\cite{Metasyntactically-BX2012,Metasyntactically2012,BX2012-talk,SEM2012BX-talk} (because such manipulations are not bidirectionalisable), where it received the name ``grammar mutation''. Later there was an endeavour to compose a comprehensive list of useful grammar mutations as a part of \cite{Trends2012}, but it was rejected. For the sake of providing a better overview of the current state of research on grammar mutations, I collect all of them in the exhaustive list below. Note that conceptually the same mutations may have appeared under different names in various sources: for example, the first mutation in the list, ``remove all terminal symbols'', has previously been known as a transformation ``\emph{stripTs}''~\cite[\S5.3]{Convergence2009} and as a generator ``\emph{striptxbgf}''~\cite[\S4.9,\S4.10.6.1]{Zaytsev-Thesis2010}. \begin{description} \item[Remove all terminal symbols] \cite{Convergence2009,Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013,Zaytsev-Thesis2010,LCI2011} \\ A simple grammar mutation that is helpful when converging a concrete syntax and an abstract syntax of the same intended language. While the abstract syntax definition may have differently ordered parameters of some of its constructs, and full convergence will require dealing with them and rearranging the structure with (algebraic) semantics-preserving transformations, we will certainly not encounter any terminal symbols and can safely employ this mutation. \item[Remove all expression selectors] \cite{Convergence2009,Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013,Zaytsev-Thesis2010} \\ Named (selectable) subexpressions are encountered in many contexts, but the choice of names for them is usually even more subjective than the naming convention for the nonterminal symbols. \item[Remove all production labels] \cite{Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013}\hfill\\ Technically, having a production label is the same as making a selectable subexpression out of the right hand side of a nonterminal definition. Still, in some frameworks the semantics and/or the intended use for labels and for selectors differ. \item[Disciplined rename] \cite{Metasyntactically-BX2012,Metasyntactically2012,Zaytsev-Thesis2010,LCI2011} \\ There are several different well-defined naming conventions for nonterminal symbols in the current practice of grammarware engineering, in particular concerning multiword names. Enforcing a particular naming convention, such as making all nonterminal names uppercase or turning camelcased names into dash-separated lowercase names, can be specified as a unidirectional grammar mutation (one for each convention). \item[Reroot to top] \cite{LCI2011,Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013} \hfill\\ A top nonterminal is a nonterminal that is defined in the grammar but never used~\cite{Recovery-SPE}. In many cases it is realistic to assume that the top nonterminals are the intended starting symbols (roots) of the grammar. A variation of this mutation was used in \S\ref{S:guided} with an additional requirement that a top nonterminal must not be a leaf in the relation graph. This is a rational constraint since a leaf top nonterminal defines a separate (disconnected) component. 
\item[Eliminate top] \cite{Zaytsev-Thesis2010,LCI2011} \\ In situations when the root is known with certainty, we can assume all other top (unused) nonterminals to be useless, since they are unreachable from the starting symbol and are therefore not a part of the grammar. \item[Extract subgrammar] \cite{Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013} \hfill\\ Alternatively, we can generalise the last mutation to a parametrised one: given a grammar and a nonterminal (or a list of nonterminals), we can always automatically construct another grammar with the given nonterminal(s) as root(s) and the contents formed by all production rules of all nonterminals reachable from the assumed root nonterminal(s). Constructing a subgrammar starting with the already known roots will eliminate top nonterminals. \item[Make all production rules vertical] \cite{Zaytsev-Thesis2010,LCI2011,Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013} \hfill\\ Vertical definitions contain several alternative production rules, while horizontal ones have one production rule with a top-level choice. There are different approaches known to handle this distinction, including complete transparency (one form being a syntactic sugar of the other). For normalisation purposes or for quick convergence of a consistently vertical grammar and a consistently horizontal one, we can use this automated mutation. \item[Make all production rules horizontal] \cite{Zaytsev-Thesis2010,LCI2011}\hfill\\ A similar grammar mutation is possible, yet much less useful in practice. \item[Distribute all factored definitions] \cite{Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013}\hfill\\ Aggressive factoring a-la \texttt{xbgf:distribute} can also be expressed as a mutation: surfacing all inner choices in a given grammar is a powerful normalisation technique. \item[Make all potentially horizontal rules vertical] \cite{Zaytsev-Thesis2010,LCI2011}\hfill\\ Technically, this mutation is a superposition of distributing all factored definitions and converting all resulting horizontal production rules to an equivalent vertical form. \item[Deyaccify all yaccified nonterminals] \cite{Zaytsev-Thesis2010,LCI2011}\hfill\\ A ``yaccified'' definition~\cite{Adaptation,JongeM01} is named after YACC~\cite{YACC}, a compiler compiler, the old versions of which required explicitly defined recursive nonterminals --- i.e., one would write \texttt{A : B} and \texttt{A : A B}, because in LALR parsers like YACC left recursion was preferred to right recursion (contrary to recursive descent parsers, which are unable to process left recursion directly at all). The common good practice in modern grammarware engineering is to use iteration metalanguage constructs such as \texttt{B*} for zero or more repetitions and \texttt{B+} for one or more --- this way, the compiler compiler can make its own decisions about the particular way of implementation, and will neither crash nor perform any transformations behind the scenes. However, many grammars~\cite{SLPS} contain yaccified definitions, and usually any transformation chain that attempts to reuse such grammars for practical purposes starts with deyaccification, which can be easily automated. \item[Remove lazy nonterminals] \cite{Zaytsev-Thesis2010,LCI2011}\hfill\\ Many grammars, in particular those that strive for better readability or for generality, contain an excessive number of nonterminals that are used only once, or chain production rules that are unnecessary for parsing and for many other activities one can engage in with grammars. 
We have used an optimising mutation that removes such elements with \emph{xbgf:inline} and \emph{xbgf:unchain} on several occasions, including improving readability of automatically generated grammars. \item[Normalise to ANF] \cite{Guided-ECMFA2012,Guided-ICSM2012,Guided-POPL2013}\hfill\\ The Abstract Normal Form (ANF) was introduced in \S\ref{S:guided} as a means of limiting the search space for guided grammar convergence. Technically, such normalisation is equivalent to a superposition of removing all labels, removing all selectors, removing all terminals, surfacing all inner choices, converting all horizontal production rules to a vertical form, rerooting to top non-leaf nonterminals and eliminating others unreachable from them. For the conceptual foundations of ANF the reader is referred to the article where it was proposed. \item[Fold all grouped subexpressions] \cite{Metasyntactically-BX2012,Metasyntactically2012}\hfill\\ In the context of metalinguistic evolution, we need to construct a coupled mutation for the grammarbase, if the notation change involves retiring a metasyntactic construct that is in use. One such construct is the possibility to group symbols together in an atomic subsequence --- a feature that is often taken for granted and therefore misused, improperly documented or implemented. Naturally, eliminating grouped subexpressions entails folding them to newly introduced nonterminals by means of \emph{xbgf:extract}. \item[Explicitly encode all separator lists] \cite{Metasyntactically-BX2012,Metasyntactically2012}\hfill\\ Our internal representation of grammars for software languages, following many other syntactic notations, contains a construct for defining separator lists. For example: \texttt{\{A ","\}+} is syntactic sugar for \texttt{A ("," A)*} or \texttt{(A ",")* A} --- all three variants specify a comma-separated list of one or more \texttt{A}s. When such a construct needs to be retired from the notation, the coupled grammar mutation must refactor its occurrences to explicitly encode separator lists with one of the equivalent alternatives. \end{description} A full-fledged paper shining enough light on grammar mutations is still being written and will hit the submission desks in 2013. \subsubsection{Iterative parsing}\label{S:parsing} As the main (intended) contribution of \cite{Islands-SCAM2012,Islands-NordiCloud2012}, I have proposed an algorithm for iterative parsing. The basic idea is very simple: we take the baseline grammar and skeletonise it as far as it can be automated, in such a way that the relation between the ``lakes'' and the nonterminals in the baseline grammar is preserved. Then, our parse tree will give the basic structure and a number of watery fragments parsed with useless lake grammars (usually in the form of ``anything but newline'' or ``something in balanced curly brackets''). If needed, any of those lakes can be parsed further with a subgrammar of the baseline grammar, with the new root being the nonterminal that corresponds to the lake. This parsing approach was being sold as ``parsing in the cloud'' in \cite{Islands-SCAM2012,Islands-NordiCloud2012}, which was certainly not the best (even though the coolest) way to look at it. Other applications for this form of lazy parsing can be found in debugging (disambiguation, fault localisation) and other areas that traditionally profit from laziness. This remains future work. 
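For illustration only, the following toy sketch (Python, with an assumed representation; it is not the implementation submitted in \cite{Islands-SCAM2012,Islands-NordiCloud2012}) shows the two phases: a skeleton parse that keeps only the block structure and stores lake contents opaquely, and a lazy refinement step that re-parses a single lake on demand with a (here trivial) subgrammar.
\begin{verbatim}
def skeleton_parse(src):
    """Return (header, lake_text) pairs for top-level '... { ... }' blocks;
       lake contents are kept as opaque text ("balanced curly brackets")."""
    blocks, i = [], 0
    while True:
        open_ = src.find("{", i)
        if open_ == -1:
            return blocks
        header = src[i:open_].strip()
        depth, j = 1, open_ + 1
        while depth:                      # match balanced curly brackets
            if src[j] == "{":
                depth += 1
            elif src[j] == "}":
                depth -= 1
            j += 1
        blocks.append((header, src[open_ + 1 : j - 1].strip()))
        i = j

def refine_lake(lake_text):
    """Lazily re-parse one lake; a real tool would use a subgrammar of the
       baseline grammar rooted at the lake's nonterminal."""
    return [s.strip() for s in lake_text.split(";") if s.strip()]

if __name__ == "__main__":
    src = "int main() { if (x) { y = 1; } return 0; } void f() { g(); }"
    tree = skeleton_parse(src)
    print(tree)                       # structure plus opaque lakes
    print(refine_lake(tree[1][1]))    # only the second lake is parsed further
\end{verbatim}
The point is that the second phase is optional and per-lake, which is what makes the approach lazy rather than parallel.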
\subsubsection{Unparsing techniques}\label{S:unparsing} One of the most confusing papers that I submitted anywhere in 2012 was the one about unparsing techniques~\cite{Unparsing-Techniques2012}. Only after finishing writing it did I realise how big and overwhelming this topic is. The paper was rightfully rejected after being classified as a ``request for discussion'': a much deeper survey of (some of) the presented topics must be composed sooner or later, but it requires much careful consideration. I have not done much on this topic since, but there was at least one paper published recently that explicitly considered unparsing~\cite{ObjectGrammars2012}. The starting idea is as simple as a sunrise: a lot of effort has been put into researching parsing techniques, so why not the opposite? Unparsing techniques can be understood in a very broad sense: pretty-printing, syntax highlighting, structural import yielding an editable textual representation, bidirectional construction of equivalent views, etc. Some papers consider \emph{conservative pretty-printing} as a way to preserve peculiar layout pieces (like multiple spaces) during unparsing~\cite{PrettyPrint3,Ruckert96}. This is a narrow application of a general idea of propagating layout through transformations, which is a long-standing and well-researched problem. However, even the most conservative unparsers run the risk of introducing an inconsistently formatted code fragment, if that code was originally introduced by a source code manipulation technique and not produced by a parser. In other words, replacing a \texttt{GO TO} statement with a \texttt{WHILE} loop should look different, depending on how the code around the introduced fragment was formatted. Possibly, results from the grammar inference research field \cite{Inference2012} can be reused for recovering formatting rules in some reasonable proximity of the code fragment in order to unparse it correctly and avoid code alienation. Suppose not just one desired textual representation of the language instance exists, but several of them, which form a family or a product line, like the line of metalanguages considered in \S\ref{S:meta} in the context of metasyntactic evolution. Following that example, if we are given a grammar in some internal representation and a syntactic notation specification~\cite{BNF-WAS-HERE2012}, then it is somewhat trivial to construct an unparser that would produce the same grammar in a textual form. In other words, such an unparser should generate a text that, given a notation specification, can yield the same grammar after automated notation-parametric grammar recovery~\cite[\S3]{NPGR2012}. However, other questions remain. How to find a minimal notation needed to unparse a given grammar? How in general to validate compatibility of a given grammar and a given notation? How to produce grammar transformations (see \S\ref{S:trafo}) to make the grammar fit the notation, how to produce notation transformations (see \S\ref{S:XEDD}) to make the notation fit the grammar, and how to negotiate to find a properly balanced outcome? These questions are not trivial and require investigation. Unparser-completeness has recently been studied in the context of template engines~\cite{AvdBS11}. Unparsing can also be viewed as commitment to grammatical structure~\cite{KlintLV05}. 
Can we recover grammars from them, compare and converge them with other grammars of the same language that we would like to keep synchronised (e.g., a concrete syntax definition intended for parsing, multiple abstract syntaxes for performing various grammar-based analysis tasks, data models for serialisation)? Are there some specific properties that such grammars always possess? What is the minimal upper formalism for the baseline grammar from which grammars for parsing and unparsing can be derived automatically with a language-independent or language-parametric technology? These questions are not trivial and require investigation. Connecting to the topic of robust/tolerant parsing (see \S\ref{S:tolerant}), we can consider at least two kinds of techniques that act as its counterparts: incremental unparsing and unparsing incomplete trees. By \emph{incremental unparsing} I mean a modular technique for unparsing modified code fragments and combining them with the previously unparsed versions of the unmodified code fragments. This is usually not considered for simple cases, but is possibly worth investigating for large scale scenarios (consider architectural modifications to an IT portfolio with hundreds of millions of lines of code in dozens of languages). By \emph{unparsing incomplete trees} we mean the process of unparsing structured representations of incomplete language instances. Besides scenarios when this technique is used together with tolerant/robust parsing (and then the missing information may be somehow propagated to the unparser anyway), there are also other scenarios where the gaps are deliberately left to be filled in by the unparser. In documentation generation, this is the way code examples can be treated --- for a sample implementation we refer to Rascal Tutor~\cite{RascalTutor}. For construction of compiler compilers and similar grammarware with unparsing facilities, there is a commonly encountered problem of bracket minimality for avoiding constructions ambiguous for parsing: since brackets are there in the text only to guide the parsing process, they are removed from the AST, so how to put back as few of them as possible during unparsing? This is a typical research question for the unparsing techniques field. One could also investigate various ways to infer grammar adaptation steps needed to unparse the given grammatical structure in order to guarantee the lack of ambiguities if it is to be parsed again. \subsubsection{Migration to git} Following the current trend of leaving old-fashioned open source farms in favour of more modern Web 2.0 social coding websites, I have migrated the Software Language Processing Repository from SourceForge to GitHub~\cite{SLPS}. The project was started in 2008 by Ralf L{\"a}mmel~\cite{FL-is-born} and quickly after that became the main target for my efforts and the main repository for my code. As of now (December 2012), it contains 954 revisions committed by me, 314 by Ralf L{\"a}mmel, 44 by Tijs van der Storm and 28 by all other contributors combined. This would not have been worth mentioning, had I not migrated all my other repositories to \texttt{git} as well, which enabled efficient linking to all of them from the open notebook (see \S\ref{S:ONS}). For closed source repositories (like the ones used for writing papers) we use Atlassian BitBucket instead of GitHub. 
\subsubsection{Turing machine programming} Two of my colleagues from Centrum Wiskunde \& Informatica (CWI), Davy Landman and Jeroen van den Bos, have built a physical Turing machine, with a finite tape and separate program space, from LEGO blocks~\cite{LEGO-Turing-Machine2012}. We were all passively yet encouragingly watching them do that and then watching with excitement how the resulting machine could sum two and two in less than half an hour. From the software perspective, they have created a kind of ``Turing assembly'' DSL that consisted of commands for accessing bits on the tape, moving the head and making decisions on the next command, and was translatable into some real code that could run on the LEGO chip brick. Then, there was a slightly more advanced DSL called ``Turing level 2'' developed on top of it, enhanced with label names and repetition loops, as well as IDE support features like a visualiser/simulator. My spontaneous contribution to the project involved writing several programs for the machine in this ``Turing language level 2'', including copying of unary numbers, incrementing them, performing various forms of addition and finally multiplying two unary numbers. All these programs are publicly accessible at the official repository: \url{http://github.com/cwi-swat/TuringLEGO/tree/master/examples}. \subsubsection{Grammarware visualisation}\label{S:vis} Various controversial thoughts on grammar recovery visualisation, related to the previous body of work on grammar recovery, both (co)authored by me~\cite{Too-Sharp2005,Convergence2009,Zaytsev-Thesis2010,MediaWiki2011,JLS-SQJ2011,BNF-WAS-HERE2012,NPGR2012,MediaWiki2012} and by the giants on whose shoulders I was standing~\cite{Recovery-COBOL,Browsable,Recovery-MSC-SSL,Recovery-PLEX,Recovery-SPE,KlintLV05}, yielded some experimental code, but no valuable stable results. In a draft sent to the ``new ideas'' track of FSE 2012~\cite{Grammarware-Visualization2012}, only to be rejected there, I argued that introducing or improving visualisation of processes in grammarware engineering has at least these benefits: \begin{description} \item[Process comprehension:] it becomes easier to understand the process and to see what exactly is happening when it is applied to certain input. \item[Process verification:] while complete formal verification of a sophisticated process with many branches and underlying algorithms may be a challenging task, it is relatively easy to pursue lightweight verification methods. One of them comes more or less for free when an experienced observer can see what is happening and detect peculiarities naturally. \item[Process improvement:] observing a process not only lets one find mistakes in it, but also get familiar with bottlenecks and other problematic issues, which in turn helps to suggest refinements and improvements. \item[Interactiveness:] there are many examples of processes which are impossible or unfeasibly hard to automate completely, but for which reasonable automation schemes exist that exercise ``semi-automation'' and require occasional feedback from a system operator. The request-response loop for such feedback can be drastically shortened in the case of interactive visualisations. \end{description} The point of the paper was well-received by the FSE NIER reviewers: nobody tried to argue that visualisation techniques would be useless. 
However, I obviously overestimated the contribution that I could make by providing a ``mile wide, inch deep'' (a quote from one of the reviews) overview, so perhaps a much later overview with a list of solid achieved results would be in order. For the sake of completeness of this report, I list the nine showcases that were briefly described in the NIER submission below. Each item of this list is relatively low-hanging fruit for an article or a series thereof. \begin{description} \item[Grammar recovery:] the state of the art in automated grammar recovery (see also \S\ref{S:EDDrec}) is to work based on a set of appropriate heuristics~\cite{JLS-SQJ2011,NPGR2012}. Proper visualisation of them would help: dealing with some particularly tricky notations; verifying that the heuristics do what they are intended to do; collecting evidence and statistics on the use of certain heuristics; proposing additional heuristics and other process improvements. \item[API-fication] is a term used in \cite{KlintLV05} to describe a process of replacing low level API calls for manipulating a data structure with more expressive and more maintainable high level API calls generated from a grammar~\cite{deJong200435}. Thus, API-fication is a form of grammar-aware software renovation where surfacing grammar knowledge is a crucial contribution of the process. Visualising both the API calls themselves and the improvement steps on them can serve as a motivation and even as a lightweight verification of API-fication. \item[Grammar transformation \& convergence.] There are at least two commonly used ways to visualise a grammar: in a textual form as (E)BNF; or as a syntax diagram (``railroad track''). Neither of them has a designated visualisation notation for transformations. \item[Mapping between grammar notations] is one of the biggest challenges in research on grammars in a broad sense, since grammarware strives to cover such a big range of various structural definitions. Mapping between EBNF dialects~\cite{Metasyntactically2012}, X/O mapping \cite{xotrafo}, O/R mapping \cite{ONeil2008}, R/X mapping \cite{Fernandez2002} and many other internotational mappings exist along with intranotational techniques for grammar diffing, graph comparison, nonterminal matching, model weaving, etc. Support in metagrammarware tools for displaying matching artefacts in a traceable way is usually rather limited: they either display local (mis)matches or global statistics. \item[Grammarware coevolution.] Concurrent and coupled evolution of grammars and language instances~\cite{CicchettiREP08}, of coexisting related grammars~\cite{Adaptation}, of grammars and language transformations~\cite{CleveH06}, of language design and implementation~\cite{DHondt2000} are special mixed cases of mapping and transformations (see the previous two items), where we would like to visualise both what kind of matches are made and what kind of actions are inferred from them. \item[Grammar-based analysis] comprises syntactic analysis (parsing), but also similarly geared techniques that never received enough attention. As an example, it would be great to have something to demonstrate hierarchical lexical analysis~\cite{Murphy:1995:LSM:222124.222147} to the same degree as \cite{AMUFVI09} demonstrated for LL and LALR parsing. \item[Disambiguation] is a process of filtering a parse forest or reasoning about the origins of it, in modern generalised parsing algorithms like SGLR~\cite{SGLR} or GLL~\cite{Scott2010177}. 
Visualising SGLR disambiguation~\cite{DisambigSGLR} was implemented in the ASF+SDF Meta-Environment as a part of parse tree rendering, so in fact it visualised the ambiguities themselves and not the process of removing them, which was still of considerable help. More recent GLL disambiguation algorithms~\cite{Basten2010} were expressed mostly in a textual form even within a PhD project entirely dedicated to ambiguity detection~\cite{BasPhD} --- primarily because there is no clear understanding of how exactly they would be useful to visualise. \item[Grammar-based testing] methods based on combinatorial (non-probabilistic) exploration of the software language under test, have emerged from recent research~\cite{LaemmelS06,TestMatch2012}. Visualising coverage achieved by them and adjusting the visualisation with each new test case should help both to keep track of the process by expressing its progress, and to localise grammar fragments responsible for the failing test cases. \item[Grammar inference] is a family of methods of inferring the grammar, partially or completely, from the available codebase and even from code indentation~\cite{CrepinsekMJBS05,NierstraszKGLB07,Inference2012}. Such inference is a complicated process based on heuristics and sometimes even on search-based methods. As a consequence, each attempt at grammar inference remains somehow unconnected to the rest of the research field: adoption of such methods by scientists and engineers outside the original working group happens rarely, if ever. One can think that a proper visualisation of such process would help new users to get acquainted with a grammar reconstruction system and tweak it to their needs. \end{description} \input{jaxb} NB: the last item was written before the publication of the excellent grammar inference field overview~\cite{Inference2012}, which can also be seen as considering visualisation in a very broad sense. Another newer initiative which can be seen as grammarware process visualisation, concerns guided convergence (see also \S\ref{S:guided}). We can recall that the whole process of the guided grammar convergence is rather complicated and involves normalising the input grammar and going through several phases of unification to ensure the final nonterminal mapping that looks like this~\cite{Guided2012}: \begin{align*}\mathit{jaxb} \:\diamond\: \mathit{master} =\:& \{\langle \mathit{Expr_2},\mathit{binary}\rangle,\\ & \langle \mathit{Expr_3},\mathit{conditional}\rangle,\\ & \langle int,int\rangle,\\ & \langle \mathit{Function},\mathit{function}\rangle,\\ & \langle str,str\rangle,\\ & \langle \mathit{Program},\mathit{program}\rangle,\\ & \langle \mathit{Expr},\mathit{expression}\rangle,\\ & \langle \mathit{Expr_1},\mathit{apply}\rangle,\\ & \langle \mathit{Ops},\mathit{operator}\rangle\}\end{align*} While preparing the main guided grammar submission, I have noticed that this particular mapping, as well as the normalised grammar (\autoref{tbl:JAXB}) and the list of weakly and strongly prodsig-equivalent production rules (\autoref{fig:JAXBvsMG}) can be automatically produced by the convergence tool virtually without any additional effort in a completely transparent, traceable, reliable and reproducible fashion. This led to open publication of \cite{Guided2012}, an extended appendix for the main guided grammar convergence paper, which was, except for the two-page introduction, generated automatically, but is still readable and useful. 
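As an aside, the flavour of such an automatically derivable mapping can be conveyed with a deliberately naive sketch (Python; the productions below are invented toy data, and the real prodsig definition together with the unification phases is described in the cited paper): nonterminals of two grammars are paired up whenever the multisets of abstracted shapes of their productions coincide.
\begin{verbatim}
from collections import Counter

def signature(nt, productions, nonterminals):
    """Multiset of abstracted right-hand sides of one nonterminal:
       'N' for a nonterminal reference, 't' for anything else."""
    shapes = [tuple("N" if s in nonterminals else "t" for s in rhs)
              for lhs, rhs in productions if lhs == nt]
    return frozenset(Counter(shapes).items())

def match(g1, g2):
    """Pair nonterminals of two grammars with identical signatures."""
    p1, n1 = g1
    p2, n2 = g2
    sig2 = {}
    for nt in n2:
        sig2.setdefault(signature(nt, p2, n2), []).append(nt)
    pairs = []
    for nt in n1:
        candidates = sig2.get(signature(nt, p1, n1), [])
        if len(candidates) == 1:          # report only unambiguous matches
            pairs.append((nt, candidates[0]))
    return pairs

if __name__ == "__main__":
    # two tiny grammars of the same language with different nonterminal names
    jaxb = ([("Program", ["Function"]), ("Function", ["str", "Expr"]),
             ("Expr", ["int"]), ("Expr", ["Expr", "Ops", "Expr"])],
            {"Program", "Function", "Expr", "Ops"})
    master = ([("program", ["function"]),
               ("function", ["str", "expression"]),
               ("expression", ["int"]),
               ("expression", ["expression", "operator", "expression"])],
              {"program", "function", "expression", "operator"})
    print(match(jaxb, master))
\end{verbatim}
Ambiguous or structurally dissimilar nonterminals are exactly where the guided convergence machinery has to do real work; the sketch only shows why the easy part can be generated "virtually without any additional effort".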
\subsubsection{Wiki activity} While contributing to wiki websites is not usually considered an activity worthy of tracking or mentioning in the academic sense, of the 72 wiki articles I have written in 2012 I can identify at least six that can be viewed as (popular) scientific writing: \begin{denselist} \item \href{http://cyclowiki.org/wiki \item \href{http://cyclowiki.org/wiki \item \href{http://cyclowiki.org/wiki \item \href{http://cyclowiki.org/wiki \item \href{http://ru.wikipedia.org/wiki \item \href{http://ru.wikipedia.org/wiki \end{denselist} \subsubsection{Colloquium organisation}\label{S:PEM} Again, participating in the organisation of various events is commonly considered normal for a practicing academic researcher, but is never counted as a scientific contribution. Not arguing with that, I am still happy to be able to maintain the existing seminar culture of CWI (Centrum Wiskunde \& Informatica, my current employer) as a colloquium organiser of a series of events that have taken place continuously at least since 1997\footnote{\url{http://event.cwi.nl/pem}}. Over the course of 2012, \textbf{56} presentations were given in total as a part of Programming Environment Meeting (PEM, mostly an inter-institutional outlet), Software Engineering Meeting (SEM, mostly an internal group seminar) and a special one-day event Symposium on Language Composability and Modularity (SLaC'M, most of the trouble of organising which was taken on by Tijs van der Storm). These speakers have appeared at PEM, SEM and SLaC'M in 2012 (in chronological order of their first appearance): \newpage\begin{denselist} \item \href{http://grammarware.net/}{Dr.~Vadim Zaytsev} \cite{PEM2012-talk,SEM2012BX-talk,Tolerance2012-talk,Replications2012-talk,Decomposition2012-talk} \item \href{http://www.cwi.nl/people/2428}{Atze van der Ploeg} \cite{PEM2,PEM34} \item \href{http://win.ua.ac.be/~sdemey/}{Prof.~Dr.~Serge Demeyer} \cite{PEM3} \item \href{http://www.win.tue.nl/~aserebre/}{Dr.~Alexander Serebrenik} \cite{PEM5} \item \href{http://www.linkedin.com/in/stellapachidi}{Stella Pachidi} \cite{PEM6} \item \href{http://homepages.cwi.nl/~storm/}{Dr.~Tijs van der Storm} \cite{PEM7,PEM12,SLaCM7} \item \href{http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/s/Steindorfer:Michael.html}{Michael Steindorfer} \cite{PEM8,SLaCM5,PEM35} \item \href{http://wiki.mq.edu.au/display/plrg/Anthony+Sloane}{Dr.~Anthony Sloane} \cite{PEM9} \item \href{http://nl.linkedin.com/in/riemervanrozen}{Riemer van Rozen} \cite{PEM10,PEM33} \item \href{http://www.cwi.nl/people/2333}{Jeroen van den Bos} \cite{PEM11,PEM21,PEM44} \item \href{http://www.linkedin.com/pub/alex-loh/13/971/5a8}{Alex Loh} \cite{PEM13,SLaCM2,PEM31} \item \href{http://turingmachine.org/blog/}{Dr.~Daniel M.~German} \cite{PEM14} \item \href{http://plg.uwaterloo.ca/~migod/}{Dr.~Michael Godfrey} \cite{PEM15} \item \href{http://homepages.cwi.nl/~hills/CWI_Homepage/Homepage.html}{Dr.~Mark Hills} \cite{PEM19,SLaCM4,PEM32} \item \href{http://landman-code.blogspot.com/}{Davy Landman} \cite{PEM21,PEM29} \item \href{http://www.cwi.nl/people/2528}{Luuk Stevens} \cite{PEM22} \item \href{http://gsd.uwaterloo.ca/kczarnec}{Dr.~Krzysztof Czarnecki} \cite{PEM23} \item \href{http://www.ii.uib.no/~magne/}{Prof.~Dr.~Magne Haveraaen} \cite{PEM24,PEM25} \item \href{http://www.ii.uib.no/~anya/}{Dr.~Anya Helene Bagge} \cite{PEM24,PEM48} \item \href{http://homepages.cwi.nl/~simon/}{Dr.~Sunil Simon} \cite{PEM26} \item \href{http://www.linkedin.com/in/tbdinesh}{Dr.~T.~B.~Dinesh} \cite{PEM27} \item 
\href{http://homepages.cwi.nl/~jurgenv/}{Dr.~Jurgen Vinju} \cite{PEM28,SLaCM3} \item \href{http://www.cs.utexas.edu/~wcook/}{Dr.~William~R.~Cook} \cite{SLaCM1} \item \href{http://homepages.cwi.nl/~ai/}{Anastasia Izmaylova} \cite{SLaCM6} \item \href{http://www.lclnet.nl/}{Dr.~Lennart Kats} \cite{PEM30} \item \href{http://nl.linkedin.com/in/carelbast}{Carel Bast}, \href{http://www.wimbast.nl/}{Wim Bast}, \href{http://www.linkedin.com/in/tombrus}{Tom Brus} \cite{PEM36} \item \href{https://sites.google.com/site/tesfahuntesfay/}{Tesfahun Tesfay} \cite{PEM37} \item \href{http://www0.cs.ucl.ac.uk/staff/W.Langdon/}{Dr.~William B.~Langdon} \cite{PEM38} \item \href{http://www.linkedin.com/in/andreivaranovich}{Andrei Varanovich} \cite{PEM39} \item \href{http://jorisdormans.nl}{Dr.~Joris Dormans} \cite{PEM40} \item \href{http://nl.linkedin.com/pub/bas-joosten/3/739/868}{Sebastiaan Joosten} \cite{PEM41} \item \href{http://www.linkedin.com/pub/magiel-bruntink/1/4b1/b74}{Dr.~Magiel Bruntink} \cite{PEM42} \item \href{http://www.cs.vu.nl/~patricia/Patricia_Lago/Home.html}{Dr.~Patricia Lago} \cite{PEM43} \item \href{http://cs.uwaterloo.ca/~ftip/}{Prof.~Dr.~Frank Tip} \cite{PEM45} \item \href{http://staff.science.uva.nl/~poss/}{Dr.~Raphael Poss} \cite{PEM46} \item \href{http://scherpenisse.net/}{Arjan Scherpenisse} \cite{PEM47} \end{denselist} 
\section{Venues}\label{S:venues} Academic venues (mostly conferences, workshops and journals) are essential components of the research process: publishing there means community recognition; submitting eventually leads to receiving peer reviews; and even reading calls for papers can be very inspiring and eye-opening. Below I list two kinds of venues that contributed to my research in 2012: one list is for those to which I have submitted, the other one for the rest --- I am deeply grateful to all the reviewers and organisers of both kinds. The lists are not meant to cover all possible venues for my field, just those directly relevant to my activities this year. \subsection{Exercised venues}\label{S:yesvenues} \begin{description} \item[BX 2012 (ETAPS workshop)]~\\ I have been a ``\emph{bx-curious}'' person for quite a while, but BX 2012 was my first venue to come out. A very inspiring call for papers\footnote{\url{http://www.program-transformation.org/BX12}}, excellent atmosphere during the workshop, friendly and productive reviewers. A typical example of an event that appreciates it when you prepare a dedicated paper for which it becomes the one and only target venue. I submitted against all the odds (December deadlines are rather stressful), got there against all the odds (had to fly from ETAPS to SAC and then back) and still regretted nothing. I will not attend BX 2013 (my grandmother has her 80th birthday on the day of the workshop, and one has to set priorities), but I would if I could. Definitely recommended for people at least marginally interested in this field~\cite{bxReport}. \item[SAC 2012 (PL track)]~\\ Yet another experimental submission in the sense that I hardly knew anyone from the programme committee at that moment. However, I knew people from my technological space who had published there, and the call for papers\footnote{\url{http://www.cis.uab.edu/bryant/sac2012}} was inspiring, so I gave it a try, and did not regret it. The whole conference is huge, so I was afraid that attending would be unproductive, but I was proven wrong: if you know at least a couple of people with similar research interests and stick to them all the time, you will find many other similar researchers to talk to. I did not submit anything to SAC 2013 due to bad planning (holidays right before the deadline are unproductive), but I definitely will consider it very seriously every year from now on. \item[LDTA 2012 (ETAPS workshop)]~\\ Trying to be a good programme committee member, I knew I had to attend, so I submitted the best result of 2011 there: the Grammar Hunter. I was also pleased to see how the current call for papers\footnote{\url{http://ldta.info/ldta_2012_cfp.pdf}} positioned LDTA as ``SLE, but with more grammarware''. The future of LDTA remains to be determined, but it has departed from ETAPS and will most probably join forces with SLE. 
\item[ECMFA 2012]~\\ The call for papers\footnote{\url{http://www2.imm.dtu.dk/conferences/ECMFA-2012/contributions/?page=cfp}} made it look like I had a chance, so I submitted something that I believed to be of good quality and of possible interest to the modelware researchers. One of the reviewers said that the paper ``clearly makes the most contribution of any paper I read'', which was rather encouraging, but the submission still ended in rejection. In the end, I must conclude that I should have devoted this time to writing for ICPC or one of the journal special issues with deadlines around early spring. \item[TFP 2012]~\\ The call for papers\footnote{\url{http://www-fp.cs.st-andrews.ac.uk/tifp/TFP2012/TFP_2012/CFP12.txt}} looked challenging, but I really liked the ``trends'' aspect of it, since most traditional conferences dislike overview papers unless they are extremely strong and retrospective: there is simply no place for overviews of the current trends, unless you are already in the field and you systematically explore the ``future work'' sections of all papers you come by. In contrast to BX, this was an example of a venue that did not appreciate a paper prepared specifically for it on a topic relevant to me. Less than two weeks after submission I received a short notification that it was judged to be out of scope. This was obviously not the only reason, since other (stronger, less ``trendy'') papers from my technological space like \cite{DBLP:journals/corr/abs-1201-0024} were accepted, so I can only conclude that I failed to explain the link between grammar transformation and the functional programming paradigm properly. Given the fact that I am not qualified to report on ``trends'' in any other field, I doubt that I will try sending anything to this venue in the future, but I surely do not discourage others from doing so. Personally, it would have been more productive to pursue MoDELS, which had a competing deadline this year. \item[JUCS (journal)]~\\ The call for papers\footnote{\url{http://www.jucs.org/ujs/jucs/info/special_issues/sbcars_cfp.pdf}} made it clear that this special issue was linked to a workshop in which I did not participate, but the call was open, and I answered. I cannot say that that was very appreciated: the reviews for \cite{MediaWiki2012} came very late (several months after the notification deadline) and were extremely short and discouraging. \item[SCAM 2012]~\\ This is the third time I have served as a programme committee member for SCAM, to which I was invited after our paper with Ralf L{\"a}mmel got a best paper award in 2009~\cite{JLS-SCAM2009}. I have never attended since then, and received a warning that I would not be included next year if I missed the event again. So, putting date-conflicting events like SLE and CSMR aside, I did my best, which for me meant submitting one paper to SCAM and one to the colocated ICSM (see below). The topic chosen for SCAM (island grammars) seemed to be in the scope of the call for papers\footnote{\url{http://scam2012.cs.usask.ca/CFP}}, but the paper was seen as weird and immature, and was hopelessly rejected. The reviews it received were pretty helpful, even though one of the reviewers really hated the ``in the cloud'' aspect (and that is exactly how I tried to sell it). Apparently, putting some effort into submitting something has already been noticed, since I have been, against all the odds, invited to the programme committee again for SCAM 2013. 
\item[ICSM 2012]~\\ The call for papers\footnote{\url{http://selab.fbk.eu/icsm2012/download/cfp-icsm2012.pdf}} came to my attention right after the rejection letter from ECMFA, and I decided that ICSM would be a good venue for the guided grammar convergence methodology (\S\ref{S:guided}). Getting a paper there would also increase my chances of going to SCAM (see above for the reasons). The reviews were rather cold, but some of them (all except one) were useful nonetheless. \item[NordiCloud (WICSA/ECSA workshop)]~\\ Not really being an architecture researcher, I would never have considered going to WICSA/ECSA, but the call for contributions\footnote{\url{http://46.22.129.68/NordiCloud/?page_id=39}} came out precisely a couple of weeks after my SCAM rejection, and I did not feel I had enough energy to rewrite the island parsing paper completely, so NordiCloud was a relatively cheap way for me to resubmit the same material after a minor revision. It did not pay off: most of the reviewers were scared off just by seeing a grammar-related submission. \item[FSE 2012 (NIER track)]~\\ The call for papers\footnote{\url{http://www.sigsoft.org/fse20/cfpNewIdeas.html}} came out at a very busy time, but the four-page limit was easily reachable, so I submitted two papers on different new ideas. Unfortunately, they were indeed more idealistic proposals for discussing and considering certain aspects than the usual ``short papers'' that are just normal papers at an early stage. Both were hopelessly rejected, and I still want to find some venue for the future that would be good for sharing and discussing fresh ideas --- perhaps OBT? I have to try to find out. \item[SoTeSoLa 2012 (summer school)]~\\ An experiment in ``Research 2.0'' driven mostly by Jean-Marie Favre and Ralf L{\"a}mmel, this summer school was by far not a typical one. There were a lot of innovations: submitting a one-page profile of yourself, making a one-minute video about yourself, listening to lots of remote lectures, having a hackathon distributed in time and space, registering at a social networking website, etc. Not all of them were entirely successful: partly due to being ahead of their time, partly due to other reasons, which are now being dissected, analysed and researched by Jean-Marie Favre. I was involved in all kinds of activities from a relatively early stage, and in the end my involvement was officially classified as serving as a ``Social Media Chair'' and a ``Hackathon Lead Coordinator''. This was not a publishing venue, and I did not give any invited lecture, but it was fun to be a part of it. \item[SATToSE 2012 (seminar)]~\\ A non-publishing seminar series where I gave a presentation on bidirectional grammar transformation~\cite{BGX2012-talk}. The material presented there was in a state somewhere between \cite{Metasyntactically2012} and the planned future paper on bidirectionalisation. \item[POPL 2013]~\\ The call for papers\footnote{\url{http://popl.mpi-sws.org/2013/popl2013-cfp.pdf}} was concise and crunchy, but POPL is one of the venues that do not require much advertisement. I poured a lot of effort into \cite{Guided-POPL2013}: I completely redesigned the convergence process (see \S\ref{S:guided}), reimplemented the prototype and rewrote the paper with respect to \cite{Guided-ECMFA2012,Guided-ICSM2012}. In a way, it did pay off: the paper was rejected, but the reviews were among the most useful that I have received this year. 
\item[NWPT 2012]~\\ The call for papers\footnote{\url{http://nwpt12.ii.uib.no/call-for-papers}} was brought to my attention by Anya Helene Bagge, a co-organiser of this workshop. In the extended abstract that I submitted there, I apparently went overboard with the required abstraction level and the assumed level of grammatical knowledge, and the recent POPL rejection has possibly jeopardised the outsourcing of the usefulness argument for the method. The reviews were curt and bleak. \item[EMSE (journal)]~\\ The call for papers\footnote{\url{http://sequoia.cs.byu.edu/lab/?page=reser2013&section=emseSpecialIssue}} called for ``experimental replications'' and went to great lengths to explain how important it is to be able to publish not just the experiments themselves, but also replications thereof. I was immediately convinced, but decided to reinterpret the definition of a replication. Instead of doing classical empirical studies, I presented research activities (and in particular prototype engineering) as experiments. That way, the replications were also ``experiments'' in that sense: they were intended to cover an older experiment and could therefore be measured and assessed on the grounds of that coverage. I could even find some related work on the topic in the form of papers that described the prototype development process itself. My paper was intended to contain three case studies: (1) replicating the grammar convergence case study of the Factorial Language from \cite{Convergence2009} with the guided grammar convergence methodology (see \S\ref{S:guided}); (2) replicating a bigger grammar convergence case study of Java from \cite{JLS-SQJ2011} with the more abstract and concise Extended XBGF (see \S\ref{S:EXBGF}); (3) replicating both of these case studies with a bidirectional $\Upxi$BGF\ (see \S\ref{S:CBGF}). Due to the insane amount of work this turned out to be, only the first two replications made it into a 42-page paper~\cite{Incremental2012}. Only one of the three reviewers was excited by my approach, and all three agreed that the empirical software engineering journal is not the right venue for such a report. \item[MPM 2012 (MoDELS workshop)]~\\ Basically, this venue was chosen \emph{after} I had written the paper. The text underwent some polishing after the choice was made, but the topic was not adjusted. I had a nice idea of transforming megamodels in order to make a good story out of them (see \S\ref{S:mega}): a substantial contribution was not yet there (and such work is still ongoing), but I wanted to expose it to the public and to discuss it first. The call for papers\footnote{\url{http://avalon.aut.bme.hu/mpm12/MPM12-CFP.pdf}} for MPM looked the most inviting for this kind of cross-paradigm approach among all MoDELS workshops, and indeed the reviewers found the paper weird yet acceptable, so I was able to give a short presentation and hang my poster there~\cite{Renarration-MPM2012-talk}. \item[XM 2012 (MoDELS workshop)]~\\ The topics list\footnote{\url{http://www.di.univaq.it/XM2012/page.php?page_id=13}} provided by the organisers of this workshop was fascinating, and I desperately wanted to submit something, but eventually gave up on finding the time it deserved. Soon after that, the deadline was extended, and I had no other choice than to write down the idea that had been floating around in my head for a while (see \S\ref{S:NBGF}). 
\item[SCP (journal)]~\\ The call for systems\footnote{\url{http://www.win.tue.nl/~mvdbrand/SCP-EST}} was very much in sync with what its guest editors have tried to achieve in recent years, and I support them wholeheartedly in that. The Grammar Zoo, one of the essential parts of the SLPS~\cite{SLPS}, which did not receive a lot of my attention in 2012 but was always on my mind, was packaged and submitted there both as an available system and as an important repository of experimental systems in grammarware. The outcome will become known in 2013. \end{description} \subsection{Inspiring venues}\label{S:novenues} There have been many venues that I did not submit anything to, but not because I did not want to. Their calls for papers gave me inspiration to work on something, even though I was not productive enough to be able to fit into their deadlines or produce anything of value at the required level. \begin{description} \item[MSR 2012]~\\ The mining challenge\footnote{\url{http://2012.msrconf.org/challenge.php}} of MSR looked very interesting, so I looked at it, but since I was looking specifically for grammars, it did not work out at all: only two grammars were found, and there was no sensible way to connect them to the rest of the system. If more of them, written in a variety of EBNF dialects, could have been obtained, it could have become an interesting case study similar to \cite{MediaWiki2012}. \item[Laws and Principles of Software Evolution]~\\ The call for papers\footnote{\url{http://listserv.acm.org/scripts/wa-acmlpx.exe?A2=ind1111&L=seworld&F=&S=&P=25841}} for this special issue of JSME looked tempting, so I even emailed the editors, asking for some additional information. Unfortunately, the collaboration that I hoped to achieve with other people did not work out, and nothing was produced in time. \item[Success Stories in Model Driven Engineering]~\\ The call for papers\footnote{\url{http://www.di.univaq.it/ssmde}} came out at the time when I was busy with all kinds of other initiatives. Besides that, this special issue of SCP was actually looking for extended reports on already published projects, and I was busy with new experiments. Possibly, a strong ``lessons learnt'' kind of paper on grammar hunting would make sense, but I was too immersed in new stuff at the time to go back. However, I have to admit that when/if I finally sit down to write a comprehensive grammar recovery paper (i.e., connecting \S\S\ref{S:EDDrec}, \ref{S:mutation}, \ref{S:vis} and \ref{S:repo}), it must go to either SCP or SP\&E. \item[CloudMDE (ECMFA 2012 workshop)]~\\ This was \emph{the} venue that gave me the eerie thought of writing a ``parsing in the cloud'' paper (see \S\ref{S:parsing}). However, I was disheartened by the rejection of \cite{Guided-ECMFA2012} at ECMFA and decided not to submit anything to ECMFA workshops\footnote{V.~Zaytsev (grammarware). ``Yet another bridging attempt failed: my grammar paper got rejected at @\href{http://twitter.com/ecmfa2012}{ecmfa2012}. Now I will also go submit the \#CloudMDE draft elsewhere.'' Tweet. \url{https://twitter.com/grammarware/status/189976445995593728}. 11 April 2012, 9:21.}. \item[ICPC 2012]~\\ The call for papers\footnote{\url{http://icpc12.sosy-lab.org/CfP.pdf}} competes date-wise with many other good venues, so this year ICPC just happened not to be among the ones I chose as my targets. 
\item[RC 2012]~\\ The call for papers\footnote{\url{http://www.reversible-computation.org/2012/index91b1.html?call_for_papers}} for the fourth workshop on reversible computation gave me a lot of ideas and keyword pointers for the bidirectionality topic. However, I did not feel confident enough to submit anything. Anyway, thanks a lot and congratulations on becoming a conference in 2013! \item[CSCW 2013]~\\ This is not a typical venue for me, but I have a dream of eventually submitting something wiki-related there. The call for participation\footnote{\url{http://cscw.acm.org/participation_paper.html}} was as good as it always is, and even better this year because they introduced a new rule concerning paper size: 10 pages is no longer the \emph{limit}, but rather the \emph{standard}. If your idea fits on a smaller number of pages, your reviewers have the right to complain if you try to bloat your submission. On the other hand, if that is not enough, you can always make your paper longer, but the contribution then needs to grow accordingly. I believe that with small incremental and non-disruptive ideas like these, we could achieve modern comfortable publishing models more easily than with endeavours to revolutionise the field. \item[PPDP/LOPSTR 2012]~\\ The calls for papers\footnote{\url{http://dtai.cs.kuleuven.be/events/PPDP2012/ppdp-cfp.txt}}$^,$\footnote{\url{http://costa.ls.fi.upm.es/lopstr12/cfp.pdf}} were both interesting, but at my level I could not actually decide between the two venues. I was working honestly toward the seemingly achievable goal (see \S\ref{S:3BGF}), but it turned out to be unachievable. Being insecure about my ability to write a strong paper about negative results, I gave up. \item[FM+AM 2012 (SEFM workshop)]~\\ This was \emph{the} workshop\footnote{\url{http://ssfm.cs.up.ac.za/workshop/FMAM12.htm}} that set my thoughts in the agile/extreme mode, which ultimately led to the paper at XM (see \S\ref{S:NBGF}) simply because I did not manage to complete the work before the FM+AM deadline. Imagine my surprise when I found out that FM+AM was cancelled due to the lack of good submissions! \item[WoSQ 2012 (FSE workshop)]~\\ The call for papers\footnote{\url{http://sites.google.com/site/wosq2012/cfp}} led me to believe that this would be a good possible venue for the paper on grammar mutations (see \S\ref{S:mutation}). However, the time was too tight, and both of my NIER submissions had been rejected, so an FSE workshop stopped looking that attractive after all. \item[SQM 2012 (CSMR workshop)]~\\ The workshop\footnote{\url{http://sqm2012.sig.eu}} happened at the same time as I was attending both ETAPS and SAC, so I could not possibly be at a third place at the same time as well, but I just want to name it as a relatively small venue where I enjoyed reviewing a couple of papers as a PC member (I will be on the PC next year as well). \item[WCN 2012]~\\ The website\footnote{\url{http://www.wikimediaconferentie.nl}} is in Dutch, as is the conference itself. This was my second experience being a Program Chair (the first one was with WCN 2011), and this time I counted: 842 emails needed to be sent or answered by me in order for this conference to happen. Luckily, CWI (my employer) did not mind since they could proudly list ``one of theirs'' as the PC at a venue where one of the keynote speakers was Jimmy Wales~\cite{JimboWCN12}. \end{description}
\section{The 500/500 Metric for Academic HPC Resources} Lattice QCD has traditionally been, and continues to be, one of the most computationally demanding research fields within quantitative science. Progress in Lattice QCD has closely tracked advances in high performance computing (HPC). It is unsurprising then that the semi-annual supercomputing Top 500 list\footnote{\url{http://top500.org/}} is closely watched by many researchers within lattice QCD. The Top 500 provides a straightforward answer to those wanting to know which country has the biggest and the best machines. It is arguable that such a simple comparison is not always the most relevant. In certain circumstances, it may be more pertinent to ask a different question: \emph{How much supercomputing access do I have relative to my competitors overseas?} In an attempt to provide an answer, our starting point is the Academic Ranking of World Universities (ARWU) list compiled by Shanghai Jiao Tong University, China, also known as the Shanghai Ranking\footnote{\url{http://www.shanghairanking.com/}}. This survey lists the top 500 ranked universities in the world, which we shall simply refer to as the ARWU 500. Table~\ref{tab:ARWU500} lists the top 6 countries, as ranked by the Academic Ranking of World Universities (ARWU) in 2012. The national rankings are determined in a similar manner to those based on the Olympic medal tallies. Countries are first ranked in descending order by the number of university entries they have in the ARWU Top 20, then by the number of Top 100 universities, followed by the number of Top 200, 300, 400 and 500 entries respectively. \begin{table}[!hb] \centering \begin{tabular}{lrrrrrr} \sc Country & Top 20 & Top 100 & Top 200 & Top 300 & Top 400 & Top 500 \\ \hline USA & 17 & 53 & 85 & 109 & 137 & 150 \\ UK & 2 & 9 & 19 & 30 & 33 & 38 \\ Japan & 1 & 4 & 9 & 9 & 16 & 21 \\ Australia & -- & 5 & 7 & 9 & 16 & 19 \\ Germany & -- & 4 & 14 & 24 & 30 & 37 \\ Canada & -- & 4 & 7 & 17 & 18 & 22 \\ \end{tabular} \caption{\label{tab:ARWU500} Top 6 countries, as ranked by the Academic Ranking of World Universities (ARWU) in 2012.} \end{table} These 6 countries will form the basis of our study of the HPC resources available to academics in Australia, in comparison to our overseas competitors. The list includes Japan, Germany and the USA, the traditional leaders of the supercomputing field. Canada has broadly similar socioeconomic characteristics to Australia and hence provides a useful point of comparison. We now turn our attention to the June 2012 Top 500 Supercomputer list. We filter the Top 500 supercomputing data by restricting ourselves to the aforementioned top 6 countries in the ARWU ranking. The top 3 entries for each country in the Academic and Research segments of the Top 500 supercomputer list are displayed in Table~\ref{tab:top500}. Also shown are the total number of entries, number of compute cores, and combined computing power for all Academic/Research entries in the list for that country. The quantity that we will be interested in is the combined $R_{\rm max}$ value for each country, which is an indicator of the total number of Teraflops available to the Academic/Research segments in that country. $R_{\rm max}$ is the LINPACK benchmark score and provides a measure of the supercomputer's speed in Teraflops.
\begin{table}[!tb] \centering \begin{tabular}{lrrrrr} \textsc{Rank} & \textsc{Country/Site} & $N_{\rm cores}$ & $R_{\rm max}$ & $R_{\rm peak}$ \\ \hline\\[-3mm] & \multicolumn{2}{l}{\textbf{\large Australia}} & & & \\ \hline 31 & VLSCI/Avoca & 65536 & 690.2 & 838.9 \\ 139 & NCI-NF/Vayu & 11936 & 126.4 & 139.9 \\ 248 & iVEC & 9600 & 87.2 & 107.5 \\ & \bf Total: 3 Academic/Research entries & \bf 87072 & \bf 903.8 & \bf 1086.3 \\ \hline\\[-3mm] & \multicolumn{2}{l}{\textbf{\large Canada}} & & & \\ \hline 66 & SciNet/U. Toronto/Compute Canada/GPC & 30912 & 261.6 & 312.82 \\ 71 & Calcul Canada/Calcul Qu\'ebec/Sherbrooke & 37728 & 240.3 & 316.9 \\ 90 & Environment Canada & 8192 & 185.1 & 251.4 \\ & \bf Total: 9 Academic/Research entries & \bf 137872 & \bf 1342.5 & \bf 1751.3 \\ \hline\\[-3mm] & \multicolumn{2}{l}{\textbf{\large Germany}} & & & \\ \hline 4 & Leibniz Rechenzentrum/SuperMUC & 147456 & 2897.0 & 3185.1 \\ 8 & Forschungszentrum Juelich/JuQUEEN & 131072 & 1380.4 & 1677.7 \\ 25 & Forschungszentrum Juelich/JUGENE & 294912 & 825.5 & 1002.7 \\ & \bf Total: 16 Academic/Research entries & \bf 753944 & \bf 7062.6 & \bf 8471.0 \\ \hline\\[-3mm] & \multicolumn{2}{l}{\textbf{\large Japan}} & & & \\ \hline 2 & RIKEN/K computer & 705024 & 10510.0 & 11280.4 \\ 12 & IFERC/Helios & 70560 & 1237.0 & 1524.1 \\ 14 & GSIC/Tokyo Inst. of Tech./TSUBAME 2.0 & 73278 & 1192.0 & 2287.6 \\ & \bf Total: 23 Academic/Research entries & \bf 1184258 & \bf 17089.0 & \bf 20430.9 \\ \hline\\[-3mm] & \multicolumn{2}{l}{\textbf{\large United Kingdom}} & & & \\ \hline 13 & STFC/Daresbury Laboratory/Blue Joule & 114688 & 1207.8 & 1468.0 \\ 20 & U. Edinburgh/DiRAC & 98304 & 1035.3 & 1258.3 \\ 32 & U. Edinburgh/HECToR & 90112 & 660.2 & 829.0 \\ & \bf Total: 16 Academic/Research entries & \bf 455584 & \bf 5875.3 & \bf 7553.0 \\ \hline\\[-3mm] & \multicolumn{2}{l}{\textbf{\large United States}} & & & \\ \hline 1 & DOE/NNSA/LLNL/Sequoia & 1572864 & 16324.8 & 20132.7 \\ 3 & DOE/SC/Argonne/Mira & 786432 & 8162.4 & 10066.3 \\ 6 & DOE/SC/Oak Ridge/Jaguar & 298592 & 1941.0 & 2627.6 \\ & \bf Total: 87 Academic/Research entries & \bf 5063813 & \bf 44953.9 & \bf 56928.4 \\ \hline \end{tabular} \caption{\label{tab:top500}Selected entries in the June 2012 Top 500 Supercomputer list in the Academic and Research segments. The top 3 entries are listed for each of the chosen countries, as well as the total number of entries and the aggregate computing capacity of the entries. $N_{\rm cores}$ is the number of compute cores. $R_{\rm max}$ (the LINPACK benchmark score) and $R_{\rm peak}$ (the theoretical peak) are in Teraflops.} \end{table} The most straightforward measure of the supercomputing power available to researchers in a given country would be to compare the integrated $R_{\rm max}$ values in the academic segment. However, this simple measure does not reflect the level of competition for those resources. In order to provide a better estimate of the HPC resources available to a given research group, we propose a novel measure called the 500/500, which is calculated for each country by taking the combined Teraflops of the Academic and Research entries in the Top 500 supercomputer list and dividing by the number of institutions in the ARWU 500. A summary of the data is presented in Table~\ref{tab:500500}. The measure assumes that the number of universities in the ARWU 500 is a good representation of the number of academic supercomputing groups in the country.
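For concreteness, the arithmetic behind the 500/500 score can be written out in a few lines of Python. The short sketch below is purely illustrative (the dictionaries and identifiers are our own); it takes the combined $R_{\rm max}$ totals from Table~\ref{tab:top500} and the ARWU 500 counts from Table~\ref{tab:ARWU500}, and reproduces the 500/500 values summarised in Table~\ref{tab:500500}.
\begin{verbatim}
# 500/500 metric: combined Academic/Research R_max (Tflops) divided by
# the number of ARWU 500 institutions in the same country.
total_rmax = {"Australia": 903.8, "Canada": 1342.5, "Germany": 7062.6,
              "Japan": 17089.0, "UK": 5875.3, "USA": 44953.9}
arwu_500   = {"Australia": 19, "Canada": 22, "Germany": 37,
              "Japan": 21, "UK": 38, "USA": 150}

for country, rmax in total_rmax.items():
    score = rmax / arwu_500[country]
    print(f"{country:10s} 500/500 = {score:6.1f} Tflops/institution")
\end{verbatim}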
\begin{table}[!h] \centering \begin{tabular}{lrrrr} \sc Country & \sc Top 500 & \sc Total $R_{\rm max}$ & \sc ARWU 500 & 500/500 \\ \hline Australia & 3 & 903.8 & 19 & 47.6 \\ Canada & 9 & 1342.5 & 22 & 61.0 \\ Germany & 16 & 7062.6 & 37 & 190.9 \\ Japan & 23 & 17089.0 & 21 & 813.8 \\ UK & 16 & 5875.3 & 38 & 154.6 \\ USA & 87 & 44953.9 & 150 & 299.7 \end{tabular} \caption{\label{tab:500500} Data of interest for the selected countries in 2012. Listed are the number of Academic/Research Top 500 entries, the combined $R_{\rm max}$ (in Teraflops) under the Academic and Research segments, the number of ARWU 500 entries and our proposed 500/500 measure of academic HPC resources (in Tflops/institution).} \end{table} \begin{figure}[!h] \centering \tikzset{external/export next=false} \begin{tikzpicture} \begin{axis}[ ylabel=500/500 Score, symbolic x coords={Australia,Canada,Japan,Germany, UK, USA}, xtick=data] \addplot[ybar,fill=blue] coordinates { (Australia, 47.6) (Canada, 61.0) (Germany, 190.9) (Japan, 813.8) (UK, 154.6) (USA, 299.7)}; \end{axis} \end{tikzpicture} \caption{\label{fig:500500} The 500/500 scores (in units of Tflops/institution) for the selected countries in 2012.} \end{figure} As demonstrated in Figure~\ref{fig:500500}, Japanese researchers are the clear winners, with a 500/500 score more than double that of the second-placed USA, and nearly twenty times that of Australia! While the USA easily has the highest integrated $R_{\rm max}$ score, they are ranked second on the basis of their 500/500 score, a reflection of the intense competition for those resources as indicated by their place at the top of the ARWU 500. Of the six selected countries, Australia ranks last according to the 500/500 metric. \section{GPU Computing} As demonstrated in the previous section, Australian researchers are disadvantaged with regard to HPC resources when compared to our overseas competitors. The lack of HPC resources is particularly acute in our field of Lattice QCD, where some of our competitors have access to dedicated lattice machines capable of hundreds of Teraflops. The ILDG program allows for the sharing of gauge field configurations within a group or with the lattice QCD community at large\cite{ildg}. The PACS-CS collaboration in Japan generously released to the general lattice community several gauge field ensembles of large volume and light quark mass suitable for cutting edge calculations\cite{pacs-cs}. Through the use of these configurations we have been able to bypass the unaffordable gauge field generation process and devote our limited cycles to the production of quark propagators. It should come as no surprise that with the relatively scarce level of HPC resources available to us when compared to our competitors, we have turned to GPUs as a cost-effective way of competing with overseas groups. Lattice QCD has a geometric parallelism that makes it ideally suited to be put on GPUs\cite{Egri:2006zm,Clark:2009wm}. NVIDIA has two distinct GPU product lines that are relevant to HPC. The Tesla line of cards specifically targets HPC users, whereas the commodity GeForce graphics cards target the much bigger computer gaming market. The specific cards that we are interested in are listed in Table~\ref{tab:GPUs}. As we can see, in comparison to the GTX cards, the Tesla GPUs feature improved double precision performance and ECC memory. These features come at a cost, however, with a Tesla card costing roughly 4 times as much as a top-end GTX card.
\begin{table} \centering \begin{tabular}{cccccc} \emph{Architecture} & \emph{GPU} & \emph{Cores} & \emph{Peak (SP)} & \emph{Peak (DP)} & \emph{ECC} \\ \hline Fermi & GTX 580 & 512 & 1581 Gflops & 166 Gflops & No \\ & Tesla M2090 & 512 & 1331 Gflops & 665 Gflops & Yes \\ Kepler & GTX 680 & 1536 & 3090 Gflops & 95 Gflops & No \\ & Tesla K20 & 2496 & 3520 Gflops & 1170 Gflops & Yes \end{tabular} \caption{\label{tab:GPUs} Previous (Fermi) and current (Kepler) generation NVIDIA GPUs. Shown are the number of CUDA cores, the peak floating point performance in single and double precision, and the ECC memory capability.} \end{table} Fortunately, the numerical requirements for quark propagator generation are much less strict than those for gauge field generation. The need to preserve unitarity during gauge field generation typically requires double precision, and as the generated gauge fields are not easily ``checked'', one also requires ECC memory. In contrast, for quark propagators the tolerance when calculating the application of the fermion matrix inverse is typically $\sim 10^{-5},$ which means single precision is sufficient. Furthermore, the solution to the linear system is easily verified, avoiding the need for ECC memory. Hence, GTX cards are perfectly viable for quark propagator calculation. \section{Adventures in Single Precision} To obtain the action of the inverse fermion matrix $D^{-1}$ on a vector we calculate the solution to the linear system \begin{equation} D\vec{x} = \vec{b}. \label{eq:linsys} \end{equation} As the fermion matrix $D$ is non-Hermitian the most common algorithm for obtaining the solution is BiCGStab\cite{bicgstab} or some variant thereof. In double precision BiCGStab usually converges to a solution, even though the typical convergence is not smooth but rather `spiky'. However, in single precision we find that BiCGStab is numerically unstable. When attempting to invert the fermion matrix on large lattices and at light quark masses, BiCGStab frequently fails to converge. To avoid this, we propose to use an algorithm that minimises the residual and hence will converge smoothly. The conjugate gradient (CG) algorithm\cite{cg} minimises the residual, but is only applicable to cases where the matrix being inverted is Hermitian positive-definite (Hpd). There are two simple ways to convert our original problem into a form suitable for the CG algorithm. The first is to simply multiply (\ref{eq:linsys}) by $D^\dagger$ to obtain the CGNR form of the normal equations, \begin{equation} D^\dag D\vec{x} = D^\dag \vec{b}. \label{eq:cgnr} \end{equation} The second is to solve the CGNE form of the normal equations \begin{equation} D D^\dag \vec{x}' = \vec{b}, \label{eq:cgne} \end{equation} where the solution to the original equations is given by $\vec{x} = D^\dag \vec{x}'.$ When solving the CGNE form of the normal equations, the residual for the normal form $|D D^\dag \vec{x}' - \vec{b}|$ and the residual for the original form $|D \vec{x} - \vec{b}|$ coincide by construction, so when CGNE converges we have obtained the solution to the original equation to the desired tolerance $\delta_{\rm tol}.$ Furthermore, we find that even in single precision the estimated residual $|\vec{r}|$ and the true residual coincide for the CGNE process. In double precision, when the CGNR process converges this usually implies that we have obtained the desired solution. However, in single precision, the solution to (\ref{eq:cgnr}) converges well before we have obtained the solution to (\ref{eq:linsys}).
To work around this, we propose a simple modification of the CGNR process. When the CGNR normal equation converges with tolerance $\delta_{\rm ne},$ check whether we have a solution to the original equation within $\delta_{\rm tol}.$ If not, adjust $\delta_{\rm ne}$ and restart CGNR with the current solution. Our modified CGNR algorithm with restarts is presented in Figure~\ref{fig:CGNR}. A comparison of the typical behaviour of CGNE and our CGNR with restarts is shown in Figure~\ref{fig:combk13770}. We can see that the estimated residual $|\vec{r}_{\rm ne}|$ and the true residual for the CGNR normal equations $\epsilon' = |D^\dag (D\vec{x} - \vec{b})|$ coincide until the CGNR system (\ref{eq:cgnr}) has converged, after which they diverge due to hitting the limits of single precision. What is interesting is that even though the CGNR process undergoes restarts, the true residual for the original system $\epsilon = |D\vec{x} - \vec{b}|$ decreases smoothly until it has converged to within the desired tolerance. Tests comparing CGNR and CGNE were performed at several quark masses and we found that the modified CGNR process (with restarts) consistently converges significantly faster than CGNE, requiring $\sim 10\%-30\%$ fewer iterations to reach the desired tolerance. \begin{figure} \hrule \vspace{6pt} \begin{algorithmic} \STATE Initialise $\delta_{\rm ne} := \delta_{\rm tol}$ to the desired solution tolerance. \LOOP \STATE Set $\vec{y} := \vec{r}_{\rm ne} := D^\dag \vec{b} - D^\dag D\vec{x},\quad \rho := |\vec{r}_{\rm ne}|^2.$ \WHILE{$\sqrt{\rho} > \delta_{\rm ne}$} \STATE Set $\beta := \inner{\vec{y}}{D^\dag D\vec{y}},\quad \omega := \rho/\beta.$ \STATE Set $\vec{x} := \vec{x} + \omega\vec{y},\quad \vec{r}_{\rm ne} := \vec{r}_{\rm ne} - \omega D^\dag D\vec{y}.$ \STATE Set $\rho' := \rho,\quad \rho := |\vec{r}_{\rm ne}|^2,\quad \theta := -\rho/\rho'.$ \STATE Set $\vec{y} := \vec{r}_{\rm ne} - \theta\vec{y}.$ \ENDWHILE \STATE Set $\epsilon := |D\vec{x} - \vec{b}|$ to the true residual for the original equation. \STATE \textbf{if} $\epsilon < \delta_{\rm tol}$ \textbf{then exit} \COMMENT{\emph{We are finished.}} \STATE Set $\epsilon' := | D^\dag D\vec{x} - D^\dag \vec{b}|$ to the true residual for the normal equation. \STATE Update $\delta_{\rm ne} := \tau \cdot \delta_{\rm tol} \cdot (\epsilon'/\epsilon). $ \COMMENT{\emph{Restart CGNR.}} \ENDLOOP \end{algorithmic} \vspace{4pt} \hrule \caption{\label{fig:CGNR} The modified CGNR algorithm with restarts. The constant $\tau \sim 0.9$ controls the restart frequency.} \end{figure} \begin{figure}[!tb] \centering \tikzsetnextfilename{combk13770} \begin{tikzpicture}[scale=1.0] \begin{semilogyaxis}[ xlabel=Iterations] \pgfplotstableread{cgnrk13770.dat}\cgnrdata \addplot[color=red,mark=none] table[y index=1] {\cgnrdata}; \addplot[color=blue,mark=none] table[y index=2] {\cgnrdata}; \addplot[color=dark-green,mark=none] table[y index=3] {\cgnrdata}; \pgfplotstableread{cgnek13770.dat}\cgnedata \addplot[color=purple,mark=none] table[y index=1] {\cgnedata}; \addplot[color=black,domain=0:4000,dashed] {(1.0e-5)}; \legend{$|\vec{r}_{\rm ne}|$ (CGNR), $\varepsilon'$ (CGNR), $\varepsilon$ (CGNR), $|\vec{r}|$ (CGNE), Tolerance} \end{semilogyaxis} \end{tikzpicture} \caption{\label{fig:combk13770} Typical behaviour of the CGNE process and the CGNR process with restarts. Shown for CGNR are the estimated residual $|\vec{r}_{\rm ne}|$ and true residual $\varepsilon'$ for the normal equation, as well as the true residual $\varepsilon$ for the original equation.
For CGNE we show the estimated residual $|\vec{r}|$ (which coincides with the true residual).} \end{figure}
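For readers who wish to experiment with the scheme of Figure~\ref{fig:CGNR}, the following Python/NumPy sketch is a toy, dense-matrix transcription of the restarted CGNR iteration. It is our own illustration rather than the production GPU code: the test matrix, right-hand side and all identifiers are invented for the example, and in a lattice code the dense products $D\vec{v}$ and $D^\dag\vec{v}$ would be replaced by applications of the fermion matrix.
\begin{verbatim}
import numpy as np

def cgnr_restarts(D, b, tol=1e-5, tau=0.9, max_restarts=50):
    """CG on the normal equations D^dag D x = D^dag b, restarted until the
    true residual of the original system, |D x - b|, falls below tol."""
    Dh = D.conj().T
    x = np.zeros_like(b)
    delta_ne = tol
    for _ in range(max_restarts):
        r = Dh @ (b - D @ x)          # r_ne = D^dag b - D^dag D x
        y = r.copy()                  # search direction
        rho = np.vdot(r, r).real
        it = 0
        while np.sqrt(rho) > delta_ne and it < 10 * b.size:
            Ay = Dh @ (D @ y)
            omega = rho / np.vdot(y, Ay).real
            x = x + omega * y
            r = r - omega * Ay
            rho, rho_old = np.vdot(r, r).real, rho
            y = r + (rho / rho_old) * y
            it += 1
        eps = np.linalg.norm(D @ x - b)            # true residual, original system
        if eps < tol:
            return x, eps
        eps_ne = np.linalg.norm(Dh @ (D @ x - b))  # true residual, normal equations
        delta_ne = tau * tol * (eps_ne / eps)      # tighten tolerance and restart
    return x, eps

# Toy single-precision test problem standing in for the fermion matrix:
rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
D = (np.eye(n) + 0.1 * G / np.sqrt(n)).astype(np.complex64)   # well conditioned
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = (b / np.linalg.norm(b)).astype(np.complex64)

x, eps = cgnr_restarts(D, b)
print("final |Dx - b| =", eps)
\end{verbatim}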
\section*{Methods} Devices were fabricated using a co-lamination mechanical transfer technique similar to that described previously\cite{Dean:2010,Dean:2011}. Electrical leads consisting of a Cr/Pd/Au metal stack were deposited using standard electron-beam lithography after which the sample was etched into a Hall bar by exposure to oxygen plasma. Graphene/hBN stacks were fabricated on doped Si substrates with a $\sim300$~nm oxide layer. More than 20 devices were made in a similar way, where 6 devices show similar behavior to that reported here. We focus only on 2 high quality devices in the text, listing other examples in the supplementary materials. In electrical measurements the charge carrier density was varied using the doped silicon as a field effect gate. Four-terminal transport measurements were performed using a lock-in amplifier at 17~Hz with a $10-100$~nA source current. Samples were measured in a 31~T resistive magnet and $^{3}$He cryostat (sample in vapour). Longitudinal and Hall conductivities were calculated from the corresponding measured resistance according to $\sigma_{xx}=\rho_{xx}/(\rho_{xx}^{2}+R_{xy}^{2})$ and $\sigma_{xy}=R_{xy}/(\rho_{xx}^{2}+R_{xy}^{2})$, respectively. AFM images were acquired after device fabrication was complete, using an Omicron low temperature AFM system operated at room temperature. Imaging was performed using $V_{bias} = 0.2$~V and $\delta f=20$~Hz. Images were filtered to remove noise. Gap energies were estimated from the temperature dependence of longitudinal conductivity minima in the thermally activated regime, $\sigma_{xx}\propto e^{-\Delta/2k_{B}T}$ where $\Delta$ is the energy gap, $k_{B}$ is Boltzmann's constant and $T$ is the electron temperature. Each gap value was determined from the corresponding Arrhenius plot by fitting to the linear regime. \bigskip \section*{Acknowledgments} We thank A. MacDonald for helpful discussions. A portion of this work was performed at the National High Magnetic Field Laboratory, which is supported by National Science Foundation Cooperative Agreement No. DMR-0654118, the State of Florida and the U.S. Department of Energy. This work is supported by AFOSR MURI, FCRP through C2S2 and FENA. PK and FG acknowledge sole support from DOE (DE-FG02-05ER46215). JK and MI were supported by the National Science Foundation under Grant No. 0955625.
\section{Introduction} \label{sec:intro} It has long been realized that top quarks should decay before they have a chance to hadronize. As a consequence, spin effects present in their production should be passed on to their decay products largely unobscured by the depolarizing effects of soft QCD~\cite{Barger:1988jj,Kane:1991bg}. Several analyses at the Tevatron and LHC have been dedicated to establishing that this picture is correct by measuring the correlations in the spins of top-antitop pairs. Only recently, the presence of spin correlations has been established at the 3$\sigma$ level by D0 by using powerful matrix element techniques~\cite{Abazov:2011gi}, and at the 5$\sigma$ level by ATLAS by measuring the lab-frame $\Delta\phi(l^+,l^-)$ about the beam axis in dileptonic events~\cite{ATLAS:2012ao}. Besides verifying our basic intuition about the smallness of soft QCD effects in top quark production, it is also hoped that a detailed understanding of top quark spin correlations will provide a unique probe for new physics~\cite{Bernreuther:1993hq,Beneke:2000hk,Frederix:2007gi,Arai:2007ts,Degrande:2010kt,Cao:2010nw,Baumgart:2011wk,Barger:2011pu,Krohn:2011tw,Bai:2011uk,Han:2012fw,Fajfer:2012si}. This becomes a particularly pressing endeavor in light of the apparent anomalies in top quark production that are currently being reported by the Tevatron experiments~\cite{Aaltonen:2012it,CDF:AFB2011dilep,Abazov:2011rq,Abazov:2012bfa}. Nonetheless, measurement of the very rich structure of top quark spin correlations in perturbative QCD, and their highly nontrivial evolution over top production phase space, have not been considered in complete detail. Here, we will give a more general treatment of the correlations, highlighting some effects which are missed or incompletely captured when using common observables. We will also consider some novel ways that new physics can imprint itself on the correlations and propose a well-defined set of new measurements that can be carried out in either dileptonic or $l$+jets channels. Measurement of the spin correlations relies on studying the decay products of the two top quarks, and there are several well-known approaches. The matrix element method mentioned above makes use of the full information present in the six-body final state~\cite{Melnikov:2011ai}. While this is formally the most powerful method available, it is mainly useful for addressing the binary question of whether the expected correlation is present or absent. A classic and more physically transparent method picks a single particle from each decay -- usually a lepton, but it can also be a $b$-quark or light quark -- and looks for correlations in their polar angles defined in their parent tops' rest frames. This method requires specifying axes from which to measure the polar angles, and there are several well-motivated choices, on which we elaborate below. It is the standard method for most spin correlation studies, and can be optimized to pick up the Standard Model correlation at the Tevatron or LHC~\cite{Mahlon:1995zn,Uwer:2004vp,Mahlon:2010gw}. Other options include measuring the three-dimensional opening angle between two decay products (after boosting to a common frame)~\cite{Bernreuther:1997gs}, and, as definitively demonstrated by ATLAS, measuring the azimuthal angle between leptons around the barrel of the detector~\cite{Barger:1988jj,Mahlon:2010gw,ATLAS:2012ao}. The last strategy notably bypasses detailed reconstruction of the top production and decay kinematics. 
Given these many options, and the successful measurements at the Tevatron and LHC, what remains to be measured? The complete spin correlation is encoded in a 3$\times$3 matrix that depends on the top pair production mechanism ($q\bar q \to t \bar t$ or $gg\to t\bar t$), the partonic center-of-mass energy, and the production angle. Assuming separate P and C conservation, the matrix is parametrized by four numbers, which represent various combinations of production helicity amplitudes. If new C-~or P-violating physics is at play in top quark production, even more degrees of freedom can open up. Of course, no measurement of a single variable is adequate to pin down all of these matrix entries. Similarly, no measurement that is inclusive over top production phase space can fully reveal how the correlation matrix changes in different kinematic regions. Nonetheless, most of this information is readily measurable. The situation is especially interesting at the LHC, given the enormous top pair data set over a broad bandwidth of energies, as well as the fact that the correlations should change significantly as we scan from $p_T < m_t$ to $p_T > m_t$~\cite{Mahlon:2010gw}. At low $p_T$, top pairs are predominantly produced in a spin $s$-wave from $gg$ annihilation, and their spins are therefore totally anti-correlated along any axis. At high $p_T$, production becomes chiral, and the spins lie parallel to each other (and are hence totally {\it correlated}) along the $t\bar t$ production axis. It is common to describe this switchover in correlations purely in the language of classical spin ensembles, once an appropriate quantization axis has been defined for every point in production phase space. However, the interference between different spin channels almost always leads to sizeable contributions to the correlation matrix, making the full evolution much more complicated. Tracking this evolution in detail would provide us with many new opportunities to test for unexpected behavior in top production. In order to more comprehensively characterize the spin correlations, we here explore the utility of measuring various combinations of {\it all} of the tops' decay angles. Since much work has already been done using polar decay angles in various physically-motivated bases, the main question that we address is what can be learned by systematically including the corresponding azimuthal angles. Though often overlooked, these by themselves probe a large portion of the correlation matrix in a very simple way. By taking differences and sums of the azimuthal angles, we obtain distributions that are flat in the absence of spin correlations and are sinusoidally modulating in their presence. Such effects have already been suggested for finding and categorizing $t\bar t$ resonances~\cite{Baumgart:2011wk,Barger:2011pu}, and the threshold azimuthal-difference correlation around the beams is captured by the LHC lab-frame $\Delta\phi(l^+,l^-)$ measurements~\cite{ATLAS:2012ao,CMScorr}. Ref.~\cite{Barger:2011pu} also discussed the azimuthal-difference modulation about the production axis for $gg\to t\bar t$ in QCD, integrated over production angles. We will generalize this result in several ways, in particular demonstrating that the azimuthal sum contains complementary information. We will also show how both of these combinations of angles can be measured at arbitrary top production angles and velocities using complete event reconstruction.
Remarkably, it is possible to do so without inducing severe distortions in the distributions, even given the biases induced by acceptance cuts and the reconstruction difficulties associated with jets and neutrinos. Combining these pure azimuthal correlation measurements with a pure polar correlation measurement already gives us access to most of the correlation matrix, and almost fully captures the switchover that occurs between threshold production and high-$p_T$ production at the LHC. Much of this switchover can be seen in the azimuthal variables alone: at threshold tops exhibit mainly an azimuthal-difference modulation, while at high $p_T$ they exhibit mainly an azimuthal-sum modulation. For a broad range of phase space at high $p_T$, the latter is in fact the dominant manifestation of the entire QCD spin correlation effect. For completeness, we also discuss how to measure the remaining entries of the correlation matrix, which lead to mixed polar-azimuthal correlations. These are small in QCD across the entire top production phase space, and might therefore be an interesting place to look for deviations. New physics can affect the correlation matrix in a myriad of ways. We will concentrate on two simple examples whose dominant effects are felt in the azimuthal correlations. The first is a color-dipole operator, which induces an azimuthal-difference correlation whose phase directly reflects the operator's CP-violating phase. This perspective offers the interesting possibility of probing both the chromomagnetic and chromoelectric dipole strengths with a single measurement, potentially at the level of $3\times 10^{-18} \, {\rm cm}$ ($0.03/m_t$) at 2$\sigma$ with the current run of the LHC. The second example is a broad spin-one color-octet resonance with parity-violating couplings. This resonance would be very difficult to discover as a peak in the $t\bar t$ mass spectrum. However, the effects of parity violation would be evident in the azimuthal-sum spin correlations, with a strength comparable to or greater than that exhibited in more traditional observables sensitive to net polarization. The effect on the azimuthal-sum distribution represents a novel form of parity-violation in top production. Our paper outline is as follows. In the next section, we review the basics of top quark spin correlations. Subsequently, in Section~\ref{sec:QCD}, we describe their expected pattern in Standard Model QCD production at leading order, and what is captured by various observables. In Section~\ref{sec:NP}, we consider modifications to the correlations that can be induced by new physics. In Section~\ref{sec:measurement}, we outline a possible measurement of azimuthal correlations at the LHC. We conclude in Section~\ref{sec:conclusions}. We also include two appendices, which contain supplementary formulas and details of our simulations. \section{Formalism} \label{sec:formalism} We start with a fairly general discussion where we introduce the formalism of spin correlations, a few standard ways to measure aspects of them using specific decay angle correlations, and our own suggestions for how to more comprehensively extract their properties and to quantify their total strength. In subsequent sections, we show how these different manifestations of the spin correlations behave in QCD and in new physics. We do not address detailed reconstruction issues in this section, reserving them for Section~\ref{sec:measurement}.
Throughout this section and the rest of the paper, we define top and antitop decay angles in a common reference frame constructed as follows. We start in the lab frame of the colliding hadron beams, and actively boost the $t\bar t$ system to rest without rotation. We define our $z$-axis along the $t\bar t$ production axis, pointing in the direction of the charge +2/3 top. We define the $x$-axis within the production plane, such that it lies on the same side of the beamline as the $z$-axis. The $y$-axis then points out of the production plane, forming a right-handed coordinate system. We subsequently boost the tops to rest without rotation, carrying along their decay products with them. The system is illustrated in Fig.~\ref{fig:coords}. As is, this construction furnishes a specific realization of {\it helicity basis}, since the $z$-axis points along the top quark's CM-frame momentum vector. (In our treatment, the top and antitop share this as a common $z$-axis, rather than constructing a separate $\bar z$-axis that points along the antitop's momentum vector.) We can also consider other bases formed by rotating this system within the production plane, i.e. about $\hat y$. A straightforward choice is {\it beam basis}, which orients the $z$-axis along one of the beams. Another useful choice, to which we will return, is the {\it off-diagonal} basis of~\cite{Mahlon:1995zn}. \begin{figure}[tp] \begin{center} \begin{picture}(400,300)(0,0) \SetColor{Gray} \SetWidth{4} \Line( 0,200)(100,200) \Line(150,200)(250,200) \Line(300,200)(400,200) \Line(150,50)(250,50) \SetColor{Black} \SetWidth{1} \LongArrow( 50,200)( 90,250) \GCirc( 94,254){2}{0} \Text(105,256)[]{$t$} \LongArrow( 50,200)( 65,150) \GCirc( 67,145){2}{0} \Text( 77,147)[]{$\bar t$} \LongArrow(200,200)(220,250) \GCirc(222,255){2}{0} \Text(232,256)[]{$t$} \LongArrow(200,200)(180,150) \GCirc(178,145){2}{0} \Text(188,147)[]{$\bar t$} \DashLine(350,200)(370,250){5} \GCirc(351,203){2}{0} \Text(345,210)[]{$t$} \DashLine(350,200)(330,150){5} \GCirc(349,198){2}{0} \Text(355,190)[]{$\bar t$} \DashLine(200,50)(220,100){5} \DashLine(200,50)(180, 0){5} \Text(125,200)[]{\Large $\Rightarrow$} \Text(275,200)[]{\Large $\Rightarrow$} \SetWidth{3} \LongArrow(200,50)(214,85) \Text(225,90)[]{$\hat z$} \LongArrow(200,50)(165,64) \Text(158,68)[]{$\hat x$} \BCirc(200,50){5} \Vertex(200,50){1.5} \Text(210,40)[]{$\hat y$} \end{picture} \end{center} \caption{\it Construction of a common coordinate system for the top and antitop. The thick gray line is the beamline. Starting from the lab frame, the $t\bar t$ CM system is actively boosted to rest, and then the individual tops are actively boosted to rest. The decay products of the tops are measured in this frame. The overlayed coordinate axes on the bottom figure correspond to our helicity basis.} \label{fig:coords} \end{figure} Since the top quark is a narrow particle, we can factorize the complete top pair production and decay process. 
For a given partonic production process and a given point in production/decay phase space, the squared matrix element for the $2\to2\to6$ process (after color- and spin-summing) can be written as \begin{equation} {\textstyle \Big(\, \frac{1}{3^2 \; {\rm or} \; 8^2} \underset{\rm colors}{\sum} \,\Big) \, \Big(\, \frac{1}{2^2} \underset{\rm spins}{\sum} \,\Big) } \, \big|\mathcal{M}(\, q\bar q / gg \,\to\, t \bar t \,\to\, (f_1 \bar f_1' b) \, (\bar f_2 f_2' \bar b) \, )\big|^2 \; = \; \Gamma_{ab} \, \rho_{ab,\bar a\bar b} \, \bar\Gamma_{\bar a\bar b} \, , \label{eq:termdef} \end{equation} where the $f$'s are light left-handed fermions (up/down quarks or neutrino/lepton), and $\rho$ and $\Gamma$ ($\bar\Gamma$) are the production and decay spin density matrices, indexed by the top and antitop spins. We will generally use overbars in referring to indices and properties associated with the antitop. Repeated indices are implicitly summed unless otherwise indicated. The production spin density matrix is defined as \begin{equation} \rho_{ab,\bar a\bar b} \,\equiv\, {\textstyle \Big(\, \frac{1}{3^2 \; {\rm or} \; 8^2} \underset{\rm colors}{\sum} \,\Big) \, \Big(\, \frac{1}{2^2} \underset{\rm initial \; spins}{\sum} \,\Big) } \, \mathcal{M}( q\bar q / gg \,\to\, t_a \bar t_{\bar a}) \, \mathcal{M}( q\bar q / gg \,\to\, t_b \bar t_{\bar b})^*. \end{equation} The study of top-antitop spin correlations, and of individual top spins, is ultimately a study of the properties of this matrix. It is purely a function of the partonic initial state and production kinematics, and is therefore sensitive to any novel physics that affects how the tops are produced. The matrix is hermitian by construction ($\rho^*_{ab,\bar a\bar b} = \rho_{ba,\bar b\bar a}$) and can be expanded in a basis of Pauli matrices: \begin{eqnarray} \rho_{ab,\bar a\bar b} & \,=\, & \frac14 M^{\mu\bar\mu}\,\sigma^{\mu}_{ab}\,\sigma^{\bar\mu}_{\bar a\bar b} \nonumber \\ & \,=\, & \frac14 \left( M^{00}\,\delta_{ab}\,\delta_{\bar a\bar b} + M^{i0}\,\sigma^{i}_{ab}\,\delta_{\bar a\bar b} + M^{0\bar i}\,\delta_{ab}\,\sigma^{\bar i}_{\bar a\bar b} + M^{i\bar i}\,\sigma^{i}_{ab}\,\sigma^{\bar i}_{\bar a\bar b} \right) \nonumber \\ & \,\equiv\, & \frac14 M^{00}\,\left( \delta_{ab}\,\delta_{\bar a\bar b} + P^{i}\,\sigma^{i}_{ab}\,\delta_{\bar a\bar b} + \bar P^{\bar i}\,\delta_{ab}\,\sigma^{\bar i}_{\bar a\bar b} + C^{i\bar i}\,\sigma^{i}_{ab}\,\sigma^{\bar i}_{\bar a\bar b} \right) \, , \label{eq:proddecomp} \end{eqnarray} where $M$ is a real 4$\times$4 matrix, and we have appropriated the usual $\mu$ notation for spacetime indices, though we use a trivial metric here. $M^{00}$ parametrizes the overall production rate, while $P^{i} \equiv M^{i0}/M^{00}$ and $\bar P^{\bar i} \equiv M^{0\bar i}/M^{00}$ characterize the net degree of top/antitop polarization, and $C^{i\bar i} \equiv M^{i\bar i}/M^{00}$ characterizes their correlations. More explicitly, $P^{i} = \langle 2 S^i \rangle$, $\bar P^{\bar i} = \langle 2\bar S^{\bar i} \rangle$, and $C^{i\bar i} = \langle 4 S^i \bar S^{\bar i} \rangle$, where $S^i$ and $\bar S^{\bar i}$ are the top/antitop spin operators. 
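As a concrete cross-check of this decomposition, the coefficients can be projected out numerically as $M^{\mu\bar\mu} = {\rm Tr}\big[\rho\,(\sigma^\mu\otimes\sigma^{\bar\mu})\big]$, with the top and antitop spin indices packed into a single $4\times4$ matrix in tensor-product order. The short NumPy sketch below is purely illustrative (the packing convention and all identifiers are ours); it recovers $C={\rm diag}(-1,-1,-1)$ and vanishing polarizations from a spin-singlet production density matrix.
\begin{verbatim}
import numpy as np

# sigma^0..sigma^3, with sigma^0 the identity
sig = [np.eye(2, dtype=complex),
       np.array([[0, 1], [1, 0]], dtype=complex),
       np.array([[0, -1j], [1j, 0]], dtype=complex),
       np.array([[1, 0], [0, -1]], dtype=complex)]

def decompose(rho):
    """Given a 4x4 production density matrix (top spin index first in the
    tensor product), return M00 and the P, Pbar, C defined in the text."""
    M = np.empty((4, 4))
    for mu in range(4):
        for nu in range(4):
            M[mu, nu] = np.trace(rho @ np.kron(sig[mu], sig[nu])).real
    M00 = M[0, 0]
    return M00, M[1:, 0] / M00, M[0, 1:] / M00, M[1:, 1:] / M00

# Example: spin-singlet production, rho = (1x1 - sum_i sigma^i x sigma^i)/4
rho = 0.25 * (np.kron(sig[0], sig[0])
              - sum(np.kron(sig[i], sig[i]) for i in (1, 2, 3)))
M00, P, Pbar, C = decompose(rho)
print(P, Pbar)   # both vanish: no net polarization
print(C)         # diag(-1,-1,-1): spins anti-correlated along any axis
\end{verbatim}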
Given an unpolarized initial state and our choice of coordinate system, the parity and charge-conjugation symmetries characteristic of pure QCD production would individually imply \begin{eqnarray} {\rm P} & \,\Rightarrow\, & M^{\mu\bar\mu} = (-1)^{\mu+\bar\mu} \, M^{\mu\bar\mu} \nonumber \\ {\rm C} & \,\Rightarrow\, & M^{\bar\mu\mu} = (-1)^{\mu+\bar\mu} \, M^{\mu\bar\mu} \end{eqnarray} (indices not summed). These identities also hold for any other coordinates obtained by rotating within the production plane, i.e. about $\hat y$. Parity would force much of the matrix to vanish, leaving over the diagonal spin correlations and mixed $xz$ spin correlations, as well as possible net $y$-polarizations for the tops.\footnote{Top pairs produced in QCD processes from an unpolarized initial state are in fact polarized in the $y$-direction, transverse to the production plane. However, the effect comes in at loop-level due to the need for a complex phase between amplitudes, and is predicted to be percent-level at both the Tevatron and LHC~\cite{Kane:1991bg,Bernreuther:1995cx}. New physics processes can also induce complex phases at tree level, significantly enhancing this polarization effect, even in perfectly P- and C-conserving theories. We reserve exploration of this for future work~\cite{Baumgart:future}.} By itself, charge-conjugation would require either symmetry or antisymmetry between the different correlation and polarization components, but in conjunction with parity just forces the remaining nonzero entries to be symmetric: $P^2 = \bar P^2$ and $C^{13} = C^{31}$. The combination CP would only require the entire $M^{\mu\bar\mu}$ matrix to be symmetric, independent of whether parity and charge-conjugation are individually good symmetries, and with no further constraints on the entries. (A more extensive discussion of symmetry properties can be found in~\cite{Bernreuther:1993hq}.) The decay spin density matrix is defined as \begin{equation} \Gamma_{ab} \,\equiv\, \mathcal{M}(t_a \to f\bar f' b) \, \mathcal{M}(t_b \to f\bar f' b)^*, \end{equation} and the antitop has an exactly analogous expression. These matrices are purely functions of the top decay kinematics. In its fully general form, $\Gamma$ depends in a complicated way on the four-momenta of each of the top's three decay products, and may itself be affected by new physics. However, the situation vastly simplifies if we integrate out most of the decay phase space, leaving over only the unit vector $\hat\Omega$ of one particle ($f$, $\bar f'$, or $b$) in the top's rest frame: $\Gamma \to \tilde\Gamma(\hat \Omega)$. As an hermitian 2$\times$2 matrix, $\tilde\Gamma$ can also be expanded in Pauli matrices, and due to rotational invariance the dependence on the remaining particle direction is fixed up to an analyzing power $\kappa$: \begin{equation} \tilde\Gamma(\hat\Omega)_{ab} \,\propto\, \delta_{ab} + \kappa \, \hat\Omega \cdot \vec{\sigma}_{ab} \: . \end{equation} In the Standard Model, $\kappa_{l/d} = +1$, $\kappa_{\nu/u} = -0.3$, and $\kappa_b = -0.4$~\cite{Brandenburg:2002xr}.\footnote{The maximal analyzing power of the lepton/down is due to the $V-A$ current structure of the top decay. It is easy to see by Fierz-transforming the decay amplitude.} From this perspective, any new physics effects in the top decay only show up as possible rescalings of the analyzing powers, and these effects are already becoming constrained~\cite{Aaltonen:2012rz,CDF:Wpol2012,Aad:2012ky,CMS:Wpol2012}. 
Since we are interested here in understanding top production, not decay, we assume the Standard Model analyzing powers. We can also choose a single particle using means other than its flavor. In particular, for hadronic decays of the top, we can choose a random non-$b$ quark to get $\kappa = +0.35$, or we can pick the softer of the two non-$b$ quarks in the top's rest frame to get $\kappa = +0.5$. Assuming approximate CP conservation in the decay, the equivalent analyzing powers for the antitop should be the same size as those of the top, multiplied by a minus sign (i.e., $\bar\kappa_{l/d} = -1$, $\bar\kappa_{\nu/u} = +0.3$, and $\bar\kappa_b = +0.4$). Picking one decay particle from each side, and contracting through the spin indices in the production and decay density matrices as in Eq.~\ref{eq:termdef}, we can see the spin structure of the production matrix elements encoded in a correlated distribution of decay angles: \begin{equation} \frac{d^4\sigma}{d\Omega \, d\bar\Omega} \,\propto\, 1 + \kappa\,\vec{P}\cdot\hat\Omega + \bar\kappa\,\vec{\bar P}\cdot\hat{\bar\Omega} + \kappa\bar\kappa\:\hat\Omega\cdot C \cdot \hat{\bar\Omega} \: , \label{eq:fullcorr} \end{equation} with $\hat\Omega \equiv (\cos\phi\sin\theta,\sin\phi\sin\theta,\cos\theta)$ and analogously for $\hat{\bar\Omega}$. For example, it is commonly observed that the polar decay angle distribution for a single side is \begin{equation} \frac{d\sigma}{d\cos\theta} \,\propto\, 1 + \kappa \, P^3 \cos\theta \: , \label{eq:sap} \end{equation} and therefore tells us about the top's (or antitop's) net polarization in this direction. To begin to gain sensitivity to the correlation matrix $C$, we can instead look at the double polar decay angle distribution, \begin{equation} \frac{d^2\sigma}{d\cos\theta \, d\cos\bar\theta} \,\propto\, 1 + \kappa \, P^3 \cos\theta + \bar\kappa \, \bar P^3 \cos\bar\theta + \kappa\bar\kappa \, C^{33} \cos\theta\cos\bar\theta \: , \label{eq:polar-polar} \end{equation} or we can more directly access $C^{33}$ via \begin{equation} \frac{d\sigma}{d(\cos\theta\cdot\cos\bar\theta)} \,\propto\, (1 + \kappa\bar\kappa \, C^{33}\cos\theta\cdot\cos\bar\theta) \, \log\left(\frac{1}{|\cos\theta\cdot\cos\bar\theta|}\right). \label{eq:coscos} \end{equation} By restricting to a specific axis like this, we can recover a relatively intuitive picture of a certain population of ``spin-up'' and ``spin-down'' top quarks with classical average polarizations and correlation: $P^3$, $\bar P^3$ and $C^{33}$. But it is clear that these polar angle distributions actually only give us a very limited view of the production density matrix. Gaining a more complete perspective in this manner would require us to measure an extended set of polar angle distributions, utilizing a full complement of different ``$z$-axes'' chosen independently for the $t$ and $\bar t$. Often, it is simply suggested to choose a single basis which is expected to yield a large $C^{33}$, or a $C^{33}$ which is particularly sensitive to some specific new physics effect. Another common option is to measure the three-dimensional opening angle between the two decay products, or equivalently to take the dot product between their directions. We call the opening angle $\chi$ (sometimes known as ``$\phi$'' in the literature). Its distribution is \begin{equation} \frac{d\sigma}{d\cos\chi} \,\propto\, 1 + \frac13 \kappa\bar\kappa\:{\rm tr}[C]\,\cos\chi \: .
\label{eq:chi} \end{equation} Measuring $\chi$ gives us a very compact method for measuring the strength of the correlation independently of any net polarization, though it is only sensitive to this one specific linear combination of matrix elements. We will see below that ${\rm tr}[C]$ captures most of the correlation at low $p_T$ at the LHC, but otherwise tends to miss most or all of the correlation. (Though for new physics effects that contribute strictly to the trace, such as an $s$-channel pseudoscalar exchange, this variable may be quite sensitive.) A completely general analysis of the top spins might utilize a fit of the distribution over all four of the independent decay angles, possibly invoking parity and/or charge-conjugation symmetries to limit the number of free parameters. This is probably the ideal approach in principle, but it may be overly complicated in practice, especially if a given analysis is mainly interested in some specific aspect of top spin physics. It is also beyond the scope of the present paper to understand how effective such a fit might be under realistic measurement conditions. Given the formulas for polar decay angles above, an obvious question to ask is whether we could learn anything from the azimuthal decay correlations, or azimuthal-polar correlations. The precise statement of this question of course depends on the coordinate basis used to define ``azimuthal'' versus ``polar.'' Our default choice in subsequent sections will be to use the off-diagonal basis of~\cite{Mahlon:1995zn} for the Tevatron and the helicity basis for the LHC. The former is motivated by the spin structure particular to $q\bar q \to t\bar t$. In lieu of such an obvious choice for the LHC, helicity basis becomes the most physical alternative. A large fraction of tops at the LHC are produced with relativistic velocities, and helicity-basis azimuthal correlations have a direct physical interpretation as the interference between different chiral production amplitudes.\footnote{Close to threshold, it is more natural to take one of the beams as the $z$-axis, as is done for some of the LHC measurements~\cite{ATLAS:2012ao,CMScorr} (though they do not first boost the individual top systems to rest). However, at the LHC, the question of what is the best basis for slow tops is nominally moot, since they are produced approximately in a spin $s$-wave. The QCD correlations should therefore be nearly basis-independent.} Moreover, azimuthal angles measured in helicity basis should asymptotically become the least susceptible to the distorting effects of acceptance cuts, since rotating the decay system of a fast moving top about its own axis should not have a large impact on its detection efficiency. If we just proceed to integrate out $\theta$ and $\bar\theta$ in Eq.~\ref{eq:fullcorr}, we get a formula similar to Eq.~\ref{eq:polar-polar} but which specifically probes the $x$ and $y$ parts of the net polarizations and the correlation matrix. We can further process this into more compact one-dimensional distributions to extract useful combinations of the correlation matrix elements\footnote{Incidentally, if we instead just subsequently integrated out $\bar\phi$ or $\phi$, we would obtain convenient formulas for extracting the net transverse polarizations $(P^1,P^2)$ or $(\bar P^1,\bar P^2)$, respectively.
E.g., $d\sigma/d\phi \propto 1 + (\pi/4)\kappa\,(P^1\cos\phi + P^2\sin\phi)$.}: \begin{eqnarray} \frac{d\sigma}{d(\phi-\bar\phi)} & \,\propto\, & 1 + \left(\frac{\pi}{4}\right)^2 \kappa\bar\kappa \, \left[ \left(\frac{C^{11}+C^{22}}{2}\right)\cos(\phi-\bar\phi) + \left(\frac{C^{21}-C^{12}}{2}\right)\sin(\phi-\bar\phi) \right] \nonumber \\ \frac{d\sigma}{d(\phi+\bar\phi)} & \,\propto\, & 1 + \left(\frac{\pi}{4}\right)^2 \kappa\bar\kappa \, \left[ \left(\frac{C^{11}-C^{22}}{2}\right)\cos(\phi+\bar\phi) + \left(\frac{C^{21}+C^{12}}{2}\right)\sin(\phi+\bar\phi) \right]. \end{eqnarray} There are several advantages to expressing the $xy$ correlations in this manner. An immediate one is that the measurement of the entire $xy$ part of the spin correlation is reduced to the measurement of the amplitudes and phases of these two simple distributions. They are flat in the absence of correlations, are modulating in the presence of correlations, and, as we will see for the LHC, are left fairly intact by basic acceptance cuts and event reconstruction. They therefore offer a simple alternative to measuring these four matrix elements one-by-one with four different ``polar-polar'' distributions as in Eqs.~\ref{eq:polar-polar} and~\ref{eq:coscos}, or extracting the sum $C^{11}+C^{22}$ via Eq.~\ref{eq:chi} (relying on other measurements to independently determine $C^{33}$ so that we can subtract it off of the trace). The symmetry structure of the $xy$ part of the correlation matrix is also made immediately manifest. Any net phase in the $\phi-\bar\phi$ modulation signals CP-violation ($C^{21} \neq C^{12}$, the same type discussed in~\cite{Bernreuther:1993hq}), and any net phase in the $\phi+\bar\phi$ modulation signals C-~and P-violation ($C^{12} \neq -C^{21}$ and $C^{21,12} \neq 0$). Since these distributions are specifically sensitive to the interference between different spin configurations in a given basis, rather than their relative probabilities, they offer a complementary physical picture to that obtained with polar angle correlations in the same basis. Combining an azimuthal sum/difference measurement with a polar-polar correlation measurement in the same coordinate basis already gives us five of the nine entries of $C$. What about the other four? To access these, we can think about possible polar-azimuthal correlations, which probe the $xz$ and $yz$ parts of the matrix. For example, Eq.~\ref{eq:fullcorr} contains terms like $C^{13}\cos\phi\sin\theta\cos\bar\theta$. Integrating out $\theta$ and $\bar\phi$ would leave over a $d^2\sigma/d\phi\,d\cos\bar\theta$ distribution that contains $C^{13}\cos\phi\cos\bar\theta$, as well as other terms with the coefficients $C^{23}$, $P^{1,2}$, and $\bar P^3$. To more directly extract the elements $C^{13}$ and $C^{23}$, we can employ a simple trick: for $\cos\bar\theta < 0$, shift our definition of $\phi$ by $\pi$. I.e., define $\phi' \equiv \phi$ ($\phi+\pi$) for $\cos\bar\theta > 0$ ($\cos\bar\theta < 0$). If we subsequently integrate out $\cos\bar\theta$, we are left with another simple sinusoidal distribution: \begin{equation} \frac{d\sigma}{d\phi'} \,\propto\, 1 + \frac{\pi}{8} \kappa\bar\kappa \, \left( C^{13}\cos\phi' + C^{23}\sin\phi' \right), \label{eq:PolarAz} \end{equation} with a similar expression for $d\sigma/d\bar\phi'$ (sensitive to $C^{31}$ and $C^{32}$) when the analogous $\pi$-shift is applied. 
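These expressions are easy to exercise numerically. The toy Monte Carlo below is our own illustration (all identifiers are invented): it samples decay directions directly from Eq.~\ref{eq:fullcorr} with $\vec P = \vec{\bar P} = 0$, dileptonic analyzing powers $\kappa\bar\kappa = -1$, and the helicity-basis matrix $C={\rm diag}(+1,-1,+1)$ discussed below, and then forms the quadrant asymmetries of the $\phi\pm\bar\phi$ distributions. In this configuration the correlation shows up almost entirely in the $\phi+\bar\phi$ modulation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def random_dirs(n):
    """Isotropic unit vectors."""
    c = rng.uniform(-1, 1, n)
    phi = rng.uniform(-np.pi, np.pi, n)
    s = np.sqrt(1 - c**2)
    return np.stack([s * np.cos(phi), s * np.sin(phi), c], axis=1)

kk = -1.0                        # kappa * kappabar for dileptons
C = np.diag([1.0, -1.0, 1.0])    # relativistic central-production limit

# Accept/reject against isotropic decays; weight 1 + kk*Omega.C.Omegabar <= 2
N = 2_000_000
O, Ob = random_dirs(N), random_dirs(N)
w = 1.0 + kk * np.einsum('ni,ij,nj->n', O, C, Ob)
keep = rng.uniform(0, 2, N) < w
phi = np.arctan2(O[keep, 1], O[keep, 0])
phib = np.arctan2(Ob[keep, 1], Ob[keep, 0])

def asym(x):
    """Asymmetry between |x| < pi/2 and |x| > pi/2, with x wrapped into (-pi, pi)."""
    x = (x + np.pi) % (2 * np.pi) - np.pi
    return (np.sum(np.abs(x) < np.pi / 2) - np.sum(np.abs(x) > np.pi / 2)) / x.size

print("A(phi - phibar) =", asym(phi - phib))  # expect pi*kk*(C11+C22)/16 = 0
print("A(phi + phibar) =", asym(phi + phib))  # expect pi*kk*(C11-C22)/16 ~ -0.39
\end{verbatim}
The expectations quoted in the comments follow from integrating the two one-dimensional distributions above over the regions with $|\phi\pm\bar\phi|$ smaller or larger than $\pi/2$.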
Again, the presence of P-violation in top production would be immediately obvious as a phase offset ($C^{23,32} \neq 0$), though looking for C-~or CP-violation using these distributions would require a comparison of $d\sigma/d\phi'$ and $d\sigma/d\bar\phi'$. It is also possible to pick up the four elements $C^{13}$, $C^{23}$, $C^{31}$, and $C^{32}$ via a series of four dedicated polar-polar correlation measurements. However, the method that we have presented requires us to reconstruct only two distributions. Finally, we point out a novel way to characterize the total strength of the correlation effect. It extends approaches like those in~\cite{Mahlon:1995zn,Uwer:2004vp,Mahlon:2010gw}, which diagonalize the correlation matrix assuming pure QCD production and then pick off the polar-polar correlation using the axis with the largest eigenvalue. While our approach can in principle be used as the basis of a top spin correlation measurement in its own right, our main interest here will be to use it to quantify how much of the total spin correlation is captured by any given measurement. This will serve as a useful tool in our comparisons of different approaches to measuring the QCD correlations in the next section. The top and antitop decays can be said to be correlated, in the absence of net polarization, when the angular distribution for the decay on one side is nontrivial given a {\it fixed} decay configuration on the other side. For example, consider the case of $t\bar t$ produced in a spin $s$-wave, so that $C = {\rm diag}(-1,-1,-1)$ and $\vec{P} = \vec{\bar P} = \vec{0}$. Using the leptonic decay on each side, and picking the lepton/antilepton as the spin analyzers ($\kappa = +1$, $\bar\kappa = -1$), the total decay distribution is $d^4\sigma/d\Omega\, d\bar\Omega \propto 1 + \hat\Omega \cdot \hat{\bar\Omega}$. If we fix the direction of the antitop's lepton, $\hat{\bar\Omega}$, then the top's lepton will be distributed in a manner azimuthally-symmetric about this direction, but with a maximally linearly-biased polar angle distribution. (E.g., the two leptons cannot move in exactly opposite directions.) Similarly, if we fix the direction of the top's lepton, $\hat\Omega$, then the antitop's lepton will be seen to have a maximally linearly-biased distribution of polar angles with respect to this direction. In this case, the entire effect is encapsulated in the distribution of the 3D opening angle $\chi$, as in Eq.~\ref{eq:chi}. A less trivial example is $C = {\rm diag}(+1,-1,+1)$, which in fact occurs in the relativistic, central production limit of both $q\bar q \to t\bar t$ and $gg \to t\bar t$ in QCD. Given a fixed $\hat{\bar\Omega}$, the top's lepton is not distributed in a simple way around this direction, but instead about the direction $\kappa\bar\kappa\, C \cdot \hat{\bar\Omega}$. The magnitude of the correlation is again maximal, but this fact is missed by all of the standard approaches to measuring it using a single variable. To completely characterize the strength of general correlations, we suggest that instead of referring to a fixed coordinate system or measuring a simple opening angle, we measure the angle between one decay product's direction and an appropriately transformed version of the other decay product's direction.
Assuming a given correlation matrix $C$, define \begin{eqnarray} \cos\chi' & \equiv & \hat\Omega \cdot \frac{\kappa\bar\kappa\, C \cdot\hat{\bar\Omega}}{|\kappa\bar\kappa\, C \cdot\hat{\bar\Omega}|} \nonumber \\ \cos\bar\chi' & \equiv & \hat{\bar\Omega} \cdot \frac{\kappa\bar\kappa\, C^T \cdot\hat{\Omega}}{|\kappa\bar\kappa\, C^T \cdot\hat{\Omega}|}. \end{eqnarray} Note that these are generally {\it different} angles event-by-event. Nonetheless, if we integrate out $\hat{\bar\Omega}$ or $\hat\Omega$, respectively, to get the cumulative effect of the correlation in $\cos\chi'$ or $\cos\bar\chi'$, we get the same linear coefficient: \begin{eqnarray} \frac{d\sigma}{d\cos\chi'} & \propto & 1 + |\kappa\bar\kappa | \: {\cal C} \,\cos\chi' \nonumber \\ \frac{d\sigma}{d\cos\bar\chi'} & \propto & 1 + |\kappa\bar\kappa | \: {\cal C} \,\cos\bar\chi' \end{eqnarray} where \begin{equation} {\cal C} \,=\, \int \,\frac{d\Omega}{4\pi} \; \sqrt{\hat\Omega\cdot C^T C \cdot\hat\Omega}. \label{eq:calC} \end{equation} Defined this way, the correlation strength ${\cal C}$ is purely a function of the eigenvalues of the matrix $C^T C$ (or equivalently $CC^T$), and can vary between zero and one.\footnote{In the case of zero expected correlation, the procedure becomes ill-defined. But, as we will see, the correlation never vanishes in leading-order QCD, except in the extremal case of $gg \to t\bar t$ at zero-angle and infinite-boost.} Note that the net single-side polarization effects from possible nonzero $\vec{P},\vec{\bar P}$ integrate out to zero, just as they do for the original $\chi$. We will shortly see how this method performs in the context of QCD production.\footnote{We also point out that a similar construction can be obtained working strictly with the $xy$ block of $C$ and considering projections of decay products into this plane. Instead of individually measuring $\phi\pm\bar\phi$ to characterize the strength of correlation within this part of the matrix, we can measure the difference in azimuthal angles between one untransformed and one transformed decay product. This total azimuthal correlation strength can be expressed in terms of complete elliptic integrals of the second kind. Practically, though, we find that one of the simple combinations $\phi\pm\bar\phi$ nearly saturates the correlation in QCD over most of the production phase space.} \section{Spin Correlations in QCD} \label{sec:QCD} The leading-order partonic subprocesses $q\bar q \to t\bar t$ and $gg \to t\bar t$ exhibit very different patterns of spin correlations, which we now explore in detail. The case of $q\bar q$ annihilation has been considered straightforward for a long time. In this process, there is a natural coordinate basis for each point in production phase space for which the top and antitop spins have perfectly correlated $S^3$. For near-threshold production, the $z$-axis is aligned with the beams (either choice works since the beams are usually unpolarized). For relativistic production, the $z$-axis is that of helicity basis. Intermediate boosts interpolate between the two choices via the off-diagonal basis construction~\cite{Mahlon:1995zn}. The usual tactic is to go into off-diagonal basis and study the polar-polar correlation as in Eq.~\ref{eq:polar-polar}, either by reconstructing the distribution of $d^2\sigma / d\cos\theta\, d\cos\bar\theta$, or by reducing this down to the one-dimensional distribution $d\sigma/d(\cos\theta\cdot\cos\bar\theta)$. 
To understand how this polar-polar measurement performs relative to the methods discussed above, we need a way to compare the strengths of the correlation's effects on different variables. To do this, we use asymmetries of distributions obtained with the $\kappa$'s stripped off, adding events in regions where the correlated rate is larger than the uncorrelated rate, and subtracting events in regions where the correlated rate is smaller than the uncorrelated rate. For example, the asymmetry in $d\sigma/d(\cos\theta\cdot\cos\bar\theta)$ about $\cos\theta\cdot\cos\bar\theta=0$ is directly proportional to $C^{33}$. (It is exactly the same asymmetry that we would obtain by dividing up the 2D space of $(\cos\theta,\cos\bar\theta)$ into positive and negative quadrants.) Similarly, the distributions $d\sigma / d(\phi-\bar\phi)$ and $d\sigma / d(\phi+\bar\phi)$, which modulate as cosines, have asymmetries between regions with $|\phi\pm\bar\phi| < \pi/2$ and $\pi/2 < |\phi\pm\bar\phi| < \pi$. These asymmetries are proportional to $C^{11}+C^{22}$ and $C^{11}-C^{22}$.\footnote{Defined in this way, they are {\it insensitive} to possible nonvanishing $C^{21}\pm C^{12}$. However, these combinations can be accessed either by a full sinusoidal fit or by forming asymmetries between regions of positive and negative $\phi\pm\bar\phi$.} The distributions of $\cos\chi$ and $\cos\chi'$ ($\cos\bar\chi'$) are linearly-biased, and their asymmetries about zero are respectively proportional to tr$[C]/3$ and the total correlation $\cal C$ (Eq.~\ref{eq:calC}). The asymmetry in $\cos\chi'$ ($\cos\bar\chi'$) is also what we would obtain by forming the asymmetry over the complete four-dimensional phase space for the two decay directions. In Fig.~\ref{fig:qqTotalCorr} we show the total correlation $\cal C$ for $q\bar q \to t\bar t$ as a function of production angle ($\Theta$) and squared-velocity ($\beta^2$) in the partonic CM frame. The strongest possible correlation, ${\cal C} = 1$, leads to a 50\% asymmetry in $\cos\chi'$ ($\cos\bar\chi'$). To keep a common normalization, such that ``1'' corresponds to maximal correlation, we multiply the asymmetries of all of our variables by 2. According to this measure, the correlation measured by $\cos\theta\cdot\cos\bar\theta$ in off-diagonal basis is $0.5$ (i.e., 25\% asymmetry) over the entire production phase space. We do not need to plot this number, but we illustrate how it compares relative to the total correlation in Fig.~\ref{fig:qqPolarRel}. In Fig.~\ref{fig:qqAzSum}, we show the $\phi+\bar\phi$ correlation strength, also in off-diagonal basis, both absolute and relative to $\cal C$. In this basis, the $\phi-\bar\phi$ and $xz$ correlations identically vanish at leading-order. (The $\cos\chi$ correlation strength is a flat $1/3$, strictly weaker than $\cos\theta\cdot\cos\bar\theta$.) \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{qqTotalCorr.eps} \caption{\it Total LO spin correlation strength in $q\bar q \to t\bar t$. Plotted versus top production angle and squared-velocity in the partonic CM frame.} \label{fig:qqTotalCorr} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{qqPolarRel.eps} \caption{\it LO correlation strength in off-diagonal-basis polar angles in $q\bar q \to t\bar t$, relative to total strength.
Plotted versus top production angle and squared-velocity in the partonic CM frame.} \label{fig:qqPolarRel} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{qqAzSum.eps} \epsfxsize=0.44\textwidth\epsfbox{qqAzSumRel.eps} \caption{\it LO correlation strength in off-diagonal-basis $\phi+\bar\phi$ in $q\bar q \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame.} \label{fig:qqAzSum} \end{center} \end{figure} Figure~\ref{fig:qqTotalCorr} indicates that the correlation in $q\bar q \to t\bar t$ production is half-maximal at low velocities and/or forward angles, but becomes maximal at relativistic velocities and central angles. The corresponding correlation matrices are $C = {\rm diag}(0,0,+1)$ and $C = {\rm diag}(+1,-1,+1)$. (Complete formulas can be found in Appendix~\ref{sec:formulas}.) Consequently, the $\cos\theta\cdot\cos\bar\theta$ asymmetry does not always reflect the total strength of the correlation (Fig.~\ref{fig:qqPolarRel}), and the ``missing'' part of the correlation yields the $\phi+\bar\phi$ modulation (Fig.~\ref{fig:qqAzSum}). This additional correlation comes from the fact that the two possible spin configurations (up-up and down-down) are not generally produced incoherently. Their interference only vanishes when the tops are at rest or moving along the beamline, in which case their spins are directly inherited from the spins of the annihilating quarks. At the Tevatron, where $q\bar q$ annihilation is dominant, tops are typically produced near threshold, and the spin interference contribution to the correlation is roughly at the 10\% level. A dedicated high-$p_T$ analysis might nonetheless reveal the emergence of the $\phi+\bar\phi$ correlation, statistics permitting. The analogous story for $gg \to t\bar t$, which is of primary concern at the LHC, is more complicated. As pointed out by~\cite{Mahlon:2010gw}, we should really think of this as two separate processes: the annihilation of opposite-spin and same-spin gluons (or, equivalently, same-helicity and opposite-helicity). For an unpolarized initial state, the former dominates for $p_T < m_t$, and the latter dominates for $p_T > m_t$. The former always produces opposite-spin tops in helicity basis, and the latter always produces same-spin tops in off-diagonal basis. When the two processes are superimposed, the resulting pattern of spin correlations is highly nontrivial. Ref.~\cite{Mahlon:2010gw} considered the polar angle correlations of $gg \to t\bar t$, establishing that there is generally no basis in which this correlation saturates (i.e., $C^{33} = \pm 1$). However, they show which basis maximizes the polar correlation. Practically, this corresponds to finding a basis that diagonalizes the matrix $C$, and then choosing as the ``$z$-axis'' the eigenvector that has the eigenvalue of the largest magnitude (as suggested by~\cite{Uwer:2004vp}). We now consider what these correlations look like using our more general approach. First, in Fig.~\ref{fig:ggTotalCorr}, we show the total correlation $\cal C$. The correlation is maximal both near threshold and for relativistic central production, with a broad contour of minima at $p_T = m_t$. In the near-threshold region, production is dominated by opposite-spin gluons annihilating into tops in a spin-singlet $s$-wave with $C = {\rm diag}(-1,-1,-1)$.
In the relativistic region with $p_T \gg m_t$, same-spin gluons dominate, and the correlations look similar to those of $q\bar q$ annihilation. At central production angles these again approach $C = {\rm diag}(+1,-1,+1)$. Near $p_T = m_t$, the two initial spin states contribute comparably, and their correlations combine destructively. The cancellation becomes perfect in the limit of relativistic forward production. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggTotalCorr.eps} \caption{\it Total LO spin correlation strength in $gg \to t\bar t$. Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggTotalCorr} \end{center} \end{figure} We plot the maximal polar correlation in Fig.~\ref{fig:ggPolar}, both the absolute strength and the strength relative to $\cal C$. The situation is again clearly nontrivial. According to our measure, this method picks up between 50\% and 75\% of the total correlation throughout the bulk of the phase space, approaching 100\% only for relativistic forward production, where the correlation is shutting off. Note that the largest-magnitude eigenvalue of $C$ flips sign at $p_T = m_t$, from negative at low $p_T$ to positive at high $p_T$. This also corresponds to a discrete jump in our choice of $z$-axis: for central production at low $p_T$ the ideal $z$-axis is mainly aligned with the top momentum vector (helicity basis), whereas at high $p_T$ the ideal $z$-axis is mainly aligned with the beams. This switchover happens because the helicity-basis polar correlation necessarily passes through a zero as we transition from $s$-wave production near threshold to chiral production at high momentum. It therefore becomes small in magnitude and cannot contribute to a large eigenvalue. The situation is illustrated in Fig.~\ref{fig:ggPolarHel}, where we can see the zero occurring near $\beta^2 = 1/\sqrt2$ ($m_{t\bar t} \simeq (1.85)(2m_t)$) for a broad range of production angles. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggPolar.eps} \epsfxsize=0.44\textwidth\epsfbox{ggPolarRel.eps} \caption{\it Maximal LO correlation strength obtainable via polar angles in $gg \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggPolar} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggPolarHel.eps} \epsfxsize=0.44\textwidth\epsfbox{ggPolarHelRel.eps} \caption{\it LO correlation strength in helicity-basis polar angles in $gg \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggPolarHel} \end{center} \end{figure} Next, we consider the other variables which we have been discussing. The simple dot-product $\cos\chi$ appears in Fig.~\ref{fig:ggDotProduct}. As commonly observed, it picks up most of the total correlation close to threshold. Near $p_T = (1.3)m_t$, it passes through a zero, and then asymptotically approaches $1/3$ at high boost. In Figs.~\ref{fig:ggAzDiff} through~\ref{fig:ggPolarAz}, we show the variables that make dedicated use of azimuthal angles, specializing to helicity basis. 
These include the azimuthal angle difference $\phi-\bar\phi$ (Fig.~\ref{fig:ggAzDiff}), the azimuthal sum $\phi+\bar\phi$ (Fig.~\ref{fig:ggAzSum}), and the polar-azimuthal, or $xz$, cross-correlation (Fig.~\ref{fig:ggPolarAz}). Like the dot-product correlation, the azimuthal-difference correlation is mainly active near threshold (albeit with smaller strength) and passes through a zero at intermediate $p_T$. Unlike the dot-product correlation, it largely fails to regenerate at high $p_T$, reaching back up to only about 5\% near $p_T \simeq (1.85)m_t$ before shutting off again. Its turnoff is also more closely aligned with the $p_T = m_t$ contour. The azimuthal-sum correlation is in some sense the inverse of the azimuthal-difference. It is weak at low $p_T$ and strong at high $p_T$. In fact, for $\beta^2$ near $1/\sqrt2$, where the helicity-basis polar correlation shuts off, the azimuthal-sum nearly saturates the entire correlation, surpassing 99\% of $\cal C$ for central production. For even higher $p_T$, the correlation remains strong, typically above 80\% relative to the total. Finally, we consider the $xz$ correlation, which can be obtained either from Eq.~\ref{eq:PolarAz} or from Eq.~\ref{eq:coscos} by measuring one of the tops' polar decay angles with respect to the $x$-axis instead of the $z$-axis. (Both approaches yield the same asymmetry.) This correlation is typically quite small, below 10\%, and is strictly zero on all four edges of Fig.~\ref{fig:ggPolarAz}. It accounts for a nontrivial fraction of the total correlation only at somewhat forward angles with $p_T \simeq m_t$. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggDotProduct.eps} \epsfxsize=0.44\textwidth\epsfbox{ggDotProductRel.eps} \caption{\it LO correlation strength in the dot product ($\cos\chi$) in $gg \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggDotProduct} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggAzDiff.eps} \epsfxsize=0.44\textwidth\epsfbox{ggAzDiffRel.eps} \caption{\it LO correlation strength in helicity-basis $\phi-\bar\phi$ in $gg \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggAzDiff} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggAzSum.eps} \epsfxsize=0.44\textwidth\epsfbox{ggAzSumRel.eps} \caption{\it LO correlation strength in helicity-basis $\phi+\bar\phi$ in $gg \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggAzSum} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggPolarAz.eps} \epsfxsize=0.44\textwidth\epsfbox{ggPolarAzRel.eps} \caption{\it LO polar-azimuthal ($xz$) cross-correlation strength in helicity-basis in $gg \to t\bar t$. Absolute (left) and relative to total strength (right). Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggPolarAz} \end{center} \end{figure} Clearly, then, there is quite a lot that can be measured at the LHC. 
At low momenta, $p_T \lesssim m_t$, a measurement of a large dot-product correlation would suggest the expected $s$-wave configuration. (The current ATLAS and CMS measurements already support this~\cite{ATLAS:2012ao,CMScorr}.) Indeed, as illustrated by Fig.~\ref{fig:ggDotProduct}, the total correlation is almost wholly reflected in $\cos\chi$, even for momenta somewhat above threshold where the total correlation has already dropped by a factor of $O(2)$. However, a measurement of $\cos\chi$ by itself is inadequate to conclusively establish this behavior. We have seen how to instead break the correlation down into its individual components in helicity basis, providing a more comprehensive view. The expectation at low $p_T$ is small correlations in $\phi+\bar\phi$ and $xz$, and comparable $O$(0.1--1) correlations in $\phi-\bar\phi$ and the polar decay angles. At high momenta, $p_T \gtrsim m_t$, the helicity-basis polar correlation passes through a zero, and the azimuthal correlations dominate. While it is possible to capture part of this correlation via polar angles constructed with the optimized choice of ``$z$-axis'' from~\cite{Uwer:2004vp,Mahlon:2010gw} (basically the beam-axis), it is much more efficient and straightforward to simply measure $\phi+\bar\phi$, which encodes almost the entire correlation over a broad range of momenta for moderately central angles. The smallness of all of the remaining correlations (polar, $\phi-\bar\phi$, and $xz$) can then be checked independently. In this way, the complete and rather complicated evolution of the spin correlations can be fully mapped out using a sequence of fairly simple measurements. (At very high $p_T$, the process $q\bar q \to t\bar t$ also becomes important, but exhibits essentially the same pattern of spin correlations as $gg \to t\bar t$.) \section{Spin Correlations from New Physics} \label{sec:NP} New physics can alter the spin correlations of top pairs in a large variety of ways, many of which have been explored in the literature~\cite{Bernreuther:1993hq,Beneke:2000hk,Frederix:2007gi,Arai:2007ts,Degrande:2010kt,Cao:2010nw,Baumgart:2011wk,Barger:2011pu,Krohn:2011tw,Bai:2011uk,Han:2012fw,Fajfer:2012si}. Polar-polar correlations in particular are practically the default probe, especially when the new physics in question does not introduce a strong net polarization or other symmetry violations. Often an ideal basis is first identified to maximize the polar-polar effect. Nonetheless, considering the complete correlation matrix, especially the $xy$ block containing azimuthal correlations, can be quite useful. In some cases, azimuthal correlations even capture the majority of the correlation effect from new physics. In~\cite{Baumgart:2011wk}, we showed how correlations in both $\phi-\bar\phi$ and $\phi+\bar\phi$ could be used to characterize resonances in the $t\bar t$ invariant mass spectrum. Spin-0 resonances produce a $\phi-\bar\phi$ modulation, and the phase of this modulation is directly related to the phase in the scalar's Yukawa coupling to tops. This observation has also been made in~\cite{Barger:2011pu}, which included a more comprehensive study of interference effects. Spin-1 and spin-2 resonances produce a $\phi+\bar\phi$ modulation, which is sensitive to the signed ratio of the resonance's chiral couplings to top quarks.
This feature is particularly relevant to heavy axigluon resonance or contact-interaction models that purport to explain the top forward-backward asymmetry at the Tevatron~\cite{Frampton:2009rk}, as they will induce a ``wrong-signed'' $\phi+\bar\phi$ modulation at high-$m(t\bar t)$ compared to the SM. In this section, we consider two additional new physics scenarios that leave strong imprints on the azimuthal correlations. The first is the presence of dimension-five chromomagnetic and chromoelectric dipole (CMDM and CEDM) operators. The effects of these operators have been studied extensively in the past. The usual logic is to probe the CMDM using total rates, and the CEDM using various forms of CP-violating observables \cite{Atwood:1992vj,Atwood:1994vm,Rizzo:1996zt,Lee:1997up,Zhou:1998wz,Beneke:2000hk,Atwood:2000tu,Sjolin:2003ah,Gupta:2009wu,Zhang:2010dr,Degrande:2010kt,Kamenik:2011dk,Hioki:2012vn,Englert:2012by}. It is also possible to see the effects of the CEDM in the neutron electric dipole moment, and this leads to a particularly powerful indirect constraint \cite{Kamenik:2011dk}. However, because the chirality structure of these operators is similar to that of Yukawa couplings, one of their dominant effects is a modification of the $\phi-\bar\phi$ distribution. A measurement of this distribution therefore provides a sensitive probe of both operators simultaneously. The second scenario is a broad resonance with parity-violating couplings. A standard way to reveal parity violation is to measure the longitudinal polarization of the individual tops. But we have seen above that parity-violation can also manifest itself as the appearance of forbidden terms in the spin correlation matrix. We explore a simple example model of a resonance that is so broad as to be unobservable in the $t\bar t$ mass spectrum, but which modifies the $\phi+\bar\phi$ distribution at least as strongly as it modifies distributions sensitive to the net top polarizations. \subsection{Chromomagnetic and chromoelectric dipole operators} Our starting point is a modification to the QCD couplings to top quarks due to the following operators \begin{equation} \Delta {\mathcal L} \,=\, \frac{g_s}{2} \, G_{\mu\nu}^a \, \bar t \left[ T^a \sigma^{\mu\nu} (\mu + i\gamma^5 d) \right] t \, , \end{equation} with $\sigma^{\mu\nu} \equiv (i/2)[\gamma^\mu,\gamma^\nu]$. These operators are dimension-five, but arise from more fundamental dimension-six operators with a Higgs field insertion. (They also lead, for example, to a novel $tbW^+g$ coupling.) The dimensionful couplings $\mu$ and $d$ characterize their strength. One physical consequence of these operators is a contribution to the anomalous chromomagnetic and chromoelectric dipole moments of the top as seen by soft gluon exchanges, from which the couplings derive their usual labels as the CMDM and CEDM. They also lead to effects in high-energy processes, and can be constrained by measurements at hadron and lepton colliders. Using direct collider constraints from the measured $t\bar t$ cross sections, Ref.~\cite{Kamenik:2011dk} obtains $\mu\times m_t \, <$ 0.05 and $d\times m_t\, <$ 0.16. They also demonstrate a strong indirect constraint on the CEDM due to loop-induced effects on the electromagnetic EDMs of the neutron and the mercury nucleus, implying $d\times m_t\, < \, 2 \times 10^{-3}$. We will see in Section~\ref{sec:measurement} that our independent set of constraints from spin correlations gives us sensitivity to the CMDM and CEDM that is competitive with the direct studies.
Spin correlations alone will have the potential to probe dipole moments below 0.01 over the lifetime of the LHC, possibly even approaching the level of the indirect CEDM constraint. The CMDM operator conserves all of the discrete Lorentz symmetries, while the CEDM operator is P-violating and C-conserving, leading to a net CP-violation. In the language of Section~\ref{sec:formalism}, the CMDM can only affect the entries of the 4$\times$4 spin density matrix $M$ that are already populated by the SM. The CEDM can also populate the forbidden $xy$, $yx$, $yz$, and $zy$ entries, but because of C-conservation these entries are still symmetric. Neither operator induces a net polarization at tree-level. Assuming that the couplings $\mu$ and $d$ are small relative to $1/m_t$, we work to linear order, and therefore pick up only the leading effect from interference with QCD. To this order, the CMDM still appears in all C/P-allowed entries of the production matrix, but the CEDM only appears in the aforementioned entries associated with P-violation. While the CMDM can affect the total rate and cause shifts to the spin correlations exhibited by the SM, the CEDM is ``all symmetry violation,'' inducing novel effects that never occur in QCD. We can get a sense for the total effects of these operators on the spin correlations by applying a variation of the method described in Section~\ref{sec:formalism}. We assume that the normalized 3$\times$3 correlation matrix $C$, in the presence of new physics, can be written as $C = C_{\rm SM} + \epsilon \, C_{\rm NP}$. Here $\epsilon$ is an expansion parameter that characterizes the strength of the new physics. Note that $C_{\rm NP}$ must incorporate effects not only in the spatial part of the production density matrix, but also modifications to the total rate. Picking one fermion each from the top and antitop decay, we construct a modified opening angle from their unit vectors in their respective parents' rest frames \begin{equation} \cos\chi_{\rm NP}' \,\equiv\, \hat\Omega \cdot \frac{\kappa\bar\kappa\, C_{\rm NP} \cdot\hat{\bar\Omega}}{|\kappa\bar\kappa\, C_{\rm NP} \cdot\hat{\bar\Omega}|} \, , \end{equation} or alternatively with $\hat\Omega \leftrightarrow \hat{\bar\Omega}$. This angle is distributed according to \begin{equation} \frac{d\sigma}{d\cos\chi_{\rm NP}'} \,\propto\, 1 + |\kappa\bar\kappa | \: {\cal C} \,\cos\chi_{\rm NP}' \, , \end{equation} where \begin{eqnarray} {\cal C} & \,=\, & \int \,\frac{d\bar\Omega}{4\pi} \; \frac{\hat{\bar\Omega}\cdot C_{\rm NP}^T C \cdot\hat{\bar\Omega}}{\sqrt{\hat{\bar\Omega}\cdot C_{\rm NP}^T C_{\rm NP} \cdot\hat{\bar\Omega}}} \nonumber \\ \frac{d \cal C}{d\epsilon} & \,=\, & \int \,\frac{d\bar\Omega}{4\pi} \; \sqrt{\hat{\bar\Omega}\cdot C_{\rm NP}^T C_{\rm NP} \cdot\hat{\bar\Omega}} \; . \label{eq:slope} \end{eqnarray} This construction gives us the strongest possible slope of ${\cal C}$ versus the new physics strength $\epsilon$. Defined in this way, positive values of $\cos\chi_{\rm NP}'$ always correspond to regions of decay phase space where the normalized rate is enhanced with respect to the SM (for positive $\epsilon$), and negative values of $\cos\chi_{\rm NP}'$ correspond to regions where it is de-enhanced. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggTotalCorrMDM.eps} \epsfxsize=0.44\textwidth\epsfbox{ggTotalCorrEDM.eps} \caption{\it Slope of the total spin correlation shift, defined in Eq.~\ref{eq:slope}, with respect to a CMDM (left) or a CEDM (right) in $gg \to t\bar t$. 
The correlation is expanded to linear order in $\mu\times m_t$ or $d\times m_t$, respectively. Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.) High-valued contours have been cut off due to the formal divergence at high-$\beta^2$.} \label{fig:ggTotalCorrCDM} \end{center} \end{figure} In Fig.~\ref{fig:ggTotalCorrCDM}, we map out this slope over $gg \to t\bar t$ production phase space, independently for the CMDM and CEDM with $\epsilon \equiv \mu\times m_t$ and $\epsilon \equiv d\times m_t$, respectively. For both the CMDM and CEDM, we see that the shifts to the correlations are maximized for boosted production. They are fairly well-behaved over much of the phase space, though there is a formal divergence as $\beta^2 \to 1$ at intermediate production angles due to the growth of the dimension-five interaction strength with energy.\footnote{This divergence is an artifact of our linear approximation of the new physics. Adding in the full dipole dependence regulates it, but effects sensitive to higher orders in the dipole could also receive corrections from even higher-dimensional operators. Thus, the specific form of the correlations at very high velocity is more model-dependent. Nonetheless, these effects are not immediately visible, due to the rapidly-falling PDF's.} Thinking of boosted tops as approximately chiral quarks, QCD dominantly produces same-spin (opposite-helicity) tops, whereas inserting a single dipole operator leads dominantly to opposite-spin (same-helicity) top production. The interference between these different, purely chiral, spin channels is entirely localized in the $xz$ and $yz$ off-diagonal parts of the correlation matrix, and these are the sources of the divergences. Interference effects in the rest of the correlation matrix are everywhere finite, as one of the processes must undergo an $m_t/E$-suppressed helicity-flip.\footnote{The same effect occurs in the total rate interference, which is entirely due to the CMDM. While the interference would naively grow with energy, one of the interfering amplitudes must undergo a helicity-flip. This prevents the rate interference from blowing up at high $t\bar t$ invariant mass. Indeed, the rate interference is a rather mild function of the tops' production angle and velocity, typically giving a fractional contribution of $O(5\mu\times m_t)$ for both $gg \to t\bar t$ and $q\bar q \to t\bar t$ production. Note that for both the total rate and the spin correlations, the ``new physics squared'' contributions do not suffer from $m_t/E$ suppressions, and do ultimately take over. The effects are not strictly predictive, as interference from even higher-order operators can also become relevant. However, the leading-order interference nominally dominates for $m(t\bar t) \lesssim 2.5$~TeV for $\mu \times m_t \lesssim 0.01$.} In practice, much of the correlation effect at large but finite $\beta^2$ comes from interference with opposite-spin QCD production, inducing modifications to the $\phi-\bar\phi$ distributions. This is fortuitous, as we have seen that the SM modulations in this variable die off at high energies.
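Before examining the $\phi-\bar\phi$ projection of these shifts, we note that the slope in Eq.~\ref{eq:slope} involves the same kind of angular average as Eq.~\ref{eq:calC}, with $C_{\rm NP}^TC_{\rm NP}$ in place of $C^TC$, and that the per-event variable $\cos\chi'_{\rm NP}$ is equally simple to construct. A minimal numerical sketch (Python/NumPy; the matrix below is an illustrative placeholder populating only P-forbidden entries, not the actual QCD$+$dipole result at any phase-space point) is
\begin{verbatim}
import numpy as np

def angular_average_sqrt(M, n=200_000, seed=2):
    """<sqrt(Omega . M . Omega)> averaged over the unit sphere."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return np.mean(np.sqrt(np.einsum('ij,jk,ik->i', v, M, v)))

def slope_dC_deps(C_NP):
    """Slope defined in the text: <sqrt(Omegabar . C_NP^T C_NP . Omegabar)>."""
    return angular_average_sqrt(C_NP.T @ C_NP)

def cos_chi_NP(omega, omegabar, C_NP, kappa, kappabar):
    """Per-event variable: Omega . (kk C_NP.Omegabar)/|kk C_NP.Omegabar|."""
    w = kappa * kappabar * (C_NP @ omegabar)
    return np.dot(omega, w) / np.linalg.norm(w)

# hypothetical C_NP populating only the yz/zy entries (P-forbidden in QCD)
C_NP = np.array([[0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.3],
                 [0.0, 0.3, 0.0]])
print(slope_dC_deps(C_NP))   # maximal slope of the correlation vs. eps

# example per-event evaluation for one pair of unit decay directions
om, ombar = np.array([0.0, 0.6, 0.8]), np.array([0.0, 0.0, 1.0])
print(cos_chi_NP(om, ombar, C_NP, kappa=+1.0, kappabar=-1.0))
\end{verbatim}
In practice one would histogram $\cos\chi'_{\rm NP}$ and extract its linear coefficient, exactly as for $\cos\chi'$ above.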
\begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{ggAzDiffRelMDM.eps} \epsfxsize=0.44\textwidth\epsfbox{ggAzDiffRelEDM.eps} \caption{\it Slope of the $\phi-\bar\phi$ spin correlation shift with respect to a CMDM (left) or a CEDM (right) in $gg \to t\bar t$, relative to the slope of the total shift in Fig.~\ref{fig:ggTotalCorrCDM}. The correlation is expanded to linear order in $\mu\times m_t$ or $d\times m_t$, respectively. Note that $\mu$ induces a pure cosine-wave modulation, and $d$ induces a pure sine-wave modulation. Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:ggAzDiffCDM} \end{center} \end{figure} To illustrate this point in more detail, we show in Fig.~\ref{fig:ggAzDiffCDM} the relative slope of the $\phi-\bar\phi$ modulation effect compared to the maximal slope ${\cal C}$, continuing to focus on $gg \to t\bar t$ production. As in Section~\ref{sec:QCD}, we use asymmetries to define a common normalization, allowing us to compare the strength of sinusoidal correlations to linear correlations. For the CMDM, we define the asymmetry between the regions $|\phi-\bar\phi| = [0,\pi/2]$ and $|\phi-\bar\phi| = [\pi/2,\pi]$. For the CEDM, we define the asymmetry between the regions $\phi-\bar\phi = [0,\pi]$ and $\phi-\bar\phi = [-\pi,0]$. We see that, from this perspective, the $\phi-\bar\phi$ modulations capture the vast majority of the correlation shifts over much of the production phase space. The only exceptions are in the high-$p_T$ limit for the CMDM (above roughly $400$~GeV), and for high-$\beta^2$, intermediate-angle production for the CEDM. These are exactly the regions where the $xz$ and $yz$ correlations are naively beginning to diverge. While it is still possible to define a more powerful probe of the correlation shifts using the above matrix formalism (or related ``optimal observable'' methods~\cite{Atwood:1992vj,Zhou:1998wz,Atwood:2000tu,Sjolin:2003ah,Hioki:2012vn}), the extreme simplicity and high sensitivity of the single variable $\phi-\bar\phi$ strongly motivate us to consider probing both dipole moments simultaneously using azimuthal angle measurements. While we have been focusing on $gg \to t\bar t$ production, which is the main production channel for tops at the LHC, we cannot neglect the impact of the CMDM and CEDM on $q\bar q \to t\bar t$ production. At high-$p_T$, $q\bar q$ contributes $O$(30\%) of the total rate. We account for this in detail when we estimate measurement prospects below. Here, we simply comment that the effects on the $\phi-\bar\phi$ modulation are typically about half as large, and come in with opposite sign. Adding in $q\bar q$ therefore dilutes the final modulations at the LHC by as much as a factor of $O$(2) relative to pure $gg$. For the CMDM, this cancellation formally reduces the sensitivity of an azimuthal angle measurement compared to a total cross section measurement, as the latter's slope has the same sign (and similar magnitude) for both production mechanisms. Nonetheless, we emphasize that a variable that undergoes a simple {\it shape} distortion may ultimately win out, especially when statistics begin to allow us to probe down to percent-level effects or smaller. (The luminosity uncertainty at the LHC is itself a few percent.) For the CEDM, it is also formally possible to improve the measurement, but it is not clear that the added complexity would genuinely pay off. 
In particular, we note that for central production, the {\it entire} effect is captured by $\phi-\bar\phi$, and there is no way to escape the fact that $gg$ and $q\bar q$ partially cancel each other. For both electric and magnetic dipoles, one might hope that the Tevatron avoids this cancellation, as $t \bar t$ pairs there are produced overwhelmingly through $q \bar q$ annihilation. However, we find that the dominant effects for the CMDM are typically not fully captured by azimuthal-difference modulations about the $z$-axis, but rather by azimuthal-sum modulations about the $x$-axis. For non-central tops, most of the CEDM effect is captured by azimuthal-difference modulations about the $x$-axis. It would be interesting to explore a detailed measurement of these effects, but we do not undertake this here. Still, it serves as yet another example of the perspective offered by taking a more general approach to the spin correlations. \subsection{Broad parity-violating resonance} \label{sec:resonance} Here, we consider a new spin-one color-octet particle that couples to light quarks and top quarks as \begin{equation} \Delta {\mathcal L} \,=\, g_s A_\mu^a \, \Big( \bar q \left[ T^a \gamma^\mu (v_{q} + \gamma^5 a_{q}) \right]q + \bar t \left[ T^a \gamma^\mu (v_{t} + \gamma^5 a_{t}) \right]t \Big). \end{equation} We have studied this model before in~\cite{Baumgart:2011wk}. However, there we assumed a narrow resonance with large $S/B$, so that interference effects with QCD were subdominant and the chirality structure of the light quark couplings was not apparent. In that case, we get C- and P-violation when $v_t a_t \ne 0$, which leads to (identical) net polarizations of the top and antitop in the $xz$-plane. While the correlation matrix has nontrivial functional dependence on $v_t$ and $a_t$, none of the P-violating entries become nonzero. When we include interference with QCD, these entries can all turn on, yielding novel symmetry-violating effects in spin observables. In particular, nonvanishing $xy$ and $yx$ entries would induce a sinusoidal component of the $\phi+\bar\phi$ modulation. To the best of our knowledge, this type of P-violation has never been studied before for top quarks. As a concrete example, we add a resonance with $(v_q,a_q) = (0.05,0)$, $(v_t,a_t) = (0,5)$, $M = 500$~GeV, and $\Gamma/M = 0.5$. Such a resonance may serve to parametrize the effects of TeV-scale composite physics that couples strongly to top quarks, and only couples to light quarks through kinetic mixing with the gluon. If the main decay channel were into top quarks, we would expect only $\Gamma/M \sim 0.2$. Therefore, the unusually broad width requires additional decay channels, perhaps into high-multiplicity jets via new (on- or off-shell) colored particles \cite{Tavares:2011zg,Gross:2012bz}. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{qqAzSumCosResonance.eps} \caption{\it Shift in the strength of the $\phi+\bar\phi$ cosine correlation in the presence of our example resonance, in $q\bar q \to t\bar t$. Plotted versus top production angle and squared-velocity in the partonic CM frame.
(Dashed line indicates $p_T = m_t$.)} \label{fig:resonance1} \end{center} \end{figure} \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{qqAzSumSinResonance.eps} \epsfxsize=0.44\textwidth\epsfbox{qqHelicityResonance.eps} \caption{\it Induced strength of the $\phi+\bar\phi$ sine correlation (left) and net top quark helicity (right) in the presence of our example resonance, in $q\bar q \to t\bar t$. Plotted versus top production angle and squared-velocity in the partonic CM frame. (Dashed line indicates $p_T = m_t$.)} \label{fig:resonance2} \end{center} \end{figure} The small coupling to light quarks and the lack of a tightly-localized resonance peak should ensure that this particle escapes detection in dijet searches. Our choice of parameters also keeps it well-hidden in the $t\bar t$ invariant mass spectrum at the Tevatron and LHC~\cite{CDFresonance,Abazov:2011gv}. The relative modifications to the $q\bar q \to t\bar t$ rate are modest, vanishing near threshold and with a gradual turn-on that peaks at 15\% near 600~GeV. Nonetheless, the effects on individual top spins and spin correlations are significant. By far the dominant contributions appear in the $xy$-block of the correlation matrix and in the $z$-polarization. The $zz$ entry, which leads to the standard polar-polar correlation, is nowhere modified by more than 0.1 from the SM, and typically less than 0.05. The main effects on $t\bar t$ decay kinematics are a decrease of the $\phi+\bar\phi$ cosine-modulation, the appearance of the advertised $\phi+\bar\phi$ sine-modulation, and linear biases in the polar decay angles $\cos\theta$ and $\cos\bar\theta$. We plot the strengths of these three effects in Figs.~\ref{fig:resonance1} and~\ref{fig:resonance2} (as above, normalizing to distribution asymmetries multiplied by two). Notably, the net polarization flips sign as we pass through the resonance, while the azimuthal correlation effects do not. While the P-violation from this model can be seen in the top quark polarization at high energy, the presence of a comparable (in many regions larger) effect from P-violating spin correlations presents us with an interesting opportunity. We study the prospects for measuring these correlations at the LHC in the next section. We note that they may also be observable at the Tevatron, though we have not undertaken a dedicated study here. The inclusive polarization induced in $q\bar q \to t\bar t$ at the LHC is only 6\%, and is highly diluted by the much larger unpolarized $gg \to t\bar t$ contribution. This small size allows the model to evade existing constraints from polarization measurements at the LHC~\cite{CMSspin,ATLASspin}. The inclusive effects at the Tevatron would be much weaker than 6\%. In either case, the polarization might be revealed by using harder cuts. \section{Azimuthal Spin Correlation Measurement} \label{sec:measurement} We now turn to the question of whether the azimuthal sum/difference spin correlations from QCD can be measured at the LHC. We find that these correlations can be revealed with quite good accuracy even using fairly simple event reconstruction strategies. We demonstrate this point in a set of proof-of-principle measurements carried out on simulation data, showing that the evolution from low-$p_T$ production to high-$p_T$ production should be clearly observable with the current LHC data set. We also discuss the observability of the new physics effects from color dipoles or a broad parity-violating resonance.
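As a parton-level baseline for these studies, it is useful to recall how the azimuthal modulations arise directly from the correlation matrix. The toy sketch below (Python/NumPy; illustrative only, using the idealized zero-polarization decay distribution $1+\kappa\bar\kappa\,\hat\Omega\cdot C\cdot\hat{\bar\Omega}$ consistent with the spin-singlet example of Section~\ref{sec:formalism}, dileptonic analyzing powers $\kappa=+1$, $\bar\kappa=-1$, and two limiting QCD correlation matrices) generates decay directions by accept--reject and extracts the cosine modulations of $\phi\pm\bar\phi$ about a common $z$-axis:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)

def unit_vectors(n):
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def sample_decays(C, kk, n):
    """Accept-reject sampling of (Omega, Omegabar) from the idealized
    zero-polarization form 1 + kk * Omega . C . Omegabar  (kk = kappa*kappabar).
    Assumes the weight stays in [0, 2], true here since |kk| <= 1 and the
    singular values of C are bounded by one."""
    vs, ws, kept = [], [], 0
    while kept < n:
        v, w = unit_vectors(n), unit_vectors(n)
        weight = 1.0 + kk * np.einsum('ij,jk,ik->i', v, C, w)
        keep = rng.uniform(0.0, 2.0, size=n) < weight
        vs.append(v[keep]); ws.append(w[keep]); kept += keep.sum()
    return np.concatenate(vs)[:n], np.concatenate(ws)[:n]

def cosine_amplitude(x):
    """A in dN/dx ~ 1 + A cos(x), from the moment <cos x> = A/2."""
    return 2.0 * np.mean(np.cos(x))

kk = -1.0   # dileptonic analyzers: kappa = +1, kappabar = -1
cases = [("threshold (singlet), C = diag(-1,-1,-1)", np.diag([-1.0, -1.0, -1.0])),
         ("boosted central,     C = diag(+1,-1,+1)", np.diag([+1.0, -1.0, +1.0]))]
for label, C in cases:
    v, w = sample_decays(C, kk, 200_000)
    phi, phibar = np.arctan2(v[:, 1], v[:, 0]), np.arctan2(w[:, 1], w[:, 0])
    print(label,
          " phi-phibar:", round(cosine_amplitude(phi - phibar), 3),
          " phi+phibar:", round(cosine_amplitude(phi + phibar), 3))
\end{verbatim}
The modulation migrates from $\phi-\bar\phi$ for the singlet matrix to $\phi+\bar\phi$ for the boosted central matrix, mirroring the low-$p_T$/high-$p_T$ behavior that we now attempt to recover after showering, reconstruction, and acceptance cuts.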
Our simulations incorporate many realistic effects such as parton showering and hadronization, jet reconstruction, $b$-tagging and mistagging efficiencies, and finite energy/directional resolution (described in detail in Appendix~\ref{sec:details}). While we cannot accurately represent all of the effects that afflict particle detection and reconstruction, we believe that our simple model gives an adequate first approximation. Using {\tt MadGraph5}~\cite{Alwall:2011uj} interfaced with {\tt PYTHIA}~\cite{Sjostrand:2006za}, we generate simulated samples of $t\bar t$ at LHC8, and samples of the dominant backgrounds (also detailed in Appendix~\ref{sec:details}). The $t\bar t$ simulations include separate samples where the top decays are either handled individually by {\tt PYTHIA}\ or using the full 6-body production/decay matrix elements via {\tt MadGraph5}'s decay-chain functionality. These furnish ``uncorrelated'' and ``correlated'' samples which we compare. We study the effects of the correlations in both dileptonic and $l$+jets decay channels. There are various advantages and disadvantages to each. In principle, dileptonic is ideal for spin correlation measurement due to the leptons' maximal spin analyzing powers. However, the presence of two neutrinos significantly complicates kinematic reconstruction, and the overall rate is low due to the 5\% dileptonic branching fraction. The $l$+jets channel is naively easier to fully reconstruct, and comes with a higher branching fraction of 30\%. The former advantage can be especially useful if we are interested in some very specific region of production kinematics, such as near a resonance. But we pay a penalty in analyzing power: the best that we accomplish on the hadronic side is to pick either the $b$-jet ($\kappa = -0.4$) or the softest non-$b$ jet in the top's rest frame ($\kappa = 0.5$). The promise of complete kinematic reconstruction and much higher statistics is also not so immediately delivered upon, due to imperfect $b$-tagging, combinatoric ambiguities, jet overlaps, and plentiful jet-like contamination in the events. Nonetheless, as we will find, the dileptonic and $l$+jets channels both yield observable effects with high statistical significance after complete event reconstruction. \subsection{Dileptonic} For this channel, the central challenge is to reconstruct the individual top momenta. The two neutrinos make this process nontrivial, as four degrees of freedom are unmeasured by the detector, and we need to recover them without inducing spurious kinematic correlations. We must also devise an approach for deciding how to pair $b$-jets with the leptons. Our aim is to provide a sufficiently accurate and unbiased reconstruction such that we can clearly observe the correlations in $\phi\pm\bar\phi$. As a proof of concept, we have taken a straightforward approach inspired by the methods in~\cite{Baringer:2011nh,Choi:2011ys}. More involved procedures using detailed kinematic constraints~\cite{Sonnenschein:2006ud} might be able to achieve greater sensitivity than what we claim here. Nonetheless, the three-body nature of the top decay provides us with a large portion of the relevant kinematic information in fully visible particles, and complete reconstruction of top decay angles is not necessarily very sensitive to how we treat the neutrinos. Moreover, azimuthal decay angle measurements are highly forgiving; for example, they are independent of the absolute top velocities in the reconstructed CM frame.
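For concreteness, once candidate top, antitop, and lepton four-vectors are in hand, the decay angles themselves follow from a pair of boosts. A minimal sketch is given below (Python/NumPy); the axis orientations shown are one possible convention and must be matched to the sign definitions of Section~\ref{sec:formalism}, so it should be read as illustrative rather than as our exact implementation.
\begin{verbatim}
import numpy as np

def boost(p, beta):
    """Boost four-vector p = (E, px, py, pz) into the frame moving with
    velocity beta (a 3-vector) relative to the current frame."""
    b2 = float(np.dot(beta, beta))
    if b2 < 1e-12:
        return p.copy()
    gamma = 1.0 / np.sqrt(1.0 - b2)
    bp = float(np.dot(beta, p[1:]))
    E = gamma * (p[0] - bp)
    coef = (gamma - 1.0) * bp / b2 - gamma * p[0]
    return np.concatenate(([E], p[1:] + coef * beta))

def decay_angles(p_top, p_tbar, p_lep, beam=np.array([0.0, 0.0, 1.0])):
    """(cos theta, phi) of the top's decay lepton in a helicity-type basis."""
    # boost everything to the t tbar CM frame
    p_cm = p_top + p_tbar
    beta_cm = p_cm[1:] / p_cm[0]
    t_cm, l_cm = boost(p_top, beta_cm), boost(p_lep, beta_cm)
    # axes: z along the top momentum in the CM frame, y normal to the
    # production plane, x completing the right-handed triad (a convention)
    z = t_cm[1:] / np.linalg.norm(t_cm[1:])
    y = np.cross(beam, z); y /= np.linalg.norm(y)
    x = np.cross(y, z)
    # boost the lepton from the CM frame to the top rest frame
    l_rest = boost(l_cm, t_cm[1:] / t_cm[0])
    d = l_rest[1:] / np.linalg.norm(l_rest[1:])
    return float(np.dot(d, z)), float(np.arctan2(np.dot(d, y), np.dot(d, x)))
\end{verbatim}
The antitop-side angles follow analogously, with the $x$ and $y$ axes tied to the positively-charged top as noted below.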
Though we do not exploit it here, the necessary kinematic reconstructions can also be much simpler in the case of highly-boosted tops~\cite{Baumgart:2011wk}. For our present study, we select events with exactly two oppositely-charged leptons, assumed to come from the top decays, and at least two jets. To help control backgrounds (mainly $Z^{(*)}$+jets), we reject events with a same-flavor lepton pair with $m(l^+l^-) = [80,100]$~GeV, and also demand at least one $b$-tag and $\displaystyle{\not}E_T$\ $> 30$~GeV. We choose as our $b$-jet candidates either the two hardest $b$-tagged jets or, if only one jet is tagged, the $b$-tagged jet and the hardest untagged jet. We reconstruct the individual top systems by computing $M_{T2}$~\cite{Lester:1999tx,Barr:2003rg}, constructed out of the two leptons, two $b$-jets, and $\vec{\displaystyle{\not}E}_T$. We consider both possible assumptions of pairings for the $b$'s and leptons for this calculation. The solution of $M_{T2}$\ presents us with an educated guess for the individual neutrino $\vec{p}_T$'s, and we further provide a guess for their $p_z$'s by matching each neutrino solution's rapidity with that of the four-vector of its associated $b$+lepton system.\footnote{We can alternatively use kinematic constraints to solve for $p_z(\nu)$ and $p_z(\bar\nu)$ after estimating the neutrinos' transverse momenta through $M_{T2}$. We find that this induces extra bias into the azimuthal angle distributions, at the few percent level. Using the full method of \cite{Choi:2011ys}, including $M_{T2}$\ for the $W$'s, induces biases at the 10--20\% level.} This choice minimizes the full invariant mass of each top candidate. To decide which of the two jet-lepton pairings to choose, we use the following procedure, moving on to the next step if the previous one is inconclusive: \begin{enumerate} \item If one pairing has $m(bl) >$ 151 GeV, and the other does not, take the latter, as in~\cite{Baringer:2011nh}. \item The value of $M_{T2}$\ is strictly bounded by $m_t$. Thus if one pairing has $M_{T2}$$>m_t$, but the other does not, take the latter. \item Lastly, take the pairing that minimizes the quantity $(m(bl^+\nu) - m_t)^2 + (m(\bar bl^-\bar\nu) - m_t)^2$. \end{enumerate} If either reconstructed top has a mass exceeding 200~GeV, we throw out the event. (This last cut removes only about 5\% of events.) In our simulation sample, 60\% of the selected events were paired correctly. Of the 40\% of events with incorrect pairings, 11\% (of all selected events) had one or both $b$-jets fail basic reconstruction, and another 7\% had both $b$-jets present but not correctly picked as our candidates. Therefore, in cases where our procedure was given the two correct $b$-jets as input, it successfully paired them at a rate better than 70\%. We observe that the success rate increases with truth-level top $p_T$. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{Dphi_lowPt_dilepton.eps} \epsfxsize=0.44\textwidth\epsfbox{Dphi_highPt_dilepton.eps} \epsfxsize=0.44\textwidth\epsfbox{phiSum_lowPt_dilepton.eps} \epsfxsize=0.44\textwidth\epsfbox{phiSum_highPt_dilepton.eps} \caption{\it Distributions of the azimuthal difference and sum for the dileptonic channel at LHC8: low-$p_T$ $\phi-\bar\phi$ (upper-left), high-$p_T$ $\phi-\bar\phi$ (upper-right), low-$p_T$ $\phi+\bar\phi$ (lower-left), high-$p_T$ $\phi+\bar\phi$ (lower-right). The solid black and red histograms are uncorrelated and correlated simulations with full reconstruction.
(Error bars are monte carlo statistics, with an effective sample luminosity of 34~fb$^{-1}$.) The solid curves are two-parameter fits using a flat distribution plus independent sine and cosine modulations. The dashed curves are fits of the parton-level samples with identical acceptance cuts.} \label{fig:llModulations} \end{center} \end{figure} \begin{table}[tp] \centering \begin{tabular}{l|r|c|c|c} LHC8, 20 fb$^{-1}$ & \; \# events \; & \; stat error \; & \; $\phi-\bar\phi$ amp uncorr/corr \; & \; $\phi+\bar\phi$ amp uncorr/corr \; \\ \hline low-$p_T$ & 47,600 \; & 0.7\% & 0.3\% / 12.8\% (18$\sigma$) & -1.0\% / -9.6\% (12$\sigma$) \\ high-$p_T$ & 8,400 \; & 1.6\% & 6.4\% / 6.9\% (0.3$\sigma$) & -2.4\% / -19.7\% (11$\sigma$) \end{tabular} \caption{\it Cosine modulation amplitudes for azimuthal correlation observables, and expected number of events, statistical error, and significances for LHC8 at 20~fb$^{-1}$. Sine modulation components would have the same statistical errors, and central values are consistent with zero. The two $p_T$ regions are determined by whether $|p_{T}(bl^+\nu)| + |p_{T}(\bar bl^-\bar\nu)|$ is less than or greater than $2m_t$.} \label{tab:llResults} \end{table} We plot the distributions and fits for the azimuthal sum/difference variables in Fig.~\ref{fig:llModulations} and compare these to parton-level events with perfect reconstruction but subject to basic acceptance cuts. We have divided the final-state phase space into two regions based on the scalar-summed $p_T$ of the two reconstructed tops: a ``low-$p_T$'' region with $|p_T(bl\nu)|+|p_T(\bar bl\bar\nu)| < 2m_t$, and a ``high-$p_T$'' region with $|p_T(bl^+\nu)|+|p_T(\bar bl^-\bar\nu)| > 2m_t$. The expected azimuthal modulations are fairly faithfully preserved, with most of the effect in $\phi-\bar\phi$ at low-$p_T$, and in $\phi+\bar\phi$ at high-$p_T$. The biases induced by our simple reconstruction are typically modest, though a roughly 6\% modulation is induced in $\phi-\bar\phi$ at high-$p_T$. In Tab.~\ref{tab:llResults}, we summarize the fitted amplitudes and their expected statistical errors given 20~fb$^{-1}$ of data at LHC8. The effects of the correlation are in principle measurable with very high statistical significance, in excess of 10$\sigma$. We also study the mixed polar-azimuthal correlations, involving the matrix elements $C^{13}$ and $C^{31}$. We expect and find that the polar distributions suffer from significant acceptance bias, as the polar decay angles are strongly correlated with lepton $p_T$. The individual $\phi$ and $\bar\phi$ distributions are similarly highly distorted by basic acceptance criteria (as we will illustrate in more detail for $l$+jets in the next subsection). We therefore do not attempt sinusoidal fits of the $\phi'$ and $\bar\phi'$ distributions of Eq.~\ref{eq:PolarAz}, but instead default to measuring their asymmetries between the regions $[0,\pi/2]$ and $[\pi/2,\pi]$. The asymmetry difference between correlated and uncorrelated samples is roughly 2\% for both $p_T$ regions, with expected statistical uncertainties of 0.4\% (low-$p_T$) and 1\% (high-$p_T$) at LHC8. The reconstruction-induced asymmetry biases are percent or smaller.\footnote{We can also study the traditional polar-polar correlations, probing $C^{33}$, by measuring asymmetries in $\cos \theta \cos \bar \theta$. We find that the correlations induce asymmetries of -3.9\% and 3.7\% on top of biases of 9.3\% and 2.7\%, for low-$p_T$ and high-$p_T$, respectively. 
The statistical uncertainties are the same as for the asymmetries in $\cos \phi'$, and the effects are therefore highly significant. However, we emphasize that our own analysis was not optimized to measure these effects, and alternative reconstruction strategies might obtain better sensitivity.} It may therefore be possible to establish the presence of even this small correlation effect in the low-$p_T$ region with high significance, or perhaps more importantly to place tight limits on possible anomalous contributions. Fully realistic measurements must contend with systematic uncertainties and backgrounds. We are not in a position to fully understand systematics or the quality of background subtraction, but we can at least check that the effects of the backgrounds are small. We include $tW$, $W^+W^-$+jets, $l^+l^-$+jets, and $\tau^+\tau^-$+jets. (Simulations are described in detail in Appendix~\ref{sec:details}.) Already our basic reconstruction cuts are sufficient to largely eliminate $tW$ and $W^+W^-$+jets, and our additional selections on $m(l^+l^-)$, $\displaystyle{\not}E_T$, and $b$-tagging highly reduce $l^+l^-$+jets and $\tau^+\tau^-$+jets. The largest surviving background is $l^+l^-$+jets. It will be crucial to keep this background under control, as the tendency of the leptons to be back-to-back in the transverse plane leads to an $O(30\%)$ bias in its reconstructed $\phi - \bar \phi$ distribution in our low-$p_T$ region. However, after all cuts the overall rate for this background is only 3\% of $t\bar t$, and the final effect on the amplitude is expected to be only $O(1\%)$. We have also checked the stability of the difference/sum modulations to physics modeling. We compare the modulations, both correlated and uncorrelated, in $t\bar t$ samples generated in {\tt MadGraph5}\ from simple two-body production followed by parton showering, as well as production matched up to two additional jet emissions via $k_T$-MLM. The fitted distributions all agree at the 1\% level. Lastly, we comment on how these measurements can be used to probe for color-dipole moments or broad parity-violating resonances, as discussed in Section~\ref{sec:NP}. We measure the dipole moments using the $\phi-\bar\phi$ distribution. We find good sensitivity to the CMDM using an inclusive sample, and good sensitivity to the CEDM with a cut of $m(t\bar t) > 500$~GeV ($\beta^2 > 0.5$). The induced cosine- and sine-modulations have respective amplitudes of $0.40(\mu\times m_t)$ and $0.30(d\times m_t)$. With 20~fb$^{-1}$ of data at LHC8, we find 2$\sigma$ sensitivity to $\mu\times m_t \simeq 0.03$ and $d\times m_t \simeq 0.05$. In the next run of the LHC, with an expected collision energy near 13~TeV and integrated luminosity above 100~fb$^{-1}$, these limits would extend down to 0.01 or smaller. For our example resonance model of Section~\ref{sec:resonance}, we fit the induced sine-wave amplitude in $\phi+\bar\phi$, taking events with $|p_{T}(t)| + |p_{T}(\bar t)| \,>\, 250$ GeV and $|\cos \Theta| \,<\, 0.5$. The amplitude is 3.4\%, with a significance of 2.5$\sigma$. \subsection{$l$+jets} For this analysis we demand, in addition to our basic acceptance cuts, that the event contain at least four jets, at least one of which is $b$-tagged, and that leptonic and hadronic tops can be fully reconstructed.
To reconstruct the tops, we iterate over all possible partitions of the lepton and jets into a leptonic top ($l\nu j$) and a hadronic top ($jjj$), including separately each of the two possible neutrino solutions.\footnote{If $m_T(l,$$\displaystyle{\not}E_T$$) > m_W$, we use a single solution $\eta(\nu) \equiv \eta(l)$ and rescale $p_T(\nu)$ so that $m_T(l,\nu) \equiv m_W$.} Each candidate set of the six objects must contain at least one $b$-tagged jet, and in sets with more than one $b$-jet we require that the leptonic and hadronic top individually contain at least one. We choose the partitioning that minimizes $(m(l\nu j)-m_t)^2 + (m(jjj)-m_t)^2$. Within the hadronic top, we further resolve the kinematics. If the hadronic top contains one $b$-tagged jet, we take the two remaining jets to reconstruct the hadronic $W$. Otherwise, we pick the two jets that best reconstruct the $W$ mass, and identify the third as the ``$b$-jet." To ensure good quality reconstruction and to reduce backgrounds, we further demand that $m(jjj) = [130,200]$~GeV, $m((jj)_W) = [60,100]$~GeV, and $p_T(l\nu jjjj)/m(l\nu jjjj) < 0.1$. To construct our spin correlation observables, we use the lepton from the leptonic side and the less energetic of the two $W$ jets in the hadronic top's rest frame. However, we note that we get similar results (albeit with reversed modulations and somewhat smaller amplitudes) by picking the hadronic top's $b$-jet. In a real measurement, these two choices, which have similar sensitivity, can serve as excellent cross-checks of one another. We split the production phase space into ``low-$p_T$'' and ``high-$p_T$'' regions, respectively defined by $p_T(jjj) < 150$~GeV and $p_T(jjj) > 150$~GeV. In Fig.~\ref{fig:ljetsModulations}, we show the $\phi\pm\bar\phi$ distributions from our $t\bar t$ samples.\footnote{Note that for $l^+$+jets the variables are $\phi(l^+)\pm\bar\phi({\rm softer\;}j_W)$, and for $l^-$+jets they are $\phi({\rm softer\;}j_W)\pm\bar\phi(l^-)$. We assume CP-conservation in top decay, and do not study these two cases individually.}$^,$\footnote{The composition of these samples in $gg\to t\bar t$ ($q\bar q\to t\bar t$) is 80\% (20\%) for low-$p_T$ and 70\% (30\%) for high-$p_T$.} For comparison, we also display fits to parton-level results with perfect reconstruction but identical acceptance cuts. (These include a criterion, $\Delta R > 0.4$ for all $lj$ and $jj$ pairings, to reproduce the effects of lepton isolation and jet clustering.) As expected, the effects of the correlations are smaller than in the dileptonic channel. There are also some clear acceptance biases introduced by the cuts in the low-$p_T$ region. Our jet-level kinematic reconstruction largely traces this bias for $\phi+\bar\phi$, but can deviate by almost 10\% for $\phi-\bar\phi$. Still, the separation between correlated and uncorrelated tops is largely preserved. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{Dphi_lowPt_ljets.eps} \epsfxsize=0.44\textwidth\epsfbox{Dphi_highPt_ljets.eps} \epsfxsize=0.44\textwidth\epsfbox{phiSum_lowPt_ljets.eps} \epsfxsize=0.44\textwidth\epsfbox{phiSum_highPt_ljets.eps} \caption{\it Distributions of the azimuthal difference and sum for $l$+jets at LHC8: low-$p_T$ $\phi-\bar\phi$ (upper-left), high-$p_T$ $\phi-\bar\phi$ (upper-right), low-$p_T$ $\phi+\bar\phi$ (lower-left), high-$p_T$ $\phi+\bar\phi$ (lower-right). The solid black and red histograms are uncorrelated and correlated simulations with full reconstruction. 
(Error bars are monte carlo statistics, with an effective sample luminosity of 29~fb$^{-1}$.) The solid curves are two-parameter fits using a flat distribution plus independent sine and cosine modulations. The dashed curves are fits of the parton-level samples with identical acceptance cuts.} \label{fig:ljetsModulations} \end{center} \end{figure} We also illustrate the reconstruction bias induced on the positively-charged leptons' raw $\phi$ distributions in Fig.~\ref{fig:ljetsPhil}. Negatively-charged leptons display similar distributions with $\phi\to\phi+\pi$, as our orientation of the $x$ and $y$-axes is always by definition tied to the positively-charged top. Individual jets from the hadronic top also display similar distributions, which also depend on the sign of the leptonic top.\footnote{Generally, decay particles emitted within the production plane ($\phi = 0,\pi$) are less likely to be detected than particles emitted perpendicular to the production plane ($\phi = \pm\pi/2$). For centrally produced tops, this is easy to understand: particles emitted perpendicular to the plane are always in the central detector, whereas particles emitted within the plane can end up at very forward angles with small $p_T$. For tops that are not exactly central, particles emitted toward $\phi=0$ are being shot back into the central part of the detector, whereas those emitted toward $\phi = \pi$ are being shot more toward the beams.} Despite the rather severe reshaping of these otherwise flat distributions, the bias largely cancels out when forming $\phi\pm\bar\phi$ and combining positive and negative charges. We take this as some indication that these modulation effects may not be very sensitive to detailed detector acceptances. However, the distributions of our other correlation-sensitive azimuthal observables, $\phi'$ and $\bar\phi'$, are more directly reshaped in this manner. \begin{figure}[tp] \begin{center} \epsfxsize=0.44\textwidth\epsfbox{phil_lowPt_ljets.eps} \epsfxsize=0.44\textwidth\epsfbox{phil_highPt_ljets.eps} \caption{\it Azimuthal angle distributions for positively-charged leptons in $l$+jets after all acceptance cuts: low-$p_T$ (left) and high-$p_T$ (right). (Error bars are monte carlo statistics.)} \label{fig:ljetsPhil} \end{center} \end{figure} The results of our sinusoidal fits to the $\phi\pm\bar\phi$ distributions are summarized in Table~\ref{tab:ljetsResults}, as well as the expected number of events and statistical errors for 20~fb$^{-1}$ integrated luminosity at LHC8. Modulations should be resolvable with errors at the sub-percent level with respect to the total rate. Without systematic errors, we predict roughly $10\sigma$ discrimination between correlated and uncorrelated distributions in low-$p_T$ $\phi-\bar\phi$ and high-$p_T$ $\phi+\bar\phi$. \begin{table}[tp] \centering \begin{tabular}{l|r|c|c|c} LHC8, 20 fb$^{-1}$ & \; \# events \; & \; stat error \; & \; $\phi-\bar\phi$ amp uncorr/corr \; & \; $\phi+\bar\phi$ amp uncorr/corr \; \\ \hline low-$p_T$ & 73,900 \; & 0.5\% & -6.7\% / -0.9\% (11$\sigma$) & -4.5\% / -8.9\% (9$\sigma$) \\ high-$p_T$ & 35,300 \; & 0.8\% & -3.0\% / -0.5\% (3.1$\sigma$) & -1.8\% / -10.0\% (10$\sigma$) \end{tabular} \caption{\it Cosine modulation amplitudes for azimuthal correlation observables, and expected number of events, statistical errors, and significances for LHC8 at 20~fb$^{-1}$. 
Sine modulation components would have the same statistical errors, and central values are consistent with zero.} \label{tab:ljetsResults} \end{table} We can also consider measurement of the mixed polar-azimuthal correlations via $\phi'$ and $\bar\phi'$ ({\it cf.~}Eq.~\ref{eq:PolarAz}). The acceptance reshaping makes their distributions less amenable to sinusoidal fits, and the small size of the correlation also means that discrimination of correlated versus uncorrelated is much more difficult. If we default back to taking asymmetries, the difference between correlated and uncorrelated is about 1\% for low-$p_T$ and 2\% for high-$p_T$ (with acceptance biases of -2\%), and the statistical resolutions are roughly 0.4\% and 0.5\%. The small correlation effect might therefore be seen up to respective significances of roughly 2.4$\sigma$ and 4$\sigma$.\footnote{By way of comparison, we can also apply the same analysis style to the polar-polar correlations. The statistical errors are identical. The differences in low-$p_T$ and high-$p_T$ asymmetries are 3\% and 2\% (with opposite signs, and acceptance bias of 5\%). While these asymmetries are both small, they would also be visible in principle. We note that the helicity-basis polar correlations are much more sensitive to our division of top production phase space, and that in particular our dividing line at $p_T \simeq m_t$ sits near the minimum.} As in the dilepton channel, we again consider the impact of the major backgrounds, which include $W$+jets and single-top (mainly $tW$), as well as $t\bar t$ with leptonic $\tau$ decays. These each contribute an additional 5--10\% on top of the $t\bar t$ signal. We have checked that the azimuthal difference/sum modulations in each of these backgrounds are at the 10\% level or less relative to their individual rates, and therefore, when combined with $t\bar t$, contribute additional modulations at the sub-percent level. We reach analogous conclusions for the contributions to the $\phi'$ and $\bar\phi'$ asymmetries. The relative contribution of the backgrounds can be further reduced by requiring at least two $b$-tags, though at a cost of $O(50\%)$ of the signal. We have also compared our results between matched and unmatched $t\bar t$ simulation samples. This is a highly nontrivial cross-check for $l$+jets, which uses a large number of jets in the reconstruction. We nonetheless find percent-level agreement between the reconstructed modulations obtained in the different simulations, which suggests that higher-order corrections to our observables may be modest. Next, we consider the possible impact of new physics. The CMDM and CEDM respectively introduce cosine- and sine-wave modulations in $\phi-\bar\phi$. Our simulations indicate that the amplitudes shift by $0.29(\mu\times m_t)$ and $0.27(d\times m_t)$ in our inclusive sample (combining low-$p_T$ and high-$p_T$ subsamples).\footnote{The effects grow somewhat with a top $p_T$ cut, but the inclusive analysis provides better statistical significance.} This would give us 2$\sigma$ sensitivity to $\mu\times m_t$ and $d\times m_t$ of roughly 0.03 by the end of 2012. These are comparable to the limits that we obtained with the dileptonic channel above, and a combined measurement over both channels would be warranted. For our parity-violating resonance example model, we study a subsample with $p_T > 100$~GeV and $|\cos\Theta| < 0.5$, and observe a 2.7\% sine-wave modulation in $\phi+\bar\phi$. This corresponds to about 4.5$\sigma$ at LHC8.
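The statistical errors and significances quoted in Table~\ref{tab:ljetsResults} are consistent with the naive scaling for the error on a fitted sine or cosine amplitude, $\sigma_{\rm amp} \simeq \sqrt{2/N_{\rm ev}}$. The following minimal sketch (a cross-check written in Python; the scaling is an assumption rather than part of our fitting machinery, and the table entries are simply hard-coded as inputs) approximately reproduces the quoted numbers:
\begin{verbatim}
import math

# Assumes sigma_amp ~ sqrt(2/N_ev) for a sinusoidal-amplitude fit;
# event counts and amplitudes are copied from the table above.
samples = {
    "low-pT":  dict(n_ev=73900, dphi=(-0.067, -0.009), sphi=(-0.045, -0.089)),
    "high-pT": dict(n_ev=35300, dphi=(-0.030, -0.005), sphi=(-0.018, -0.100)),
}
for name, s in samples.items():
    err = math.sqrt(2.0 / s["n_ev"])             # amplitude resolution
    for obs in ("dphi", "sphi"):                 # phi-phibar and phi+phibar
        uncorr, corr = s[obs]
        sig = abs(uncorr - corr) / err           # uncorrelated vs. correlated
        print(f"{name} {obs}: stat err {err:.1%}, separation {sig:.1f} sigma")
\end{verbatim}
The same exercise can be repeated for the resonance subsample described above.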
We thus find that for this scenario, given our particular choices of cuts and reconstruction methods, $l$+jets outperforms dilepton (in which we found the effect to be visible with 2.5$\sigma$ significance). For both types of new physics, it may also be useful to fold in the independent $l$+jets measurements obtainable by correlating a lepton with a $b$-jet. Somewhat longer-term, for a projected 13~TeV LHC with 100~fb$^{-1}$, the statistical significances will improve over our 2012 estimates by roughly a factor of four. For example, dipole strengths weaker than 0.01 will become constrained. A combined dilepton and $l$+jets spin correlation measurement might push down to roughly $5\times10^{-3}$, which for the CEDM is approaching the scale of the indirect neutron EDM constraint~\cite{Kamenik:2011dk}. Clearly, by that point it also makes sense to further subdivide the production phase space, which will allow very detailed maps of the correlations if systematic errors can be controlled. In particular, effects such as those from our example resonance might be cleanly localized with high precision as in Fig.~\ref{fig:resonance2}. With high statistics, any effects of interest in specific production regions can also be further enhanced by applying cuts to the top decays, for example selecting only leptonic tops that decay with $\theta \simeq \pi/2$. The $l$+jets channel is particularly well-suited for such measurements, due to its amenability to complete kinematic reconstruction. Given the large number of highly-boosted top quarks produced at the very relativistic energies of LHC8 and LHC13, it will also be useful to employ jet substructure techniques and alternative lepton isolation strategies for extracting the decay kinematics (see, e.g.,~\cite{Abdesselam:2010pt,Altheimer:2012mn} and references therein). \section{Conclusions} \label{sec:conclusions} A systematic treatment of top quark production and decay reveals how the complete set of spin correlations imprint themselves on $t\bar t$ events, including all interference effects, and provides us with novel ways to search for new physics. In particular, we have emphasized that sums and differences of the azimuthal decay angles of the top and antitop about their production axis encode a significant portion of the full 3$\times$3 spin correlation matrix, and that modulations in these variables exhibit a nontrivial evolution as we scan over top production phase space. Within QCD, there is a common lore that the $q\bar q \to t\bar t$ subprocess, dominant at the Tevatron, produces top quarks whose spins are fully correlated when measured along a special off-diagonal axis that interpolates between the beam directions for slow tops and the production axis for fast tops~\cite{Mahlon:1995zn}. We have seen that this picture does not generally capture the entire spin correlation. In the limit of centrally-produced tops with appreciable velocities, an even larger correlation emerges due to interference between the different spin channels, leading to a modulation in the sum of the tops' azimuthal decay angles about the off-diagonal axis. For $gg \to t\bar t$, the picture is more complicated. Threshold production is pure $s$-wave, and can therefore be broken down into a longitudinal {\it anti}-correlation along any axis, married to an azimuthal-difference modulation around that axis. For fast, central tops, the correlation again begins to look like that of $q\bar q \to t\bar t$. 
For intermediate regions of phase space, a nontrivial crossover occurs, and in much of this crossover region the correlation is completely dominated by the azimuthal-sum modulation about the production axis. Measurement of azimuthal correlations about the production axis, including their evolution across top production phase space, is straightforward. Unlike polar decay angles, azimuthal sums and differences are not highly sensitive to detailed detector acceptances. We have demonstrated these points with a set of simulation measurements set at the 8~TeV LHC, for both dileptonic and $l$+jets channels. Dividing the phase space into ``low-$p_T$'' and ``high-$p_T$'' regions for illustration, and accounting only for statistical errors, we predict that the modulations in the two azimuthal angle combinations should be observable with significances above 10$\sigma$ with the current 2012 data set. The presence of new physics will generally lead to modifications of the azimuthal decay angle correlations. In \cite{Baumgart:2011wk}, we showed how these correlations can elucidate the coupling structure of heavy, relatively narrow resonances in the $t\bar t$ mass spectrum. Here, we have considered two additional examples which have modest $S/B$ and whose effects are not necessarily well-localized in production phase space, but which lead to significant distortions of azimuthal correlations. The first is the well-studied possibility of contributions from dimension-five color-dipole operators. These induce azimuthal-difference modulations, with a phase governed by the relative strengths of the CMDM and CEDM. We have argued that the azimuthal modulations capture the vast majority of the dipoles' effects on the spin correlations at the LHC and proposed measurements that will ultimately facilitate bounds that are an order of magnitude more sensitive than the current direct limits~\cite{Kamenik:2011dk}. Our second example is a broad spin-one color-octet resonance that couples vectorially to light quarks and axially to top quarks. Interference with QCD leads to a pronounced asymmetric modulation in the azimuthal-sum variable, representing a form of parity-violation that has not been considered before. We studied a specific model point which would remain well-hidden from $t\bar t$ and dijet resonance searches, and whose dominant contribution is this parity-violating effect. While we have restricted most of our discussion of measurement prospects to the LHC, the Tevatron might also be a promising venue in which to search for anomalous correlations. Though the QCD correlation has only just been measured there, new physics can induce strong effects in unexpected places. In particular, processes that dominantly affect azimuthal correlations in $q\bar q\to t\bar t$, such as our parity-violating resonance model, may show up more strongly at the Tevatron than at the LHC. We have also mainly focused on correlations amongst azimuthal angles. In conjunction with polar angle correlation measurements, this provides us with five of the nine entries of the full correlation matrix. The other four can be obtained through dedicated polar-azimuthal cross-correlation measurements. We have seen that these effects are small in QCD, though potentially measurable. They also tend to be small in many new physics models. However, we have already encountered an important counterexample, in the high-velocity limit in the presence of color-dipole interactions. 
We have not undertaken a dedicated estimate of how visible these effects might be, as they are likely subject to large corrections beyond the leading order in the dipole strengths, but they would be interesting to explore in this context and in more general models. Detailed tests of the Standard Model such as the ones that we are proposing demand precise predictions. Our results here are entirely restricted to leading order, but one may ask whether our statements are stable to NLO corrections. We have made some modest efforts toward this end by cross-checking our results between matched and unmatched samples and finding good agreement. Recently, a full NLO analysis of the azimuthal-sum correlation has been performed for QCD augmented by a neutral spin-one resonance~\cite{Caola:2012rs}. Reshapings of the normalized distributions at the level of roughly 10\% were observed, though the residual scale variations tended to be much smaller. For a resonance that exhibits the same correlation as QCD, and contributes $S/B \simeq 1/2$ within a specified mass window, the scale variations were smaller than the Monte Carlo resolution. These results are encouraging indications that the QCD predictions are under good control. As we enter the era of truly precision top physics, it will be important to have a clear understanding of what information is available to measure. Top quark spin correlations are a rich phenomenon that merits more detailed examination beyond the standard approaches. This paper has been aimed toward comprehensively understanding the patterns of azimuthal decay correlations exhibited by QCD and by a handful of new physics scenarios. We hope that our observations will promote these correlations to a more prominent place in the Tevatron's and LHC's top quark physics programs. \acknowledgments{MB was supported by NSF-PHY-0910467 and DE-FG-03-91ER40682. MB wishes to thank the Galileo Galilei Institute for their hospitality while a portion of this work was completed. BT was supported by DoE grant No.\ DE-FG-02-91ER40676 and by NSF-PHY-0969510 (LHC Theory Initiative).}
\section{Introduction} \label{sec:intro} The transport properties of excitons in organic as well as in inorganic molecular solids are of fundamental interest [Kempe, 2003; Woerner {\it et al.}, 2004; M\"{u}lken {\it et al.}, 2006; Sillanp\"{a}\"{a} {\it et al.}, 2007; Olaya-Castro {\it et al.}, 2008; Zhou {\it et al.}, 2008]. In general, at high temperatures the transport is incoherent and can be efficiently modeled by continuous-time random walks (CTRWs) over sets of participating centers (atoms, molecules, etc.) [Montroll \& Weiss, 1965]. In this case the transport follows a master equation. The transfer rates between the participating centers can be related to the spatial arrangement of the centers. The arrangement is captured by the so-called Laplacian matrix $\mathbf{L}$, which, up to a sign, we will identify here with the transfer matrix $\mathbf{T}$ of the CTRW. However, when dealing with quantum particles at low densities and low temperatures, decoherence can be suppressed to a large extent: The study of transport in this regime requires different modeling tools, able to mimic the coherent features. Clearly, quantum mechanical transport phenomena follow Schr\"{o}dinger's equation. In order to make contact with CTRWs we relate the Hamiltonian of the system to the classical transfer matrix, $\mathbf{H}=-\mathbf{T}$; this yields a description mathematically closely connected to the classical master equation approach. Indeed, this realizes a quantum mechanical analog of the CTRWs defined on a discrete structure, i.e. the so-called continuous-time quantum walks (CTQWs). However, apart from formal analogies, coherence can give rise to very peculiar properties (e.g. Anderson localization [Anderson, 1958], crucial dependences of the transport on the starting position [M\"{u}lken {\it et al.}, 2006] and a quadratic speed-up of the chemical distance covered [Agliari {\it et al.}, 2008]) with no counterpart in classical transport. These effects allow interesting and cross-disciplinary applications and can also be exploited in experiments in order to distinguish whether the transport is rather classical or rather quantum mechanical [M\"{u}lken {\it et al.}, 2007]. In particular, a common means for probing the transport relies on the interaction with other species, such as impurity atoms or molecules (found in or doped into the medium) which irreversibly trap the charges or quench the excitations. Consequently, a great deal of recent theoretical work has focused on investigating essential features of basic trapping models, wherein a single particle moves in a medium containing different arrangements of traps. Indeed, much is known about the decay when the motion is incoherent [Van Kampen, 1981; Blumen {\it et al.}, 1983; 1986; ben-Avraham \& Havlin, 2000], while (as we will show here) when quantum effects become important, strong deviations from the classical results occur. In a set of early works the dynamics of coherent excitations on a chain with randomly distributed traps has been investigated using several methods [Hemenger \textit{et al}., 1974; Kenkre, 1978; Huber, 1980; Parris, 1991] which provided a reasonable description of the process at short times and in the asymptotic limit. On the other hand, from the experimental point of view, the most relevant regime is the one of intermediate times; in this time interval some striking effects have been recently highlighted [M\"{u}lken {\it et al.}, 2007; M\"{u}lken {\it et al.}, 2008].
Here we focus on trapping processes taking place on a finite ring where the traps are distributed according to different arrangements: the traps are either gathered in a cluster or distributed periodically or randomly. In these cases the classical survival probability $P_M(t)$ has been studied intensively (see e.g. [ben-Avraham \& Havlin, 2000]). In fact, under ordered conditions $P_M(t)$ is known to exponentially decay to zero. Conversely, for a random distribution of traps $P_M(t)$ exhibits different time regimes: At long times it follows a stretched exponential which turns into a pure exponential when finite size effects dominate. As for the CTQW, the emergence of intrinsic quantum-mechanical effects, such as tunneling, prevents the decomposition of the problem into a collection of disconnected intervals and, as we will see, the mean survival probability $\Pi_M(t)$ is strongly affected by the trap arrangement. Hence, by following the temporal decay of $\Pi_M(t)$ we can extract information about the geometry. Furthermore, we show that in the cases analyzed here $P_M(t)$ and $\Pi_M(t)$ exhibit qualitatively different behaviours; this allows one to determine the nature, either rather coherent or rather incoherent, of the transport process. Our paper is structured as follows: In Sec.~\ref{sec:CTRW} we provide a brief summary of the main concepts and of the formulae concerning CTQWs. In Sec.~\ref{sec:Trapping} we introduce a mathematical formalism useful for analyzing trapping in the CTQW picture. In the following Sec.~\ref{sec:Perturbative}, we consider special arrangements of traps on a ring and we investigate the mean survival probability by means of a perturbative approach; these analytical findings are corroborated by numerical results. In Sec.~\ref{sec:random} we study the case of random distributions of traps and finally, in Sec.~\ref{sec:concl} we present our comments and conclusions. \section{Continuous Time Quantum Walk} \label{sec:CTRW} Let us consider a graph $\mathcal{G}$ made up of $N$ nodes and algebraically described by the so-called adjacency matrix $\mathbf{A}$: the non-diagonal elements $A_{ij}$ equal $1$ if nodes $i$ and $j$ are connected by a bond and $0$ otherwise; the diagonal elements $A_{ii}$ are $0$. From the adjacency matrix we can directly derive some interesting quantities concerning the corresponding graph. For instance, the coordination number of a node $i$ is $z_i = \sum_j A_{ij}$ and the number of walks of length $\ell$ from $i$ to $j$ is given by $(A^{\ell})_{ij}$ [Biggs, 1974]. We also define the Laplacian operator $\mathbf{L}$ according to $L_{ij} = z_i \delta_{ij} - A_{ij}$; the set of all $N$ eigenvalues of $L$ is called the Laplacian spectrum of $\mathcal{G}$. Interestingly, the Laplacian spectrum is intimately related not only to dynamical processes involving particles moving on the graph, but also to dynamical processes involving the network itself; these include energy transfer and diffusion-reaction processes as well as the relaxation of polymer networks, just to name a few (see for example [Mohar, 1991; Galiceanu \& Blumen, 2007] and references therein). In the context of coherent and incoherent transport it is worth underlining that, being symmetric and non-negative definite, $\mathbf{L}$ can generate both a probability conserving Markov process and a unitary process [Childs \& Goldstone, 2004; M\"{u}lken \& Blumen, 2005; Volta {\it et al.}, 2006].
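As a minimal numerical illustration of this last point (a sketch, not part of the analysis that follows; the ring size $N=8$ and the time $t$ are arbitrary choices), one can build $\mathbf{A}$ and $\mathbf{L}$ for a ring and verify that $e^{-t\mathbf{L}}$ conserves probability while $e^{-i\mathbf{L}t}$ is unitary:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N = 8                                          # ring size (arbitrary example)
A = np.zeros((N, N))
for j in range(N):                             # adjacency matrix of the ring
    A[j, (j + 1) % N] = A[j, (j - 1) % N] = 1
z = A.sum(axis=1)                              # coordination numbers z_i (all 2)
L = np.diag(z) - A                             # L_ij = z_i delta_ij - A_ij

# (A^ell)_ij counts the walks of length ell from i to j
print(np.linalg.matrix_power(A, 3)[0, 1])      # 3 walks of length 3 from node 0 to 1

t = 2.5
P = expm(-t * L)                               # classical propagator (T = -L, gamma = 1)
U = expm(-1j * t * L)                          # quantum propagator (H = L, see below)
print(np.allclose(P.sum(axis=0), 1.0))         # columns sum to 1: probability conserved
print(np.allclose(U.conj().T @ U, np.eye(N)))  # unitary coherent evolution
\end{verbatim}
Both checks are direct consequences of $\mathbf{L}$ being symmetric and non-negative definite.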
Now, continuous-time random walks (CTRWs) [Montroll \& Weiss, 1965] are described by the following master equation: \begin{equation}\label{eq:master_cl} \frac{d}{dt} p_{k,j}(t)= \sum_{l=1}^{N} T_{kl} p_{l,j}(t), \end{equation} where $p_{k,j}(t)$ is the conditional probability that the walker is on node $k$ at time $t$, having started from node $j$ at time $0$. If the walk is unbiased the transmission rates $\gamma$ are bond-independent and the transfer matrix $\mathbf{T}$ is related to the Laplacian operator through $\mathbf{T} = - \gamma \mathbf{L}$ (in the following we set $\gamma=1$). We now define the quantum-mechanical analog of the CTRW, i.e. the CTQW, by identifying the Hamiltonian of the system with the classical transfer matrix, $\mathbf{H}=-\mathbf{T}$ [Farhi \& Gutmann, 1998; M\"{u}lken \& Blumen, 2005]. Hence, given the orthonormal basis set $|j \rangle$, representing the walker localized at the node $j$, we can write \begin{equation}\label{eq:tb} \mathbf{H} = \sum_{j=1}^{N} z_j |j\rangle \langle j| - \sum_{j=1}^{N} \sum_{k {\rm NN} j} |k\rangle \langle j| , \end{equation} where in the second sum $k$ runs over all nearest neighbors (NN) of $j$. The operator in Eq.~\ref{eq:tb} is also known as tight-binding Hamiltonian. Actually, the choice of the Hamiltonian $\mathbf{H}$ is, in general, not unique [Childs \& Goldstone, 2004] and Eq.~\ref{eq:tb} has two important advantages: It allows one to take into account the local properties of an arbitrary substrate and, remarkably, it yields a mathematical formulation displaying important analogies with the classical picture. In fact, the dynamics of the CTQW can be described by the transition amplitude $\alpha_{k,j}(t)$ from state $| j \rangle$ to state $| k \rangle$, which obeys the following Schr\"{o}dinger equation: \begin{equation}\label{eq:schrodinger} \frac{d}{dt} \alpha_{k,j}(t)=-i \sum_{l=1}^{N} H_{kl} \alpha_{l,j}(t), \end{equation} structurally very similar to Eq.~\ref{eq:master_cl}. The solution of Eq.~\ref{eq:schrodinger} can be formally written as \begin{equation}\label{eq:formal_solution} \alpha_{k,j}(t)=\langle k | \exp(-i\mathbf{H}t) | j \rangle, \end{equation} whose squared magnitude provides the quantum mechanical transition probability $\pi_{k,j}(t) \equiv |\alpha_{k,j}(t)|^2$. In general, it is convenient to introduce the orthonormal basis $| \Phi_n \rangle, n \in [1,N]$ which diagonalizes $\mathbf{H}$; the corresponding set of eigenvalues is denoted by $\{E_n\}_{n=1,...,N}$. Thus, we can write \begin{equation} \pi_{k,j}(t) = \left| \sum_{n=1}^{N} \langle k | e^{-i E_n t} |\Phi_n \rangle \langle \Phi_n | j \rangle \right|^2. \end{equation} It should be underlined that while both problems (CTRW and CTQW) are linear, and thus many results obtained in solving CTRWs (eigenvalues and eigenfunctions) can be readily reutilized for CTQWs, the physically relevant properties of the two cases differ vastly: Thus, in the absence of traps CTQWs are symmetric under time inversion, which precludes the $\pi_{k,j}(t)$ from attaining equipartition at long times (as the $p_{k,j}(t)$ do for CTRWs). Also, the quantal system keeps memory of the initial conditions, exemplified by the occurrence of quasi-revivals [M\"{u}lken \& Blumen, 2005; M\"{u}lken \& Blumen, 2006]. \section{CTQWs in the presence of traps}\label{sec:Trapping} As discussed in the previous section, the operators describing the dynamics of CTQWs and of CTRWs share the same set of eigenvalues and of eigenstates. However, when new contributions (arising e.g.
from the interaction with external fields or absorbing sites) are incorporated, the eigenvalues and the eigenstates start to differ. In the following we introduce a formalism useful for analyzing the dynamics of CTQWs and CTRWs in the presence of traps; for this we will denote by $\mathbf{H_0}$ and $\mathbf{T_0}$ the unperturbed operators without traps. Let us consider a system where $M$ out of the $N$ nodes are traps; we label the trap positions with $m_j$, with $j=1,...,M$, and we denote this set with $\mathcal{M}$. For substitutional traps the system can be described by the following effective (but non-Hermitian) Hamiltonian [M\"{u}lken {\it et al.}, 2007] \begin{equation} \label{eq:H} \mathbf{H} = \mathbf{H}_0 - i \mathbf{\Gamma}, \end{equation} where $\mathbf{\Gamma}$ is the trapping operator defined as \begin{equation} \label{eq:Gamma} \mathbf{\Gamma} = \sum_{j=1}^M \Gamma_{m_j} | m_j \rangle \langle m_j |. \end{equation} The capture strength $\Gamma_{m_j}$ determines the rate of decay for a particle located at trap site $m_j$; here we will take the $\Gamma_{m_j}$ to be equal for all traps, i.e. $\Gamma_{m_j} \equiv \Gamma$ for all $j$. The limit $\Gamma \rightarrow \infty$ corresponds to perfect traps, which means that a classical particle is immediately absorbed when reaching any trap. Due to the non-hermiticity of $\mathbf{H}$, its eigenvalues are complex and can be written as $E_l = \epsilon_l - i \gamma_l \ (l=1,...,N)$; moreover, the set of its left and right eigenvectors, $| \Phi_l \rangle$ and $\langle \tilde{\Phi}_l |$, respectively, can be chosen to be biorthonormal ($\langle \tilde{\Phi}_l |\Phi_{l'} \rangle = \delta_{l,l'}$) and to satisfy the completeness relation $\sum_{l=1}^N |\Phi_l \rangle \langle \tilde{\Phi}_l | = \mathbf{1}$. Therefore, according to Eq.~\ref{eq:schrodinger}, the transition amplitude can be evaluated as \begin{equation} \label{eq:alfa} \alpha_{k,j}(t) = \sum_{l=1}^N e^{- (\gamma_l + i \epsilon_l)t} \langle k | \Phi_l \rangle \langle \tilde{\Phi}_l | j \rangle, \end{equation} from which $\pi_{k,j}(t)=|\alpha_{k,j}(t)|^2$ follows. Of particular interest, due to its relation to experimental observables, is the mean survival probability $\Pi_M(t)$ which can be expressed as [M\"{u}lken {\it et al.}, 2007] \begin{eqnarray} \label{eq:pi} \nonumber \lefteqn{\Pi_{M} \equiv \frac{1}{N - M} \sum_{j \notin \mathcal{M} } \sum_{k \notin \mathcal{M} } \pi_{kj}(t)} \\ \nonumber & & = \frac{1}{N-M} \sum_{l=1}^N e^{-2 \gamma_l t} \left ( 1- 2 \sum_{m \in \mathcal{M}} \langle \tilde{\Phi}_l | m \rangle \langle m | \Phi_l \rangle \right ) \\ & & \mbox{} + \frac{1}{N-M} \sum_{l,l'=1}^N e^{-i(E_l - E_{l'}^{*})t} \left ( \sum_{m \in \mathcal{M}} \langle \tilde{\Phi}_{l'} | m \rangle \langle m | \Phi_l \rangle \right )^2 . \end{eqnarray} The temporal decay of $\Pi_M(t)$ is determined by the imaginary parts of $E_l$, i.e. by the $\gamma_l$. As shown in [M\"{u}lken {\it et al.}, 2007], at intermediate and long times and for $M \ll N$ the $\Pi_M(t)$ can be approximated by a sum of exponentially decaying terms: \begin{equation}\label{eq:pi_asym} \Pi_{M} \approx \frac{1}{N-M} \sum_{l=1}^{N} e^{-2 \gamma_l t}, \end{equation} and is dominated asymptotically by the smallest $\gamma_l$ values. Now, in the incoherent, classical transport case trapping is incorporated into the CTRW according to \begin{equation} \mathbf{T} = \mathbf{T_0} - \mathbf{\Gamma} = - \mathbf{L} -\mathbf{\Gamma}.
\end{equation} The transfer operator $\mathbf{T}$ is therefore real and symmetric, and it leads to real, strictly negative eigenvalues which we denote by $-\lambda_l$; to them correspond the eigenstates $| \phi_l \rangle$. Analogously, the mean survival probability for the CTRW can be written as \begin{eqnarray} \label{eq:p} \nonumber \lefteqn{P_M(t) \equiv \frac{1}{N-M} \sum_{j \notin \mathcal{M}} \sum_{k \notin \mathcal{M}} p_{kj}(t)} \\ & & = \frac{1}{N-M} \sum_{l=1}^{N} e^{-\lambda_l t} \left| \sum_{k \notin \mathcal{M}} \langle k | \phi_l \rangle \right|^2. \end{eqnarray} From Eq.~\ref{eq:p} one may deduce that $P_M(t)$ attains in general rather quickly an exponential form; furthermore, if the smallest eigenvalue $\lambda_{\rm min}$ is well separated from the next closest eigenvalue, $P_M(t)$ is dominated by $\lambda_{\rm min}$ and by the corresponding eigenstate $|\phi_{\rm min} \rangle$ [M\"{u}lken {\it et al.}, 2007; M\"{u}lken {\it et al.}, 2008]: \begin{equation} \label{eq:p_asym} P_M(t) \approx \frac{1}{N-M} e^{-\lambda_{\rm min} t} \left| \sum_{k \notin \mathcal{M}} \langle k | \phi_{\rm min} \rangle \right|^2. \end{equation} Lower estimates of the gap $\Delta$ between the two smallest eigenvalues have been found in the past for special choices of operators (see e.g. [Chen M., 1997] and references therein). For instance, the operator $\textbf{T}_0$ has $\lambda_{\rm min}=0$; its next smallest eigenvalue represents the algebraic connectivity of the graph, namely the relative number of edges needed to be deleted to generate a bipartition. In the case of a $k$-regular graph $\Delta$ is bounded from below by $k/(D N)$, where $D$ is the diameter of the graph, i.e.\ the maximum distance between any two vertices [Chung F.R.K., 1996]. \section{Perturbative approach for trapping on a ring}\label{sec:Perturbative} When the strength $\Gamma$ of the trap is small with respect to the couplings between neighbouring nodes (which here means $\Gamma \ll 1$), we can treat the effective Hamiltonian introduced in Eq.~\ref{eq:H} along the lines of time-independent perturbation theory. Before developing this strategy we fix the structure $\mathcal{G}$, by considering a ring of length $N$ so that the coordination number equals $2$ for all sites ($\mathbf{Z} = 2 \mathbf{I}$), where we assume $N$ to be even. For the corresponding Hamiltonian $\mathbf{H_0}$ we know exactly all the eigenvalues and eigenvectors; one has namely \begin{equation}\label{eq:eigenvalues} E_l^{(0)} = 2 - 2 \cos (2 \pi l /N) \end{equation} and \begin{equation} \label{eq:eigenvectors} | \Phi_l^{(0)} \rangle = \frac{1}{\sqrt{N}} \sum_{j=1}^{N} e^{-i 2 \pi l j /N} | j \rangle. \end{equation} We underline that all the eigenvalues, apart from $E_{N/2}=4$ and $E_{N}=0$, are two-fold degenerate, $E_l = E_{N-l} \, (l=1,2,...,N/2-1)$. We now apply perturbation theory to evaluate to first order the corrections $E_l^{(1)}$ to the eigenvalues $E_l$. For $l= N/2$ and for $l=N$ we use the non-degenerate expression \begin{equation} \label{eq:nondeg} E_{l}^{(1)} = - i \Gamma \sum_{m \in \mathcal{M}} \left| \langle m | \Phi_l^{(0)} \rangle \right|^2 \end{equation} and get \begin{equation} \label{eq:correction_nondeg} E_{N/2} = 4 - i \Gamma \frac{M}{N} \; \; \; \mathrm{and} \; \; \; E_{N} = - i \Gamma \frac{M}{N}.
\end{equation} For $l$ different from $N/2$ and $N$ we set \begin{eqnarray} V_{i,j} \equiv \langle \Phi_i^{(0)} | -i \mathbf{\Gamma} | \Phi_j^{(0)} \rangle \end{eqnarray} and we apply the expression valid for two-fold degenerate solutions of $\mathbf{H_0}$: \begin{eqnarray} \label{eq:deg} E_{l}^{(1)} = \frac{1}{2} \left ( V_{l,l} + V_{N-l,N-l} \right ) \\ \nonumber \pm \frac{1}{2} \left[ \left( V_{l,l} - V_{N-l,N-l} \right)^2 + 4 |V_{l,N-l}|^2 \right ]^{1/2}, \end{eqnarray} where we choose the positive sign for $l \in [1,N/2-1]$ and the negative sign for $l \in [N/2+1,N-1]$. Now we have \begin{equation} \label{eq:V11} V_{l,l} \equiv V_{N-l,N-l} = - i \Gamma \frac{M}{N}, \end{equation} independently of the trap arrangement and \begin{eqnarray} \label{eq:V12} \nonumber & V_{l,N-l} = -i \Gamma / N \sum_{j=1}^{M} \exp \{ 2 i \pi m_j [l - (N-l)]/N \} \\ & = -i \Gamma / N \sum_{j=1}^{M} \exp (4 i \pi l m_j /N). \end{eqnarray} By inserting the last results into Eq.~\ref{eq:deg} we get \begin{equation} \label{eq:correction_deg} E_{l}^{(1)} = \frac{-i \Gamma}{N} \left ( M \pm \left| \sum_{j=1}^{M} e^{2 i \pi 2 l m_j /N} \right| \right ). \end{equation} We notice that for special trap arrangements the $E_l^{(1)}$ can be calculated exactly: The most striking results are obtained when the exponential in the sum in Eq.~\ref{eq:correction_deg} equals one of the values from the set $\{ 1, i, -1, -i \}$. Then the absolute value of the sum reduces to $|\sum_{j=1}^{M} \exp(i 4 \pi l m_j /N)| = M$. For this there have to exist indices $l \neq N/2$ and $l \neq N$ such that $m_j$ can be expressed as \begin{equation} \label{eq:conditions} m_j = \frac{N}{8l}(4 k_j +r ) + c \end{equation} where $k_j$ and $c$ are arbitrary integers and $r=0,1,2$ or $3$, corresponding to $1,i,-1$ or $-i$, respectively. Consequently, we obtain for the correction \begin{equation} E_{l}^{(1)}= -i \Gamma \frac{M}{N} \; \; \mathrm{and} \; \; E_{N-l}^{(1)}= 0, \end{equation} so that the degeneracy is always lifted. \begin{figure}[ht] \includegraphics[width =2.0in]{period.eps} \includegraphics[width =2.0in]{cluster.eps} \caption{Examples of periodic (top) and sequential (bottom) arrangements of $M=5$ traps on a ring of size $N=20$.} \label{fig:arrangia} \end{figure} Let us now focus on a periodic distribution of traps with $m_j = j N/M$, while $N/M \in \mathbb{N}$. It is easy to see that in this case there exists a non-empty set $\Upsilon$ of distinct values of $l \in [1,N/2-1]$ satisfying the condition of Eq.~\ref{eq:conditions}; this occurs for $2l/M \in \mathbb{N}$, so that the cardinality of $\Upsilon$ is given by the number of integers in $\{ 2l/M \}_{l=1,2,...,N/2-1}$, namely by \begin{equation} \label{eq:upsi} |\Upsilon| = \left\{ \begin{array}{cr} \lfloor (N-2) /M \rfloor & \mathrm{\; \; for \; even} \; M, \\ \lfloor (N-2) / 2M \rfloor & \mathrm{\; \; for \; odd} \; M, \end{array} \right. \end{equation} where $\lfloor x \rfloor$ denotes the largest integer less than or equal to $x$. In particular, for both $M=1$ and $M=2$ we have $| \Upsilon| =N/2-1$. Now, the numerical diagonalization of the Hamiltonian $\mathbf{H}$ shows that for $l \in \Upsilon$ we get $\gamma_{N-l}=0$ (not only in first order in $\Gamma$). Consequently, the corresponding term in Eq.~\ref{eq:pi} decays to a non vanishing value, and from Eq.~\ref{eq:pi_asym} we have for $t \rightarrow \infty$: \begin{equation} \label{eq:surv_upsilon} \Pi_M(t) \approx \frac{|\Upsilon|}{N-M}. 
\end{equation} Hence, recalling Eq.~\ref{eq:upsi}, for large structures with $M \ll N$, $\Pi_M(t)$ asymptotically decays to $1/M$ (even case) and to $1/(2M)$ (odd case). Figure \ref{fig:per} shows results obtained for a ring of size $N=300$ with a periodic arrangement of $M=10$ ($|\Upsilon| = 29$) and $M=75$ ($|\Upsilon| = 1$) traps. Consequently, the survival probability $\Pi_M(t)$ decays to the constant values $1/10$ and $1/225$, respectively. From a physical point of view, the finite limit for the survival probability stems from the existence of stationary states to which the nodes in $\mathcal{M}$ do not contribute, so that they never ``see" the traps. This genuine quantum-mechanical effect has no counterpart in the classical case where, for finite structures, the survival probability always decays to zero in the presence of traps. In particular, as shown in Fig.~\ref{fig:per}, $P_M(t)$ decays exponentially, as expected. \begin{figure}[ht] \includegraphics[width =3.5in]{per.eps} \caption{Survival probabilities $\Pi_M(t)$ (continuous line) and $P_M(t)$ (dotted lines) on a ring of size $N=300$ and $\Gamma=0.01$ in the presence of $M=10$ and of $M=75$ traps arranged periodically, i.e. $m_j = j N/M$. Note the semilogarithmic scales.} \label{fig:per} \end{figure} Let us now focus on another special configuration of $M$ traps, $M > 1$: we consider a sequential arrangement, such that $m_j=j$ and $j=1,\ldots,M$. Hence, Eq.~\ref{eq:V12} can be written as \begin{eqnarray} \label{eq:V12_seq} \lefteqn{V_{l,N-l} = -i \Gamma /N \sum_{j=1}^{M} \exp (4 i \pi l j /N) }\\ \nonumber & & = \frac{-i \Gamma}{N} \; \frac{ \exp (4 i \pi l M/N ) -1 } { \exp (4 \pi i l /N ) -1 } \; \exp (4 \pi i l /N )\\ \nonumber & & = \frac{-i \Gamma}{N} \; \frac{\sin(2 \pi M l /N)}{\sin(2 \pi l /N)} \; \exp [2 i \pi l (M+1)/N], \end{eqnarray} from which we get \begin{equation} \label{eq:correction_deg_seq} E_l^{(1)} = \frac{-i \Gamma}{N} \left( M \pm \frac{\sin(2 \pi M l /N)}{\sin(2 \pi l /N)} \right). \end{equation} We notice that, since $l \neq N/2$ and $l \neq N$, we have $2l/N \notin \mathbb{N}$, while for $2lM/N \in \mathbb{N}$ we get $E_l^{(1)} = E_{N-l}^{(1)} = - i \Gamma M/N$. In particular, when $M=N/2$, we have $\gamma_l = \Gamma M/N$ for each value of $l \in \left[ 1,N \right]$. As a result, in Eq.~\ref{eq:pi} the first term vanishes due to the completeness property and the fact that the $\gamma_l$ are no longer $l$-dependent. As for the second term, by neglecting oscillations, we get \begin{equation} \label{eq:pi_seq_spec} \Pi_M (t) \approx \frac{M^2}{N(N-M)}\, e^{-2 \Gamma t M/N} = \frac{1}{2} e^{-\Gamma t}, \end{equation} which is independent of $N$. As shown in Fig.~\ref{fig:seq_q} the exponential behaviour predicted by Eq.~\ref{eq:pi_seq_spec} holds also for intermediate times. In Fig.~\ref{fig:seq_cq} we compare the survival probabilities of CTQWs and CTRWs: as highlighted by the semi-logarithmic plot, the decay is exponential in both cases, although faster in the former. Indeed, for the CTRWs we have in the long-time limit from Eq.~\ref{eq:p}: \begin{equation}\label{eq:p_seq_spec} P_M(t)\approx \frac{N-M}{N} e^{- \Gamma M t /N} = \frac{1}{2} e^{- \Gamma t /2}, \end{equation} where we used the fact that the smallest eigenvalue is $\Gamma M /N$. By comparing Eq.~\ref{eq:pi_seq_spec} and Eq.~\ref{eq:p_seq_spec} we see that, although the decay is exponential in both cases, the decay rate is twice as large for $\Pi_M(t)$ as for $P_M(t)$.
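Both decay laws are easy to check numerically without any of the above approximations. The following minimal sketch (a Python illustration, not part of the calculations reported here; $N=48$ and $\Gamma=0.001$ are taken from Fig.~\ref{fig:seq_cq}, while the sampling times are arbitrary) propagates the exact dynamics for the sequential arrangement with $M=N/2$ and compares the resulting $\Pi_M(t)$ and $P_M(t)$ with Eq.~\ref{eq:pi_seq_spec} and Eq.~\ref{eq:p_seq_spec}; agreement is expected up to the oscillatory terms neglected above:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

N, Gamma = 48, 0.001                 # ring size and trap strength as in the figure
M = N // 2
traps = np.arange(M)                 # sequential traps m_j = j (0-based here)
rest = np.arange(M, N)               # trap-free nodes

A = np.zeros((N, N))
for j in range(N):
    A[j, (j + 1) % N] = A[j, (j - 1) % N] = 1
L = np.diag(A.sum(axis=1)) - A       # ring Laplacian

G = np.zeros((N, N))
G[traps, traps] = Gamma              # trapping operator
H = L - 1j * G                       # effective Hamiltonian H = H_0 - i Gamma
T = -L - G                           # classical transfer operator

for t in (200.0, 1000.0, 3000.0):
    U = expm(-1j * H * t)            # quantum amplitudes alpha_{k,j}(t)
    Pi = (np.abs(U[np.ix_(rest, rest)]) ** 2).sum() / (N - M)
    P = expm(T * t)[np.ix_(rest, rest)].sum() / (N - M)
    print(t, Pi, 0.5 * np.exp(-Gamma * t),      # compare with (1/2) exp(-Gamma t)
          P, 0.5 * np.exp(-Gamma * t / 2))      # compare with (1/2) exp(-Gamma t/2)
\end{verbatim}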
\begin{figure}[ht] \includegraphics[width =3.5in]{Seq_qsemi3.eps} \caption{Survival probability $\Pi_M(t)$ on rings of size $N=32, 48, 64$ and $96$ for $\Gamma=0.04, 0.01, 0.004$, as indicated. The number of traps is $M=N/2$ and they are placed consecutively, i.e. $m_j = j$. The straight lines represent Eq.~\ref{eq:pi_seq_spec}.} \label{fig:seq_q} \end{figure} \begin{figure}[ht] \includegraphics[width =3.5in]{seq_cq3.eps} \caption{Survival probabilities $\Pi_M(t)$ and $P_M(t)$ on rings of size $N=48$ for $\Gamma=0.001$; the number of traps is $M=N/2$ and they are placed consecutively, i.e. $m_j = j$. The straight lines represent Eq.~\ref{eq:pi_seq_spec} (continuous line) and Eq.~\ref{eq:p_seq_spec} (dashed line).} \label{fig:seq_cq} \end{figure} \section{Random distributions of traps}\label{sec:random} We now take $N$ to be odd (so as not to fulfill Eq.~\ref{eq:conditions}) and consider random arrangements of traps: we pick $M$ distinct trap locations randomly from a uniform distribution and determine the corresponding $\Pi_M(t)$ and $P_M(t)$. Then we average these over different, independent realizations to determine $\langle \Pi_M(t) \rangle$ and $\langle P_M(t) \rangle$. As already mentioned in Sec.~\ref{sec:intro}, $\langle P_M(t) \rangle$ exhibits different behaviours: in an infinite system the decay law is a stretched exponential at long times, whereas in finite systems at such times the decay gets to be exponential. In Fig.~\ref{fig:rand_c} we show evidence of the long-time exponential behaviour of $\langle P_M(t) \rangle$ in systems of relatively small size. \begin{figure}[ht] \includegraphics[width =3.5in]{rand_c.eps} \caption{Average survival probabilities $\langle P_M(t) \rangle$ for rings of sizes $N=51$ and $101$. Here $\Gamma=0.1$ and $M$ is either $4$ or $8$. The data presented have been averaged over $120$ different realizations, see text. The straight lines highlight the exponential decay.} \label{fig:rand_c} \end{figure} \begin{figure*}[ht] \includegraphics[width =4.2in]{rand2.eps} \caption{$\langle \Pi_M(t) \rangle$ and $\mu$ for rings of sizes $N=51, 101,$ and $201$. Here $\Gamma=0.1$ and $M$ is $2,4,$ or $8$, as indicated. The data presented have been averaged over $120$ different realizations. The main figure displays the average survival probabilities $\langle \Pi_M(t) \rangle$ in double logarithmic scales. The straight lines represent the best fit. The inset displays the exponent $\mu$ as a function of $c=M/N$ for systems of sizes $N=201$ ($*$) and $N=101$ ($\bullet$).} \label{fig:rand} \end{figure*} Let us now consider $\langle \Pi_M(t) \rangle$ for random trap arrangements. Now, the $\langle \Pi_M(t) \rangle$ decay differs qualitatively from that of the $\Pi_M(t)$ analyzed in the previous section. As shown in Fig.~\ref{fig:rand}, for intermediate times the average survival probability displays a power law, which decays more slowly than exponentially: \begin{equation} \langle \Pi_M(t) \rangle \sim t^{-1 / \mu}. \end{equation} A similar result has already been obtained for CTQWs on a finite chain with two traps at its ends ($\mathcal{M}=\{1,N\}$), in the presence of either nearest-neighbour [M\"{u}lken {\it et al.}, 2007] or long-range interactions [M\"{u}lken {\it et al.}, 2008]. There one could understand the power law decay based on the imaginary part of the Hamiltonian spectrum $\{ \gamma_l \}$, which in a large interval scales algebraically with $l$. 
By fitting the numerical data obtained for different sizes and concentrations we get the characteristic exponent $\mu$ depicted in the inset of Fig.~\ref{fig:rand}. \section{Conclusions}\label{sec:concl} In conclusion, we have modeled the coherent dynamics by continuous-time quantum walks and studied interactions with traps: Taking a periodic chain as substrate, we calculated the mean quantal survival probability $\Pi_M(t)$ and we compared it to the classical $P_M(t)$ for different trap arrangements. The quantum problem was approached both analytically (by means of perturbation theory) and numerically, showing that the spatial distribution of the traps significantly affects $\Pi_M(t)$. In particular, when the traps are arranged periodically throughout the substrate, $\Pi_M(t)$ decays asymptotically to a nonvanishing value which depends directly on the system size $N$ and on the number of traps $M$ (e.g., when $M=2$, $\Pi_2(t) \rightarrow 1/2$ for $t \rightarrow \infty$). This is a genuine quantum-mechanical effect with no counterpart in classical mechanics, where $P_M(t)$ decays to zero for finite systems. Another interesting, deterministic trap configuration is realized by distributing the traps consecutively so as to form a cluster; then at intermediate and long times the survival probability decays exponentially with the characteristic time $\Gamma^{-1}$. Now, for the same trap configuration, the characteristic time for the classical survival probability doubles, being $2 \Gamma^{-1}$. When the traps are distributed randomly on the substrate, a further, qualitatively different behaviour of $\Pi_M(t)$ is obtained. In fact, by averaging over different independent configurations we find in this case that at intermediate times $\langle \Pi_M(t) \rangle$ decays algebraically, i.e. $\langle \Pi_M(t) \rangle \sim t^{-1 / \mu}$, where $\mu$ depends on $M$ and $N$ and is related to the imaginary part of the Hamiltonian spectrum. On the other hand, for systems of relatively small size we find that in the same time range finite-size effects dominate $\langle P_M(t) \rangle$, giving rise to an exponential decay. These results establish that studying the decay due to trapping is indeed an advantageous means to monitor the system's evolution, as it allows one to determine the nature of the transport, which can be either rather coherent or rather incoherent. Moreover, the behaviour exhibited by $\Pi_M(t)$, being qualitatively affected by the trap configurations, may be used to distinguish between them. \bigskip \noindent {\bf Acknowledgments} \smallskip Support from the Deutsche Forschungsgemeinschaft (DFG) and the Fonds der Chemischen Industrie is gratefully acknowledged. EA thanks the Italian Foundation ``Angelo Della Riccia" for financial support. \bigskip \noindent {\bf References} \smallskip \noindent Agliari, E., Blumen, A. \& M\"{u}lken, O. [2008] ``Dynamics of Continuous-time quantum walks in restricted geometries," {\it J. Phys. A} {\bf 41}, 445301-445321. \noindent Anderson, P.W. [1958] ``Absence of Diffusion in Certain Random Lattices," {\it Phys. Rev.} {\bf 109}, 1492-1505. \noindent ben-Avraham D. \& Havlin S. [2000] ``Diffusion and Reactions in Fractals and Disordered Systems," Cambridge University Press. \noindent Biggs N. [1974] ``Algebraic graph theory," Cambridge University Press. \noindent Blumen A., Klafter, J. \& Zumofen G. [1983] ``Trapping and reaction rates on fractals," {\it Phys. Rev. B} {\bf 28}, 6112-6115. \noindent Blumen A., Klafter, J. \& Zumofen G.
[1986] ``Models for reaction dynamics in glasses," in {\it Optical Spectroscopy of Glasses}, I. Zschokke ed., D. Reidel, Dordrecht, pp. 199-265. \noindent Chen M. [1997] ``Coupling, spectral gap and related topics (II)," {\it Chinese Science Bulletin} {\bf 42}, 1409-1416. \noindent Childs A.M. \& Goldstone J. [2004] ``Spatial search by quantum walk," {\it Phys. Rev. A} {\bf 70}, 022314-022324. \noindent Chung F.R.K. [1996] ``Spectral graph theory," CBMS Lecture Notes. \noindent Farhi E. \& Gutmann S. [1998] ``Quantum computation and decision trees," {\it Phys. Rev. A} {\bf 58}, 915-928. \noindent Galiceanu M. \& Blumen A. [2007] ``Spectra of Husimi cacti: Exact results and applications," {\it J. Chem. Phys.} {\bf 127}, 134904-134911. \noindent Hemenger R.P., Lakatos-Lindenberg K. \& Pearlstein R.M. [1974] ``Impurity quenching of molecular excitons. III. Partially coherent excitons in linear chains," {\it J. Chem. Phys.} {\bf 60}, 3271-3277. \noindent Huber, D.L. [1980] ``Fluorescence in the presence of traps. II. Coherent transfer," {\it Phys. Rev. B} {\bf 22}, 1714-1721. \noindent Kempe, J. [2003] ``Quantum random walks: an introductory overview," {\it Contemp. Phys.} {\bf 44}, 307-327. \noindent Kenkre, V.M. [1978] ``Model for Trapping Rates for Sensitized Fluorescence in Molecular Crystals," {\it Phys. Status Solidi B} {\bf 89}, 651-654. \noindent Mohar B. [1991] ``The Laplacian Spectrum of Graphs''. In {\it Graph Theory, Combinatorics, and Applications}, Vol. 2, Ed. Y. Alavi, G. Chartrand, O.R. Oellermann, A.J. Schwenk, Wiley, pp.\ 871-898. \noindent Montroll E.W. \& Weiss G.H. [1965] ``Random walks on lattices II," {\it J. Math. Phys.} {\bf 6}, 167-181. \noindent M\"{u}lken O., Bierbaum V. \& Blumen A. [2006] ``Coherent exciton transport in dendrimers and continuous-time quantum walks," {\it J. Chem. Phys.} {\bf 124}, 124905-124911. \noindent M\"{u}lken O. \& Blumen A. [2005] ``Slow transport by continuous-time quantum walks," {\it Phys. Rev. E} {\bf 71}, 016101-016106. \noindent M\"{u}lken O. \& Blumen A. [2006] ``Continuous-time quantum walks in phase space," {\it Phys. Rev. A} {\bf 73}, 012105-012110. \noindent M\"{u}lken O., Blumen A., Amthor T., Giese C., Reetz-Lamour M. \& Weidem\"{u}ller M. [2007] ``Survival Probabilities in Coherent Exciton Transfer with Trapping," {\it Phys. Rev. Lett.} {\bf 99}, 090601-090605. \noindent M\"{u}lken O., Pernice V. \& Blumen A. [2008] ``Slow Excitation Trapping in Quantum Transport with Long-Range Interactions," {\it Phys. Rev. E} {\bf 78}, 021115-021119. \noindent Olaya-Castro A., Lee C.F., Fassioli Olsen F. \& Johnson N.F. [2008] ``Efficiency of energy transfer in a light-harvesting system under quantum coherence," {\it Phys. Rev. B} {\bf 78}, 085115-085121. \noindent Parris, P.E. [1991] ``Quantum and Stochastic Aspects of Low-Temperature Trapping and Reaction Dynamics," {\it J. Stat. Phys.} {\bf 65}, 1161-1172. \noindent Sillanp\"{a}\"{a} M.A., Park J.I. \& Simmonds R.W. [2007] ``Coherent quantum state storage and transfer between two phase qubits via a resonant cavity," {\it Nature} {\bf 449}, 438-442. \noindent Van Kampen N.G. [1981] ``Stochastic Processes in Physics and Chemistry," North-Holland, Amsterdam. \noindent Volta A., M\"{u}lken O. \& Blumen A. [2006] ``Quantum transport on two-dimensional regular graphs," {\it J. Phys. A} {\bf 39}, 14997-15012. \noindent Woerner M., Reimann K. \& Elsaesser T. [2004] ``Coherent charge transport in semiconductor quantum cascade structures," {\it J. Phys.: Condens. Matter} {\bf 16}, R25-R48.
\noindent Zhou L., Gong Z.R., Liu Y.-X., Sun C.P. \& Nori F. [2008] ``Controllable Scattering of a Single Photon inside a One-Dimensional Resonator Waveguide," {\it Phys. Rev. Lett.} {\bf 101}, 100501-100505. \end{document}
\section*{Introduction} Let $X$ be a $d$-dimensional projective arithmetic variety and $L$ an invertible sheaf on $X$. We fix a continuous hermitian metric $\vert\cdot\vert$ of $L$. In Arakelov geometry, we frequently ask whether $H^0(X,L)$ has a free basis consisting of strictly small sections, that is, sections whose supremum norms are less than $1$. However, little is known about this problem in general. For example, Zhang \cite{ZhPos} proves that $H^0(X, nL)$ possesses a free basis as above for $n \gg 1$ if the following conditions are satisfied: \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})}} \item $X_{{\mathbb{Q}}}$ is regular, $L_{{\mathbb{Q}}}$ is ample and $L$ is nef on every fiber of $X \to \operatorname{Spec}({\mathbb{Z}})$. \item The metric $\vert\cdot\vert$ is $C^{\infty}$ and the first Chern form $c_1(L,\vert\cdot\vert)$ is semipositive. \item For some positive integer $n_0$, there are strictly small sections $s_1, \ldots, s_l$ of $n_0L$ such that $\{ x \in X_{{\mathbb{Q}}} \mid s_1(x) = \cdots = s_l(x) = 0 \} = \emptyset$. \end{enumerate} From the viewpoint of birational geometry, the ampleness of $L_{{\mathbb{Q}}}$ is rather strong. As Example~\ref{example:projective:line} and Example~\ref{example:projective:plane} show, base point freeness is not a necessary condition, but one can see that condition (3) is crucial for finding a basis consisting of strictly small sections. In this paper, we would like to consider the problem under the mild and appropriate assumption (3) (cf. Corollary~\ref{cor:intro:B}). Let $R$ be a graded subring of $\bigoplus_{n=0}^{\infty} H^0(X, nL)$ over ${\mathbb{Z}}$. For each $n$, we assign a norm $\operatorname{\|\text{\textperiodcentered}\|}_n$ to $R_n \otimes_{{\mathbb{Z}}} {\mathbb{R}}$ in such a way that \[ \Vert s \cdot s' \Vert_{n+n'} \leq \Vert s \Vert_{n} \cdot \Vert s' \Vert_{n'} \] holds for all $s \in R_n \otimes_{{\mathbb{Z}}} {\mathbb{R}}$ and $s' \in R_{n'} \otimes_{{\mathbb{Z}}} {\mathbb{R}}$. Then \[ (R, \operatorname{\|\text{\textperiodcentered}\|}) = \bigoplus_{n=0}^{\infty} (R_n, \operatorname{\|\text{\textperiodcentered}\|}_n) \] is called a {\em normed graded subring of $L$}. An important point is that each norm $\operatorname{\|\text{\textperiodcentered}\|}_n$ does not necessarily arise from the metric of $L$, which gives us considerable flexibility in the arguments. The following theorem is one of the main results of this paper. \begin{Theorem} \label{thm:intro:A} If $R \otimes_{{\mathbb{Z}}} {\mathbb{Q}}$ is noetherian and there are homogeneous elements $s_1, \ldots, s_l \in R$ of positive degree such that $\{ x \in X_{{\mathbb{Q}}} \mid s_1(x) = \cdots = s_l(x) = 0 \} = \emptyset$, then there is a positive constant $B$ such that \[ \lambda_{{\mathbb{Z}}}(R_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \leq B n^{(d+2)(d-1)/2} \left( \max \left\{ \Vert s_1 \Vert^{1/\deg(s_1)}, \ldots, \Vert s_l \Vert^{1/\deg(s_l)} \right\}\right)^n \] for all $n \geq 1$, where $\lambda_{{\mathbb{Z}}}(R_n,\operatorname{\|\text{\textperiodcentered}\|}_n)$ is the infimum of the set of real numbers $\lambda$ such that there is a free ${\mathbb{Z}}$-basis $x_1, \ldots, x_r$ of $R_n$ with $\max \{ \Vert x_1 \Vert_n, \ldots, \Vert x_r \Vert_n \} \leq \lambda$. \end{Theorem} This is a consequence of the technical result Theorem~\ref{thm:base:strictly:small:sec}, which also yields variants of the arithmetic Nakai-Moishezon criterion (cf.
Theorem~\ref{thm:Nakai:Moishezon:1} and Theorem~\ref{thm:Nakai:Moishezon:2}). As a corollary of the above theorem, we have the following: \begin{Corollary} \label{cor:intro:B} Let $\overline{L}$ be a continuous hermitian invertible sheaf on $X$. If the above condition \textup{(3)} is satisfied, in other words, \[ \left\langle \{ s \in H^0(X, n_0L) \mid \Vert s \Vert_{\sup} < 1 \} \right\rangle_{{\mathbb{Z}}} \otimes_{{\mathbb{Z}}} {\mathcal{O}}_X \to n_0 L \] is surjective on $X_{{\mathbb{Q}}}$ for some positive integer $n_0$, then $H^0(X, nL)$ has a free ${\mathbb{Z}}$-basis consisting of strictly small sections for $n \geq 1$. \end{Corollary} \renewcommand{\theTheorem}{\arabic{section}.\arabic{Theorem}} \renewcommand{\theClaim}{\arabic{section}.\arabic{Theorem}.\arabic{Claim}} \renewcommand{\theequation}{\arabic{section}.\arabic{Theorem}.\arabic{Claim}} \section{Normed ${\mathbb{Z}}$-module} Let $(V,\operatorname{\|\text{\textperiodcentered}\|})$ be a normed finite dimensional vector space over ${\mathbb{R}}$, that is, $V$ is a finite dimensional vector space over ${\mathbb{R}}$ and $\operatorname{\|\text{\textperiodcentered}\|}$ is a norm of $V$. Let $\alpha : W \to V$ be an injective homomorphism of finite dimensional vector spaces over ${\mathbb{R}}$. If we set $\Vert w \Vert_{W \hookrightarrow V} = \Vert\alpha(w)\Vert$ for $w \in W$, then $\operatorname{\|\text{\textperiodcentered}\|}_{W \hookrightarrow V}$ gives rise to a norm of $W$. This is called the {\em subnorm} of $W$ induced by $W \hookrightarrow V$ and the norm $\operatorname{\|\text{\textperiodcentered}\|}$ of $V$. Next let $\beta : V \to T$ be a surjective homomorphism of finite dimensional vector spaces over ${\mathbb{R}}$. The {\em quotient norm} $\operatorname{\|\text{\textperiodcentered}\|}_{V \twoheadrightarrow T}$ of $T$ induced by $V \twoheadrightarrow T$ and the norm $\operatorname{\|\text{\textperiodcentered}\|}$ of $V$ is given by \[ \Vert t \Vert_{V \twoheadrightarrow T} = \inf \{ \Vert v \Vert \mid \beta(v) = t \} \] for $t \in T$. Let $(U,\operatorname{\|\text{\textperiodcentered}\|})$ be another normed finite dimensional vector space over ${\mathbb{R}}$, and let $\phi : V \to U$ be a homomorphism over ${\mathbb{R}}$. The norm $\Vert \phi\Vert$ of $\phi$ is defined to be \[ \Vert \phi \Vert = \sup \{ \Vert \phi(v) \Vert \mid v \in V, \ \Vert v \Vert = 1 \}. \] First let us see the following lemma. \begin{Lemma} \label{norm:sub:quot:4:spaces} Let $(V,\operatorname{\|\text{\textperiodcentered}\|})$ be a normed finite dimensional vector space over ${\mathbb{R}}$. Let $T \subseteq U \subseteq W \subseteq V$ be vector subspaces of $V$. Then \[ (\operatorname{\|\text{\textperiodcentered}\|}_{W \hookrightarrow V})_{W \twoheadrightarrow W/U} = ((\operatorname{\|\text{\textperiodcentered}\|}_{V \twoheadrightarrow V/T})_{W/T \hookrightarrow V/T})_{W/T \twoheadrightarrow W/U} \] holds on $W/U$. \end{Lemma} \begin{proof} Let us consider the following commutative diagram: \[ \xymatrix{ W \ar@{^{(}->}[r] \ar@{->>}[d] & V \ar@{->>}[d] \\ W/U \ar@{^{(}->}[r] & V/U. } \] Then, by \cite[(2) in Lemma~3.4]{MoCont}, we have \[ (\operatorname{\|\text{\textperiodcentered}\|}_{W \hookrightarrow V})_{W \twoheadrightarrow W/U} = (\operatorname{\|\text{\textperiodcentered}\|}_{V \twoheadrightarrow V/U})_{W/U \hookrightarrow V/U}. 
\] Moreover, considering the following commutative diagram: \[ \xymatrix{ W/T \ar@{^{(}->}[r] \ar@{->>}[d] & V/T \ar@{->>}[d] \\ W/U \ar@{^{(}->}[r] & V/U, } \] if we set $\operatorname{\|\text{\textperiodcentered}\|}' = \operatorname{\|\text{\textperiodcentered}\|}_{V \twoheadrightarrow V/T}$, then \[ (\operatorname{\|\text{\textperiodcentered}\|}'_{V/T \twoheadrightarrow V/U})_{W/U \hookrightarrow V/U} = (\operatorname{\|\text{\textperiodcentered}\|}'_{W/T \hookrightarrow V/T})_{W/T \twoheadrightarrow W/U}. \] Thus the lemma follows because $\operatorname{\|\text{\textperiodcentered}\|}_{V \twoheadrightarrow V/U} = \operatorname{\|\text{\textperiodcentered}\|}'_{V/T \twoheadrightarrow V/U}$ by \cite[(1) in Lemma~3.4]{MoCont}. \end{proof} Let $M$ be a finitely generated ${\mathbb{Z}}$-module and $\operatorname{\|\text{\textperiodcentered}\|}$ a norm of $M_{{\mathbb{R}}} := M \otimes_{{\mathbb{Z}}} {\mathbb{R}}$. A pair $(M, \operatorname{\|\text{\textperiodcentered}\|})$ is called a {\em normed ${\mathbb{Z}}$-module}. For a normed ${\mathbb{Z}}$-module $(M, \operatorname{\|\text{\textperiodcentered}\|})$, we define $\lambda_{{\mathbb{Q}}}(M, \operatorname{\|\text{\textperiodcentered}\|})$ and $\lambda_{{\mathbb{Z}}}(M,\operatorname{\|\text{\textperiodcentered}\|})$ to be \[ \hspace{-2.7em} \lambda_{{\mathbb{Q}}}(M,\operatorname{\|\text{\textperiodcentered}\|}) := \inf \left\{ \lambda \in {\mathbb{R}} \left| \begin{array}{l} \text{there are $e_1, \ldots, e_n \in M$ such that $e_1, \ldots, e_n $} \\ \text{form a basis over ${\mathbb{Q}}$ and $\Vert e_i \Vert \leq \lambda$ for all $i$} \end{array} \right\}\right. \] and \[ \lambda_{{\mathbb{Z}}}(M,\operatorname{\|\text{\textperiodcentered}\|}) := \inf \left\{ \lambda \in {\mathbb{R}} \left| \begin{array}{l} \text{there are $e_1, \ldots, e_n \in M$ such that $e_1, \ldots, e_n$ form} \\ \text{a free ${\mathbb{Z}}$-basis of $M/M_{tor}$ and $\Vert e_i \Vert \leq \lambda$ for all $i$} \end{array} \right\}\right.. \] Note that if $M$ is a torsion module, then $\lambda_{{\mathbb{Q}}}(M, \operatorname{\|\text{\textperiodcentered}\|}) = \lambda_{{\mathbb{Z}}}(M,\operatorname{\|\text{\textperiodcentered}\|}) = 0$. \begin{Lemma} \label{lem:lambda:lambda:prime} $\lambda_{{\mathbb{Q}}}(M,\operatorname{\|\text{\textperiodcentered}\|}) \leq \lambda_{{\mathbb{Z}}}(M,\operatorname{\|\text{\textperiodcentered}\|}) \leq \operatorname{rk} (M) \lambda_{{\mathbb{Q}}}(M,\operatorname{\|\text{\textperiodcentered}\|})$. \end{Lemma} \begin{proof} See \cite[Lemma~1.7 and its consequence]{ZhPS}. \end{proof} \begin{Lemma} \label{lem:iso:Q:comp:lambda} Let $(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1)$ and $(M_2, \operatorname{\|\text{\textperiodcentered}\|}_2)$ be normed ${\mathbb{Z}}$-modules, and let $\phi : M_1 \to M_2$ be a homomorphism such that $\phi$ yields an isomorphism over ${\mathbb{Q}}$. Then we have the following: \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})}} \item $\lambda_{{\mathbb{Q}}}(M_2,\operatorname{\|\text{\textperiodcentered}\|}_2) \leq \Vert \phi \Vert \lambda_{{\mathbb{Q}}}(M_1, \operatorname{\|\text{\textperiodcentered}\|}_1)$. \item Further we assume that $\phi$ is surjective and that $\phi$ induces an isometry \[ ((M_{1})_{{\mathbb{R}}}, \operatorname{\|\text{\textperiodcentered}\|}_1) \overset{\sim}{\longrightarrow} ((M_{2})_{{\mathbb{R}}}, \operatorname{\|\text{\textperiodcentered}\|}_2). 
\] Then $\lambda_{{\mathbb{Q}}}(M_2,\operatorname{\|\text{\textperiodcentered}\|}_2) = \lambda_{{\mathbb{Q}}}(M_1, \operatorname{\|\text{\textperiodcentered}\|}_1)$. \end{enumerate} \end{Lemma} \begin{proof} (1) Let $e_1, \ldots, e_n \in M_1$ be such that $e_1, \ldots, e_n$ form a basis of $M_1$ over ${\mathbb{Q}}$ and \[ \max \{ \Vert e_1 \Vert_1, \ldots, \Vert e_n \Vert_1 \} = \lambda_{{\mathbb{Q}}}(M_1, \operatorname{\|\text{\textperiodcentered}\|}_1). \] Then $\phi(e_1), \ldots, \phi(e_n)$ form a basis of $M_2$ over ${\mathbb{Q}}$ and \[ \Vert \phi(e_i) \Vert_2 \leq \Vert \phi \Vert \Vert e_i \Vert_1 \leq \Vert \phi \Vert \lambda_{{\mathbb{Q}}}(M_1, \operatorname{\|\text{\textperiodcentered}\|}_1) \] for all $i$. Thus we have the assertion. (2) First of all, by (1), $\lambda_{{\mathbb{Q}}}(M_2,\operatorname{\|\text{\textperiodcentered}\|}_2) \leq \lambda_{{\mathbb{Q}}}(M_1, \operatorname{\|\text{\textperiodcentered}\|}_1)$. Let $y_1, \ldots, y_n \in M_2$ be such that $y_1, \ldots, y_n$ form a basis of $M_2$ over ${\mathbb{Q}}$ and \[ \max \{ \Vert y_1 \Vert_2, \ldots, \Vert y_n \Vert_2 \} = \lambda_{{\mathbb{Q}}}(M_2, \operatorname{\|\text{\textperiodcentered}\|}_2). \] For each $i$, we choose $x_i \in M_1$ with $\phi(x_i) = y_i$. Then $\Vert x_i \Vert_1 = \Vert y_i \Vert_2$ for all $i$. Thus $\lambda_{{\mathbb{Q}}}(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1) \leq \lambda_{{\mathbb{Q}}}(M_2, \operatorname{\|\text{\textperiodcentered}\|}_2)$. \end{proof} \begin{Proposition} \label{prop:estimate:lambda:chain} Let $(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1), \ldots, (M_n, \operatorname{\|\text{\textperiodcentered}\|}_n)$ be normed ${\mathbb{Z}}$-modules. For each $i$ with $2 \leq i \leq n$, let $\alpha_i : M_{i-1} \to M_i$ be a homomorphism such that $\alpha_i$ gives rise to an injective homomorphism over ${\mathbb{Q}}$. We set \[ \phi_i = \alpha_n \circ \cdots \circ \alpha_{i+1} : M_i \to M_n \] for $i=1,\ldots,n-1$, and \[ Q_i = \begin{cases} \operatorname{Coker}(\alpha_i : M_{i-1} \to M_i) & \text{if $i \geq 2$}, \\ M_1 & \text{if $i=1$} \end{cases} \] for $i=1,\ldots,n$. Then we have \[ \lambda_{{\mathbb{Q}}}(M_n, \operatorname{\|\text{\textperiodcentered}\|}_n) \leq \lambda_{{\mathbb{Q}}}(Q_n, \operatorname{\|\text{\textperiodcentered}\|}_{n, M_n \twoheadrightarrow Q_n}) + \sum_{i=1}^{n-1} \Vert \phi_i \Vert \lambda_{{\mathbb{Q}}}(Q_i, \operatorname{\|\text{\textperiodcentered}\|}_{i, M_i \twoheadrightarrow Q_i}) \operatorname{rk} Q_i. \] \end{Proposition} \begin{proof} The proof of this proposition can be found in \cite[Lemma~5.1]{ZhPS}. For the reader's convenience, we reprove it here. Let $\operatorname{\|\text{\textperiodcentered}\|}'_i = \operatorname{\|\text{\textperiodcentered}\|}_{n,(M_i)_{{\mathbb{R}}} \hookrightarrow (M_n)_{{\mathbb{R}}}}$, that is, the subnorm induced by the injective homomorphism $\phi_i : (M_i)_{{\mathbb{R}}} \to (M_n)_{{\mathbb{R}}}$ and the norm $\operatorname{\|\text{\textperiodcentered}\|}_n$ of $(M_n)_{{\mathbb{R}}}$. First let us see the following claim. \begin{Claim} \label{claim:prop:estimate:lambda:chain:1} $\lambda_{{\mathbb{Q}}}(Q_i, \operatorname{\|\text{\textperiodcentered}\|}'_{i, M_i \twoheadrightarrow Q_i}) \leq \Vert \phi_i \Vert \lambda_{{\mathbb{Q}}}(Q_i, \operatorname{\|\text{\textperiodcentered}\|}_{i, M_i \twoheadrightarrow Q_i})$. \end{Claim} By the definition of $\Vert \phi_i \Vert$, for $x \in (M_i)_{{\mathbb{R}}}$, \[ \Vert x \Vert_i \Vert \phi_i \Vert \geq \Vert \phi_i(x) \Vert = \Vert x \Vert'_i.
\] Thus, for $y \in (Q_i)_{{\mathbb{R}}}$, \[ \Vert y \Vert_{i,M_i \twoheadrightarrow Q_i} \Vert \phi_i \Vert \geq \Vert y \Vert'_{i,M_i \twoheadrightarrow Q_i}, \] which shows the inequality of the claim. \CQED By Claim~\ref{claim:prop:estimate:lambda:chain:1}, (2) in Lemma~\ref{lem:iso:Q:comp:lambda} and replacing $M_i$ with $\phi_i(M_i)$, we may assume that $\alpha_i : M_{i-1} \hookrightarrow M_i$ is an inclusion map and $\operatorname{\|\text{\textperiodcentered}\|}_i = \operatorname{\|\text{\textperiodcentered}\|}_{n, M_i \hookrightarrow M_n}$. \begin{Claim} \label{claim:prop:estimate:lambda:chain:2} The assertion holds in the case $n=2$, that is, \[ \lambda_{{\mathbb{Q}}}(M_2,\operatorname{\|\text{\textperiodcentered}\|}_2) \leq \lambda_{{\mathbb{Q}}}(Q_2,\operatorname{\|\text{\textperiodcentered}\|}_{2,M_2 \twoheadrightarrow Q_2}) + \lambda_{{\mathbb{Q}}}(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1) \operatorname{rk} M_1. \] \end{Claim} Let $e_1, \ldots, e_s \in M_1$ and $f_1, \ldots, f_t \in Q_2$ such that $e_1, \ldots, e_s$ and $f_1, \ldots, f_t$ form bases of $M_1$ and $Q_2$ over ${\mathbb{Q}}$ respectively, and that \[ \begin{cases} \lambda_{{\mathbb{Q}}}(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1) = \max \{ \Vert e_1 \Vert_1, \ldots, \Vert e_s \Vert_1 \}, \\ \lambda_{{\mathbb{Q}}}(Q_2,\operatorname{\|\text{\textperiodcentered}\|}_{2,M_2 \twoheadrightarrow Q_2}) = \max \{ \Vert f_{1}\Vert_{2,M_2 \twoheadrightarrow Q_2}, \ldots, \Vert f_t \Vert_{2,M_2 \twoheadrightarrow Q_2} \}. \end{cases} \] Let us choose $f'_j \in M_2$ and $f''_j \in (M_2)_{{\mathbb{R}}}$ such that $f'_j = f_j$ on $Q_2$, $f''_j = f_j$ on $(Q_2)_{{\mathbb{R}}}$ and that $\Vert f''_j \Vert_2 = \Vert f_{j}\Vert_{2,M_2 \twoheadrightarrow Q_2}$. Since $f'_j \otimes 1 - f''_j \in (M_1)_{{\mathbb{R}}}$, there are $a_{ji} \in {\mathbb{R}}$ such that \[ f'_j \otimes 1 - f''_j = \sum_{i} a_{ji} (e_i \otimes 1). \] We set $g_j = f'_j - \sum_i \lfloor a_{ji} \rfloor e_i$. Then $e_1, \ldots, e_s, g_1, \ldots, g_t \in M_2$ form a basis of $M_2$ over ${\mathbb{Q}}$. Moreover, as \[ g_j \otimes 1 = f''_j + \sum_i (a_{ji} - \lfloor a_{ji} \rfloor)(e_i \otimes 1), \] we have \[ \Vert g_j \Vert_2 \leq \lambda_{{\mathbb{Q}}}(Q_2,\operatorname{\|\text{\textperiodcentered}\|}_{2,M_2 \twoheadrightarrow Q_2}) + \lambda_{{\mathbb{Q}}}(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1) \operatorname{rk} M_1 , \] which implies the claim. \CQED We assume $n \geq 3$. We set $M'_i = M_{i}/M_1$ for $i=2, \ldots, n$ and the norm $\operatorname{\|\text{\textperiodcentered}\|}'_i$ of $M'_i$ is given by $\operatorname{\|\text{\textperiodcentered}\|}'_i = \operatorname{\|\text{\textperiodcentered}\|}_{i, M_{i} \twoheadrightarrow M'_i}$. Note that \[ \operatorname{\|\text{\textperiodcentered}\|}'_i = (\operatorname{\|\text{\textperiodcentered}\|}_{n, M_n \twoheadrightarrow M'_{n}})_{M'_{i} \hookrightarrow M'_{n}} \] by \cite[(2) in Lemma~3.4]{MoCont}. Applying the induction hypothesis to \[ (M'_2, \operatorname{\|\text{\textperiodcentered}\|}'_2) \hookrightarrow \cdots \hookrightarrow (M'_{n}, \operatorname{\|\text{\textperiodcentered}\|}'_{n}), \] we obtain \[ \lambda_{{\mathbb{Q}}}(M'_{n}, \operatorname{\|\text{\textperiodcentered}\|}'_{n}) \leq \lambda_{{\mathbb{Q}}}(Q_n, \operatorname{\|\text{\textperiodcentered}\|}'_{n, M'_n \twoheadrightarrow Q_n}) + \sum_{i=2}^{n-1} \lambda_{{\mathbb{Q}}}(Q_i, \operatorname{\|\text{\textperiodcentered}\|}'_{i, M'_i \twoheadrightarrow Q_i}) \operatorname{rk} Q_i. 
\] Using Lemma~\ref{norm:sub:quot:4:spaces} in the case where \[ M_1 \subseteq M_{i-1} \subseteq M_i \subseteq M_n, \] we have $\operatorname{\|\text{\textperiodcentered}\|}'_{i, M'_i \twoheadrightarrow Q_i} = \operatorname{\|\text{\textperiodcentered}\|}_{i, M_i \twoheadrightarrow Q_i}$. Therefore, the above inequality means \addtocounter{Claim}{1} \begin{equation} \label{eqn:prop:estimate:lambda:chain:1} \lambda_{{\mathbb{Q}}}(M'_{n}, \operatorname{\|\text{\textperiodcentered}\|}'_{n}) \leq \lambda_{{\mathbb{Q}}}(Q_n, \operatorname{\|\text{\textperiodcentered}\|}_{n, M_n \twoheadrightarrow Q_n}) + \sum_{i=2}^{n-1} \lambda_{{\mathbb{Q}}}(Q_i, \operatorname{\|\text{\textperiodcentered}\|}_{i, M_i \twoheadrightarrow Q_i}) \operatorname{rk} Q_i. \end{equation} On the other hand, applying Claim~\ref{claim:prop:estimate:lambda:chain:2} to the case where $(M_1, \operatorname{\|\text{\textperiodcentered}\|}_1) \hookrightarrow (M_n, \operatorname{\|\text{\textperiodcentered}\|}_n)$, we can see \addtocounter{Claim}{1} \begin{equation} \label{eqn:prop:estimate:lambda:chain:2} \lambda_{{\mathbb{Q}}}(M_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \leq \lambda_{{\mathbb{Q}}}(M'_n,\operatorname{\|\text{\textperiodcentered}\|}'_{n}) + \lambda_{{\mathbb{Q}}}(M_1,\operatorname{\|\text{\textperiodcentered}\|}_1) \operatorname{rk} M_1, \end{equation} so that we obtain the assertion combining \eqref{eqn:prop:estimate:lambda:chain:1} with \eqref{eqn:prop:estimate:lambda:chain:2}. \end{proof} \section{Normed graded ring} Let $k$ be a commutative ring with unity and $R = \bigoplus_{n=0}^{\infty} R_n$ a graded ring over $k$. Let $M$ be a $R$-module and $h$ a positive integer. We say $M$ is a {\em $h$-graded $R$-module} if $M$ has a decomposition $M = \bigoplus_{n=-\infty}^{\infty} M_n$ as $k$-modules and \[ x \in R_n,\ m \in M_{n'} \quad\Longrightarrow\quad x \cdot m \in M_{hn + n'} \] holds for all $n \in {\mathbb{Z}}_{\geq 0}$ and $n' \in {\mathbb{Z}}$. For example, if we set $R^{(h)} = \bigoplus_{n=0}^{\infty} R_{nh}$, then $R$ is a $h$-graded $R^{(h)}$-module. From now on, we assume that $k = {\mathbb{Z}}$ and $R_n$ (resp. $M_n$) is a finitely generated ${\mathbb{Z}}$-module for all $n \in {\mathbb{Z}}_{\geq 0}$ (resp. $n \in {\mathbb{Z}}$). Let ${\mathbb{K}}$ be either ${\mathbb{Q}}$ or ${\mathbb{R}}$. We set $R_{{\mathbb{K}}} = R \otimes_{{\mathbb{Z}}} {\mathbb{K}}$ and $M_{{\mathbb{K}}} = M \otimes_{{\mathbb{Z}}} {\mathbb{K}}$. Then \[ R_{{\mathbb{K}}} = \bigoplus_{n=0}^{\infty} (R_n)_{{\mathbb{K}}}\quad\text{and}\quad M_{{\mathbb{K}}} = \bigoplus_{n=-\infty}^{\infty} (M_n)_{{\mathbb{K}}}, \] where $(R_n)_{{\mathbb{K}}} = R_n \otimes_{{\mathbb{Z}}} {\mathbb{K}}$ and $(M_n)_{{\mathbb{K}}} = M_n \otimes_{{\mathbb{Z}}} {\mathbb{K}}$. Note that $R_{{\mathbb{K}}}$ is a graded ring over ${\mathbb{K}}$ and $M_{{\mathbb{K}}}$ is a $h$-graded $R_{{\mathbb{K}}}$-module. We say \[ (R,\operatorname{\|\text{\textperiodcentered}\|}) = \bigoplus_{n=0}^{\infty} (R_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \] is a {\em normed graded ring over ${\mathbb{Z}}$} if \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})}} \item $\operatorname{\|\text{\textperiodcentered}\|}_n$ is a norm of $(R_{n})_{{\mathbb{R}}}$ for each $n \in {\mathbb{Z}}_{\geq 0}$, and \item $\Vert s \cdot s' \Vert_{n+n'} \leq \Vert s \Vert_{n} \Vert s' \Vert_{n'}$ holds for all $s \in (R_{n})_{{\mathbb{R}}}$ and $s' \in (R_{n'})_{{\mathbb{R}}}$.
\end{enumerate} Similarly, \[ (M,\operatorname{\|\text{\textperiodcentered}\|}_M) = \bigoplus_{n=-\infty}^{\infty} (M_n,\operatorname{\|\text{\textperiodcentered}\|}_{M_n}) \] is called a {\em normed $h$-graded $(R,\operatorname{\|\text{\textperiodcentered}\|})$-module} if \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})'}} \item $\operatorname{\|\text{\textperiodcentered}\|}_{M_n}$ is a norm of $(M_{n})_{{\mathbb{R}}}$ for each $n \in {\mathbb{Z}}$, and \item $\Vert s \cdot m \Vert_{M_{hn+n'}} \leq \Vert s \Vert_{n} \Vert m \Vert_{M_{n'}}$ holds for all $s \in (R_{n})_{{\mathbb{R}}}$ and $m \in (M_{n'})_{{\mathbb{R}}}$. \end{enumerate} \begin{Proposition} \label{prop:normed:graded:structure:descent} Let $I$ be a homogeneous ideal of $R$ and $R' = R/I$. Let $f : M \to Q$ be a surjective homomorphism of $h$-graded $R$-modules of degree $0$, that is, $f(M_n) = Q_n$ for all $n \in {\mathbb{Z}}$. We set \[ (R',\operatorname{\|\text{\textperiodcentered}\|}') = \bigoplus_{n=0}^{\infty} (R'_n,\operatorname{\|\text{\textperiodcentered}\|}'_{n})\quad\text{and}\quad (Q,\operatorname{\|\text{\textperiodcentered}\|}_Q) = \bigoplus_{n=-\infty}^{\infty} (Q_n,\operatorname{\|\text{\textperiodcentered}\|}_{Q_n}), \] where $\operatorname{\|\text{\textperiodcentered}\|}'_n = \operatorname{\|\text{\textperiodcentered}\|}_{n,R_n \twoheadrightarrow R'_n}$ and $\operatorname{\|\text{\textperiodcentered}\|}_{Q_n} = \operatorname{\|\text{\textperiodcentered}\|}_{M_n, M_n \twoheadrightarrow Q_n}$. Then we have the following: \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})}} \item $(R',\operatorname{\|\text{\textperiodcentered}\|}')$ is a normed graded ring over ${\mathbb{Z}}$. \item If $I \cdot Q = 0$, then $(Q,\operatorname{\|\text{\textperiodcentered}\|}_Q)$ is naturally a normed $h$-graded $(R',\operatorname{\|\text{\textperiodcentered}\|}')$-module. \end{enumerate} \end{Proposition} \begin{proof} (1) We need to see that \[ \Vert x' \cdot y' \Vert'_{n+n'} \leq \Vert x' \Vert'_n \Vert y' \Vert'_{n'} \] for all $x' \in (R'_{n})_{{\mathbb{R}}}$ and $y' \in (R'_{n'})_{{\mathbb{R}}}$. Indeed, we choose $x \in (R_{n})_{{\mathbb{R}}}$ and $y \in (R_{n'})_{{\mathbb{R}}}$ such that the classes of $x$ and $y$ in $R'_{{\mathbb{R}}}$ are $x'$ and $y'$ respectively and that $\Vert x \Vert_n = \Vert x' \Vert'_{n}$ and $\Vert y \Vert_{n'} = \Vert y' \Vert'_{n'}$. Then, as the class of $x \cdot y$ in $R'_{{\mathbb{R}}}$ is $x' \cdot y'$, \[ \Vert x' \cdot y' \Vert'_{n+n'} \leq \Vert x \cdot y \Vert_{n+n'} \leq \Vert x \Vert_{n} \Vert y \Vert_{n'} = \Vert x' \Vert'_{n} \Vert y' \Vert'_{n'}. \] (2) It is sufficient to show that \[ \Vert x' \cdot q \Vert_{Q_{hn+n'}} \leq \Vert x' \Vert'_n \Vert q \Vert_{Q_{n'}} \] for all $x' \in (R'_{n})_{{\mathbb{R}}}$ and $q \in (Q_{n'})_{{\mathbb{R}}}$, which can be checked in the same way as in (1). \end{proof} Next let us observe the following lemma: \begin{Lemma} \label{lem:asym:R:asym:M} We assume the following: \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})}} \item $M_{{\mathbb{Q}}}$ is a finitely generated $R_{{\mathbb{Q}}}$-module, and $M_n = \{ 0 \}$ for $n < 0$. \item There are $A,e,\upsilon \in {\mathbb{R}}_{> 0}$ such that $\lambda_{{\mathbb{Q}}}(R_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \leq A n^e \upsilon^n$ for all $n \geq 1$.
\end{enumerate} Then there is $A' \in {\mathbb{R}}_{> 0}$ such that $\lambda_{{\mathbb{Q}}}(M_n,\operatorname{\|\text{\textperiodcentered}\|}_{M_n}) \leq A' n^e \upsilon^{n/h}$ for all $n \geq 1$. \end{Lemma} \begin{proof} For $n \geq 1$, we choose $s_{n,1}, \ldots, s_{n,r_n} \in R_n$ such that $s_{n,1}, \ldots, s_{n,r_n}$ form a basis of $(R_{n})_{{\mathbb{Q}}}$ and $\Vert s_{n,j} \Vert_n \leq A n^e \upsilon^n$ holds for all $j=1,\ldots, r_n$. Let $m_1, \ldots, m_l$ be homogeneous elements of $M_{{\mathbb{Q}}}$ such that $M_{{\mathbb{Q}}}$ is generated by $m_1, \ldots, m_l$ as a $R_{{\mathbb{Q}}}$-module. Let $a_i$ be the degree of $m_i$. Clearly we may assume that $m_i \in M_{a_i}$ by replacing $m_i$ with $b m_i$ ($b \in {\mathbb{Z}}_{>0}$). If $n > \max \{ a_1, \ldots, a_l \}$, then $(M_{n})_{{\mathbb{Q}}}$ is generated by elements of the form $s_{i,j} m_k$ with $i h + a_k = n$ and $i \geq 1$. We set \[ B = \max_{k=1, \ldots, l} \left\{ \frac{\Vert m_k \Vert_{M_{a_k}} \upsilon^{-a_k/h}}{h^e} \right\}. \] Note that $s_{i,j} m_k \in M_n$ and \begin{align*} \Vert s_{i,j} m_k \Vert_{M_n} & \leq \Vert s_{i,j} \Vert_i \Vert m_k \Vert_{M_{a_k}} \leq A i^e \upsilon^i \Vert m_k \Vert_{M_{a_k}} \\ & = A \left( \frac{n - a_k}{h} \right)^e \upsilon^{(n-a_k)/h}\Vert m_k \Vert_{M_{a_k}} \leq AB n^e \upsilon^{n/h} \end{align*} which means that $\lambda_{{\mathbb{Q}}}(M_n, \operatorname{\|\text{\textperiodcentered}\|}_{M_n}) \leq AB n^e \upsilon^{n/h}$ holds for all $n > \max \{ a_1, \ldots, a_l \}$, as required. \end{proof} As a consequence, we have the following proposition. \begin{Proposition} \label{prop:I:J:K:estimate} Let $I$, $J$ and $K$ be homogeneous ideals of $R$ such that $J \subseteq K$ and $I \cdot K \subseteq J$. We set $R' = R/I$ as before and $Q = K/J$. Let $\operatorname{\|\text{\textperiodcentered}\|}_{K_n} = \operatorname{\|\text{\textperiodcentered}\|}_{n,K_n \hookrightarrow R_n}$ and $\operatorname{\|\text{\textperiodcentered}\|}_{Q_n} = \operatorname{\|\text{\textperiodcentered}\|}_{K_n, K_n \twoheadrightarrow Q_n}$. If $R_{{\mathbb{Q}}}$ is noetherian and there are $A, e, \upsilon \in {\mathbb{R}}_{> 0}$ such that \[ \lambda_{{\mathbb{Q}}}(R'_n,\operatorname{\|\text{\textperiodcentered}\|}'_n) \leq A n^e \upsilon^n \] for all $n \geq 1$, then there is $A' \in {\mathbb{R}}_{> 0}$ such that \[ \lambda_{{\mathbb{Q}}}(Q_n,\operatorname{\|\text{\textperiodcentered}\|}_{Q_n}) \leq A' n^e \upsilon^{n}\] for all $n \geq 1$. \end{Proposition} \begin{proof} Obviously, $(K, \operatorname{\|\text{\textperiodcentered}\|}_K) = \bigoplus_{n=0}^{\infty} (K_n, \operatorname{\|\text{\textperiodcentered}\|}_{K_n})$ is a normed $1$-graded $(R, \operatorname{\|\text{\textperiodcentered}\|})$-module. Thus, by Proposition~\ref{prop:normed:graded:structure:descent}, $(Q, \operatorname{\|\text{\textperiodcentered}\|}_Q) =\bigoplus_{n=0}^{\infty} (Q_n, \operatorname{\|\text{\textperiodcentered}\|}_{Q_n})$ is also a normed $1$-graded $(R, \operatorname{\|\text{\textperiodcentered}\|})$-module. As $I \cdot Q = 0$, by Proposition~\ref{prop:normed:graded:structure:descent} again, $(Q, \operatorname{\|\text{\textperiodcentered}\|}_Q)$ is a normed $1$-graded $(R', \operatorname{\|\text{\textperiodcentered}\|}')$-module. Since $R_{{\mathbb{Q}}}$ is noetherian and $K_{{\mathbb{Q}}}$ is an ideal of $R_{{\mathbb{Q}}}$, $K_{{\mathbb{Q}}}$ is finitely generated as a $R_{{\mathbb{Q}}}$-module. Thus $Q_{{\mathbb{Q}}}$ is also finitely generated as a $R'_{{\mathbb{Q}}}$-module. 
Hence the assertion follows from Lemma~\ref{lem:asym:R:asym:M}. \end{proof} Finally note the following lemma, which will be used later. \begin{Lemma} \label{lem:R:finite:over:R:m} Let $R = \bigoplus_{n=0}^{\infty} R_n$ be a graded ring and $h$ a positive integer. If $R$ is noetherian, then $R^{(h)}$ is also noetherian and $R$ is a finitely generated $R^{(h)}$-module. \end{Lemma} \begin{proof} See \cite[Chap.~III, \S~1, $\text{n}^{\circ}$~3, Proposition~2 and its proof]{Bourbaki}. \end{proof} \section{Estimation of $\lambda_{{\mathbb{Q}}}$ for a normed graded ring} Let $X$ be a $d$-dimensional projective arithmetic variety, that is, $X$ is a $d$-dimensional projective and flat integral scheme over ${\mathbb{Z}}$, and let $L$ be an invertible sheaf on $X$. Let $R$ be a graded subring of $\bigoplus_{n=0}^{\infty} H^0(X, nL)$ over ${\mathbb{Z}}$. Such a graded ring $R$ is called a {\em graded subring of $L$}. For each $n$, we assign a norm $\operatorname{\|\text{\textperiodcentered}\|}_n$ to $(R_{n})_{{\mathbb{R}}}$ such that $(R,\operatorname{\|\text{\textperiodcentered}\|}) = \bigoplus_{n=0}^{\infty} (R_n,\operatorname{\|\text{\textperiodcentered}\|}_n)$ is a normed graded ring over ${\mathbb{Z}}$. For an ideal sheaf $\mathcal{I}$ of $X$, we set \[ \begin{cases} I_n(R;\mathcal{I}) = H^0(X, nL \otimes \mathcal{I}) \cap R_n,\\ I(R;\mathcal{I}) = \bigoplus_{n=0}^{\infty} I_n(R;\mathcal{I}),\\ R_{\mathcal{I}} = R/I(R;\mathcal{I}). \end{cases} \] Then $I(R;\mathcal{I})$ is a homogeneous ideal of $R$. Let $\operatorname{\|\text{\textperiodcentered}\|}_{(R_\mathcal{I})_n}$ be the quotient norm of $(R_{\mathcal{I}})_n$ induced by $R_n \twoheadrightarrow (R_{\mathcal{I}})_{n}$ and the norm $\operatorname{\|\text{\textperiodcentered}\|}_n$ of $R_n$. Let $Y$ be an arithmetic subvariety of $X$, that is, $Y$ is an integral closed subscheme flat over ${\mathbb{Z}}$, and $\mathcal{I}_Y$ the defining ideal sheaf of $Y$. Then, for simplicity, $R_{\mathcal{I}_Y}$, $\operatorname{\|\text{\textperiodcentered}\|}_{R_{\mathcal{I}_Y}}$, $(R_Y)_n$, and $\operatorname{\|\text{\textperiodcentered}\|}_{(R_Y)_n}$ are denoted by $R_Y$, $\operatorname{\|\text{\textperiodcentered}\|}_Y$, $R_{Y,n}$ and $\operatorname{\|\text{\textperiodcentered}\|}_{Y,n}$ respectively. Note that \[ \xymatrix{ R_{Y,n} \ar@{^{(}->}[r] & H^0(X,nL)/H^0(X, nL \otimes \mathcal{I}_Y) \ar@{^{(}->}[r] & H^0(Y, \rest{nL}{Y}). } \] Thus $R_Y$ is a graded subring of $\rest{L}{Y}$ and \[ R_{Y,n} \overset{\sim}{\longrightarrow} \operatorname{Image}(R_n \to H^0(Y, \rest{nL}{Y})). \] In particular, $R_Y$ is an integral domain. We denote the set of all arithmetic subvarieties of $X$ by $\Sigma_X$. The following theorem is the technical main theorem of this paper. \begin{Theorem} \label{thm:base:strictly:small:sec} Let $\upsilon : \Sigma_X \to {\mathbb{R}}_{> 0}$ be a map. For $(R, \operatorname{\|\text{\textperiodcentered}\|})$ and $\upsilon$, we assume the following: \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\arabic{enumi})}} \item $R_{{\mathbb{Q}}}$ is noetherian. \item For each $Y \in \Sigma_X$, there is $n_0 \in {\mathbb{Z}}_{>0}$ such that $(R_{Y,n})_{{\mathbb{Q}}} = H^0\left(Y_{{\mathbb{Q}}}, \rest{n L_{{\mathbb{Q}}}}{Y_{{\mathbb{Q}}}}\right)$ for all $n \geq n_0$. \item For each $Y \in \Sigma_X$, there are $n_1 \in {\mathbb{Z}}_{>0}$ and $s \in R_{Y,n_1} \setminus \{ 0 \}$ with $\Vert s \Vert_{Y, n_{1}} \leq \upsilon(Y)^{n_1}$. 
\end{enumerate} Then there are $B \in {\mathbb{R}}_{>0}$ and a finite subset $S$ of $\Sigma_X$ such that \[ \lambda_{{\mathbb{Q}}}(R_n, \operatorname{\|\text{\textperiodcentered}\|}_n) \leq B n^{d(d-1)/2} \left(\max\{ \upsilon(Y) \mid Y \in S \}\right)^n \] for all $n \geq 1$. \end{Theorem} \begin{proof} This theorem can be proved by techniques similar to those in \cite[Theorem~(4.2)]{ZhPos}. Let $D \in \Sigma_X$ and $\upsilon_D = \rest{\upsilon}{\Sigma_D}$, where $\Sigma_{D}$ is the set of all arithmetic subvarieties of $D$. Note that the conditions (1), (2) and (3) also hold for $(R_D, \operatorname{\|\text{\textperiodcentered}\|}_D)$ and $\upsilon_D$. Let us begin with the following claim. \begin{Claim} \label{claim:thm:base:strictly:small:sec:1} We may assume that there is a non-zero $s \in R_1$ with $\Vert s \Vert_1 \leq \upsilon(X)$. \end{Claim} We choose a positive integer $m$ and a non-zero section $s \in R_m$ with $\Vert s \Vert_m \leq \upsilon(X)^m$. Clearly the assumptions (1) and (2) of the theorem hold for $R^{(m)} = \bigoplus_{n=0}^{\infty} R_{mn}$. For $Y \in \Sigma_X$ we choose a positive integer $n_1$ and a non-zero $t \in R_{Y, n_1}$ with $\Vert t \Vert_{Y, n_1} \leq \upsilon(Y)^{n_1}$. Then $t^m \in R_{Y, mn_1} \setminus \{ 0\}$ and \[ \Vert t^m \Vert_{Y,mn_1} \leq (\Vert t \Vert_{Y,n_1})^m \leq (\upsilon(Y)^m)^{n_1}. \] Thus $(R^{(m)}, \operatorname{\|\text{\textperiodcentered}\|}^{(m)})$ and $\upsilon^m$ satisfy the assumption (3) of the theorem. Therefore, if the theorem holds for $(R^{(m)},\operatorname{\|\text{\textperiodcentered}\|}^{(m)})$ and $\upsilon^m$, then there are $B \in {\mathbb{R}}_{> 0}$ and a finite subset $S$ of $\Sigma_X$ such that \[ \lambda_{{\mathbb{Q}}}(R_{nm}, \operatorname{\|\text{\textperiodcentered}\|}_{nm}) \leq B n^{d(d-1)/2} \left(\max\{ \upsilon(Y)^m \mid Y \in S \}\right)^{n} \] for all $n \geq 1$. On the other hand, by Lemma~\ref{lem:R:finite:over:R:m}, $R_{{\mathbb{Q}}}$ is a finitely generated $R^{(m)}_{{\mathbb{Q}}}$-module. Thus, by Lemma~\ref{lem:asym:R:asym:M}, there is $B' \in {\mathbb{R}}_{> 0}$ such that \[ \lambda_{{\mathbb{Q}}}(R_{n}, \operatorname{\|\text{\textperiodcentered}\|}_{n}) \leq B' n^{d(d-1)/2} \left(\max\{ \upsilon(Y)^m \mid Y \in S\}\right)^{n/m} \] for all $n \geq 1$. Therefore the claim follows. \CQED \begin{Claim} \label{claim:thm:base:strictly:small:sec:2} The assertion of the theorem holds if $d=1$. \end{Claim} Since $R_n \overset{\cdot s}{\longrightarrow} R_{n+1}$ is injective, \[ \operatorname{rk} R_1 \leq \cdots \leq \operatorname{rk} R_n \leq \operatorname{rk} R_{n+1} \leq \cdots \leq \operatorname{rk} L. \] Thus there is a positive integer $n_0$ such that $R_{n_0} \overset{\cdot s^{n}}{\longrightarrow} R_{n_0 + n}$ yields an isomorphism over ${\mathbb{Q}}$. Hence, by (1) in Lemma~\ref{lem:iso:Q:comp:lambda}, \[ \lambda_{{\mathbb{Q}}}(R_{n+n_0}, \operatorname{\|\text{\textperiodcentered}\|}_{n+n_0}) \leq \Vert s \Vert_1^n \lambda_{{\mathbb{Q}}}(R_{n_0}, \operatorname{\|\text{\textperiodcentered}\|}_{n_0}) \leq \upsilon(X)^n \lambda_{{\mathbb{Q}}}(R_{n_0}, \operatorname{\|\text{\textperiodcentered}\|}_{n_0}), \] as required. \CQED We prove the theorem by induction on $d$. By Claim~\ref{claim:thm:base:strictly:small:sec:2}, we are done in the case $d=1$. Thus we assume $d > 1$. Let $\mathcal{I}$ be the ideal sheaf of ${\mathcal{O}}_X$ given by \[ \mathcal{I} = \operatorname{Image} \left( L^{-1} \overset{ \otimes s}{\longrightarrow} {\mathcal{O}}_X \right).
\] \begin{Claim} \label{claim:thm:base:strictly:small:sec:3} There is a sequence \[ \mathcal{I}_0 = \mathcal{I} \subsetneq \mathcal{I}_1 \subsetneq \cdots \subsetneq \mathcal{I}_m = {\mathcal{O}}_X \] of ideal sheaves and proper integral subschemes $D_1, \ldots, D_m$ of $X$ such that $ \mathcal{I}_{D_r} \cdot \mathcal{I}_{r} \subseteq \mathcal{I}_{r-1}$ for all $r=1,\ldots,m$, where $\mathcal{I}_{D_r}$ is the defining ideal sheaf of $D_r$. \end{Claim} It is standard. For example, we can show it by using \cite[Chapter~1, Proposition~7.4]{Hartshorne}. \CQED Let us fix a positive integer $n_1$ such that $(R_{n})_{{\mathbb{Q}}} = H^0(X_{{\mathbb{Q}}}, nL_{{\mathbb{Q}}})$ for all $n \geq n_1$. We set \[ \overline{R}_n = (R_n, \operatorname{\|\text{\textperiodcentered}\|}_n)\quad\text{and}\quad \overline{I}_n(R;\mathcal{I}_r) = (I_n(R;\mathcal{I}_r), \operatorname{\|\text{\textperiodcentered}\|}_{n,r}), \] where $\operatorname{\|\text{\textperiodcentered}\|}_{n,r} = \operatorname{\|\text{\textperiodcentered}\|}_{n, I_n(R;\mathcal{I}_r) \hookrightarrow R_n}$. Note that $\overline{R}_n = \overline{I}_n(R;\mathcal{I}_m)$. We would like to apply Proposition~\ref{prop:estimate:lambda:chain} to \addtocounter{Claim}{1} \begin{equation} \label{eqn:thm:base:strictly:small:sec:1} \begin{array}{ccccccc} \overline{R}_{n_1} & \overset{\cdot s}{\longrightarrow} & \overline{I}_{n_1+1}(R; \mathcal{I}_{0}) & \hookrightarrow \cdots \hookrightarrow & \overline{I}_{n_1+1}(R; \mathcal{I}_{r}) & \hookrightarrow \cdots \hookrightarrow & \overline{I}_{n_1+1}(R;\mathcal{I}_m) \\ & \overset{\cdot s}{\longrightarrow} & \overline{I}_{n_1+2}(R; \mathcal{I}_{0}) & \hookrightarrow \cdots \hookrightarrow & \overline{I}_{n_1+2}(R; \mathcal{I}_{r}) & \hookrightarrow \cdots \hookrightarrow & \overline{I}_{n_1+2}(R;\mathcal{I}_m) \\ & \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\ & \overset{\cdot s}{\longrightarrow} & \overline{I}_{n}(R; \mathcal{I}_{0}) & \hookrightarrow \cdots \hookrightarrow & \overline{I}_{n}(R; \mathcal{I}_{r}) & \hookrightarrow \cdots \hookrightarrow & \overline{I}_{n}(R;\mathcal{I}_m). \end{array} \end{equation} For this purpose, let us observe the following claim. \begin{Claim} \label{claim:thm:base:strictly:small:sec:4} \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\alph{enumi})}} \item Let $\operatorname{\|\text{\textperiodcentered}\|}_{n,r,\operatorname{quot}}$ be the quotient norm of $I_n(R;\mathcal{I}_r)/I_n(R;\mathcal{I}_{r-1})$ induced by $I_n(R;\mathcal{I}_r) \twoheadrightarrow I_n(R;\mathcal{I}_r)/I_n(R;\mathcal{I}_{r-1})$ and $\operatorname{\|\text{\textperiodcentered}\|}_{n,r}$ of $I_n(R;\mathcal{I}_r)$. Then, for each $1 \leq r \leq m$, there are $B_r \in {\mathbb{R}}_{> 0}$ and a finite subset $S_r$ of $\Sigma_X$ such that \begin{multline*} \hspace{3em} \lambda_{{\mathbb{Q}}}\left( I_n(R;\mathcal{I}_r)/I_n(R;\mathcal{I}_{r-1}), \ \operatorname{\|\text{\textperiodcentered}\|}_{n, r, \operatorname{quot}} \right) \\ \leq B_r n^{(d-1)(d-2)/2} \left(\max\{ \upsilon(Y) \mid Y \in S_r \}\right)^{n}. \end{multline*} for all $n \geq 1$. \item If we set \[ e_{n,r} = \max \{ 1, \operatorname{rk}(I_n(R;\mathcal{I}_r)/I_n(R;\mathcal{I}_{r-1})) \}, \] then there is $C_1 \in {\mathbb{R}}_{>0}$ such that $e_{n,r} \leq C_1 n^{d-2}$ for all $n \geq 1$ and $r=1, \ldots, m$. \item $\operatorname{rk} (I_n(R;\mathcal{I}_0)/R_{n-1} s) = 0$ for all $n \geq n_1 + 1$. 
\end{enumerate} \end{Claim} (a) If $D_r$ is vertical, then $I_n(R;\mathcal{I}_r)/I_n(R;\mathcal{I}_{r-1})$ is a torsion module for all $n \geq 0$. Thus the assertion is obvious. In this case, we can set $S_r = \{ X \}$ and $B_r = 1$. Otherwise, since $I(R;\mathcal{I}_{D_r}) \cdot I(R;\mathcal{I}_r) \subseteq I(R;\mathcal{I}_{r-1})$, the assertion follows from Proposition~\ref{prop:I:J:K:estimate} and the induction hypothesis. (b) Note that $I_n(R;\mathcal{I}_r)/I_n(R;\mathcal{I}_{r-1}) \hookrightarrow H^0(D_r, nL \otimes \mathcal{I}_r/\mathcal{I}_{r-1})$. (c) It follows from \[ (R_{n-1})_{{\mathbb{Q}}} s = H^0(X_{{\mathbb{Q}}}, (n-1)L_{{\mathbb{Q}}}) s = H^0(X_{{\mathbb{Q}}}, (nL \otimes \mathcal{I})_{{\mathbb{Q}}}) = I_n(R;\mathcal{I})_{{\mathbb{Q}}}. \] \CQED Using (c) in Claim~\ref{claim:thm:base:strictly:small:sec:4} and applying Proposition~\ref{prop:estimate:lambda:chain} to \eqref{eqn:thm:base:strictly:small:sec:1}, we obtain \begin{multline*} \lambda_{{\mathbb{Q}}}(R_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \\ \leq \sum_{i=n_1 + 1}^n \left( \sum_{r=1}^m \Vert s \Vert_1^{n-i} \lambda_{{\mathbb{Q}}}\left( I_i(R;\mathcal{I}_r)/I_i(R;\mathcal{I}_{r-1}), \ \operatorname{\|\text{\textperiodcentered}\|}_{i, r, \operatorname{quot}} \right) e_{i,r} \right) \\ + \Vert s \Vert_1^{n-n_1} \lambda_{{\mathbb{Q}}}(R_{n_1}, \operatorname{\|\text{\textperiodcentered}\|}_{n_1}) \operatorname{rk}(R_{n_1}) \end{multline*} for $n \geq n_1 + 1$. Hence, if we set $S = S_1 \cup \cdots \cup S_m \cup \{ X \}$, then, using (a) and (b) in Claim~\ref{claim:thm:base:strictly:small:sec:4}, the theorem follows. \end{proof} For homogeneous elements $s_1, \ldots, s_l$ of $R$, we define $\operatorname{Bs}_{{\mathbb{Q}}}(s_1, \ldots, s_l)$ to be \[ \operatorname{Bs}_{{\mathbb{Q}}}(s_1, \ldots, s_l) = \{ x \in X_{{\mathbb{Q}}} \mid s_1(x) = \cdots = s_l(x) = 0 \}. \] As an application of Theorem~\ref{thm:base:strictly:small:sec}, we have the following theorem. \begin{Theorem} \label{thm:base:free:noetherian:lambda} If $R_{{\mathbb{Q}}}$ is noetherian and there are homogeneous elements $s_1, \ldots, s_l \in R$ of positive degree such that $\operatorname{Bs}_{{\mathbb{Q}}}(s_1, \ldots, s_l) = \emptyset$, then there is a positive constant $B$ such that \[ \lambda_{{\mathbb{Q}}}(R_n, \operatorname{\|\text{\textperiodcentered}\|}_n) \leq B n^{d(d-1)/2}\left( \max \left\{ \Vert s_1 \Vert^{1/\deg(s_1)},\ldots, \Vert s_l \Vert^{1/\deg(s_l)}\right\}\right)^n, \] for all $n \geq 1$. \end{Theorem} \begin{proof} Let us begin with the following claim: \begin{Claim} \label{claim:thm:base:free:noetherian:lambda:1} We may assume that $R$ is generated by $R_1$ over $R_0$ and that $s_1, \ldots, s_l \in R_1$. \end{Claim} Since $R_{{\mathbb{Q}}}$ is noetherian, there are homogeneous elements $x_1, \ldots, x_r \in R_{{\mathbb{Q}}}$ such that $R_{{\mathbb{Q}}} = (R_{0})_{{\mathbb{Q}}}[x_1, \ldots, x_r]$ (cf. \cite[Chap.~III, \S~1, $\text{n}^{\circ}$~2, Corollaire]{Bourbaki}). Replacing $x_i$ with $m x_i$ ($m \in {\mathbb{Z}}_{> 0}$), we may assume that $x_i \in R$ for all $i$. We set \[ R' = R_0[x_1, \ldots, x_r, s_1, \ldots, s_l] \] in $R$. Then $R'_{{\mathbb{Q}}} = R_{{\mathbb{Q}}}$. As $R_{n}/R'_n$ is a torsion module, by (1) in Lemma~\ref{lem:iso:Q:comp:lambda}, we have $\lambda_{{\mathbb{Q}}}(R_n, \operatorname{\|\text{\textperiodcentered}\|}_n) \leq \lambda_{{\mathbb{Q}}}(R'_n,\operatorname{\|\text{\textperiodcentered}\|}_n)$ for all $n \geq 0$. Thus we may assume that $R$ is noetherian.
Therefore, there is a positive integer $h$ such that $R^{(h)}$ is generated by $R_h$ over $R_0$ (cf. \cite[Chap.~III, \S~1, $\text{n}^{\circ}$~3, Proposition~3]{Bourbaki}). Letting $a_i$ be the degree of $s_i$, we set $a = a_1 \cdots a_l$ and $s'_i = s_i^{ha_1 \cdots a_{i-1} a_{i+1} \cdots a_l}$ for each $i$. Then $s'_1, \ldots, s'_l \in R_{ah}$ and \[ \max \{ \Vert s'_1 \Vert, \ldots, \Vert s'_l \Vert\} \leq \left( \max \left\{ \Vert s_1 \Vert^{1/\deg(s_1)},\ldots, \Vert s_l \Vert^{1/\deg(s_l)}\right\}\right)^{ah}. \] Moreover, $R^{(ah)}$ is generated by $R_{ah}$ over $R_0$. Thus, as in Claim~\ref{claim:thm:base:strictly:small:sec:1}, by Lemma~\ref{lem:asym:R:asym:M} and Lemma~\ref{lem:R:finite:over:R:m}, we have the assertion. \CQED \begin{Claim} \label{claim:thm:base:free:noetherian:lambda:2} We may assume that $R_1$ is base point free, that is, $R_1 \otimes {\mathcal{O}}_X \to L$ is surjective. \end{Claim} Let $\mathcal{I}$ be the ideal sheaf of $X$ given by \[ \operatorname{Image}(R_1 \otimes {\mathcal{O}}_{X} \to L) = \mathcal{I} \cdot L. \] Let $\mu : X' \to X$ be the blowing-up with respect to $\mathcal{I}$. Then $\mathcal{I} \cdot {\mathcal{O}}_{X'}$ is invertible. Let $t$ be the canonical section of $(\mathcal{I} \cdot {\mathcal{O}}_{X'})^{-1}$, that is, ${\mathcal{O}}_{X'}(-\operatorname{div}(t)) = \mathcal{I} \cdot {\mathcal{O}}_{X'}$, and let $L' = \mathcal{I} \cdot \mu^*(L)$. Then, as $\left\langle (R_1)^n \right\rangle_{R_0} = R_n$, for $s \in R_n$, \[ \tilde{s} := \mu^*(s) \otimes t^{-n} \in H^0(X', nL'). \] It is easy to see the following properties: \[ \begin{cases} \widetilde{s_1 + s_2} = \widetilde{s_1} + \widetilde{s_2}, \ \widetilde{as} = a \tilde{s} & (s_1, s_2, s \in R_n, \ a \in {\mathbb{Z}}), \\ \widetilde{s_1 \cdot s_2} = \widetilde{s_1} \cdot \widetilde{s_2} & (s_1 \in R_n, \ s_2 \in R_{n'}). \end{cases} \] Let $\beta_n : R_n \to H^0(X', nL')$ be the homomorphism given by $\beta_n(s) = \tilde{s}$, and $R'_n = \beta_n(R_n)$. Then, by the above properties, \[ \bigoplus_{n=0}^{\infty} \beta_n : \bigoplus_{n=0}^{\infty} R_n \to \bigoplus_{n=0}^{\infty} R'_n \] yields a ring isomorphism. Let $\operatorname{\|\text{\textperiodcentered}\|}'_n$ be the norm of $(R'_n)_{{\mathbb{R}}}$ given by $\Vert\beta_n(s)\Vert'_n = \Vert s \Vert_n$ for $s \in (R_n)_{{\mathbb{R}}}$. Then \begin{multline*} \Vert \beta_n(s) \beta_{n'}(s') \Vert'_{n+n'} = \Vert \beta_{n+n'} (s s') \Vert'_{n+n'} = \Vert s s' \Vert_{n+n'} \\ \leq \Vert s \Vert_n \Vert s' \Vert_{n'} = \Vert \beta_n(s) \Vert'_n \Vert \beta_{n'}(s') \Vert'_{n'} \end{multline*} for all $s \in (R_n)_{{\mathbb{R}}}$ and $s' \in (R_{n'})_{{\mathbb{R}}}$. Thus $\bigoplus_{n=0}^{\infty}\beta_n$ extends to a ring isometry \[ \bigoplus_{n=0}^{\infty} (R_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \overset{\sim}{\longrightarrow} \bigoplus_{n=0}^{\infty} (R'_n,\operatorname{\|\text{\textperiodcentered}\|}'_n) \] as normed graded rings over ${\mathbb{Z}}$. Note that $R'_1 \otimes {\mathcal{O}}_{X'} \to L'$ is surjective. Hence the claim follows. \CQED \begin{Claim} \label{claim:thm:base:free:noetherian:lambda:3} We may assume that $L$ is very ample and $R_n = H^0(X, nL)$ for $n \gg 1$. \end{Claim} By Claim~\ref{claim:thm:base:free:noetherian:lambda:2}, \[ \left(\bigoplus_{n=0}^{\infty} R_n\right) \otimes {\mathcal{O}}_X \to \bigoplus_{n=0}^{\infty} nL \] is surjective, which gives rise to a morphism \[ \phi : X \to Z := \operatorname{Proj}\left( \bigoplus_{n=0}^{\infty} R_n\right) \] such that $\phi^*({\mathcal{O}}_Z(1)) = L$.
Note that $Z$ is a projective arithmetic variety. Moreover, there is a natural injective homomorphism $\alpha_n : R_n \to H^0(Z, {\mathcal{O}}_{Z}(n))$ such that $\phi_n^*(\alpha_n(s)) = s$ for all $s \in R_n$, where $\phi_n^*$ is the natural homomorphism $H^0(Z, {\mathcal{O}}_{Z}(n)) \to H^0(X, nL)$. If we set $R''_n = \alpha_n(R_n)$, then $\bigoplus_{n=0}^{\infty} \alpha_n$ yields a ring isomorphism \[ \bigoplus_{n=0}^{\infty} R_n \overset{\sim}{\longrightarrow} \bigoplus_{n=0}^{\infty} R''_n, \] so that, as in Claim~\ref{claim:thm:base:free:noetherian:lambda:2}, there are norms $\operatorname{\|\text{\textperiodcentered}\|}''_0, \ldots, \operatorname{\|\text{\textperiodcentered}\|}''_n, \ldots$ of $R''_0, \ldots, R''_n, \ldots$ such that \[ \bigoplus_{n=0}^{\infty} (R_n,\operatorname{\|\text{\textperiodcentered}\|}_n) \overset{\sim}{\longrightarrow} \bigoplus_{n=0}^{\infty} (R''_n,\operatorname{\|\text{\textperiodcentered}\|}''_n) \] as normed graded rings over ${\mathbb{Z}}$. Moreover, if we set $s''_i = \alpha_1(s_i)$, then $\phi^*(s''_i) = s_i$. Therefore, $\operatorname{Bs}_{{\mathbb{Q}}}(s''_1, \ldots, s''_l) = \emptyset$ on $Z_{{\mathbb{Q}}}$. Further, it is well known that $\alpha_n$ is an isomorphism for $n \gg 1$ (cf. \cite[the proof of Theorem~5.19 and Remark~5.19.2 in Chapter~II]{Hartshorne}). Hence the claim follows. \CQED Gathering the assertions of Claim~\ref{claim:thm:base:free:noetherian:lambda:1} and Claim~\ref{claim:thm:base:free:noetherian:lambda:3}, to prove the theorem, we may assume the following: \begin{enumerate} \renewcommand{\labelenumi}{\textup{(\alph{enumi})}} \item $s_1, \ldots, s_l \in R_1$ and $\operatorname{Bs}_{{\mathbb{Q}}}(s_1, \ldots, s_l) = \emptyset$. \item $L$ is very ample. \item $R_n = H^0(X, nL)$ for $n \gg 1$. \end{enumerate} Let $\upsilon : \Sigma_X \to {\mathbb{R}}_{> 0}$ be the constant map given by \[ \upsilon(Y) = \max \{ \Vert s_1 \Vert_1, \ldots, \Vert s_l \Vert_1 \} \] for $Y \in \Sigma_X$. Then $(R,\operatorname{\|\text{\textperiodcentered}\|})$ and $\upsilon$ satisfy the conditions (1), (2) and (3) of Theorem~\ref{thm:base:strictly:small:sec}. Hence the theorem follows. \end{proof} \begin{Corollary} \label{cor:base:point:free:small:sec} Let $\overline{L}$ be a continuous hermitian invertible sheaf on $X$. If there are a positive integer $n_0$ and $s_1, \ldots, s_l \in H^0(X, n_0L)$ such that $\operatorname{Bs}_{{\mathbb{Q}}}(s_1, \ldots, s_l) = \emptyset$, then there is $B \in {\mathbb{R}}_{>0}$ such that \[ \lambda_{{\mathbb{Q}}}(H^0(X,nL), \operatorname{\|\text{\textperiodcentered}\|}_{\sup}) \leq B n^{d(d-1)/2} \left( \max \{ \Vert s_1 \Vert_{\sup}, \ldots, \Vert s_l \Vert_{\sup} \} \right)^{n/n_0} \] for all $n \geq 1$. \end{Corollary} \begin{proof} By Theorem~\ref{thm:base:free:noetherian:lambda}, it is sufficient to show the following lemma. \end{proof} \begin{Lemma} Let $X$ be a projective variety over a field $k$ and $L$ an invertible sheaf on $X$. If there is a positive integer $m$ such that $mL$ is base point free, then $R = \bigoplus_{n=0}^{\infty} H^0(X, nL)$ is noetherian. \end{Lemma} \begin{proof} Since $mL$ is base point free, there are a projective variety $Z$, an ample invertible sheaf $A$ on $Z$ and a morphism $\phi : X \to Z$ such that $\phi^*(A) = mL$. As $A$ is ample, it is well known that if $F$ is a coherent sheaf on $Z$, then $R' = \bigoplus_{l=0}^{\infty} H^0(Z, lA)$ is noetherian and $\bigoplus_{l=0}^{\infty} H^0(Z, lA \otimes F)$ is a finitely generated $R'$-module.
Note that \begin{align*} R & = \bigoplus_{n=0}^{\infty} H^0(X, nL) = \bigoplus_{r=0}^{m-1} \left( \bigoplus_{l=0}^{\infty}H^0(X, (lm + r)L) \right) \\ & = \bigoplus_{r=0}^{m-1} \left( \bigoplus_{l=0}^{\infty}H^0(Z, l A \otimes \phi_*(rL)) \right). \end{align*} Therefore $R$ is noetherian because $R$ is a finitely generated $R'$-module. \end{proof} \begin{Remark} Theorem~\ref{thm:intro:A} and Corollary~\ref{cor:intro:B} in the introduction are consequences of Theorem~\ref{thm:base:free:noetherian:lambda} and Corollary~\ref{cor:base:point:free:small:sec} respectively together with Lemma~\ref{lem:lambda:lambda:prime}. The following examples show that base point freeness by strictly small sections is substantially crucial. \end{Remark} \begin{Example} \label{example:projective:line} Let ${\mathbb{P}}^1_{{\mathbb{Z}}} = \operatorname{Proj}({\mathbb{Z}}[X,Y])$ be the projective line over ${\mathbb{Z}}$ and ${\mathcal{O}}(1)$ the tautological invertible sheaf on ${\mathbb{P}}^1_{{\mathbb{Z}}}$. Then $H^0({\mathbb{P}}^1_{{\mathbb{Z}}}, {\mathcal{O}}(d))$ is naturally identified with ${\mathbb{Z}}[X, Y]_d$. Let $\beta, \gamma \in (0,1) (= \{ x \in {\mathbb{R}} \mid 0 < x < 1 \})$ and $\alpha := \beta^{1-(1/\gamma)} > 1$. For each $d \geq 0$, we give a continuous metric $|\cdot |_d$ of ${\mathcal{O}}(d)$ as follows: for $(x:y) \in {\mathbb{P}}^1_{{\mathbb{Z}}}({\mathbb{C}})$ and $s \in {\mathbb{C}}[X,Y]_d$, \[ | s |_{d}(x:y) = \frac{| s(x,y) |}{\left( \max \{ \alpha | x |, \beta | y | \} \right)^d}. \] We set $\overline{{\mathcal{O}}}(d) = ({\mathcal{O}}(d), | \cdot |_d)$. Note that $\overline{{\mathcal{O}}}(d) = \overline{{\mathcal{O}}}(1)^{\otimes d}$. Here we have the following: \begin{align} \addtocounter{Claim}{1} \label{eqn:example:projective:line:1} \left\langle \{ s \in H^0(X, {\mathcal{O}}(d)) \mid \Vert s \Vert_{\sup} < 1 \} \right\rangle_{{\mathbb{Z}}} & = \bigoplus_{ d \gamma < i \leq d} {\mathbb{Z}} X^i Y^{d-i}, \\ \addtocounter{Claim}{1} \label{eqn:example:projective:line:2} \left\langle \{ s \in H^0(X, {\mathcal{O}}(d)) \mid \Vert s \Vert_{\sup} \leq 1 \} \right\rangle_{{\mathbb{Z}}} & = \bigoplus_{ d \gamma \leq i \leq d} {\mathbb{Z}} X^i Y^{d-i}. \end{align} \begin{proof} Indeed, by a straightforward calculation, \[ \Vert X^i Y^{d-i} \Vert_{\sup} = \frac{1}{\alpha^i\beta^{d-i}} \] for $0 \leq i \leq d$. Thus $X^iY^{d-i}$ is a strictly small section for $i$ with $d \gamma < i \leq d$ because $\alpha^i\beta^{d-i} > 1$. On the other hand, for $s = \sum_{i=0}^d a_i X^i Y^{d-i} \in {\mathbb{C}}[X,Y]_d$, we can see \begin{align*} \Vert s \Vert_{\sup} & \geq \sup\left\{ | s |_d(z:1) \mid | z | = \frac{\beta}{\alpha} \right\} = \frac{1}{\beta^d} \sup \left\{ | s(z,1) | \mid | z | = \frac{\beta}{\alpha} \right\} \\ & \geq \frac{1}{\beta^d} \sqrt{ \int_0^{1} \left| s\left(\left(\frac{\beta}{\alpha}\right)e^{2\pi\sqrt{-1}\theta},1\right) \right|^2 d\theta } \\ & = \frac{1}{\beta^d} \sqrt{\sum_{0 \leq i,j \leq d} \int_0^{1} a_i \bar{a}_j \left(\frac{\beta}{\alpha}\right)^{i+j} e^{2\pi\sqrt{-1}(i-j)\theta}d\theta} \\ & = \sqrt{\sum_{i=0}^d \left( \frac{| a_i |}{\alpha^i\beta^{d-i}} \right)^2}. \end{align*} Thus, if $s = \sum_{i=0}^d a_i X^i Y^{d-i} \in {\mathbb{Z}}[X,Y]_d$ is a strictly small section, then $a_j = 0$ for $j$ with $0 \leq j \leq d\gamma$ because $\alpha^j\beta^{d-j} \leq 1$. These observations yield \eqref{eqn:example:projective:line:1}. Similarly we obtain \eqref{eqn:example:projective:line:2}. 
\end{proof} \end{Example} \begin{Example} \label{example:projective:plane} Let ${\mathbb{P}}^2_{{\mathbb{Z}}} = \operatorname{Proj}({\mathbb{Z}}[X,Y,Z])$ be the projective plane over ${\mathbb{Z}}$ and ${\mathcal{O}}(1)$ the tautological invertible sheaf on ${\mathbb{P}}^2_{{\mathbb{Z}}}$. Let $\Delta$ be the arithmetic subvariety of ${\mathbb{P}}^2_{{\mathbb{Z}}}$ given by the homogeneous ideal $Y {\mathbb{Z}}[X,Y,Z] + Z {\mathbb{Z}}[X,Y,Z]$. Let $\mu : X \to {\mathbb{P}}^2_{{\mathbb{Z}}}$ be the blowing-up along $\Delta$ and $E$ the exceptional divisor of $\mu$. Note that $E$ is a Cartier divisor. We set $L = \mu^*({\mathcal{O}}(1)) + {\mathcal{O}}_X(E)$ and $R = \bigoplus_{n=0}^{\infty} H^0(X, nL)$. Since $\mu_*(nL) = {\mathcal{O}}(n)$ for all $n \in {\mathbb{Z}}_{\geq 0}$, the natural ring homomorphism \[ \mu^*: \bigoplus_{n=0}^{\infty} H^0({\mathbb{P}}^2_{{\mathbb{Z}}}, {\mathcal{O}}(n)) \longrightarrow R \] yields a ring isomorphism, and \[ \{ x \in X \mid \text{$s(x) = 0$ for all $s \in H^0(X, nL)$}\} = E \] for $n \in {\mathbb{Z}}_{>0}$. Here we give a metric $\vert\cdot\vert_{FS}$ of ${\mathcal{O}}(1)$ in the following way: for $s \in H^0({\mathbb{P}}^2_{{\mathbb{C}}}, {\mathcal{O}}(1)) = {\mathbb{C}}[X,Y,Z]_1$ and $(x:y:z) \in {\mathbb{P}}^2({\mathbb{C}})$, \[ \vert s \vert_{FS} (x:y:z) = \frac{\vert s (x,y,z) \vert}{\sqrt{\vert x \vert^2 + \vert y \vert^2 + \vert z \vert^2}}. \] We set $\overline{{\mathcal{O}}}(n) = ({\mathcal{O}}(1), \vert\cdot\vert_{FS})^{\otimes n}$. Then it is easy to check that $\Vert X^i Y^j Z^k \Vert_{\sup} \leq 1$ for all $n > 0$ and $i,j,k \in {\mathbb{Z}}_{\geq 0}$ with $i+j+k= n$. Let $t$ be the canonical section of ${\mathcal{O}}_X(E)$. We choose a $C^{\infty}$-metric $\vert\cdot\vert_E$ of ${\mathcal{O}}_X(E)$ such that $\Vert t \Vert_{\sup} < 1$, and set \[ \overline{L} = \mu^*(\overline{{\mathcal{O}}}(1)) + ({\mathcal{O}}_X(E), \vert\cdot\vert_E). \] Then $\Vert \mu^*(X^i Y^j Z^k) \otimes t^{\otimes n} \Vert _{\sup} < 1$ for all $n > 0$ and $i,j,k \in {\mathbb{Z}}_{\geq 0}$ with $i+j+k= n$. As a consequence, $R_n$ has a non-empty base locus, but possesses a free basis consisting of strictly small sections. However, in this example, the free basis comes from the base point free ${\mathbb{Z}}$-module $H^0({\mathbb{P}}^2_{{\mathbb{Z}}}, {\mathcal{O}}(n))$. \end{Example} \section{Variants of arithmetic Nakai-Moishezon's criterion} Let $X$ be a projective arithmetic variety and $Y$ an arithmetic subvariety of $X$. Let $\overline{L}$ be a continuous hermitian invertible sheaf on $X$. We denote \[ \operatorname{Image}(H^0(X, L) \to H^0(Y, \rest{L}{Y})) \] by $H^0(X|Y, L)$. Let $\Vert\cdot\Vert_{\sup,\operatorname{quot}}^{X|Y}$ be the quotient norm of $H^0(X|Y, L) \otimes_{{\mathbb{Z}}} {\mathbb{R}}$ induced by \[ H^0(X, L) \otimes_{{\mathbb{Z}}} {\mathbb{R}} \twoheadrightarrow H^0(X|Y, L) \otimes_{{\mathbb{Z}}} {\mathbb{R}} \] and the norm $\Vert\cdot\Vert_{\sup}$ on $H^0(X, L) \otimes_{{\mathbb{Z}}} {\mathbb{R}}$. As in \cite{MoArLin}, we define $\widehat{\operatorname{vol}}_{\operatorname{quot}}(X|Y, \overline{L})$ to be \[ \widehat{\operatorname{vol}}_{\operatorname{quot}}(X|Y, \overline{L}) := \limsup_{m\to\infty} \frac{\log \# \left\{ s \in H^0(X|Y, mL) \mid \Vert s \Vert_{\sup,\operatorname{quot}}^{X|Y} \leq 1 \right\} }{m^{\dim Y}/(\dim Y)!}. \] Then we have the following variants of arithmetic Nakai-Moishezon's criterion.
Theorem~\ref{thm:Nakai:Moishezon:2} is a slight generalization of the original criterion due to Zhang \cite{ZhPos}, that is, we do not assume that $L_{{\mathbb{Q}}}$ is ample. \begin{Theorem} \label{thm:Nakai:Moishezon:1} If $\widehat{\operatorname{vol}}_{\operatorname{quot}}(X|Y, \overline{L}) > 0$ for all arithmetic subvarieties $Y$ of $X$, then $L_{{\mathbb{Q}}}$ is ample and there is a positive integer $n_0$ such that, for all $n \geq n_0$, $H^0(X, nL)$ has a free ${\mathbb{Z}}$-basis consisting of strictly small sections. \end{Theorem} \begin{proof} First of all, note that \[ \widehat{\operatorname{vol}}\left(Y, \rest{\overline{L}}{Y}\right) \geq \widehat{\operatorname{vol}}_{\operatorname{quot}}(X|Y, \overline{L}) > 0 \] for all arithmetic subvarieties $Y$ of $X$. In particular, by \cite[Corollary~2.4]{Yuan} or \cite[Theorem~4.6]{MoCont}, $\rest{L_{{\mathbb{Q}}}}{Y_{{\mathbb{Q}}}}$ is big. Thus, by algebraic Nakai-Moishezon's criterion, $L_{{\mathbb{Q}}}$ is ample. Let us consider a normed graded ring \[ (R,\operatorname{\|\text{\textperiodcentered}\|}) = \bigoplus_{n \in {\mathbb{Z}}_{\geq 0}} (H^0(X, nL), \operatorname{\|\text{\textperiodcentered}\|}_{\sup}). \] As $L_{{\mathbb{Q}}}$ is ample, $R$ satisfies the conditions (1) and (2) of Theorem~\ref{thm:base:strictly:small:sec}. Moreover, if we take a sufficiently small positive number $\epsilon$, then \[ \widehat{\operatorname{vol}}_{\operatorname{quot}}(X|Y, \overline{L} - \overline{{\mathcal{O}}}(\epsilon)) > 0 \] by \cite[(2) in Proposition~6.1]{MoArLin}, which means that we can choose a map $\upsilon : \Sigma_X \to {\mathbb{R}}_{> 0}$ such that $\upsilon(Y) < 1$ for all $Y \in \Sigma_X$ and the condition (3) of Theorem~\ref{thm:base:strictly:small:sec} holds for $(R,\operatorname{\|\text{\textperiodcentered}\|})$ and $\upsilon$. Thus the last assertion follows. \end{proof} \begin{Theorem} \label{thm:Nakai:Moishezon:2} We assume that $X$ is generically smooth, the metric of $\overline{L}$ is $C^{\infty}$, $L$ is nef on every fiber of $X \to \operatorname{Spec}({\mathbb{Z}})$ and that the first Chern form $c_1(\overline{L})$ is semipositive on $X({\mathbb{C}})$. If $\widehat{\operatorname{deg}}\left((\rest{\hat{c}_1(\overline{L})}{Y})^{\dim Y}\right) > 0$ for all arithmetic subvarieties $Y$ of $X$, then $L_{{\mathbb{Q}}}$ is ample and there is a positive integer $n_0$ such that, for all $n \geq n_0$, $H^0(X, nL)$ has a free ${\mathbb{Z}}$-basis consisting of strictly small sections. \end{Theorem} \begin{proof} By virtue of the Generalized Hodge index theorem \cite[Theorem~6.2]{MoCont}, \[ \widehat{\operatorname{vol}}\left(Y, \rest{\overline{L}}{Y}\right) \geq \widehat{\operatorname{deg}}\left((\rest{\hat{c}_1(\overline{L})}{Y})^{\dim Y}\right) > 0. \] Thus $L_{{\mathbb{Q}}}$ is ample as in the proof of Theorem~\ref{thm:Nakai:Moishezon:1}. In particular, if we set \[ (R, \operatorname{\|\text{\textperiodcentered}\|}) = \bigoplus_{n \geq 0} (H^0(X, nL), \operatorname{\|\text{\textperiodcentered}\|}_{\sup}), \] then $R$ satisfies the conditions (1) and (2) of Theorem~\ref{thm:base:strictly:small:sec}. As $\widehat{\operatorname{vol}}\left(Y, \rest{\overline{L}}{Y}\right) > 0$, we can find a non-zero strictly small section $s$ of $\rest{n_1L}{Y}$ for some positive integer $n_1$. By \cite[Theorem~3.3 and Theorem~3.5]{ZhPos}, there are a positive integer $n_2$ and $s' \in H^0(X, n_2n_1L) \otimes {\mathbb{R}}$ with $\rest{s'}{Y} = s^{\otimes n_2}$ and $\Vert s' \Vert_{\sup} < 1$. 
Thus the map $\upsilon : \Sigma_X \to {\mathbb{R}}_{> 0}$ given by \[ \upsilon(Y) = \left( \Vert s' \Vert_{\sup}\right)^{1/(n_1 n_2)} \] satisfies the condition (3) of Theorem~\ref{thm:base:strictly:small:sec}. Hence the theorem follows from Theorem~\ref{thm:base:strictly:small:sec}. \end{proof}
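\begin{Remark}
As a simple illustration of Theorem~\ref{thm:base:free:noetherian:lambda} (a sketch only, using nothing beyond the norms already computed in Example~\ref{example:projective:line}; the constants below are not claimed to be optimal), consider $R = \bigoplus_{n=0}^{\infty} H^0({\mathbb{P}}^1_{{\mathbb{Z}}}, {\mathcal{O}}(n)) = {\mathbb{Z}}[X,Y]$ equipped with the sup norms arising from the metrics $\vert\cdot\vert_n$ of Example~\ref{example:projective:line}, and take $s_1 = X$, $s_2 = Y \in R_1$, so that $\operatorname{Bs}_{{\mathbb{Q}}}(s_1, s_2) = \emptyset$. Since $\Vert X \Vert_{\sup} = 1/\alpha < 1$, $\Vert Y \Vert_{\sup} = 1/\beta > 1$ and $d = \dim {\mathbb{P}}^1_{{\mathbb{Z}}} = 2$, the theorem yields
\[
\lambda_{{\mathbb{Q}}}(R_n, \operatorname{\|\text{\textperiodcentered}\|}_{\sup}) \leq B n \beta^{-n}
\]
for some $B \in {\mathbb{R}}_{>0}$. This is consistent with the direct estimate coming from the monomial basis: $\Vert X^i Y^{n-i} \Vert_{\sup} = \alpha^{-i}\beta^{-(n-i)} \leq \beta^{-n}$, so that in fact $\lambda_{{\mathbb{Z}}}(R_n, \operatorname{\|\text{\textperiodcentered}\|}_{\sup}) \leq \beta^{-n}$. Note that $\max\{ \Vert s_1 \Vert^{1/\deg(s_1)}, \Vert s_2 \Vert^{1/\deg(s_2)} \} = \beta^{-1} > 1$ here, so Theorem~\ref{thm:base:free:noetherian:lambda} alone does not produce strictly small sections in this case, in accordance with \eqref{eqn:example:projective:line:1}.
\end{Remark}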
\section{Introduction} Max--stable processes have been studied extensively in the past 30 years. The works of Balkema and Resnick~\cite{balkema77max}, de Haan~\cite{dehaan78characterization,dehaan84spectral}, de Haan and Pickands~\cite{dehaan86stationary}, Gin\'e {\it et al.}~\cite{gine90max} and Resnick and Roy~\cite{resnick91random}, among many others, have led to a wealth of knowledge on max--stable processes. The seminal works of de Haan \cite{dehaan84spectral} and de Haan and Pickands \cite{dehaan86stationary} laid the foundations of the spectral representations of max--stable processes and established important structural results for stationary max--stable processes. Since then, however, while many authors focused on various important aspects of max--stable processes, the general theory of their representation and structural properties has not been thoroughly explored. At the same time, the structure and the classification of sum--stable processes have been vigorously studied. Rosi\'nski \cite{rosinski95structure}, building on the seminal works of Hardin \cite{hardin81isometries,hardin82spectral} about minimal representations, developed the important connection between stationary sum--stable processes and flows. This led to a number of important contributions on the structure of sum--stable processes (see, e.g.\ \cite{rosinski96classes,rosinski00decomposition,pipiras02structure,pipiras04stable,samorodnitsky05null}). There are relatively few results of this nature about the structure of max--stable processes, with the notable exceptions of de Haan and Pickands \cite{dehaan86stationary}, Davis and Resnick \cite{davis93prediction} and the very recent works of Kabluchko {\it et al.}~\cite{kabluchko08stationary} and Kabluchko~\cite{kabluchko08spectral}. Our goal here is to develop representation and classification theory for max--stable processes, similar to the one available for sum--stable processes. We are motivated by the strong similarities between the spectral representations of sum-- and max--stable processes. This procedure, however, is non--trivial. The notion of {\it minimal extremal integral} representation plays a key role, as does the {\it minimal integral representation} for $\alpha$--stable processes (see Hardin~\cite{hardin82spectral} and Rosi\'nski \cite{rosinski95structure,rosinski06minimal}). Before one can fruitfully handle the {\it minimal extremal integral} representations, it turns out that one should first thoroughly investigate the structure of max--linear isometries, also known as the {\it pistons} of de Haan and Pickands~\cite{dehaan86stationary}. We refine and extend their work in Section~\ref{sec:maxLinear}. In Section \ref{sec:minimal}, we develop the theory of minimal representations for max--stable processes. Our approach is motivated by the works of Hardin \cite{hardin82spectral} and Rosi\'nski \cite{rosinski95structure} in the sum--stable context. In Section \ref{sec:classification}, we establish general classification results for max--stable processes by using the developed theory of {\it minimal spectral representations}. In Section~\ref{sec:continuousDiscreteDecomposition}, we first show that essentially any max--stable process can be represented uniquely as the maximum of two independent components, characterized as {\it spectrally continuous} and {\it spectrally discrete}, respectively.
The spectrally discrete part gives rise to the notion of {\it discrete principal components}, which may be of independent interest in modeling of max--stable processes and fields. In Section~\ref{sec:cospectral}, we introduce the notion of {\it co--spectral functions}, for the large class of measurable max--stable processes $X = \indt X$. There $T$ is a separable metric space equipped with the Borel--$\sigma$--algebra and a $\sigma$--finite measure. The co--spectral functions of such processes are invariant to the choice of the spectral representations, up to a multiplicative factor. This allows us to develop a general strategy for the classification of measurable $\alpha$--Fr\'echet processes, based on positive cones of co--spectral functions. As particular examples, we obtain the {\it conservative--dissipative} and {\it positive--null} decompositions, which correspond to certain choices of cones for the co--spectral functions. Section \ref{sec:stationary} is devoted to the classification of stationary max--stable processes. As in the sum--stable case, the minimal representations allow us to associate a measurable non--singular flow to every measurable stationary max--stable process. This correspondence enables one to apply existing ergodic theory results about the flow to characterize the max--stable process. The conservative--dissipative and positive--null decompositions introduced in Examples \ref{sec:cons-diss} and \ref{sec:pos-null} are in fact motivated by the corresponding decompositions of the underlying flow. These two results are in close correspondence with the classifications of Rosi\'nski \cite{rosinski95structure} and Samorodnitsky \cite{samorodnitsky05null} for sum--stable processes. As in Rosi\'nski \cite{rosinski95structure}, we obtain that the class of stationary max--stable processes generated by dissipative flows is precisely the class of mixed moving maxima. In Section~\ref{sec:BRp}, we apply the results in Section~\ref{sec:stationary} to Brown--Resnick processes. We give simple necessary and sufficient conditions for a generalized Brown--Resnick stationary process to be a mixed moving maxima. This extends and complements the recent results of Kabluchko {\it et al.}~\cite{kabluchko08stationary}. In fact, as a by--product, by combining our results and those in \cite{kabluchko08stationary}, we obtain an interesting fact about general zero--mean Gaussian processes $W=\{W_t\}_{t\in {\mathbb R}}$ with stationary increments and continuous paths. Namely, for such processes, we have that, with probability one, $$ \lim_{|t|\to\infty} {\Big(} W_t -{\rm Var}(W_t)/2 {\Big)} = -\infty \ \mbox{ {\it implies} }\ \int_{{\mathbb R}} \exp\{ W_t - {\rm Var}(W_t)/2\} dt < \infty. $$ In particular, we show that if $\indtr W$ is a fractional Brownian motion, then the generated Brown--Resnick process is a mixed moving maxima. We conclude Section~\ref{sec:BRp} with some open questions. Some proofs and auxiliary results are given in the Appendix. Part of our results in Sections~\ref{sec:classification} and~\ref{sec:stationary} are modifications and extensions of results of de Haan and Pickands~\cite{dehaan86stationary}. The main difference is that we provide a complete treatment of the measurability issue, when the processes are continuously indexed. Before we proceed with the more technical preliminaries, we are obliged to mention the recent work of Kabluchko \cite{kabluchko08spectral}. 
In this exciting contribution, the author establishes some very similar classification results by using an {\it association device} between max-- and sum--stable processes. This association allows one to transfer existing classifications of sum--stable processes to the max--stable domain. It also clarifies the connection between these two classes of processes. Our results were obtained independently and by using rather different technical tools. The combination of the two approaches provides a clearer picture of the structure of max-- and sum--stable processes as well as their interplay. \section{Preliminaries}\label{sec:prelim} The importance of max--stable processes stems from the fact that they arise in the limit of the component--wise maxima of independent processes. It is well known that the univariate marginals of a max--stable process are necessarily extreme value distributions, i.e.\ up to rescaling and shift they are either Fr\'echet, Gumbel or negative Fr\'echet. The dependence structure of the max--stable processes, however, can be quite intricate and it does not hinge on the extreme value type of the marginal distributions (see e.g.\ Proposition 5.11 in Resnick \cite{resnick87extreme}). Therefore, for convenience and without loss of generality we will focus here on max--stable processes with Fr\'echet marginal distributions. Recall that a positive random variable $Z\ge 0$ has $\alpha$--Fr\'echet distribution, $\alpha>0$, if \[ \,\mathbb P(Z\leq x) = \exp\{-\sigma^\alpha x^{-\alpha}\}\,,x\in(0,\infty)\,. \] Here $\left\| Z \right\|_\alpha \mathrel{\mathop:}= \sigma>0$ stands for the \textit{scale coefficient} of $Z$. It turns out that a stochastic process $\indt X$ with $\alpha$--Fr\'echet marginals is max--stable {\it if and only if} all positive {\it max--linear combinations}: \begin{equation}\label{e:max-lin} \max_{1\leq j\leq n}a_jX_{t_j} \equiv \bigvee_{1\leq j\leq n}a_jX_{t_j}\ \ \ \forall a_j> 0,\ t_j\in T,\ 1\le j\le n, \end{equation} are $\alpha$--Fr\'echet random variables (see de Haan~\cite{dehaan78characterization} and e.g.\ \cite{stoev06extremal}). This feature resembles the definition of Gaussian or, more generally, symmetric $\alpha$--stable (sum--stable) processes, where all finite--dimensional linear combinations are univariate Gaussian or symmetric $\alpha$--stable, respectively (see e.g.\ \cite{samorodnitsky94stable}). We shall therefore refer to the max--stable processes with $\alpha$--Fr\'echet marginals as {\it $\alpha$--Fr\'echet processes}. The seminal work of de Haan \cite{dehaan84spectral} provides convenient {\it spectral representations} for stochastically continuous $\alpha$--Fr\'echet processes in terms of functionals of Poisson point processes on $(0,1)\times (0,\infty)$. Here, we adopt the slightly more general, but essentially equivalent, approach of representing max--stable processes through extremal integrals with respect to random sup--measures (see Stoev and Taqqu \cite{stoev06extremal}). We do so in order to emphasize the analogies with the well--developed theory of sum--stable processes (see e.g.\ Samorodnitsky and Taqqu \cite{samorodnitsky94stable}). \begin{Def} \label{def:M_alpha} Consider a measure space $(S,{\cal S},\mu)$ and suppose $\alpha>0$.
A stochastic process $\{M_\alpha(A)\}_{A\in {\cal S}}$, indexed by the measurable sets $A\in {\cal S}$, is said to be an \textit{$\alpha$--Fr\'echet random sup--measure} with \textit{control measure} $\mu$, if the following conditions hold:\\ \itemnumber i the $M_\alpha(A_i)$'s are independent for disjoint $A_i\in {\cal S},\ 1\le i\le n.$\\ \itemnumber {ii} $M_\alpha(A)$ is $\alpha$--Fr\'echet with scale coefficient $\|M_\alpha(A)\|_\alpha= \mu(A)^{1/\alpha}$.\\ \itemnumber{iii} for all disjoint $A_i$'s, $i\in {\mathbb N}$, we have $M_\alpha(\cup_{i\in {\mathbb N}} A_i) = \bigvee_{i\in {\mathbb N}} M_\alpha(A_i),$ almost surely. \end{Def} \noindent Now, given an $\alpha$--Fr\'echet random sup--measure $M_\alpha$ as above, one can define the {\it extremal integral} of a non--negative simple function $f(u):= \sum_{i=1}^n a_i 1_{A_i}(u) \ge 0,\ A_i\in {\cal S}$: $$ \Eint{S} fdM_\alpha \equiv \Eint{S} f(u) M_\alpha(du) := \bigvee_{1\le i\le n} a_i M_\alpha(A_i). $$ The resulting extremal integral is an $\alpha$--Fr\'echet random variable with scale coefficient $(\int_S f^\alpha d\mu)^{1/\alpha}$. The definition of $\eint{S}fdM_\alpha$ can, by continuity in probability, be naturally extended to integrands $f$ in the space $$ L_+^\alpha(S,\mu):={\Big\{}f:S\to {\mathbb R}_+\, :\, \mbox{ $f$ measurable with }\int_S f^\alpha d\mu <\infty{\Big\}}. $$ It turns out that the random variables $\xi_j:=\eint{S}f_j dM_\alpha,\ 1\le j\le n$ are independent if and only if the $f_j$'s have pairwise disjoint supports (mod $\mu$). Furthermore, the extremal integral is {\it max--linear}: $$ \Eint{S} (a f\vee b g) d M_\alpha = a \Eint{S} f dM_\alpha \vee b \Eint{S} g d M_\alpha, $$ for all $a,b>0$ and $f,g \in L_+^\alpha(S,\mu).$ For more details, see Stoev and Taqqu \cite{stoev06extremal}. Now, for any collection of deterministic functions $\indt f\subset L^\alpha_+(S,\mu)$, one can construct the stochastic process: \begin{equation}\label{eq:xtsft} X_t = \int^{\!\!\!\!\!\!\!e}_S f_t(u)M_\alpha(du)\,,\ \ \forall t\in T\,. \end{equation} In view of the max--linearity of the extremal integrals and \eqref{e:max-lin}, the resulting process $X=\{X_t\}_{t\in T}$ is $\alpha$--Fr\'echet. Furthermore, for any $n\in\mathbb N,\ x_i>0,\ t_i\in T,\ 1\le i \le n$: \begin{equation}\label{eq:xt1a1} \,\mathbb P\{ X_{t_1}\leq x_1,\dots,X_{t_n}\leq x_n \} = \exp{\Big\{}-\int_S\Big(\vee_{1\leq i\leq n} x_i^{-1} f_{t_i}(u)\Big)^\alpha \mu(du) {\Big\}}. \end{equation} This shows that the deterministic functions $\indt f$ characterize completely the finite--dimensional distributions of the process $\indt X$. In general, if \begin{equation}\label{rep:extremalRep} \{X_t\}_{t\in T} \stackrel{\rm d}{=} {\Big\{}\Eint{S} f_t dM_\alpha {\Big\}}_{t\in T}, \end{equation} for some $\indt f \subset L_+^\alpha(S,\mu)$, we shall say that the process $X=\{X_t\}_{t\in T}$ has the {\it extremal integral} or {\it spectral representation} $\{f_t\}_{t\in T}$ over the space $L_+^\alpha(S,\mu)$. The $f_t$'s in \eqref{rep:extremalRep} are also referred to as the \textit{spectral functions} of $X$. {\it Our goal in this paper is to characterize $\alpha$--Fr\'echet processes in terms of their spectral representations.} Many $\alpha$--Fr\'echet processes of practical interest have tractable spectral representations.
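To fix ideas, note that taking $n=1$ in \eqref{eq:xt1a1} gives $\,\mathbb P\{X_t\le x\} = \exp\{-x^{-\alpha}\int_S f_t^\alpha d\mu\}$, $x>0$, so that $\|X_t\|_\alpha = (\int_S f_t^\alpha d\mu)^{1/\alpha}$, and, by the max--linearity of the extremal integral, \[ {\Big\|}\bigvee_{1\le i\le n} a_i X_{t_i}{\Big\|}_\alpha^\alpha = \int_S\Big(\bigvee_{1\le i\le n} a_i f_{t_i}\Big)^\alpha d\mu,\ \ \ a_i>0,\ t_i\in T. \] Identities of this form are used repeatedly in the sequel.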
As shown in the proposition below, an $\alpha$--Fr\'echet process $X$ has the representation \eqref{rep:extremalRep}, where $(S,\mu)$ is a {\it standard Lebesgue space} (see Appendix A in~\cite{pipiras04stable}), if and only if, $X$ satisfies {\it Condition S}. \begin{Def}\label{d:Cond-S} An $\alpha$--Fr\'echet process $X = \{X_t\}_{t\in T}$ is said to satisfy {\it Condition S} if there exists a countable subset $T_0\subseteq T$ such that for every $t\in T$, we have that $X_{t_n}\stackrel{P}\to X_t$ for some $\indn t\subset T_0$. \end{Def} \begin{Prop}\label{prop:conditionS} An $\alpha$--Fr\'echet process $X=\{X_t\}_{t\in T}$ has the extremal integral representation \eqref{rep:extremalRep}, with any (some) standard Lebesgue space $(S,\mu)$ and an $\alpha$--Fr\'echet random sup--measure on $S$ with control measure $\mu$, if (only if) it satisfies Condition S. \end{Prop} \noindent The result above follows from Proposition 3.2 in \cite{stoev06extremal}, since the standard Lebesgue space $(S,\mu)$ may be chosen to be $[0,1]$, equipped with the Lebesgue measure. \begin{Rem} As shown in Kabluchko \cite{kabluchko08spectral} (Theorem 1), every max--stable process can have a spectral representation over a sufficiently rich abstract measure space. \end{Rem} In the sequel, we focus only on the rich class of $\alpha$--Fr\'echet processes that satisfy Condition S. This includes, for example, all measurable max--stable processes $X=\{X_t\}_{t\in T}$, indexed by a separable metric space $T$ (see Proposition \ref{p:measurability} below). The fact that $(S,\mu)$ is a standard Lebesgue space implies that the space of integrands $L_+^\alpha(S,\mu)$ is a complete and {\it separable} metric space with respect to the metric: \begin{equation}\label{eq:metric} \rho_{\mu,\alpha}(f,g) = \int_S|f^\alpha-g^\alpha|d\mu\,. \end{equation} This metric is natural to use when handling extremal integrals, since as $n\to\infty$, \begin{equation}\label{e:mrho} \eint{S} f_n dM_\alpha \stackrel{P} {\longrightarrow} \xi\,,\ \ \mbox{{\it if and only if, }}\ \ \rho_{\mu,\alpha}(f_n,f) = \int_S |f_n^\alpha - f^\alpha| d\mu \to 0,\, \end{equation} where $\xi = \Eint{S} f d M_\alpha$ (see e.g.\ \cite{stoev06extremal} and also Davis and Resnick \cite{davis93prediction}). In the sequel, we equip the space $L_+^\alpha(S,\mu)$ with the metric $\rho_{\mu,\alpha}$ and often write $\|f\|_{L_+^\alpha(S,\mu)}^\alpha$ for $\int_S f^\alpha d\mu$. \section{Max--Linear Isometries}\label{sec:maxLinear} The max--linear (sub)spaces of functions in $L_+^\alpha(S,\mu)$ play a key role in the representation and characterization of max--stable processes. We say that ${\cal F}$ is a \textit{max--linear sub--space} of ${L^\alpha_+(S,\mu)}$ if the following conditions hold:\\ \itemnumber {i} $af\vee bg\in{\cal F}$, for all $a,b>0,f,g\in{\cal F}$. \itemnumber {ii} ${\cal F}\subset {L^\alpha_+(S,\mu)}$ is closed in the metric $\rho_{\mu,\alpha}$. \\ \noindent In particular, we will frequently encounter the max--linear space ${\cal F} \mathrel{\mathop:}= \overline{\vee\mbox{-}{\rm{span}}}(f_t,t\in T)$, which is generated by the max--linear combinations $\vee_{1\le i\le n}a_i f_{t_i}$, $t_i\in T,\ a_i>0$, of the spectral functions in \eqref{rep:extremalRep}. In view of \eqref{e:mrho}, the set of extremal integrals $\{\eint{S} f dM_\alpha,\ f\in {\cal F}\}$ is the smallest set that is closed with respect to convergence in probability and contains all max--linear combinations $\vee_{1\le i\le n} a_i X_{t_i}$. For more details, see \cite{stoev06extremal}. 
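It is worth noting that the metric \eqref{eq:metric} can be expressed through the joint law of the corresponding extremal integrals: since $|f^\alpha - g^\alpha| = 2(f\vee g)^\alpha - f^\alpha - g^\alpha$ pointwise, for $\xi = \eint{S}f dM_\alpha$ and $\eta = \eint{S}g dM_\alpha$ we have \[ \rho_{\mu,\alpha}(f,g) = 2\left\|\xi\vee\eta\right\|_\alpha^\alpha - \left\|\xi\right\|_\alpha^\alpha - \left\|\eta\right\|_\alpha^\alpha. \] In particular, $\rho_{\mu,\alpha}(f,g)$ depends on the pair $(f,g)$ only through the joint distribution of $(\xi,\eta)$, which is consistent with the equivalence in \eqref{e:mrho}.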
An $\alpha$--Fr\'echet process $X =\{X_t\}_{t\in T}$ as in \eqref{eq:xtsft} has many equivalent spectral representations. They are all related, however, through {\it max--linear isometries} (see e.g.\ \eqref{eq:Ucanonical} below): \begin{Def} Let $\alpha>0$. The map $U:L^\alpha_+(S_1,\mu_1)\to L^\alpha_+(S_2,\mu_2)$ is said to be a max--linear isometry if:\\ \itemnumber i $U(a_1f_1\vee a_2f_2) = a_1(Uf_1)\vee a_2(Uf_2), \mu_2\mbox{-a.e.}$, for all $f_1,f_2\in L^\alpha_+(S_1,\mu_1)$ and $a_1,a_2\geq 0$.\\ \itemnumber {ii} $\left\|Uf\right\|_{L^\alpha_+(\mu_2)}=\left\|f\right\|_{L^\alpha_+(\mu_1)}$, for all $f\in L^\alpha_+(S_1,\mu_1)$.\\ The max--linear isometry $U$ is called a \textit{max--linear isomorphism} if it is onto. \end{Def} Consider a max--linear sub--space ${\cal F}\subset L_+^\alpha(S_1,\mu_1)$ and a max--linear isometry $U:{\cal F} \to L_+^\alpha(S_2,\mu_2)$. Our goal in this section is somewhat technical: namely, to characterize $U$ and to identify the largest max--linear sub--space ${\cal G} \subset L_+^\alpha(S_1,\mu_1)$, such that ${\cal F}\subset{\cal G}$ and $U$ extends to ${\cal G}$ uniquely as a max--linear isometry. This is done in Theorem \ref{hardin81thm:4.2p} below. The proofs for all results in this section are given in Appendix~\ref{sec:proofMaxLinear}. It is known that all linear isometries on $L^\alpha$ spaces for $\alpha\neq 2$ are related to a \textit{regular set isomorphism} (see~\cite{lamperti58isometries}). Regular set isomorphisms also play an important role in the study of max--linear isometries. \begin{Def}\label{def:regular} Let $(S_1,{\cal S}_1,\mu_1)$ and $(S_2,{\cal S}_2,\mu_2)$ be two measure spaces. A set--mapping $T:{\cal S}_1\to{\cal S}_2$ is said to be a \textit{regular set isomorphism} if:\\ \itemnumber i For all $A\in{\cal S}_1$, $T(S_1\backslash A) = T(S_1)\backslash T(A)\mod\mu_2$;\\ \itemnumber {ii} For disjoint $A_n$'s in ${\cal S}_1$, $T(\cup_{n=1}^{+\infty}A_n) = \cup_{n=1}^{+\infty}T(A_n)\mod\mu_2$; \\ \itemnumber {iii} $\mu_2(T(A)) = 0$ if and only if $\mu_1(A) = 0$. \end{Def} \begin{Rem} Regular set isomorphisms are mappings defined modulo null sets. In the sequel, we often identify measurable sets that are equal modulo null sets. \end{Rem} \noindent The next properties follow immediately from the above definition: \itemnumber {iv} If $A_1,A_2\in{\cal S}_1$ and $\mu_1(A_1\cap A_2) = 0$, then $\mu_2(T(A_1)\cap T(A_2)) = 0$.\\ \itemnumber {v} For all, not necessarily disjoint, $A_n \in {\cal S}_1$, $n\in {\mathbb N}$, we have: $$ T(\cup_{n=1}^\infty A_n) = \cup_{n=1}^\infty T(A_n)\ \ \ \mbox{ and }\ \ \ T(\cap_{n=1}^\infty A_n) = \cap_{n=1}^\infty T(A_n).$$ Any regular set isomorphism $T$ induces a canonical function mapping $Tf$, defined for all measurable functions $f$, and such that $\{Tf \in B\} = T\{f \in B\}$, mod $\mu_2$, for all Borel sets $B\in {\cal B}_{\mathbb R}$. The resulting mapping is linear and also max--linear. If $T$ is, in addition, measure preserving, then the induced mapping becomes a max--linear isometry. For more details, see Lemma~\ref{lem:maxLinearExtension} in Appendix~\ref{sec:proofMaxLinear} or Doob~\cite{doob53stochastic}. The next result shows that any max--linear isometry that maps the identity function ${\bf 1}$ to the identity function ${\bf 1}$ is induced by a measure preserving regular set isomorphism. \begin{Thm}\label{hardin81thm:2.2p} Suppose $\alpha>0$. Let ${\cal F}$ be a max--linear sub--space of ${L^\alpha_+(S_1,\mu_1)}$ and $U:{\cal F}\to{L^\alpha_+(S_2,\mu_2)}$ be a max--linear isometry.
If $\ind_{S_1}\in{\cal F}$ and $U\ind_{S_1} = \ind_{S_2}$, then $Uf= Tf$ for all $f\in{\cal F}$, where: \itemnumber i $T$ is induced by a measure preserving regular set isomorphism from $\sigma({\cal F})$ onto $\sigma(U({\cal F}))$,\\ \itemnumber {ii} $T$ is a max--linear isometry from $L^\alpha_+(S_1,\sigma(\filF),\mu_1)$ onto $L^\alpha_+(S_2,\sigma(U(\filF)),\mu_2)$, and\\ \itemnumber {iii} $T$ is the unique extension of $U$ to a max--linear isometry from $L^\alpha_+(S_1,\sigma(\filF),\mu_1)$ to ${L^\alpha_+(S_2,\mu_2)}$. \end{Thm} Not all max--linear isometries are directly induced by regular set isomorphisms. We will show next, however, that every max--linear isometry can be related to a regular set isomorphism. \begin{Def} Let $F$ be a collection of functions in $L^\alpha_+(S,\mu)$. \itemnumber i The \textit{ratio $\sigma$--field} of $F$, written $\rho(F)\mathrel{\mathop:}=\sigma\left(\left\{f_1/f_2, f_1,f_2\in F\right\}\right)$, is defined as the $\sigma$--field generated by ratio of functions in $F$, where the ratios take values in the extended interval $[0,\infty]$;\\ \itemnumber {ii} The \textit{positive ratio space} of $F$, written $\ratios F$, is defined as $L^\alpha_+(S,\rho(F),\mu)$. \\ \itemnumber {iii} The \textit{extended positive ratio space} of $F$, written $\eratios F$, is defined as the class of all functions in $L^\alpha_+(S,\mu)$ that have the form $rf$, where $r$ is non-negative $\rho(F)$-measurable and $f\in F$. \end{Def} \noindent In the following lemma, we present some important properties of the ratio $\sigma$--fields. \begin{Lem}\label{lem:ratio1} For any non--empty class of functions $F \subset {L^\alpha_+(S,\mu)}$, we have $\rho(F) = \rho(\overline{\vee\mbox{-}{\rm{span}}}(F))\subset\sigma(F)$. If, in addition, ${\bf 1}_S\in F$, then $\rho(F) = \sigma(F)$. \end{Lem} Before introducing the main result of this section, we need some auxiliary results about the notion of \textit{full support}. \begin{Def}\label{def:fullSupport} Let $(S,\mu)$ be a measurable space and $F$ be a collection of measurable real-valued functions on $(S,\mu)$. A measurable function $f_0$ is said to have \textit{full support} w.r.t. $F$ if $\mu({\rm{supp}}(g)\setminus {\rm{supp}}(f_0)) = 0$ for all $g\in F$, where ${\rm{supp}}(f)\mathrel{\mathop:}= \{f\neq 0\}$. If, in addition, $f_0\in F$, we then write ${\rm{supp}}(F) = {\rm{supp}}(f_0)$. \end{Def} \begin{Rem} Note that the definition of full support is modulo $\mu$-null sets and the definition of ${\rm{supp}}(F)$ is independent of the choice of $f_0\in F$. Also, our definition of ${\rm{supp}}(F)$ requires implicitly that $F$ contains a function $f_0$ of full support. \end{Rem} \begin{Lem}\label{lem:fullSupport1} Let ${\cal F}$ be a max--linear sub--space of ${L^\alpha_+(S,\mu)}$. If ${\cal F}$ is separable or $\mu$ is $\sigma$-finite, then there exists a function of full support in ${\cal F}$. \end{Lem} \begin{Lem}\label{lem:fullSupport2} Let ${\cal F}$ be a max--linear sub--space of ${L^\alpha_+(S_1,\mu_1)}$ and let $U:{\cal F}\to{L^\alpha_+(S_2,\mu_2)}$ be a max--linear isometry. Assume that the measures $\mu_1$ and $\mu_2$ are $\sigma$--finite. If $f_0$ has full support in ${\cal F}$, then $Uf_0$ has full support in $U({\cal F})$. \end{Lem} \noindent We now present the main result of this section. \begin{Thm}\label{hardin81thm:4.2p} Suppose $\alpha>0$ and let ${\mathcal F}$ be a max--linear sub--space of $L^\alpha_+(S_1,\mu_1)$. Suppose also that ${\rm{supp}}({{\mathcal F}}) = S_1$. 
If $\mu_1$ is $\sigma$-finite and $U:{\cal F}\to{L^\alpha_+(S_2,\mu_2)}$ is a max--linear isometry, then: \noindent\itemnumber i $U$ has a unique extension to a max--linear isometry $\overline U$, defined from $\calR_{e,+}(\filF)$ to $L^\alpha_+(S_2,\mu_2)$. Moreover, $\overline U$ is also onto $\calR_{e,+}(U(\filF)) \subset L^\alpha_+(S_2,\mu_2)$ and \begin{equation}\label{hardin81thm:4.2peq:1} \overline U(rf) = (Tr)(Uf),\ \ \ \mbox{ for all } r\in\calR_+(\filF)\,,f\in{\cal F}\,, \end{equation} where the function mapping $T:\calR_+(\filF)\to\calR_+(U(\filF))$ is induced by a regular set isomorphism of $\rho({\cal F})$ onto $\rho(U({\cal F}))$. \\ \itemnumber {ii} For all $f\in {\mathcal F},$ we have \begin{equation}\label{hardin81thm:4.2peq:2} (Uf)^\alpha d\mu_2 = d\mu_{1,f}\circ{T^{-1}}\,, \end{equation} where $d\mu_{1,f} = f^\alpha d\mu_1$. \end{Thm} \begin{Rem} Equality \eqref{hardin81thm:4.2peq:2} means that the two measures are identical on the $\sigma$--field $\rho(U(F))$, i.e.\ $\int_A (Uf)^\alpha d\mu_2 = \mu_{1,f}\circ T^{-1}(A)$, for all $A \in \rho(U(F))$. In the sequel, we will interpret equalities between measures defined on different $\sigma$--fields as equality of their corresponding restrictions to the largest common $\sigma$--field. Note that in general $(Uf)^\alpha$ in \eqref{hardin81thm:4.2peq:2} does not necessarily equal the Radon--Nikodym derivative $d(\mu_{1,f}\circ T^{-1})/d\mu_2$ since the $\sigma$--field $\rho(U(F))$ is typically coarser than ${\cal B}_{S_2}$. This is why $U$ may not have a unique extension to $L_+^\alpha(S_2,\mu_2)$, in general. See Remark 3.2(c) in Rosi\'nski~\cite{rosinski06minimal} for a detailed discussion. \end{Rem} Recall the notion of equivalence in measure of two $\sigma$--fields, defined on the same measure space $(S,{\cal S},\mu)$. Namely, for two $\sigma$--fields ${\cal A},\ {{\cal B}} \subset {\cal S}$, we write ${\cal A}\sim{\cal B}\mod\mu$, if for any $A \in {\cal A}$ ($B\in{\cal B}$, respectively), there exists $B \in {\cal B}$ ($A\in{\cal A}$, respectively) such that $\mu(A\Delta B) = 0$. The following result will be used in the next section. \begin{Lem}\label{lem:ratio2} Let $F$ be a class of functions in ${L^\alpha_+(S,\mu)}$. Suppose there exists $f_0\in F$ with full support in $F$. If $S = {\rm{supp}}(f_0) \equiv {\rm{supp}}(F)$ and if $\rho(F) \sim {\cal B}_S \mod\mu$, then $\eratios F = {L^\alpha_+(S,\mu)}$. \end{Lem} This result and Theorem \ref{hardin81thm:4.2p} provide sufficient conditions for a max--linear isometry $U$, defined on $F$, to extend uniquely to the entire space ${L^\alpha_+(S,\mu)}$. \section{Minimal Representations for $\alpha$--Fr\'echet Processes}\label{sec:minimal} Let $\{f_t^{(i)}\}_{t\in T} \subset L^\alpha_+(S_i,\mu_i),\ i=1,2$ be two spectral representations for the $\alpha$--Fr\'echet process $X = \{X_t\}_{t\in T}$. Recall that for all $t_j \in T,\ c_j \ge 0,\ 1\le j\le n,$ we have $$ -\log{\mathbb P}\{X_{t_j}\le c_j^{-1},\ 1\le j\le n\} = \int_{S_1} {\Big(} \bigvee_{j=1}^n c_j f_{t_j}^{(1)} {\Big)}^\alpha d\mu_1 = \int_{S_2} {\Big(} \bigvee_{j=1}^n c_j f_{t_j}^{(2)} {\Big)}^\alpha d\mu_2. $$ One can thus define the following natural max--linear isometry: \begin{equation}\label{eq:Ucanonical} U: \overline{\vee\mbox{-}{\rm{span}}}\{f_t^{(1)}\}_{t\in T} \to \overline{\vee\mbox{-}{\rm{span}}}\{f_t^{(2)}\}_{t\in T}\,,\ \mbox{ with }\ U f_t^{(1)} := f_t^{(2)},\ \mbox{ for all }t\in T. \end{equation} In the sequel, $U$ will be called the {\it relating max--linear isometry} of the two representations.
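To see that \eqref{eq:Ucanonical} is indeed a (well defined) max--linear isometry, note that, by the displayed identity above, $U$ preserves the norms of all max--linear combinations: \[ {\Big\|}\bigvee_{j=1}^n c_j f_{t_j}^{(1)}{\Big\|}_{L^\alpha_+(S_1,\mu_1)}^\alpha = {\Big\|}\bigvee_{j=1}^n c_j f_{t_j}^{(2)}{\Big\|}_{L^\alpha_+(S_2,\mu_2)}^\alpha,\ \ \ c_j>0,\ t_j\in T,\ 1\le j\le n. \] Since $\rho_{\mu,\alpha}(h_1,h_2) = 2\|h_1\vee h_2\|_{L_+^\alpha(S,\mu)}^\alpha - \|h_1\|_{L_+^\alpha(S,\mu)}^\alpha - \|h_2\|_{L_+^\alpha(S,\mu)}^\alpha$, it follows that $U$ is isometric with respect to $\rho_{\mu_1,\alpha}$ on the set of max--linear combinations; in particular, $\mu_1$--a.e.\ equal combinations are mapped to $\mu_2$--a.e.\ equal ones, and $U$ extends by continuity to a max--linear isometry between the closed max--linear spans in \eqref{eq:Ucanonical}.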
Our goal in this section is to provide convenient representations for the max--linear isometry $U$. For any standard Lebesgue space $(S,\mu)$, we have that $\{f_t\}_{t\in T}\subset L_+^\alpha(S,\mu)$ is separable, and hence by Lemma \ref{lem:fullSupport1}, the max--linear space ${\cal F}=\overline{\vee\mbox{-}{\rm{span}}}(f_t,\ t\in T)$ contains a function with {\it full support}. Therefore, by convention, we define the support of $\{f_t\}_{t\in T}$ as follows: $$ {\rm supp}\{ f_t,\ t\in T\} := {\rm{supp}}({\cal F}) \equiv {\rm supp} {\Big(} \overline{\vee\mbox{-}{\rm{span}}}(f_t,\ t\in T) {\Big)}. $$ In view of Theorem \ref{hardin81thm:4.2p}, one can readily represent the max--linear isometry $U$ in \eqref{eq:Ucanonical} in terms of a regular set isomorphism. The latter mapping, however, is a set--mapping rather than a point mapping. It is desirable to be able to express $U$ via measurable point mappings. Unfortunately, in general such point mappings may not be unique. In order to have a unique point mapping relating the two representations, we need to impose a further \textit{minimality condition} on the spectral representations. The following definition is as in Rosi\'nski~\cite{rosinski95structure} (see also~\cite{hardin82spectral}). \begin{Def}\label{def:minimality} A spectral representation $\indt f\subset L^\alpha_+(S,\mu)$ of an $\alpha$--Fr\'echet process is said to be {\it minimal} if:\\ \noindent \itemnumber i ${\rm{supp}}\{f_t: t\in T\} = S \quad \mu\mbox{-a.e.}$, and\\ \itemnumber {ii} for any $B\in{\cal B}_S$, there exists $A\in\rho(\{f_t: t\in T\})$ such that $\mu(A\Delta B) = 0$. \end{Def} We shall also consider minimal representations with \textit{standardized support} defined as follows. \begin{Def}\label{def:standardizedSupport} A minimal representation $\indt f\subset {L^\alpha_+(S,\mu)}$ has {\it standardized support} if, up to $\mu$-null sets:\\ \noindent \itemnumber {i} $S\subset(0,1)\cup\mathbb N$,\\ \itemnumber {ii} $S\cap(0,1) = \emptyset$ or $(0,1)$ and $\mu|_{(0,1)}$ is the Lebesgue measure, \\ \itemnumber{iii} $S\cap\mathbb N = \emptyset$, $\mathbb N$ or $\{1,\cdots,N\}$, where $N\in\mathbb N$ and $\mu|_{S\cap\mathbb N}$ is the counting measure.\\ \noindent Let $({S_{I,N}},{\lambda_{I,N}})$ denote the standard support with $I=0$ or $1$ respectively according to the two cases in (ii), and $N=0$, $N = \infty$ or $N\in\mathbb N$ respectively according to the three cases in (iii), e.g. $S_{0,\infty} = \mathbb N$ and $S_{1,N} = (0,1)\cup\{1,\dots,N\}$. \end{Def} We now show that any spectral representation of an $\alpha$--Fr\'echet process can be transformed into a minimal one with standardized support. \begin{Thm}\label{thm:standardized} Every $\alpha$--Fr\'echet process satisfying Condition S has a minimal representation $\indt f$ with standardized support $({S_{I,N}},{\lambda_{I,N}})$. That is, \begin{equation}\label{rep:minRepSS} \indt X \stackrel{\rm d}{=} {\Big\{}\int^{\!\!\!\!\!\!\!e}_{S_{I,N}} f_t(s)M_\alpha(ds) {\Big\}}_{t\in T}\,, \end{equation} where $M_\alpha$ is the $\alpha$--Fr\'echet random sup--measure with control measure ${\lambda_{I,N}}$. \end{Thm} \begin{proof} By Proposition~\ref{prop:conditionS}, one can let $G = \{g_t\}_{t\in T}\subset L^\alpha_+((0,1),{\cal B}_{(0,1)},ds)$ be a spectral representation of the process in question, where $ds$ is the Lebesgue measure on $(0,1)$. First, we study the ratio $\sigma$--field generated by $G$.
Let ${\cal G} = \overline{\vee\mbox{-}{\rm{span}}}\{g_t,t\in T\}$ and, in view of Lemma~\ref{lem:fullSupport1}, let $g\in{\cal G}$ have full support in ${\cal G}$. By Lemma~\ref{lem:ratio1}, we have $\rho(G) = \rho({\cal G})$. Without loss of generality we assume ${\rm{supp}}(g) = {\rm{supp}}({\cal G}) = (0,1)$ and $\left\|g\right\|_\alpha = 1$. Define a new measure $\mu$ on the space $((0,1),\rho({\cal G}))$ by setting $d\mu(s) = g(s)^\alpha ds$. Since $\mu$ is a probability measure, the measure space $((0,1),\rho({\cal G}),\mu)$ has at most countably many (equivalence classes of) atoms. With some abuse of notation, we represent them as $A_1,A_2,\dots,A_N$, where $N=0$ means that there are no atoms, $N \in \mathbb N$ that there are finitely many atoms, and $N = \infty$ that countably infinitely many atoms are present. Set $A=\cup_{n=1}^N A_n$ and $a_i = \mu(A_i)\,,1\leq i\leq N$. Next, we define a regular set isomorphism $T_r$ of the measure space $((0,1),\rho({\cal G}),\mu)$ onto the measure space $({S_{I,N}},{\calB_\ssS},{\lambda_{I,N}})$ considered in Definition~\ref{def:standardizedSupport}. For the atoms, define $T_r^N(A_n) = \{n\}, n\leq N,n\in\mathbb N$. For the non--atomic subset $A_0\equiv (0,1)\setminus A$, let ${\cal S}_0 = \rho({\cal G})\cap A_0 = \{B\cap A_0, B\in \rho({\cal G})\}$ and let $\mu_i$ be the restriction of $\mu$ to $A_i, i=0,\dots,N$. The case $a_0 = 0$ is trivial since then $\mu(A_0) = \mu_0(A_0) = 0$ and we can simply ignore $(A_0,{\cal S}_0,\mu_0)$. We thus suppose that $a_0>0$ and observe that $(A_0,{\cal S}_0,\mu_0)$ is a non--atomic separable measure space (see p167 in~\cite{halmos50measure}) with total mass $\mu(A_0) = 1-\sum_{n=1}^Na_n \equiv a_0$. Indeed, the separability of $(A_0,{\cal S}_0,\mu_0)$ is due to the fact that ${\cal G}$ restricted to $A_0$ is separable. Now, Theorem 41.C in Halmos \cite{halmos50measure} implies that there is a measure preserving regular set isomorphism, i.e., a {\it measure algebra isomorphism} $T_r^I$ from $(A_0,{\cal S}_0,\mu_0)$ {\it onto} $((0,1),{\cal B}_{(0,1)},a_0ds)$. By combining the definitions of $T_r^N$ on all atoms $A_i,\ 1\le i \le N$ and $T_r^I$ on $(A_0,{\cal S}_0,\mu_0)$, we thus obtain a regular set isomorphism $T_r\mathrel{\mathop:}= T^I_r+T^N_r$ from $((0,1),\rho({\cal G}),\mu)$ {\it onto} $({S_{I,N}},{\calB_\ssS},{\lambda_{I,N}})$. Note that $T_r$ is not necessarily measure preserving. By using $T_r$, we construct next the desired minimal representation with standardized support. Define \begin{equation}\label{hardin82thm:1.1peq:isometry} f_t(s) = T_r(g_t/g)(s)\left(a_0^{1/\alpha}{\bf 1}_{(0,1)}(s)+\sum_{n=1}^Na_n^{1/\alpha}{\bf 1}_{\{n\}}(s)\right)\,, \end{equation} where $T_r$ is the canonical map on measurable functions induced by the constructed isomorphism (see Lemma~\ref{lem:maxLinearExtension} or p452-454 \cite{doob53stochastic}) from $L^\alpha_+((0,1), \rho({\cal G}),\mu)$ onto $L^\alpha_+({S_{I,N}},{\lambda_{I,N}})$. We claim that $\{f_t\}_{t\in T}$ is a minimal representation with standardized support.
It is clearly a spectral representation, since, for any $m\in\mathbb N,\ t_i\in T,\ c_i>0,\ 1\leq i\leq m$, \begin{eqnarray} {\Big\|}\bigvee_{i=1}^mc_if_{t_i}{\Big\|}_{L^\alpha_+({S_{I,N}},{\lambda_{I,N}})}^\alpha & = & {\Big\|}\bigvee_{i=1}^mc_iT_r(g_{t_i}/g) {\Big(}a_0^{1/\alpha}{\bf 1}_{(0,1)}+\sum_{n=1}^Na_n^{1/\alpha}{\bf 1}_{\{n\}}{\Big)} {\Big\|}_{L^\alpha_+({S_{I,N}},{\lambda_{I,N}})}^\alpha \nonumber\\ & = & {\Big\|}a_0^{1/\alpha}\bigvee_{i=1}^mc_iT_r(g_{t_i}/g){\Big\|}_{L^\alpha_+(0,1)}^\alpha + \sum_{n=1}^N{\Big|}a_n^{1/\alpha}\bigvee_{i=1}^mc_iT_r(g_{t_i}/g)(n){\Big|}^\alpha \nonumber\\ & = & {\Big\|}\bigvee_{i=1}^mc_ig_{t_i}/g{\Big\|}_{L^\alpha(A_0,\mu_0)}^\alpha + \sum_{n=1}^N{\Big\|}\bigvee_{i=1}^mc_ig_{t_i}/g{\Big\|}_{L^\alpha(A_n,\mu_n)}^\alpha \label{eq:scaling}\\ & = & {\Big\|}\bigvee_{i=1}^mc_ig_{t_i}/g{\Big\|}_{L^\alpha((0,1),\mu)}^\alpha = {\Big\|}\bigvee_{i=1}^mc_ig_{t_i}{\Big\|}_{L^\alpha_+(0,1)}^\alpha \nonumber \end{eqnarray} where \eqref{eq:scaling} follows from the fact that $T_r^I$ is a measure preserving regular set isomorphism of $A_0$ onto $(0,1)$ and since $T_r^N$ maps atoms to integer points in a one-to-one and onto manner. Indeed, restricted to each $A_i, 0\le i\le N$, $a_i^{1/\alpha}T_r$ is a max--linear isometry satisfying \begin{eqnarray*} \Big\|a_i^{1/\alpha}T_r{\bf 1}_{A_i}\Big\|_{L^\alpha(T_r(A_i),{\lambda_{I,N}})}^\alpha & = & \Big\|a_i^{1/\alpha}{\bf 1}_{T_r{A_i}}\Big\|_{L^\alpha(T_r(A_i),{\lambda_{I,N}})}^\alpha\\ & = & a_i{\lambda_{I,N}}(T_rA_i) = \mu(A_i) = \left\|{\bf 1}_{A_i}\right\|_{L^\alpha(A_i,\mu_i)}^\alpha\,. \end{eqnarray*} We will complete the proof by verifying the minimality of $\{f_t\}_{t\in T}$ (by Definition~\ref{def:minimality}). Let ${\cal F}$ denote $\overline{\vee\mbox{-}{\rm{span}}}\{f_t,t\in T\}$ and note that $g\in {\cal G} = \overline{\vee\mbox{-}{\rm{span}}}\{g_t,\ t\in T\}$. Since $T_r(g/g) = {\bf 1}_{S_{I,N}}$, by~\eqref{hardin82thm:1.1peq:isometry}, we obtain that \begin{equation}\label{eq:ssf} {f_{I,N}}(s) \mathrel{\mathop:}= a_0^{1/\alpha}{\bf 1}_{(0,1)}(s)+\sum_{n=1}^Na_n^{1/\alpha}{\bf 1}_{\{n\}}(s) \ \ \mbox{ belongs to }\ \ {\cal F}. \end{equation} This implies ${\rm{supp}}({f_{I,N}}) = {\rm{supp}}({\cal F}) = {S_{I,N}}$, and hence (i) in Definition~\ref{def:minimality} holds. To verify (ii), observe that by \eqref{hardin82thm:1.1peq:isometry} and Lemma~\ref{lem:maxLinearExtension}, $f_1/f_2 = T_r(g_1/g)/T_r(g_2/g) = T_r(g_1/g_2)$ for all $g_1,g_2\in{\cal G}$. Therefore $T_r(\rho({\cal G})) \equiv \rho({\mathcal F})$, and since, as shown above, the regular set isomorphism $T_r$ maps $\rho({\cal G})$ {\it onto} ${\cal B}_{{S_{I,N}}}$, it follows that (ii) holds. \ifthenelse{\boolean{qedTrue}}{\qed}{} \end{proof} \begin{Rem} Theorem \ref{thm:standardized} shows the existence of minimal representations with standardized support. One can have many minimal representations whose supports are not necessarily standardized in the same way. For example, in the proof of Theorem \ref{thm:standardized}, we could define $\tilde\lambda_{I,N}$ on $S_{I,N}$ so that, restricted to the atoms $A_i$, $1\le i \le N$, we have $d\tilde\lambda_{I,N} = a_i^{1/\alpha}d{\lambda_{I,N}}$. In this case, one obtains a finite measure $\tilde\lambda_{I,N}$ on $S_{I,N}$ as discussed in Rosi\'nski \cite{rosinski94uniqueness} (p. 626) for the case of symmetric $\alpha$--stable processes. Our measure $\lambda_{I,N}$ may be infinite, since it is a counting measure on the atoms.
\end{Rem} \begin{Rem} Theorem~\ref{thm:standardized} can be seen as a generalization of Theorem 4.1 in de Haan and Pickands III~\cite{dehaan86stationary}. Instead of minimal representation, \textit{proper representation} is involved therein. A spectral representation is proper if the spectral functions $\indt f$ satisfy (i) ${\rm{supp}}\{f_t\,,t\in T\} = S\,,\mu\mbox{-a.e.}$ and (ii) $\forall B\in{\cal B}_S$, either there exists $A\in \rho(\{f_t\,,t\in T\})$ such that $\mu(A\Delta B) = 0$ or there exists an atom $A\in\rho(\{f_t\,,t\in T\})$ such that $\mu(B\cap A)>0$. This definition is closely related to our definition of minimality, in the sense that any proper representation can be transformed into a minimal one. Indeed, this essentially involves contracting the atoms to points as in the proof of Theorem~\ref{thm:standardized}. \end{Rem} Consider the \textit{canonical} max--linear isometry $U$ relating two spectral representations as in~\eqref{eq:Ucanonical}. Theorem \ref{hardin81thm:4.2p} implies that $U$ extends uniquely to a max--linear isometry $U:\eratios{{\cal F}\topp1}\to\eratios{{\cal F}\topp2}$ between extended positive ratio spaces, where ${\cal F}\topp i = \overline{\vee\mbox{-}{\rm{span}}}\{f_t\topp i:t\in T\}\,,i=1,2$. Now, if the first spectral representation $\indt{f\topp1}$ is \textit{minimal}, then by Lemma~\ref{lem:ratio2}, $\eratios{{\cal F}\topp1} = L^\alpha_+({S_{I,N}},{\lambda_{I,N}})$. In this case, one can also represent $U$ in terms of \textit{measurable point mappings}. This \textit{point mapping representation} is developed in the following result. It will be essential for our studies in Sections \ref{sec:classification} and \ref{sec:stationary}. \begin{Thm}\label{thm:relation} Let $\indt f\subsetL^\alpha_+({S_{I,N}},{\lambda_{I,N}})$ and $\indt g\subset{L^\alpha_+(S,\mu)}$ be two spectral representations of an $\alpha$--Fr\'echet process $\indt X$. Let $U$ be the relating max--linear isometry of $\indt f$ and $\indt g$. If $\indt f$ is minimal and $\indt g$ is arbitrary, then\\ \itemnumber i $U$ can be uniquely extended to $L^\alpha_+({S_{I,N}},{\lambda_{I,N}})$; \\ \itemnumber {ii} $U$ can be represented by measurable functions $\Phi:S\to{S_{I,N}}$ and $h:S\to\mathbb R_+\setminus\{0\}$, such that $\Phi$ is onto, and the following statements hold: \begin{equation}\label{eq:pointRep1} g_t(s) = Uf_t(s) = h(s)\left(f_t\circ\Phi\right)(s)\,, \quad\mu\mbox{-a.e.} \,, \end{equation} and \begin{equation}\label{eq:pointRep2} d{\lambda_{I,N}} = d\left(\mu_h\circ\Phi^{-1}\right), \end{equation} where $d\mu_h(s) = h(s)^\alpha d\mu$. $\Phi$ is unique modulo $\mu$. \end{Thm} \begin{proof} Let $F$ and $G$ denote $\indt {f}$ and $\indt g$ respectively. By Theorem~\ref{hardin81thm:4.2p}, there exists a regular set isomorphism $T_r$ from ${\calB_\ssS}$ {\it onto} $\rho(G)$ such that \[ g_t(s) = Uf_t(s) = (T_rf_t)(s) \left(\frac{Uf_0}{T_r f_0}\right)(s),\ \ \ \mu\mbox{-a.e.}\,,\forall t\in T\,, \] for some function with full support $f_0 \in \overline{\vee\mbox{-}{\rm{span}}}\{f_t,\ t\in T\}$. In the last relation we used the facts that $T_r(1/f_0)= 1/T_r(f_0)$ and $T_r(f_t/f_0) = T_r(f_t)/T_r(f_0)$ (Lemma \ref{lem:maxLinearExtension}). Moreover, we have that \begin{equation}\label{eq:uf0a} \left(Uf_0\right)^\alpha d\mu = d(\mu_{1,f_0}\circ{T_r^{-1}}) = \left(T_r f_0 \right)^\alpha d\left({\lambda_{I,N}}\circ T_r^{-1}\right) \,,\mu\mbox{-a.e.}\,. 
\end{equation} By Theorem 32.5 in Sikorski~\cite{sikorski64boolean}, the regular set isomorphism $T_r$ can be induced by a point mapping $\Phi$ from $S$ onto ${S_{I,N}}$ such that $T_rf = f\circ \Phi$, for all measurable functions $f$ defined on $S_{I,N}$. Moreover, $\Phi$ is unique modulo $\mu$. Note that in general $\Phi$ is not one-to-one, because of the possible presence of atoms in $(S,\rho({\cal G}),\mu)$. To show that \eqref{eq:pointRep2} is true, let \[ \widetilde h(s) = \frac{Uf_0}{T_r f_0}(s) = \frac{Uf_0}{f_0\circ \Phi}(s)\,. \] Note that by Lemma~\ref{lem:fullSupport2}, $\widetilde h(s)>0\,,\mu\mbox{-a.e.}$. Put \begin{equation}\label{eq:hs} h(s) = \left\{ \begin{array}{l@{\mbox{ if }}l} \widetilde h(s) & \widetilde h(s)>0\\ 1 & \widetilde h(s) = 0\end{array} \right. \mbox{ and } d\mu_h = h^\alpha d\mu. \end{equation} Observe that $h$ is a measurable function from $S$ to $\mathbb R_+\setminus\{0\}$. Thus, relation~\eqref{eq:pointRep2} follows by~\eqref{eq:hs} and~\eqref{eq:uf0a}. This completes the proof.\ifthenelse{\boolean{qedTrue}}{\qed}{} \end{proof} \begin{Rem}\label{rem:Phi-non-sing} Relation \eqref{eq:pointRep2} and the fact that $h(s)>0$ for all $s$ imply that $\mu\circ \Phi^{-1}\sim \lambda_{I,N}$. \end{Rem} \noindent Now, if both representations in Theorem~\ref{thm:relation} are minimal, we have the following: \begin{Coro}\label{coro:uniquePointMapping} If $\indt {f\topp i}\,,i=1,2$ are two minimal representations of an $\alpha$--Fr\'echet process $\indt X$ with standardized support $(S_{I_i,N_i},\lambda_{I_i,N_i})\,,i=1,2$, then the relating max--linear isometry $U$ from $L^\alpha_+(S_{I_1,N_1},\lambda_{I_1,N_1})$ onto $L^\alpha_+(S_{I_2,N_2},\lambda_{I_2,N_2})$ is determined by functions $\Phi: S_{I_2,N_2}\to S_{I_1,N_1}$ and $h: S_{I_2,N_2}\to\mathbb R_+\setminus\{0\}$, unique modulo $\lambda_{I_2,N_2}$, such that $\Phi$ is one-to-one and onto and, for each $t\in T$, \begin{equation}\label{eq:minimalRelation1} f_t\topp2(s) = Uf_t\topp1(s) = h(s)\left(f_t\topp1\circ\Phi\right)(s)\,, \quad \lambda_{I_2,N_2}\mbox{-a.e.} \end{equation} and \begin{equation}\label{eq:minimalRelation2} \frac{d(\lambda_{I_1,N_1}\circ\Phi)}{d\lambda_{I_2,N_2}}(s) = h(s)^\alpha,\quad \lambda_{I_2,N_2}\mbox{-a.e.}\,. \end{equation} \end{Coro} \noindent An important consequence of Corollary~\ref{coro:uniquePointMapping} is the following. \begin{Coro}\label{coro:uniqueness} Let $\indt {f\topp i}\,,i=1,2$ be as in Corollary~\ref{coro:uniquePointMapping}. Then \[ I_1 = I_2 = I\quad\mbox{and}\quad N_1 = N_2 = N\,. \] Moreover, the relating max--linear isometry $U:L^\alpha_+({S_{I,N}},{\lambda_{I,N}})\to L^\alpha_+({S_{I,N}},{\lambda_{I,N}})$ satisfies\\ \itemnumber i if $I = 1$, then $\forall f\in L^\alpha_+(0,1)$, \begin{equation}\label{eq:ufi} Uf = \left(\dfrac\lambda{\Phi_I}\lambda\right)^{1/\alpha}(f\circ\Phi_I)\,,\lambda\mbox{-a.e.}\,, \end{equation} where $\lambda$ is the Lebesgue measure on $(0,1)$, $\Phi_I$ is a point map from $(0,1)$ onto $(0,1)$, and\\ \itemnumber {ii} if $N \neq 0$, then $\forall f\in L^\alpha_+({S_{I,N}}\cap\mathbb N,{\lambda_{I,N}})$, \begin{equation}\label{eq:ufn} Uf = f\circ\Phi_N\,, \end{equation} where $\Phi_N$ is an automorphism of ${S_{I,N}}\cap\mathbb N$. \end{Coro} \begin{proof} We start by recalling that $U$ is induced by $T_r$, which is a one-to-one isomorphism modulo ${\lambda_{I,N}}$-null sets from ${\cal B}_{S_{I_1,N_1}}$ onto ${\cal B}_{S_{I_2,N_2}}$ (by Theorem~\ref{hardin81thm:4.2p}).
Since $T_r$ is a regular set isomorphism, one has that for all $A,B\in{\cal B}_{S_{I_1,N_1}}$, \[ \lambda_{I_1,N_1}(A)\lambda_{I_1,N_1}(B\setminus A) = 0\Leftrightarrow \lambda_{I_2,N_2}(T_rA)\lambda_{I_2,N_2}(T_rB\setminus T_rA) = 0\,. \] Thus $T_r$ maps \textit{atoms} to \textit{atoms} and non-atomic sets to non-atomic sets. Hence, \[ T_r\left({\cal B}_{S_{I_1,N_1}}\cap(0,1)\right) \subset {\cal B}_{S_{I_2,N_2}}\cap(0,1) \mbox{ and } T_r\left({\cal B}_{S_{I_1,N_1}}\cap\mathbb N\right) \subset {\cal B}_{S_{I_2,N_2}}\cap\mathbb N\,. \] Since $T_r$ is onto, we also have that \[ T_r\left({\cal B}_{S_{I_1,N_1}}\cap(0,1)\right) = {\cal B}_{S_{I_2,N_2}}\cap(0,1) \mbox{ and } T_r\left({\cal B}_{S_{I_1,N_1}}\cap\mathbb N\right) = {\cal B}_{S_{I_2,N_2}}\cap\mathbb N\,. \] This implies that $I_1=I_2$. Moreover, since $T_r$ is one--to--one and onto, we have $N_1 = N_2$. This also shows that $T_r:{S_{I,N}}\cap\mathbb N\to{S_{I,N}}\cap\mathbb N$ is a bijection where $I \mathrel{\mathop:}= I_1=I_2$ and $N \mathrel{\mathop:}= N_1=N_2$. By Corollary~\ref{coro:uniquePointMapping}, it follows that (i) and (ii) hold. Note that in (ii) we have a simpler formula for $Uf$. This is because, on the discrete part ${S_{I,N}}\cap\mathbb N$, the function $h(s)$ defined in~\eqref{eq:minimalRelation2} equals 1.\ifthenelse{\boolean{qedTrue}}{\qed}{} \end{proof} \begin{Rem} Theorem~\ref{thm:relation} and Corollary~\ref{coro:uniquePointMapping} are valid even if the minimal representations therein do not have standardized support (see Theorem~4.1 and Theorem~4.2 in~\cite{dehaan86stationary} for results on discrete processes; see also Theorem~2.1 in Rosi\'nski~\cite{rosinski95structure} for an analogous result in the sum--stable setting). The advantage of having a minimal representation with \textit{standardized support} is shown in Corollary~\ref{coro:uniqueness} and further exploited in the next section. \end{Rem} \section{Classification of $\alpha$--Fr\'echet Processes}\label{sec:classification} We now apply the abstract results on max--linear isometries and minimal representations to classify $\alpha$--Fr\'echet processes. The first classification result is an immediate consequence of the notion of minimal representation with standardized support and it applies to general max--stable processes. \subsection{Continuous--discrete decomposition}\label{sec:continuousDiscreteDecomposition} Consider an $\alpha$--Fr\'echet process $X=\{X_t\}_{t\in T}$, which has a minimal representation with standardized support $\{f_t\}_{t\in T}\subset L_+^\alpha(S_{I,N},\lambda_{I,N})$. By Corollary~\ref{coro:uniqueness}, the support $({S_{I,N}},{\lambda_{I,N}})$ is unique. We therefore call $S_{I,N}$ the {\it standardized support} of $X$ and focus on the {\it continuous} and {\it discrete} parts of $S_{I,N}$, respectively: \[ S_I \mathrel{\mathop:}= {S_{I,N}}\cap(0,1),\quad\mbox{ and }\quad S_N\mathrel{\mathop:}= {S_{I,N}}\cap\mathbb N. \] Let $f\tpd I_t = f_t{\bf 1}_{S_I}$ and $f\tpd N_t = f_t{\bf 1}_{S_N}$ be the restrictions of the $f_t$'s to $S_I$ and $S_N$, respectively. One can write: \begin{equation}\label{eq:continuousDiscreteDecomposition} \indt X \stackrel{\rm d}{=} \left\{X^I_t \vee X^N_t\right\}_{t\in T}, \end{equation} where \begin{equation}\label{eq:continuousDiscreteDecomposition2} X_t^I := \int^{\!\!\!\!\!\!\!e}_{S_I}f^I_t(s)M_\alpha(ds)\quad \mbox{ and }\quad X^N_t := \int^{\!\!\!\!\!\!\!e}_{S_N}f^N_t(s)M_\alpha(ds)\,, \end{equation} are two independent $\alpha$--Fr\'echet processes.
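Concretely, by \eqref{eq:xt1a1}, for all $x_i>0,\ t_i\in T,\ 1\le i\le n$, \[ \,\mathbb P\{ X_{t_1}\leq x_1,\dots,X_{t_n}\leq x_n \} = \exp{\Big\{}-\int_{S_I}\Big(\vee_{1\leq i\leq n} x_i^{-1} f^I_{t_i}\Big)^\alpha d{\lambda_{I,N}} {\Big\}}\, \exp{\Big\{}-\int_{S_N}\Big(\vee_{1\leq i\leq n} x_i^{-1} f^N_{t_i}\Big)^\alpha d{\lambda_{I,N}} {\Big\}}, \] which is the product of the corresponding finite--dimensional distributions of $X^I$ and $X^N$. This elementary factorization is what underlies both the independence of the two components and the equality in distribution in \eqref{eq:continuousDiscreteDecomposition}.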
The following result shows that the decomposition \eqref{eq:continuousDiscreteDecomposition} does not depend on the choice of the representation $\{f_t\}_{t\in T}$. \begin{Thm}\label{thm:continuousDiscreteDecomposition} Let $\indt X$ be an $\alpha$--Fr\'echet process with minimal representation of standardized support $\{f_t\}_{t\in T}\subset L_+^\alpha(S_{I,N},\lambda_{I,N})$. Then:\\ \itemnumber {i} The decomposition~\eqref{eq:continuousDiscreteDecomposition} is unique in distribution. \itemnumber {ii} The processes $X^I=\indt {X^I}$ and $X^N=\indt {X^N}$ are independent and they have standardized supports $S_I$ and $S_N$, respectively. \itemnumber {iii} The functions $\{f_t^I\}_{t\in T} \subset L_+^\alpha(S_I,\lambda_I)$ and $\{f_t^N\}_{t\in T} \subset L_+^\alpha(S_N,\lambda_N)$ provide minimal representations for the processes $X^I$ and $X^N$, respectively. \end{Thm} \begin{proof} To prove {\it (i)}, suppose $\indt g\subsetL^\alpha_+({S_{I,N}},{\lambda_{I,N}})$ is another minimal representation of $X$ with standardized support and consider the decomposition $\indt X\stackrel{\rm d}{=} \left\{Y^I_t\vee Y^N_t\right\}_{t\in T}$, where \[ Y_t^I := \int^{\!\!\!\!\!\!\!e}_{S_I}g^I _t(s)M_\alpha(ds)\quad \mbox{ and }\quad Y^N_t := \int^{\!\!\!\!\!\!\!e}_{S_N}g^N _t(s)M_\alpha(ds)\,,\forall t\in T\,. \] By Corollary~\ref{coro:uniqueness}, the relating max--linear isometry $U$ of $\indt f$ and $\indt g$ is such that for all $t\in T$, $U(f_t^I) = g_t^I$ and $U(f_t^N) = g_t^N$. Moreover, $U$ remains a max--linear isometry when restricted to $S_I$ and $S_N$, and hence \[ \indt {X^I} \stackrel{\rm d}{=} \indt {Y^I}\quad \mbox{ and }\quad\indt {X^N} \stackrel{\rm d}{=} \indt {Y^N}\,. \] The last two relations imply that the decomposition~\eqref{eq:continuousDiscreteDecomposition} does not depend on the choice of the representation. The components $\indt{X^{I}}$ and $\indt{X^{N}}$ are independent since they are defined by extremal integrals over two disjoint sets $S_I$ and $S_N$. The minimality of $\indt f$ implies the minimality of $\indt{f^I}$ and $\indt {f^N}$, restricted to $S_I$ and $S_N$, respectively. This completes the proof, since the supports $S_I$ and $S_N$ of $\indt{f^I}$ and $\indt {f^N}$ are standardized (Definition \ref{def:standardizedSupport}). \end{proof} The processes $\indt{X^{I}}$ and $\indt{X^{N}}$ in the Decomposition \eqref{eq:continuousDiscreteDecomposition} will be referred to as the {\it spectrally continuous} and {\it spectrally discrete} components of $X$, respectively. The next result clarifies further their structure. \begin{Coro}\label{coro:continuousDiscreteDecomposition} Let $\indt f$ and $\indt g$ be two minimal representations with standardized support of an $\alpha$--Fr\'echet process $\indt X$. Then, the relating max--linear isometry $U$ of these representations, has the form \begin{equation}\label{eq:ufti} Uf_t^ I = \left(\dfrac\lambda{\Phi_I}\lambda\right)^{1/\alpha}(f_t^ I\circ\Phi_I) = g_t^ I\quad\mbox{and}\quad Uf_t^ N = f_t^ N\circ\Phi_N = g_t^ N\,, \lambda\mbox{-a.e.}, \ \ \forall t\in T, \end{equation} where $\Phi_I$ is a point mapping from $S_I$ onto $S_I$ and $\Phi_N$ is a permutation of $S_N$ (a one-to-one mapping from $S_N$ onto $S_N$). \end{Coro} \noindent The proof is an immediate consequence of Relations~\eqref{eq:ufi} and~\eqref{eq:ufn} above. This result shows that the discrete component of an $\alpha$--Fr\'echet process has an interesting invariance property. 
Namely, suppose that $X$ has a non--trivial discrete component $X^N = \{X_t^N\}_{t\in T}$. By Corollary \ref{coro:continuousDiscreteDecomposition}, there exists a {\it unique} set of functions $t\mapsto \phi_t(i),\ i\in S_N,\ t\in T$, such that: {\it (i)} ${\rm supp}\{ \phi_t,\ t\in T\} \equiv S_N$, {\it (ii)} $\rho\{\phi_t,\ t\in T\} = {\cal B}_{S_N} \equiv 2^{S_N}$ and {\it (iii)} $\sum_{1\le i \le N} \phi_t(i)^\alpha <\infty$, for all $t\in T$ and $$ \{X_t^N\}_{t\in T} \stackrel{\rm d}{=} \bigvee_{i=1}^N \phi_t(i) Z_i, $$ where $Z_i,\ 1\le i\le N$ are independent standard $\alpha$--Fr\'echet random variables. The functions $t\mapsto \phi_t(i),\ 1\le i\le N$ do not depend on the particular representation of $X^N$. By analogy with the Karhunen--Lo\`eve decomposition of Gaussian processes (see e.g. p57 in~\cite{hida93gaussian}), we call the functions $t\mapsto \phi_t(i)$ the {\it discrete principal components} of $X$. \begin{Prop}\label{p:principal-comp} The finite or countable collection of functions $\{t\mapsto \phi_t(i),\ i \in S_N,\ t\in T\},\ N\in {\mathbb N}\cup\{\infty\}$ can be the discrete principal components of an $\alpha$--Fr\'echet process if and only if the representation $\{\phi_t\}_{t\in T}\subset L_+^\alpha(S_N,\lambda_N)$ is minimal. \end{Prop} \noindent The proof is trivial. We state this result to emphasize that not every collection of non--negative functions can serve as discrete principal components. The {\it minimality} constraint can be viewed as the counterpart of the {\it orthogonality} condition on the principal components in the Gaussian case. The following two examples illustrate typical spectrally discrete and spectrally continuous processes. \begin{Example} Let $Z_i,\ i\in {\mathbb N}$ be independent standard $\alpha$--Fr\'echet variables and let $g_t(i)\ge 0,\ t\in T$ be such that $\sum_{i\in {\mathbb N}} g_t^\alpha(i) <\infty$, for all $t\in T$. It is easy to see that the $\alpha$--Fr\'echet process $$ X_t := \bigvee_{i\in{\mathbb N}} g_t(i) Z_i \equiv \Eint{{\mathbb N}} g_t dM_\alpha,\ \ t\in T, $$ is {\it spectrally discrete}. That is, $X = \{X_t\}_{t\in T}$ has a trivial spectrally continuous component. Indeed, this follows from Theorem \ref{thm:relation} since the mapping $\Phi$ therein is onto, and thus the set $\Phi({\mathbb N}) = S_{I,N}$ is necessarily countable. \end{Example} \begin{Example} Consider the well--known $\alpha$--Fr\'echet {\it extremal process} ($\alpha>0$): \begin{equation}\label{rep:extremalProcess} \{X_t\}_{t\in\mathbb R_+} \stackrel{\rm d}{=} {\Big\{}\int^{\!\!\!\!\!\!\!e}_{\mathbb R_+}{\bf 1}_{(0,t]}(u) M_\alpha(du) {\Big\}}_{t\in \mathbb R_+}\,, \end{equation} where $M_\alpha$ has the Lebesgue control measure on $\mathbb R_+$. The process $X=\{X_t\}_{t\in\mathbb R_+}$ can be viewed as the max--stable counterpart to a sum--stable L\'evy process. This is because $X$ has \textit{independent max--increments}, i.e., for any $0=t_0<t_1<\dots<t_n$, \[ (X_{t_1},\dots,X_{t_n}) \stackrel{\rm d}{=} (\xi_1,\xi_1\vee\xi_2,\dots,\xi_1\vee\dots\vee\xi_n)\,, \] where $\xi_i = M_\alpha((t_{i-1},t_i])$,\ $1\le i\le n$. The representation in~\eqref{rep:extremalProcess} is minimal but its support is not standardized. Let $$ f_t(s) := s^{-1/\alpha} {\bf 1}_{(0,t]}(\log(1/s)),\ \ s\in (0,1), $$ and observe that $f_t \in L_+^\alpha((0,1),ds)$.
By using a change of variables one can show that $$ \{X_t\}_{t\in\mathbb R_+} \stackrel{\rm d}{=}{\Big\{} \int^{\!\!\!\!\!\!\!e}_{(0,1)} f_t(s) M_\alpha(ds){\Big\}}_{t\in \mathbb R_+}, $$ where the last representation is minimal and has standardized support. Thus, the $\alpha$--Fr\'echet extremal process $X$ is {\it spectrally continuous}. \end{Example} \subsection{Classification via co--spectral functions}\label{sec:cospectral} Here we present a characterization of $\alpha$--Fr\'echet processes based on a different point of view. Namely, instead of focusing on the spectral functions $s\mapsto f_t(s)$, we now consider the \textit{co--spectral functions} $t\mapsto f_t(s)$, which are functions of $t$, with $s$ fixed. To be able to handle the co--spectral functions, we suppose that $T$ is a {\it separable} metric space with respect to a metric $\rho_T$ and let ${\cal T}$ be its Borel $\sigma$--algebra. We say that the spectral representation $\{f_t(s)\}_{t\in T} \subset L_+^\alpha(S,\mu)$ is jointly measurable if the mapping $(t,s)\mapsto f_t(s)$ is measurable w.r.t.\ the product $\sigma$--algebra ${\cal T}\otimes{\cal S} := \sigma( {\cal T}\times {\cal S})$. The following result clarifies the connection between the joint measurability of the spectral functions $f_t(s)$ and the measurability of its corresponding $\alpha$--Fr\'echet process. \begin{Prop}\label{p:measurability} Let $(S,\mu)$ be a standard Lebesgue space and $M_\alpha$ ($\alpha>0$) be an $\alpha$--Fr\'echet random sup--measure on $S$ with control measure $\mu$. As above, let $(T,\rho_T)$ be a separable metric space. \\ \itemnumber i Let $X=\indt X$ have a spectral representation $\indt f\subset{L^\alpha_+(S,\mu)}$ as in~\eqref{rep:extremalRep}. Then, $X$ has a measurable modification if and only if $\{f_t(s)\}_{t\in T}$ has a jointly measurable modification, i.e., there exists a ${\cal T}\otimes{\cal B}_S-$measurable mapping $(s,t)\mapsto g_t(s)$, such that $f_t(s) = g_t(s)$ $\mu\mbox{-a.e.}$ for all $t\in T$. \itemnumber {ii} If an $\alpha$--Fr\'echet process $X=\{X_t\}_{t\in T}$ has a measurable modification, then it satisfies Condition S (see Definition~\ref{d:Cond-S}), and hence it has a representation as in \eqref{rep:extremalRep}. \end{Prop} \noindent The proof is given in Appendix. The above result shows that for a measurable $\alpha$--Fr\'echet process $X=\{X_t\}_{t\in T}$, one can always have a representation as in \eqref{rep:extremalRep}, with jointly measurable spectral representations. Conversely, any $X$ as in \eqref{rep:extremalRep} with measurable spectral functions has a measurable modification. Let now $\lambda$ be a $\sigma$--finite Borel measure on $T$. We will view each $f_\cdot(s)$ as an element of the classes $L^0_+(T,\calT,\lambda)$ of non--negative ${\cal T}$--measurable functions, identified with respect to equality $\lambda$--almost everywhere. Recall that a set ${\cal P}\subsetL^0_+(T,\calT,\lambda)$ is said to be a \textit{positive cone} in $L^0_+(T,\calT,\lambda)$, if $c{\cal P} \subset {\cal P}$ for all $c\ge 0$. Two cones ${\cal P}_1$ and ${\cal P}_2$ are \textit{disjoint} if ${\cal P}_1\cap{\cal P}_2 = \{0\}$. We propose a general strategy for classification of $\alpha$--Fr\'echet processes, based on any collection of disjoint positive cones ${\cal P}_j\subsetL^0_+(T,\calT,\lambda),\ 1\le j\le n$. 
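For instance (anticipating Example~\ref{sec:cons-diss} below), with $T={\mathbb R}$ and $\lambda$ the Lebesgue measure, one may take \[ {\cal P}_1 \mathrel{\mathop:}= {\Big\{}g\in L^0_+(T,\calT,\lambda)\,:\,\int_T g^\alpha d\lambda<\infty{\Big\}} \quad\mbox{ and }\quad {\cal P}_2 \mathrel{\mathop:}= \{0\}\cup{\Big(}L^0_+(T,\calT,\lambda)\setminus{\cal P}_1{\Big)}. \] Both are positive cones, since each is stable under multiplication by constants $c\ge 0$, and they are disjoint: ${\cal P}_1\cap{\cal P}_2 = \{0\}$.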
For any $\alpha$--Fr\'echet process $X = \{X_t\}_{t\in T}$ with a jointly measurable representation of full support $\{f_t(s)\}_{t\in T} \subset {L^\alpha_+(S,\mu)}$, we say that the representation has a {\it co--spectral decomposition} w.r.t.\ $\{{\cal P}_j\}_{1\leq j\leq n}$, if there exist measurable sets $S\topp j, 1\leq j\leq n$, such that \begin{equation}\label{eq:Sis} S\topp j \subset \{s\in S\, :\, f_.(s) \in {\cal P}_j \},\ \ 1\le j \le n\quad\mbox{ and }\quad \mu\Big(S\setminus\bigcup_{j=1}^nS\topp j\Big) = 0\,. \end{equation} The sets $S\topp j,1\leq j\leq n$ are modulo $\mu$ disjoint. Indeed, let $A:= \{ s\in S\, :\, f_\cdot(s) \equiv 0\}$ and note that $\mu(A) = 0$ by the fact that ${\rm{supp}}\{f_t,\ t\in T\} = S$ modulo $\mu$ and by Fubini's theorem. Since ${\cal P}_j \cap {\cal P}_k = \{0\}$, we have that $S\topp j\cap S\topp k \subset A$ for all $1\le j \not = k \le n$. That is, the space $S$ is partitioned into $n$ modulo $\mu$ disjoint components: \begin{equation}\label{e:S-co-spec} S = S\topp1 \cup \cdots \cup S\topp n\mod\mu,\ \mbox{ with }\ \mu(S\topp j \cap S\topp k) = 0,\ j\not = k. \end{equation} This yields the {\it decomposition}: \begin{equation}\label{e:X-co-spec} \indt X \stackrel{\rm d}{=} \bccbb{X_t^{(1)} \vee \cdots \vee X_t^{(n)}}_{t\in T}\,, \end{equation} with \[ X_t^{(j)} := \Eint{S\topp j} f_t (s)M_\alpha(ds), \ \ 1\le j\le n\,,\ \ \forall t\in T. \] Note that given a spectral representation $\indt f\subset L^\alpha_+{(S,\mu)}$, the co--spectral decomposition is defined modulo $\mu$--null sets and the induced decomposition is invariant w.r.t.\ the versions of the decomposition. Namely, if there is another co--spectral decomposition w.r.t.~$\{{\cal P}_j\}_{1\leq j\leq n}$, say $S = \bigcup_{1\leq j\leq n}\widetilde S\topp j\mod\mu$, then from~\eqref{eq:Sis} and the disjointness of $\{{\cal P}_j\}_{1\leq j\leq n}$, it follows that $\mu(\widetilde S\topp j\,\Delta\, S\topp j) = 0,\ 1\leq j\leq n$. This yields the same decomposition~\eqref{e:S-co-spec}. Moreover, the decomposition is invariant w.r.t.\ the choice of spectral representation. \begin{Thm}\label{thm:cospectralDecomp} Suppose $\{{\cal P}_j\}_{1\leq j\leq n}$ are disjoint positive cones in $L^0_+(T,\calT,\lambda)$. For any $\alpha$--Fr\'echet process $\indt X$ with measurable spectral representation $\indt f\subset L^\alpha_+{(S,\mu)}$, suppose $\indt f$ has a co--spectral decomposition w.r.t.\ $\{{\cal P}_j\}_{1\leq j\leq n}$. Then, \itemnumber i the decomposition~\eqref{e:X-co-spec} is unique in distribution. \itemnumber {ii} the components $\indt {X\topp j}, 1\leq j\leq n$ are independent $\alpha$--Fr\'echet processes. \end{Thm} The proof is given in the Appendix. In the special case when $n=1$, Theorem~\ref{thm:cospectralDecomp} yields the following: \begin{Coro}\label{coro:cospectral} Let $X=\indt X$ be an $\alpha$--Fr\'echet process with two jointly measurable representations $\{f_t^{(i)}(s)\}_{t\in T} \subset L^\alpha_+(S_i,\mu_i)$, $i=1,2$. Consider a positive cone ${\cal P}\subset L^0_+(T,\calT,\lambda)$. If $f_\cdot^{(1)}(s) \in {\cal P}$, for $\mu_1$--almost all $s\in S_1$, then $f_\cdot^{(2)}(s) \in {\cal P}$, for $\mu_2$--almost all $s\in S_2$. \end{Coro} Corollary \ref{coro:cospectral} can be used to distinguish between various $\alpha$--Fr\'echet processes in terms of their co--spectral functions. For example, any measurable representation of the $\alpha$--Fr\'echet {\it extremal process} in \eqref{rep:extremalProcess} should involve only simple indicator--type co--spectral functions with a single jump up from zero.
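To spell this out, recall from \eqref{rep:extremalProcess} that the co--spectral functions of the canonical representation of the extremal process are $t\mapsto {\bf 1}_{(0,t]}(u) = {\bf 1}_{[u,\infty)}(t)$, $u>0$. With $T={\mathbb R}_+$ and $\lambda$ the Lebesgue measure, these functions all belong to the positive cone \[ {\cal P} \mathrel{\mathop:}= {\big\{} c\,{\bf 1}_{[u,\infty)}(\cdot)\, :\, c\ge 0,\ u>0 {\big\}}\subset L^0_+(T,\calT,\lambda), \] and hence, by Corollary~\ref{coro:cospectral}, for any other jointly measurable representation $\{g_t(s)\}_{t\in{\mathbb R}_+}\subset L^\alpha_+(S,\mu)$ of this process, the co--spectral function $t\mapsto g_t(s)$ belongs to ${\cal P}$ for $\mu$--almost every $s\in S$.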
The next result shows another application of Corollary~\ref{coro:cospectral}. \begin{Coro} Consider the moving maxima $\alpha$--Fr\'echet random fields: $$ \{X_t\}_{t\in{\mathbb R}^d} \stackrel{\rm d}{=} {\Big\{}\eint{{\mathbb R}^d} f(t-s) M_\alpha(ds){\Big\}}_{t\in{\mathbb R}^d} \ \ \mbox{ and } \ \ \{Y_t\}_{t\in {\mathbb R}^d} \stackrel{\rm d}{=} {\Big\{} \eint{{\mathbb R}^d} g(t-s)M_\alpha(ds){\Big\}}_{t\in{\mathbb R}^d}, $$ with $d\in{\mathbb N}$, where $f$ and $g$ belong to $L_+^\alpha({\mathbb R}^d,\lambda)$. Here $M_\alpha$ is an $\alpha$--Fr\'echet random sup--measure on ${\mathbb R}^d$ with the Lebesgue control measure. We have $\{X_t\}_{t\in {\mathbb R}^d} \stackrel{\rm d}{=} \{Y_t\}_{t\in {\mathbb R}^d}$, if and only if $g(x) = f(x+\tau)$, for almost all $x\in {\mathbb R}^d$ and some fixed $\tau\in {\mathbb R}^d$. \end{Coro} \begin{proof} The `if' part is trivial. To prove the `only if' part, introduce the cone ${\cal P}_f =\{ cf(\cdot + \tau),\ c\ge 0,\ \tau\in{\mathbb R}^d \}$. Corollary \ref{coro:cospectral} implies that $g(\cdot) \in {\cal P}_f$, and hence $g(x)= c f(x + \tau)$ for some $c\ge 0$ and $\tau\in{\mathbb R}^d$. Since $$ \int_{{\mathbb R}^d} g^\alpha (x) dx = \|Y_0\|_\alpha^\alpha = \|X_0\|_\alpha^\alpha = \int_{{\mathbb R}^d} f^\alpha(x) dx, $$ it follows that $c=1$. This completes the proof. \end{proof} Theorem~\ref{thm:cospectralDecomp} is a general result in the sense that the cones $\{{\cal P}_j\}_{1\le j \le n}$ may be associated with various properties of the co--spectral functions $t\mapsto f_t(s)$ of the process $X$. If $T\equiv {\mathbb R}^d,\ d\ge 1$, for example, one can consider the cones of co--spectral functions that are: {\it differentiable}, {\it continuous}, {\it integrable}, or {\it $\beta$--H\"older continuous}. Every choice of cones leads to different types of classifications for measurable $\alpha$--Fr\'echet processes or fields $X=\{X_t\}_{t\in T}$. We conclude this section by giving two important examples of classifications, motivated by existing results in the literature on sum--stable processes. \begin{Rem} Note that, instead of~\eqref{e:S-co-spec}, one may want to define $S\topp j\mathrel{\mathop:}=\{s:f_\cdot(s)\in{\cal P}_j\},1\leq j\leq n$. However, for certain cones, the $S\topp j$'s defined in this way may not be measurable. See Example~\ref{sec:pos-null}. \end{Rem} \begin{Example}[\sc Conservative--dissipative decomposition]\label{sec:cons-diss} Let $X = \{X_t\}_{t\in T}$ be an $\alpha$--Fr\'echet process with a measurable representation $\{f_t(s)\}_{t\in T}\subset L_+^\alpha(S,\mu)$. Consider the following partition of the set $S = C\cup D$ with \begin{equation} C \mathrel{\mathop:}= {\Big\{}s: s\in S\,,\int_Tf_t^\alpha(s)\lambda(dt) = \infty {\Big\}}\label{decomp:C,D}\ \ \ \mbox{ and } \ \ \ D \mathrel{\mathop:}= {\Big\{}s: s\in S\,,\int_Tf_t^\alpha(s)\lambda(dt) < \infty {\Big\}}\,. \end{equation} Note that $C$ and $D = S\setminus C$ are both ${\cal S}$--measurable since $f_t(s)$ is jointly measurable. Observe that this partition of $S$ yields the decomposition: \begin{equation}\label{decomp:CD} \indt X\stackrel{\rm d}{=} \left\{X^C_t\vee X^D_t\right\}_{t\in T}\,, \end{equation} where $X^C = \indt{X^C}$ and $X^D = \indt{X^D}$ are defined as: \begin{equation}\label{decomp:CD2} X_t^C = \int^{\!\!\!\!\!\!\!e}_Cf_tdM_\alpha\ \quad \mbox{ and }\ \quad X_t^D = \int^{\!\!\!\!\!\!\!e}_Df_tdM_\alpha,\ \forall t\in T. \end{equation} Here $M_\alpha$ is an $\alpha$--Fr\'echet random sup-measure with control measure $\mu$. The decomposition in \eqref{decomp:CD} corresponds to the general decomposition in \eqref{e:X-co-spec}.
Indeed, the co--spectral functions of the component $X^D$ belong to the positive cone of {\it integrable} functions, while those of $X^C$ belong to the cone of non--integrable functions. By Theorem~\ref{thm:cospectralDecomp}, the decomposition \eqref{decomp:CD} does not depend on the choice of the representation. The components $X^C$ and $X^D$ of $X$ are independent and they are called the {\it conservative} and {\it dissipative} parts of $X$, respectively. The Decomposition~\eqref{decomp:CD} is referred to as the {\it conservative--dissipative} decomposition. \end{Example} \begin{Example}[\sc Positive--null decomposition]\label{sec:pos-null} Following Samorodnitsky \cite{samorodnitsky05null}, consider $T = \mathbb R$ or $\mathbb Z$. Introduce the class ${\cal W}$ of {\it positive} weight functions $w:T\to{\mathbb R}_+$: \begin{equation}\label{e:W-def} {\cal W} := {\Big\{}w: \int_{T} w(t) \lambda(dt) = \infty,\ w(t)\mbox{ and } w(-t)\mbox{ are non--decreasing on $T\cap (0,\infty)$}{\Big\}}. \end{equation} Now we consider the cone $$ {\cal P}_{\rm pos} :=\Big\{ f \in L_+^0(T,\lambda)\, :\, \int_T w(t) f_t^\alpha \lambda(dt) =\infty, \mbox{ for all } w\in {\cal W} \Big\} $$ and its complement cone ${\cal P}_{\rm null} := \{0\}\cup (L_+^0 (T,\lambda) \setminus {\cal P}_{\rm pos})$. This choice of cones yields the decomposition \begin{equation}\label{e:pos-null} \{X_t\}_{t\in T} \stackrel{\rm d}{=}\{X_t^{\rm pos} \vee X_t^{\rm null}\}_{t\in T}, \end{equation} where \begin{equation}\label{e:pos-null-1} X_t^{\rm pos} := \Eint{P} f_t(s) M_\alpha(ds)\ \ \mbox{ and } \ \ X_t^{\rm null} := \Eint{N} f_t(s) M_\alpha(ds),\ \forall t\in T\,, \end{equation} with $P$ and $N$, measurable subsets of $S$, satisfying $\mu(P\cap N) = 0$, $\mu(S\setminus (P\cup N)) = 0$ and \begin{equation}\label{e:P-N} f_\cdot(s)\in{\cal P}_{\rm pos},\forall s\in P\quad\mbox{ and }\quad f_\cdot(s)\in {\cal P}_{\rm null}, \forall s\in N\,. \end{equation} The components $X^{\rm pos}=\{X_t^{\rm pos}\}_{t\in T}$ and $X^{\rm null}=\{X_t^{\rm null}\}_{t\in T}$ in \eqref{e:pos-null-1} are said to be the {\it positive} and {\it null} components of the process $X$, respectively. By Theorem~\ref{thm:cospectralDecomp}, Decomposition~\eqref{e:pos-null} does not depend on the choice of the measurable representation $\{f_t(s)\}_{t\in T} \subset L_+^\alpha(S,\mu)$. It is referred to as the {\it positive--null} decomposition. Note that, a technical difference between this example and Example~\ref{sec:cons-diss} is that the set $\widetilde P\mathrel{\mathop:}=\{s:f_\cdot(s)\in{\cal P}_{\rm pos}\}$ may not be measurable, even when $f_t(s)$ is jointly measurable. \end{Example} In the following section, we will study the above decompositions in more detail, for the case of stationary max--stable processes. \section{Classification of Stationary $\alpha$--Fr\'echet Processes}\label{sec:stationary} In this section, we focus on stationary, measurable max--stable processes $X = \{X_t\}_{t\in T}$, where $T={\mathbb R}$ or $T={\mathbb Z}$ is equipped with the Lebesgue or the counting measure $\lambda$, respectively. In this case, the process $X$ can be associated with a non--singular flow. Therefore, as in the symmetric $\alpha$--stable case, the ergodic theoretic properties of the flow yield illuminating structural results. \subsection{Non--singular flows associated with max--stable processes} \label{sec:flows} Following Rosi\'nski \cite{rosinski95structure} (see also Appendix A in~\cite{pipiras04stable}), we recall some notions from ergodic theory. 
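Before giving the formal definitions, we mention a guiding example, included here only for illustration: the translation flow $\phi_t(s) \mathrel{\mathop:}= s+t$ on $(S,\mu) = ({\mathbb R},{\cal B}_{\mathbb R},{\rm Leb})$. It satisfies conditions (i) and (ii) of Definition~\ref{def:flow} below, the map $(t,s)\mapsto s+t$ is jointly measurable, and each $\phi_t$ preserves the Lebesgue measure, so the flow is measurable and, in particular, non--singular. For $f_0\in L^\alpha_+({\mathbb R},{\rm Leb})$, the functions $f_t \mathrel{\mathop:}= f_0\circ\phi_t = f_0(\cdot+t)$ are as in \eqref{rep:flowRep1} below (the Radon--Nikodym factor there is identically one), and the corresponding process $X_t = \eint{{\mathbb R}}f_0(s+t)M_\alpha(ds)$ is stationary; after the change of variables $s\mapsto -s$ it is seen to have the same finite--dimensional distributions as a moving maxima process with kernel $f_0$.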
\begin{Def}\label{def:flow} A family of functions $\phi=\{\phi_t\}_{t\in T}$, $\phi_t:S\to S$ for all $t\in T$, is a flow on $(S,{\cal B},\mu)$ if\\ \itemnumber i $\phi_{t_1+t_2}(s) = \phi_{t_2}(\phi_{t_1}(s))\,, \forall t_1,t_2\in T\,,s\in S$.\\ \itemnumber {ii} $\phi_0(s) = s\,, \forall s\in S$.\\ A flow $\phi$ is said to be \textit{measurable} if $\phi_t(s)$ is a measurable map from $T\times S$ to $S$. A flow $\phi$ is said to be \textit{non--singular} if $\mu(\phi_t^{-1}(A)) = 0 \Leftrightarrow \mu(A) = 0, \forall A\in {\cal B}\,,t\in T$. \end{Def} \noindent The next result relates the spectral functions of stationary $\alpha$--Fr\'echet processes to flows. \begin{Thm}\label{thm:flow} Let $\{X_t\}_{t\in T}$ be a stationary $\alpha$--Fr\'echet process. Suppose that $X$ has a measurable representation $\{f_t\}_{t\in T}\subset L^\alpha_+({S_{I,N}},{\lambda_{I,N}})$, which is minimal, with standardized support. Then, there exists a unique (modulo ${\lambda_{I,N}}$) non--singular and measurable flow $\{\phi_t\}_{t\in T}$ such that for each $t\in T$, \begin{equation}\label{rep:flowRepSS1} f_t(s) = \left(\dfrac{\lambda_{I,N}}{\phi_t}{\lambda_{I,N}}\right)^{1/\alpha}(s)(f_0\circ\phi_t)(s)\,,\quad {\lambda_{I,N}}\mbox{-a.e.}\,. \end{equation} \end{Thm} Theorem~\ref{thm:flow} is stronger than Theorem~6.1 in~\cite{dehaan86stationary}, where the measurability is not considered and the flow structure is not explicitly explored. The proof is given in Appendix~\ref{sec:proofStationary}. For the readers familiar with Rosi\'nski's work~\cite{rosinski95structure}, this result is similar to Theorem 3.1 therein. In view of this result, we will say that a stationary $\alpha$--Fr\'echet measurable process $\indt X$ is {\it generated} by the non--singular measurable flow $\indt\phi$ on $(S,\mu)$ if it has a spectral representation $\{f_t\}_{t\in T}\subset L_+^\alpha(S,\mu)$, where: \begin{equation}\label{rep:flowRep1} f_t = \left(\dfrac\mu{\phi_t}\mu\right)^{1/\alpha}(f_0\circ\phi_t),\quad\mu\mbox{-a.e.}, \end{equation} and \begin{equation}\label{rep:flowRep2} {\rm{supp}}\{f_0\circ\phi_t:t\in T\} = S,\quad\mu\mbox{-a.e.} \end{equation} Note that in the representations~\eqref{rep:flowRep1} and~\eqref{rep:flowRep2}, we do not assume $\indt f$ to be minimal. However, minimality plays a crucial role in the proof of the existence of flow representations in Theorem~\ref{thm:flow}. \begin{Def}\label{def:equivalenceOfFlows} We say two measurable non--singular flows $\indt{\phi\topp1}$ and $\indt{\phi\topp2}$ on $(S_i,\mu_i), i = 1,2$, are equivalent, written $\indt{\phi\topp1}\sim^\Phi \indt{\phi\topp2}$, if there exists a measurable map $\Phi:S_2\to S_1$ such that: \itemnumber {i} There exist $N_i\subset S_i$ with $\mu_i(N_i) = 0$, $i=1,2$, such that $\Phi$ is a Borel isomorphism between $S_2\setminus N_2$ and $S_1\setminus N_1$.\\ \itemnumber {ii} $\mu_1$ and $\mu_2\circ\Phi^{-1}$ are mutually absolutely continuous.\\ \itemnumber {iii} $\phi_t\topp1\circ\Phi = \Phi\circ\phi_t\topp2$, $\mu_2\mbox{-a.e.}$, for each $t\in T$. \end{Def} The next result shows the connection between different flows generating the same stationary $\alpha$--Fr\'echet process $\indt X$. The proof is given in Appendix~\ref{sec:proofStationary}. \begin{Prop}\label{prop:equiFlow} Let $\indt X$ be a measurable stationary $\alpha$--Fr\'echet process.\\ \itemnumber i Suppose $\indt{\phi\topp 1}$ is a flow on $(S_1,\mu_1)$ and $\indt X$ is generated by $\indt{\phi\topp 1}$ with spectral function $f_0\topp1\in L^\alpha_+(S_1,\mu_1)$.
If $\indt{\phi\topp2}$ is another flow on $(S_2,\mu_2)$ and it is equivalent to $\indt{\phi\topp1}$ via $\Phi$, then $\indt X$ can also be generated by $\indt{\phi\topp2}$ with the spectral function \begin{equation}\label{eq:equivalence2} f_0\topp2(s) = \left(\dfrac{\mu_1}{\Phi}{\mu_2}(s)\right)^{1/\alpha}\left(f_0\topp1\circ\Phi\right)(s)\,. \end{equation} Moreover, if $\indt {f\topp1}$ is minimal, then $\indt {f\topp2}$ is minimal.\\ \itemnumber {ii} If $\indt X$ has two measurable minimal representations generated by flows $\indt{\phi\topp i}$ on $(S_i,\mu_i)$ for $i=1,2$, then $\indt{\phi\topp1}\sim^\Phi\indt{\phi\topp2}$ and~\eqref{eq:equivalence2} holds, for some $\Phi$ satisfying conditions in Definition~\ref{def:equivalenceOfFlows}. \end{Prop} \begin{Rem} Not all flow representations are minimal. Proposition \ref{prop:equiFlow} shows, however, that any two flows corresponding to minimal representations of the same $\alpha$--Fr\'echet process are equivalent. \end{Rem} \subsection{Decompositions induced by non--singular flows}\label{sec:decompositions} The decompositions introduced in Examples \ref{sec:cons-diss} and \ref{sec:pos-null} are motivated by corresponding notions from ergodic theory. \begin{Def}\label{def:wandering} Consider a measure space $(S,\mu)$ and a measurable, non--singular map $\phi : S \to S$. A measurable set $B\subset S$ is said to be: \itemnumber i {\it wandering}: if $\phi^{-n} (B),\ n=0,1,2,\cdots$ are disjoint. \itemnumber {ii} {\it weakly wandering}: if $\phi^{-n_k} (B),\ n_k\in {\mathbb N}$ are disjoint, for an infinite sequence $0=n_0 < n_1 <\cdots$. \end{Def} Now we give two decompositions for max--stable processes. Their counterparts for sum--stable processes have been thoroughly studied (see~\cite{rosinski95structure} and~\cite{samorodnitsky05null}).\medskip {\sc Hopf (conservative--dissipative) decomposition.} The map $\phi$ is said to be {\it conservative} if there is no {\it wandering} measurable set $B\subset S$, with positive measure $\mu(B)>0$. One can show that for any measurable, non--singular map $\phi:S\to S$, there exists a partition of $S$ into two disjoint measurable sets $S = C\cup D$, $C\cap D = \emptyset$ such that: {\it (i)} $C$ and $D$ are $\phi-$invariant; {\it (ii)} $\phi:C\to C$ is conservative and $D = \cup_{k\in {\mathbb Z}} \phi^k(B),$ for some wandering set $B\subset S$. This decomposition is unique (mod $\mu$) and is called the {\it Hopf decomposition} of $S$ with respect to $\phi$. If the component $C$ is trivial, i.e.\ $\mu(C)=0$, then $\phi$ is said to be {\it dissipative}. The restrictions $\phi:C\to C$ and $\phi:D\to D$ are the {\it conservative} and {\it dissipative} components of the mapping $\phi$, respectively. Now, given a jointly measurable, non--singular flow $(t,s)\mapsto \phi_t(s),\ t\in T,\ s\in S$, one can consider the Hopf decompositions $S=C_t\cup D_t$ for each $\phi_t,\ t\in T\setminus\{0\}$. By the measurability however, it follows that $\mu(C_t \Delta C) = \mu(D_t \Delta D) = 0$, for some $C\cap D = \emptyset$, $S = C\cup D$ (see e.g.\ \cite{rosinski95structure,Krengel85ergodic}). One thus obtains that any measurable non--singular flow $\{\phi_t\}_{t\in T}$ has a Hopf decomposition $S= C\cup D$, where $\phi^C:= \{\phi_t\vert_C\}_{t\in T}$ and $\phi^D:= \{\phi_t\vert_D\}_{t\in T}$ are {\it conservative} and {\it dissipative} flows, respectively. The following result is an immediate consequence from the proofs of Theorem 4.1 and Corollary 4.2 in Rosi\'nski \cite{rosinski95structure}. 
\begin{Thm}\label{thm:cons-diss} Let $X=\{X_t\}_{t\in T}$ be a stationary $\alpha$--Fr\'echet process with measurable representation $\{f_t(s)\}_{t\in T}\subset L_+^\alpha(S,\mu)$ of full support. Then: \itemnumber {i} $X$ is generated by a conservative flow, if and only if, $$ \int_T f^\alpha_t(s) \lambda(dt) = \infty,\ \ \mbox{ for $\mu$--almost all $s\in S$; } $$ \itemnumber {ii} $X$ is generated by a dissipative flow, if and only if, $$ \int_T f^\alpha_t(s) \lambda(dt) < \infty,\ \ \mbox{ for $\mu$--almost all $s\in S$. } $$ \itemnumber {iii} If $X$ is generated by a conservative (dissipative) flow in one representation, then so is the case for any other measurable representation of $X$. \end{Thm} This result justifies the terminology in the {\it conservative--dissipative} decomposition of Example~\ref{sec:cons-diss}. In particular, the sets $C$ and $D$ in \eqref{decomp:CD} correspond precisely to the conservative and dissipative parts in the Hopf decomposition of the flow $\{\phi_t\}_{t\in T}$ associated with the process $X$. \medskip {\sc Positive--null decomposition.} Recall the notion of {\it weakly wandering} set (Definition \ref{def:wandering}). If one replaces `wandering' by `weakly wandering' in the Hopf decomposition, one obtains the so--called {\it positive--null decomposition} of $S$. Alternatively, the map $\phi$ is said to be {\it positive}, if there exists a finite measure $\nu \sim \mu$, such that $\phi$ is $\nu$--invariant. In this case, there are no weakly wandering sets $B$ of positive $\mu$--measure (or equivalently, $\nu$--measure). For any non--singular map $\phi$, there exists a partition $S = P\cup N$, unique modulo $\mu$, such that $P$ and $N$ are disjoint, measurable and $\phi$--invariant. Furthermore, $\phi:P\to P$ is positive, and $N = \cup_{k\ge 0} \phi^{-n_k}(B)$, for some disjoint $\phi^{-n_k}(B)$'s, where $B$ is weakly wandering. The set $N$ ($P$ resp.) is called the null--recurrent (positive--recurrent) part of $S$, w.r.t. the map $\phi$ (see e.g.\ Section 1.4 in \cite{aaronson97introduction}). As in the case of the Hopf decomposition, a jointly measurable, non--singular flow $\{\phi_t\}_{t\in T}$ gives rise to a {\it positive--null decomposition}: $S = P\cup N,$ where $\mu(P_t\Delta P) = \mu(N_t\Delta N) = 0$, for all $t\in T\setminus\{0\}$, and where $S = P_t \cup N_t$ is the positive--null decomposition of the map $\phi_t,\ t\in T\setminus\{0\}$ (see e.g.\ \cite{samorodnitsky05null,Krengel85ergodic}). Theorem 2.1 of Samorodnitsky \cite{samorodnitsky05null} about symmetric $\alpha$--stable processes applies {\it mutatis mutandis} to the max--stable case: \begin{Thm}\label{thm:pos-null} Let $X=\{X_t\}_{t\in T}$ be a stationary $\alpha$--Fr\'echet process with measurable representation $\{f_t(s)\}_{t\in T}\subset L_+^\alpha(S,\mu)$ of full support. Then: \\ \itemnumber {i} $X$ is generated by a positive flow, if and only if, for all $w\in {\cal W}$, $$ \int_T w(t) f^\alpha_t(s) \lambda(dt) = \infty,\ \ \mbox{ for $\mu$--almost all $s\in S$, } $$ where ${\cal W}$ is as in \eqref{e:W-def}.\\ \itemnumber {ii} $X$ is generated by a null flow, if and only if, for some $w\in {\cal W}$, $$ \int_T w(t) f^\alpha_t(s) \lambda(dt) < \infty,\ \ \mbox{ for $\mu$--almost all $s\in S$. } $$ \itemnumber {iii} If $X$ is generated by a positive (null) flow in one representation, then so is the case for any other measurable representation of $X$. 
\end{Thm} As in the Hopf decomposition, Theorem \ref{thm:pos-null} shows that the components $X^{\rm pos}$ and $X^{\rm null}$ in the decomposition \eqref{e:pos-null} are generated by positive-- and null--recurrent flows, respectively. This is because the sets $P$ and $N$ in \eqref{e:P-N} yield the positive--null decomposition of a flow $\{\phi_t\}_{t\in T}$ associated with $X$. \subsection{Structural results, examples and open questions} Here, we collect some structural results and observations on the interplay between the three types of classifications of max--stable processes discussed above, namely, {\it (i)} continuous--discrete, {\it (ii)} conservative--dissipative, and {\it (iii)} positive--null. Theorems~\ref{thm:cons-diss} and~\ref{thm:pos-null} imply that the {\it positive} component of a max--stable process is {\it conservative} and the {\it dissipative} one is {\it null--recurrent}. Thus, for a measurable stationary $\alpha$--Fr\'echet process $\indt X$, we have the decomposition: \begin{equation}\label{decomp:flow} \indt X \stackrel{\rm d}{=} \bccbb{X^{\rm pos}_t \vee X^{C,\rm null}_t \vee X^D_t}_{t\in T}\,, \end{equation} where $X^{C}_t = X^{\rm pos}_t \vee X^{C,\rm null}_t$ and $X^{\rm null}_t = X^{C,\rm null}_t\vee X^{D}_t$, $t\in T$. Here $X^{\rm pos},\ X^{C,\rm null}$ and $X^D$ are independent $\alpha$--Fr\'echet processes. $X^{\rm pos}$ is positive--recurrent and conservative, $X^D$ is dissipative and null--recurrent, and $X^{C,\rm null}$ is conservative and null--recurrent. We will see that $X^D$ is precisely a mixed moving maxima. Moreover, we show that the spectrally discrete component has no conservative--null component $X^{C,\rm null}$. The following theorem shows that the {\it purely dissipative} stationary $\alpha$--Fr\'echet processes are precisely the \textit{mixed moving maxima}. \begin{Thm}\label{thm:mmm} Let $\{X_t\}_{t\in T}$ be a measurable stationary $\alpha$--Fr\'echet process. This process is generated by a dissipative flow if and only if there exist a Borel space $W$, a $\sigma$-finite measure $\nu$ on $W$ and a function $g\in L^\alpha_+(W\times T,\nu\otimes\lambda)$ such that \[ \{X_t\}_{t\in T} \stackrel{\rm d}{=} \left\{\int^{\!\!\!\!\!\!\!e}_{W\times T}g(x,t+u)M_\alpha(dx,du)\right\}_{t\in T}\,. \] Here $M_\alpha$ is an $\alpha$--Fr\'echet random sup-measure on $W\times T$ with the control measure $\nu\otimes\lambda$ and $\lambda$ is the Lebesgue measure if $T=\mathbb R$ and the counting measure if $T=\mathbb Z$. Moreover, one can always choose $(W,\nu)$ and $g$ such that the representation $g_t(x,u) \mathrel{\mathop:}= g(x,t+u)$ is minimal. \end{Thm} \begin{proof} Since $g\in L^\alpha_+(W\times T,\nu\otimes\lambda)$, Fubini's theorem implies $\int_{T} g(x,t+u)^\alpha \lambda(dt)<\infty$, for almost all $(x,u)\in W\times T$. This, in view of \eqref{decomp:C,D}, implies that $X$ is dissipative. The `only if' part follows as in the proof of Theorem 4.4 in Rosi\'nski \cite{rosinski95structure} from the results of Krengel \cite{krengel69}. \comment{Namely, for every dissipative flow $\indt\phi$ on $(S, \mu)$, there exists a finite standard Lebesgue space $(W,\nu)$ such that the flow $\indt\phi$ is null isomorphic to a flow $\indt\beta$ defined on $(W\times T,\nu\otimes \lambda)$ by \[ \beta_t (x,u) = (x,t+u)\,,\quad(x,u)\in W\times T\,,t\in T\,. \] That is, there exists a non--singular invertible map $\Phi:W\times T\to S$ such that $\Phi\circ\beta_t = \phi_t\circ\Phi\,,\nu\mbox{-a.e.}$ for all $t\in T$.
Since $d[(\nu\otimes\lambda)\circ\beta_t]/d(\nu\otimes\lambda) = 1$ for every $t\in T$, $\indt\phi$ and $\indt\beta$ are equivalent and by Proposition~\ref{prop:equiFlow} we obtain the desired result. In effect, let $\indt f$ be the spectral function in flow representation~\eqref{rep:flowRep1}. Define \[ g_0 = \left(\dfrac{\mu}{\Phi}{(\nu\otimes\lambda)}\right)^{1/\alpha}(f_0\circ\Phi)\,,\quad g_t = g_0\circ\beta_t\,. \] Then, by Proposition \ref{prop:equiFlow}, it follows that $\indt X$ has the representation \[ \indt X \stackrel{\rm d}{=} \left\{\int^{\!\!\!\!\!\!\!e}_{W\times T}g_0\circ\beta_t(x,u) M_\alpha(dx,du)\right\}_{t\in T}\,, \] and if $\indt{f}$ is minimal, then so is $\indt g$.} \end{proof} \begin{Rem} Theorem \ref{thm:mmm} parallels the fact that the class of stationary and dissipative symmetric $\alpha$--stable processes is precisely the class of mixed moving averages (see Theorem 4.4 in \cite{rosinski95structure}). Recently, Kabluchko \cite{kabluchko08spectral} established the same result as in Theorem \ref{thm:mmm} by using an interesting {\it association device} between $\alpha$--Fr\'echet ($\alpha\in (0,2)$) and symmetric $\alpha$--stable processes. \end{Rem} As shown in \cite{stoev08ergodicity}, the mixed moving maxima processes are mixing and hence ergodic. Thus, Theorem \ref{thm:mmm} implies that the dissipative component of a max--stable process is mixing. On the other hand, Samorodnitsky \cite{samorodnitsky05null} has shown (Theorem 3.1 therein) that stationary symmetric $\alpha$--stable processes are ergodic {\it if and only if} they are generated by a null--recurrent flow. Kabluchko \cite{kabluchko08spectral} (Theorem 8 therein) has shown that this continues to be the case for stationary $\alpha$--Fr\'echet processes. The previous discussion shows that the ergodic and mixing properties of the null and dissipative components are in line with the decomposition $X_t^{\rm null} = X_t^{D}\vee X_t^{C,\rm null},\ t\in T$. An example of conservative--null flow can be found in \cite{samorodnitsky05null}. This yields non--trivial examples of sum-- and max--stable processes that are conservative and null. We are not aware, however, of an example of an ergodic max--stable process that is not mixing. The next two results clarify the structure of the stationary {\it spectrally discrete} processes in discrete $(T={\mathbb Z})$ and continuous ($T={\mathbb R}$) time, respectively. We first show that for {\it spectrally discrete} stationary max--stable time series, the conservative--dissipative and positive--null decompositions coincide. That is, such processes have no {\it conservative--null} components. Moreover, the {\it dissipative} (equivalently {\it null--recurrent}) component does not exist if the time series has only {\it finite} number of principal components. \begin{Prop} \label{p:disc-cons} Let $X = \{X_t\}_{t\in T}$, with $T={\mathbb Z}$ be a stationary $\alpha$--Fr\'echet process (time series). \medskip \itemnumber {i} $X^N$ has no conservative--null component, i.e.\ $X^{N,C,\rm null} = 0$.\\ \itemnumber {ii} If $1\leq N<\infty$, then $X^N$ is necessarily conservative, and equivalently, positive recurrent. \end{Prop} \begin{proof} Without loss of generality, suppose $X = X^N$ and let $\{f_t(s)\}_{t\in T} \subset L_+^\alpha(S_N,\lambda_N)$ be a minimal representation with standardized support for $X$. 
We have that $$ f_t = \left(\dfrac{\lambda_N}{\phi_t}{\lambda_N} \right)^{1/\alpha} f_0\circ\phi_t, $$ where $\phi_t:S_N\to S_N$ is a non--singular flow on $(S_N,\lambda_N)$. Since $S_N\subset\mathbb N$ and $\lambda_N$ is the counting measure, the non--singular transformations are necessarily measure--preserving, i.e., permutations. Thus the term $d(\lambda_N\circ \phi_t)/d\lambda_N \equiv 1$ and $f_t(s) = f_0\circ\phi_t(s)$. We start by proving {\it (ii)}. Since $\phi_1:\{1,\cdots,N\} \to \{1,\cdots,N\}$ is a permutation, it has a finite invariant measure, and hence the flow $\indt\phi$ is positive--recurrent and therefore conservative. Now we prove {\it (i)}. Note that when $1\leq N<\infty$, we have shown in {\it (ii)} that $X^N$ is conservative and positive--recurrent. For $N = \infty$, we consider two cases. First we suppose that for every $s\in S_N$, the recurrence time \begin{equation}\label{eq:taus} \tau_s\mathrel{\mathop:}=\inf \{t>0:\phi_t(s) = s\} \end{equation} is finite. Let $\mathfrak O(s)$ denote the orbit of the state $s$ w.r.t.\ the flow $\indt\phi$, i.e., $\mathfrak O(s)\mathrel{\mathop:}=\{\phi_t(s):t\in T\}$. Every orbit of $\indt\phi$ is $\tau_s$--periodic, i.e., $|\mathfrak O(s)| <\infty$. Since $N = \infty$, the total number of different orbits must be infinite. Enumerate all the orbits by $\mathfrak O_1,\mathfrak O_2,\cdots$, so that $\mathfrak O(s) = \mathfrak O_{\pi(s)}$ with $\pi: S_N\to\mathbb N$ and $S_N = \bigcup_{k\in\mathbb N}\mathfrak O_k$. Observe that the orbits are disjoint. We now define a finite invariant measure on $S_N$, equivalent to the counting measure: \[ \widetilde\lambda(\{s\})\mathrel{\mathop:}= 2^{-\pi(s)}\frac1{|\mathfrak O_{\pi(s)}|}\,,\forall s\in S_N\,. \] This measure is clearly invariant on each $\mathfrak O_k$, for all $k\in\mathbb N$. Since $\widetilde\lambda(\mathfrak O_k) = 2^{-k}$, the measure $\widetilde \lambda$ is finite and it is clearly equivalent to the counting measure. Thus, $X^N$ is positive and conservative. On the other hand, suppose that there exists a state $s$ with $\tau_s=\infty$. Then, its orbit is infinite and non--recurrent, i.e., $|\mathfrak O(s)| = \infty$. In this case, the flow $\indt\phi$ is both null--recurrent and dissipative on $\mathfrak O(s)$. Indeed, the null recurrence follows from the fact that there is no positive finite invariant measure on $\mathfrak O(s)$. The dissipativity follows from the remark that $\mathfrak O(s) = \bigcup_{j\in\mathbb Z}\phi_j(s)$ is a disjoint union. We have thus shown that $\indt\phi$ is dissipative and null--recurrent on non--recurrent orbits. \end{proof} The following result shows that the {\it continuous--time} stationary, measurable and {\it spectrally discrete} max--stable processes are trivial. \begin{Thm}\label{thm:continuousTime} Let $X = \{X_t\}_{t\in T}$, with $T = {\mathbb R}$ be a stationary and measurable $\alpha$--Fr\'echet process. If $N\geq 1$, then necessarily $N = 1$. That is, the spectrally discrete component $X^N$ is the random constant process: $\{X^N_t\}_{t\in {\mathbb R}} \stackrel{\rm d}{=} \{Z\}_{t\in{\mathbb R}}$, for some $\alpha$--Fr\'echet variable $Z$. \end{Thm} \begin{proof} Let $\indt f$ and $\indt\phi$ be as in Proposition~\ref{p:disc-cons}. Observe moreover that, in this case, the $\phi_t$'s are measure--preserving bijections, and in view of Theorem \ref{thm:flow}, the flow $\{\phi_t(s)\}$ is measurable. For any {\it fixed} $s\in S_N$, consider $\tau_s$ defined in~\eqref{eq:taus}. The proof consists of three steps.
\itemnumber i {\it We show first that $\tau_s = 0$ implies $\phi_t(s) \equiv s$, for all $t\in\mathbb R$.} Indeed, suppose that $\tau_s = 0$ and note that, by definition, for all $n>0$, there exist $0<t_{n,1}<t_{n,2}<1/n$ such that $\phi_{t_{n,1}}(s) = \phi_{t_{n,2}}(s) = s$. Set $T_0\mathrel{\mathop:}=\bigcup_{n\in\mathbb N}\bigcup_{k\in\mathbb Z}\{t_{n,1}+k(t_{n,2}-t_{n,1})\}$. It follows that $T_0$ is dense in $\mathbb R$ and $\phi_t(s) = s$, for all $t\in T_0$. Hence $f_t(s) = f_0\circ\phi_t(s) = f_0(s)$, for all $t\in T_0$. Now, we define a new $\alpha$--Fr\'echet process $Y = \indt Y$: \[ \indt Y \stackrel{\rm d}{=}\bccbb{\int^{\!\!\!\!\!\!\!e}_{S_N}{\bf 1}_{\{\cdot = s\}}\circ\phi_t(r) M_\alpha(dr)}_{t\in T}\,. \] Since $\indt \phi$ is a flow, $\phi_t$ is invertible, for any $t\in T_0$. Hence, for all $t\in T_0$, we have $\phi_t(r) = \phi_t(s)\equiv s$ if and only if $r = s$. This shows that, for all $t\in T_0$, $$ {\bf 1}_{\{\cdot = s\}}\circ\phi_t(r) \equiv {\bf 1}_{\{\phi_t(r) = s\}} = {\bf 1}_{\{r=s\}} \equiv {\bf 1}_{\{\cdot = s\}} \circ \phi_0(r), $$ which implies that $Y_t = Y_0,$ almost surely, $\forall t\in T_0$. Moreover, as $\indt \phi$ is measurable, so is $\indt Y$ by Proposition~\ref{p:measurability}. Also, $Y=\indt Y$ is stationary, since it is generated by a measure preserving flow. Thus, the stationarity and measurability of $Y$ imply that it is {\it continuous in probability} (see Theorem~3.1 in~\cite{stoev08ergodicity}). This, and the fact that $Y_t = Y_0$, a.s., for all $t$ in a dense subset $T_0$ of $\mathbb R$, imply that $Y_t = Y_0$, a.s., for all $t\in{\mathbb R}$. Therefore, for the spectral functions, we obtain ${\bf 1}_{\{\phi_t(r) = s\}} = {\bf 1}_{\{r = s\}}\,,\forall r\in S_N, t\in\mathbb R$. This shows that $\phi_t(s) = s,\forall t\in\mathbb R$. \itemnumber {ii} {\it We show next that $\tau_s>0$ implies $\phi_{\tau_s}(s) = s$.} Suppose that $\phi_{\tau_s}(s) \not = s$. Then, since the infimum in~\eqref{eq:taus} is not attained, there exist $t_1,t_2\in(\tau_s,\tau_s+\tau_s/2)$, $t_1\neq t_2$, such that $\phi_{t_1}(s) = \phi_{t_2}(s) = s$. But it follows that $\phi_{t_1+k(t_2-t_1)}(s) = s$ for all $k\in\mathbb Z$. This, since $\{t_1+k(t_2-t_1)\}_{k\in{\mathbb Z}} \cap (0,\tau_s) \not= \emptyset$, contradicts the definition of $\tau_s$. \itemnumber {iii} {\it Now, we show that it is impossible to have $\tau_s>0$ for all $s\in S_N$.} Suppose that this were the case and fix some $s_*\in S_N$. For $s\in S_N$, write $\mathfrak T_s = \{t:\phi_t(s_*) = s\}$. By {\it (ii)} above and the flow property, the set $\mathfrak T_{s_*}=\{t:\phi_t(s_*) = s_*\}$ is a subgroup of ${\mathbb R}$ whose smallest positive element is $\tau_{s_*}$, so that $\mathfrak T_{s_*} = \{ k\tau_{s_*}\}_{k\in{\mathbb Z}}$; moreover, every non--empty $\mathfrak T_s$ is a translate of $\mathfrak T_{s_*}$ and is therefore countable. Note also that $\bigcup_{s\in S_N}\mathfrak T_s = \mathbb R$, since $\phi_t(s_*)\in S_N$ for every $t\in{\mathbb R}$. Thus ${\mathbb R}$ would be a countable union of countable sets and would therefore have the cardinality of $\mathbb N$, which is a contradiction. We now conclude the proof. By {\it (iii)} above, there must exist $s\in S_N$ such that $\tau_{s} = 0$. Set $\mathfrak R = \{s\in S_N:\phi_t(s) = s,\forall t\in\mathbb R\}$. We have already seen in {\it (i)} that $\tau_{s} = 0$ implies $\phi_t(s) \equiv s$, for all $t\in\mathbb R$, whence $\mathfrak R$ is $\phi$--invariant. Consider now a new $\alpha$--Fr\'echet process \[ \indt{Y}\stackrel{\rm d}{=}\bccbb{\int^{\!\!\!\!\!\!\!e}_{S_N\setminus\mathfrak R}f_t(r)M_\alpha(dr)}_{t\in T}\,. \] The $f_t$'s, restricted to the $\phi$--invariant set $S_N\setminus\mathfrak R$, yield a minimal representation for $Y= \indt{Y}$ with standardized support.
This process is generated by the same flow $\{\phi_t\}_{t\in{\mathbb R}}$, restricted to $S_N\setminus\mathfrak R$. Since $\tau_s>0,\ \forall s\in S_N\setminus\mathfrak R$, part {\it (iii)} applied to this restriction yields $S_N\setminus\mathfrak R = \emptyset$. On the other hand, since $\phi_t(s)\equiv s\,,\forall t\in\mathbb R, s\in\mathfrak R$, the minimality of $\indt f$ implies that $| \mathfrak R | = |S_N| =1$. Therefore, $\indt X \stackrel{\rm d}{=} \{Z\}_{t\in T}$ for some $\alpha$--Fr\'echet random variable $Z$. \end{proof} \begin{Example} In contrast with Proposition \ref{p:disc-cons} {\it (ii)}, the spectrally discrete component of a stationary $\alpha$--Fr\'echet time series may be dissipative if it involves an infinite number of principal components. Indeed, by Theorem \ref{thm:mmm}, the moving maxima $X_t:= \Eint{\mathbb Z} f(t+s) M_\alpha(ds) \equiv \bigvee_{i\in\mathbb Z} f(t+i) M_\alpha(\{i\})$, with $f\in L_+^\alpha({\mathbb Z})$, is dissipative and spectrally discrete, where $M_\alpha$ has the counting control measure on $\mathbb Z$. \end{Example} \begin{Example} Suppose that $(E,{\cal E},\mu)$ is a probability space, i.e.\ $\mu(E)=1$. Let $M_\alpha$ be an $\alpha$--Fr\'echet random sup--measure on $E$ with control measure $\mu$, which is defined on a {\it different} probability space. Suppose that $\{Y_t\}_{t\in T}$ is a positive stochastic process on $(E,{\cal E},\mu)$ such that ${\mathbb E}_\mu Y_t^\alpha <\infty,$ for all $t\in T$. Then, the $\alpha$--Fr\'echet process: \begin{equation}\label{rep:doublyStochastic} \indt X \stackrel{\rm d}{=} {\Big\{}\int^{\!\!\!\!\!\!\!e}_EY_t(u) M_\alpha(du){\Big\}}_{t\in T}\,, \end{equation} is said to be {\it doubly stochastic}. \end{Example} One can show that, in~\eqref{rep:doublyStochastic}, if $\indt {Y}$ is stationary, then so is $\indt X$. The Brown--Resnick processes discussed in the next section show that the converse is not always true. \section{Brown--Resnick Processes}\label{sec:BRp} Consider the following \textit{doubly stochastic process} (see e.g. \cite{kabluchko08stationary} and \cite{stoev08ergodicity}): \begin{equation}\label{rep:brownResnick} \indtr X \stackrel{\rm d}{=} \left\{\int^{\!\!\!\!\!\!\!e}_Ee^{W_t-\sigma^2_t/2}dM_1\right\}_{t\in \mathbb R}\,. \end{equation} Here $W_t$ is a zero--mean Gaussian process defined on the probability space $(E,{\cal E},\mu)$ with variance $\sigma^2_t$. Since $\,{\mathbb E}_\mu {\Big(}e^{W_t-\sigma_t^2/2} {\Big)} = 1<\infty$, the 1--Fr\'echet process in~\eqref{rep:brownResnick} is well--defined. The processes having representation~\eqref{rep:brownResnick} were first introduced by Brown and Resnick~\cite{brown77extreme} with $W_t$ being the standard Brownian motion. In general, we will call $\indtr X$ as in~\eqref{rep:brownResnick} a \textit{Brown--Resnick 1--Fr\'echet process}. Kabluchko {\it et al.} \cite{kabluchko08stationary} have shown that if $\indtr W$ has \textit{stationary increments}, then the Brown--Resnick process $\indtr X$ in~\eqref{rep:brownResnick} is stationary. The following interesting result about an arbitrary zero--mean Gaussian process with stationary increments and continuous paths is obtained by combining the results of \cite{kabluchko08stationary} and our Theorems \ref{thm:cons-diss} and \ref{thm:mmm} above. \begin{Thm}\label{thm:bRdissipative} Let $W = \indtr W$ be a Gaussian zero--mean process with stationary increments and continuous paths.
If \begin{equation}\label{eq:limwt} \lim_{|t|\to\infty}\left(W_t-\sigma_t^2/2\right) = -\infty,\ \mbox{ almost surely} \end{equation} then, \begin{equation}\label{eq:intewt} \int_{-\infty}^{\infty} e^{W_t-\sigma^2_t/2}dt <\infty,\ \mbox{ almost surely}, \end{equation} where $\sigma_t^2 = \,{\mathbb E} W_t^2 = {\rm{Var}}(W_t)$. \end{Thm} \begin{proof} Let $\indtr X$ be the Brown--Resnick process defined in~\eqref{rep:brownResnick}. Note that the process $\indtr {\log X}$ is also max--stable but it has Gumbel marginals. Kabluchko {\it et al.}\cite{kabluchko08stationary} have shown that $\indtr{\log X}$ is stationary and hence so is $\indtr X$. Moreover, by Theorem~13 in~\cite{kabluchko08stationary}, Condition~\eqref{eq:limwt} implies that $\indtr{\log X}$, or equivalently, $\indtr X$, has a mixed moving maxima representation. On the other hand, Theorem~\ref{thm:mmm} implies that any process with mixed moving maxima representation is dissipative. Dissipativity of $\indtr X$ is equivalent to~\eqref{eq:intewt} by Theorem~\ref{thm:cons-diss}. This completes the proof. \end{proof} \noindent The following question arises. \begin{Question} For what general classes of continuous--path, zero mean Gaussian processes $\{W_t\}_{t\in{\mathbb R}}$ with stationary increments, is the Brown--Resnick stationary process \eqref{rep:brownResnick} {\it purely dissipative}? \end{Question} \noindent The next result provides a {\it partial} answer to this question for the interesting case when $W=\{W_t\}_{t\in{\mathbb R}}$ is the fractional Brownian motion (fBm). Recall that the fBm is a zero--mean Gaussian processes with stationary increments, which is self--similar. The process $W$ is said to be self--similar with self--similarity parameter $H>0$, if for all $c>0$, we have that $\{W_{ct}\}_{t\in {\mathbb R}}\stackrel{{\rm d}}{=}\{c^H W_t\}_{t\in{\mathbb R}}$. The fBm necessarily has the covariance function \begin{equation}\label{eq:fBm} {\mathbb E} W_t W_s = \frac{\sigma^2}{2} {\Big(} |t|^{2H} + |s|^{2H} - |t-s|^{2H}{\Big)},\ \mbox{ with } t,\ s\in{\mathbb R}, \end{equation} where $0<H\le 1$ is the self--similarity parameter of $W$. The fractional Brownian motions have versions with continuous paths (see e.g.\ \cite{samorodnitsky94stable}). \begin{Prop} The stationary Brown--Resnick processes $X=\{X_t\}_{t\in{\mathbb R}}$ associated with the fractional Brownian motions $\{W_t\}_{t\in{\mathbb R}}$ in \eqref{eq:fBm} are purely dissipative and hence they have mixed moving maxima representations. \end{Prop} \begin{proof} Without loss of generality, we will suppose that the fBm $W$ has continuous paths. As indicated above, the stationarity of $X$ follows from the fact that $W$ has stationary increments (see Kabluchko et al.~\cite{kabluchko08stationary}). Now, by Theorem~\ref{thm:cons-diss}, $X$ is dissipative, if and only if \begin{equation}\label{eq:ebt} \int_{-\infty}^\infty \exp\left\{W_t-\sigma_t^2/2\right\}{\rm d} t <\infty,\ \mbox{ almost surely}. \end{equation} It is enough to focus on the integral $\int_{0}^\infty \exp\left\{W_t-\sigma_t^2/2\right\}{\rm d} t$. By the Law of the Iterated Logarithm for fractional Brownian motion (see Oodaira~\cite{oodaira72strassen}), we have \[ \limsup_{t\to\infty}W_t/\sqrt{2\sigma_t^2\log\log t} = 1,\ \mbox{ almost surely}. \] Hence, with probability one, for any $\delta>0$, there exists $T_1$ (possibly random) such that $\forall t>T_1$, we have $W_t<(1+\delta)\sqrt{2\sigma^2_t\log\log t}$ almost surely. 
Moreover, there exists $T_2$ sufficiently large (possibly random), such that $\forall t>T_2$, we have $$ (1+\delta)\sqrt{2\sigma^2_t\log\log t}< \sigma_t^2/4 \equiv \sigma^2 t^{2H}/4\,\mbox{ almost surely}, $$ where $H \in (0,1]$ is the self--similarity parameter of the fractional Brownian motion $W$. Now, let $T_0 = \max(T_1,T_2)$. It follows that \[ \int_{T_0}^\infty {\rm e}^{W_t-\sigma_t^2/2}{\rm d} t < \int_{T_0}^\infty {\rm e}^{(1+\delta)\sqrt{2\sigma^2_t\log\log t} - \sigma^2 t^{2H}/ 2}{\rm d} t \leq \int_{T_0}^\infty {\rm e}^{-\sigma^2 t^{2H}/4}{\rm d} t<\infty\,\mbox{ almost surely}, \] which implies \eqref{eq:ebt}, since $W_t$ is continuous with probability one. \end{proof} Observe that the above result continues to hold even in the degenerate case $H=1$. One then has that $W_t = tZ,\ t\in {\mathbb R}$, where $Z$ is a zero--mean Gaussian random variable. In this case, the corresponding Brown--Resnick process has a simple {\it moving maxima} representation. Indeed, for simplicity, let $\sigma^2 = {\rm Var}(Z) = 1$ and observe that $$ X_t := \Eint{E} e^{tZ(u) - t^2/2} M_1(du) = \Eint{E} e^{Z^2(u)/2} e^{- (t-Z(u))^2/2} M_1(du). $$ Note that the measure $\nu(A):= \int_E{\bf 1}_{\{Z(u)\in A\}} e^{Z^2(u)/2} \mu(du) \equiv \lambda(A)/\sqrt{2\pi}$ is, up to a constant factor, equal to the Lebesgue measure $\lambda$ on ${\mathbb R}$. Therefore, one can show that $$ \{X_t\}_{t\in {\mathbb R}} \stackrel{\rm d}{=} {\Big\{} \frac{1}{\sqrt{2\pi}} \Eint{\mathbb R} e^{-(t-z)^2/2} \widetilde M_1(dz){\Big\}}_{t\in {\mathbb R}}, $$ where $\widetilde M_1$ is a $1-$Fr\'echet random sup--measure with the Lebesgue control measure. This shows that $X$ in this simple case is merely a {\it moving maxima} rather than a {\it mixed moving maxima}. We have thus shown that the Brown--Resnick process~\eqref{rep:brownResnick} driven by fractional Brownian motion $\indt W$ is purely dissipative. Thus, by Theorem~\ref{thm:mmm} we have that $\indt X$ is a {\it mixed moving maxima}. It is not clear how one can prove this fact without the use of our classification results. In two recent papers~\cite{kabluchko08stationary,kabluchko08spectral}, Kabluchko and co--authors established similar classification results by using quite different methods based on Poisson point processes on abstract path--spaces. Their approach yields directly the moving--maxima representation (and hence dissipativity) of the Brown--Resnick type processes $X$ under the alternative Condition \eqref{eq:limwt}. This condition is only shown to be {\it sufficient} for dissipativity of $X$. Its relationship with our {\it necessary and sufficient condition} \eqref{eq:intewt} is a question of independent interest. The question raised in Kabluchko \cite{kabluchko08spectral} on whether there exist stationary Brown--Resnick processes $X$ of mixed type, i.e.\ with non--trivial dissipative and conservative components, still remains open. In view of our new necessary and sufficient condition \eqref{eq:intewt}, this question is {\it equivalent} to the following: \begin{Question} Is it true for Gaussian processes $W= \{W_t\}_{t\in{\mathbb R}}$ with stationary increments and continuous paths that $\mu{\{} \int_{-\infty}^{\infty} e^{W_t-\sigma^2_t/2}dt <\infty {\}} \in \{0,1\}$? \end{Question}
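The following remark is an editorial illustration, added here and not part of the original text; it records the simplest special case of the preceding discussion, using only classical facts about Brownian motion.

\begin{Rem} Let $W$ be the standard Brownian motion, so that $\sigma_t^2={\rm Var}(W_t)=|t|$. By the strong law of large numbers for Brownian motion, $W_t/|t|\to 0$ almost surely as $|t|\to\infty$, and hence \[ W_t-\sigma^2_t/2 = |t|\Big({W_t \over |t|}-{1\over 2}\Big)\longrightarrow-\infty,\ \mbox{ almost surely}. \] Thus Condition~\eqref{eq:limwt} holds, and Theorem~\ref{thm:bRdissipative} yields $\int_{-\infty}^{\infty}e^{W_t-\sigma^2_t/2}dt<\infty$ almost surely. In particular, the probability appearing in the above question equals one in this case, and the classical Brown--Resnick process of~\cite{brown77extreme} is purely dissipative, in agreement with the case $H=1/2$ of the proposition above. \end{Rem}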
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section*{1. Introduction} The Hamiltonian formulation of $n=3$ systems has been intensively studied in the last two decades. The works \cite{ber1}, \cite{ber2} on this subject give a very large class of solutions of the Jacobi equation for the Poisson matrix $J$. Recently, generalizing the solutions given in \cite{ber1}, we gave the most general solution of the Jacobi equation in ${\mathbb R}^3$ \cite{ay}. A matrix $J=(J^{ij})$, $i,j=1,2, \cdots,n$, defines a Poisson structure in ${\mathbb R}^n$ if it is skew-symmetric, $J^{ij}=-J^{ji}$, and its entries satisfy the Jacobi equation \begin{equation}\label{jacobi1} J^{li}\partial_l\,J^{jk}+J^{lj}\partial_l\,J^{ki}+J^{lk}\partial_l\,J^{ij}=0, \end{equation} where $i,j,k=1,2, \cdots,n$. Here we use the summation convention, meaning that repeated indices are summed up. We showed in \cite{ay} that the general solution of the above equation (\ref{jacobi1}) in the case $n=3$ has the form \begin{equation}\label{GenSol} J^{ij}=\mu \epsilon^{ijk}\partial_k \Psi,~~ i,j=1,2,3, \end{equation} where $\mu$ and $\Psi$ are arbitrary differentiable functions of $x^{i}, t$, $i=1,2,3$ and $\epsilon^{ijk}$ is the Levi-Civita symbol. Here $t$ should be considered as a parameter. In the same work we have also considered a bi-Hamiltonian representation of Hamiltonian systems. It turned out that any Hamiltonian system in ${\mathbb R}^3$ has a bi-Hamiltonian representation. In the present paper we prove that any $n$-dimensional dynamical system \begin{equation} \dot{\vec{x}}=\vec{X}(x^{1},x^{2},\ldots ,x^{n},t), \label{DinSystem} \end{equation} where $\vec{x}=(x^{1},x^{2},\ldots ,x^{n}),$ is Hamiltonian, that is, has the form \begin{equation} \dot{{x^{i}}}=J^{ij}\partial _{j}H,~~ i=1,2,\ldots ,n, \label{HamEqn} \end{equation} where $J=(J^{ij})$ is a Poisson matrix and $H$, as well as $J^{ij},$ are differentiable functions of the variables $x^{1},x^{2},\ldots ,x^{n},t$. Moreover, we show that the system (\ref{DinSystem}) is $(n-1)$-Hamiltonian. This problem in the case $n=3$ was considered in \cite{haas}, \cite{zhong}, where the authors start with an invariant of the dynamical system as a Hamiltonian and then proceed by writing the system in the form (\ref{HamEqn}) and imposing conditions on $J$ so that it satisfies the Jacobi equation. However, the proofs given in these works are, as it seems to us, incomplete and not satisfactory. Using (\ref{GenSol}) for the matrix $J$, we can write equation (\ref{HamEqn}) in ${\mathbb R}^3$ as \begin{equation}\label{CrossProdEqn} \dot{\vec{x}}=\mu \vec{\nabla} \Psi \times \vec{\nabla} H. \end{equation} Let $\vec X$ be a vector field in ${\mathbb R}^3$. If $H_1$ and $H_2$ are two functionally independent invariant functions of $\vec X$, i.e., $\vec{X}(H_\alpha)=X^j\partial_jH_{\alpha}=0, \quad \alpha=1,2$, then $\vec X$ is parallel to $\vec{\nabla} H_1\times \vec{\nabla} H_2$. Therefore \begin{equation}\label{CrossProdField} \vec{X}=\mu \vec{\nabla} H_1\times \vec{\nabla} H_2, \end{equation} where the function $\mu$ is a coefficient of proportionality. The right-hand side of equation (\ref{CrossProdField}) is of the same form as the right-hand side of equation (\ref{CrossProdEqn}), so $\vec{X}$ is a Hamiltonian vector field. We note that the equation which allows one to find the invariants of a vector field $\vec X$ is a first order linear partial differential equation. We remark here that dynamical systems in ${\mathbb R}^3$ differ from dynamical systems in ${\mathbb R}^n$ for $n>3$.
We know the general solution (\ref{GenSol}) of the Jacobi equation (\ref{jacobi1}) in ${\mathbb R}^3$. In ${\mathbb R}^n$, as we shall see in the last section, we know only the rank $2$ solutions of the Jacobi equation for all $n$. An important difference of our work, in contrast to other works on the subject, is that in the construction of the Poisson structures we take into account the invariant functions of the vector field $\vec{X}$ rather than the invariants (constants of motion) of the dynamical system. The total time derivative of a differentiable function $F$ in ${\mathbb R}^n$ along the phase trajectory is given by \begin{equation} {dF \over dt}={\partial F \over \partial t}+\vec{X} \cdot \vec{\nabla} F. \end{equation} An invariant function of the vector field $\vec{X}(x^{1},x^{2}, \ldots,x^{n}, t)$, i.e., a function $F$ with $\vec{X} \cdot \vec{\nabla} F=0$, is not necessarily an invariant function (constant of motion) of the dynamical system. For autonomous systems, where $\vec{X}=\vec{X}(x^{1},x^{2}, \ldots,x^{n})$, these invariant functions are the same. We give a representation of the vector field $\vec{X}$ in terms of its invariant functions. We show that all autonomous dynamical systems are super-integrable. A key role is played by the existence of $n-1$ functionally independent solutions $\zeta_{\alpha}(x^{1},x^{2}, \ldots,x^{n},t)$, $(\alpha=1,2,\cdots, n-1)$, of the linear partial differential equation \begin{equation}\label{lin1} \vec{X} \cdot \vec{\nabla} \zeta \equiv X^{1}\, {\partial \zeta \over \partial x^{1}}+X^{2}\, {\partial \zeta \over \partial x^{2}}+\cdots+X^{n}\, {\partial \zeta \over \partial x^{n}}=0, \end{equation} where $X^{i}=X^{i}(x^{1},x^{2},\ldots,x^{n},t)$, ~$i=1,2,\cdots,n$, are given functions (see \cite{olv}-\cite{sned}). For all $\alpha=1,2,\cdots, n-1$, $\vec{\nabla} \zeta_{\alpha}$ is perpendicular to the vector field $\vec{X}$. This leads to the construction of the rank 2 Poisson tensors for $n>3$: \begin{equation}\label{poysonn0} J_{\alpha}^{ij}=\mu \,\epsilon^{\alpha \alpha_{1} \alpha_{2} \cdots \alpha_{n-2}}\,\,\epsilon ^{ijj_{1}\cdots j_{n-2}}\,\partial _{j_{1}}\zeta_{\alpha_{1}}\,\partial _{j_{2}}\,\zeta_{\alpha_{2}}\cdots \partial _{j_{n-2}}\zeta_{\alpha_{n-2}}, \end{equation} where $i,j=1,2,\cdots ,n$, and $\alpha=1,2,\cdots, n-1$. Here $\epsilon ^{ijj_{1}\cdots j_{n-2}}$ and $\epsilon^{\alpha \alpha_{1} \alpha_{2} \cdots \alpha_{n-2}}$ are Levi-Civita symbols in $n$ and $n-1$ dimensions respectively. Any dynamical system with the vector field $\vec{X}$ possesses Poisson structures of the form given in (\ref{poysonn0}). Hence we can give a classification of dynamical systems in ${\mathbb R}^n$ with respect to the invariant functions of the vector field $\vec{X}$. There are mainly three classes; the super-integrable dynamical systems constitute the first class. By the use of the invariant functions of the vector field $\vec{X}(x^{1},x^{2}, \ldots,x^{n}, t)$, we give in general a Poisson structure in ${\mathbb R}^n$ which has rank 2. For autonomous systems, the form (\ref{poysonn0}) of the above Poisson structure was first given in the works \cite{raz} and \cite{nam}. Our results in this work are mainly local. This means that they are valid in an open domain of ${\mathbb R}^n$ where the Poisson structures are different from zero. In \cite{ay} we showed that the Poisson structure (\ref{GenSol}) in ${\mathbb R}^3$ preserves its form in the neighborhood of irregular points, lines and planes.
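The following verification is an editorial aside, added here and not contained in the original text; it only checks the easy converse direction of (\ref{GenSol}), namely that every matrix of this form satisfies the Jacobi equation, using the vector notation introduced in the next section. Writing $J^{ij}=\epsilon^{ijk}J_{k}$ with $\vec{J}=\mu\vec{\nabla}\Psi$, the Jacobi equation (\ref{jacobi1}) reduces (as shown in the next section) to $\vec{J}\cdot(\vec{\nabla}\times\vec{J})=0$. Since \[ \vec{\nabla}\times(\mu\vec{\nabla}\Psi)=\vec{\nabla}\mu\times\vec{\nabla}\Psi, \] which is orthogonal to $\vec{\nabla}\Psi$, we indeed get $\mu\,\vec{\nabla}\Psi\cdot(\vec{\nabla}\mu\times\vec{\nabla}\Psi)=0$, so that any $J$ of the form (\ref{GenSol}) defines a Poisson structure.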
In the next section we give new proofs of the formula (\ref{GenSol}) and prove that any dynamical system in ${\mathbb R}^3$ is Hamiltonian. Thus, following \cite{ay}, we show that any dynamical system in ${\mathbb R}^3$ is bi-Hamiltonian. Applications of these theorems to several dynamical systems are presented. Here we also show that the dynamical system given by Bender et al.\ \cite{ben} is bi-Hamiltonian. In section 3 we discuss Poisson structures in ${\mathbb R}^n$. We give a representation of the Poisson structure in ${\mathbb R}^n$ in terms of the invariant functions of the vector field $\vec{X}$. Such a representation leads to a classification of dynamical systems with respect to these functions. \section*{2. Dynamical Systems in ${\mathbb R}^3$} Although the proof of (\ref{GenSol}) was given in \cite{ay}, here we shall give two simpler proofs. The first one is a shorter proof than the one given in \cite{ay}. In the sequel we use the notations $x^{1}=x, x^2=y, x^3=z$. \vspace{0.3cm} \noindent {\bf Theorem 1.} {\it All Poisson structures in ${\mathbb R}^3$ have the form (\ref{GenSol}), i.e., $J^{ij}=\mu\, \epsilon^{ijk}\, \partial_{k}\, H_{0}$. Here $\mu$ and $H_{0}$ are some differentiable functions of $x^{i}$ and $t$, ($i=1,2,3$).} \vspace{0.3cm} \noindent {\bf Proof.} Any skew-symmetric second rank tensor in ${\mathbb R}^3$ can be written as \begin{equation}\label{j1} J^{ij}=\epsilon^{ijk} J_{k}, ~~ i,j=1,2,3, \end{equation} where $J_{1},J_{2}$ and $J_{3}$ are differentiable functions in ${\mathbb R}^3$ and we assume that there exists a domain $\Omega$ in ${\mathbb R}^3$ so that these functions do not vanish simultaneously. When (\ref{j1}) is inserted into the Jacobi equation (\ref{jacobi1}) we get \begin{equation}\label{j2} \vec{J} \cdot (\vec{\nabla} \times \vec{J})=0, \end{equation} where $\vec{J}=(J_{1},J_{2},J_{3})$ is a differentiable vector field in ${\mathbb R}^3$ not vanishing in $\Omega$. We call $\vec{J}$ the Poisson vector field. It is easy to show that (\ref{j2}) has a local scale invariance. Let $\vec{J}=\psi\, \vec{E}$, where $\psi$ is an arbitrary function. If $\vec{E}$ satisfies (\ref{j2}) then $\vec{J}$ satisfies the same equation. Hence it is enough to show that $\vec{E}$ is proportional to the gradient of a function. Using the freedom of local scale invariance we can take $\vec{E}=(u,v,1)$, where $u$ and $v$ are arbitrary functions in ${\mathbb R}^3$. Then (\ref{j2}) for the vector $\vec{E}$ reduces to \begin{equation}\label{j3} \partial_y u-\partial_x v-v \partial_z u+u \partial_z v=0, \end{equation} where $x,y,z $ are local coordinates. Letting $u={\partial_x f \over \rho}$ and $v={\partial_y f \over \rho}$, where $f$ and $\rho$ are functions of $x,y,z$, we get \begin{equation} \partial_x f\, \partial_y(\rho-\partial_z f)-\partial_y f\partial_x(\rho-\partial_z f)=0. \end{equation} The general solution of this equation is given by \begin{equation} \rho-\partial_z f=h(f,z), \end{equation} where $h$ is an arbitrary function of $f$ and $z$. Then the vector field $\vec{E}$ takes the form \begin{equation}\label{j4} \vec{E}={1 \over \partial_z f+h}\, (\partial_x f,\partial_y f,\partial_z f+h). \end{equation} Let $g(f,z)$ be a function satisfying $g_{,z}=h\partial_f g$. Here we note that $\partial_{z}\, g(f,z)={\partial g \over \partial f}\, {\partial_{z} f}+g_{,z}$ where $g_{,z}=\partial_{s}\,g(f(x,y,z),s)|_{s=z}$. Then (\ref{j4}) becomes \begin{equation} \vec{E}={1 \over (\partial_zf+h) \partial_f g}\, \vec{\nabla}\, g, \end{equation} which completes the proof.
Here $\partial_{f}\, g={\partial g \over \partial f}$. $\Box$ \vspace{0.3cm} \noindent The second proof is an indirect one which is given in \cite{sned} (Theorem 5 in this reference). \vspace{0.3cm} \noindent {\bf Definition 2}.\, {\it Let $\vec F$ be a vector field in ${\mathbb R}^3$. Then the equation $ \vec{F} \cdot d \vec{x}=0$ is called a Pfaffian differential equation. A Pfaffian differential equation is called integrable if the 1-form $\vec{F} \cdot d \vec{x}=\mu d H $, where $\mu$ and $H$ are some differentiable functions in ${\mathbb R}^3$.} \vspace{0.3cm} \noindent Let us now consider the Pfaffian differential equation with the Poisson vector field $\vec{J}$ in (\ref{j1}) \begin{equation} \vec{J} \cdot d\vec{x}=0. \end{equation} For such Pfaffian differential equations we have the following result (see \cite{sned}). \vspace{0.3cm} \noindent {\bf Theorem 3}. {\it A necessary and sufficient condition that the Pfaffian differential equation $\vec{J} \cdot d \vec{x}=0$ should be integrable is that $\vec{J} \cdot (\vec{\nabla} \times \vec{J})=0$.} \vspace{0.3cm} \noindent By (\ref{j2}), this theorem implies that $\vec{J}=\mu {\vec \nabla} \Psi$. A well known example of a dynamical system with Hamiltonian structure of the form (\ref{HamEqn}) is the Euler equations. \vspace{0.3cm} \noindent {\bf Example 1.} The Euler equations \cite{olv} are \begin{equation}\label{e} \begin{array}{lll} \dot{x}&=&\displaystyle{\frac{I_2-I_3}{I_2I_3}}yz,\\ \dot{y}&=&\displaystyle{\frac{I_3-I_1}{I_3I_1}}xz,\\ \dot{z}&=&\displaystyle{\frac{I_1-I_2}{I_1I_2}}xy,\\ \end{array} \end{equation} where $ I_1,I_2,I_3\in {\mathbb R}$ are some (non-vanishing) real constants. This system admits Hamiltonian representation of the form~(\ref{HamEqn}). The matrix $J$ can be defined in terms of functions $\Psi=H_{0}=-\frac{1}{2}(x^2+y^2+z^2)$ and $\mu=1$, and we take $H=H_{1}=\displaystyle{\frac{x^2}{2I_1}+\frac{y^2}{2I_2}+\frac{z^2}{2I_3}}$.\\ Writing the Poisson structure in the form (\ref{GenSol}) allows us to construct bi-Hamiltonian representations of a given Hamiltonian system. \vspace{0.3cm} \noindent {\bf Definition 4.} {Two Poisson structures $J_{0}$ and $J_{1}$ are compatible, if the sum $J_{0}+J_{1}$ defines also a Poisson structure.} \vspace{0.3cm} \noindent {\bf Lemma 5.} {\it Let $\mu, H_{0},$ and $H_{1}$ be arbitrary differentiable functions. Then the Poisson structures $J_{0}$ and $J_{1}$ given by $J_{0}^{ij}=\mu\epsilon^{ijk}\partial_k \,H_{0}$ and $J_{1}^{ij}=-\mu \epsilon^{ijk}\partial_k \,H_{1}$ are compatible.} \vspace{0.3cm} \noindent This suggests that all Poisson structures in ${\mathbb R}^3$ have compatible companions. Such compatible Poisson structures can be used to construct bi-Hamiltonian systems (for Hamiltonian and bi-Hamiltonian systems see \cite{olv},\cite{Blz} and the references therein). \vspace{0.3cm} \noindent {\bf Definition 6.} {\it A Hamiltonian equation is said to be bi-Hamiltonian if it admits compatible Poisson structures $J_{0}$ and $J_{1}$ with the corresponding Hamiltonian functions $H_{1}$ and $H_{0}$ respectively, such that \begin{equation} \frac{dx}{dt}=J_{0}\nabla H_{1}=J_{1}\nabla\,H_{0}. 
\end{equation} } \vspace{0.3cm} \noindent {\bf Lemma 7.} {\it Let $J_{0}$ be given by (\ref{GenSol}), i.e., $ J_{0}^{ij}=\mu \epsilon ^{ijk}\partial _{k}\,H_{0},$ and let $H_{1}$ be any differentiable function. Then the Hamiltonian equation \begin{equation} \frac{dx}{dt}=J_{0}\nabla H_{1}=J_{1}\nabla\,H_{0} =\mu\, {\vec{\nabla} H_{1}} \times {\vec{\nabla} H_{0}}, \end{equation} is bi-Hamiltonian with the second Poisson structure given by $J_{1}$ with entries $J_{1}^{ij}=-\mu \epsilon^{ijk}\partial_k H_{1}$ and the second Hamiltonian $H_{0}$.} \vspace{0.3cm} Let us prove that any dynamical system in ${\mathbb R}^3$ has a Hamiltonian form. \vspace{0.3cm} \noindent {\bf Theorem 8}.\,{\it All dynamical systems in ${\mathbb R}^3$ are Hamiltonian. This means that any vector field $\vec{X}$ in ${\mathbb R}^3$ is a Hamiltonian vector field. Furthermore, all dynamical systems in ${\mathbb R}^3$ are bi-Hamiltonian.} \vspace{0.3cm} \noindent {\bf Proof}. Let $\zeta$ be an invariant function of the vector field $\vec{X}$, i.e., $X(\zeta) \equiv \vec{X} \cdot \vec{\nabla} \zeta=0$. This gives a first order linear partial differential equation in ${\mathbb R}^3$ for $\zeta$. For a given vector field $\vec{X}=(f,g,h)$ this equation becomes \begin{equation}\label{first} f(x,y,z,t)\, {\partial \zeta \over \partial x}+ g(x,y,z,t)\, {\partial \zeta \over \partial y}+ h(x,y,z,t)\, {\partial \zeta \over \partial z}=0, \end{equation} where $x, y, z$ are local coordinates. From the theory of first order linear partial differential equations \cite{olv}, \cite{olv1}, \cite{sned}, the general solution of this partial differential equation can be determined from the following set of equations \begin{equation}\label{syst} {dx \over f(x,y,z,t)}={dy \over g(x,y,z,t)}={dz \over h(x,y,z,t)}. \end{equation} There exist two functionally independent solutions $\zeta_{1}$ and $\zeta_{2}$ of (\ref{syst}) in an open domain $D \subset {\mathbb R}^3$, and the general solution of (\ref{first}) will be an arbitrary function of $\zeta_{1} $ and $\zeta_{2}$, i.e., $\zeta=F(\zeta_{1}, \zeta_{2})$. This implies that the vector field $\vec{X}$ will be orthogonal to both $\vec{\nabla} \zeta_{1}$ and $\vec{\nabla} \zeta_{2}$. Then $\vec{X}=\mu \, (\vec{\nabla} \zeta_{1}) \times (\vec{\nabla} \zeta_{2})$. Hence the vector field $\vec{X}$ is Hamiltonian by (\ref{CrossProdEqn}). $\Box$ \vspace{0.3cm} \noindent This theorem also gives an algorithm for finding the Poisson structures, i.e., the functions $H_{0}$, $H_{1}$ and $\mu$, of a given dynamical system. The functions $H_{0}$ and $H_{1}$ are the invariant functions of the vector field $\vec{X}$, which can be determined by solving the system of equations (\ref{syst}), and $\mu$ is determined from \begin{equation}\label{mu} \mu={\vec{X} \cdot \vec{X} \over \vec{X} \cdot (\vec{\nabla}\, H_{0} \times \vec{\nabla}\, H_{1})}. \end{equation} Note that $\mu$ can also be determined from \begin{eqnarray}\label{mu1} \mu&=&{X^{1} \over \partial_{2} H_{0} \partial_{3} H_{1}-\partial_{3} H_{0} \partial_{2} H_{1}}\\ &=& {X^{2} \over \partial_{3} H_{0}\, \partial_{1} H_{1}-\partial_{1} H_{0}\, \partial_{3} H_{1}} \nonumber\\ &=& {X^{3} \over \partial_{1} H_{0} \partial_{2} H_{1}-\partial_{2} H_{0} \partial_{1}H_{1}}. \nonumber \end{eqnarray} \vspace{0.3cm} \noindent {\bf Example 2.} As an application of the method described above we consider the Kermack--McKendrick system \begin{equation} \begin{array}{lll} \dot{x}&=&-rxy,\\ \dot{y}&=&rxy-ay,\\ \dot{z}&=&ay,\\ \end{array} \end{equation} where $r,a\in {\mathbb R}$ are constants.
Let us put the system into Hamiltonian form. For the Kermack--McKendrick system, equations (\ref{syst}) become \begin{equation}\label{syst1} {dx \over -r xy}={dy \over rxy-a y}={dz \over ay}. \end{equation} Here $a$ and $r$ may depend on $t$ in general. Adding the numerators and denominators of (\ref{syst1}) we get \begin{equation} {dx \over -r xy}={dx+dy+dz \over 0}. \end{equation} Hence $H_{1}=x+y+z$ is one of the invariant functions of the vector field. Using the first and last terms in (\ref{syst1}) we get \begin{equation} {dx \over -r x}={dz \over a}, \end{equation} which gives $H_{0}=r\,z+a\ln x$ as the second invariant function of the vector field $\vec{X}$. Using (\ref{mu}) we get $\mu=xy$. Since $\vec X=\mu \vec{\nabla} H_0 \times \vec{\nabla} H_1$, the system admits a Hamiltonian representation where the Poisson structure $J$ is given by (\ref{GenSol}) with $\mu=xy$, $\Psi=H_{0}=rz+a\ln x$, and the Hamiltonian is $H_{1}=x+y+z$. \vspace{0.3cm} \noindent {\bf Example 3.} The dynamical system is given by \begin{equation} \begin{array}{lll} \dot{x}&=&yz (1+2x^2\, N/D),\\ \dot{y}&=&-2xz (1-y^2 N/D),\\ \dot{z}&=&xy(1+2 z^2 N/D),\\ \end{array} \end{equation} where $N=x^2+y^2+z^2-1,~D=x^2 y^2+y^2 z^2+4 x^2 z^2$. This example was obtained by Bender et al.\ \cite{ben} by complexifying the Euler system in Example 1. They claim that this system is not Hamiltonian, apparently bearing in mind the more classical definition of a Hamiltonian system. Using Definition 6, we show that this system is not only Hamiltonian but also bi-Hamiltonian. We obtain that \begin{equation} H_{0}={(N+1)^2 \over D}\,N,~~ H_{1}={x^2-z^2 \over D}(2y^2 z^2+4 x^2 z^2 +y^4+2x^2 y^2-y^2). \end{equation} Here \begin{equation} \mu={D^2 \over 4[3D^2+D\, P+Q]}, \end{equation} where \begin{eqnarray} P&=&-2x^4+4y^4-4x^2 y^2+x^2-2y^2-4y^2 z^2+14 z^4+z^2, \nonumber \\ Q&=&-2x^8+12x^6 z^2+2 x^6-20 x^4 z^4 -6x^4 z^2-52 x^2 z^6-6 x^2 z^4 \nonumber\\ && +y^8-y^6+4y^4 z^4-16 y^2 z^6 -2 z^8 +2 z^6. \end{eqnarray} Indeed, these invariant functions were given in \cite{ben} as the functions $A$ and $B$. The reason why Bender et al.\ \cite{ben} concluded that the system in Example 3 is non-Hamiltonian is that the vector field $\vec{X}$ has nonzero divergence. It follows from $\vec{X}=\mu \vec{\nabla}H_{0}\times \vec{\nabla}H_{1}$ that $\vec{\nabla}\cdot \left( {\frac{1}{\mu }}\,\vec{X}\right) =0$. When $\mu $ is not a constant, the corresponding Hamiltonian vector field in general has a nonzero divergence. \vspace{0.3cm} \noindent {\bf Remark 1}.\,\thinspace\ With respect to the time dependency of the invariant functions of the vector field $\vec{X}$, dynamical systems in ${\Bbb R}^{3}$ can be split into three classes. \vspace{0.3cm} \noindent {\bf Class A}.\, Both invariant functions $H_{0}$ and $H_{1}$ of the vector field $\vec{X}$ do not depend on time explicitly. In this case both $H_{0}$ and $H_{1}$ are also invariant functions of the dynamical system. Hence the system is super-integrable. All autonomous dynamical systems, such as the Euler equations (Example 1) and the Kermack--McKendrick system (Example 2), belong to this class. \vspace{0.3cm} \noindent {\bf Class B}.\, One of the invariant functions $H_{0}$ and $H_{1}$ of the vector field $\vec{X}$ depends on $t$ explicitly. Hence the other one is also an invariant function of the dynamical system. When $I_{1}, I_{2}$ and $I_{3}$ in Example 1 are time dependent, the Euler system becomes a member of this class.
In this case $H_{0}$ is the Hamiltonian function and $H_{1}$ is the function defining the Poisson structure. Similarly, in Example 2 we may consider the parameters $a$ and $r$ as time dependent. Then Kermac-Mckendric system becomes also a member of this class. \vspace{0.3cm} \noindent {\bf Class C}.\, Both $H_{0}$ and $H_{1}$ are explicit functions of time variable $t$ but they are not the invariants of the system. There may be invariants of the dynamical system. Let $F$ be such an invariant. Then \begin{equation} {dF \over dt} \equiv {\partial F \over \partial t}+\{F,H_{1}\}_{0}={\partial F \over \partial t}+\{F,H_{0}\}_{1}=0, \end{equation} where for any $F$ and $G$ \begin{equation} \{F,G\}_{\alpha} \equiv J_{\alpha}^{ij}\, \partial_{i} \,F \partial_{j}\, G,~~~ \alpha=0,1. \end{equation} \vspace{0.3cm} \section*{3. Poisson structures in ${\mathbb R}^n$} Let us consider the dynamical system \begin{equation} {\frac{dx^{i}}{dt}}=X^{i}(x^{1},x^{2},\cdots ,x^{n},t),~~i=1,2,\cdots ,n. \label{dyn3} \end{equation} \bigskip \noindent {\bf Theorem 9. }{\it All dynamical systems in }${\Bbb R}^{n}$ {\it are Hamiltonian. Furthermore all dynamical systems in }${\Bbb R}^{n}$ {\it are} $(n-1)${\it -Hamiltonian.} \noindent {\bf Proof.} Extending the proof of Theorem 8 to ${\Bbb R} ^{n}$ consider the linear partial differential equation (\ref{lin1}). There exist $n-1$ functionally independent solutions $H_{\alpha}, (\alpha=1,2, \cdots, n-1)$ of this equation (which are invariant functions of the vector field $\vec{X}$) \cite{olv}-\cite{sned}. Since $\vec{X}$ is orthogonal to the vectors $\vec{\nabla}% H_{\alpha },~(\alpha =1,2,\cdots ,n-1),$ we have \begin{equation} \vec{X}= \mu \left| \begin{array}{rrrrrr} \vec{e_{1}}& \vec{e_{2}} & \cdot& \cdot& \cdot &\vec{e_{n}}\\ \partial_{1} H_{1}&\partial_{2} H_{1}& \cdot&\cdot&\cdot&\partial_{n} H_{1} \\ \cdot& \cdot&\cdot&\cdot&\cdot&\cdot\\ \cdot& \cdot&\cdot&\cdot&\cdot&\cdot\\ \partial_{1} H_{n-1}&\partial_{2} H_{n-1}& \cdot&\cdot&\cdot&\partial_{n} H_{n-1} \end{array} \right|, \end{equation} where the function $\mu$ is a coefficient of proportionality and $\vec{e_{i}}$ is $n$-dimensional unit vector with the $i$th coordinate $1$ and remaining coordinates $0$. Therefore \begin{equation} X^{i}=\mu \epsilon ^{ij_{1}j_{2}\cdots j_{n-1}}\partial _{j_{1}}H_{1}\partial _{j_{2}}H_{2}\cdots \partial _{j_{n-1}}H_{n-1}. \end{equation} Hence all dynamical systems (\ref{dyn3}) have the Hamiltonian representation \begin{equation}\label{sysn} {\frac{dx^{i}}{dt}}=J_{\alpha}^{ij}\partial _{j}H_{\alpha},~~i=1,2,\cdots ,n, \label{dyn4}~~(\mbox{no sum on}~~ \alpha) \end{equation} with \begin{equation}\label{poysonn} J_{\alpha}^{ij}=\mu \epsilon^{\alpha \alpha_{1} \alpha_{2} \cdots \alpha_{n-2}}\,\epsilon ^{ijj_{1}\cdots j_{n-2}}\,\partial _{j_{1}}H_{\alpha_{1}}\,\partial _{j_{2}}\,H_{\alpha_{2}} \cdots \partial _{j_{n-2}}H_{\alpha_{n-2}}, \end{equation} where $i,j=1,2,\cdots ,n$, $\alpha=1,2,\cdots,n-1$. Here $\epsilon ^{ijj_{1}\cdots j_{n-2}}$ and $\epsilon^{\alpha \alpha_{1} \alpha_{2} \cdots \alpha_{n-2}}$ are Levi-Civita symbols in $n$ and $n-1$ dimensions respectively. The function $\mu$ can be determined, for example, from \begin{equation} \mu={X^{1} \over \left| \begin{array}{rrrrr} \partial_{2} H_{1}& \cdot&\cdot&\cdot&\partial_{n} H_{1} \\ \cdot& \cdot&\cdot&\cdot&\cdot\\ \cdot& \cdot&\cdot&\cdot&\cdot\\ \partial_{2} H_{n-1}& \cdot&\cdot&\cdot&\partial_{n} H_{n-1} \end{array} \right|. 
} \end{equation} It can be seen that the matrix $J_{\alpha}$ with the entries $J_{\alpha}^{ij}$ given by (\ref{poysonn}) defines a Poisson structure in ${\mathbb R}^n$ and, since \begin{equation} J_{\alpha} \cdot \nabla H_{\beta}=0, ~~\alpha, \beta=1,2, \cdots, n-1, \end{equation} with $\beta \ne \alpha$, the rank of the matrix $J_{\alpha}$ equals 2 (for all $\alpha=1,2,\cdots, n-1$). In (\ref{sysn}) we can take any of $H_{1}, H_{2}, \cdots, H_{n-1}$ as the Hamiltonian function and use the remaining $H_{k}$'s in (\ref{poysonn}). We observe that all dynamical systems (\ref{dyn3}) in ${\Bbb R}^{n}$ have $n-1$ different Poisson structures of the form given by (\ref{poysonn}). The same system may have a Poisson structure with a rank higher than two. The following example clarifies this point. \vspace{0.3cm} \noindent {\bf Example 4}. Let \begin{equation} {\dot{x_{1}}}=x_{4},~{\dot{x_{2}}}=x_{3},~{\dot{x_{3}}}=-x_{2},~{\dot{x_{4}}}=-x_{1}. \end{equation} Clearly this system admits a Poisson structure with rank four, \begin{equation} J= \left( \begin{array}{rrrr} 0& 0 & 0&1\\ 0&0&1&0 \\ 0& -1&0&0\\ -1&0&0&0 \end{array} \right), ~~ H={1 \over 2}(x_{1}^2+x_{2}^2+x_{3}^2+x_{4}^2). \end{equation} The invariant functions of the vector field $\vec{X}=(x_{4},x_{3},-x_{2},-x_{1})$ are \begin{eqnarray} H_{1}&=&{1 \over 2}(x_{1}^2+x_{2}^2+x_{3}^2+x_{4}^2),\\ H_{2}&=&{1 \over 2}(x_{2}^2+x_{3}^2),\\ H_{3}&=&x_{1}\,x_{3}-x_{2}\,x_{4}. \end{eqnarray} Then the above system admits three different representations with rank-two Poisson structures, \begin{eqnarray} J_{1}^{ij}&=&\mu \epsilon^{ijkl}\, \partial_{k}\, H_{1} \partial_{l} H_{2},~~ H=H_{3},\\ J_{2}^{ij}&=&-\mu \epsilon^{ijkl}\, \partial_{k}\, H_{1} \partial_{l} H_{3},~~ H=H_{2},\\ J_{3}^{ij}&=&\mu \epsilon^{ijkl}\, \partial_{k}\, H_{2} \partial_{l} H_{3},~~ H=H_{1}, \end{eqnarray} where $\mu\, (x_{1}x_{2}+x_{3}x_{4})=1$. These Poisson structures are compatible not only pairwise but also as a triple. This means that any linear combination of these structures is also a Poisson structure. Let $J=\alpha_{1} J_{1}+\alpha_{2} J_{2}+\alpha_{3} J_{3}$; then it is possible to show that \begin{equation} J^{ij}=\mu \epsilon^{ijkl}\, \partial_{k}\, {\tilde H}_{1} \partial_{l} {\tilde H}_{2}, \end{equation} where $\tilde{H}_{1}$ and $\tilde{H}_{2}$ are linear combinations of $H_{1}, H_{2}$ and $H_{3}$, \begin{eqnarray} \tilde{H}_{1}&=&H_{1}-{\alpha_{3} \over \alpha_{2}} H_{2},~ \tilde{H}_{2}=\alpha_{1} H_{2}-\alpha_{2} H_{3} ~~\mbox{if} ~~\alpha_{2} \ne 0,\\ \tilde{H}_{1}&=&\alpha_{1} H_{1}-\alpha_{3} H_{3},~ \tilde{H}_{2}= H_{2} ~~\mbox{if}~~ \alpha_{2} = 0. \end{eqnarray} \vspace{0.3cm} \noindent {\bf Definition 10}. {\it A dynamical system (\ref{dyn3}) in ${\mathbb R}^n$ is called super-integrable if it has $n-1$ functionally independent first integrals (constants of motion)}. \vspace{0.3cm} \noindent {\bf Theorem 11}. {\it All autonomous dynamical systems in ${\mathbb R}^n$ are super-integrable}. \vspace{0.3cm} \noindent {\bf Proof}. If the system (\ref{dyn3}) is autonomous, then the vector field $\vec{X}$ does not depend on $t$ explicitly. Therefore each of the invariant functions $H_{\alpha},~(\alpha=1,2,\cdots, n-1)$ of the vector field $\vec{X}$ is a constant of motion of the system (\ref{dyn3}). \vspace{0.3cm} Some (or all) of the invariant functions $ H_{\alpha }$, $(\alpha =1,2,\cdots ,n-1)$ of the vector field $\vec{X}$ may depend on $t$.
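\vspace{0.3cm} \noindent The construction above can be checked symbolically. The following short script is a minimal sketch (it uses the open-source \texttt{sympy} library; the helper function \texttt{J} and all variable names are ours, introduced only for this illustration): it builds $J_{1}^{ij}=\mu\,\epsilon^{ijkl}\partial_{k}H_{1}\partial_{l}H_{2}$ for Example 4 and verifies that, together with the Hamiltonian $H_{3}$, it reproduces the vector field $\vec{X}=(x_{4},x_{3},-x_{2},-x_{1})$ and annihilates the gradients of its two defining invariants, so that its rank is indeed two.
\begin{verbatim}
import sympy as sp

x1, x2, x3, x4 = sp.symbols('x1 x2 x3 x4')
X  = sp.Matrix([x4, x3, -x2, -x1])        # vector field of Example 4
H1 = sp.Rational(1, 2)*(x1**2 + x2**2 + x3**2 + x4**2)
H2 = sp.Rational(1, 2)*(x2**2 + x3**2)
H3 = x1*x3 - x2*x4
mu = 1/(x1*x2 + x3*x4)                    # from mu*(x1*x2 + x3*x4) = 1

xs = [x1, x2, x3, x4]
grad = lambda H: sp.Matrix([sp.diff(H, v) for v in xs])

def J(mu, Ha, Hb):
    # rank-two Poisson tensor J^{ij} = mu * eps^{ijkl} d_k Ha d_l Hb
    dHa, dHb = grad(Ha), grad(Hb)
    return sp.Matrix(4, 4, lambda i, j: mu*sum(
        sp.LeviCivita(i, j, k, l)*dHa[k]*dHb[l]
        for k in range(4) for l in range(4)))

J1 = J(mu, H1, H2)
# Hamiltonian vector field generated by H3 equals X ...
assert (J1*grad(H3) - X).applyfunc(sp.simplify) == sp.zeros(4, 1)
# ... and H1, H2 are Casimir functions of J1, hence its rank is two
assert (J1*grad(H1)).applyfunc(sp.simplify) == sp.zeros(4, 1)
assert (J1*grad(H2)).applyfunc(sp.simplify) == sp.zeros(4, 1)
\end{verbatim}
\noindent The same check applies verbatim to $J_{2}$ and $J_{3}$ with the Hamiltonians $H_{2}$ and $H_{1}$, respectively.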
Like in ${\mathbb R}^3$, we can classify the dynamical systems in ${\mathbb R}^n$ with respect to the invariant functions of the vector field $\vec{X}(x^{1},x^{2}, \cdots,x^{n}, t)$. \vspace{0.4cm} \noindent {\bf Class A}.\, All invariant functions $H_{\alpha},~ (\alpha=1,2,\cdots, n-1)$ of the vector field $\vec{X}$ do not depend on $t$ explicitly. In this case all functions $H_{\alpha},~ (\alpha=1,2,\cdots, n-1)$ are also invariant functions (constants of motion) of the dynamical system. Hence the system is super-integrable. In the context of multi-Hamiltonian structures, such systems were first studied in \cite{nam} and \cite{raz}. The form (\ref{poysonn}) of the Poisson structure was given in these works. Its properties were investigated in \cite{nut}. \vspace{0.3cm} \noindent {\bf Class B}.\, At least one of the invariant functions $H_{\alpha},~ (\alpha=1,2,\cdots, n-1)$ of the vector field $\vec{X}$ does not depend on $t$ explicitly. Such a function is then also an invariant function of the dynamical system. \vspace{0.3cm} \noindent {\bf Class C}.\, All $H_{\alpha},~ (\alpha=1,2,\cdots, n-1)$ are explicit functions of the time variable $t$, but they are not invariants of the system. There may be invariants of the dynamical system. Let $F$ be such an invariant. Then \begin{equation} {dF \over dt} \equiv {\partial F \over \partial t}+\{F,H_{\alpha}\}_{\alpha}=0,~~\alpha=1,2,\cdots, n-1, \end{equation} where for any $F$ and $G$ \begin{equation} \{F,G\}_{\alpha} \equiv J_{\alpha}^{ij}\, \partial_{i}F\, \partial_{j} G,~~~ \alpha=1,2,\cdots, n-1. \end{equation} \vspace{1cm} \noindent {\bf Acknowledgements}: \\ We wish to thank Prof. M. Blaszak for critical reading of the paper and for constructive comments. This work is partially supported by the Turkish Academy of Sciences and by the Scientific and Technical Research Council of Turkey.
\section{Introduction} Since their discovery by S. Iijima and T. Ichihashi \cite{Iijima} in 1993, single-walled carbon nanotubes (SWNTs) have attracted attention due to their remarkable electronic and mechanical properties \cite{Saito,Loiseau}. At low energies, they represent an almost perfect realization of a one-dimensional (1D) system of interacting electrons with an additional orbital degree of freedom due to the sublattice structure of graphene. Accounting for spin and orbital degrees of freedom implies that for nanotubes a shell structure is expected, where each shell can accommodate up to four electrons. In the absence of Coulomb interaction the energy levels are spin degenerate, while the orbital degeneracy is usually lifted due to the finite length of the nanotube. Coulomb interactions, however, modify this picture. The sublattice structure of graphene gives rise to a distinction between electron interactions on the same and on different sublattices. Therefore, besides the long-ranged forward scattering processes, short-ranged interaction processes also play a role in small diameter tubes \cite{Egg,Odin,Oreg,Leo1}. In finite size nanotubes these short-ranged interactions cause exchange effects which, at a tube filling of $4n+2$, lead to a groundstate with either total spin $S=0$ or $S=\hslash$ (a triplet) \cite{Leo1}. Signatures of the exchange interactions have indeed been inferred from stability diagrams of carbon-nanotube-based quantum dots \cite{Mor1,Sap,Liang}. In particular, it was shown by Moriyama et al.\ \cite{Mor1} that an applied magnetic field can be used to reversibly change the groundstate from the singlet to one of the triplet states. \newline Recently, carbon nanotubes have also attracted much attention for their potential applications in spintronic devices \cite{Cott}. They are particularly interesting because they have a long spin lifetime and can be contacted with ferromagnetic materials. Indeed, spin-dependent transport in carbon nanotube spin valves has been demonstrated by various experimental groups \cite{Sahoo,Man,Haupt}, ranging from the Fabry-Perot \cite{Man,Sahoo} to the Kondo regime \cite{Haupt}. From the theoretical point of view, spin-dependent transport in interacting SWNTs has been discussed so far in the limit of very long nanotubes \cite{Bal}, for tubes in the Fabry-Perot regime \cite{Peca} and for SWNT-based quantum dots \cite{Kos,Wey,Wey2}. In the latter three works the characteristic four-electron shell-filling could be observed in the stability diagrams. In \cite{Kos}, however, the focus was on medium-to-large diameter SWNTs, where exchange effects can be neglected. The studies in \cite{Wey,Wey2} are based on the theory by Oreg et al. \cite{Oreg}, where exchange interactions are treated on a mean-field level, and focus predominantly on shot noise \cite{Wey} and cotunneling \cite{Wey2} effects. \begin{figure} \includegraphics[width=7cm]{1.eps} \caption{\label{Fig:Setup} Single-electron-tunneling setup of a single-walled carbon nanotube (SWNT) which is weakly coupled to source and drain contacts. The contact magnetizations may be either parallel or antiparallel, as indicated by the arrows. The gate electrode allows one to shift the chemical potential inside the SWNT.} \end{figure} \newline In this work we generalize the previous investigations of Ref.\ \cite{Kos} to include the short range Coulomb interactions which cause exchange splittings of the six $4n+2$ - filling groundstates that are otherwise degenerate at vanishing orbital mismatch.
The leads are either parallel or antiparallel spin-polarized and weakly coupled to the SWNT, see Fig. \ref{Fig:Setup}. In the low bias regime we derive analytical formulas for the conductance for both large and small orbital mismatch corresponding to an $S=0$ and $S=\hslash$ groundstate, respectively, at $4n+2$ filling. In the high bias regime we numerically calculate the stability diagrams for the two possible groundstates. We show several differences in transport between parallel and antiparallel lead magnetization, as e.g. a negative differential conductance (NDC) effect occurring only for the $S=0$ groundstate and antiparallel magnetization. We further include in the calculations a parallel magnetic field leading to a Zeeman splitting for all states with total spin unequal to zero. It is then possible to observe spin blocking effects due to transport channels that trap the system in the triplet state with $S_z=-\hslash$. Performing a magnetic field sweep, a groundstate change may be obtained as it has been shown experimentally\cite{Mor1}. \newline The paper is organized as follows. In section II we discuss the relevant features of the low energy Hamiltonian of interacting SWNTs with special focus on the filling $4n+2$. In section III we describe the set-up and method used to study spin-dependent transport in the sequential tunneling regime. Finally, in section IV, we present our results for the conductance, while in section V we focus on the nonlinear (finite bias) regime. \section{The interacting low energy spectrum}\label{model} \subsection{The interacting Hamiltonian}\label{Hamiltonian} The starting point for a microscopic, but still analytical, treatment of SWNTs is a tight-binding ansatz for the wavefunction of the $2p_z$ - electrons on the graphene honeycomb lattice. Including nearest neighbor hopping matrix elements it yields an electron-hole symmetric bandstructure with a fully occupied valence band and an empty conduction band. Since the two bands touch at the cornerpoints of the 1st Brillouin zone, the Fermi-points, graphene is a zero gap semiconductor. Wrapping the considered sheet of graphene, i.e., imposing periodic boundary conditions (PBCs) around the circumference, yields a SWNT and leads to the formation of transverse subbands. For the low energy electronic structure of metallic SWNTs, only the subbands touching at the Fermi-points are of relevance. In the following we consider armchair SWNTs of finite length and impose open boundary conditions (OBCs) at the two ends of the tube, i.e., that the wave function vanishes at the armchair edges. This condition mixes the two inequivalent Fermi points $F=\pm K_0$ from the underlying graphene first Brillouin zone and yields the linear dispersion relation \begin{figure} \includegraphics[width=5cm]{2.eps} \caption{\label{fig:BandstructureNanotube_OBC} The dispersion relation of a noninteracting SWNT with open boundary conditions. It is characterized by two linear branches, $r=\pm$, of slope $\pm \hbar v_F$ determined by the Fermi velocity $v_F$. The quantities $\epsilon_0$ and $\epsilon_{\Delta}$ are the intraband level spacing and the orbital mismatch energy, respectively. } \end{figure} of the finite size SWNT shown in Fig. \ref{fig:BandstructureNanotube_OBC}. It is characterized by two linear branches $r=\pm$ of slope $\pm \hbar v_F$ with the Fermi velocity $v_F\approx8.1\cdot 10^5\frac{m}{s}$. 
The allowed quasi-momentum values are given by $\kappa=(n_{\kappa}+\Delta)\pi/L$, where $n_{\kappa}\in\mathbb{Z}$, $L$ is the tube length and $\Delta$ accounts for the fact that $K_0$ may not be an integer multiple of $\pi/L$. The kinetic part of the Hamiltonian, yielding the energy relative to the Fermi-sea, correspondingly reads \begin{equation} \label{1} H_{0}=\epsilon_0\sum_{r\sigma}r\sum_{n_{\kappa}}n_{\kappa}c^{\dagger}_{r\sigma\kappa}c_{r\sigma\kappa}+\epsilon_\Delta \sum_{r\sigma}rN_{r\sigma}\ , \end{equation} \noindent where $\epsilon_0=\hslash v_F\pi/L$ is the level spacing, and $\epsilon_{\Delta}\equiv\epsilon_0\Delta$ is the band offset energy. Finally, $c^{\dagger}_{r\sigma\kappa}$ creates an electron with momentum $\kappa$ and spin $\sigma$ in branch $r$, and the operator $N_{r\sigma}$ counts the total number of electrons with spin $\sigma$ in branch $r$. \newline The interaction part of the Hamiltonian is given by \begin{equation} \label{2} V\!=\!\frac{1}{2}\sum_{\sigma\sigma'}\!\int\!\!\!\int\! d^3\!rd^3\!r'\Psi^{\dagger}_{\sigma}(\vec{r})\Psi^{\dagger}_{\sigma'}(\vec{r}\,')U(\vec{r}-\vec{r}\,')\Psi_{\sigma'}(\vec{r}\,')\Psi_{\sigma}(\vec{r})\ , \end{equation} \noindent where $\Psi,$ $\Psi^\dagger$ are fermion field operators and we use the Ohno-potential \cite{Barford}, \eq{146}{ U(\vec{r}-\vec{r}\,')=U_0 \left(1+\left(\frac{U_0\,\epsilon\,|\vec{r}-\vec{r}\,'|}{14.397\ \mbox{eV \AA}}\right)^2\right)^{-\frac{1}{2}}\ \ , } \noindent with $U_0=15\ \mbox{eV}$ \cite{Ful}, and where $\epsilon\simeq1.4-2.4$ \cite{Egg} is the dielectric constant of graphene. In the next step we express the 3D electron operators in terms of the 1D fermion-fields \cite{Leo2} \begin{equation} \label{3} \psi_{rF\sigma}(x)=\frac{1}{\sqrt{2L}}\sum_{\kappa}e^{i\mbox{sgn}(F)\kappa x}c_{r\sigma\kappa}\ , \end{equation} and obtain \eq{101}{ \Psi_{\sigma}(\vec{r})=\sqrt{L}\sum_{rF}\mbox{sgn}(F)\psi_{rF\sigma}(x)\sum_p f_{pr}\varphi_{pF}(\vec{r})\ . } \noindent Here $F=\pm K_0$ denotes the two independent Fermi-points, $p=\pm$ the two sublattices of graphene, and the coefficients $f_{pr}$ of the sublattice wave function $\varphi_{pF}(\vec{r})$ are given by $1/\sqrt{2}$ for $p=+$ and $-r/\sqrt{2}$ for $p=-$. The sublattice wave function itself reads \eq{102}{ \varphi_{pF}(\vec{r})=\frac{1}{\sqrt{N_L}}\sum_{\vec{R}\in L_G}e^{iFR_x}\chi_{p_z}(\vec{r}-\vec{R}-\vec{\tau}_p)\ , } \noindent where $N_L$ is the number of graphene lattice sites identified by the lattice vector $\vec{R}$, and $L_G$ denotes the graphene honeycomb lattice in real space. Furthermore, $\chi_{p_z}(\vec{r}-\vec{R}-\vec{\tau}_p)$ is the $p_z$ wavefunction of a carbon atom living on sublattice $p$, identified by the sublattice vector $\vec{\tau}_p$. Upon integrating Eq. (\ref{2}) over the coordinates radial to the tube axis, one eventually arrives at a 1D interaction potential characterized by density-density and non-density-density contributions \cite{Leo1}, so that the total Hamiltonian reads \begin{equation} \label{7} H_{\odot}=H_0+V_{\rho\rho}+V_{{\rm n}\rho\rho}. \end{equation} With the help of bosonization \cite{Del} it is possible to diagonalize $H_0+V_{\rho\rho}$. Eventually the bosonized and diagonalized Hamiltonian takes the form \cite{Leo1}: \begin{multline} \label{14} H_0+V_{\rho\rho}=\sum_{j\delta q>0}\epsilon_{j\delta q}a^{\dagger}_{j\delta q}a_{j\delta q}+\frac{1}{2}E_cN^2_c\\ +\frac{1}{2}\sum_{r\sigma}N_{r\sigma}\left[-\frac{J}{2}N_{-r\sigma}+\left(\epsilon_0-u^+\right)N_{r\sigma}+r\epsilon_{\Delta}\right]\ \ .
\end{multline} Besides the ground state, it accounts for all the possible fermionic and bosonic excitations of a SWNT. \noindent The bosonic excitations are described by the first term on the right hand side. The indices refer to total/relative $(\delta=+/-)$ charge/spin $(j=c/s)$ modes. The energies $\epsilon_{j\delta q}$ are given by \eq{186}{ \epsilon_{j\delta q}\cong\left\lbrace \begin{array}{cc} \epsilon_0n_q\sqrt{1+\frac{8W_q}{\epsilon_0}} & j\delta=c+\\ \epsilon_0 n_q& j\delta=c-,s+,s- \end{array} \right.\ , } \noindent with $q=n_q\pi/L\ \mbox{for}\ n_q\in\mathbb{Z}$ and \begin{multline} \label{169} W_{q}=\frac{1}{(2L)^2}\int_0^L dx\,\int_0^L dx' U^{\rm long}(x,x')\\ \times 4\cos(qx)\cos(qx')\ , \end{multline} \noindent being the contribution of the long-ranged density-density processes. Indeed, $U^{\rm long}(x,x')=[U^{\rm intra}+U^{\rm inter}]/2$ is the symmetric combination of the interaction potentials for electrons living on the same (intra) and on different (inter) sublattices: \begin{multline} U^{\rm intra/inter}(x,x')=L^2\int\int d^2r_{\perp}d^2r'_{\perp}\\ \times\varphi^*_{pF}(\vec{r})\varphi^*_{\pm pF'}(\vec{r}\,')\varphi_{\pm pF'}(\vec{r}\,')\varphi_{pF}(\vec{r})U(\vec{r}-\vec{r}\,')\ . \end{multline} The second summand of (\ref{14}) is the charging term with the charging energy $E_c=W_{q=0}$ and also comes from the long range part of the Coulomb interaction. It counts the energy one has to spend to put $N_c=\sum_{r\sigma}N_{r\sigma}$ electrons on the dot, no matter what spin $\sigma\in\{\uparrow,\downarrow\}$ or pseudospin $r\in\{+,-\}$ they have. The second line of (\ref{14}) starts with an exchange term favoring spin alignment. The exchange-splitting, \begin{multline} \label{154} J=\frac{1}{2N^2_L}\sum_{\vec{R} ,\vec{R}\,'}(1+e^{-i2K_0(R_x-R'_x)})\\ \times[U(\vec{R}-\vec{R}\,')-U(\vec{R}-\vec{R}\,'+\vec{\tau}_p-\vec{\tau}_{-p})]\ , \end{multline} being proportional to the difference of the Coulomb interaction for electrons on the same and on different sublattices, accounts for the contribution of short range processes. The next term in (\ref{14}) reflects the energy cost for adding electrons with the same spin to the same branch, i.e., the Pauli principle, where the correction $u^+$ is \begin{multline} \label{153} u^+=\frac{1}{4N^2_L}\sum_{\vec{R} ,\vec{R}\,'}e^{-i2K_0(R_x-R'_x)}\\ \times[U(\vec{R}-\vec{R}\,')+U(\vec{R}-\vec{R}\,'+\vec{\tau}_p-\vec{\tau}_{-p})]\ . \end{multline} Finally, the last term accounts for a possible band-mismatch, see Fig. \ref{fig:BandstructureNanotube_OBC}. \newline The eigenstates of $H_0+V_{\rho\rho}$ are given by \begin{equation} \label{15} \ket{\vec{N},\vec{m}}=\prod_{j\delta q}\frac{\left(a^{\dagger}_{j\delta q}\right)^{m_{j\delta q}}}{\sqrt{m_{j\delta q}!}}\ket{\vec{N},0}\ . \end{equation} \noindent Here $\vec{N}\ \mbox{and}\ \vec{m}$ denote the fermionic and the bosonic configuration, respectively, such that the state $\ket{\vec{N},0}$ has no bosonic excitations. The fermionic configuration is given by the number of electrons with a given spin in each branch, $\vec{N}=(N_{-\uparrow},N_{-\downarrow},N_{+\uparrow},N_{+\downarrow})$. These eigenstates will be used to calculate the contribution of the non-density part of the interaction, i.e., the matrix elements $\bra{\vec{N},\vec{m}}V_{{\rm n}\rho\rho}\ket{\vec{N}',\vec{m}'}$. Away from half-filling, these matrix elements only couple states close in energy, and one is allowed to work with a truncated eigenbasis (we check convergence of the results as the basis is enlarged).
As shown by Yoshioka and Odintsov \cite{Yoshi}, for long SWNTs a Mott-insulating transition is expected to occur at half-filling due to umklapp scattering. As found in Ref. \cite{Leo1}, also for finite size tubes umklapp processes acquire increasing weight as half-filling is approached, a possible signature of the Mott instability, and the present theory breaks down there. In recent experiments \cite{Des} the observation of the Mott transition in SWNT quantum dots was claimed. \subsection{Low energy spectrum away from half-filling} The low energy regime is the one where the energies that can be transferred to the system by the bias voltage and the temperature stay below $\epsilon_0$. This means no bosonic excitations are present, i.e., $\vec{m}=(0,0,0,0)$, and also no fermionic excitations are allowed, i.e., the four bands will be filled as equally as possible: $|N_{r\sigma}-N_{r'\sigma'}|\leq1\, \forall\, r\sigma,r'\sigma'$. Our starting point is the set of eigenstates, Eq. (\ref{15}), of the Hamiltonian in Eq. (\ref{14}), which accounts for the kinetic and the density parts of the full Hamiltonian. We now have to distinguish two cases. \newline First we consider states with total charge $N_c$ equal to $4n$, $4n+1$ and $4n+3$. Those are unambiguously described by the fermionic configuration $\vec{N}$ because they are not mixed by the exchange effects. The only impact of the short-range interaction terms on these states is given by an energy penalty for double occupation of one branch $r$, a common shift for all eigenstates with fixed $N_c\in \left\lbrace 4n,\ 4n+1,\ 4n+3\right\rbrace $. Therefore we are left with \cite{Leo1} \begin{multline} \label{17} E_{\vec{N}}=\frac{1}{2}E_cN^2_c+u^+\sum_r\mbox{min}\left(N_{r\uparrow},N_{r\downarrow}\right)\\+\frac{1}{2}\sum_{r\sigma}N_{r\sigma}\left[-\frac{J}{2}N_{-r\sigma}+\left(\epsilon_0-u^+\right)N_{r\sigma}+r\epsilon_{\Delta}\right] \end{multline} \noindent for the energy. If $\epsilon_{\Delta}\neq0$, states with the maximum allowed number of electrons in the $r=-$ branch will be the groundstates. For $N_c=4n$ the pseudospin branches $r=\pm$ are equally occupied, yielding a unique $N_c=4n$ groundstate. The corresponding configuration is taken as the reference configuration for the $N_c=4n+1,\ 4n+2\ \mbox{and}\ 4n+3$ cases. The lowest lying states for $N_c\in \left\lbrace 4n+1,\ 4n+3\right\rbrace $ are presented in Fig. \ref{fig:Configuration}. E.g., for the case $N_c=4n+1$ we obtain four possible states corresponding to $\vec{N}\in\left\lbrace (n+1,n,n,n),\ (n,n+1,n,n),\ (n,n,n+1,n),\right.$ $\left.(n,n,n,n+1) \right\rbrace$. For simplicity we introduce the notation $\ket{\uparrow,\cdot},\ \ket{\downarrow,\cdot}$ for the states with an unpaired electron in the $r=-$ branch. For an unpaired electron in the $r=+$ branch we set $\ket{\cdot,\uparrow},\ \ket{\cdot,\downarrow}$. \begin{figure} \includegraphics[width=7cm]{3.eps} \caption{\label{fig:Configuration} Lowest lying states for fillings $N_c=4n+1$ and $N_c=4n+3$.
For simplicity only the configuration of the last partially filled shell is shown.} \end{figure} \newline Analogously, neglecting exchange effects and setting $\epsilon_\Delta=0$ for the moment, the groundstates for the $N_c=4n+2$ filling are represented by the six states $\ket{\uparrow,\uparrow}$, $\ket{\downarrow,\downarrow}$, $\ket{\uparrow,\downarrow}$, $\ket{\downarrow,\uparrow}$, $\ket{\uparrow\downarrow,\cdot}$ and $\ket{\cdot,\uparrow\downarrow}$, where, e.g., $\ket{\uparrow,\uparrow}$ means two electrons with spin $\uparrow$ one on each branch $-$ and $+$. Here the different fermionic configurations mix under the influence of the $V_{{\rm n}\rho\rho}$ processes and the groundstate structure will change dramatically due to off-diagonal contributions \begin{eqnarray} \label{18} \bra{\uparrow,\downarrow}V_{{\rm n}\rho\rho}\ket{\downarrow,\uparrow}&=&-J/2\ ,\nonumber\\ \bra{\uparrow\downarrow,\cdot}V_{{\rm n}\rho\rho}\ket{\cdot,\uparrow\downarrow}&=&J/2\ . \end{eqnarray} \noindent Diagonalization of the interaction matrix yields the groundstate spectrum as it is shown in table \ref{States}. The energies in the table are given relative to $E_{0,4n+2}=\frac{1}{2}E_cN^2_c+(2n^2+2n+1)(\epsilon_0-u^+)-\frac{J}{2}(2n^2+2n)+2u^+n$. \begin{table} \begin{center} \begin{tabular}{|c|} \hline \begin{tabular}{|c|c|c|} \hline state & relative energy & spin \\ \hline $\ket{t_1}=\ket{\uparrow,\uparrow}$ & $-J/2$ & $\hslash$\\ $\ket{t_{-1}}=\ket{\downarrow,\downarrow}$ & $-J/2$ & $\hslash$\\ $\ket{t_0}=\frac{1}{\sqrt{2}}\left(\ket{\uparrow,\downarrow}+\ket{\downarrow,\uparrow}\right)$ & $-J/2$ & $\hslash$\\ $\ket{s}=\frac{1}{\sqrt{2}}\left(\ket{\uparrow,\downarrow}-\ket{\downarrow,\uparrow}\right)$ & $+J/2$ & 0\\ $\ket{a}=\frac{1}{\sqrt{c^2_1+1}}\left(-c_1\ket{\uparrow\downarrow,\cdot}+\ket{\cdot,\uparrow\downarrow}\right)$ & $u^+-\sqrt{\left(\frac{J}{2}\right)^2+\epsilon^2_{\Delta}}$ & 0\\ $\ket{b}=\frac{1}{\sqrt{c^2_2+1}}\left(-c_2\ket{\uparrow\downarrow,\cdot}+\ket{\cdot,\uparrow\downarrow}\right)$ & $u^++\sqrt{\left(\frac{J}{2}\right)^2+\epsilon^2_{\Delta}}$ & 0\\ \hline \end{tabular}\\ \hline\\ $c_1=\frac{2\epsilon_{\Delta}+\sqrt{J^2+(2\epsilon_{\Delta})^2}}{J},\quad c_2=\frac{2\epsilon_{\Delta}-\sqrt{J^2+(2\epsilon_{\Delta})^2}}{J}$ \\\\ \hline \end{tabular} \end{center} \caption{\label{States}The six lowest energy eigenstates for the filling $N_c=4n+2$ of an interacting SWNT. Due to short-ranged interactions there are three degenerate states of total spin $S=\hslash$ and three non-degenerate states of total spin $S=0$.} \end{table} It is clear that the states $\ket{s}$ and $\ket{b}$ will always be excited states, while the spin triplet, $S=\hslash$, is energy degenerate. Now the question arises which states, the triplet or the $\ket{a}$ state, are the groundstate of the system. In accordance with table \ref{States}, the condition for a triplet groundstate is given by: \begin{eqnarray} \label{21} \epsilon^2_{\Delta}<(u^+)^2+Ju^+\ . \end{eqnarray} \noindent For a dielectric constant $\epsilon=1.4$ it holds $J=0.72\ \mbox{\AA{}}\ \frac{\epsilon_0}{d}$ and $u^+=0.22\ \mbox{\AA{}}\ \frac{\epsilon_0}{d}$. Hence we find in terms of the level spacing $\epsilon_0$ and the tube diameter $d$: \eq{23}{ \left|\epsilon_{\Delta}\right|<0.4548\ \mbox{\AA{}}\ \frac{\epsilon_0}{d} . } \noindent Obviously this makes the triplet groundstate more unlikely compared to the $S=0$ groundstate as it can be seen in Fig. \ref{fig:Epsilon-J}. 
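\noindent As a cross-check of table \ref{States} and of the criterion (\ref{21}), the mixing of the two paired configurations and the resulting groundstate condition can be evaluated numerically. The following sketch is plain Python and purely illustrative: the $2\times2$ block with diagonal entries $u^+\mp\epsilon_{\Delta}$ is inferred from the eigenvalues quoted in table \ref{States} together with the off-diagonal element $J/2$ of Eq. (\ref{18}), and the numerical inputs are the scalings quoted above evaluated with the (6,6) tube parameters of table \ref{Values}, $\epsilon_0=5.6$ meV and $d=8.1$ \AA. It reproduces the energies of $\ket{a}$ and $\ket{b}$ and the threshold $|\epsilon_{\Delta}|<\sqrt{(u^+)^2+Ju^+}$.
\begin{verbatim}
import numpy as np

eps0 = 5.6            # level spacing in meV, (6,6) tube of L = 300 nm
d    = 8.1            # tube diameter in Angstrom
J    = 0.72*eps0/d    # exchange energy, about 0.50 meV
u    = 0.22*eps0/d    # correction u^+, about 0.15 meV

def paired_block(eps_delta):
    # 2x2 block in the basis {|ud,.>, |.,ud>} relative to E_{0,4n+2};
    # diagonal u -/+ eps_delta inferred from the table, off-diagonal J/2
    return np.array([[u - eps_delta, J/2.0],
                     [J/2.0,         u + eps_delta]])

for eps_delta in (0.0, 0.3*eps0):
    E_a, E_b = np.linalg.eigvalsh(paired_block(eps_delta))
    # closed-form result of the table: u -/+ sqrt((J/2)^2 + eps_delta^2)
    assert np.allclose([E_a, E_b], [u - np.hypot(J/2, eps_delta),
                                    u + np.hypot(J/2, eps_delta)])
    print(f"eps_D = {eps_delta:5.2f} meV: E_a = {E_a:+.3f} meV, "
          f"E_triplet = {-J/2:+.3f} meV")

# triplet groundstate if eps_delta^2 < (u^+)^2 + J u^+
print("triplet threshold:", np.sqrt(u**2 + J*u), "meV")
\end{verbatim}
\noindent With these inputs the threshold evaluates to about $0.31$ meV, i.e., $0.4548$~\AA$\,\epsilon_0/d$, and for $\epsilon_{\Delta}=0$ ($\epsilon_{\Delta}=0.3\,\epsilon_0$) the triplet (the $\ket{a}$ state) comes out lowest, consistent with the numbers quoted next for the (6,6) tube.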
For a (6,6) nanotube of 300nm length, the band-mismatch must be $\epsilon_{\Delta}<0.3\ \mbox{meV}\cong0.06\epsilon_0$ to be in a triplet groundstate. In the experiments \cite{Sap,Liang} band-mismatches are of the order of $0.3\epsilon_0$ and, as expected from our theory, $\ket{a}$ - groundstates are observed. \begin{figure} \includegraphics[width=8.2cm]{4.eps} \caption{\label{fig:Epsilon-J} Phase diagram to determine the groundstate of different tubes of length 300nm. The chance to find a triplet groundstate increases with increasing exchange parameter $J$, i.e., with decreasing tube diameters.} \end{figure} \section{Spin-dependent transport}\label{Transport} In this section we discuss the set-up to evaluate spin-dependent transport across a SWNT weakly coupled to leads, see Fig. \ref{Fig:Setup}, and the main calculation tools. The Hamiltonian of the full system reads \eq{24}{ H=H_{\odot}+\sum_{l=s,d}H_l+H_T+H_{ext}\ , } \noindent where $l=s,d$ denotes the Hamiltonian in the source and the drain contact, respectively. The leads magnetization is accounted for in terms of a Stoner Hamiltonian where the density of states, $\mathcal{D}_{l\sigma}(\epsilon)$, for the majority ($\sigma=\uparrow$) and the minority ($\sigma=\downarrow$) carriers are different. We treat the leads within the wide-band approximation, i.e., we regard the density of states as constant quantities to be evaluated at the leads chemical potentials $\mu_s$ and $\mu_d$. We can thus define the polarization by ($l=s,d$): \begin{eqnarray} \label{51} P_l=\frac{\mathcal{D}_{l\uparrow}(\mu_l)-\mathcal{D}_{l\downarrow}(\mu_l)}{\mathcal{D}_{l\uparrow}(\mu_l)+ \mathcal{D}_{l\downarrow}(\mu_l)} . \end{eqnarray} Moreover, we will consider a symmetric set up $\mathcal{D}_{s\sigma}=\mathcal{D}_{d\sigma}=\mathcal{D}_{\sigma}$ and $P_s=P_d=P$. The total density of states is given by $\mathcal{D}_{tot}=\mathcal{D}_{\uparrow}+\mathcal{D}_{\downarrow}$. We account for the bias voltage $V_b$ in terms of the difference $eV_b=\mu_s-\mu_d$ between the electrochemical potentials in the source and drain leads. Further, $H_T$ in Eq. (\ref{24}) is the tunneling Hamiltonian which we will treat as a perturbation since weak coupling to the leads is assumed. Finally, $H_{ext}$ describes the influence of the externally applied gate voltage $V_g$. The gate is capacitively coupled to the SWNT and hence contributes via a term $e\alpha V_gN_c$ with $\alpha$ a proportionality factor. \newline In order to evaluate the current-voltage characteristics we use the method developed in Ref. \cite{Kos} where, starting from the Liouville equation for the density matrix of the full system, a generalized master equation (GME) for the reduced density matrix $\rho$ (RDM) of the SWNT is obtained to second order in $H_T$. Once the stationary RDM is known, the stationary current through e.g. the source lead is evaluated from the relation $I_s=eTr\{\rho \dot N_s\}$, where $N_s$ is the number operator for electrons in the left lead. As this procedure with the relevant equations is thoroughly explained in Ref. \cite{Kos}, we refrain from repeating it here. The GME can be solved in analytic form in the linear regime, being the focus of the following Sec. IV. In the nonlinear regime, discussed in Sec. V, the differential conductance is evaluated numerically. Moreover, from here on we will focus on the transition between charge states $4n+1\longleftrightarrow4n+2$, mirror symmetric to $4n+2\longleftrightarrow4n+3$, as these two transitions are the ones that reveal exchange effects. 
The remaining transitions $4n\longleftrightarrow4n+1$ and $4n+3\longleftrightarrow4(n+1)$ will not qualitatively change due to the presence of short range processes and we hence refer to the discussion in \cite{Kos}. \\ If not otherwise specified, we choose nanotubes described by the parameters in table \ref{Values}: In order to obtain an $\ket{a}$ groundstate we assume a band-mismatch of $\epsilon_{\Delta}=0.3\epsilon_0=1.68\,$meV, whereas for a triplet groundstate we choose $\epsilon_{\Delta}=0$. \section{The linear regime} \subsection{Conductance at zero magnetic field} We focus on the conductance formulas for the two cases of tunneling from the $4n+1$ groundstates into the $S=0$ groundstate $\ket{a}$ or into the triplet groundstates. \begin{table} \begin{center} \begin{tabular}{|c|c|c|} \hline parameters & label & value \\ \hline \hline length & $L$ & $300.06\,$nm\\ diameter & $d$ & $0.81\,$nm\\ dielectric constant & $\epsilon$ & $1.4\,$\\ $\Downarrow$ & & \\ charging energy & $E_c$ & $6.7\,$meV\\ level spacing & $\epsilon_0$ & $5.6\,$meV\\ Coulomb excess energy & $u^+$ & $0.15\,$meV\\ exchange energy & $J$ & $0.49\,$meV\\ \hline orbital mismatch & $\epsilon_{\Delta}$ & $0\,$meV\ or\ $1.68\,$meV\\ \hline thermal energy & $k_BT$ & $4.0\times10^{-3}\,$meV\\ transmission coefficient & ${\cal D}_{\rm tot}\Phi$ & $1\times10^{-4}\,$meV\\ \hline \end{tabular} \end{center} \caption{\label{Values} Parameter set of the 300nm (6,6) nanotube investigated in this work.} \end{table} For the transition $\ket{\sigma,\cdot}\longleftrightarrow\ket{a}$ the conductances in the case of parallel, $G^P$, and antiparallel, $G^{AP}$, magnetized leads are found to be \begin{figure} \includegraphics[width=\columnwidth]{5.eps} \caption{\label{fig:singlet_0} Panels a) and c). Conductance vs. gate voltage for the $\ket{\sigma,\cdot}\longleftrightarrow\ket{a}$ resonance for parallel, $G^{P}_{a}$, and antiparallel, $G^{AP}_{a}$, lead magnetization. In both cases the analytical predictions Eqs. (\ref{40}), (\ref{41}) (continuous curves) perfectly match with the results from a numerical evaluation of the GME (squares). Strikingly $G^{P}_{a}$ is independent of the leads polarization $P$, while $G^{AP}_{a}$ is maximal at $P=0$. Panels b) and d). Schematic explanation of the different polarization dependence. The red spin specifies the spin of the state $\ket{\sigma,\cdot}$. The dashed/continuous arrows indicate rare/favorable tunneling processes. For parallel magnetization, panel b), the fast tunneling channel is the one with an excess spin $\downarrow$ and the electron transferred from source to drain is always a majority electron $\uparrow$. If the initial dot spin is $\uparrow$, this is likely to tunnel to the drain, such that at the end of the tunneling process a spin-flip has occurred, leaving the dot in the favorable configuration with a spin $\downarrow$. For antiparallel lead magnetization, panel d), the fast channel corresponds to one electron in the dot with spin $\uparrow$. To this channel, however, is associated a spin flip. Because the situation with initial spin $\downarrow$ involves a rare tunneling process from the source lead, the conductance gets diminished by increasing polarization. 
} \end{figure} \begin{subequations} \begin{align} \label{40} G^P_{a}&=\frac{c^2e^2\pi}{\hslash}\frac{\gamma}{1+\gamma}\ \beta\ \mathcal{D}_{tot}\ \Phi_{} \left \vert\frac{f(\mu_{a})f(-\mu_{a})}{2-f(\mu_{a})} \right \vert\ , \\\label{41} G^{AP}_{a} &=\frac{(P^2-1)\ \gamma\ (1+\gamma)}{P^2(\gamma-1)^2-(\gamma+1)^2}G^P_{a}\ , \end{align} \end{subequations} \noindent with $c={c_1}/{\sqrt{c^2_1+1}}$, the Fermi function $f(\mu)$ evaluated at the gate voltage dependent energy difference $\mu_{a}=E_{\ket{a}}-E_{\ket{\sigma,\cdot}}$ and $\beta$ the inverse temperature. The parameters $\Phi=\Phi_s$ and $\gamma=\Phi_d/\Phi_s$ describe the possible asymmetric lead transparencies \cite{Kos} (hereby, $\Phi$ is in second order of the tunneling coupling contained in $H_T$). The conductances are shown in Fig. \ref{fig:singlet_0}a) and \ref{fig:singlet_0}c) for the symmetric transparencies case $\gamma=1$ and ${\cal D}_{tot}\Phi=10^{-4}$meV. Strikingly, in the parallel magnetized case there is \textit{no} dependence on the polarization since there is never a blocking state involved in transport, see Fig. \ref{fig:singlet_0}b). For the antiparallel case, in contrast, transport is limited by the weakest channel (when there is a $\downarrow$ - electron on the dot) and one can drive the conductance to zero by tuning the polarization to $P\rightarrow1$. This feature is explained in Fig. \ref{fig:singlet_0}d). \begin{figure} \includegraphics[width=\columnwidth]{6.eps} \caption{\label{fig:triplet_0} Panels a) and c). Conductance vs. gate voltage at zero band-mismatch (triplet groundstate) for parallel, $G^{P}_{t}$, and antiparallel, $G^{AP}_{t}$, lead magnetization. $G^{P}_{t}$ is independent of the leads polarization $P$, while $G^{AP}_{t}$ is maximal at $P=0$. The absolute value of the conductance is slightly larger than for the $ \ket{\sigma,\cdot}\longleftrightarrow\ket{a}$ case since more channels are involved. Panels b) and d). Schematic explanation of the different polarization dependence. For simplicity we only drew the case in which the initial excess spin (red spin) is in the $r=+$ branch. For parallel magnetization, panel b), the fast channel corresponds to the $\ket{\uparrow,\cdot}\longleftrightarrow\ket{t_{+1}}$ transition which conserves the spin of the excess dot electron. For antiparallel magnetization, panel d), the fast channel corresponds to an initial excess spin $\uparrow$ electron likely to tunnel to the drain and being replaced by a spin $\downarrow$ from the source. The situation with an initial spin $\downarrow$, however, corresponds to a weak channel. Increasing the polarization highly populates the $\ket{t_{-1}}$ state and transport decreases. } \end{figure} For the case of the triplet groundstate we face a completely new situation. First, we have for $N_c=4n+1$ filling four degenerate states available because the band-mismatch has been chosen to be zero. Secondly, we couple to three different states in the case of $N_c=4n+2$ rather than to just one. However, the conductance plots do not qualitatively change as it may be seen in Fig. \ref{fig:triplet_0}a) and \ref{fig:triplet_0}c). The conductance formulas read: \begin{subequations} \begin{align} \label{46} G^P_{1,t}&=\frac{3e^2\pi}{\hslash}\frac{\gamma}{1+\gamma}\ \beta\ \mathcal{D}_{tot}\ \Phi\left \vert\frac{f(\mu_{t})f(-\mu_{t})}{4-f(\mu_{t})} \right \vert\ , \\ \label{47} G^{AP}_{1,t} &=\frac{(P^2-1)\ \gamma\ (1+\gamma)}{P^2(\gamma-1)^2-(\gamma+1)^2}G^{P}_{1,t}\ . \end{align} \end{subequations} \noindent Compared to Eqs. 
(\ref{40}), (\ref{41}) the prefactor changed from $c^2$ to 3 due to the three involved triplet states. The quantity $\mu_{t}=E_{\ket{t}}-E_{1}$ is the difference between the triplet and the $N_c=4n+1$ - groundstate energies. In addition, the denominator in the term containing the Fermi-functions has also changed to account for the degeneracy of the $4n+1$ - filling states. The qualitative behavior, however, does not change compared to the case of an $\ket{a}$ groundstate, such that one cannot determine the spin nature of the groundstate from these plots alone. \subsection{Conductance in the presence of an external magnetic field} In this section we consider the influence of an externally applied magnetic field (Zeeman-field) which clearly reveals the character of the groundstate for $4n+2$ and, moreover, may even change the groundstate depending on the field strength. The field causes an additional Zeeman energy to states with a spin-component $S_z\neq0$. The sign is negative if the concerned state in the tube is parallel to the external field and positive if antiparallel. Thus, the chemical potential differences appearing in Eqs. (\ref{40}), (\ref{41}), (\ref{46}) and (\ref{47}) will be shifted by $\pm E_z=\pm \mu_{\rm B}B$. We use the convention $\mu_{\uparrow}=\mu-E_z$ and $\mu_{\downarrow}=\mu+E_z$. Furthermore, in order to improve the readability, we introduce the abbreviation $f_{\pm\uparrow/\downarrow}=f(\pm\mu_{\uparrow/\downarrow})$. The conductances for the antiparallel set-up are \begin{subequations} \begin{multline} \label{52} G^{AP}_{a}(E_z)=\frac{c^2e^2\pi}{2\hslash}\ \beta\ \mathcal{D}_{tot}\ \Phi\\ \times\left\vert\frac{f_{+\uparrow}f_{+\downarrow}(1+P(\gamma+1)+\gamma)f_{-\downarrow}}{f_{+\downarrow}+f_{+\uparrow}f_{-\downarrow}}\right.\\ \left. +\frac{f_{+\uparrow}f_{+\downarrow}(1-P(\gamma-1)+\gamma)f_{-\uparrow}} {f_{+\downarrow}+f_{+\uparrow}f_{-\downarrow}} \right \vert\ \end{multline} and \begin{multline} \label{53} G^{AP}_{t}(E_z)=\frac{e^2\pi}{2\hslash}\ \beta\ \mathcal{D}_{tot}\ \Phi\\\times\Bigl|\Big\{ f_{-\uparrow}f_{-\downarrow} \Bigl[\Bigl(1+\gamma-P(1-\gamma)\Bigr)f_{+\downarrow}\Bigl(f_{-\downarrow}f_{+\uparrow}+2f_{+\downarrow}f_{-\uparrow}\Bigr)\\ +\Bigl(1+\gamma+P(1-\gamma)\Bigr)f_{+\uparrow}\Bigl(f_{-\uparrow}f_{+\downarrow}+2f_{-\downarrow}f_{+\uparrow}\Bigr)\Bigr]\Big\}\Big/\\ \Bigl\{f_{-\uparrow}\Bigl( 1+f_{-\downarrow}\Bigr)\Bigl( f_{-\downarrow}f_{+\uparrow}+f_{-\uparrow}f_{+\downarrow}\Bigr)+f^2_{-\downarrow}f^2_{+\uparrow}\Bigr\} \Bigr|\,. \end{multline}\end{subequations} \newline We do not find qualitative differences with respect to the zero magnetic field case: the conductances decrease in both cases with increasing polarization. In the following, we will therefore only focus on the parallel case, where we find interesting behavior for small Zemann splittings. The conductance formulas for parallel lead magnetization take the form \begin{figure} \includegraphics[width=\columnwidth]{7.eps} \caption{\label{fig:singlet_007} a) Conductance near the $\ket{\sigma,\cdot}\longleftrightarrow\ket{a}$ transition for parallel magnetized leads and applied magnetic field. The peaks corresponding to higher polarizations are shifted to lower gate voltages. b) Schematic explanation of the polarization and gate-voltage dependence for small (left sketch) and large (right sketch) polarization. The red spin indicates the spin of the excess electron initially present on the dot. 
The thick and thin lines are frequent and less frequent transitions, while dashed lines indicate rare transitions. Large polarizations favor processes involving majority spins while, due to the extra required Zeeman energy, the Fermi function suppresses processes where a spin $\downarrow$ is transferred. Thus at small polarizations the transport is mostly mediated by spin $\downarrow$ - electrons while at large polarizations $\uparrow$ - electrons are preferred. Correspondingly the peak position is shifted to smaller gate voltages as the polarization is increased. } \end{figure} \begin{subequations} \begin{multline} \label{48} G^P_{a}(E_z)=\frac{c^2e^2\pi}{\hslash}\frac{\gamma}{1+\gamma}\ \beta\ \mathcal{D}_{tot}\ \Phi\\ \times\left\vert\frac{f_{+\uparrow}f_{+\downarrow}\Bigl[(P+1)f_{-\uparrow}-(P-1)f_{-\downarrow}\Bigr]}{f_{+\uparrow}f_{+\downarrow}+f_{+\downarrow}f_{-\uparrow}+f_{+\uparrow}f_{-\downarrow}} \right \vert\ \end{multline} and \begin{multline} \label{49} G^{P}_{t}(E_z)=\frac{e^2\pi}{2\hslash}\frac{\gamma}{1+\gamma}\ \beta\ \mathcal{D}_{tot}\ \Phi\\ \times\Bigl| \Big\{ f_{-\uparrow}f_{-\downarrow} \Bigl[(P+1)f_{+\uparrow}\Bigl(f^2_{-\uparrow}f_{+\downarrow}\\ +f_{+\uparrow}f_{-\uparrow}f_{+\downarrow}+2f_{+\uparrow}f_{-\uparrow}f_{-\downarrow}+2f^2_{+\uparrow}f_{-\downarrow}\Bigr)\\ -(P-1)f_{+\downarrow}\Bigl(f^2_{-\downarrow}f_{+\uparrow}+2f^2_{+\downarrow}f_{-\uparrow}\\ +2f_{+\downarrow}f_{-\uparrow}f_{-\downarrow}+f_{+\downarrow}f_{+\uparrow}f_{-\downarrow}\Bigr)\Bigr]\Big\}\Big/\\ \Bigl\{2f^2_{-\downarrow}f_{+\uparrow}f_{-\uparrow}+f^2_{+\uparrow}f^2_{-\uparrow}\\ +f^2_{-\uparrow}f^2_{+\downarrow}+2f^2_{-\uparrow}f_{-\downarrow}f_{+\uparrow}+f_{-\downarrow}f_{-\uparrow}f_{+\uparrow}f_{+\downarrow}\Bigr\} \Bigr|\ . \end{multline}\end{subequations} \newline The corresponding plots can be seen in Figs. \ref{fig:singlet_007}a) and \ref{fig:triplet_007}a). In these calculations we considered a small magnetic field of $0.07\,$T which equals in magnitude the thermal energy of $k_BT=0.004\,$meV. This provides a situation with a finite occupation probability for all included states. Specifically, this means that also states containing $\downarrow$ - electrons will be populated, but the population of states containing $\uparrow$ - electrons will be preferred. The first thing we observe in both Fig. \ref{fig:singlet_007}a) and \ref{fig:triplet_007}a) is that the once degenerate curves in Figs. \ref{fig:singlet_0}a) and \ref{fig:triplet_0}a) now split into distinct curves for the four different polarizations. Moreover, the peaks of the curves corresponding to less polarized leads continuously move to higher gate voltages. Finally the conductance \textit{decreases/increases} with increasing polarization for the $a/t$ cases, respectively. Let us examine the results starting with the $\ket{a}$ - groundstate. We will divide the analysis in two cases, slightly polarized leads and strongly polarized leads. \newline For only slightly polarized or non-polarized leads the situation is intricate as we have to deal with \emph{competing processes}. On the one hand there is a highly populated $\ket{\uparrow,\cdot}$ state and a slightly populated $\ket{\downarrow,\cdot}$ state in the tube. From this point of view, the system prefers $\downarrow$ - electrons to tunnel into the $\ket{a}$ state and to leave the dot subsequently such that the tube always remains in the preferred $\ket{\uparrow,\cdot}$ state (Fig. \ref{fig:singlet_007}, sketch b), upper left panel). 
Only rarely, the $\uparrow$ - electron tunnels out, as this would result in a spin-flip to the disfavored $\ket{\downarrow,\cdot}$ state (Fig. \ref{fig:singlet_007}, sketch b), lower left panel). \begin{figure}[] \includegraphics[width=\columnwidth]{8.eps} \caption{\label{fig:triplet_007} a) Conductance near the triplet resonance for parallel magnetized leads and applied magnetic field. In contrast to the case of a singlet resonance, Fig. \ref{fig:singlet_007}, transport increases as the polarization is enhanced. b)Schematic explanation. At small leads polarization the distribution of $\uparrow$ - electrons and $\downarrow$ - electrons is almost equal. However, the $\ket{t_1}$ - channel is preferred to the others. Increasing the polarization enhances the dominance of this channel and correspondingly the conductance. Simultaneously the conductance peak is shifted to lower gate voltage indicating the dominance of $\uparrow$ - electrons.} \end{figure} On the other hand, entering of $\downarrow$ - electrons is suppressed compared to transport of $\uparrow$ - electrons, not so much by the small polarization, but mainly due to the Zeeman splitting in the involved Fermi-functions: The chemical potential for $\downarrow$ - electrons exceeds the one for $\uparrow$ - electrons by $2E_z$ such that $f_{+\uparrow}>f_{+\downarrow}$ at any gate voltage. However, in the end it will be a mixture of mainly $\downarrow$ - electrons and some $\uparrow$ - electrons responsible for transport. This can also be seen by the fact that the curves for small polarizations are shifted to higher gate voltages which accounts for the higher chemical potential of the $\downarrow$ - electrons. In addition, the total amplitude of the conductance is decreased compared to the case without the magnetic field, Fig. \ref{fig:singlet_0}a), as there is always a limiting element - either the small Fermi-function or the small population - involved. \newline In the case of highly polarized leads we face the situation where there are very few $\downarrow$ - electrons in the leads. As temperature provides a small, but nonzero population of the slightly excited state $\ket{\downarrow,\cdot}$, current mainly flows via the polarization-favored $\uparrow$ - electron channel. Since the chemical potential, the increment of the Fermi-functions, is smaller than in the former case the transition takes place at slightly lower gate voltages. The situation again is visualized in the sketch b) of Fig. \ref{fig:singlet_007}, in the upper and lower right panel. At the triplet resonance we observe not only quantitative, but also qualitative changes. The plot can be seen in Fig. \ref{fig:triplet_007}a) and all relevant tunneling processes are sketched in Fig. \ref{fig:triplet_007}b). Let us again start with unpolarized or just slightly polarized leads. Due to a large population of the spin $\uparrow$ states in the $N_c=4n+1$ case and of the $\ket{t_1}$ state in the $N_c=4n+2$ case transport is mainly mediated via the majority charge carriers, i.e. $\uparrow$ - electrons (Fig. \ref{fig:triplet_007}b), upper right panel). However, the resulting current is smaller than in the case without magnetic field since it is harder to make use of the $\downarrow$ - electrons that are still largely at disposal in the leads. \newline A high polarization decreases the number of $\downarrow$ - electrons in the leads in favor of the $\uparrow$ - electron number, and such transport via the already preferred $\ket{t_1}$ channel is strongly enhanced. 
As a consequence, the conductance by far \textit{exceeds} the conductance without magnetic field and polarization. This effect should be detectable in an experimental setup and would make it possible to distinguish between a triplet groundstate and an $S=0$ groundstate. \section{The nonlinear regime} In the finite bias regime excited states also become available and, due to the resulting large number of involved states, it is necessary to calculate the current numerically. We show the current and the stability diagrams, i.e., the differential conductance $\frac{\mbox{d}I}{\mbox{d}V_{b}}(V_b,V_g)$ as a function of the gate and the bias voltage. The stability diagrams give a clear indication of whether the involved groundstate in the transition $4n+1\longleftrightarrow 4n+2$ is the $\ket{a}$ state or the triplet. In the case of antiparallel lead magnetization we find negative differential conductance (NDC) for transitions involving the $\ket{a}$ state. We also observe NDC for transitions involving the $\ket{a}$ state or the triplet if an external magnetic field is applied. \newline The current as a function of the gate and the bias voltage is shown in Fig. \ref{fig:CURRENT}a) for the $\ket{a}$ groundstate and in Fig. \ref{fig:CURRENT}b) for the triplet groundstate. \begin{figure} \includegraphics[width=\columnwidth]{9.eps} \caption{\label{fig:CURRENT} Current versus gate and bias voltages for unpolarized leads. In total 176 states have been included, which corresponds to all states with at most one bosonic excitation. For $4n+2$-filling this amounts to 32 different states. a) Band-mismatch $\epsilon_{\Delta}=0.3\ \epsilon_0$ corresponding to an $S=0$ groundstate for the $4n+2$ filling. b) Band-mismatch $\epsilon_{\Delta}=0$ corresponding to an $S=\hslash$ groundstate at filling $4n+2$. In both cases a 4-electron periodicity of the Coulomb diamonds is observed.} \end{figure} \vspace{0cm} All states with up to one bosonic excitation have been included in the calculation. A 4-electron periodicity of the Coulomb diamonds is clearly seen. The change in color indicates a change in current and therefore the opening of a new channel. At high bias a smearing of the transitions due to the multitude of bosonic excitations is observed. In the remainder of this section we focus on the gate voltage region relevant for the $4n+1\longleftrightarrow4n+2$ transitions. In the plots of the differential conductance reported in the following we did not include the bosonic excitations, to avoid a multitude of transitions not relevant for the coming discussion. A polarization $P=0.9$ is chosen. \subsection{Differential conductance at zero magnetic field} Figs. \ref{fig:singlet_Vg327-339_Vb0-14_B0}a) and \ref{fig:singlet_Vg327-339_Vb0-14_B0}b) show the stability diagrams for parallel and antiparallel lead magnetization, respectively, for the case of the $\ket{a}$ groundstate. The two transition lines \textit{h} and \textit{e} are emphasized by dashed lines because they are so weak that it was not possible to resolve them together with the other, stronger lines. The most obvious difference between the parallel and the antiparallel setup is the weakness of all transition lines beyond the triplet occupation (line \textit{b}) for antiparallel lead magnetization. Moreover, an NDC line (line \textit{b}), not present in the parallel magnetization case, is observed.
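\noindent The origin of such NDC features can be made transparent with a minimal rate-equation toy model; the three states and the numerical rates below are illustrative choices of ours and are not the actual rates entering the GME. State 0 represents the $4n+1$ groundstate, state 1 a fast $4n+2$ channel, and state 2 a slow, trapping $4n+2$ channel (standing for $\ket{t_{-1}}$): as soon as the trapping channel opens, the stationary occupation piles up there and the current drops.
\begin{verbatim}
import numpy as np

def stationary_current(g_in1, g_out1, g_in2, g_out2):
    # Pauli master equation dp/dt = W p for three dot states;
    # out-rates are assumed to go entirely to the drain (large bias).
    W = np.array([[-(g_in1 + g_in2),  g_out1,  g_out2],
                  [  g_in1,          -g_out1,  0.0   ],
                  [  g_in2,           0.0,    -g_out2]])
    # stationary state: W p = 0 supplemented by sum(p) = 1
    A = np.vstack([W, np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    p, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g_out1*p[1] + g_out2*p[2]   # current in units of e

# only the fast channel open: current ~ 0.5
print(stationary_current(1.0, 1.0, 0.0, 1.0))
# trapping channel open (rare entry, very slow exit): current ~ 0.05
print(stationary_current(1.0, 1.0, 0.2, 0.01))
\end{verbatim}
\noindent Although grossly simplified, this captures the behavior discussed in the following: the current is reduced, not enhanced, once the blocking channel becomes energetically available.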
\begin{figure} \includegraphics[width=\columnwidth]{10.eps} \caption{\label{fig:singlet_Vg327-339_Vb0-14_B0} Differential conductance for transitions between $4n+1\longleftrightarrow4n+2$ filling in the $\ket{a}$ - groundstate. The polarization has been chosen to be $P=0.9$. The four lowest lying states for $4n+1$ and the six ones for $4n+2$ filling were included. The vertical white line is the bias trace we follow to explain the distinct transition lines in Fig. \ref{fig:Energiediagramm_c1_0_B0}. a) The leads are magnetized in parallel. b) Antiparallel magnetized leads. We observe a different intensity of the excitation lines between parallel and antiparallel magnetization. In particular a pronounced negative differential conductance (NDC) occurs in correspondence of the transition between $\ket{\sigma,\cdot}$ and the triplet (line $b$).} \end{figure} In order to explain the line positions in Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0}a),b) we provide a schematic drawing in Fig. \ref{fig:Energiediagramm_c1_0_B0} which is based on a bias trace at the particular gate voltage which aligns the groundstates (white vertical lines in Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0}). The differently colored arrows stand for new transport channels that open at certain bias voltages. The channels open in the order of \textit{a} to \textit{e} for transitions from $4n+1\longrightarrow4n+2$ (dashed arrows) and \textit{f} to \textit{h} for transitions from $4n+2\longrightarrow4n+1$ (solid arrows). Sometimes opening of a new channel also opens other channels that have been blocked before and one does not see distinct lines for these transitions. Fig. \ref{fig:Energiediagramm_c1_0_B0} relates the concerned transitions to the required bias voltages. Moreover, the line \textit{g} stands for transitions between the triplet and the $\ket{\cdot,\sigma}$ states, i.e., it is a transition between excited states. To explain the NDC in Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0}b) which follows upon line \emph{b} in the range between lines \textit{f} and line \textit{g}, we observe that --\,in correspondence of the \emph{b} line\,-- below the resonance only the transitions from $\ket{\sigma,\cdot}$ to the $\ket{a}$ state is possible. Above resonance also the triplet $\ket{t}$ is accessible. For the case of antiparallel polarization, both provide only weak transport channels: below the resonance transport is mostly mediated by $\uparrow$ - electrons (see also sketch of Fig. \ref{fig:singlet_0}) which are minority electrons for the source contact; above resonance, after some tunneling processes the system will always end up in the $\ket{t_{-1}}$ state which is a trapping state. Just at the exact resonance, the thermal energy allows electrons to tunnel forth and back, i.e., a $\downarrow$ - electron has the possibility to tunnel back into the source contact and transport is slightly enhanced. Once the bias voltage exceeds the exact resonance the trapping state $\ket{t_{-1}}$ gets occupied for long times and the current diminishes again. \newline \begin{figure} \includegraphics[width=7cm]{11.eps} \caption{\label{fig:Energiediagramm_c1_0_B0} Schematic drawing for the possible transitions occurring by sweeping the bias voltage at the gate voltage that aligns the $\ket{\sigma,\cdot}$ and the $\ket{a}$-states (white dashed line in Fig. 
\ref{fig:singlet_Vg327-339_Vb0-14_B0}).} \end{figure} The fact that the $\ket{\downarrow,\cdot}\longleftrightarrow\ket{t_{-1}}$ transition serves as the major transport channel once it has been opened is also the reason why all transition lines above line \textit{b} are so weak. \newline In Figs. \ref{fig:trip_Vg328-338_Vb0-14_B0}a) and \ref{fig:trip_Vg328-338_Vb0-14_B0}b) the stability diagrams for the $S=\hslash$ triplet groundstate are shown. They look a lot simpler than the ones in Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0} due to the absence of a band-mismatch, causing a degeneracy of all four $4n+1$ filling groundstates. Line \textit{a} is the groundstate to groundstate transition. Lines \textit{b} to \textit{d} indicate transitions from the $4n+1$ groundstates to $\ket{a}$, $\ket{s}$ and $\ket{b}$, respectively. \begin{figure} \includegraphics[width=\columnwidth]{12.eps} \caption{\label{fig:trip_Vg328-338_Vb0-14_B0} Differential conductance for transitions between $4n+1\longleftrightarrow4n+2$ filling in the triplet groundstate. The polarization has been chosen to be $P=0.9$. The four lowest lying states were included for $4n+1$ and the six lowest ones for $4n+2$. a) Leads parallel magnetized. b) Leads polarized antiparallel. From the stability diagrams it is possible to directly extract the exchange parameters $u^+$ and $J$ since the bias voltage $V_b/2=u^+$ is needed to open transition line \textit{b} and $V_b/2=J$ to open line \textit{c}.} \end{figure} They come in the expected order, at an applied voltage $V_b/2$ equal to $u^+$, $J$ and $J+u^+$, as it is shown in table \ref{States}. Line \textit{e} stands for the transition from the triplet to one of the $4n+1$ groundstates. \newline For the antiparallel setup, Fig. \ref{fig:trip_Vg328-338_Vb0-14_B0}b), we may see the same effect as we have observed in Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0}b), i.e., all lines beyond the transition to the triplet decrease in intensity. Since the triplet is the groundstate, this means all excitation lines are weak and may not be resolved in the figure. \subsection{Differential conductance in parallel magnetic field\label{B}} Here we present results for an applied magnetic field of strength $E_z=0.1$ meV, Fig. \ref{fig:c1_0_B01}. The leads are parallel magnetized and a polarization of $P=0.6$ has been applied. The magnetic field removes the spin degeneracy of the triplet as well as of the $4n+1$ filled states; the resulting Zeeman split transitions are clearly seen in Fig. \ref{fig:c1_0_B01} a) and are less well resolved in Fig. \ref{fig:c1_0_B01} b). \begin{figure} \includegraphics[width=\columnwidth]{13.eps} \caption{\label{fig:c1_0_B01} Differential conductance for transitions between $4n+1\longleftrightarrow4n+2$ filling with an applied magnetic field of $E_z=0.1$ meV. A parallel lead magnetization was assumed with the polarization $P=0.6$. a) $\ket{a}$ - groundstate. Soon after line \textit{c} an NDC effect is observed due to the occupation of the $\ket{t_{-1}}$ trapping state. b) Triplet groundstate. After lines \textit{b} and \textit{c} NDC occurs due to an increased population of the $\ket{t_{-1}}$ state.} \end{figure} Explicitly, for the $\ket{a}$ groundstate, line \textit{b} from Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0} splits into lines \textit{b} and \textit{c} in Fig. \ref{fig:c1_0_B01}. 
We notice that line \textit{c} shows an NDC effect due to the opening of the channel $\ket{\downarrow,\cdot}\longrightarrow\ket{t_{-1}}$: though this transition, as mediated by minority $\downarrow$ - electrons, is rare, once it happens the system is trapped in the $\ket{t_{-1}}$ state for a long time due to the parallel polarization of the leads. For transitions from $4n+2$ to $4n+1$, line \textit{k} is a new line that was Coulomb blocked in Fig. \ref{fig:singlet_Vg327-339_Vb0-14_B0}. It denotes the transition $\ket{s}\longrightarrow\ket{\cdot,\downarrow}$ and ends in line \textit{e} since the $\ket{s}$ state must be populated. Also, we notice the absence of the $\ket{s}\longrightarrow\ket{\uparrow,\cdot}$ line since it is Coulomb blocked by the groundstate to groundstate transition (line \textit{j}). For the $S=\hslash$ triplet groundstate, Fig. \ref{fig:c1_0_B01}b), we observe that line \textit{b} and line \textit{c} show NDC effects. Line \textit{b} represents the transitions $\ket{\cdot,\uparrow}\longrightarrow\ket{t_0}$ or $\ket{\uparrow,\cdot}\longrightarrow\ket{t_0}$, where $\ket{t_0}$ is not a trapping state. However, the applied bias voltage is sufficient to also populate the $\ket{\cdot,\downarrow}$ and $\ket{\downarrow,\cdot}$ states from $\ket{t_0}$ and subsequently, from $\ket{\cdot,\downarrow}$ and $\ket{\downarrow,\cdot}$, the trapping state $\ket{t_{-1}}$. This process is also visualized in Fig. \ref{fig:Energiediagramm2}. \begin{figure} \includegraphics[width=7cm]{14.eps} \caption{\label{fig:Energiediagramm2} Schematic drawing of the possible transitions if the $\ket{\cdot,\uparrow}$ and $\ket{\uparrow,\cdot}$ states are aligned to the $\ket{t_1}$ state by the gate voltage at finite magnetic field and in the triplet groundstate. It provides the explanation for the transition lines observed in the inset of Fig. \ref{fig:c1_0_B01}b).} \end{figure} In the very same way it is possible to get trapped in the $\ket{t_{-1}}$ state via the $\ket{a}$ state, indicated by line \textit{c}. \subsection{The magnetic field sweep} In a seminal experiment Moriyama et al.\cite{Mor1} demonstrated a transition from an $S=0$ groundstate to an $S_z=\hbar$ groundstate upon a magnetic field sweep in a SWNT quantum dot. In this section we compute the differential conductance in a gate-voltage and magnetic field plot both for unpolarized, as in \cite{Mor1}, and parallel polarized leads with $P=0.9$. We start from the $\ket{a}$ groundstate at $B=0$ with a band-mismatch of $0.24\ \epsilon_0$ (smaller than we previously used). This choice yields a change of groundstate from $\ket{a}$ to the triplet at a magnetic field $\simeq6$ T as measured experimentally\cite{Mor1}. To observe clearly visible patterns, we increased the temperature by a factor of ten compared to Tab. \ref{Values}. \begin{figure} \includegraphics[width=\columnwidth]{15.eps} \caption{\label{fig:singlet_Moriyama_Vg320-336_Vb0-14_B0-85} a) Differential conductance d$I$/d$V_g$ for a $B$-field sweep in the $\ket{a}$ - groundstate case. The applied bias voltage was fixed at $5.8\ \mbox{meV}$. Red lines indicate transitions that become possible at a certain gate voltage and blue lines show a transition that drops out of the transport window. The "V"-shaped patterns a and b represent transitions from $N_c=4n$ to $\ket{\sigma,\cdot}$ and $\ket{\cdot,\sigma}$, respectively. Each of the patterns is split by $2E_z$ denoting $\uparrow$ - electrons and $\downarrow$ - electrons tunneling in.
At line $c$ we enter the $N_c=4n+1$ Coulomb diamond and transport gets suppressed. Line $d$ stands for the groundstate to groundstate transition from $\ket{\uparrow,\cdot}$ to the $\ket{a}$ - state. The "V"-shaped pattern $e$ is due to the transition $N_c=4n+1$ to the triplet whereas $f$ and $g$ denote transitions to the $\ket{s}$ - singlet and the $\ket{b}$ state, respectively. At the point $P$ the groundstate changes from the $\ket{a}$ state to the $\ket{t_1}$ - triplet. b) Ferromagnetic leads, polarized in parallel with $P=0.9$, are assumed. This changes the intensity of the transitions, while their positions are preserved. Moreover, transitions to excited states involving spin-down electrons are disfavored channels and hence converted from positive to negative differential conductance lines.} \end{figure} The result of our calculation is presented in Fig. \ref{fig:singlet_Moriyama_Vg320-336_Vb0-14_B0-85}a). At a gate voltage of approximately $0.322\ \mbox{meV}$ and $0.323\ \mbox{meV}$ we have two $V$-shaped transition patterns (\textit{a} and \textit{b}) each of width $2E_z=2\mu_BB$. The separation between $a$ and $b$ at zero field is the band-mismatch $\epsilon_\Delta$. Interestingly, for polarized leads, the branches belonging to transitions involving $(\ket{\downarrow,\cdot},\ \ket{\cdot,\downarrow})$, corresponding to the positive slope of the $"V"$, are NDC lines, Fig. \ref{fig:singlet_Moriyama_Vg320-336_Vb0-14_B0-85}b). The reason is the same as addressed already in section \ref{B}: once the $\downarrow$ - channel becomes available, there is some chance that from time to time a minority charge carrier ($\downarrow$ - electron) enters from the source. As the drain is polarized in parallel to the source, it will take quite a while until this electron can leave the SWNT again, such that transport gets hindered. At the gate voltage of approximately $0.328\ \mbox{meV}$, one enters the $N_c=4n+1$ Coulomb diamond (line \textit{c}) and transport gets completely suppressed. The dot is in the groundstate $ \ket{\uparrow,\cdot}$ at $B\neq0$. At $V_g\simeq0.329\ \mbox{meV}$ transport from $N_c=4n+1$ to the $\ket{a}$ state is enabled (line \textit{d}). The next transitions (patterns \textit{e}, \textit{f}, \textit{g}) we observe are again split by $2E_z$ and therefore shaped like a "$V$". In all cases, the positively sloped branches are now again of NDC nature for a parallel lead polarization. The first "$V$" belongs to the triplet (pattern \textit{e}) and is of stronger intensity than the following two patterns. The transitions $\ket{\uparrow,\cdot}\longleftrightarrow\ket{t_1}$ and $\ket{\downarrow,\cdot}\longleftrightarrow\ket{t_0}$ contribute to the negative sloped part, while $\ket{\uparrow,\cdot}\longleftrightarrow\ket{t_0}$ and $\ket{\downarrow,\cdot}\longleftrightarrow\ket{t_{-1}}$ are responsible for the positive shaped line. The crossing of the $e$ and $d$ lines occurring at $B\cong6\ \mbox{T}$, point $P$, indicates \emph{the change in the groundstate from $\ket{a}$ to the state $\ket{t_1}$.} From the triplet pattern \textit{e} the additional gate voltage equal to the exchange energy $J$ is needed to arrive at the last two "$V$" - shaped patterns $f$ and $g$. Compared to the lines for the triplet transition they are quite close to each other and of less intensity. These lines belong to a transition from both the $\ket{\downarrow,\cdot}$ and the $\ket{\uparrow,\cdot}$ states to the $\ket{s}$ - singlet (pattern \textit{f}) and the $\ket{b}$ state (pattern \textit{g}). 
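For orientation, a numerical estimate (added here, using only the relation $2E_z=2\mu_B B$ quoted above and $\mu_B\simeq5.8\times10^{-2}$ meV/T): at the crossing field the width of the "V"-shaped patterns is \[ 2E_z\big|_{B\simeq6\ \mathrm{T}}=2\mu_B B\simeq0.7\ \mbox{meV}, \] while the Zeeman energy $E_z=0.1$ meV used in Sec. \ref{B} corresponds to $B=E_z/\mu_B\simeq1.7$ T.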
Finally, the lines on the right edges of the plots are mirror images and belong to backward transitions from $N_c=4n+2$ to $N_c=4n+1$; for this reason they mark a decrease of current for both polarized and unpolarized leads. \section{Conclusions} In summary, we have calculated spin-dependent transport through fully interacting SWNTs in both the linear and the nonlinear regime, with and without an applied magnetic field. Peculiar to metallic SWNTs of small diameter is the possibility, due to exchange interactions, to find the system at $4n+2$ filling either in a groundstate of total spin $S=0$ or $S=\hbar$. Which of the two groundstates occurs in a real nanotube depends on the relation between the exchange energy and the orbital band mismatch. Thus, with focus on transitions involving $4n+1 \longleftrightarrow 4n+2 $ filling, we investigated both situations and demonstrated pronounced differences in the current-voltage characteristics depending on the considered groundstate. \newline For example, in the linear regime the conductance for parallel lead magnetization and finite magnetic field increases upon raising the polarization for the case of a triplet groundstate, but it decreases for the $S=0$ groundstate. This is due to the fact that for the triplet groundstate transport is dominated by a channel involving the triplet state $\ket{t_1}$ (with both spins $\uparrow$); for the $S=0$ case, transport mediated by the majority electrons requires the use of the lowest $4n+1$ excited state $\ket{\downarrow,\cdot}$, which is Zeeman split from the ground state and hence less favorable. In the nonlinear regime we presented stability diagrams with parallel and antiparallel lead magnetization for both groundstates. In the antiparallel case it was possible to observe a negative differential conductance (NDC) effect for the $S=0$ groundstate, following immediately upon a conductance enhancement at the opening of a trapping channel to the excited triplet state $\ket{t_{-1}}$. Directly at that resonance, electrons can, just by thermal activation, tunnel back \emph{and} forth, such that trapping in the $\ket{t_{-1}}$ state cannot yet act, leading to an intermediate conductance increase. Away from resonance, the blocking effect fully occurs, resulting in the NDC. By adding an external magnetic field in the parallel setup we found NDC effects for both groundstates, caused by spin blocking mediated by $\downarrow$ - channels, involving in particular the triplet state $\ket{t_{-1}}$. Finally, we also presented results for the differential conductance in a gate-voltage and magnetic field map at finite bias. These magnetic field sweeps immediately allow one to recognize the nature of the $4n+2$-filling groundstate at zero field, as well as to tune the nature of the groundstate from $S=0$ to $ S_z=\hbar$ upon variation of the field amplitude. Our results for unpolarized leads are in \emph{quantitative} agreement with experiments on a small-diameter SWNT by Moriyama et al. \cite{Mor1}. Importantly, the sweep at zero field also allows one to read off directly the values of the short-range interactions $J$ and $u^+$. Specifically, $J$ is the singlet-triplet exchange splitting and $u^+$ characterizes, at zero orbital mismatch, the energy difference between two of the low energy states of total spin $S=0$. In the presence of polarized leads the magnetic field sweep also reveals lines of NDC due to the trapping nature of all $\downarrow$ - channels. 
The predictions of our theory are in quantitative agreement with experimental results obtained so far for unpolarized leads \cite{Mor1,Sap,Liang}. Due to recent achievements in spin-polarized transport in SWNTs \cite{Sahoo,Man,Haupt}, our predictions on spin-dependent transport are within the reach of present experiments. \section{Acknowledgments} We acknowledge support by the DFG under the funding programs SFB 689 and GRK 638.
\section{Introduction} Non-Linear Feedback Shift Registers (NLFSRs) are a generalization of Linear Feedback Shift Registers (LFSRs) in which the current state is a non-linear function of the previous state~\cite{Golomb_book}. While the theory behind LFSRs is well-understood, many fundamental questions related to NLFSRs remain open. The interest in NLFSRs is motivated by their ability to generate pseudo-random sequences which are hard to break with existing cryptanalytic methods~\cite{Ca05}. A common approach for encrypting confidential information is to use a {\em stream cipher} which combines plain text bits with a pseudo-random bit sequence~\cite{robshaw94stream}. The resulting encrypted information can be transformed back into its original form only by an authorized user possessing the cryptographic key. While LFSRs are widely used in testing and simulation~\cite{AbBF94}, for cryptographic applications their pseudo-random sequences are not secure. The structure of an $n$-bit LFSR can be easily deduced by observing $2n$ consecutive bits of its sequence~\cite{Ma69}. In contrast, an adversary might need $2^n$ bits of a sequence to determine the structure of the $n$-bit NLFSR which generates it~\cite{DuTT08}. A number of NLFSR-based stream ciphers for RFID and smartcard applications have been proposed, including Achterbahn~\cite{GaGK07}, Grain~\cite{hell-grain}, Dragon~\cite{CHM05}, Trivium~\cite{canniere-trivium}, VEST~\cite{cryptoeprint:2005:415}, and the cipher of~\cite{GaGK06}. Similarly to LFSRs, an NLFSR can be implemented either in the Fibonacci or in the Galois hardware configuration. In the former, the feedback is applied to the last bit of the register only, while in the latter the feedback can potentially be applied to every bit. The depth of the circuits implementing the feedback functions in a Galois configuration is usually smaller than that in the equivalent Fibonacci configuration~\cite{Du08}. This makes the Galois configuration more attractive for stream ciphers where high throughput is important. For example, by re-implementing the NLFSR-based stream cipher Grain~\cite{hell-grain} from the original Fibonacci to the Galois configuration, one can double the throughput with no penalty in area or power~\cite{Sh09}. In~\cite{Du08} it has been shown how to transform a Fibonacci NLFSR into an equivalent Galois NLFSR. While the resulting NLFSRs generate the same sets of output sequences, they follow different sequences of states and normally start from a different initial state. The relations between the sequences of states and between the initial states of the two configurations are studied in this paper. One reason for studying the relation between sequences of states is that some NLFSR-based stream ciphers use not only the output of an NLFSR, but also several other bits of its state, to produce a pseudo-random sequence. If a Fibonacci to Galois transformation is applied to an NLFSR-based stream cipher, it is important to know which bits of the state are affected by the transformation in order to preserve the original algorithm. Changing the algorithm is likely to influence the security of a cipher. For the same reason, we need to map the secret key and the initial value (IV) of the original cipher into the corresponding ones of the transformed cipher. Finally, knowing which initial state of the Galois configuration matches a given initial state of the Fibonacci configuration makes it possible to validate the equivalence of the two configurations by simulation. The paper is organized as follows. 
Section~\ref{back} gives an introduction to NLFSRs and describes the Fibonacci to Galois transformation. In Section~\ref{main1}, we study the relation between the sequences of states generated by two equivalent NLFSRs. Section~\ref{main2} shows how to compute the initial state for the Galois configuration which matches a given initial state of the Fibonacci configuration. Section~\ref{conc} concludes the paper and discusses open problems. \section{Background} \label{back} In this section, we give an introduction to NLFSRs and briefly describe the transformation from the Fibonacci to the Galois configuration. For more details, the reader is referred to~\cite{Du08}. \subsection{Definition of NLFSRs} A {\em Non-Linear Feedback Shift Register (NLFSR)} consists of $n$ binary storage elements, called {\em bits}. Each bit $i \in \{0,1,\ldots,n-1\}$ has an associated {\em state variable} $x_i$ which represents the current value of the bit $i$ and a {\em feedback function} $f_i: \{0,1\}^n \rightarrow \{0,1\}$ which determines how the value of bit $i$ is updated. For any $i \in \{0,1,\ldots,n-1\}$, $f_i$ depends on $x_{(i+1) mod~n}$ and a subset of variables from the set $\{x_0, x_1, \ldots, x_i\}$. A {\em state} of an NLFSR is an ordered set of values of its state variables $(x_0, x_1, \ldots, x_{n-1})$. At every clock cycle, the next state is determined from the current state by updating the values of all bits simultaneously to the values of the corresponding $f_i$'s. The {\em output} of an NLFSR is the value of its 0th bit. The {\em period} of an NLFSR is the length of the longest cyclic output sequence it produces. If for all $i \in \{0,1,\ldots,n-2\}$ the feedback functions are of type $f_i = x_{i+1}$, we say that the NLFSR is of the {\em Fibonacci} type. Otherwise, it is of the {\em Galois} type. Two NLFSRs are {\em equivalent} if their sets of output sequences are equivalent. Feedback functions of NLFSRs are usually represented using the algebraic normal form. The {\em algebraic normal form (ANF)} of a Boolean function $f: \{0,1\}^n \rightarrow \{0,1\}$ is a polynomial in $GF(2)$ of type \[ f(x_0, \ldots, x_{n-1}) = \sum_{i=0}^{2^n-1} c_i \cdot x_0^{i_0} \cdot x_1^{i_1} \cdot \ldots \cdot x_{n-1}^{i_{n-1}}, \] where $c_i \in \{0,1\}$ and $(i_0 i_1 \ldots i_{n-1})$ is the binary expansion of $i$ with $i_0$ being the least significant bit. Throughout the paper, we call a term of the ANF a {\em product-term}. \subsection{The transformation from the Fibonacci to the Galois configuration} \label{prel} Let $f_i$ and $f_j$ be feedback functions of bits $i$ and $j$ of an $n$-bit NLFSR, respectively. The operation {\em shifting}, denoted by $f_i \stackrel{P}{\rightarrow} f_j$, moves a set of product-terms $P$ from the ANF of $f_i$ to the ANF of $f_j$. The index of each variable $x_k$ of each product-term in $P$ is changed to $x_{(k-i+j)~\mbox{\small{mod}}~n}$. The {\em terminal bit} $\tau$ of an $n$-bit NLFSR is the bit with the maximal index which satisfies the following condition: \[ \mbox{For all bits $i$ such that $i < \tau$, $f_i$ is of type $f_i = x_{i+1}$.} \] An $n$-bit NLFSR is {\em uniform} if the following two conditions hold: \begin{enumerate} \item[(a)] all its feedback functions are {\em singular} functions of type \[ f_i(x_0,\ldots,x_{n-1}) = x_{(i+1) mod~n} \oplus g_i(x_0,\ldots,x_{n-1}), \] where $g_i$ does not depend on $x_{(i+1) mod~n}$, \item[(b)] for all its bits $i$ such that $i > \tau$, the index of every variable of $g_i$ is not larger than $\tau$. 
\end{enumerate} \begin{theorem} \cite{Du08} Given a uniform NLFSR with the terminal bit $\tau$, a shifting $g_{\tau} \stackrel{P}{\rightarrow} g_{\tau'}$, $\tau' < \tau$, results in an equivalent NLFSR if the transformed NLFSR is uniform as well. \end{theorem} \section{The Relation Between Sequences of States} \label{main1} Although a Fibonacci NLFSR and a Galois NLFSR can generate the same output sequence, they follow different sequences of states. Therefore, in order to generate the same output sequence, they normally have to be set to different initial states. In this section we study the relation between sequences of states produced by two equivalent NLFSRs and derive a basic property which will be used to prove the main result of the paper. Let $s = (s_0,s_1,\ldots,s_{n-1})$ be a state of an NLFSR, $s_i \in \{0,1\}$. Throughout the paper, we use $g_i(s)$ to denote the value of the function $g_i$ evaluated for the vector $s$. We also use $g_i|_{+m}$ to denote the function obtained from the function $g_i$ by increasing the indexes of all variables of $g_i$ by $m$. For example, if $g_1 = x_1 \cdot x_2 \oplus x_3$, then $g_1|_{+2} = x_3 \cdot x_4 \oplus x_5$. To simplify the exposition, we do not list the variables of a function explicitly if it does not cause any ambiguity, i.e. in the previous example we wrote $g_1$ instead of $g_1(x_1,x_2,x_3)$. \begin{lemma} \label{l_main} Let $N_1$ be an $n$-bit uniform NLFSR with the terminal bit $\tau$, $0 < \tau \leq n-1$, which has the feedback function of type \[ f_{\tau} = x_{(\tau+1) mod~n} \oplus g_{\tau} \oplus p_{\tau} \] and let $N_2$ be an equivalent uniform NLFSR obtained from $N_1$ by shifting from $\tau$ to $\tau-1$ the set of product-terms represented by the function $p_{\tau}$. If $N_1$ is initialized to a state $s = (s_0,s_1,\ldots,s_{n-1})$ and $N_2$ is initialized to the state $(s_0,s_1,\ldots,s_{\tau-1},r_{\tau},s_{\tau+1},\ldots,s_{n-1})$, where \begin{equation} \label{er1} r_{\tau} = s_{\tau} \oplus p_{\tau}|_{-1}(s) \end{equation} then they generate sequences of states which differ in the bit $\tau$ only. \end{lemma} {\bf Proof:} Suppose that $N_1$ is initialized to a state $s = (s_0,s_1,\ldots,s_{n-1})$ and $N_2$ is initialized to a state $r = (r_0,r_1,\ldots,r_{n-1})$, such that $r_i = s_i$ for all $i$ except $i = \tau$ and $r_{\tau}$ is given by (\ref{er1}). On one hand, for $N_1$, the next state is $s^+ = (s^+_0,s^+_1,\ldots,s^+_{n-1})$ such that \[ \begin{array}{l} s^+_{n-1} = s_0 \oplus g_{n-1}(s_1,s_2,\ldots,s_{\tau-1}) \\ \ldots \\ s^+_{\tau} = s_{\tau+1} \oplus g_{\tau}(s_0,s_1,\ldots,s_{\tau-1}) \oplus p_{\tau}(s_1,s_2,\ldots,s_{\tau}) \\ s^+_{\tau-1} = s_{\tau} \\ \ldots \\ s^+_0 = s_1. \\ \end{array} \] Note that, since $N_1$ is uniform, the functions $g_{n-1}, g_{n-2}, \ldots, g_{\tau}$ may only depend on variables with indexes between $0$ and $\tau$. Furthermore, $g_{n-1}, g_{n-2}, \ldots, g_{\tau}$ cannot depend on the variable $x_\tau$, since otherwise $N_2$ would not be uniform after shifting. For the same reason, the function $p_{\tau}$ cannot depend on the variable $x_0$. On the other hand, for $N_2$, the next state is $r^+ = (r^+_0,r^+_1,\ldots,r^+_{n-1})$, where \[ \begin{array}{l} r^+_{n-1} = r_0 \oplus g_{n-1}(r_1,r_2,\ldots,r_{\tau-1}) \\ \ldots \\ r^+_{\tau} = r_{\tau+1} \oplus g_{\tau}(r_0,r_1,\ldots,r_{\tau-1}) \\ r^+_{\tau-1} = r_{\tau} \oplus p_{\tau}|_{-1}(r_0,r_1,\ldots,r_{\tau-1}) \\ r^+_{\tau-2} = r_{\tau-1} \\ \ldots \\ r^+_0 = r_1. 
\\ \end{array} \] By substituting $r_i = s_i$ for all $i$ except $i = \tau$, we get: \[ \begin{array}{l} r^+_{n-1} = s_0 \oplus g_{n-1}(s_1,s_2,\ldots,s_{\tau-1}) \\ \ldots \\ r^+_{\tau} = s_{\tau+1} \oplus g_{\tau}(s_0,s_1,\ldots,s_{\tau-1}) \\ r^+_{\tau-1} = r_{\tau} \oplus p_{\tau}|_{-1}(s_0,s_1,\ldots,s_{\tau-1}) \\ \ldots \\ r^+_0 = s_1. \\ \end{array} \] By substituting $r_{\tau}$ by (\ref{er1}), we get \[ \begin{array}{ll} r^+_{\tau-1} & = s_{\tau} \oplus p_{\tau}|_{-1}(s_0,s_1,\ldots,s_{\tau-1}) \oplus p_{\tau}|_{-1}(s_0,s_1,\ldots,s_{\tau-1}) \\ & = s_{\tau}. \\ \end{array} \] So, the next state of $N_2$ is \[ \begin{array}{l} r^+_{n-1} = s^+_{n-1} \\ \ldots \\ r^+_{\tau} = s_{\tau+1} \oplus g_{\tau}(s_0,s_1,\ldots,s_{\tau-1}) \\ r^+_{\tau-1} = s^+_{\tau-1} \\ \ldots \\ r^+_0 = s^+_1 \\ \end{array} \] i.e. the next states of $N_1$ and $N_2$ can potentially differ only the bit position $\tau$. In order to extend this conclusion to a sequence of states, it remains to show that the resulting $r^+_{\tau}$ can be expressed according to (\ref{er1}). From \[ s^+_{\tau} = s_{\tau+1} \oplus g_{\tau}(s_0,s_1,\ldots,s_{\tau-1}) \oplus p_{\tau}(s_1,s_2,\ldots,s_{\tau}) \\ \] we can derive \[ s_{\tau+1} = s^+_{\tau} \oplus g_{\tau}(s_0,s_1,\ldots,s_{\tau-1}) \oplus p_{\tau}(s_1,s_2,\ldots,s_{\tau}). \] Substituting it to the expression of $r^+_{\tau}$ above and eliminating the double occurrence of $g_{\tau}(s_0,s_1,\ldots,s_{\tau-1})$, we get \[ r^+_{\tau} = s^+_{\tau} \oplus p_{\tau}(s_1,s_2,\ldots,s_{\tau}) \] Since $p_{\tau}(s_1,s_2,\ldots,s_{\tau}) = p_{\tau}|_{-1}(s^+_0,s^+_1,\ldots,s^+_{\tau-1})$, we get \[ r^+_{\tau} = s^+_{\tau} \oplus p_{\tau}|_{-1}(s^+) \] \begin{flushright} $\Box$ \end{flushright} As an example, consider the following 4-bit NLFSR $N_1$: \[ \begin{array}{l} f_3 = x_0 \oplus x_1 \\ f_2 = x_3 \oplus x_1 \oplus x_0 x_1 \\ f_1 = x_2 \\ f_0 = x_1. \end{array} \] which has the period 15. Suppose we shift the product term $x_1$ from the bit 2 to the bit 1. Then we get the following equivalent NLFSR $N_2$: \[ \begin{array}{l} f_3 = x_0 \oplus x_1 \\ f_2 = x_3 \oplus x_0 x_1 \\ f_1 = x_2 \oplus x_0 \\ f_0 = x_1. \end{array} \] The sequences of states of $N_1$ and $N_2$ are shown in the 1st and 2nd columns of Table~\ref{ex}. The initial states of $N_1$ and $N_2$ are $(s_3 s_2 s_1 s_0) = (0001)$ and $(r_3 r_2 r_1 r_0) = (0101)$, respectively. According to Lemma~\ref{l_main}, we have $r_0 = s_0$, $r_1 = s_1$, $r_2 = s_2 \oplus s_0$, and $r_3 = s_3$. As we can see, these sequences differ in the bit 2 only, which is the terminal bit of $N_1$. \begin{table}[t] \begin{center} \caption{Sequences of states of three equivalent 4-bit NLFSRs.} \label{ex} \begin{tabular}{|c|c|c|} \hline \multicolumn{2}{|c|}{Galois} & Fibonacci \\ \hline NLFSR $N_1$ & NLFSR $N_2$ & NLFSR $N_3$ \\ $x_3 x_2 x_1 x_0$ & $x_3 x_2 x_1 x_0$ & $x_3 x_2 x_1 x_0$ \\ \hline 0 0 0 1 & 0 1 0 1 & 0 0 0 1 \\ 1 0 0 0 & 1 0 0 0 & 1 0 0 0 \\ 0 1 0 0 & 0 1 0 0 & 0 1 0 0 \\ 0 0 1 0 & 0 0 1 0 & 1 0 1 0 \\ 1 1 0 1 & 1 0 0 1 & 1 1 0 1 \\ 1 1 1 0 & 1 1 1 0 & 0 1 1 0 \\ 1 0 1 1 & 1 1 1 1 & 1 0 1 1 \\ 0 1 0 1 & 0 0 0 1 & 0 1 0 1 \\ 1 0 1 0 & 1 0 1 0 & 0 0 1 0 \\ 1 0 0 1 & 1 1 0 1 & 1 0 0 1 \\ 1 1 0 0 & 1 1 0 0 & 1 1 0 0 \\ 0 1 1 0 & 0 1 1 0 & 1 1 1 0 \\ 1 1 1 1 & 1 0 1 1 & 1 1 1 1 \\ 0 1 1 1 & 0 0 1 1 & 0 1 1 1\\ 0 0 1 1 & 0 1 1 1 & 0 0 1 1 \\ \hline \end{tabular} \end{center} \end{table} The following property follows trivially from Lemma~\ref{l_main}. 
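Before stating it, we note that the example above can be checked by direct simulation. The following short Python sketch (an illustration added here, not part of the formal development; it assumes, as in Section~\ref{back}, that bit $0$ is the output and that all bits are updated simultaneously) reproduces the first two columns of Table~\ref{ex} and the two properties of Lemma~\ref{l_main}:
\begin{verbatim}
# Minimal sketch: simulate the 4-bit Galois NLFSRs N1 and N2 of the
# example and check that (i) their outputs coincide and (ii) their
# state sequences differ in bit 2 only.
def step(state, feedback):
    """state = (x0, x1, x2, x3); all bits are updated simultaneously."""
    return tuple(f(state) for f in feedback)

# Feedback functions, indexed by bit; x = (x0, x1, x2, x3).
N1 = [lambda x: x[1],                        # f0 = x1
      lambda x: x[2],                        # f1 = x2
      lambda x: x[3] ^ x[1] ^ (x[0] & x[1]), # f2 = x3 + x1 + x0*x1
      lambda x: x[0] ^ x[1]]                 # f3 = x0 + x1
N2 = [lambda x: x[1],                        # f0 = x1
      lambda x: x[2] ^ x[0],                 # f1 = x2 + x0
      lambda x: x[3] ^ (x[0] & x[1]),        # f2 = x3 + x0*x1
      lambda x: x[0] ^ x[1]]                 # f3 = x0 + x1

def run(feedback, state, steps=15):
    out, states = [], []
    for _ in range(steps):
        states.append(state)
        out.append(state[0])                 # the output is bit 0
        state = step(state, feedback)
    return out, states

# Initial states as (x0, x1, x2, x3): N1 starts in (s3 s2 s1 s0) = 0001,
# N2 in 0101, i.e. r2 = s2 + s0 as required by Lemma 1.
out1, st1 = run(N1, (1, 0, 0, 0))
out2, st2 = run(N2, (1, 0, 1, 0))
assert out1 == out2                          # same output sequence
assert all((a[0], a[1], a[3]) == (b[0], b[1], b[3])
           for a, b in zip(st1, st2))        # states differ in bit 2 only
print(''.join(map(str, out1)))               # 100010110100111
\end{verbatim}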
\begin{lemma} \label{l_e} Let $N_1$ be an $n$-bit uniform NLFSR with the terminal bit $\tau$, $0 < \tau \leq n-1$, which has the feedback function of type \[ f_{\tau} = x_{(\tau+1) mod~n} \oplus g_{\tau} \oplus p_{\tau} \] and let $N_2$ be an equivalent uniform NLFSR obtained from $N_1$ by shifting from $\tau$ to $\tau-1$ the set of product-terms represented by the function $p_{\tau}$. If $N_1$ is initialized to a state $s = (s_0,s_1,\ldots,s_{n-1})$ and $N_2$ is initialized to the state $(s_0,s_1,\ldots,s_{\tau-1},r_{\tau},s_{\tau+1},\ldots,s_{n-1})$, such that \begin{equation} \label{er} r_{\tau} = s_{\tau} \oplus p_{\tau}|_{-1}(s), \end{equation} then $N_1$ and $N_2$ generate the same output sequence. \end{lemma} As an example, consider the sequences of states of NLFSRs $N_1$ and $N_2$ shown in the 1st and 2nd columns of Table~\ref{ex}. Since their initial states $(0001)$ and $(0101)$ agree with Lemma~\ref{l_e}, $N_1$ and $N_2$ generate the same output sequence $100010110100111$. \section{The Mapping Between Initial States} \label{main2} This section presents the main result of the paper. \begin{theorem} \label{init_state} Let $N_F$ be an $n$-bit Fibonacci NLFSR and $N_G$ be an equivalent uniform Galois NLFSR with the terminal bit $0 \leq \tau < n-1$ and the feedback functions of type \begin{equation} \label{eg} \begin{array}{l} f_{n-1} = x_0 \oplus g_{n-1} \\ f_{n-2} = x_{n-1} \oplus g_{n-2} \\ \ldots \\ f_{\tau}= x_{\tau+1} \oplus g_{\tau} \\ f_{\tau-1} = x_{\tau} \\ \ldots\\ f_0 = x_1. \end{array} \end{equation} If $N_F$ is initialized to a state $s = (s_0,s_1,\ldots,s_{n-1})$ and $N_G$ is initialized to the state $(s_0,s_1,\ldots,s_{\tau},r_{\tau+1},r_{\tau+2},\ldots,r_{n-1})$ such that \[ r_i = s_i \oplus g_{i-1}(s) \oplus g_{i-2}|_{+1}(s) \oplus \ldots \oplus g_{\tau}|_{+i-\tau-1}(s) \] for all $i ‌ \in \{n-1,n-2,\ldots,\tau+1\}$, then $N_F$ and $N_G$ generate the same output sequence. \end{theorem} {\bf Proof:} From the definition of shifting, we can conclude that if, after the transformation, the Galois NLFSR has feedback functions of type~(\ref{eg}), then, the feedback function of the $n-1$th bit of the original Fibonacci NLFSR is of type: \[ f'_{n-1} = x_0 \oplus g_{n-1} \oplus g_{n-2}|_{+1} \oplus g_{n-3}|_{+2} \oplus \ldots \oplus g_{\tau}|_{+n-1-\tau}. \] Any uniform Galois NLFSR can be obtained by first shifting all product-terms of the original Fibonacci NLFSR but the ones represented by $g_{n-1}$ from the bit $n-1$ to the bit $n-2$, then shifting all product-terms but the ones represented by $g_{n-2}$ from the bit $n-2$ to the bit $n-3$, etc., i.e. using a sequence of $n-1-\tau$ shiftings by one bit. This means that, at each step, the set of product-terms represented by the function \begin{equation} \label{pr} p_{n-1-i} = g_{n-1-i-1}|_{+1} \oplus g_{n-1-i-2}|_{+2} \oplus \ldots \oplus g_{\tau}|_{+n-1-i-\tau} \end{equation} is shifted from the bit $n-1-i$ to the bit $n-1-i-1$, for $i \in \{0,1,\ldots,n-1-\tau-1 \}$. Furthermore, for each $i \in \{0,1,\ldots,n-1-\tau-1 \}$, by Lemma~\ref{l_e}, if the NLFSR before shifting is initialized to some state $s'$ and the NLFSR after shifting is initialized to the state where the bit $n-1-i$ has the value $s_{n-1-i} \oplus p_{n-1-i}|_{-1}(s')$ and all other bits have the same values as the corresponding bits of $s'$, then two NLFSRs generate the same output sequence. 
Therefore, we can conclude that if the original Fibonacci NLFSR $N_F$ is initialized to the state $s = (s_0,s_1,\ldots,s_{n-1})$ and the NLFSR $N_G$ obtained using the sequence of $n-1-\tau$ shiftings by one bit described above is initialized to the state $(s_0,s_1,\ldots,s_{\tau},r_{\tau+1},r_{\tau+2},\ldots,r_{n-1})$ such that \[ r_j = s_j \oplus p_{j}|_{-1}(s) \] for each $j \in \{n-1,n-2,\ldots,\tau+1\}$, where $p_j$ is defined by (\ref{pr}), then $N_F$ and $N_G$ generate the same output sequence. \begin{flushright} $\Box$ \end{flushright} Since the functions $g_{n-1}, g_{n-2}, \ldots, g_{\tau}$ of a uniform Galois NLFSR depend on variables with indexes between $0$ and $\tau$ only, the following property follows directly from Theorem~\ref{init_state}. \begin{lemma} \label{l1} Let $N_F$ be an $n$-bit Fibonacci NLFSR and $N_G$ be an equivalent uniform Galois NLFSR with the terminal bit $\tau$. If both $N_F$ and $N_G$ are initialized to any state $(s_0,s_1,\ldots,s_{n-1})$ such that $s_i = 0$ for all $i \in \{0,1,\ldots,\tau\}$, then they generate the same output sequence. \end{lemma} As an example, consider the 4-bit Fibonacci NLFSR $N_3$ with the feedback functions: \[ \begin{array}{l} f_3 = x_0 \oplus x_1 \oplus x_2 \oplus x_1 x_2 \\ f_2 = x_3 \\ f_1 = x_2 \\ f_0 = x_1 \end{array} \] which is equivalent to the Galois NLFSRs $N_1$ and $N_2$ from the previous example. The 3rd column of Table~\ref{ex} shows the sequence of states of $N_3$. The terminal bits of $N_1$ and $N_2$ are 2 and 1, respectively. Therefore, if $(1000)$ is used as an initial state (2nd row of Table~\ref{ex}), all three NLFSRs generate the same output sequence $000101101001111$. \section{Conclusion} \label{conc} In this paper, we establish a relation between the sequences of states generated by two equivalent NLFSRs and show how to compute the initial state for the Galois configuration which matches a given initial state of the Fibonacci configuration. Many fundamental problems related to NLFSRs remain open. Probably the most important one is finding a systematic procedure for constructing NLFSRs with a guaranteed long period. Available algorithms either consider some special cases~\cite{JaS04}, or are applicable to small NLFSRs only~\cite{Fr82}. The general problem is hard because there seems to be no simple algebraic theory supporting it. Specifically, so far no analog of a primitive generator polynomial has been found for the nonlinear case. \bibliographystyle{ieeetr}
\section{Introduction} The study of heavy fermion materials is an exciting area in physics, motivating sophisticated experimental work and giving rise to many new concepts and ideas \cite{si}. In particular, the fact that heavy fermions are close to a magnetic quantum critical point \cite{base} has brought a new range of possibilities to this field, both theoretically and experimentally. In the course of these investigations, as experimentalists aimed to reach ever closer to the antiferromagnetic quantum critical point (AFQCP) at even lower temperatures, came the exciting discovery of a superconducting dome encircling a putative AFQCP \cite{lonzarich}. The region of superconductivity in the phase diagram is restricted to a close neighborhood of the AFQCP. Even for the most skeptical it is hard not to admit that in this case superconductivity is due to quantum antiferromagnetic fluctuations associated with the QCP \cite{nature}. The theory of superconductivity mediated by spin fluctuations has progressed considerably in the last decades, mostly due to its relevance for high-temperature superconductivity \cite{mon1,este,monden,monmag,km}. In these theories, the paramagnon propagator describing critical antiferromagnetic fluctuations close to an AFQCP can be written in the scaling form \cite{este}, \begin{equation} \chi(q, \omega)=\frac{\chi_{S}}{i \omega\tau+ q^{2} \xi^{2} + 1} \label{qprop} \end{equation} where $\chi_{S}=\chi_{0}/|g|$ is the staggered susceptibility, and $\xi =\sqrt{A/|g|}$ and $\tau=\tau_{0} \xi^{z}$ are the correlation length and critical relaxation time, respectively. The quantity $g$ measures the distance (in energy scale) to the AFQCP (at $g=0$), $A$ is the stiffness of the spin fluctuations, $\tau_{0}=1/A$ and the dynamic exponent $z=2$. We are interested here in quantum phase transitions which occur in three ($d=3$) or two dimensions ($d=2$). Since the dynamic exponent associated with an AFQCP takes the value \cite{hertz} $z=2$, the effective dimension associated with the antiferromagnetic quantum phase transition is $d_{eff}=d+2$. Then, for $2d$ and $3d$ systems the effective dimension coincides with or lies above the upper critical dimension $d_{c}=4$, respectively. This implies that the transition at $T=0$, $g=0$ is described by Gaussian or mean-field exponents \cite{livro}, with logarithmic corrections in the marginal case ($d=2$). Then, for $d_{eff} \ge d_{c}$, the Gaussian exponents $\gamma=1$ for the staggered susceptibility and $\nu=1/2$ for the correlation length turn out to describe correctly the quantum critical behavior of the AFQCP. The Gaussian free energy close to the AFQCP can be written as ($k_{B}=1$) \cite{moriya,local}, \begin{equation} f=-\frac{3}{\pi}\sum_{q} T \int_{0}^{\infty} \frac{d \lambda}{e^{\lambda}-1} \tan^{-1}\left[ \frac{2 \pi\lambda T \xi^{z}}{A(1+q^{2} \xi^{2})}\right] \label{free} \end{equation} For temperatures $T \ll T_{coh}$ the free energy is given by, \begin{equation} f=-\frac{\pi^{2} T^{2} \xi^{z-d}}{A}\left( \frac{L}{2 \pi}\right) ^{d} S_{d} \int_{0}^{q_{c} \xi} dy \frac{ y^{d-1}}{1+y^{2}} \end{equation} where $q_{c}$ is a cut-off. The coherence temperature $T_{coh} =|g|^{\nu z}=|g|$ is that introduced by Continentino et al. \cite{base} and marks the entrance of the system into the Fermi liquid regime. $S_{d}$ is the surface area of a $d$-dimensional sphere of unit radius. 
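For completeness, we spell out the intermediate step leading from Eq.~(\ref{free}) to the expression above (this explicit form is added here only for clarity): for $T \ll T_{coh}$ the argument of the arctangent is small, so that $\tan^{-1}x \simeq x$, and using $\int_{0}^{\infty} d\lambda\, \lambda/(e^{\lambda}-1)=\pi^{2}/6$ one obtains \[ f \simeq -\frac{\pi^{2} T^{2} \xi^{z}}{A} \sum_{q} \frac{1}{1+q^{2} \xi^{2}}, \] which reduces to the momentum integral quoted above after the replacement $\sum_{q} \rightarrow (L/2 \pi)^{d} S_{d} \int dq\, q^{d-1}$ and the substitution $y=q \xi$.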
The specific heat $C/T=-\partial^{2} f/\partial T^{2}$ in the Fermi liquid regime, $T << T_{coh}$, is easily obtained, \begin{equation} C/T=\frac{V/\xi}{A} q_{c} \xi\left( 1-\frac{\tan^{-1}q_{c} \xi}{q_{c} \xi }\right) \end{equation} in $3d$ and, \begin{equation} C/T=\frac{\pi S_{2}}{2A} \ln\left( 1+q_{c}^{2} \xi^{2} \right) \end{equation} in $2d$. In the critical regime $q_{c} \xi\gg1$, we get that $\gamma=C/T$ is constant in $3d$ and logarithmically divergent in $2d$ \cite{moriya}. We can also define \cite{lacroix,local} a \emph{local limit}, $q_{c} \xi < 1$, in which case the specific heat is given by, \begin{equation} C/T=\frac{2 \pi^{2} N}{T_{coh}}% \end{equation} independent of dimension. This result can be obtained directly from Eq. \ref{free} neglecting its q-dependence and replacing $\sum_{q} \rightarrow N$. The propagator associated with these local spin fluctuations is given by, \begin{equation} \chi_{L}(\omega)=\frac{\chi_{S}}{i \omega\tau+ 1} \label{lprop}% \end{equation} with $\chi_{S}=\chi_{0} / |g|$ and $\tau=\tau_{0} \xi^{z}$. It is remarkable that in spite of the local character of the fluctuations in this regime, the system is still \textit{aware} of the quantum phase transition through the dependence of $\tau$ and $\chi_{S}$ on $g$. Indeed, in this regime the fluctuations are local in space but correlated along the \emph{ time directions}. The theory of this regime can be described as a critical theory in $d_{eff}=z=2$ but with Euclidean dimension $d=0$. The properties of the system for $q_{c} \xi< 1$ have been described in Ref. \cite{local}. They can all be expressed in terms of a single parameter, the coherence temperature\cite{local}. \section{Relation to superconductivity} The q-dependent propagator given by Eq. \ref{qprop} provides an attractive interaction in the singlet channel among quasi-particles in neighboring sites. Thus, antiferromagnetic paramagnons can give rise to d-wave pairing with $d_{x^{2}-y^{2}}$ symmetry \cite{este}, for $q_{c}\xi \gg 1$. What about local spin fluctuations? The spatial dependence of the dynamic susceptibility can be obtained from the Fourier transform of the $q$ and $\omega$ dependent propagator, \begin{equation} \chi(r,\omega)=\sum_{q}\chi(q,\omega)e^{iq.r}. \end{equation} In the case of local spin fluctuations, the relevant propagator is given by Eq. \ref{lprop}, such that, \begin{equation} \Re e\chi_{L}(\omega)=\frac{\chi_{S}}{1+\omega^{2}\tau^{2}}, \end{equation} and, \[ \Re e\chi_{L}(\omega=0)=\chi_{S}. \] Then, \[ \chi(r)=\chi_{S}\sum_{q}e^{iq.r}=2\pi\chi_{S}\int dqd\theta q^{2}\sin\theta e^{iqr\cos\theta}. \] The interaction among the quasi-particles according to Monthoux et al. \cite{nature} is given by: \begin{align} U(r) & =-\lambda^{2}\chi(r)S_{1}.S_{2}\\ & =-\lambda^{2}(-1)\frac{8\pi^{4}\chi_{S}}{a^{3}}\frac{1}{(\frac{\pi r}% {a})^{3}}\left\{ \sin\frac{\pi r}{a}-\frac{\pi r}{a}\cos\frac{\pi r}% {a}\right\} .\nonumber \end{align} For small $r$ this yields, \begin{align} & U(r\rightarrow0)=\allowbreak8\pi\frac{\lambda^{2}}{r^{3}}\chi_{S}\frac {1}{3}\frac{\pi^{3}}{a^{3}}r^{3}=\allowbreak\frac{8}{3}\frac{\pi^{4}}{a^{3}% }\lambda^{2}\chi_{S}\nonumber\\ & U(a)=\allowbreak8\frac{\pi^{2}}{a^{3}}\lambda^{2}\chi_{S}\nonumber\\ & U(2a)=\allowbreak-2\frac{\pi^{2}}{a^{3}}\lambda^{2}\chi_{S}\label{magcase}% \end{align} If we are dealing with a lattice then we have a delta-function at the origin. The potential is repulsive at the origin and zero everywhere else in the singlet channel. 
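As a quick check of these limits (a short piece of algebra added here for clarity), write $x=\pi r/a$; then \[ \sin x - x \cos x \simeq \frac{x^{3}}{3} \quad (x \ll 1), \qquad \left( \sin x - x \cos x\right)_{x=\pi}=\pi, \qquad \left( \sin x - x \cos x\right)_{x=2\pi}=-2\pi, \] which, inserted in the expression for $U(r)$, reproduce the three values quoted in Eq.~(\ref{magcase}).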
Also, it is interesting to obtain the results in the case the local propagator is associated with density fluctuations. In this case \cite{este}, \[ U(r)=-\lambda^{2}\chi(r) \] and we obtain, \begin{align} & U(r\rightarrow0)=\allowbreak-8\pi\frac{\lambda^{2}}{r^{3}}\chi_{S}\frac {1}{3}\frac{\pi^{3}}{a^{3}}r^{3}=\allowbreak-\frac{8}{3}\frac{\pi^{4}}{a^{3}% }\lambda^{2}\chi_{S}\nonumber\\ & U(a)=-\allowbreak8\frac{\pi^{2}}{a^{3}}\lambda^{2}\chi_{S}\nonumber\\ & U(2a)=\allowbreak2\frac{\pi^{2}}{a^{3}}\lambda^{2}\chi_{S} \label{dencase}% \end{align} where $\chi_{S}$ in this case is the compressibility. In the case of a lattice the interaction is attractive at the origin and zero everywhere else in the singlet channel. In the magnetic case, Eqs. \ref{magcase} show that the on-site and nearest neighbor quasi-particle interaction mediated by critical local spin fluctuations are repulsive in the singlet channel and do not lead to Cooper pair formation. Then, as the system moves away from the AFQCP, the correlation length of the spin fluctuations decreases and for $q_{c} \xi \sim 1$ the relevant interactions mediated by these fluctuations become repulsive everywhere destroying superconductivity. On the other hand, local charge fluctuations can still mediate an attractive local interaction. The condition \begin{equation} q_{c} \xi=1 \label{yes}% \end{equation} puts an upper limit to the region of the phase diagram around the AFQCP where superconductivity mediated by spin fluctuations can exist. At zero temperature this implies that superconductivity survives up to a critical coupling $|g|_{S}=Aq_{c}^{2}$. In order to extend Eq. \ref{yes} to finite temperatures ($T$), we consider the scaling form of the correlation length, $\xi= \sqrt{A}|g|^{-\nu}F[T/T_{coh}]$, where $F[t]$ is a scaling function and $T_{coh}=|g|^{\nu z}$. In this case Eq. \ref{yes} can be written as, \begin{equation} F\left[ \frac{T_{D}}{|g|^{\nu z}}\right] =\frac{\sqrt{|g|}}{\sqrt{A} q_{c}} \label{yes1}% \end{equation} where $T_{D}(g)$ represents an upper limit for superconductivity around the AFQCP. \begin{figure}[tbh] \centering{\includegraphics[scale=0.5,angle=270]{fig1.eps}}\caption{(Color online) The line {$q_{c} \xi(T_{D})=1$} represents an upper limit where superconductivity induced by antiferromagnetic spin fluctuations can occur. It's equation is {$T_{D}=|g|^{\nu z}G^{-1}[|g|/Aq_{c}^{2}]$} (see equation \ref{yes1}) and is plotted schematically in the figure. It separates region I with $q_{c} \xi\gg1$ from region (II) of local quantum criticality where fluctuations are critical in the time directions but local in space. The latter gives rise to repulsive, on-site and nearest neighbors interactions (see text).}% \label{fig1}% \end{figure} The scaling function $F(t)$ has a well known behavior in two limiting cases. First, $F(t \rightarrow0) \propto1 - t^{2} + O\left( t^{4}\right) $, such that $F(0)=1$ and for $t \ll 1$, i.e., $T \ll T_{coh}$ this yields a Fermi liquid behavior for the staggered susceptibility which in the spin fluctuation theory is used to define the correlation length \cite{moriya}. Also, neglecting the effect of dangerously irrelevant interactions (to be discussed below), $F(t \rightarrow\infty) \propto t^{x}$ where the exponent $x$ is determined by the condition that the dependence of the correlation length on $g$ cancels out. This yields $x=-1/z$, such that, at the quantum critical trajectory ($g=0$, $T \rightarrow0$), the correlation length diverges as $\xi\propto T^{-1/z}$. 
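Explicitly (a one-line check added for clarity), inserting $F(t \rightarrow \infty) \propto t^{-1/z}$ into the scaling form $\xi=\sqrt{A} |g|^{-\nu} F[T/T_{coh}]$ with $T_{coh}=|g|^{\nu z}$ gives \[ \xi \propto \sqrt{A}\, |g|^{-\nu} \left( \frac{T}{|g|^{\nu z}}\right)^{-1/z} = \sqrt{A}\, T^{-1/z}, \] so that the dependence on $g$ indeed cancels along the quantum critical trajectory.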
We can also show using this asymptotic behavior of the scaling function that the point in the phase diagram at which the superconducting temperature can attain its maximum value is just above the AFQCP, i.e., at $g=0$. In fact $d T_{D}/d g \propto|g|^{z/2}$ and vanishes at $g=0$. An interpolation formula for the scaling function which gives correct results on both limits ($t \rightarrow0$ and $\infty$) is given by, \begin{equation} F(t)=\frac{1}{(1+t^{2})^{1/4}}, \end{equation} where we used the value of the exponents $\nu=1/2$ and $z=2$. Using this expression for the scaling function, Eq. \ref{yes} can be written as, \begin{equation} \sqrt{T_{D}^{2} + |g|^{2}}=Aq_{c}^{2}. \end{equation} This has the form of a dome as shown in Fig. \ref{fig1}. This equation provides a reasonable interpolation for the lines of constant correlation length in the region of the phase diagram $g \ge0$. For negative ($g<0$), there may be thermal fluctuations that change the scaling behavior of the correlation length, as will be discussed below. The physical significance of $T_{D}$ is now quite clear. It provides an upper limit to the region where superconductivity induced by quantum antiferromagnetic spin fluctuations can exist. While for $T \approx0$ this may provide a reasonable estimate of the actual superconducting region and its shape, for larger temperatures thermal fluctuations should reduce the critical temperature $T_{s}$ to values well below $T_{D}$. \begin{figure}[tb] \centering{\includegraphics[scale=0.5]{fig2.eps}}\caption{(Color online) Line of constant correlation length {$(q_{c} \xi=1)$} when thermal or interacting antiferromagnetic spin fluctuations are taken into account.}% \label{fig2}% \end{figure} In the theory of the AFQCP for $d_{eff} \ge d_{c}$, the quartic interaction $u$ is \textit{dangerously irrelevant} \cite{millis}. It determines the shape of the Neel line and changes the value of the shift exponent $\psi$ from the expected scaling result, $\psi=\nu z$, to $\psi=z/(d+z-2)$ (for the AFQCP in 3d, $\psi=2/3$) \cite{millis}. The scaling expression for the correlation length can be immediately generalized to include the effect of the quartic interaction $u$. It is given by \cite{livro} \begin{equation} \xi=\sqrt{A}|g(T)|^{-\nu}G\left[ \frac{T}{|g(T)|^{\nu z}} \right] \label{general}% \end{equation} In this equation $g(T)=g-uT^{1/\psi}$, such that, $g(T_{N})=0$ gives the equation for the N\'eel line. The scaling function $G(t)$ has the following asymptotic behaviors; $G(t=0)=1$ to reproduce the previous zero temperature results, and $G(t \rightarrow\infty) =t^{\frac{\tilde{\nu}-\nu}{\nu z}}$. The latter guarantees the correct behavior close to the critical N\'eel line $g(T_{N})=0$, i.e., $\xi=Q(T)|g(T))|^{-\tilde{\nu}}$, with the amplitude $Q(T)=\sqrt{A}T^{\frac{\tilde{\nu}-\nu}{\nu z}}$. When $T \rightarrow T_{N}$, $g(T) \rightarrow0$ and the correlation length diverges with the thermal correlation length exponent $\tilde{\nu}$. Assuming \cite{millis}, $\tilde {\nu}=\nu=1/2$, we get at $g=0$, \begin{equation} \xi=\sqrt{\frac{A}{u}}T^{-1/3} \label{xigen}% \end{equation} For $g=0$, this yields a temperature for the dome at $g=0$, $T^{u}_{D}(g=0)=\left( Aq_{c}^{2}/u\right) ^{3/2}$, to be compared with $T_{D}(g=0)=Aq_{c}^{2}$, obtained previously. 
This leads to $T_{D}=u(T_{D}^{u})^{2/3}$ and since $u$ is a small number, we expect that in general $T_{D} \ll T_{D}^{u}$, such that non-Gaussian fluctuations allow, in principle, for larger critical superconducting temperatures just above the AFQCP. However, in this case the lines of constant $\xi$ satisfying Eq. \ref{yes} should follow closely the N\'eel line as in Fig.~\ref{fig2}. The experimental results show however that the superconducting region has a dome shape \cite{lonzarich}. Then, they seem to imply that only purely Gaussian quantum fluctuations are effective in pairing the quasi-particles. Also notice that along the N\'eel line, for $T \ne 0$, the quartic interaction is a \textit{relevant} interaction \cite{livro}. Then, as the spin fluctuations start to interact they apparently lose their efficacy in pairing the quasi-particles. At least this is what the experiments seem to imply. In some heavy fermion systems, as one moves away from the AFQCP, for example by applying pressure, there is a second superconducting dome \cite{flouquet,miyake}. This is generally attributed to pairing due to charge fluctuations associated with a valence transition \cite{miyake}. This second dome is larger than that associated with the AFQCP, extending over a wider region of pressures and temperatures. As pointed out before, in the case of pairing by charge fluctuations the interaction is attractive when the system is in the regime of local quantum criticality. We thus expect that in this case superconductivity can occur in a larger region of the phase diagram around the relevant quantum critical point, as is in fact observed. The energy scale of the \emph{magnetic glue} that fixes the region where superconductivity can exist is given by $Aq_{c}^{2}$. $A$ is the stiffness of the spin fluctuations and $q_{c}$ a cut-off appropriate for a hydrodynamic description of these modes. This quantity plays a role similar to that of the Debye energy in BCS superconductors. The region of the phase diagram just above the limiting superconducting dome is a state of \textit{local quantum criticality}. This state is characterized by a single energy scale, the coherence temperature, $T_{coh}=|g|^{\nu z}$. It has a resistivity which scales as $\rho\propto(T/T_{coh})^{2}$ for $T \ll T_{coh}$ and as $\rho\propto(T/T_{coh})$ for $T \gg T_{coh}$. In the case of anisotropic lattices, the crossover to the local regime occurs in stages. For a tetragonal system with a spectrum of spin fluctuations given by $a_{xy} q_{x}^{2} + a_{xy} q_{y}^{2} + a_{z}q_{z}^{2}$, as the system moves away from the AFQCP it goes from $d=3$ to $d=2$ and finally to $d=0$ quantum critical behavior. The dome shape of the superconducting region seems to indicate that the interaction between the spin fluctuations acts to the detriment of superconductivity. It cannot be excluded that interacting spin fluctuations can still provide a pairing mechanism to produce, for example, a pseudo-gap state, but not superconductivity. We have shown quite generally that if Gaussian quantum spin fluctuations give rise to superconductivity, the maximum allowed $T_{c}$ can be found just above the AFQCP. Finally, our results strongly support the proposal that the second superconducting dome observed in some heavy fermion systems is due to pairing by charge fluctuations. Since even in the local quantum regime, charge fluctuations give rise to attractive interactions, superconductivity in this case can extend over a wider region of the phase diagram. 
Our results are appropriate for describing the system in the paramagnetic region. In the long-range ordered magnetic phase of the phase diagram there are new excitations, the spin waves, associated with transverse modes. In this region the present approach therefore does not provide any insight. \acknowledgements I would like to thank Dr. P. Monthoux for invaluable discussions. This work is partially supported by the Brazilian agencies CNPq and FAPERJ.
\section{Introduction} Non-local electric effects have been observed since the early days of mesoscopic physics, e.g. in metallic circuits\cite{Webb:89,FilsMetal}. This fact is related to the primarily non-local nature of electronic wave functions in quantum coherent conductors. The spin degree of freedom has attracted little attention in this context, although its control and detection are among the major challenges of nanophysics nowadays. Non-local spin signals have been studied for multi-terminal metallic conductors\cite{Johnson,Jedema,Jedema2,Zaffalon}, semiconductors\cite{Lou} and graphene\cite{Tombros2}, in the multichannel diffusive incoherent (MDI) regime. It has been found that a non-equilibrium spin accumulation induced by a ferromagnet into a given conductor can be detected as a voltage across the interface between this conductor and another ferromagnet\cite{Silsbee}. However, to our knowledge, spin-dependent non-local effects have so far not been investigated in the coherent regime. Carbon-nanotube-based circuits are appealing candidates for observing a non-local, spin-dependent, and coherent behavior of electrons. First, electronic transport in carbon nanotubes (CNTs) can reach the few-channels ballistic regime, as suggested by the observation of Fabry-Perot-like interference patterns\cite{Liang}. Secondly, spin injection has already been demonstrated in CNTs connected to two ferromagnetic leads (see Ref.~\onlinecite {SST} for a review). Thirdly, non-local voltages have been observed in CNTs contacted to four normal-metal leads\cite{Makarovski}, which suggests that electrons can propagate in the nanotube sections below the contacts. The study of non-local spin transport in CNTs has recently triggered some experimental efforts\cite{Tombros,Gunnar,Cheryl}. However, theoretical insight into this topic is lacking. Major questions to address are what the signatures of a non-local and spin-dependent behavior of electrons in a nanoconductor are, and to what extent these signatures are specific to the coherent regime or to the few-channels case. \begin{figure}[ptbh] \includegraphics[width=1.\linewidth]{Circuits.eps}\caption{Left: The two types of circuits [setups (a) and (b)] studied in this article. The central conductor (represented by the grey bar) is contacted to two normal metal leads $N$ and two ferromagnetic leads $F$, which can be magnetized in parallel ($c=P$) or antiparallel ($c=AP$) configurations. The only difference between setups (a) and (b) is the position of the two $F$ leads. Contacts 1 and 2 are used as source and drain to measure a local conductance $G^{c}$ and contacts 3 and 4 are used to probe a non-local voltage $V^{c}$ outside the classical current path. Right: Tables presenting the behaviors of setups (a) and (b) in various regimes. We study the existence of the signals $MG=(G^{P}-G^{AP})/G^{P}$, $V^{P}$ and $MV=(V^{P}-V^{AP})/V_{b}$. We compare the predictions of the coherent four-channels model (CFC) of section \ref{ScatModel}, the incoherent four-channels model (IFC) of section \ref{IFC} and the multichannel diffusive incoherent (MDI) model of section \ref{MDI}.} \label{Circuits} \end{figure} In this paper, we study the behavior of a CNT with two normal metal ($N$) leads and two ferromagnetic ($F$) leads magnetized in collinear directions. Two leads are used as source and drain to define a local conductance $G^{c}$ and the other two are used to probe a non-local voltage $V^{c}$ outside the classical current path. 
We consider two different setups which differ on the positions of the $F$ leads. Setup (a) corresponds to the standard geometry used for the study of the MDI limit. In setup (b), the two $F$ leads play the role of the voltage probes, so that no magnetic response is allowed in the MDI limit. We mainly focus on the coherent regime, using a scattering description with two transverse modes, to account for the twofold orbital degeneracy commonly observed in CNTs\cite{deg}. This minimal description is appropriate at low temperatures and bias voltages. We take into account both the spin-polarization of the tunneling probabilities at the ferromagnetic contacts and the Spin-Dependence of Interfacial Phase Shifts (SDIPS) which has been shown to affect significantly spin-dependent transport in the two-terminals case\cite{Cottet06a,Cottet06b,Sahoo}. This approach leads to strong qualitative differences with the MDI case. In particular, we find a magnetic signal in the conductance $G^{c}$ of setups (a) and (b), which would not occur in the MDI limit. We also predict an unprecedented magnetic signal in $V^{c}$ for setup (b). We find that these effects already arise in the incoherent few channels regime. However, they are much stronger in the coherent case, due to resonances which occur inside the CNT. These resonances make the circuit sensitive to the SDIPS, which can furthermore enhance the amplitude of the magnetic signals. This paper is organized as follows: section II defines setups (a) and (b), section III discusses the multichannel diffusive incoherent (MDI) limit, section IV focuses on the coherent\ four-channels (CFC) scattering description, section V presents an incoherent four-channels (IFC) description, section VI discusses the experimental results presently available, and section VII concludes. \section{Definition of setups (a) and (b)} In this article, we consider a central conductor (CC) connected to an ensemble $\mathcal{L}$ of two ferromagnetic ($F$) and two normal metal ($N$) reservoirs. We study the two configurations presented in Fig.~\ref{Circuits}. In both cases, lead 1 is connected to a bias voltage source $V_{b}$, lead 2 is connected to ground, whereas leads 3 and 4 are left floating. The only difference between setups (a) and (b) is the position of the two $F$ leads. These $F$ leads can be magnetized in parallel ($c=P$) or antiparallel ($c=AP$) configurations. We will study the conductance $G^{c}=\partial I_{1}% ^{c}/\partial V_{b}$ between contacts 1 and 2 and the voltage drop $V^{c}$ between leads 3 and 4. The dependence of these quantities on the magnetic configuration $c$ of the ferromagnetic electrodes can be characterized with the magnetic signals $MG=(G^{P}-G^{AP})/G^{P}$ and $MV=(V^{P}-V^{AP})/V_{b}$. \section{Multichannel diffusive incoherent limit\label{MDI}} We first briefly discuss the behavior of setups (a) and (b) in the multichannel diffusive incoherent (MDI) regime. This case has been thoroughly investigated, in relation with experiments in which the CC is a metallic island\cite{Johnson,Jedema,Jedema2,Zaffalon}. For a theoretical description of this regime, one can use spin-currents and a spin-dependent electrochemical potential $\mu_{\sigma}$ which obey a local spin-dependent Ohm's law, provided the mean free path in the sample is much shorter than the spin-flip length. 
We refer the reader to Ref.~\onlinecite {Valet} for a detailed justification of this approach from the Boltzmann equations, and to Ref.~\onlinecite {Zutic} for an overview of this field of research. In this section, we summarize the behaviors expected for setups (a) and (b) in the MDI limit (see Appendix A for a short derivation of these results from a resistor model). A finite current between leads 1 and 2 can lead to a spin accumulation (i.e. $\mu_{\uparrow}\neq\mu_{\downarrow}$) in the CC if lead 1 or 2 is ferromagnetic, because spins are injected into and extracted from the CC with different rates in this case. The spin accumulation diffuses along the CC beyond lead 2, and reaches leads 3 and 4, provided the spin-flip length is sufficiently long. Then, leads $3$ and $4$ can be used to detect the spin accumulation provided one of them is ferromagnetic. Indeed, a local imbalance $\mu_{\uparrow}\neq \mu_{\downarrow}$ in the CC will produce a voltage drop between the floating lead $j\in\{3,4\}$ and the CC if $j$ is ferromagnetic (this voltage drop aims at equilibrating the spin currents between the CC and the ferromagnetic contact). One can thus conclude that in setup (a), a spin accumulation occurs when $V_{b}\neq0$, which leads to $V^{c}=V_{3}^{c}-V_{4}^{c}\neq0$. In contrast, one finds $V^{c}=0$ in setup (b) because a current flow between the $N$ leads 1 and 2 cannot produce any spin accumulation. For completeness, we also mention that in the MDI limit, one finds $G^{P}=G^{AP}$ for both setups (a) and (b), due to the fact that leads 3 and 4 are left floating (see Appendix A). The table in Fig. \ref{Circuits} summarizes these results. \section{Coherent four-channels limit\label{ScatModel}} \subsection{General scattering description} In this section, we study the case where the CC is a ballistic carbon nanotube (CNT) allowing coherent transport. The observation of Fabry-Perot-like interference patterns\cite{Liang} suggests that it is possible, with certain types of metallic contacts, to neglect electronic interactions inside CNTs. We thus use a Landauer-B\"{u}ttiker scattering description\cite{Buttiker1}. We take into account two transverse modes $p\in\{K,K^{\prime}\}$, to account for the twofold orbital degeneracy commonly observed in CNTs\cite{deg}. Each transverse mode has two spin submodes $\sigma\in\{\uparrow,\downarrow\}$, defined collinearly to the polarization of the $F$ leads. This gives four channels $m=(p,\sigma)$ in total. We assume that spin is conserved upon scattering by the CNT/lead interfaces and upon propagation along the CNT. This requires, in particular, that the magnetization direction can be considered as uniform in the $F$ leads, and that spin-orbit coupling and spin-flip effects can be neglected inside the CNT and upon interfacial scattering. For simplicity, we also assume that the transverse index $p$ is conserved. In the linear regime, the average current through lead $j$ reads \begin{equation} I_{j}^{c}=\sum_{k}G_{jk}V_{k}^{c} \label{I} \end{equation} with \begin{equation} G_{jk}=G_{K}[4\delta_{jk}-\sum_{m}\left\vert S_{jk}^{m}\right\vert ^{2}] \label{Gdef} \end{equation} where $G_{K}=e^{2}/h$ and $S_{jk}^{m}$ is the scattering amplitude from lead $k$ to lead $j$ for electrons of channel $m$. Equation (\ref{I}) involves the electrostatic potential $V_{k}^{c}$ of lead $k$ (we assume that the leads are in local equilibrium, so that each one has a single chemical potential for both spin directions). The unitarity of the scattering matrix implies that the rows and columns of $G_{jk}$ sum to zero, which guarantees current conservation and the invariance of the currents under a global shift of all the voltages. 
Note that $G_{jk}$ and $S_{jk}^{m}$ implicitly depend on the configuration $c$ of the ferromagnetic electrodes. In this section, we calculate $G^{c}$ and $V^{c}$ by using the general notations of Fig.~\ref{Amplitudes} for the scattering amplitudes. \begin{figure}[ptbh] \includegraphics[width=0.7\linewidth]{Amplitudes.eps}\caption{Scheme representing the notations used for the scattering amplitudes of channel $m$ in setups (a) and (b). The rectangles with full lines represent the different leads and the dashed rectangles represent the nanotube sections between those leads. We note $\delta_{jk}$\ the phase shift acquired by electrons while propagating along the nanotube between contacts $j$\ and $k$. At this stage, we use $\delta_{jk}=\delta_{kj}$, but we keep $t_{jm}\neq t_{jm}^{\prime}$, $u_{jm}\neq u_{jm}^{\prime}$, and $v_{jm}\neq v_{jm}^{\prime}$ for transparency of the calculation.}% \label{Amplitudes}% \end{figure}\ The phase shift $\delta_{jk}$ acquired by electrons along the CNT from contacts $j$ to $k$ can be considered as independent from $m$, with $\delta_{jk}=\delta_{kj}$\cite{Saito}. In practice, $\delta_{12}$, $\delta_{23}$, and $\delta_{34}$ can be tuned using local gate voltage electrodes to change the electronic wavevector in the different CNT sections \cite{LocalGates}. We will thus study the signals $G^{c}$, $V^{c}$, $MG$ and $MV$ as a function of these phases. The calculation of the voltage drop $V^{c}$ requires to determine $V_{3}^{c}$ and $V_{4}^{c}$ from $\left\langle I_{3}\right\rangle =\left\langle I_{4}\right\rangle =0$. This yields: \begin{equation} \frac{V^{c}}{V_{b}}=\frac{G_{41}G_{32}-G_{42}G_{31}}{G_{34}G_{43}-G_{33}% G_{44}} \label{Vnl4pat}% \end{equation} and \begin{align} G^{c}/G_{K} & =G_{11}+[G_{13}(G_{44}G_{31}-G_{41}G_{34})\nonumber\\ & +G_{14}(G_{41}G_{33}-G_{31}G_{43})]/[G_{34}G_{43}-G_{33}G_{44}] \end{align} Using the notations of Fig.~\ref{Amplitudes}, the elements $|S_{jk}^{m}|$ occurring in Eq.~(\ref{Vnl4pat}) through the coefficients $G_{jk}$ of Eq. (\ref{Gdef}) can be calculated as \begin{equation} \left\vert S_{41}^{m}\right\vert =\left\vert D_{m}^{-1}t_{1m}t_{2m}% t_{3m}t_{4m}\right\vert \label{S41quat}% \end{equation}% \begin{align} \left\vert S_{31}^{m}\right\vert & =\left\vert D_{m}^{-1}t_{1m}% t_{2m}\right\vert \nonumber\\ & \times\left\vert u_{3m}^{^{\prime}}+r_{4m}(t_{3m}v_{3m}^{\prime}% -r_{3m}^{\prime}u_{3m}^{\prime})e^{i2\delta_{34}}\right\vert \label{S31quat}% \end{align}% \begin{align} \left\vert S_{42}^{m}\right\vert & =\left\vert D_{m}^{-1}t_{3m}% t_{4m}\right\vert \nonumber\\ & \times\left\vert v_{2m}+r_{1m}(t_{2m}u_{2m}-r_{2m}v_{2m})e^{i2\delta_{12}% }\right\vert \label{S42quat}% \end{align}% \begin{equation} S_{32}^{m}=S_{31}^{m}S_{42}^{m}/S_{41}^{m} \label{S32quat}% \end{equation}% \begin{align} \left\vert S_{34}^{m}\right\vert & =\left\vert D_{m}^{-1}t_{4m}^{\prime }\right\vert \times\left\vert v_{3m}^{\prime}(1-r_{1m}r_{2m}e^{i2\delta_{12}% })\right. \nonumber\\ & +e^{i2\delta_{23}}(r_{2m}^{\prime}+r_{1m}e^{i2\delta_{12}}\left[ t_{2m}t_{2m}^{\prime}-r_{2m}r_{2m}^{\prime}\right] )\nonumber\\ & \times\left. (t_{3m}^{\prime}u_{3m}^{\prime}-r_{3m}v_{3m}^{\prime })\right\vert \label{S34quat}% \end{align} and \begin{align} \left\vert S_{43}^{m}\right\vert & =\left\vert D_{m}^{-1}t_{4m}\right\vert \times\left\vert v_{3m}[1-r_{1m}r_{2m}e^{i2\delta_{12}}]\right. \nonumber\\ & +e^{i2\delta_{23}}[r_{2m}^{\prime}+r_{1m}e^{i2\delta_{12}}\left( t_{2m}t_{2m}^{\prime}-r_{2m}r_{2m}^{\prime}\right) ]\nonumber\\ & \times\left. 
\lbrack t_{3m}u_{3m}-r_{3m}v_{3m}]\right\vert \label{S43quat}% \end{align} with \begin{align} D_{m} & =[\left( 1-r_{1m}r_{2m}e^{i2\delta_{12}}\right) \left( 1-r_{2m}^{\prime}r_{3m}e^{i2\delta_{23}}\right) \nonumber\\ & \times\left( 1-r_{3m}^{^{\prime}}r_{4m}e^{i2\delta_{34}}\right) ]\nonumber\\ & -t_{2m}t_{2m}^{\prime}r_{1m}r_{3m}\left( 1-r_{3m}^{^{\prime}}% r_{4m}e^{i2\delta_{34}}\right) e^{i2(\delta_{12}+\delta_{23})}\nonumber\\ & -t_{3m}t_{3m}^{\prime}r_{2m}^{\prime}r_{4m}\left( 1-r_{1m}r_{2m}% e^{i2\delta_{12}}\right) e^{i2(\delta_{23}+\delta_{34})}\nonumber\\ & -t_{2m}t_{2m}^{\prime}t_{3m}t_{3m}^{\prime}r_{1m}r_{4m}e^{i2(\delta _{12}+\delta_{23}+\delta_{34})} \label{dquat}% \end{align} The missing coefficients $G_{33}$ and $G_{44}$ can be obtained from the above Eqs. using $G_{33}=-(G_{34}+G_{31}+G_{32})$ and $G_{44}=-(G_{43}+G_{41}% +G_{42})$. For calculating $G^{c}$, one furthermore needs \begin{equation} \left\vert S_{14}^{m}\right\vert =\left\vert D_{m}^{-1}t_{1m}^{\prime}% t_{2m}^{\prime}t_{3m}^{\prime}t_{4m}^{\prime}\right\vert \label{S14quat}% \end{equation}% \begin{align} \left\vert S_{13}^{m}\right\vert & =\left\vert D_{m}^{-1}t_{1m}^{\prime }t_{2m}^{\prime}\right\vert \nonumber\\ & \times\left\vert u_{3m}+r_{4m}e^{i2\delta_{34}}(t_{3m}^{\prime}% v_{3m}-r_{3m}^{\prime}u_{3m})\right\vert \end{align} and \begin{align} S_{11}^{m}-r_{1m}^{\prime} & =D_{m}^{-1}t_{1m}t_{1m}^{\prime}e^{i2\delta _{12}}\{r_{2m}\left( 1-r_{3m}^{\prime}r_{4m}e^{i2\delta_{34}}\right) \nonumber\\ & +e^{i2\delta_{23}}[r_{3m}+r_{4m}e^{i2\delta_{34}}(t_{3m}t_{3m}^{\prime }-r_{3m}r_{3m}^{\prime})]\nonumber\\ & \times\lbrack t_{2m}t_{2m}^{\prime}-r_{2m}r_{2m}^{\prime}]\} \label{S11quat}% \end{align} The denominator $D_{m}$ accounts for multiple resonances inside the CNT. Figure~\ref{Reson4term} depicts some resonances $\mathcal{A}_{n}^{m}$, with $n\in\lbrack1,6]$, which can occur in limiting cases where $t_{2[3],m}% =t_{2[3],m}^{^{\prime}}=0$ or $1$. \begin{figure}[ptbh] \includegraphics[width=0.65\linewidth]{Reson4term.eps}\caption{Scheme representing different resonances (noted $\mathcal{A}_{i}^{m}$) which can occur in setups (a) and (b) when $t_{2[3],m}=t_{2[3],m}^{^{\prime}}=0$ or $1$. The upper numbers indicate the position of contacts 1, 2, 3 and 4.}% \label{Reson4term}% \end{figure}In the general case, Eq. (\ref{dquat}) indicates that these different resonances are coupled. For $t_{2m}=t_{2m}^{^{\prime}}=0$, $G^{c}$ corresponds to the conductance of a two-terminals device, independent from $\delta_{23}$ and $\delta_{34}$, and $V^{c}$ vanishes. For $t_{2m}% =t_{2m}^{^{\prime}}\neq0$ and $t_{3m}=t_{3m}^{^{\prime}}=0$, $G^{c}$ depends on $\delta_{12}$ and $\delta_{23}$, but not on $\delta_{34}$, and $V^{c}$ still vanishes. Having a non-local signal $V^{c}\neq0$ requires a direct CNT-CNT transmission at both contacts $2$ and $3$. It also requires that the four channels $m$ are not coupled to the leads in the same way. Indeed, from Eqs.~(\ref{Vnl4pat}) and (\ref{S32quat}), one can check that if all the $S_{jk}^{m}$ coefficients are independent from $m$, one finds $V^{c}=0$ due to the series structure of the device \cite{Baranger}. Interestingly, a finite $V^{c}$ has already been obtained in a CNT connected to four normal-metal leads\cite{Makarovski}, which suggests that the $K$ and $K^{\prime}$ modes were not similarly coupled to those leads. In principle, such an asymmetry is also possible with ferromagnetic contacts. 
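As a complement to the closed-form expressions above, the floating-lead conditions $\left\langle I_{3}\right\rangle =\left\langle I_{4}\right\rangle =0$ can also be imposed numerically. The sketch below (illustrative only, not the code used for the figures) solves them by a direct linear solve for $V_{3}^{c}$ and $V_{4}^{c}$, with lead 1 biased at $V_{b}$ and lead 2 grounded; once the current-conservation sum rules on the $G_{jk}$ are used, the ratio $V^{c}/V_{b}$ it returns coincides with Eq.~(\ref{Vnl4pat}).
\begin{verbatim}
import numpy as np

def nonlocal_signals(G, V_b=1.0):
    """Impose <I_3> = <I_4> = 0 for a 4x4 conductance matrix G (leads 1..4
    stored at indices 0..3).  Lead 1 is biased at V_b, lead 2 is grounded;
    V_3 and V_4 adjust so that no net current flows into the floating leads.
    Returns (G_c, V_c) with G_c = I_1 / V_b and V_c = V_3 - V_4.
    """
    G = np.asarray(G, dtype=float)
    A = G[2:4, 2:4]               # couplings of the floating leads to themselves
    b = -G[2:4, 0] * V_b          # drive from lead 1 (lead 2 is at zero potential)
    V3, V4 = np.linalg.solve(A, b)
    V = np.array([V_b, 0.0, V3, V4])
    I1 = G[0] @ V
    return I1 / V_b, V3 - V4
\end{verbatim}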
\subsection{Parametrization of the lead/nanotube contacts} In the following, we assume that the top and bottom halves of the three-terminal contacts $j\in\{2,3\}$ in Fig.~\ref{Circuits} are symmetric. We furthermore take into account that the scattering matrix associated with each contact is invariant upon transposition, due to spin conservation\cite{SC}. This gives $t_{jm}=t_{jm}^{\prime}$, $r_{jm}=r_{jm}^{\prime}$, and $u_{jm}=u_{jm}^{\prime}=v_{jm}=v_{jm}^{\prime}$ for $j\in\{2,3\}$. In this case, one can check from Eqs. (\ref{S41quat}-\ref{S11quat}) that\ $G^{c}$ and $V^{c}$ depend only on six interfacial scattering phases, i.e. those of $r_{1m}$, $t_{2m}$, $r_{2m}$, $t_{3m}$, $r_{3m}$ and $r_{4m}$, which correspond to processes during which electrons remain inside the CNT \cite{simpl}. For contacts $j\in\{2,3\}$, it is thus convenient to use the parametrization% \begin{equation} t_{j(p,\sigma)}=\sqrt{T_{j,p}(1+\sigma P_{j,p})}e^{i(\varphi_{j,p}^{T}% +\frac{\sigma}{2}\Delta\varphi_{j,p}^{T})}% \end{equation}% \begin{align} r_{j(p,\sigma)} & =\left[ \sqrt{1-\left\vert t_{j(p,\sigma)}\sin[\phi _{j}^{(p,\sigma)}]\right\vert ^{2}}\right. \nonumber\\ & +\left. \left\vert t_{j(p,\sigma)}\right\vert \cos[\phi_{j}^{(p,\sigma )}]\right] e^{i(\varphi_{j,p}^{R}+\frac{\sigma}{2}\Delta\varphi_{j,p}^{R})}% \end{align}% \begin{equation} \left\vert u_{jm}\right\vert =\sqrt{1-\left\vert r_{jm}\right\vert ^{2}-\left\vert t_{jm}\right\vert ^{2}} \label{K}% \end{equation} with \[ \phi_{j}^{(p,\sigma)}=\varphi_{j,p}^{R}-\varphi_{j,p}^{T}+\frac{\sigma}% {2}(\Delta\varphi_{j,p}^{R}-\Delta\varphi_{j,p}^{T}) \] The above expressions depend on six real parameters $T_{j,p}$, $P_{j,p}$, $\varphi_{j,p}^{T}$, $\varphi_{j,p}^{R}$, $\Delta\varphi_{j,p}^{R}$ and $\Delta\varphi_{j,p}^{T}$\cite{param}. In order to have unitary lead/CNT scattering matrices, one must use $0\leq T_{j,p}(1+\sigma P_{j,p})\leq1$ and\cite{annul} $\pi/2\leq\phi_{j}^{m}[2\pi]\leq3\pi/2$. These conditions imply $0\leq|t_{jm}|^{2}\leq1$, $0\leq|r_{jm}|^{2}\leq1$ and\cite{sym} $0\leq|u_{jm}|^{2}\leq1/2$. For contacts $j\in\{1,4\}$, one can use% \begin{equation} \left\vert t_{j}^{(p,\sigma)}\right\vert =\left\vert t_{j}^{^{\prime}% (p,\sigma)}\right\vert =\sqrt{T_{j,p}(1+\sigma P_{j,p})}% \end{equation} and% \begin{equation} \arg(r_{j}^{(p,\sigma)})=c_{p}\varphi_{j}^{R}+\frac{\sigma}{2}\Delta \varphi_{j,p}^{R} \label{phi1}% \end{equation} with $c_{K(K^{\prime})}=\pm1$. In Eq.~(\ref{phi1}), we have assumed $\sum_{p,\sigma}\arg(r_{1}^{(p,\sigma)})=0$ and $\sum_{p,\sigma}\arg (r_{4}^{(p,\sigma)})=0$, because, from Eqs. (\ref{S41quat}-\ref{S11quat}), these quantities only shift the variations of $G^{c}$, $V^{c}$, $MG$ and $MV$ with respect to $\delta_{12}$ and $\delta_{34}$, respectively. The parameters $P_{j,p}$, with $j\in\{1,2,3,4\}$, produce a spin polarization of the transmission probabilities $|t_{j}^{m}|^{2}$. The parameters $\Delta \varphi_{j,p}^{R(T)}$ allow one to take into account the Spin-Dependence of Interfacial Phase Shifts (SDIPS), which has already been shown to significantly affect the behavior of CNT spin valves\cite{Cottet06a,Cottet06b,Sahoo}. We will show below that the SDIPS also modifies the behavior of multi-terminal setups. 
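For illustration, the parametrization of contacts $j\in\{2,3\}$ can be coded as follows. This is a hedged sketch, not the code used to produce the figures; the example values simply reuse the contact-2 and contact-3 parameters of Fig.~\ref{GraySetupAbis} below.
\begin{verbatim}
import numpy as np

def contact_amplitudes(T, P, phi_T, phi_R, dphi_T, dphi_R, sigma):
    """Amplitudes (t, r, |u|) of a three-terminal contact j in {2, 3} for one
    orbital mode p and spin sigma = +1 (up) or -1 (down).  The inputs must
    respect 0 <= T*(1 + sigma*P) <= 1 and pi/2 <= phi (mod 2*pi) <= 3*pi/2 so
    that the contact scattering matrix can be completed into a unitary one.
    """
    t = np.sqrt(T * (1 + sigma * P)) * np.exp(1j * (phi_T + 0.5 * sigma * dphi_T))
    phi = phi_R - phi_T + 0.5 * sigma * (dphi_R - dphi_T)
    r_mod = np.sqrt(1 - np.abs(t * np.sin(phi)) ** 2) + np.abs(t) * np.cos(phi)
    r = r_mod * np.exp(1j * (phi_R + 0.5 * sigma * dphi_R))
    u_mod = np.sqrt(max(0.0, 1 - np.abs(r) ** 2 - np.abs(t) ** 2))
    return t, r, u_mod

# Example: contacts 2 and 3 of Fig. GraySetupAbis -- T = 0.1, P = 0.4,
# phi_R = pi, phi_T = 0, no SDIPS.
for sigma in (+1, -1):
    t, r, u = contact_amplitudes(0.1, 0.4, 0.0, np.pi, 0.0, 0.0, sigma)
    print(sigma, abs(t) ** 2, abs(r) ** 2, u ** 2)
\end{verbatim}
Note that the printed probabilities automatically satisfy $|u_{jm}|^{2}\leq1/2$, in line with the constraints stated above.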
Note that for $j\in\{2,3\}$, the parameters $\Delta\varphi _{p,j}^{R(T)}$ also contribute to the spin dependence of $|r_{jm}|^{2}$ and $|u_{jm}|^{2}$: the SDIPS and the spin-dependence of interfacial scattering probabilities are not independent in three-terminal contacts, due to the unitarity of scattering processes. \subsection{Behavior of setup (a)} \begin{figure}[ptbh] \includegraphics[width=1.\linewidth]{GraySetupAbis.eps}\caption{Signals $G^{P}$ (top left panel), $V^{P}$ (top right panel), $MG$ (bottom left panel), and $MV$ (bottom right panel) as a function of $\delta_{12}$ (horizontal axes) and $\delta_{34}$ (vertical axes) for a setup (a) with symmetric $K$ and $K^{\prime}$ channels. We have used $T_{1,K[K^{\prime}]}$ $=0.6$, $T_{2,K[K^{\prime}]}$ $=0.1$, $T_{(3)4,K[K^{\prime}]}$ $=0.3$, $P_{2[3],K[K^{\prime}]}$ $=0.4$, $\varphi_{1(4)}^{R}=0$, $\varphi _{2(3),K[K^{\prime}]}^{R}=\pi$, $\varphi_{2[3],K[K^{\prime}]}^{T}=0$, $\Delta\varphi_{2(3),K[K^{\prime}]}^{R\{T\}}=0$ and $\delta_{23}=\pi$. We have indicated the position of the resonances $\mathcal{A}_{1(4)}^{m}$ and $\mathcal{A}_{3(5)}^{m}$ with red and blue dashed lines, respectively.}% \label{GraySetupAbis}% \end{figure}We now consider setup (a), which has been frequently used in the MDI regime, for studying the spin accumulation effect\cite{Johnson,Jedema,Jedema2,Zaffalon,Lou,Tombros2}. We first assume that the $K$ and $K^{\prime}$ channels are coupled identically to the leads. This case is illustrated by Fig. \ref{GraySetupAbis}, which shows the variations of $G^{P}$ (top left panel), $V^{P}$ (top right panel), $MG$ (bottom left panel) and $MV$ (bottom right panel) versus $\delta_{12}$ (horizontal axes) and $\delta_{34}$ (vertical axes). One can first notice that all these signals present strong variations with $\delta_{12}$ and $\delta_{34}$, due to quantum interferences occurring inside the CNT. In Fig. \ref{GraySetupAbis}, $G^{P}(\delta_{12})$ presents peaks which correspond to the resonances $\mathcal{A}_{1}^{m}$ (see e.g. red dashed line), because we consider a case where $T_{2m}$ is weak (these peaks also correspond accidentally to the resonances $\mathcal{A}_{4}^{m}$, which are much broader). A more remarkable result is that $G^{P}(\delta_{34})$ presents antiresonances which correspond to $\mathcal{A}_{3}^{m}$ and $\mathcal{A}_{5}^{m}$ (see e.g. blue dashed line). This is a signature of the strongly non-local nature of current transport in this circuit: the electric signal measured in a given section of the CNT can be sensitive to resonances occurring in other sections of the CNT. We note that in Fig.~\ref{GraySetupAbis}, $\left\vert V^{P}\right\vert $ presents the same type of variations as $G^{P}$ with $\delta_{12}$ and $\delta_{34}$. In the general case, the resonances or antiresonances shown by the electric signals will not necessarily correspond to those defined in Fig. \ref{Reson4term}, due to the strong coupling between these different types of resonances. Importantly, we find that the $MG$ signal can be finite, contrarily to what happens in the MDI limit. Indeed, in Fig. \ref{GraySetupAbis}, $MG$ can exceed $8\%$. We note that in Fig. \ref{GraySetupAbis}, $MG$ presents minima approximately correlated with the maxima of $G^{P}$ in the $\delta_{12}$ direction, and with the minima of $G^{P}$ in the $\delta_{34}$ direction. In the case of a $S^{m}$ matrix independent from $m$, one finds $V^{c}=0$ (see section \ref{ScatModel}). By continuity, since we have used in Fig. 
\ref{GraySetupAbis} relatively low values for $P_{2[3],K[K^{\prime}]}$, no SDIPS and symmetric $K$ and $K^{\prime}$ channels, we find $\left\vert V^{P}\right\vert \ll V_{b}$. More precisely, a lowest order development with respect to $P_{2}$ and $P_{3}$ yields $V^{P}\sim-V^{AP}\sim\lambda P_{2}P_{3}$, with $\lambda\ll1$ a function of the different system parameters. In these conditions, $MV$ presents the same type of variations as $V^{P}$ (one has $MV\sim2V^{P}$). When the $K$ and $K^{\prime}$ modes are strongly asymmetric, it is possible to obtain a strong $\left\vert V^{P}/V_{b}\right\vert $ ratio for relatively low polarizations $P_{2[3],p}$. This case is illustrated by Fig. \ref{GraySetupA}, where we have used $T_{1[4],K}\neq T_{1[4],K^{\prime}}$ and $\varphi_{1(4)}^{R}\neq0$, so that $\mathcal{A}_{1[4]}^{(K,\sigma)}\neq\mathcal{A}_{1[4]}^{(K^{\prime },\sigma)}$ and $\mathcal{A}_{3[5]}^{(K,\sigma)}\neq\mathcal{A}_{3[5]}% ^{(K^{\prime},\sigma)}$. In this case, the variations shown by the different electric signals are more complicated than previously. However, we find $V^{P}\sim V^{AP}$, so that the amplitude of $MV$ remains comparable to that of Fig. \ref{GraySetupAbis}. \begin{figure}[ptbh] \includegraphics[width=1.\linewidth]{GraySetupA.eps}\caption{Signals $G^{P}$, $V^{P}$ , $MG$, and $MV$ as a function of $\delta_{12}$ and $\delta_{34}$, for a setup (a) with dissymmetric $K$ and $K^{\prime}$ channels. We have used $T_{1[4],K}=0.5$, $T_{1[4],K^{\prime}}=0.3$, $T_{2,K[K^{\prime}]}$ $=0.1$, $T_{3,K[K^{\prime}]}=0.3$, $P_{2[3],K[K^{\prime}]}$ $=0.4$, $\varphi _{2(3),K[K^{\prime}]}^{R}=\pi$, $\varphi_{2[3],K[K^{\prime}]}^{T}=0$, $\varphi_{1(4)}^{R}=\pi/2$, $\Delta\varphi_{2(3),K[K^{\prime}]}^{R\{T\}}=0$ and $\delta_{23}=\pi$. We have indicated the position of the resonances $\mathcal{A}_{1(4)}^{m}$ and $\mathcal{A}_{3(5)}^{m}$ with red and blue dashed lines, respectively.}% \label{GraySetupA}% \end{figure} We now discuss the signs of the different signals. We have already seen above that with the parameters of Fig. \ref{GraySetupAbis}, one has $V^{P}>0$ and $V^{AP}<0$. In other conditions, it is possible to have $V^{P}<0$ and $V^{AP}>0$, or $V^{P}$and $V^{AP}$ both positive, or both negative (not shown). In the CFC model, the signs of $V^{P}$ and $V^{AP}$ are thus independent, whereas MDI models usually give opposite signs for $V^{P}$ and $V^{AP}$ (see e.g. Eq. (\ref{VCMDI}) of Appendix A and Refs. \onlinecite{Johnson2, Takahashi}). Figure \ref{GraySetupA} illustrates that there exists sets of parameters such that the non-local voltage $V^{P}$ changes sign while sweeping $\delta_{12}$ or $\delta_{34}$ (this result is also true for $\delta_{23}$) \cite{SignChange}. It is also possible to find sets of parameters such that $MV$ (not shown) and $MG$ (see Fig. \ref{compSDIPS}, bottom left panel, full lines) change sign with $\delta_{12}% $, $\delta_{23}$ or $\delta_{34}$. We now briefly discuss the effects of the contacts polarizations. One can generally increase the amplitude of the magnetic signals by increasing $P_{j,p}$ (not shown), $\left\vert \Delta\varphi_{j,p}^{R}\right\vert $ (see Fig. \ref{compSDIPS}, red full lines) and $\left\vert \Delta\varphi_{j,p}% ^{T}\right\vert $ (not shown). A strong SDIPS can split the resonances or antiresonances of the electric signals (not shown), like already found in the two-terminals $F$/CNT/$F$ case\cite{Cottet06a}. 
Interestingly, in the case of a two-terminal $F$/CNT/$F$ device with a $K-K^{\prime}$ degeneracy and no SDIPS (using $1=F$, $2=F$ and no leads $3$ and $4$), Ref.~\onlinecite{Cottet06a} has found that the oscillations of $MG$ with $\delta_{12}$ are symmetric, and a finite SDIPS is necessary to break this symmetry. In contrast, in setup (a), the oscillations of $MG(\delta_{12})$ can be asymmetric in spite of the $K-K^{\prime}$ degeneracy and the absence of SDIPS (see Fig. \ref{compSDIPS}, bottom left panel). \begin{figure}[ptbh] \includegraphics[width=1.\linewidth]{compSDIPS.eps}\caption{Signals $G^{P}$, $V^{P}$, $MG$, and $MV$ as a function of $\delta_{12}$ for setup (a). We consider a case with no SDIPS (black lines) and a case with a finite SDIPS (red lines corresponding to $\Delta\varphi_{2(3),K[K^{\prime}]}^{R}=0.15\pi$). We have used $T_{1[4],K}=0.2$, $T_{1[4],K^{\prime}}=0.6$, $T_{2,K[K^{\prime}% ]}$ $=0.5$, $T_{3,K[K^{\prime}]}=0.3$, $P_{2[3],K[K^{\prime}]}$ $=0.2$, $\varphi_{1(4)}^{R}=0.12\pi$, $\varphi_{2(3),K[K^{\prime}]}^{R}=\pi$, $\varphi_{2[3],K[K^{\prime}]}^{T}=\Delta\varphi_{2(3),K[K^{\prime}]}^{T}=0$, $\delta_{23}=\pi$ and $\delta_{34}=0.12\pi$. The full lines correspond to the CFC prediction (section \ref{ScatModel}), and the dotted lines to the IFC prediction (section \ref{IFC}). The latter does not depend on $\delta_{12}$. In the IFC case, the MG signal is hardly visible in this figure because it is of the order of 0.06\%.}% \label{compSDIPS}% \end{figure} \subsection{Behavior of setup (b)} In setup (b), the types of resonances or antiresonances shown by the electric signals depend again on the value of the coupling between the different CNT sections. We will only highlight the most interesting specificities of setup (b), because it has many properties in common with setup (a). Fig. \ref{setupB} shows, with black [red] full lines, examples of $G^{P}(\delta_{12})$, $MG(\delta_{12})$, $V^{P}(\delta_{12})$, and $MV(\delta_{12})$ curves, for symmetric [asymmetric] $K$ and $K^{\prime}$ channels. Strikingly, in both cases, the magnetoconductance $MG$ between the $N$ leads 1 and 2 can be finite, although the two $F$ leads are located outside the classical current path. This is in strong contrast with the MDI limit. From Eqs. (\ref{Vnl4pat}% -\ref{S32quat}), the voltage difference $V^{c}$ vanishes if the scattering properties of contacts $1$ or $2$ are independent of the transverse index $p$, regardless of the scattering properties of contacts $3$ and $4$\cite{check}. This leads to the paradoxical situation where a magnetic signal can be measured between the two $N$ leads but not between the two $F$ leads (see black full lines in Fig. \ref{setupB}). By continuity, when the $K-K^{\prime}$ asymmetry is not large at contacts 1 and 2, the amplitude of the signals $V^{P}$ and $MV$ measured between contacts $3$ and $4$ will remain very small. It is possible to obtain stronger amplitudes for $V^{c}$ and $MV$ in the opposite limit of strongly asymmetric $K$ and $K^{\prime}$ channels (see red full lines in Fig. \ref{setupB}, for which we have used $\varphi_{2,K}\neq\varphi_{2,K^{\prime}}$). With setup (b), it is thus also possible to obtain magnetic signals in both $G^{c}$ and $V^{c}$, whereas $MG$ and $MV$ would vanish in the MDI limit. \begin{figure}[ptbh] \includegraphics[width=1.\linewidth]{setupB.eps}\caption{Signals $G^{P}$, $V^{P}$, $MG$, and $MV$ as a function of $\delta_{12}$, for setup (b). 
The black (red) lines correspond to the cases of $K$ and $K^{\prime}$ channels coupled identically (differently) to the contacts, i.e. $\varphi_{2,K}% ^{R}=\varphi_{2,K^{\prime}}^{R}=\pi$ ($\varphi_{2,K}^{R}=1.2\pi$, $\varphi_{2,K^{\prime}}^{R}=0.6\pi$). We have used $T_{1[2],K(K^{\prime}% )}=0.4$, $T_{3,K[K^{\prime}]}=0.3$, $T_{4,K[K^{\prime}]}=0.5$, $P_{3[4],K[K^{\prime}]}$ $=0.4$, $\varphi_{1(4)}^{R}=\varphi_{2(3),K[K^{\prime }]}^{T}=\Delta\varphi_{3,K[K^{\prime}]}^{R(T)}=\Delta\varphi_{4,K[K^{\prime}% ]}^{R}=0$, $\delta_{23}=\pi/2$ and $\delta_{34}=\pi$. The full lines correspond to the CFC prediction and the dotted lines to the IFC prediction. In the black case, the $MG$ signal vanishes in the IFC limit, and the $V^{P}$ and $MV$ signals vanish in both the IFC and CFC limits. The MG signal of the red case is hardly visible in the IFC limit because it is of the order of 0.03\%.}% \label{setupB}% \end{figure} \subsection{Comparison with the MDI limit} In this section, we summarize the most striking differences between the coherent four-channels (CFC) model of section \ref{ScatModel} and the MDI model of section \ref{MDI}. For setup (b), the CFC model allows $V^{P}\neq0$ and $MV\neq0$ whereas one finds $V^{P}=0$ and $MV=0$ with the MDI model. Another remarkable result is that for both setups (a) and (b), the CFC model gives $G^{P}\neq G^{AP}$ whereas the MDI model imposes $G^{P}=G^{AP}$. The table in Fig. \ref{Circuits} summarizes these results. \section{Incoherent four-channels limit\label{IFC}} In order to determine whether the specific spin-dependent behavior of the CFC model is due to coherence or to the low number of channels, it is interesting to consider the incoherent four-channels (IFC) limit. If the phase relaxation length of the CNT is much shorter than the distance between the different contacts, the global transmission and reflection probabilities of setups (a) and (b) can be calculated by composing the scattering probabilities of the different contacts instead of the scattering amplitudes\cite{Datta}. We have checked that this leads to replacing the scattering probabilities $\left\vert S_{\alpha\beta}^{m}(\{r_{jm},t_{jm},v_{jm},u_{jm},\delta_{ij}\})\right\vert ^{2}$ occurring in Eqs.(\ref{I}) and (\ref{Gdef}) by $S_{\alpha\beta}% ^{m}(\{\left\vert r_{jm}\right\vert ^{2},\left\vert t_{jm}\right\vert ^{2},\left\vert v_{jm}\right\vert ^{2},\left\vert u_{jm}\right\vert ^{2}% ,0\})$. Importantly, this description remains intrinsically quantum since the channel quantization is taken into account. In Figs. \ref{compSDIPS} and \ref{setupB}, we show with black and red dotted lines the IFC values corresponding to the different CFC curves. We find that $G^{c}$, $V^{c}$, $MG$ and $MV$ do not depend anymore on the phases $\delta_{ij}$. However, $G^{P}\neq G^{AP}$ is still possible for setups (a) and (b). More precisely, we have checked analytically that using identical $K$ and $K^{\prime}$ modes leads to\cite{check} $G^{P}=G^{AP}$, and we have checked numerically that $G^{P}\neq G^{AP}$ occurs in case of a $K$/$K^{\prime}$ asymmetry at one of the four contacts for setup (a), and at contacts 1 or 2 for setup (b). We can also obtain $V^{P}\neq0$ and $MV\neq0$ for setup (b) [and, more trivially, for setup (a)], with the same symmetry restrictions as for the CFC case (see table in Fig. \ref{Circuits}). 
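To make the substitution rule explicit, here is a hedged sketch for a single matrix element, $S_{41}^{m}$: the coherent (CFC) transmission probability is $|S_{41}^{m}|^{2}$ computed from the complex amplitudes of Eqs.~(\ref{S41quat}) and (\ref{dquat}), while the incoherent (IFC) value is obtained by evaluating the same algebraic expression with the probabilities $|t_{jm}|^{2}$, $|r_{jm}|^{2}$ and with all propagation phases set to zero. The full calculation treats every $S_{\alpha\beta}^{m}$ in the same way; the snippet is illustrative only, not the code used for the figures.
\begin{verbatim}
import numpy as np

def D_m(r1, r2, r2p, r3, r3p, r4, t2, t2p, t3, t3p, d12, d23, d34):
    """Denominator of Eq. (dquat); also valid with probabilities and zero phases."""
    e12, e23, e34 = np.exp(2j * d12), np.exp(2j * d23), np.exp(2j * d34)
    return ((1 - r1 * r2 * e12) * (1 - r2p * r3 * e23) * (1 - r3p * r4 * e34)
            - t2 * t2p * r1 * r3 * (1 - r3p * r4 * e34) * e12 * e23
            - t3 * t3p * r2p * r4 * (1 - r1 * r2 * e12) * e23 * e34
            - t2 * t2p * t3 * t3p * r1 * r4 * e12 * e23 * e34)

def T41_CFC(a, d12, d23, d34):
    """Coherent transmission probability |S_41^m|^2, from Eq. (S41quat)."""
    D = D_m(a['r1'], a['r2'], a['r2p'], a['r3'], a['r3p'], a['r4'],
            a['t2'], a['t2p'], a['t3'], a['t3p'], d12, d23, d34)
    return abs(a['t1'] * a['t2'] * a['t3'] * a['t4'] / D) ** 2

def T41_IFC(a):
    """Incoherent limit: same expression with |.|^2 amplitudes and zero phases."""
    p = {k: abs(v) ** 2 for k, v in a.items()}
    D = D_m(p['r1'], p['r2'], p['r2p'], p['r3'], p['r3p'], p['r4'],
            p['t2'], p['t2p'], p['t3'], p['t3p'], 0.0, 0.0, 0.0)
    return (p['t1'] * p['t2'] * p['t3'] * p['t4'] / D).real
\end{verbatim}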
Therefore, having $V^{P}\neq0$ and $MV\neq0$ for setup (b), and $MG\neq0$ for setups (a) and (b) is not specific to the coherent case: using a very small number of transport channels already allows these properties. It is nevertheless important to note that the values of $MG$ and $MV$ are strongly enhanced in the CFC case, due to resonance effects. Moreover, in the IFC case, the circuit is insensitive to the SDIPS, whereas in the coherent case, the SDIPS can further increase the amplitude of $MV$ and $MG$\cite{IFCnoSDIPS}. Finally, the coherent case has the advantage of allowing strong variations of the electric signals with the gate-controlled phases $\delta_{12}$, $\delta_{23}$ and $\delta_{34}$. \section{Discussion on first experiments} Reference~\onlinecite{Tombros} reports on a Single Wall Carbon Nanotube (SWNT) circuit biased as in Fig. \ref{Circuits}, but with four ferromagnetic leads. A hysteretic $V^{c}$ has been measured by flipping sequentially the magnetizations of the two inner contacts. However, no conclusion can be drawn from this experiment, due to the lack of information on the conduction regime followed by the device. Reference \onlinecite{Gunnar} reports on $V^{c}$ measurements for a setup (a) made with a SWNT. The authors of this reference observed a finite $V^{c}$ which oscillates around zero while the back gate voltage of the sample is swept. This suggests that this experiment was in the coherent regime. However, the amplitude of $V^{c}$ was very low ($\max (\left\vert V^{c}/V_{b}\right\vert )\sim0.01$), which indicates, in the framework of the scattering model, that the $K$ and $K^{\prime}$ modes were very similar and the spin polarization of the contacts' scattering properties very weak. It is therefore not surprising that these authors did not obtain a measurable $MV$ signal. Although setup (a) seems very popular in the nanospintronics community for historical reasons \cite{Johnson}, we have shown above that setup (b) also clearly deserves an experimental effort, as do $MG$ measurements in general. In this article, we have chosen to focus on the case of two-mode quantum wires because this is well suited to describing CNT-based devices, which are presently among the most advanced nanospintronics devices. However, technological progress might offer the opportunity to observe the effects depicted in this article in other types of nanowires, such as semiconducting nanowires. Indeed, quantum interferences have already been observed in \textrm{Si}\cite{Tilke} and \textrm{InAs} quantum wires\cite{Doh}, and spin injection has already been demonstrated in \textrm{Si} layers \cite{Jonker} and \textrm{InAs} quantum dots\cite{Hamaya}. One major difficulty may be to reach the few-mode, fully ballistic\cite{Zhou} regime with these devices. \section{Conclusion} In this work, we have studied theoretically various circuits consisting of a carbon nanotube with two transverse modes, contacted to two normal metal leads and two ferromagnetic leads. Two contacts are used as source and drain to define a local conductance, and the two other contacts are left floating, to define a non-local voltage outside the classical current path. When the magnetizations of the two ferromagnetic contacts are changed from a parallel to an antiparallel configuration, we predict, in the local conductance and the non-local voltage, magnetic signals which are specific to the case of a system with a low number of channels. 
In particular, we propose an arrangement of the normal and ferromagnetic leads [setup (b)] which would give no magnetic response in the multichannel diffusive incoherent (MDI) limit, but which allows magnetic responses in both the local conductance and the non-local voltage in the two-mode regime. The more traditional arrangement [setup (a)] used for the study of the MDI limit also shows a qualitatively new behavior, i.e. a magnetic response in the local conductance. These specific magnetic behaviors are strongly reinforced in the coherent case, due to resonance effects occurring inside the nanotube, and also, possibly, due to the Spin-Dependence of Interfacial Phase Shifts. Our calculations pave the way for new experiments on non-local spin transport in low-dimensional conductors. We acknowledge discussions with H. U. Baranger and G. E. W. Bauer. This work was financially supported by the ANR-05-NANO-055 contract, the European Union contract FP6-IST-021285-2 and the C'Nano Ile de France contract SPINMOL. \section{Appendix A: Discussion of the MDI limit with a resistor network} In this appendix, we discuss the MDI regime with an elementary but insightful resistor network model. When the electronic mean free path is much shorter than the spin-flip length, it is possible to define a spin-dependent electrochemical potential which obeys a local spin-dependent Ohm's law\cite{Zutic}. Thus, neglecting spin-flip scattering inside the CC, one can use the effective resistor network of Fig. \ref{Resistances.eps} to describe the behaviors of setups (a) and (b) in the MDI limit\cite{Valet,Tombros}. For completeness, we allow the four leads $j\in\{1,2,3,4\}$ to be ferromagnetic, with collinear magnetizations. The left (right) part of the resistor network corresponds to the up (down) spin channels. \begin{figure}[ptbh] \includegraphics[width=0.6\linewidth]{Resistances.eps}\caption{Resistor network used to describe the behavior of a one-dimensional Central Conductor (CC) connected to four leads $j\in\{1,2,3,4\}$ in the MDI limit. The contact between lead $j$ and the CC is represented by the resistors $R_{j}^{\sigma}$, $r_{j}^{\sigma}$, and $\widetilde{r}_{j}^{\sigma}$. The section $j-k$ of the CC is modeled with the two resistors $R_{jk}$. When lead $j$ is not ferromagnetic, one must use $R_{j}^{\uparrow}=R_{j}^{\downarrow}$, $r_{j}^{\uparrow}=r_{j}^{\downarrow}$, and $\widetilde{r}_{j}^{\uparrow}=\widetilde{r}_{j}^{\downarrow}$.}% \label{Resistances.eps}% \end{figure}Due to intra-lead spin-flip scattering, electrons are in local equilibrium in lead $j$. This equilibration is described by the electrical connection of $\uparrow$ and $\downarrow$ channels at node $j$, which has an electric potential $V_{j}^{c}$. The section $j-k$ of the CC is modeled with the two resistors $R_{jk}$. The contact between lead $j$ and the CC is represented by the resistors $R_{j}^{\sigma}$, $r_{j}^{\sigma}$, and $\widetilde{r}_{j}^{\sigma}$. When lead $j$ is not ferromagnetic, one must use $R_{j}^{\uparrow}=R_{j}^{\downarrow}$, $r_{j}^{\uparrow}=r_{j}^{\downarrow}$, and $\widetilde{r}_{j}^{\uparrow}=\widetilde{r}_{j}^{\downarrow}$. The current flowing from lead $1$ to lead $2$ is $I_{1}^{c}=I_{1\uparrow}^{c}+I_{1\downarrow}^{c}$. Since leads $3$ and $4$ are floating, they supply the CC with spin currents which are perfectly equilibrated, i.e. $I_{j\uparrow}=-I_{j\downarrow}$ for $j\in\{3,4\}$. 
We find \begin{equation} G^{c}=\mathcal{H}V_{b}/\mathcal{D}\label{GCMDI}% \end{equation} and% \begin{equation} V^{c}=\left( A_{34}^{\uparrow}-A_{34}^{\downarrow}\right) \left( B_{12}^{\uparrow}-B_{12}^{\downarrow}\right) V_{b}/\mathcal{R}_{23}% \mathcal{H}\label{VCMDI}% \end{equation} with \begin{equation} \mathcal{R}_{23}=\frac{R_{3}^{\uparrow}+R_{3}^{\downarrow}}{\sum \limits_{\sigma}\left( \mathcal{R}_{34}^{\sigma}-R_{3}^{\sigma}\right) }+\sum\limits_{\sigma}\left( \widetilde{r}_{2}^{\sigma}+r_{3}^{\sigma}% +R_{23}\right) \end{equation}% \begin{equation} \mathcal{H}=\sum\limits_{\sigma}\left( \mathcal{R}_{12}^{\sigma}% +\alpha_{\sigma,\sigma}+\alpha_{\sigma,-\sigma}\right) \end{equation}% \begin{equation} \mathcal{D}=\left( \mathcal{R}_{12}^{\uparrow}+\alpha_{\uparrow,\uparrow }\right) \left( \mathcal{R}_{12}^{\downarrow}+\alpha_{\downarrow,\downarrow }\right) -\alpha_{\uparrow,\downarrow}\alpha_{\downarrow,\uparrow}% \end{equation}% \begin{equation} 2A_{34}^{\sigma}=\mathcal{R}_{34}^{\sigma}\left[ (R_{3}^{\uparrow}% +R_{3}^{\downarrow})/(\mathcal{R}_{34}^{\uparrow}+\mathcal{R}_{34}% ^{\downarrow})\right] -R_{3}^{\sigma}% \end{equation}% \begin{equation} B_{12}^{\sigma}=\left( \mathcal{R}_{12}^{\sigma}+\alpha_{\sigma,\sigma }+\alpha_{-\sigma,\sigma}\right) \left( \mathcal{R}_{12}^{-\sigma}% -R_{2}^{-\sigma}\right) \end{equation}% \begin{equation} \alpha_{\sigma,\sigma^{\prime}}=R_{2}^{\sigma}(\mathcal{R}_{12}^{\sigma ^{\prime}}-R_{2}^{\sigma^{\prime}})/\mathcal{R}_{23}% \end{equation} and% \begin{equation} \mathcal{R}_{jk}^{\sigma}=R_{j}^{\sigma}+\widetilde{r}_{j}^{\sigma}% +R_{k}^{\sigma}+r_{k}^{\sigma}+R_{jk}% \end{equation} for $(j,k)\in\{1,2,3,4\}^{2}$. The value of $\mathcal{H}$ is independent from the contacts magnetic configuration, but $\mathcal{D}$ depends on the relative configuration of leads $1$ and $2$, so that $G^{P}\neq G^{AP}$ is possible provided leads $1$ and $2$ are ferromagnetic. In contrast, the value of $G^{c}$ is independent from the magnetization directions of leads $3$ or $4$ because, due to $I_{3(4)\uparrow}=-I_{3(4)\downarrow}$, the resistors $R_{3}^{\sigma}$, $r_{3}^{\sigma}$, $\widetilde{r}_{3}^{\sigma}$, $R_{4}^{\sigma}$ and $r_{4}^{\sigma}$ of Fig. \ref{Resistances.eps} are connected in series with $R_{3}^{-\sigma}$, $r_{3}^{-\sigma}$, $\widetilde {r}_{3}^{-\sigma}$, $R_{4}^{-\sigma}$ and $r_{4}^{-\sigma}$ respectively. We conclude that for setups (a) and (b), one has $G^{P}=G^{AP}$ in the MDI limit. From Eq. (\ref{VCMDI}), having $V^{P}\neq0$ requires that at least one of the biased leads $1$ or $2$ is ferromagnetic (for the generation of a spin-accumulation), and at least one of the floating leads $3$ or $4$ is ferromagnetic (for the detection of this spin-accumulation). These conditions are fulfilled for setup (a), but not for setup (b). Importantly, these results will not be modified if a moderate intra-CC spin-flip scattering or the finite width of the contacts is taken into account, because both features can be modelled with a distributed array of resistors connecting the two spin branches, which will not change the spin symmetry of the model of Fig. \ref{Resistances.eps}.
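The closed-form expressions above can be cross-checked numerically by standard nodal analysis of the network of Fig.~\ref{Resistances.eps}. Since the precise wiring of the resistors $R_{j}^{\sigma}$, $r_{j}^{\sigma}$, $\widetilde{r}_{j}^{\sigma}$ and $R_{jk}$ is fixed by that figure, the sketch below only illustrates the generic method for an arbitrary list of resistive branches: the spin-up and spin-down branches are distinct parts of the same graph, tied together only at the lead nodes, lead 1 is biased at $V_{b}$, lead 2 is grounded, and every other node (including the floating leads 3 and 4) carries zero net current. From the resulting potentials, $G^{c}=I_{1}^{c}/V_{b}$ and $V^{c}=V_{3}^{c}-V_{4}^{c}$ follow directly. This is an illustrative sketch, not the derivation used for Eqs.~(\ref{GCMDI}) and (\ref{VCMDI}).
\begin{verbatim}
import numpy as np

def solve_resistor_network(edges, fixed_potentials):
    """Nodal analysis of a resistor network given as (node_a, node_b, R) branches.

    fixed_potentials: dict {node: potential} for the biased leads, e.g.
    {'1': V_b, '2': 0.0}.  All other nodes (internal nodes and floating
    leads) carry zero net current and their potentials are solved for.
    Returns a dict {node: potential}.
    """
    nodes = sorted({n for a, b, _ in edges for n in (a, b)})
    idx = {n: i for i, n in enumerate(nodes)}
    L = np.zeros((len(nodes), len(nodes)))          # conductance (Laplacian) matrix
    for a, b, R in edges:
        g, i, j = 1.0 / R, idx[a], idx[b]
        L[i, i] += g
        L[j, j] += g
        L[i, j] -= g
        L[j, i] -= g
    V = np.zeros(len(nodes))
    fixed = [idx[n] for n in fixed_potentials]
    free = [i for n, i in idx.items() if n not in fixed_potentials]
    for n, v in fixed_potentials.items():
        V[idx[n]] = v
    # Kirchhoff's law at the free nodes:  L_ff V_f = -L_fc V_c
    V[free] = np.linalg.solve(L[np.ix_(free, free)], -L[np.ix_(free, fixed)] @ V[fixed])
    return {n: V[i] for n, i in idx.items()}

# Skeleton of use (branch list to be filled in according to Fig. Resistances.eps):
# edges = [('1', 'up_1', R1_up), ('1', 'down_1', R1_down), ('up_1', 'up_2', R12), ...]
# V = solve_resistor_network(edges, {'1': 1.0, '2': 0.0})
# then G_c = I_1 / V_b from Ohm's law on the branches attached to node '1',
# and V_c = V['3'] - V['4'].
\end{verbatim}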
\section*{ \large Time and complexity} The formulation of the concept of {\em complexity} can be traced back to the late 19th century when the Austrian physicist Ludwig Boltzmann studied the evolution of physical systems. Boltzmann gave a microscopic content to the thermodynamic {\em entropy} in connection with the irreversible time evolution towards equilibrium where systems exhibit maximum disorder or complexity. It was another Austrian physicist, Erwin Schroedinger (also born in Vienna), who in the mid 20th century suggested that under non-equilibrium conditions, temporal evolution could produce ordered states with less complexity than the equilibrium state, or rather with a different kind of complexity. Time then acquires a new status: it is an evolutionary factor - {\em the arrow of time} - by which natural phenomena can make systems become organized, an organization that materializes in the {\em emergence} of space- and time-dependent structures with various degrees of complexity. The concept of emergence was at the core of new insights that were highlighted in a 1972 article entitled {\em More is different} [1] in which physicist P.W. Anderson emphasized that an ensemble of simple elements is more than merely their sum, and that the whole can exhibit remarkable properties that cannot be predicted from the (even exact) knowledge of the constituting elements. Complexity appears as a characteristic of the ensemble of these elements, which, in physics, are often called the {\em degrees of freedom} of the system. Complexity in art pieces may be viewed as the materialization of the many degrees of freedom involved in artistic creation, and this is probably one of the many reasons why art resists theoretical analysis, theory being understood as a set of principles and methods from which logical computation and analysis can be performed on sets of data obtained from measurements carried out on a system (here the piece of art). Considered from this point of view, music may be more easily amenable to scientific analysis because of its underlying mathematical structure and because, when performed, it steers a one-dimensional course, the dimension of time. The {\em arrow of time} is intrinsic to musical expression: music emerges from silence and returns to silence. And, in contrast with other forms of art, it is not possible to take a snapshot of a piece of music; if time were to be stopped, the music would simply vanish. In practical terms, a musical sequence can be considered as the time evolution of an acoustic signal, and therefore a piece of music can be cast in the form of a time series. Technically this means that, be it in the form of a sequence of acoustic pulses or in the written form of symbols in a music score, a piece of music can be cast as a set of data points distributed along an axis with the dimension of time. Such a set, however complicated it may be, can always be coded as a {\em string of bits}. This leads us to the more formal concept of complexity as formulated in 1965 independently by G.J. Chaitin and A.N. Kolmogorov, who proposed an algorithmic (objective) definition. The idea goes as follows: given a string of bits, what is the shortest computer program that is able to produce the string? It is clear that a finite sequence of 2N bits with alternating 1s and 0s (10101010101010101...) should be produced by a program with minimal length (write N times 1 followed by 0) whereas a sequence with the same number of 1s and 0s distributed randomly (110100000111010100001011111...) 
can probably not be reproduced otherwise than by rewriting the complete string. The length of the program is then used to quantify the degree of complexity. An interesting aspect of the procedure is that one must - or the pointer of the computing machine must - sequentially scan the bits of the string in order to perform the computation, which is an operation performed in time. This observation indicates that the algorithmic measure of complexity implies a measurement in time. In particular, considering the string of data obtained from the coding of a piece of music, the algorithmic measure of its degree of complexity then yields a signature of the dynamics of that piece of music. And reciprocally, the dynamics perceived in the music reflects at least one component of its complexity. \section*{\large Complexity and music} Tools developed in the context of dynamical systems theory in mathematics and in physics provide techniques to analyze sets of data obtained from measurements on physical and other systems which exhibit complex behavior and are very difficult to explain on the basis of classical causal laws. For instance the laws of gravitational motion explain beautifully the orbital movement of planets in the solar system, but the energy dissipated in the erratic solar flares, turbulence in the atmosphere, cardiac arrhythmias, or stock market fluctuations cannot be described analytically by first-principles laws. Yet data analysis of the time series obtained from measurements on such phenomena can provide quantitative measures from which their degree of complexity can be evaluated, and thereby provide insight into the mechanisms of complex behaviors. How can we apply these concepts to the analysis of music, and what insights would this approach provide as to our perception of music? The answer to the first question is somewhat technical [2]. One can take the printed score of a chosen piece of music and play it on a synthesizer interfaced to a computer. The pitch values are converted into digital data which are stored in the computer memory, where the score is now converted into a time series, say $X(t)$ for a single-part score. Pieces with several parts are treated part by part to produce a set of time series $X(t), Y(t), Z(t),...$. Alternatively, one can access data libraries where digitally coded music scores are readily available. Once in the form of time series, the data are processed with the tools developed in the context of dynamical systems theory. \section*{\large From time to space} One particularly interesting characterization is obtained with the construction of the {\em phase portrait}: $X(t), Y(t), Z(t),...$ (or $X(t), X(t+n\Delta t), X(t+m\Delta t)$, using the time-delay method for single-part pieces). Suppose we want to analyze a string trio piece, and that we have processed the violin part, $X(t)$, the viola part, $Y(t)$, and the cello part, $Z(t)$. We then construct a three-dimensional Euclidean space spanned by $X(t), Y(t), Z(t)$ called the {\em phase space}. The $X$-axis represents the range over which the notes are played on the violin, and similarly for the viola along the $Y$-axis, and for the cello along the $Z$-axis. Suppose the piece starts with the violin playing A, the viola playing F, and the cello playing C simultaneously on the first beat of the first bar. Plotting the corresponding numerical values along each axis gives one point in the $XYZ$ phase-space. 
The next note will give the next point in phase-space, and so on until the completion of the piece. Joining the points yields a trajectory as illustrated in Fig.1c which shows the phase portrait of the three part {\it Ricercar} of Bach's {\it Musical Offering}. The result is called the {\em phase portrait} which gives a spatial representation (in the abstract phase space) of the temporal dynamics of the music piece reconstructed from the time series obtained from the pitch variations as a function of time: the phase portrait maps the {\em time} evolution of a dynamical process onto a {\em space} representation. \begin{figure} \begin{center} \makebox{ \resizebox{5.5cm}{5.5cm}{ \includegraphics{S1_Scale.jpg}}}\\ \makebox{ \resizebox{5.5cm}{5.5cm}{ \includegraphics{S2_Random.jpg}}}\\ \makebox{ \resizebox{5.5cm}{5.5cm}{ \includegraphics{S3_Bach.jpg}}} \caption{Phase portraits: Ascending and descending chromatic scale (top) $D_f=1$; Computer generated random music (center) $D_f=3$; Ricercar from Bach's Musical Offering (bottom) $D_f=1.72$.} \end{center} \end{figure} So measuring characteristics of the spatial object provides a measure of the dynamics. In figure 1, along with the phase portrait of the Bach's piece, two typical extreme examples: an elementary score constructed as a canon of three repeatedly ascending and descending chromatic scales and a piece of random music constructed with a computer generated white noise algorithm. Obviously the three pieces exhibit very different space occupations. Dimensionality measures were also used in the context of plastic art: in the two-dimensional space of Pollock's abstract paintings, an identification of Pollock's style is obtained by measuring the fractional value of the space covered by the paint [3]. The same type of dimensionality analysis is performed for the phase portraits in music with the box counting method. To illustrate the procedure we consider the pictures shown in figure 1. The repeated ascending and descending chromatic scales are periodic in time so that their corresponding trajectory in space form a closed loop which has dimensionality $D_f=1$. In contrast, a piece of random music explores all possible combinations in the course of its time evolution and therefore the resulting trajectory fills homogeneously the entire available space: the resulting object has dimensionality $D_f=3$. On the one hand, we have an almost totally predictable musical piece, the chromatic scales with minimal complexity, and on the other hand a sequence of unpredictable subsequent sounds, therefore with maximal complexity. The piece of Bach lies in the intermediate range with a dimensionality $D_f=1.72$. The dimension $D_f$ (known as the Hausdorff dimension) computed by the box-counting method, characterizes the structure of the complete phase trajectory, and its value yields a quantitative evaluation of the global dynamics of the music piece. \section*{\large Global dynamics and local dynamics} While the Hausdorff dimension provides an evaluation of the {\em global} dynamics of a piece of music, a measure of the {\it local} dynamics can be obtained from the application of information theory [2]. The analysis proceeds on the basis of the data files and goes along the lines of the discussion in the introductory section. A sequence of notes is viewed as a string of characters and is analyzed from the point of view of its information content. The string is defined by straightforward coding of the pitch by assigning a symbol to each note. 
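Before turning to the entropy measures, here is a hedged sketch of the dimensionality analysis described above: a phase portrait reconstructed from a single-part pitch series by the time-delay method, followed by a crude box-counting estimate of its dimension. This is not the code of Ref. [2]; the pitch coding, the delays and the range of box sizes are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np

def delay_embedding(x, n, m):
    """Phase portrait of a single-part pitch series by the time-delay method:
    one point (x(t), x(t + n*dt), x(t + m*dt)) per time step."""
    x = np.asarray(x, dtype=float)
    length = len(x) - max(n, m)
    return np.column_stack([x[:length], x[n:n + length], x[m:m + length]])

def box_counting_dimension(points, n_scales=7):
    """Crude box-counting estimate of the dimension D_f of a phase trajectory:
    slope of log N(eps) versus log(1/eps), N(eps) = number of occupied boxes."""
    pts = np.asarray(points, dtype=float)
    pts = (pts - pts.min(axis=0)) / np.ptp(pts, axis=0).max()   # fit into the unit box
    sizes = 1.0 / 2 ** np.arange(1, n_scales + 1)
    counts = []
    for eps in sizes:
        occupied = {tuple(cell) for cell in np.floor(pts / eps).astype(int)}
        counts.append(len(occupied))
    slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
    return slope

# Toy run on a repeated ascending/descending chromatic scale (MIDI pitch numbers);
# with so few distinct pitches the estimate is only indicative of a low dimension.
scale = np.array(list(range(60, 72)) + list(range(72, 60, -1)), dtype=float)
portrait = delay_embedding(np.tile(scale, 20), n=1, m=2)
print(box_counting_dimension(portrait))
\end{verbatim}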
The Shannon {\em entropy} $H$ measures the information content of a string of characters on the basis of their occurrence probability. It is defined such that its value has an upper bound ($=1$) for a fully random sequence. $H_0$ is the zeroth-order entropy, which is a measure of the plain occurrence frequencies of the notes, and the $\alpha$-th order entropies, $H_\alpha$ ($\alpha \neq 0$), follow from the successive conditional probabilities at increasing orders. In fact, the most relevant quantity is $H_{\alpha=1}$, which is based on the probability of finding the note $s_{i+1}$ given that the previous note was $s_i$. In addition, in western music, an important feature that must be accounted for is {\em tonality} (here denoted $\theta$). One therefore introduces a quantity defined as the {\em parametric entropy} $H'_1$ which measures the information content of a musical sequence by quantifying the transition probabilities from one note to the next given that the transition can occur within the reference tonality ($(s_i,s_{i+1}) \in \theta$), outside the tonality ($(s_i,s_{i+1}) \not \in \theta$), or from $\theta$ to off $\theta$ ($s_i\in \theta,s_{i+1}\not \in \theta$), and vice-versa. The operational result is that a large value of the parametric entropy is indicative of frequent excursions away from the tonality, with transitions over intervals distributed over a large number of notes. On the contrary, the parametric entropy has a low value when a note determines almost unambiguously the next one, in particular when the next note remains in the range of the tonality. Dimensionality and entropic analyses performed on eighty sequences chosen from the music literature from the 17th century (J.S. Bach) to the 20th century (E. Carter) lead to interesting observations. When the values of $D_f$ and $H'_1$ are organized in chronological order (referring to the date of composition) - with very few exceptions - there is no obvious clustering of pieces by composer or by period of composition; this holds for dimensionality as well as for parametric entropy. Now, when one plots the dimensionality $D_f$ versus the parametric entropy $H'_1$, as shown in Fig.2, a trend appears indicating a correlation between $D_f$ and $H'_1$, that is, between local dynamics and global dynamics. While no analytical relation could be conjectured for $D_f = {\cal F}(H'_1)$, Fig.2 suggests that a statistical analysis performed on a larger number of music pieces should provide a better quantification. \begin{figure} \begin{center} {\resizebox{9cm}{7cm} {\includegraphics{Dim_Entropy.jpg}}} \caption{Dimensionality $D_f$ versus parametric entropy $H'_1$ for about 30 pieces sampled in the music literature from the 17th to the 20th century (from Boon and Decroly [2]).} \label{fig:portraits} \end{center} \end{figure} \section*{\large Complexity and artistic value} The unexpected elements in a piece of music can be found in the deviations from established rules and the violation or even the mere rejection of such rules. In the context of classical forms, these deviations are mostly related to the liberty taken by the composer with respect to tonality. Thus when Leibowitz [4] considers {\it the complexity of musical language}, he argues that {\it Bach's and Haendel's complex polyphonic style is commonly opposed to what has been called the homophony of Haydn and Mozart (...). According to which criteria does one evaluate simplicity and complexity? Only one: the counterpoint} (...). 
However, continues Leibowitz, {\it the counterpoint is hardly the only constituting element in music, and, even more, it should be obvious that music can be simple or complex independently of any notion of counterpoint}. Leibowitz then considers the problem of harmony and so observes that the composer's {\it audacity as well as harmonic complexity may and must be evaluated according to further criteria}. Those then invoked concern {\it the principles of tonality expansion} and here - as argued on the basis of a few specific examples - { \it Haydn's and Mozart's works appear more audacious than those of their precursors}. Obviously the argument is of considerable importance as it leads Leibowitz to the concept of {\it increasing complexity which should determine the overall evolution of musical tradition}. Considering that entropy provides a quantitative measure of the degree of complexity the present results show that complexity - in contrast with Leibowitz' hypothesis - appears to be characteristic of the composition rather than of the composer. Accordingly we find no indication of a systematic increase in complexity paralleling historically the evolution of classical music. What we have considered is how complexity can be identified in music from the viewpoint of the dynamical nature of the musical object. Obviously there are aspects of music which have been ignored in this approach, such as the structural and spectral components of harmony and sound where complexity is to be identified with other tools. A global characterization of complexity in music implies a higher degree of complexity and therefore requires a sophisticated combination of various complementary approaches. Nevertheless a striking observation is that in both the analysis of complex structures in Pollock's abstract paintings and in the analysis of the dynamics of music pieces an identification of the complexity follows from the computation of the dimensionality, suggesting that the fractional nature of art might have an intrinsic value of more general significance. \bigskip \noindent{\bf References} \bigskip \noindent [1] P.W. Anderson, {\em More is different}, Science, {\bf 177}, 393 (1972). \noindent [2] J.P. Boon and O. Decroly, {\em Dynamical systems theory for music analysis}, Chaos, {\bf 5}, 501 (1995). \noindent [3] R.P. Taylor, A.P. Micolich and D. Jonas, {\em Fractal analysis of Pollock drip paintings}, Nature, {\bf 399}, 422 (1999). \noindent [4] R. Leibowitz,~{\em L'\'evolution de la musique de Bach \`a Sch\"on\-berg}, (Correa, Paris, 1951), Chap.2. \end{document}
\section{Introduction} Revenue management, namely the practice of adjusting supply to the random demand for perishable goods, is an old practice, which has increased in importance with the rise of the e-commerce \citep{boyd2003revenue}. Adjusting prices in a flexible way is likely to increase firms' revenues but it also comes at some cost, as it requires specialized teams and good algorithms. This continuous updating is also a complex exercise, so simple rules are usually set to simplify the pricing strategy. These rules may be suboptimal. This paper identifies how much gains in revenues can be expected by firms when adopting flexible strategies compared to uniform pricing. We also quantify the magnitude of losses of actual strategies compared to the optimal ones, under various constraints imposed on such strategies. Finally, by varying these constraints and the assumptions behind the counterfactuals, we identify the main sources of the gains or losses. \medskip We address these questions by studying revenue management at iDTGV, a subsidiary of the French railway monopoly, SNCF. From 2004 to 2017, this firm provided low-cost trains from Paris to several towns in France, and the corresponding return trains. Its revenue management was based on quantities, as is often the case in companies selling perishable goods (e.g. flight tickets, hotel rooms, rented cars for given periods etc.).\footnote{For a detailed review of revenue management techniques, see \cite{Talluri_vanRyzin_05}.} Namely, for the economy class on which we focus here, 12 classes of prices, called fare classes hereafter, were defined. These 12 prices were sorted in ascending order and for a given trip (e.g. Paris-Bordeaux), set almost constant during the period we studied. For each train, revenue managers could decide, at any moment before the departure and depending on the demand, to close the current fare class and open the next one, thus increasing the prices of the seats. We investigate hereafter the relative benefits of this popular pricing strategy compared to uniform pricing, or alternative, more flexible strategies. \medskip In order to compute such counterfactuals, we first show that in our context, recovering the price elasticity coefficient, relative demand parameters (of, e.g. Bordeaux versus Toulouse in Paris-Toulouse trains) and the total demand for a given train at a given price are sufficient to identify a rich set of counterfactual revenues. In particular, the timing of consumers' arrival is not necessary to identify counterfactual revenues. This is convenient because such information is often unobserved, as in our case. We can compute revenues not only under uniform pricing, but also under optimal dynamic pricing, with any number of fare classes. Importantly also, we are able to compute such counterfactuals assuming either that iDTGV has complete or incomplete information on a given train's demand. \medskip The identification of price elasticity, relative demand parameters and the total demand at a given price is however complicated by two issues that are likely to arise in many markets of perishable goods. First, and as already observed by \cite{Swan_90}, \cite{Lee_90} and~\cite{Stefanescu_12}, we face a severe censoring problem: demand at a given price is generally larger than the number of seats sold at that price. Second, prices vary only within the grids of 12 prices corresponding to each of the 12 fare classes. Hence, we cannot rely on usual instruments such as cost shifters. 
\medskip To identify price elasticity, we rely on a new argument tailored to our application but that may apply to other contexts as well. Specifically, we exploit the fact that revenue management is done at a route level (e.g. Paris-Toulouse), while the train serves several cities (e.g. Bordeaux and Toulouse). This means that fare classes close at the same time for all destinations within the same route. Relative prices between, e.g. Bordeaux and Toulouse, then vary simultaneously whenever a fare class closes.\footnote{A similar strategy could be used for, e.g., hotels, if the prices of rooms of different qualities change simultaneously.} We prove that the price elasticity can be identified by relating variations between relative prices and the proportion of consumers buying tickets for one destination versus another. Specifically, identification can be achieved under the assumption that price elasticities and the proportion of consumers seeking to buy a ticket for one destination versus another remain constant over time. We can test both conditions empirically, and the results suggest that they are reasonable in our context. \medskip The identification of the distribution, over the different trains, of the total demand at a given price is also difficult, in particular because of the censoring problem mentioned above. We first show that basic conditions on the rationality of consumers deliver inequalities relating this total demand with the number of seats that are sold. We complement these inequalities by weak optimality conditions on the observed revenue management. Specifically, we assume that this revenue management was better, on average, than a uniform pricing strategy performed under incomplete information and using prices from the observed grid of prices. Given our very purpose, it is important here not to impose too strong optimality conditions, such as optimality vis-\`a-vis all dynamic strategies, as these conditions would very much drive our results. Also, our conditions have the advantage of being relatively simple to exploit for identification and estimation purposes. At the end, these conditions stemming from demand and supply can be combined to form a set of moment inequalities. Though they rely on weak restrictions, these moment inequalities are sufficient to produce informative bounds on most counterfactual revenues. \medskip We obtain the following key findings. First, we estimate a price elasticity of about -4, which is below the range of most estimates in the transportation literature \citep[see, e.g.][for a meta-analysis]{Jevons_05}. However, we show in Appendix \ref{app:aggregated} that using aggregated quantities and prices to estimate price elasticity, as done by most of these studies, produces estimates that are substantially biased towards zero. Second, our results suggest that the observed revenue management practice was effective but still sub-optimal. The observed revenue management generated a gain of up to 8.1\% compared to the optimal uniform pricing in an incomplete information set-up. However, we also estimate, under the same informational set-up, a loss of at least 6.8\% and up to 15.1\% compared to the optimal pricing strategy under the same restriction of 12 ascending fare classes as those actually used. Actually, we estimate that simple strategies, such as 12 (non necessarily ascending) fare classes, already secure almost 99\% of the fully unconstrained optimal pricing strategy. 
\medskip Lastly, we emphasize the key role of demand uncertainty on revenues, and how revenue management can mitigate it. Revenues from a uniform pricing strategy are 17.2\% higher when moving from an incomplete to a complete information set-up. But the informational gains are much smaller (0.22\%) when considering fully flexible pricing strategies. In other words, implementing the optimal dynamic pricing strategy mitigates almost entirely the loss entailed by demand uncertainty. The reason behind is that information accumulates quickly: by observing and learning from the sales of half of the available seats, the firm can already secure more than 97\% of the revenue under complete information. \medskip \paragraph*{Related Literature.} Our paper relates to several theoretical and empirical papers in operational research and economics. The theoretical literature on revenue management has investigated optimal quantity-based revenue managements, where firms segment demand by choosing either once for all or dynamically the allocation of, say, seats into fare classes in which prices are predetermined. We refer in particular to \cite{Littlewood_72} and \cite{Brumelle_McGill_93} for static solutions, and to \cite{gallego1994optimal}, \cite{Feng_Gallego_95}, \cite{Feng_Xiao_00}, \cite{aviv2002pricing} for dynamic solutions. These last papers have studied optimal pricing strategies assuming that consumers arrive under some homogeneous Poisson process. \medskip In our paper, we assume that consumers arrive according to a flexible, non-homogenous Poisson process, as \cite{bitran1997periodic}, \cite{zhao2000optimal}, and \cite{mcafee2008dynamic}. Our demand model is closest to \cite{mcafee2008dynamic}, but with one key difference. Whereas they assume that the firm has a complete information on the demand parameters, we also consider an incomplete information set-up where only the distribution of these parameters is known. The firm then updates this distribution as consumers arrive. Such an incomplete information set-up seems more plausible when, as here, aggregate demand may vary much from one train to another. We also generalize \cite{mcafee2008dynamic} by studying constrained pricing strategies close to those implemented in practice. We refer to Online Appendix \ref{app:counter_rev} for details on the resolution of the corresponding Bellman equations. \medskip Our results underline the important role of information and demand learning to explain the gains and losses of revenue management. Such a point has already been made in the theoretical literature but to our knowledge, we are the first to quantify these roles using real data.\footnote{Different from the revenue management we consider, \cite{huang2020learning} study how firms' static pricing in liquor market in the US can be improved by learning market conditions from realized sales.} \cite{lin2006dynamic} studies similar models to ours in his sections 5.1 and 5.2 and allows for firm's Bayesian learning from the observed purchases or arrivals. Instead of deriving the optimal policy, his paper focuses on a specific policy (variable-rate), which is shown to be nearly optimal in simulations. \cite{aviv2002pricing} derives the optimal policy assuming an unknown constant arrival rate of consumers and simulates the loss due to incomplete information. By contrast, we allow for heterogeneous arrival rate, and study other practically relevant pricing strategies as well. 
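To fix ideas about the kind of Bellman recursion referred to above, the sketch below solves a deliberately simplified version of the complete-information problem: a single train with a fixed seat capacity, a finite number of short decision periods, Poisson demand with a constant-elasticity intensity, and a finite grid of admissible prices. It is only an illustration of the dynamic-programming logic, not the model of this paper (which features heterogeneous arrival rates, incomplete information and learning), and all parameter values are arbitrary.
\begin{verbatim}
import numpy as np

def optimal_dynamic_pricing(prices, T, capacity, a=5.0, elasticity=4.0, dt=0.01):
    """Backward induction for a single train sold over T short periods.

    Demand: at most one consumer arrives per period; at posted price p the
    sale probability is min(1, a * p**(-elasticity) * dt), i.e. a Poisson
    intensity with constant price elasticity, known to the firm (complete
    information).  Returns the value function V[t, c] (expected revenue to go
    with c remaining seats) and the index of the optimal price in `prices`.
    """
    prices = np.asarray(prices, dtype=float)
    sale_prob = np.minimum(1.0, a * prices ** (-elasticity) * dt)
    V = np.zeros((T + 1, capacity + 1))
    policy = np.zeros((T, capacity + 1), dtype=int)
    for t in range(T - 1, -1, -1):
        for c in range(1, capacity + 1):
            values = sale_prob * (prices + V[t + 1, c - 1]) \
                     + (1.0 - sale_prob) * V[t + 1, c]
            policy[t, c] = int(np.argmax(values))
            V[t, c] = values[policy[t, c]]
    return V, policy

# Toy run: a grid of 12 admissible prices, 500 periods, 30 seats.
V, policy = optimal_dynamic_pricing(np.linspace(0.5, 2.0, 12), T=500, capacity=30)
print(V[0, 30])   # expected revenue of the optimal policy, starting with a full train
\end{verbatim}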
Finally, in contrast to all these papers and ours, \cite{den2015dynamic} consider another form of learning by the firm, based on maximizing the likelihood of the data at its disposal. We refer to \cite{den2015dynamicsingle} for a complete survey on demand learning in dynamic pricing. \medskip In the empirical literature on revenue management, the closest papers to ours are \cite{lazarev2013welfare} and \cite{williams2021welfare}, both of which study dynamic airline pricing in a monopolistic market.\footnote{Another recent empirical paper is \cite{cho2018optimal}, which studies revenue management under oligopoly in the hospitality industry. Their analysis focuses on the pricing behavior of ``hotel 0'' (from which the demand data is obtained) in a competing environment.} While both papers focus on price discrimination and its welfare effects, the main goal of our paper is to quantify the potential gains and losses due to revenue management in practice. As a result, contrary to their models, ours explicitly incorporates the firm's learning from realized demand. Moreover, we do not impose strong optimality conditions on the observed prices.\footnote{See also \cite{cho2018optimal,cho2019semi} for recent examples that identify demand without imposing strong optimality conditions.} On the other hand, while \cite{lazarev2013welfare} allows travelers to be forward-looking, we abstract from any strategic considerations from consumers here, following \cite{williams2021welfare} and the operational research literature. The rationale is that in our context, and contrary to what happens in the airline industry, prices always increase. So, at least in the absence of uncertainty on the opportunity of the journey, consumers have no incentive to wait. \medskip The rest of the paper is organized as follows. In Section 2, we present the context and our data. Section 3 presents the demand model and our assumptions on the supply side. Section 4 is devoted to the identification and estimation of demand under our assumptions and given the data at our disposal. Section 5 presents the results. The appendix gathers the proofs of our identification results and the estimation with aggregated data. The Online Appendix displays the formulas for the counterfactual revenues, additional details on some robustness checks and additional proofs. \section{Institutional Background and Data} \subsection{Revenue Management at iDTGV in 2007-2009}\label{sub:revenue management} iDTGV was a low-cost subsidiary of the French railway monopoly SNCF; it was created in 2004 and disappeared in December 2017.\footnote{Its disappearance was due to internal strategic considerations at SNCF. It was basically replaced by Ouigo, the new low-cost service at SNCF.} It owned its trains and had a pricing strategy independent from SNCF. Prices were generally lower than the full-rate prices of SNCF, but were also associated with a slightly lower quality of services. Namely, tickets could only be bought on the Internet, they were nominative (issued in the passenger's name) and could not be cancelled. On top of that, they could be exchanged only under some conditions and at some cost. \medskip The routes of iDTGV were all between Paris and other towns. For each of those towns and every day, one train left Paris and another returned to Paris. Table \ref{DES1} presents the routes we observe in our data from May 2007 to March 2009. These routes have several stops, but to simplify the analysis, we gather them so as to form a single intermediate stop and a single final stop.
We aggregate the cities according to the price schedule. For instance, we group Aix-en-Provence and Avignon together in the Paris-Marseille route since the corresponding prices are always the same. This aggregation is consistent with Assumption \ref{hyp:cons_demand} below, as our demand model remains valid after aggregation of cities. \medskip Different routes may share the same intermediate destination. For instance, Bordeaux is the intermediate destination of Paris-C\^ote basque and Paris-Toulouse. Importantly, no tickets were sold between the intermediate and the final destination, e.g. no Bordeaux-Toulouse tickets were sold on the Paris-Toulouse route. Our understanding is that this was done to avoid having to inspect tickets at intermediate destinations, as there were no ticket inspectors in the trains. \begin{center} \begin{threeparttable} \vspace{-1cm} \caption{\small Routes with intermediate and final destinations}\label{DES1} \begin{tabular}{lllc} \toprule Route name &Final stop(s) &Intermediate stop(s) &Nb. of trains \\ \midrule C\^ote d'Azur &Cannes, Saint-Rapha\"el, Nice & Avignon & 452 \\ Marseille &Marseille &Aix-en-Provence/Avignon & 453 \\ Perpignan & Perpignan & N\^imes, Montpellier & 689 \\ \multirow{2}{*}{C\^ote basque} &St Jean de Luz, Bayonne,& \multirow{2}{*}{Bordeaux} & \multirow{2}{*}{405} \\ & Biarritz, Hendaye & & \\ Toulouse & Toulouse & Bordeaux &411 \\ Mulhouse & Mulhouse &Strasbourg &499 \\ \midrule Total & & & 2,909 \\ \bottomrule \end{tabular} \begin{tablenotes} \footnotesize \item \emph{Notes:} we have different numbers of observations for the different routes because the period we cover varies slightly from one route to another. \end{tablenotes} \end{threeparttable} \end{center} \medskip The trains are split into economy class and business class cars of fixed sizes. Revenue management was implemented almost independently between the two classes, i.e. under the sole constraint that prices in economy class are always lower than in business class. This constraint was very seldom binding in practice, so we ignore it. We focus hereafter on the economy class, which represents roughly $73\%$ of the seats. In this class, there are 12 fare classes corresponding to 12 prices sorted in ascending order. The price of a given fare class, at peak time or off-peak and for some origin-destination trip (e.g. Paris-Bordeaux), remained constant for several months (e.g. from 03/01/2007 to 10/31/2007) before being adjusted marginally, mostly to account for inflation. Contrary to SNCF, iDTGV did not engage in any third-degree price discrimination, so there were no discounts for young people, old people or families. \medskip In this context, revenue management consists of deciding in real time whether to maintain the current fare class or to close it and move to the next one, resulting in a price increase. Coming back to a previous fare is impossible; thus, there are no last-minute drops in ticket prices for trains that still have several empty seats. Also, revenue managers could decide to never open the first fare classes and begin selling tickets directly in a higher fare class. Symmetrically, the last fare class may never be reached. In practice, revenue management was operated through a Computerized Reservation System (CRS). Before the beginning of sales, it fixes a seat allocation plan for all fare classes, using the history of purchases on past trains. During sales, the CRS uses the number of tickets sold so far to make recommendations on the size of subsequent fare classes.
Revenue managers can nevertheless always intervene, both on the initial and on subsequent seat allocations, according to their experience on past trains.\footnote{Manager intervention in automated revenue management also exists in other industries, e.g. the hospitality industry \citep{cho2018optimal}.} \medskip Finally, and crucially for our identification strategy, the revenue management did not use separate fare classes for the different destinations of a given train. For instance, in a Paris-Toulouse train, the closure of the first fare class occurred exactly at the same moment for both Bordeaux and Toulouse. Hence, price changes of Paris-Bordeaux and Paris-Toulouse tickets happened exactly at the same time, for all trains. According to discussions with people in the revenue management department, this was to limit the number of decisions to be taken at each moment. \subsection{Data and descriptive statistics} We have data on iDTGV trains between May 2007 and March 2009 in economy class and for journeys from Paris to the rest of France. We first observe basic characteristics of the trains: all the stops, departure and arrival time, day of departure (e.g. May 2, 2008) and whether it corresponds to a peak time or not. We also observe the price grid used for that train for each fare class. For each route and type of period (peak time or off-peak), there is a limited number of such grids, as these grids were changed only a few times during the period we observe (e.g. 3 times for Paris-Toulouse). We also observe the sales of each fare class for all trains. On the other hand, we observe neither the purchase dates nor the opening times of the fare classes. For a given route, capacity is defined as the maximal number $n$ such that for at least three trains, $n$ seats were sold.\footnote{We use this definition (rather than the maximal number of seats sold across all trains of a given route) to take into account rare cases of overbooked trains. With this definition, we observe 5 cases of overbooking, over the $2,909$ trains of our dataset. Note that capacity can be assumed to be fixed for a given route because the number of coaches in economy class is fixed.} \medskip Table \ref{DES2} presents some descriptive statistics on our data. We observe a substantial amount of price dispersion within trains. For instance, on the C\^ote d'Azur line, the minimal price paid by consumers on average over the different trains (19.3\EUR{}) was about three and a half times lower than the average maximal price (68.4\EUR{}). We also observe substantial variations in the average load across routes. While trains in Paris-Marseille were always nearly full, with an average load above 95\%, this was far from being the case on the C\^ote basque line, with an average load of only 65.4\%. This suggests that the actual pricing may not be fully optimal, at least for some routes. \bigskip \begin{center} \begin{threeparttable} \vspace{-1cm} \caption{\small Descriptive statistics, economy class, from Paris} \label{DES2} \begin{tabular}{lccccccc} \toprule & & Avg & \% final & \multicolumn{4}{c}{Prices} \\ Route &Capacity & Load & dest. & Avg & Avg min. & Avg max.
& max/min \\ \midrule C\^ote d'Azur &324 & 85.4\% & 81.5\% & 50.3 & 19.3 & 68.4& 3.54\\ Marseille &324 & 95.5\% & 60.0\% &49.5 & 19.0 & 70.5 & 3.71\\ Perpignan &324 & 88.6\% & 27.4\% & 50.0 & 20.2 & 72.6 &3.59\\ C\^ote basque &350 & 65.4\% & 64.1\% & 37.3 & 19.7 & 53.3 &2.71\\ Toulouse &350 &87.3\% & 55.3\% & 43.6 & 19.4 & 67.2&3.46 \\ Mulhouse &238 &79.4\% & 24.1\% & 35.0 & 19.4 & 50.0 &2.58\\ \bottomrule \end{tabular} \begin{tablenotes} \scriptsize \item Notes: Avg min. and Avg max. are the averages of the minimal and maximal prices charged for each train, for the final destination. max/min is the ratio between the two previous columns. \end{tablenotes} \end{threeparttable} \end{center} \section{Theoretical model and parameters of interest}\label{sec:mode} \subsection{Demand side}\label{sec:demand_model} We consider a demand model close to that of \cite{mcafee2008dynamic}. A train $T$ is defined by its route $r(T)$ (e.g. Paris-Toulouse) and its day of departure (e.g. May 2, 2008). For each route $r$, we denote by $a_r$ the intermediate destination and by $b_r$ the final destination. To simplify notation and in the absence of ambiguity, we just denote the destinations of a train $T$ by $a$ and $b$ instead of $a_{r(T)}$ and $b_{r(T)}$. For any train $T$, tickets are sold between the normalized dates $t=0$ and $t=1$. We denote the fare classes by $k\in \{1,...,K\}$. Within fare class $k$, tickets for train $T$ and destination $d\in \{a_{r(T)},b_{r(T)}\}$ are sold at price $p_{dkT}$. We recall that $p_{dkT}$ belongs to a grid of $K$ prices that remains fixed for several months and depends only on the destination $d$ and whether the train leaves at a peak time or not. Finally, we denote by $V_{dT}(A, B)$ the number of consumers arriving during the subset $A$ of the time interval $[0,1]$ and with a valuation belonging to the subset $B$ of $[0,\infty)$. Similarly, let $D_{dT}(t,t';p_d)$ denote the demand for destination $d$ in train $T$ between dates $t$ and $t'$ (with $(t,t')\in[0,1]^2$) when the price is constant and equal to $p_d$. Then $D_{dT}(t,t';p_d)= V_{dT}([t,t'),[p_d,\infty))$. We then assume the following condition. \begin{hyp} (Consumers' demand) For all $T$ and $d\in \{a,b\}$, there exist $\varepsilon>1$ and a random process $b_T(.)$ on $[0,1]$, continuous and satisfying $\min_{u\in[0,1]} b_T(u)>0$ almost surely, such that conditional on $\xi_{dT}$ and $b_T(.)$: \begin{enumerate} \item $V_{dT}$ is a Poisson process with intensity $I_{dT}(t,p) = \xi_{dT} b_T(t) \varepsilon p^{-1-\varepsilon}$ for $(t,p)\in [0,1]\times [0,\infty)$. Without loss of generality, we let $\int_0^1b_T(u)du=1$. \item $V_{aT}$ and $V_{bT}$ are independent. \end{enumerate} \label{hyp:cons_demand} \end{hyp} The term $\xi_{dT}$ captures train-destination specific overall demand shocks. For instance, demand to Cannes may increase a lot during the Cannes Film Festival. The term $b_T(t)$ describes the pattern of consumers' arrival times for train $T$. We do not make any restriction hereafter on this function, nor do we impose that it be the same from one train to another. On the other hand, we impose that the intensity of $V_{dT}$ takes a multiplicative form. This form has three implications. First, we assume that the arrival processes of consumers for destinations $a$ and $b$ have the same time profile, as they only differ by a multiplicative destination-train specific constant $\xi_{dT}$. This condition can be tested, an important point to which we return in Section \ref{sub:ident_demand} below.
Second, we impose a specific functional dependence in $p$, of the form $p^{-1-\varepsilon}$. This particular form is not essential for our identification strategy. We do have to impose a parametric form, on the other hand, given that prices only take a few different values. When restricted to $[p_0,\infty)$ for any $p_0>0$, the intensity we consider corresponds to consumers' valuations following a Pareto distribution with shape parameter $\varepsilon$. \medskip Finally, by considering a multiplicative form ($I_{dT}(t,p) \propto b_T(t) \times p^{-1-\varepsilon}$), we assume that the distribution of consumers' valuations does not evolve over time. In particular, Assumption \ref{hyp:cons_demand} implies that the demand for destination $d$ on the time interval $[t_1,t_2]$ satisfies $$D_{dT}(t_1,t_2;p) \,|\, \xi_{dT}, b_T(.)\sim \mathcal{P}\left(\xi_{dT} p^{-\varepsilon} \int_{t_1}^{t_2} b_T(u) du \right).$$ Thus, as in \cite{mcafee2008dynamic}, we assume that the price elasticity does not evolve over time. This assumption could be relaxed with more detailed data. We believe it is reasonable in our context, where purchasers of the economy class tickets of these trains are already quite homogeneous. Nonetheless, we test it and consider an extended model allowing for time-varying elasticities in Section \ref{sec:robustness_checks} below. \medskip Assumption \ref{hyp:cons_demand} together with a supply-side restriction (Assumption \ref{hyp:yield_line}) turns out to be sufficient to identify $\varepsilon$, see Point 1 of Theorem \ref{thm:ident_xi_eps} below. To further identify the distribution of $(\xi_{aT},\xi_{bT})$, we consider the next assumption. Hereafter, $W_T$ denotes a vector of observed characteristics of train $T$ (e.g., whether the train operates during rush hour or not) and $X_{dT}$ denotes a vector of observed characteristics of destination $d$ served by train $T$, e.g., the travel time from Paris to $d$ by train $T$. \begin{hyp}\label{hyp:gamma} For $d\in\{a,b \}$, $\xi_{dT}$ satisfies: \begin{itemize} \item[(i).] $\xi_{dT}=\exp\{X'_{dT}\beta_0\}g_0(W_T)\eta_{dT}$ where $\eta_{aT}$, $\eta_{bT}$ and $(X_{aT},X_{bT},W_T)$ are independent. \item[(ii).] $\eta_{dT}\sim\Gamma(\lambda_{d0},1)$. \end{itemize} \end{hyp} Assumption \ref{hyp:gamma}(i) specifies $\xi_{dT}$ as the product of a function of $X_{dT}$, $g_0(W_T)$, and a remainder term $\eta_{dT}$. It restricts $\xi_{aT}$ and $\xi_{bT}$ to be dependent only through the observed variables $(X_{aT},X_{bT},W_T)$, rather than through $(\eta_{aT},\eta_{bT})$. This is plausible as long as one includes sufficient controls in $(X_{aT},X_{bT},W_T)$. Importantly, we leave the function $g_0(.)$, which determines how train-specific characteristics affect the demand, unrestricted. Assumption \ref{hyp:gamma}(ii) imposes that, conditional on $X_{dT}$ and $g_0(W_T)$, $\xi_{dT}$ follows a gamma distribution. Since we include a $d$-specific constant term in $X_{dT}$, we can normalize the scale parameter of the gamma distribution to $1$. As detailed below, the assumption of a gamma distribution does not matter for identification. It is rather made for computational reasons: the fact that the gamma and Poisson distributions are conjugate makes it possible to simplify the computation of counterfactual revenues under incomplete information, see Online Appendix \ref{app:counter_rev} for more details. We also consider log-normality as a robustness check below, though without recomputing all counterfactual revenues in that case.
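\medskip To fix ideas, the following minimal sketch (in Python) simulates the sales of one train-destination pair under Assumptions \ref{hyp:cons_demand} and \ref{hyp:gamma} and illustrates the gamma-Poisson conjugacy just mentioned. All parameter values (\texttt{eps}, \texttt{lam\_d}, \texttt{scale\_d}, the price and the share of the arrival mass) are purely hypothetical and are not estimates from our data; the posterior update is written with the rate-1 convention for $\Gamma(\lambda_{d0},1)$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameter values, for illustration only (not estimates from the paper).
eps = 4.0        # price elasticity parameter (valuations are Pareto with shape eps)
lam_d = 2.0      # shape of the Gamma(lambda_d0, 1) heterogeneity term eta_dT
scale_d = 1.0e8  # exp(X'beta) * g(W), treated here as a known scalar

# Assumption 2: xi_dT = exp(X'beta) g(W) eta_dT, with eta_dT ~ Gamma(lam_d, 1).
eta = rng.gamma(shape=lam_d, scale=1.0)
xi = scale_d * eta

# Assumption 1: demand at a constant price p over a period carrying a share b_mass
# of the arrival pattern b(.) is Poisson(xi * p**(-eps) * b_mass).
def demand(p, b_mass):
    return rng.poisson(xi * p ** (-eps) * b_mass)

p, b_mass = 30.0, 0.4
n_sold = demand(p, b_mass)

# Gamma-Poisson conjugacy: with prior eta ~ Gamma(lam_d, rate 1) and
# n_sold | eta ~ Poisson(eta * c), the posterior of eta is Gamma(lam_d + n_sold, rate 1 + c).
c = scale_d * p ** (-eps) * b_mass
post_shape, post_rate = lam_d + n_sold, 1.0 + c
print(n_sold, post_shape / post_rate)  # observed sales and posterior mean of eta
\end{verbatim}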
\medskip Note that in (i), we made the simplifying assumption that trains have only two destinations, an intermediate one $a$ and a final one $b$. But recall from Table \ref{DES1} that most of them serve more than just two cities, so $a$ or $b$ may actually correspond to more than one city. If so, we modify (i) by assuming that \begin{equation} \xi_{dT}=\left[\sum_{c\in d} \exp(X'_{cT}\beta_0)\right]g_0(W_T)\eta_{dT}, \label{eq:aggreg_cities} \end{equation} where $c$ is an index for cities belonging to either $a$ or $b$. For instance, in a train to C\^ote d'Azur, $c$ corresponds to Avignon for destination $a$, whereas for destination $b$, $c$ ranges over Cannes, Saint-Rapha\"el and Nice, see again Table \ref{DES1}.\footnote{Note, on the other hand, that all cities $c\in d$ are priced equally, so we do not need to take into account price variations between cities.} \subsection{Supply side} We now formalize the features of revenue management already discussed in Section \ref{sub:revenue management}. First, recall that the revenue management is operated at the route level (e.g. Paris-Toulouse) rather than for each destination of this route (e.g. Paris-Bordeaux and Paris-Toulouse for the route Paris-Toulouse). We thus make the following assumption. \begin{hyp}\label{hyp:yield_line} (Revenue management at the route level) The opening time of fare class $k \in \{1,...,K\}$, $\tau_k$, is a stopping time with respect to the process $t \mapsto N_{aT}(t)+N_{bT}(t)$, where $N_{dT}(t)$ is the number of purchases for $d$ made before $t$. \end{hyp} Assumption \ref{hyp:yield_line} states that the decision to open a new fare class depends only on past total purchases, rather than on the split of these purchases between $a$ and $b$. Such an assumption is fully in line with the fact that a single fare class is used for the two destinations of each route. It was also confirmed by discussions we had with the revenue management department. \medskip Our second assumption on the supply side is a weak optimality condition for the firm. To introduce it, let $R_T(p_a,p_b)$ denote the maximal revenue for train $T$ under uniform pricing at $(p_a, p_b)$ for destinations $a$ and $b$ respectively. This maximal revenue is obtained by considering the optimal quotas $C_{aT}$ and $C_{bT}$ of tickets sold for destinations $a$ and $b$ respectively, with $C_{aT}+C_{bT}=C_T$, the total capacity of train $T$ (the exact formula of $R_T(p_a,p_b)$ is displayed in Equation \eqref{eq:uniform_rev} below). Let also $p_{dkT}$ denote the price in train $T$ and fare class $k\in \{1,...,K\}$ for destination $d\in\{a,b\}$. The weak optimality condition we consider is the following: \begin{hyp}\label{hyp:weak_opt} (Weak optimality of actual revenue management) We have \begin{equation}\label{eq:moment_ineg3} \max_{k=1,...,K}\mathbb E\left[{R_T(p_{akT},p_{bkT})}|W_T\right]\leq \mathbb E\left[R^{\text{obs}}_T|W_T\right]. \end{equation} \end{hyp} By conditioning on $W_T$, which only includes coarse proxies of the true demand, we allow for the possibility that revenue managers use limited information for their pricing strategy. In reality, it seems credible that they have access to additional signals on the true demand for a specific train. For instance, they could use the past numbers of purchases in each fare class in previous years for the same train. If so, we would expect Inequality \eqref{eq:moment_ineg3} to also hold conditional on this information.
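\medskip As an illustration, and anticipating the expression \eqref{eq:uniform_rev} for $\mathbb{E}[R_T(p_a,p_b)|W_T]$ given in Section \ref{sec:id_Bt}, the following minimal sketch (in Python) evaluates the left-hand side of \eqref{eq:moment_ineg3} by simulation for one group of comparable trains. All numbers are hypothetical, the grid is shortened to four fare classes, the capacity split is searched on a coarse grid, and $(\xi_{aT},\xi_{bT})$ are treated as known for simplicity; the incomplete-information version would also integrate over the gamma heterogeneity, as in \eqref{eq:uniform_rev}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical inputs for one group of comparable trains (not taken from the data).
C = 324                                      # total capacity of the train
grid_a = np.array([19.0, 25.0, 32.0, 40.0])  # grid prices, intermediate destination a
grid_b = np.array([19.0, 27.0, 35.0, 45.0])  # grid prices, final destination b
xi_a, xi_b, eps = 2.0e7, 6.0e7, 4.0          # demand parameters (hypothetical values)

def expected_uniform_revenue(p_a, p_b, n_sim=2000):
    """Simulated E[R_T(p_a, p_b)]: best capacity split under uniform prices (p_a, p_b)."""
    D_a = rng.poisson(xi_a * p_a ** (-eps), size=n_sim)  # total demand for a at price p_a
    D_b = rng.poisson(xi_b * p_b ** (-eps), size=n_sim)  # total demand for b at price p_b
    best = 0.0
    for C_a in range(0, C + 1, 4):                       # coarse search over (C_a, C - C_a)
        rev = p_a * np.minimum(D_a, C_a) + p_b * np.minimum(D_b, C - C_a)
        best = max(best, rev.mean())
    return best

# Left-hand side of the weak optimality condition: best uniform-price revenue on the grid.
lhs = max(expected_uniform_revenue(pa, pb) for pa, pb in zip(grid_a, grid_b))
observed_revenue = 11_000.0   # hypothetical average observed revenue (in euros)
print(lhs, lhs <= observed_revenue)
\end{verbatim}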
\medskip Importantly, Assumption \ref{hyp:weak_opt} does not imply that the revenue management performs better than the optimal uniform pricing, because we only impose that observed revenues exceed those of any uniform pricing strategy that is constrained to the grid of the 12 predetermined prices. In other words, we simply assume that observed revenues are on average higher than those one would have obtained by sticking from $t=0$ to $t=1$ to one of the fare classes. \medskip Moreover, we do not impose any optimality with respect to all dynamic strategies. We refrain from doing so for several reasons. First, such an assumption would conflict with our very objective of quantifying the gains or losses of the actual revenue management, compared to alternative scenarios. By definition, assuming a strong form of optimality would mechanically result in gains against most simpler pricing strategies. Second and related to the first point, it seems very restrictive in our setting to assume that the optimal dynamic strategy was adopted. As discussed in Section \ref{sub:revenue management}, the revenue management applied simplified rules (increasing fares from 12 predetermined fare classes), which can at best approach the optimal solution. Moreover, seat allocation decisions were also subject to the managers' manual interventions, which could be a source of suboptimality.\footnote{See \cite{cho2018optimal,cho2019semi,phillips2021pricing} for evidence of suboptimality due to human management.} Further, computing the optimal dynamic strategy under the simplified rules is still a very complicated dynamic programming problem. While \cite{Feng_Xiao_00} have proposed an algorithm for computing the solution for a homogeneous Poisson process, little has been done so far for the non-homogeneous case, to our knowledge. Finally, given that iDTGV was only created in 2004, it is doubtful that it perfectly knew the demand parameters, and in particular all the destination-train-specific effects $\xi_{dT}$. \subsection{Parameters of interest} \label{sub:parameters_of_interest} We aim to compare the observed revenues with several counterfactual revenues, depending on the type of revenue management and the information the firm has access to. We consider several possible pricing strategies, from the most basic to the most sophisticated ones. The first, uniform pricing, simply corresponds to fixing the price of each destination of a given train once and for all. We let $R_u$ denote the optimal counterfactual revenues, averaged over all trains, under this pricing regime. At the other extreme, in ``full'' dynamic pricing, prices can be changed at any time. $R_f$ then corresponds to the optimal counterfactual revenues in this set-up. We also study pricing strategies, called stopping-time strategies hereafter, where prices can be changed only after a ticket is sold. The corresponding optimal revenues are then $R_s$. Finally, we consider constrained stopping-time strategies close to what was implemented in practice, by assuming that only $M$ fares, or $M$ increasing fares, are allowed. The corresponding optimal revenues are denoted by $R_{sM}$ and $R_{sM+}$, respectively. To compute these counterfactual revenues, we maintain Assumption \ref{hyp:cons_demand}. This means that for pricing strategies where prices are allowed to decrease, we rule out any anticipation of such price decreases by consumers. \medskip Hereafter, we consider two scenarios in terms of information available to the revenue managers.
\begin{enumerate} \item (Complete information) Revenue managers fully know the expected demand for each train. Thus, they observe $\varepsilon$, $b_T(.)$, $\xi_{aT}$ and $\xi_{bT}$ for each train $T$; \item (Incomplete information) Revenue managers observe $\varepsilon$, $b_T(.)$ and $(X_{aT},X_{bT},W_T)$, but only know the conditional distribution $f_{\xi_{aT},\xi_{bT}|X_{aT},X_{bT},W_T}$ of $(\xi_{aT},\xi_{bT})$. As time goes by, revenue managers update their information on $(\xi_{aT},\xi_{bT})$ according to Bayes' rule. \end{enumerate} The complete information case should be seen as a benchmark. It is useful in particular to quantify the value of information and contrast the gains of revenue management in complete and incomplete information set-ups. The case of incomplete information is probably more realistic. In this scenario, revenue managers know, for each train, the pattern of consumers' arrival over time ($b_T(.)$) but do not know exactly the aggregate demand for each destination ($\xi_{aT}$ and $\xi_{bT}$). The assumption that $b_T(.)$ is known makes particular sense if $b_T(.)$ does not depend on $T$, in which case revenue managers may have learned from previous trains how consumers arrive over time. \medskip If the scenario of incomplete information holds in practice, the differences between the counterfactual revenues and the observed ones can be interpreted as the potential gains or losses of the optimal revenue management under different constraints compared to the actual revenue management. Hereafter, we use the superscripts $c$ and $i$ to denote the two information set-ups. Hence, $R_u^c$ denotes for instance the counterfactual optimal revenue under uniform pricing and complete information. \medskip In all the counterfactual scenarios, we consider a separate pricing strategy for destinations $a$ and $b$, contrary to the actual practice. On the other hand, for computational reasons, we cannot consider the fully optimal pricing strategies for the two destinations. Without further restrictions, the state space is large: the optimal strategy at any time $t$ depends on the remaining seats for both destinations. To reduce this state space, we fix ex ante the total number of seats available for stop $a$ ($C_{aT}$, say) and thus for $b$ ($C_{bT}=C_T-C_{aT}$, with $C_T$ the total number of seats in train $T$). Then, depending on the scenario we consider, we either use the optimal pre-allocation $C_{aT}$, or fix it so that $C_{aT}$ matches the observed average sales for $a$. In any case, fixing $C_{aT}$ allows us to solve the optimization problem separately for each destination (given the independence of $\eta_{aT}$ and $\eta_{bT}$ imposed in Assumption \ref{hyp:gamma}(i)), rather than jointly. This greatly reduces the computational burden of the optimization problem. For this reason, our results below may be seen as lower bounds on the fully optimal counterfactual revenues. This actually reinforces some of our conclusions below. Also, we compare in Section \ref{sec:robustness_checks} below the revenues under uniform pricing with and without pre-allocation, and do not find important differences between the two. \section{Identification and estimation} In this section, we first clarify which parameters of the demand function are needed to recover the average revenues under the counterfactual scenarios described above. We also describe the challenges for the identification of these parameters. Next, we show how the price elasticity and relative demand effects can be identified.
We then describe the partial identification of the distribution of train effects and of the counterfactual revenues. Finally, we show how to perform inference on the parameters of interest. The asymptotic framework we consider below is obtained by letting the number of trains tend to infinity (recall that in our application, we observe 2,909 trains). \subsection{A first result and challenges}\label{sub:param} The following theorem clarifies which parameters of the demand are required to identify all the counterfactual revenues we consider. \begin{thm}\label{thm:counterf_rev} Suppose that Assumptions \ref{hyp:cons_demand}-\ref{hyp:gamma} hold. Then, $R^{I}_{r}$ is a function of the distribution of $(\xi_{aT},\xi_{bT}, X_{aT}, X_{bT}, W_T)$ and $\varepsilon$, for $I\in\{c,i\}$ and $r\in\{u,f,s,sM,sM+\}$. \end{thm} We obtain the result by constructing new Poisson demand processes $\widetilde{V}_{aT}$ and $\widetilde{V}_{bT}$ with the same parameters as the true ones $V_{aT}$ and $V_{bT}$, except that they are homogeneous: $b_T(.)$ is replaced by $\widetilde{b}_T(.)=1$. We prove that the optimal revenues are the same for these new demand processes as for the original ones. This shows that the optimal revenues depend on $I_{dT}(.,.)$ only through $\xi_{aT}$ and $\xi_{bT}$: it does not matter whether consumers arrive early or late, as long as, on average, the same number of consumers eventually arrives. The result holds because, basically, all the constraints on pricing we consider are independent of time. In this sense, Theorem \ref{thm:counterf_rev} holds beyond the specific scenarios we consider here. But it would fail if time constraints were imposed on the pricing strategies, for instance if a limit on the number of price changes occurring before a given date $t^*<1$ was set. \medskip Theorem \ref{thm:counterf_rev} is crucial in our context, where we have no information on purchase dates. In the absence of such information, there is no way to recover $b_T(.)$. Instead, we only have to recover the price elasticity $\varepsilon$ and the conditional distribution of the destination-train-specific effects $(\xi_{aT},\xi_{bT})$ to identify the counterfactual revenues. \medskip We do not specify here the exact forms of the counterfactual revenues, as they do not have closed forms. However, we can obtain them by induction, using the Bellman equations associated with the optimal strategies and solving some differential equations. In the incomplete information case, the gamma specification in Assumption \ref{hyp:gamma}(ii) is helpful for that purpose, as the gamma distribution is a conjugate prior for the Poisson likelihood. The induction formulas are given in Online Appendix \ref{app:counter_rev}. \medskip \cite{mcafee2008dynamic} obtain a result similar to Theorem \ref{thm:counterf_rev} for the ``full'' dynamic pricing strategy under complete information and a similar demand model. We extend their results in two directions. First, we consider other types of pricing strategies, and in particular possibly constrained stopping-time strategies, which are very common in practice and correspond to the actual revenue management. Second, we also show a similar result in an incomplete information set-up. \medskip Now, we face two main issues for recovering the demand parameters. First, demand is actually unobserved; only bounds on it can be obtained. Let $n_{dkT}$ denote the number of sales for train $T$, fare class $k\in \{1,...,K\}$ and destination $d\in\{a,b\}$.
Then $$D_{dT}(p_{dkT}) \geq D_{dT}(\tau_{k,T},\tau_{k+1,T};p_{dkT})= n_{dkT},$$ where $D_{dT}(p):=D_{dT}(0,1;p)$ denotes the total demand at price $p$ and $\tau_{k,T}$ is the (random) time at which the $k$th fare class opens, which we do not observe. Hence, without further assumptions, we only observe a crude lower bound on the total demand at price $p_{dkT}$. This point was already made in similar contexts by \cite{Swan_90}, \cite{Lee_90}, and \cite{Stefanescu_12}. \medskip The second issue we face is the absence of usual instruments for prices. Prices only vary within the grid specified by revenue managers, and to our knowledge, fare classes did not close for exogenous reasons unrelated to demand. In other words, there are no exogenous variations in prices in our context. The bottom line is that usual strategies to identify the demand function do not apply here. \medskip We now show that despite these limitations, it is possible, under Assumptions \ref{hyp:cons_demand}-\ref{hyp:yield_line}, to point or partially identify the parameters $(\theta_0,g_0(.))$, where $\theta_0:=(\varepsilon,\beta_0,\lambda_{a0},\lambda_{b0})$ and $\beta_0,\lambda_{a0},\lambda_{b0}$ and $g_0(.)$ are defined in Assumption \ref{hyp:gamma}. Then, in view of Theorem \ref{thm:counterf_rev}, we obtain bounds on the counterfactual revenues. We proceed in two steps hereafter, by first showing point identification of $\theta_0$ and then partial identification of $g_0(.)$. \subsection{Point identification of $\theta_0$}\label{sub:ident_demand} We first identify $\varepsilon$ by exploiting variations in the relative prices $p_{bkT}/p_{akT}$ between the two destinations and from one fare class to another. We start from $n_{dkT} = D_{dT}(\tau_k,\tau_{k+1};p_{dkT})$. For the sake of exposition, let us first assume that $\tau_k$ and $\tau_{k+1}$ are deterministic. Then, by Assumption \ref{hyp:cons_demand}, $D_{aT}(\tau_k,\tau_{k+1};p_{akT})$ and $D_{bT}(\tau_k,\tau_{k+1};p_{bkT})$ are independent conditional on $\xi_{aT}, \xi_{bT}$ and $\int_{\tau_k}^{\tau_{k+1}} b_T(u)du$. Moreover, they both follow Poisson distributions. As a result, \begin{equation}\label{eq:binomial} n_{bkT}|n_{akT}+n_{bkT}=n,\xi_{aT},\xi_{bT} \sim \text{Binomial}\left(n, \Lambda(\ln(\xi_{bT}/\xi_{aT}) -\varepsilon \ln(p_{bkT}/p_{akT}))\right), \end{equation} where $\Lambda(x)=1/(1+\exp(-x))$. The term $\ln(\xi_{bT}/\xi_{aT})$ may be seen as a train fixed effect. Hence, this model boils down to a fixed effect logit model, and $\varepsilon$ is identified as long as there are variations across fare classes $k$ in the relative prices $p_{bkT}/p_{akT}$. In the data, we do observe such variations. In Paris-Toulouse for instance, $p_{bkT}/p_{akT}$ varies from 1 for $k=1$ to 1.18 for $k=12$. Then, if we add Assumption \ref{hyp:gamma}(i), we can intuitively identify $\beta_0$ and $\lambda_0$ from the fact that we have a random effect logistic model. Note that $g_0(W_T)$ cancels out in the ratio $\xi_{bT}/\xi_{aT}$. In Section \ref{sec:id_Bt} below, we use further arguments to partially identify this function. \medskip To obtain \eqref{eq:binomial}, we assumed that the stopping times $(\tau_k)_{k=1,...,K}$ were fixed, which is unrealistic. Nonetheless, the following result shows that \eqref{eq:binomial}, and thus the identification of $\theta_0$, still hold provided that these stopping times satisfy Assumption \ref{hyp:yield_line}. \begin{thm}\label{thm:ident_xi_eps} Suppose that Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line} hold and that with positive probability, $k\mapsto p_{bkT}/p_{akT}$ is not constant.
Then, \begin{enumerate} \item Equation \eqref{eq:binomial} holds and $\varepsilon$ is point identified; \item If Assumptions \ref{hyp:gamma}(i) and \ref{hyp:supp_X} in the appendix further hold, $\beta_0$ and the distribution of $\eta_{bT}/\eta_{aT}$ are identified. \end{enumerate} \end{thm} Two remarks are in order. First, Equation \eqref{eq:binomial} does not hold for arbitrary random stopping times. We can easily build counterexamples by making $(\tau_k)_{k=1,...,K}$ depend solely on $N_{aT}(.)$, for instance. Such situations are however ruled out by Assumption \ref{hyp:yield_line}. Under this condition, intuitively, the stopping times will be independent of the proportion of consumers buying tickets for $a$ (versus $b$). Second, we actually prove the nonparametric identification of the distribution of $\eta_{bT}/\eta_{aT}$. This implies the identification of $\lambda_0$ under Assumption \ref{hyp:gamma}(ii). It also shows that imposing this latter condition is not necessary for identification. As mentioned above, it solely matters for the computation of counterfactual revenues. \medskip Beyond the identification of $\theta_0$, Equation \eqref{eq:binomial} can be used as a basis for testing some of the conditions we have imposed. First, the separability between $b_T(.)$ and $\xi_{dT}$ in Assumption \ref{hyp:cons_demand} implies that if $p_{bkT}=p_{akT}$ for several fare classes $k$, we should observe similar proportions $n_{bkT}/(n_{akT}+n_{bkT})$ for the corresponding $k$. Second, we can also test for the fact that price elasticities do not evolve over time, by considering more general specifications than \eqref{eq:binomial}. Third, we have imposed so far that the price elasticity is constant for all routes. We made this restriction for parsimony and consistency, because several routes share common origin-destination sections (e.g. Paris-Toulouse and Paris-C\^ote basque share the Paris-Bordeaux section). But we can allow for variations according to the day and hour of departure and according to groups of routes sharing the same sections. We consider all these extensions and robustness checks in Sections \ref{sub:demand_est} and \ref{sec:robustness_checks} below. \subsection{Partial identification of $g_0(W_T)$}\label{sec:id_Bt} To partially identify $g_0(W_T)$, which corresponds to the train-specific effect in $\xi_{dT}$, we build moment inequalities based on consumers' rationality (Assumption \ref{hyp:cons_demand}.1) and on the weak optimality of the actual revenue management (Assumption \ref{hyp:weak_opt}). \paragraph{Consumers' rationality} First, by Assumption \ref{hyp:cons_demand}.1, all consumers who bought a ticket for $d$ at price $p_{djT}$ for $j\geq k$ would have also bought it at price $p_{dkT}$. Therefore, for all $k=1,...,K$ and $d\in \{a,b\}$, $$D_{dT}(p_{dkT};g_0(W_T), X_{dT}) \geq \sum_{j=k}^K n_{djT},$$ where we now index the total demand $D_{dT}(p_{dkT})$ by $g_0(W_T)$ and $X_{dT}$. Let $C_T$ denote the capacity of train $T$. Then we also have $C_T \geq \sum_{j=k}^K n_{djT}$. Combining these inequalities and taking expectations conditional on $W_T$, we obtain, for all $k=1,...,K$ and $d\in \{a,b\}$, \begin{equation}\label{eq:moment_ineg1} \mathbb E\left[\sum_{j=k}^K n_{djT}-C_T \wedge D_{dT}(p_{dkT};g_0(W_T), X_{dT})\bigg|W_T\right]\leq 0. \end{equation} We assume hereafter that $X_{dT}$ is a deterministic function of $W_T$. This holds in our context, where $W_T$ includes route indicators and $X_{dT}$ includes time-invariant destination variables and interactions between such variables and $W_T$.
Then, the function $g\mapsto \mathbb{E}[C_T \wedge D_{dT}(p_{dkT};g, X_{dT})|W_T]$ is strictly increasing. Denoting by $Q_k^{-1}(.;W_T, \theta_0)$ its inverse, we get \begin{equation*} g_0(W_T)\geq Q^{-1}_k\left(\mathbb E\left[\sum_{j=k}^K n_{djT}\bigg|W_T\right];W_T, \theta_0\right). \end{equation*} Then, we obtain a lower bound for $g_0(W_T)$: \begin{equation}\label{eq:lower_bound} g_0(W_T)\geq g_0^L(W_T):=\max_{\substack{d=a,b \\k=1,...,K}}\left\{ Q^{-1}_k\left(\mathbb E\left[\sum_{j=k}^K n_{djT}\bigg|W_T\right];W_T,\theta_0\right)\right\}. \end{equation} While $Q_k^{-1}$ does not have a closed form, we can compute it easily through simulations. \paragraph{Weak optimality condition} We now rely on Assumption \ref{hyp:weak_opt} to form additional moment inequalities. To exploit them, note that under Assumptions \ref{hyp:cons_demand}-\ref{hyp:gamma}, we have (see Appendix \ref{proof_thm_limited_rev}, section ``uniform pricing'', for details) \begin{align} \mathbb{E}[R_T(p_a,p_b)|W_T] =&\max_{\substack{(C_{aT},C_{bT}):\\ C_{aT}+C_{bT}=C_T}}\bigg\{\sum_{d\in\{a,b\}}p_d\int_0^\infty \mathbb{E}\bigg[D(\exp\{X_{dT}'\beta_0 \}p_d^{-\varepsilon}g_0(W_T)z) \notag \\ &\hspace{5.2cm} \wedge C_{dT}|W_T\bigg] \times g_{\lambda_{d0},1}(z)dz\bigg\}, \label{eq:uniform_rev} \end{align} where $D(u)\sim \mathcal{P}(u)$, $g_{\lambda_{d0},1}$ is the density of a $\Gamma(\lambda_{d0},1)$, and $C_{dT}$ is the total number of seats allocated to destination $d$. As a result, $$R(g_0(W_T);X_{aT},X_{bT},\theta_0):=\max_{k=1,...,K}\mathbb E\left[{R_T(p_{akT},p_{bkT})}|W_T\right]$$ is an identified function. Note that again, our notation reflects that $(X_{aT},X_{bT})$ is a deterministic function of $W_T$. Hence, the weak optimality condition \eqref{eq:moment_ineg3} can be rewritten as \begin{equation} R(g_0(W_T);X_{aT},X_{bT},\theta_0)\leq \mathbb E\left[R^{\text{obs}}_T|W_T\right]. \label{eq:moment_ineg3bis} \end{equation} The function $R(.;X_{aT},X_{bT},\theta_0)$ is strictly increasing. Denoting by $R^{-1}(.;$ $ X_{aT},X_{bT},\theta_0)$ its inverse, we obtain the following upper bound for $g_0(W_T)$: \begin{equation} \label{eq:upper_bound} g_{0}(W_T)\leq g_0^U(W_T)= R^{-1}\left(\mathbb E\left[R^{\text{obs}}_T|W_T\right];X_{aT},X_{bT},\theta_0\right). \end{equation} \subsection{Partial identification of counterfactual revenues}\label{sub:id_counterf} As shown by Theorem \ref{thm:counterf_rev}, $R_r^I$ ($I\in\{c,i\}$) is a function of the distribution of $(\xi_{aT},\xi_{bT}, W_T)$ and of the price elasticity $\varepsilon$. Further, under Assumption \ref{hyp:gamma}, given $W_T$ and a pre-allocation $(C_{aT},C_{bT})$, $R_r^I$ has the following form (see Online Appendix \ref{app:counter_rev}):\footnote{The only exception is for revenues under uniform pricing with prices constrained to belong to the grid, for which the form is more complicated. Specifically, it corresponds to the maximum over the grid of the revenue displayed in \eqref{eq:uniform_rev}. Nonetheless, we can still simply obtain bounds on these revenues using the monotonicity of the right-hand side of \eqref{eq:uniform_rev} with respect to $g_0(W_T)$.} \begin{equation*} R_r^I(W_T,C_{aT},C_{bT})=\sum_{d=a,b}\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})\exp\{ X_{dT}'\beta_0/\varepsilon\}g_0(W_T)^{1/\varepsilon}, \end{equation*} for some non-random term $\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})$.
Then, using the bounds on $g_0(W_T)$ in \eqref{eq:lower_bound} and \eqref{eq:upper_bound}, we obtain lower and upper bounds for $R_r^I(W_T,C_{aT},C_{bT})$ as: \begin{equation}\label{eq:revenue_gamma} \left[g_0^L(W_T)^{1/\varepsilon},g_0^U(W_T)^{1/\varepsilon} \right]\times \sum_{d=a,b}\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})\exp\{ X_{dT}'\beta_0/\varepsilon\}. \end{equation} Bounds on $R_r^I$ then follow by averaging \eqref{eq:revenue_gamma} over trains: \begin{equation}\label{eq:revenue_gamma_ave} \mathbb{E}\left[\left[g_0^L(W_T)^{1/\varepsilon},g_0^U(W_T)^{1/\varepsilon} \right]\times \sum_{d=a,b}\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})\exp\{ X_{dT}'\beta_0/\varepsilon\}\right]. \end{equation} We also consider below ratios of counterfactual revenues. Given what precedes, such ratios $r_0$ satisfy $$r_0= \frac{\mathbb E[f_1(U_T) g_0(W_T)^{1/\varepsilon}]}{\mathbb E[f_2(U_T) g_0(W_T)^{1/\varepsilon}]},$$ for two identified, positive functions $f_1$ and $f_2$. Let $\mathcal{R}$ denote the identified set for $r_0$. Then one can show that $\mathcal{R}$ is an interval $[\underline{r}, \overline{r}]$, where $\overline{r}$ and $\underline{r}$ are defined as the unique solutions of \begin{align*} \mathbb E\left[g_0^L(W_T)^{1/\varepsilon} (f_1(U_T) - \overline{r} f_2(U_T)) + (g_0^U(W_T)^{1/\varepsilon} - g_0^L(W_T)^{1/\varepsilon})(f_1(U_T) - \overline{r} f_2(U_T))_+\right] & = 0, \\ \mathbb E\left[g_0^U(W_T)^{1/\varepsilon} (f_1(U_T) - \underline{r} f_2(U_T)) + (g_0^L(W_T)^{1/\varepsilon} - g_0^U(W_T)^{1/\varepsilon})(f_1(U_T) - \underline{r} f_2(U_T))_+\right] & = 0. \end{align*} \subsection{Estimation and inference} \label{sub:estimation_and_inference} We estimate $\theta_0$ as follows. Let $Y_{jkT}=1$ if seat $j$ in fare class $k$ for train $T$ is sold for destination $b$, and $Y_{jkT}=0$ otherwise. By \eqref{eq:binomial}, we have $$\Pr(Y_{jkT}=1|\xi_{aT},\xi_{bT}) = \Lambda\left(\ln(\xi_{bT}/\xi_{aT}) - \varepsilon \ln(p_{bkT}/p_{akT})\right),$$ and the $(Y_{jkT})_{j=1,...,n_{kT}}$ (with $n_{kT}:=n_{akT}+n_{bkT}$) are independent. Thus, we first estimate $\varepsilon$ and $\ln(\xi_{bT}/\xi_{aT})$ by maximizing the likelihood of a logit model including train fixed effects. Because the number of sales for each train is large (usually above 250), the bias related to the estimation of these fixed effects is expected to be negligible. Second, under Assumption \ref{hyp:gamma} (with the equality in (i) replaced by \eqref{eq:aggreg_cities} to account for multiple cities in each $d\in\{a,b\}$), $$\ln(\xi_{bT}/\xi_{aT})=\ln\left[\frac{\sum_{c \in b}\exp(X_{cT}'\beta_0)}{\sum_{c \in a}\exp(X_{cT}'\beta_0)}\right] +\ln\left(\frac{\eta_{bT}}{\eta_{aT}}\right), \quad \eta_{bT}/\eta_{aT} \perp \!\!\! \perp (X_{cT})_c.$$ Then, we estimate $\beta_0$ by nonlinear least squares, replacing $\ln(\xi_{bT}/\xi_{aT})$ by its estimator. Finally, we estimate $\lambda_0$ by maximum likelihood on the sample $(\widehat{\ln(\eta_{bT}/\eta_{aT})})_T$, with $$\widehat{\ln(\eta_{bT}/\eta_{aT})} = \widehat{\ln(\xi_{bT}/\xi_{aT})}-\ln\left[\frac{\sum_{c \in b}\exp(X_{cT}'\widehat{\beta})}{\sum_{c \in a}\exp(X_{cT}'\widehat{\beta})}\right].$$ In principle, we could directly estimate $\theta_0$ by maximum likelihood, as under Assumptions \ref{hyp:cons_demand}-\ref{hyp:gamma}, the distribution of $(Y_{jkT})_{j=1,...,n_{kT}, k=1,...,K}$ is fully parametric. We do not adopt this method for two reasons. First, the estimators of $\varepsilon$ and $\lambda_0$ would be sensitive to the parametric specification on $(\eta_{aT},\eta_{bT})$.
Second, the corresponding estimator is much more complicated to compute, which turns out to be important when considering inference based on the bootstrap. \medskip Next, we estimate the lower and upper bounds on $g_0(W_T)$ by the empirical counterparts of \eqref{eq:lower_bound} and \eqref{eq:upper_bound}, where the conditional expectations $\mathbb E(.|W_T)$ are replaced by empirical means (as $W_T$ is discrete in our specification, see below for its full description). Finally, we estimate bounds on $R_r^I$ by the empirical counterpart of \eqref{eq:revenue_gamma_ave}. \medskip As estimation involves multiple steps, we rely on the bootstrap for inference. We compute confidence intervals on counterfactual revenues with nominal level $1-\alpha$ as follows. The lower bound corresponds to the $\alpha/2$-th quantile of the bootstrapped lower bound in \eqref{eq:revenue_gamma_ave}, while the upper bound corresponds to the $1-\alpha/2$-th quantile of the bootstrapped upper bound in \eqref{eq:revenue_gamma_ave}. This ensures an asymptotic coverage of at least $1-\alpha$, whether the parameter is point or partially identified. \section{Results} \subsection{Demand estimation}\label{sub:demand_est} We first consider the estimation of the price elasticity ($-\varepsilon$), the coefficients of the destination-train-specific effects ($\beta_0$), and the parameters $\lambda_0$ of the gamma distribution. The variables we include in $W_T$ are route dummies, time dummies for the year and month of the train, whether it occurs during the weekend, on public holidays, on school holidays and whether the departure time is during rush hour. Regarding the variables $X_{dT}$ or, to be more precise, $X_{cT}$ where $c$ denotes a city (see our discussion around Equation \eqref{eq:aggreg_cities}), we include travel time to $c$ by train $T$, its square, city-specific effects $X_c$ (namely, the population of the urban area of $c$ and whether $c$ is a regional capital) and all interactions $X_{cj}\times W_{Tk}$ for all components $X_{cj}$ and $W_{Tk}$ of the vectors $X_c$ and $W_T$, respectively. \medskip The estimates of price elasticities are displayed in the top panel of Table \ref{tab:binomial}. In Column I (our baseline specification), we assume a constant price elasticity across routes and trains and obtain a price elasticity of $-4.04$. This estimate is larger (in absolute value) than those in the literature on the transportation industry. We refer for instance to the meta-analysis by \cite{Jevons_05} and the studies of \cite{Wardman_97}, \cite{Wardman_06} and \cite{Wardman_07}, which point to price elasticities in the range $[-2.2;-1.3]$. Unlike ours, most of these studies rely on aggregated data. This is likely to bias price-elasticity estimates towards zero, a point that we illustrate in Appendix \ref{app:aggregated} by running regressions based on our data aggregated at different levels. \medskip The middle panel of Table \ref{tab:binomial} reports the estimates of the components of $\beta_0$ corresponding to the travel time and city-specific effects. The effects of population size and travel time by train are as expected. Larger cities lead to higher demand and a longer travel time by train leads to a lower demand for train tickets. The effect of travel time may nonetheless be attenuated for long journeys, though the coefficient of the square of travel time is not significant.
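\medskip As an aside, the first step of the estimation procedure of Section \ref{sub:estimation_and_inference} (the fixed-effects logit behind the elasticity estimates just discussed) can be summarized by the following minimal sketch in Python. The data are simulated with purely hypothetical values; the code only illustrates the likelihood implied by \eqref{eq:binomial} and is not the implementation used for Table \ref{tab:binomial}.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)

# Simulated data: T trains, K fare classes, relative prices p_b/p_a varying across k.
T, K, eps_true = 200, 12, 4.0
log_rel_price = np.log(np.linspace(1.0, 1.18, K))   # ln(p_bkT / p_akT), common to all trains
fe_true = rng.normal(0.5, 0.3, size=T)              # train fixed effects ln(xi_bT / xi_aT)
n_tot = rng.poisson(250, size=(T, K))               # total sales per train and fare class
prob = 1 / (1 + np.exp(-(fe_true[:, None] - eps_true * log_rel_price[None, :])))
n_b = rng.binomial(n_tot, prob)                     # sales for the final destination b

# Negative log-likelihood of the binomial/logit model in Equation (eq:binomial),
# with one fixed effect per train and a common parameter eps (the price elasticity is -eps).
def nll(theta):
    eps, fe = theta[0], theta[1:]
    index = fe[:, None] - eps * log_rel_price[None, :]
    return -np.sum(n_b * index - n_tot * np.logaddexp(0.0, index))

fit = minimize(nll, np.zeros(1 + T), method="L-BFGS-B")
print(fit.x[0])   # estimate of eps, close to eps_true on this simulated sample
\end{verbatim}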
\medskip The bottom panel of Table \ref{tab:binomial} reports the estimates of the parameters $(\lambda_{a0},\lambda_{b0})$ of the gamma distribution. Intermediate destinations are estimated to have larger uncertainty in demand ($V(\eta_{dT})=\lambda_{d0}$ under the gamma specification), though the difference between the two is not statistically significant. \input{tables/Results_demand_new_v3} \medskip In Column II, we estimate the demand model by allowing the price elasticity to vary across routes and trains. We find that travelers of routes from Paris to the southwest of France (namely, the routes to C\^ote basque, Toulouse and Perpignan) are less price-sensitive than those of other routes. Travelers on weekends or national holidays have a smaller price elasticity (in absolute value) than those on other days. On the other hand, once we control for weekends and national holidays, individuals traveling during peak hours appear to have an elasticity similar to that of the others. \medskip For several routes, there are actually multiple intermediate or final destinations. If Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line} hold, Theorem \ref{thm:ident_xi_eps} implies that the joint distribution of the purchases for these multiple destinations, conditional on the total number of purchases on the train, is multinomial, rather than binomial as in the case where the purchases for the intermediate or final stops are aggregated. We re-estimate the demand models corresponding to Columns I and II using a multinomial model. The results are displayed in Columns III and IV, respectively. The resulting price elasticities are almost identical to those obtained before. The destination effects and estimates of $\lambda_0$ are also very similar. \subsection{Counterfactual revenues}\label{sec:counterfactuals} We now turn to the counterfactual revenues under different pricing strategies, namely uniform, stopping-time, and full dynamic pricing. For counterfactual revenues $R_r^{I}$ with $r\in\{u,f,s\}$ and $I\in\{c,i\} $, we simulate the revenue with the optimally pre-allocated numbers of available seats for the intermediate and final stops; for $R_r^{I}$ with $r\in\{sM,sM+\}$ and $I\in\{c,i\} $, i.e. stopping-time pricing strategies with $M$ (increasing) fares, we fix the pre-allocated number of available seats for the intermediate stop $a$, $C_{aT}$, to be equal to the average number of seats sold for $a$ among all the trains operated on the given route. We do this, rather than finding the optimal value of $C_{aT}$, for computational reasons. Moreover, for the other pricing strategies ($r\not\in\{sM,sM+\}$), the revenues obtained this way secure at least 99\% of the revenue based on the optimal pre-allocation, so we expect very little effect of considering this specific pre-allocation. \medskip Table \ref{tab:supply_main} summarizes the set estimates of counterfactual revenues, averaged over all routes, based on Column I in Table \ref{tab:binomial} -- we discuss the results based on Column II in Section \ref{sec:robustness_checks} below. When possible, we indicate the 95\% confidence intervals on the set.\footnote{Computing the set estimates of counterfactual revenues can be costly, as it involves the terms $\alpha^I_r$, which are only defined by induction and thus can take time to be obtained. For instance, computing the set estimates corresponding to Line s.3 takes us 77 hours.
For 500 bootstrap replications executed on 10 cores, this would mean 160 days of computational time.} Below, we organize our discussion of the results along different themes. \input{tables/counter_rev_main_set_estimates} \paragraph{How does the actual strategy compare to counterfactual pricing strategies?} Recall that by Assumption \ref{hyp:weak_opt}, the actual strategy is supposed to be better than any uniform pricing strategy under incomplete information and with prices constrained to belong to the price grid. The gains are however moderate: they range between 0\% and 9.5\%. Moreover, we cannot exclude that the actual strategy actually performs worse than the same uniform pricing strategy but with unconstrained prices (see Scenario u.2). In any case, the gains would be at most 8.1\%. When turning to the most constrained dynamic pricing strategy, namely two fare classes and increasing prices, we observe a loss in revenue ranging between 3.6\% and 12.2\%. When we consider the same constraints as in the actual pricing strategy, namely 12 fare classes and increasing prices, we estimate a loss of between 6.8\% and 15.1\%. \medskip Because of the fixed pre-allocations for destinations $a$ and $b$, the revenues in Table \ref{tab:supply_main} are just lower bounds on the true, optimal revenues, which further reinforces our conclusions above. To get a sense of the quantitative effect of these pre-allocations, we simulate counterfactual revenues under unconstrained uniform pricing without pre-allocating capacities among intermediate and final destinations. The corresponding formulas are in Appendices \ref{ssub:proofs} and \ref{proof_thm_limited_rev}, see the sections ``uniform pricing'' therein. In the complete information set-up, we obtain a set estimate of $[13.45, 14.68]$, corresponding to an increase of between 1.4\% and 1.8\% compared to Scenario u.4. In the incomplete information set-up, we obtain a higher gain of around 6\%, with a set estimate of $[11.96, 14.68]$. This 6\% is likely an upper bound on the possible gains from not imposing any pre-allocation, as one would expect the effects of pre-allocation to be more easily mitigated with more flexible pricing strategies. \medskip How can we explain the suboptimality of the actual strategy, in particular compared to the optimal strategies under similar pricing constraints? First, the initial seat allocation plan determined by the CRS may sometimes be far away from the optimal allocation under complete information. Revenue managers may then fail to adjust this initial allocation sufficiently. Second, in our counterfactuals, we have considered that revenue managers knew the true $\varepsilon$, the true effects of the covariates, and the true $b_T(.)$. This may not be the case in reality. In any case, our results emphasize the importance of not imposing strong optimality conditions on the supply side. \paragraph{Does it matter to have a fixed price grid?} We look at this question by comparing the revenues obtained under optimal uniform strategies with prices either chosen optimally on $[0,\infty)$ or only within the actual price grid of the train under consideration. The effect of the grid is higher in the complete information set-up, with the gain from an unconstrained optimization roughly ranging between 4\% and 5.3\%.\footnote{These are approximations obtained by dividing the corresponding lower bounds and upper bounds.
The exact bounds on the ratios are hard to obtain because the revenues under uniform pricing and constrained prices do not take a simple form. The approximation we use works well for other ratios, for which we can compute the exact bounds.} This is basically because demand is very high or very low for a few trains, in which case one would like to set a price above the maximal price, or below the minimal price, of the grid. On the other hand, fixing the price grid has very small effects on revenues under incomplete information, with gains of between 0.8\% and 1.3\%. \paragraph{Does it pay to use more complex pricing strategies?} The answer to that question very much depends on the information set-up. In the complete information case, the answer is basically ``no'': the difference in revenue between uniform pricing with unconstrained prices and full dynamic pricing is only around 2.0\%.\footnote{This empirical finding is consistent with simulation results in operational research and empirical results in economics. For example, \cite{zhao2000optimal} shows a similar improvement of between 2.4\% and 7.3\%. \cite{williams2021welfare} estimates a revenue improvement due to optimal dynamic pricing of around 2\% in the airline industry.} This figure sharply contrasts with the 19.3\% gain we estimate under incomplete information by comparing Scenarios f.1 and u.2. \medskip Intuitively, dynamic pricing still helps in the complete information case because of the uncertainty in the demand process. But the possibility of adjusting the pricing strategy as one learns about $(\xi_{aT},\xi_{bT})$ (or, equivalently, $(\eta_{aT},\eta_{bT})$) in the incomplete information set-up plays a much more important role. To shed light on this point, we decompose the variance of the demand under the optimal uniform pricing in incomplete information into two parts: \begin{align*} \mathbb E\left[\mathbb V(D_{dT}(0,1;p^u_{dT})|W_T) \right]= & \mathbb{E}[\mathbb V(D_{dT}(0,1;p^u_{dT})|\xi_{dT})] +\mathbb E\left[\mathbb V(\mathbb{E}[D_{dT}(0,1;p^u_{dT})|\xi_{dT}]|W_T)\right], \end{align*} where $p^u_{dT}$ is the optimal price under uniform pricing for destination $d$ and train $T$. Even though they both involve $g_0(W_T)$, one can show that the two terms in this decomposition are point identified. For intermediate and final destinations respectively, the variation of the demand process (the first term) only explains on average $1.3\%$ and $0.9\%$ of the total variance. \medskip Now, even in the incomplete information set-up, one need not consider complex pricing strategies to obtain revenues close to the optimal ones. First, restricting to stopping-time pricing strategies incurs virtually no loss, compared to ``full'' dynamic pricing. By changing prices only when a purchase is observed, the firm can secure around 99.8\% of the revenue gain from uniform pricing to dynamic pricing regimes (comparing here Scenarios s.5 and f.1). Considering pricing strategies with 12 fare classes, as in reality but with possibly decreasing prices, still yields revenues of between 98.7\% and 99.1\% of the revenues under full dynamic pricing (comparing here Scenarios s.4 and f.1). \paragraph{How fast does information accumulate?} \medskip First, the tiny difference between the gains of full dynamic pricing under complete and incomplete information shows that revenue management is an effective instrument for demand learning. By learning from consumers' purchases in a Bayesian way, the firm can gradually resolve the uncertainty on the overall demand.
Pricing decisions then take this updated information into account, improving total revenue. This demand learning can actually compensate for almost all of the revenue loss due to ex ante uncertainty on demand. The difference in revenue under optimal uniform pricing between incomplete and complete information is around 2K\EUR{} (comparing Scenarios u.4 and u.2), while this difference decreases to around 0.03K\EUR{} only under optimal dynamic pricing (see Scenarios f.2 and f.1). This finding is in line with \cite{lin2006dynamic}, who reports a similar near-optimality of demand learning in a simulation study. \medskip The reason for this very modest loss compared to the complete information set-up is that information accumulates quickly. To illustrate this point, we simulate expected revenues under a class of intermediate stopping-time pricing strategies, where the firm is only allowed to dynamically price the first $K\%$ of seats, turning to uniform pricing for the remaining seats. Thus, $K=0$ and $K=100$ correspond respectively to the optimal uniform and stopping-time pricing strategies.\footnote{As with the pricing strategies in Theorem \ref{thm:counterf_rev}, we show in Appendix \ref{app:counter_rev} that one can partially identify the optimal revenues with intermediate stopping-time pricing strategies using the procedure described in Section \ref{sub:id_counterf}.} By quantifying the revenue gain from $K$ to $K+1$, we can characterize how much can be marginally gained from being able to extract information on demand from additional purchases ($1\%$ of total seats) and to optimally adjust the pricing. Figure \ref{fig:intermediate} displays the lower bounds of the optimal revenues under these intermediate pricing strategies under complete (blue) and incomplete (red) information for $K=1,...,100$.\footnote{We also simulate the upper bounds of these revenues. The obtained curve is very similar.} \begin{center} \begin{figure}[H] \caption{Revenues (lower bound) under intermediate pricing strategies}\label{fig:intermediate} \vspace{-0.45cm} \includegraphics[width=0.9\textwidth]{figures/intermediate_Pricing.pdf} \vspace{-0.4cm} \end{figure} \end{center} \vspace{-1cm} Under incomplete information, demand learning is rather quick, as we can see from the pronounced concavity of the red line. With just $K=5$, the firm already achieves a revenue equal to the observed one; by learning from $50\%$ of the seats, it obtains a revenue around $3\%$ lower than that under complete information. On the other hand, the blue line shows that the revenue gains under complete information are small. The incremental revenue from $K$ to $K+1$ is almost constant and barely reaches 3\EUR{}. This latter result could be expected, given that the difference between uniform pricing and the full stopping-time pricing is small under complete information. The striking difference in the pattern of marginal gains between the complete and incomplete information settings is also in line with our previous findings: in terms of revenue improvement by dynamic pricing, the effect of learning the overall demand $(\xi_{aT},\xi_{bT})$ is more pronounced than that of pinning down the uncertainty in the demand process when $(\xi_{aT},\xi_{bT})$ is fixed. \subsection{Tests and robustness checks}\label{sec:robustness_checks} In this section, we first test the plausibility of Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line}, on which the identification of $\theta_0$ relies. Next, we relax the assumption of a time-invariant price elasticity.
Then, we consider alternative parametric specifications. Finally, we explore the effect of specific routes on our results. \paragraph{Test of Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line}.} These assumptions imply that the proportions $n_{bkT}/(n_{akT}+n_{bkT})$ remain constant across fare classes $k$ satisfying $p_{bkT}=p_{akT}$. A convenient way to check this is to restrict ourselves to two routes, Paris-Marseille and Paris-Mulhouse, for which $p_{bkT}=p_{akT}$ for all $k\in\{1,...,K\}$. By taking the first fare class as a reference, we simply regress $n_{bkT}/(n_{akT}+n_{bkT})$ on the other 11 fare class dummies and train fixed effects. We then test whether the coefficients of the fare class dummies are equal to zero. \medskip The results are presented in Table \ref{tab:binomialTest}. As emphasized by the top panel, most coefficients are not significant, despite the large number of observations ($453$ and $499$ for the two routes). For Paris-Marseille, the p-value of the joint test is larger than 0.05. For Paris-Mulhouse, the p-value is lower, but it appears that this result is mostly driven by the last fare classes (the joint test for nullity of the first 10 classes has a p-value of 15\%). The coefficients of the last two fare classes are indeed positive and quite large for this route, indicating that there would be more ``late purchasers'' for Mulhouse than for Strasbourg. \medskip To see whether this pattern could influence our results beyond this specific route, we re-estimate $\varepsilon$ using only the first 10 fare classes. We obtain a price elasticity of $-4.86$, which is thus somewhat higher in absolute value than the baseline estimate of $-4.04$ obtained with the 12 fare classes. We then recompute the identified sets of counterfactual revenues for Scenarios u.2, u.4, s.5, s.10, f.1 and f.2 (as they are the simplest to compute). The optimal revenues are slightly higher but with differences never exceeding 3.3\% on the lower bounds and 1.2\% on the upper bounds. \input{tables/table_separability_check} \vspace{-0.5cm} \paragraph{Time-varying price elasticities.} One could expect consumers purchasing their tickets earlier to be more price elastic than those buying their tickets late. For instance, the latter could include more business travelers. If so, the assumption of a time-invariant price elasticity would be violated. To test this condition, we replace $\varepsilon$ in \eqref{eq:binomial} by $\varepsilon_{\text{early}} 1\{k\leq S \} + \varepsilon_{\text{late}} 1\{k> S \}$ for some threshold $S$ that we vary. In other words, we distinguish the price elasticity of early purchasers, defined as those who purchase a ticket in a fare class lower than or equal to $S$, from that of late purchasers, who buy in a fare class higher than $S$. We then compare $\varepsilon_{\text{early}}$ to $\varepsilon_{\text{late}}$ to assess the extent to which the assumption of a time-invariant price elasticity holds, and the impact of relaxing this condition on counterfactual revenues. \medskip The results are displayed in Table \ref{tab:check_constancy_time}. We consider threshold values $S$ equal to $9$, $10$ and $11$.
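\medskip
As an illustration of this specification, the sketch below simulates fare-class-level sales from the binomial model \eqref{eq:binomial} with two elasticity regimes and re-estimates $(\varepsilon_{\text{early}},\varepsilon_{\text{late}})$ by a binomial regression with train dummies. All names and numbers are hypothetical, and this dummy-variable regression is only a simplified stand-in for the estimation procedure described in Section \ref{sub:estimation_and_inference}.
\begin{verbatim}
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
S = 10                                  # threshold between early and late classes
rows = []
for t in range(200):                    # hypothetical trains
    fe = rng.normal(0.0, 0.5)           # train fixed effect ln(xi_b / xi_a)
    for k in range(1, 13):
        ratio = 1.0 + 0.015 * (k - 1)   # p_b / p_a increases with the fare class
        eps_k = 5.0 if k <= S else 3.5  # elasticities used to simulate the data
        prob_b = 1.0 / (1.0 + np.exp(-(fe - eps_k * np.log(ratio))))
        n = rng.poisson(25) + 1         # sales in this fare class
        n_b = rng.binomial(n, prob_b)
        rows.append((t, k, ratio, n - n_b, n_b))
df = pd.DataFrame(rows, columns=["train_id", "k", "ratio", "n_a", "n_b"])

# Regressors: -ln(p_b/p_a) interacted with the early/late indicator
df["x_early"] = -np.log(df["ratio"]) * (df["k"] <= S)
df["x_late"] = -np.log(df["ratio"]) * (df["k"] > S)
dummies = pd.get_dummies(df["train_id"], prefix="T",
                         drop_first=True, dtype=float)
X = sm.add_constant(pd.concat([df[["x_early", "x_late"]], dummies], axis=1))

# Each row is a Binomial(n, Lambda(.)) observation: successes = sales for b
res = sm.GLM(df[["n_b", "n_a"]].to_numpy(), X,
             family=sm.families.Binomial()).fit()
print(res.params[["x_early", "x_late"]])  # estimates of eps_early and eps_late
\end{verbatim}
With this simulated sample, the two estimated coefficients should approximately recover the elasticities used to generate the data.
\medskip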
In the three cases, ``early purchasers'' are estimated to be more price elastic than ``late purchasers'', with the estimated price elasticity of the former being greater (in absolute value) than the baseline estimate in Table \ref{tab:binomial} ($-4.04$) but still close to the upper bound of its 95\% confidence interval. \input{tables/table_constant_elas_check} \vspace{-0.3cm} Next, to assess how relaxing the time-invariant elasticity condition affects counterfactual revenues, we simulate some scenarios by explicitly considering early and late purchasers (with $\varepsilon_{\text{early}}$ and $\varepsilon_{\text{late}}$, respectively) in the demand model. Specifically, we set $S=10$, re-estimate the demand using the procedure described in Sections \ref{sub:ident_demand} and \ref{sec:id_Bt} with $\varepsilon_{\text{early}}$ and $\varepsilon_{\text{late}}$, and compute the revenue with the optimal uniform pricing under complete information (Scenario u.4 in Table \ref{tab:supply_main}). We find that the estimates of the destination-train-specific effects, $\beta_0$, and of the parameters of the gamma distribution, $\lambda_0$, are close to the baseline results. Furthermore, the simulated revenue is close to that of Scenario u.4 in Table \ref{tab:supply_main}. Both findings suggest that despite the difference in the price elasticities of early and late purchasers, the results are robust to the assumption of a time-invariant price elasticity. We refer to the Online Appendix \ref{app:constant_elas_details} for more details on the estimation method and the results. \paragraph{Alternative parametric specifications.} We conduct two robustness checks. First, we simulate counterfactual revenues with a lognormal specification on $\eta_{dT}$ instead of a gamma distribution (Assumption \ref{hyp:gamma}(ii)). The drawback of a lognormal specification is that it is not conjugate with the Poisson distribution. As a result, the updated distribution of $\eta_{dT}$ in the incomplete information set-up takes a complicated form, making it very difficult to compute counterfactual scenarios. Nevertheless, this issue does not appear for uniform pricing and complete information. Table \ref{tab:supply2_benforroni} shows the results for Scenarios u.2, u.4, s.10 and f.2. Although the bounds are wider than in the baseline specification, the results are similar. \medskip \input{tables/robustness_check_lognormal_v2} \vspace{-0.3cm} Second, we have focused so far on the demand model corresponding to Column I in Table \ref{tab:binomial}. We did so because counterfactual revenues are harder to compute under the richer specification corresponding to Column II in the same table. Nevertheless, we were able to compute counterfactual revenues with this specification for a few scenarios. The results, presented in Table \ref{tab:rev_column2}, are hardly affected. \paragraph{Effect of specific routes.} Table \ref{DES2} shows that the routes to Marseille and C\^ote basque have unusually high and low loads, respectively, so one may worry that revenue management was very different for these lines. We resimulate the counterfactual revenues by excluding these two routes. The results are hardly affected, with changes in the bounds of at most 1\% over all scenarios. \input{tables/counter_rev_main_column2_v2} \vspace{-1cm} \section{Conclusion} Though the framework we have developed is tailored to our application, several of our results could be applied to other set-ups.
The insight that many counterfactual revenues only depend on the price elasticity and total demand, and not on the precise timing of consumers' arrival, is convenient when no details on the dates of purchases are available. Similarly, the censoring issue and the absence of exogenous variations in prices may often occur. Our identification strategy, combining exogenous variations in relative prices with moment inequalities based on basic rationality on the consumers' side and weak optimality conditions on the firm's pricing strategy, could then be applied in such contexts. Our results also suggest that such moment inequalities may be quite informative in practice. \newpage \section{Introduction} Revenue management, namely adjusting supply to the random demand for perishable goods, is an old practice, which has increased in importance with the rise of e-commerce \citep{boyd2003revenue}. Adjusting prices in a flexible way is likely to increase firms' revenues, but it also comes at some cost, as it requires specialized teams and good algorithms. This continuous updating is also a complex exercise, so simple rules are usually set to simplify the pricing strategy. These rules may be suboptimal. This paper identifies how much firms can expect to gain in revenue by adopting flexible strategies rather than uniform pricing. We also quantify the magnitude of the losses of actual strategies compared to the optimal ones, under various constraints imposed on such strategies. Finally, by varying these constraints and the assumptions behind the counterfactuals, we identify the main sources of the gains or losses. \medskip We address these questions by studying revenue management at iDTGV, a subsidiary of the French railway monopoly, SNCF. From 2004 to 2017, this firm provided low-cost trains from Paris to several towns in France, and the corresponding return trains. Its revenue management was based on quantities, as is often the case in companies selling perishable goods (e.g. flight tickets, hotel rooms, rented cars for given periods, etc.).\footnote{For a detailed review of revenue management techniques, see \cite{Talluri_vanRyzin_05}.} Namely, for the economy class on which we focus here, 12 classes of prices, called fare classes hereafter, were defined. These 12 prices were sorted in ascending order and, for a given trip (e.g. Paris-Bordeaux), remained almost constant over the period we study. For each train, revenue managers could decide, at any moment before the departure and depending on the demand, to close the current fare class and open the next one, thus increasing the price of the seats. We investigate hereafter the relative benefits of this popular pricing strategy compared to uniform pricing, or to alternative, more flexible strategies. \medskip In order to compute such counterfactuals, we first show that in our context, recovering the price elasticity coefficient, the relative demand parameters (of, e.g. Bordeaux versus Toulouse in Paris-Toulouse trains) and the total demand for a given train at a given price is sufficient to identify a rich set of counterfactual revenues. In particular, the timing of consumers' arrival is not necessary to identify counterfactual revenues. This is convenient because such information is often unobserved, as in our case. We can compute revenues not only under uniform pricing, but also under optimal dynamic pricing, with any number of fare classes.
Importantly, we are also able to compute such counterfactuals assuming that iDTGV has either complete or incomplete information on a given train's demand. \medskip The identification of the price elasticity, relative demand parameters and total demand at a given price is however complicated by two issues that are likely to arise in many markets of perishable goods. First, and as already observed by \cite{Swan_90}, \cite{Lee_90} and~\cite{Stefanescu_12}, we face a severe censoring problem: demand at a given price is generally larger than the number of seats sold at that price. Second, prices vary only within the grids of 12 prices corresponding to each of the 12 fare classes. Hence, we cannot rely on usual instruments such as cost shifters. \medskip To identify the price elasticity, we rely on a new argument, tailored to our application, that may apply to other contexts as well. Specifically, we exploit the fact that revenue management is done at the route level (e.g. Paris-Toulouse), while the train serves several cities (e.g. Bordeaux and Toulouse). This means that fare classes close at the same time for all destinations within the same route. Relative prices between, e.g., Bordeaux and Toulouse then vary whenever a fare class closes.\footnote{A similar strategy could be used for, e.g., hotels, if the prices of rooms of different qualities change simultaneously.} We prove that the price elasticity can be identified by relating variations in relative prices to the proportion of consumers buying tickets for one destination versus the other. Specifically, identification can be achieved under the assumption that price elasticities and the proportion of consumers seeking to buy a ticket for one destination versus the other remain constant over time. We can test both conditions empirically, and the results suggest that they are reasonable in our context. \medskip The identification of the distribution, over the different trains, of the total demand at a given price is also difficult, in particular because of the censoring problem mentioned above. We first show that basic conditions on the rationality of consumers deliver inequalities relating this total demand to the number of seats sold. We complement these inequalities with weak optimality conditions on the observed revenue management. Specifically, we assume that this revenue management was better, on average, than a uniform pricing strategy performed under incomplete information and using prices from the observed grid of prices. Given our very purpose, it is important here not to impose overly strong optimality conditions, such as optimality vis-\`a-vis all dynamic strategies, as these conditions would very much drive our results. Also, our conditions have the advantage of being relatively simple to exploit for identification and estimation purposes. In the end, these conditions stemming from demand and supply can be combined to form a set of moment inequalities. Though they rely on weak restrictions, these moment inequalities are sufficient to produce informative bounds on most counterfactual revenues. \medskip We obtain the following key findings. First, we estimate a price elasticity of about $-4$, which is below the range of most estimates in the transportation literature \citep[see, e.g.][for a meta-analysis]{Jevons_05}.
However, we show in Appendix \ref{app:aggregated} that using aggregated quantities and prices to estimate the price elasticity, as done by most of these studies, produces estimates that are substantially biased towards zero. Second, our results suggest that the observed revenue management practice was effective but still sub-optimal. The observed revenue management generated a gain of up to 8.1\% compared to the optimal uniform pricing in an incomplete information set-up. However, we also estimate, under the same informational set-up, a loss of at least 6.8\% and up to 15.1\% compared to the optimal pricing strategy under the same restriction of 12 ascending fare classes as those actually used. Actually, we estimate that simple strategies, such as 12 (not necessarily ascending) fare classes, already secure almost 99\% of the revenue of the fully unconstrained optimal pricing strategy. \medskip Lastly, we emphasize the key role of demand uncertainty on revenues, and how revenue management can mitigate it. Revenues from a uniform pricing strategy are 17.2\% higher when moving from an incomplete to a complete information set-up. But the informational gains are much smaller (0.22\%) when considering fully flexible pricing strategies. In other words, implementing the optimal dynamic pricing strategy mitigates almost entirely the loss entailed by demand uncertainty. The reason is that information accumulates quickly: by observing and learning from the sales of half of the available seats, the firm can already secure more than 97\% of the revenue under complete information. \medskip \paragraph*{Related Literature.} Our paper relates to several theoretical and empirical papers in operational research and economics. The theoretical literature on revenue management has investigated optimal quantity-based revenue management, where firms segment demand by choosing, either once and for all or dynamically, the allocation of, say, seats into fare classes in which prices are predetermined. We refer in particular to \cite{Littlewood_72} and \cite{Brumelle_McGill_93} for static solutions, and to \cite{gallego1994optimal}, \cite{Feng_Gallego_95}, \cite{Feng_Xiao_00}, \cite{aviv2002pricing} for dynamic solutions. These last papers have studied optimal pricing strategies assuming that consumers arrive according to a homogeneous Poisson process. \medskip In our paper, we assume that consumers arrive according to a flexible, non-homogeneous Poisson process, as in \cite{bitran1997periodic}, \cite{zhao2000optimal}, and \cite{mcafee2008dynamic}. Our demand model is closest to \cite{mcafee2008dynamic}, but with one key difference. Whereas they assume that the firm has complete information on the demand parameters, we also consider an incomplete information set-up where only the distribution of these parameters is known. The firm then updates this distribution as consumers arrive. Such an incomplete information set-up seems more plausible when, as here, aggregate demand may vary substantially from one train to another. We also generalize \cite{mcafee2008dynamic} by studying constrained pricing strategies close to those implemented in practice. We refer to Online Appendix \ref{app:counter_rev} for details on the resolution of the corresponding Bellman equations. \medskip Our results underline the important role of information and demand learning in explaining the gains and losses of revenue management.
Such a point has already been made in the theoretical literature but, to our knowledge, we are the first to quantify these roles using real data.\footnote{Different from the revenue management we consider, \cite{huang2020learning} study how firms' static pricing in the US liquor market can be improved by learning market conditions from realized sales.} \cite{lin2006dynamic} studies models similar to ours in his Sections 5.1 and 5.2 and allows for the firm's Bayesian learning from the observed purchases or arrivals. Instead of deriving the optimal policy, his paper focuses on a specific policy (variable-rate), which is shown to be nearly optimal in simulations. \cite{aviv2002pricing} derives the optimal policy assuming an unknown constant arrival rate of consumers and simulates the loss due to incomplete information. By contrast, we allow for heterogeneous arrival rates, and study other practically relevant pricing strategies as well. Finally, in contrast to all these papers and ours, \cite{den2015dynamic} consider another form of learning by the firm, based on maximizing the likelihood of the data at its disposal. We refer to \cite{den2015dynamicsingle} for a complete survey on demand learning in dynamic pricing. \medskip In the empirical literature on revenue management, the closest papers to ours are \cite{lazarev2013welfare} and \cite{williams2021welfare}, both of which study dynamic airline pricing in a monopolistic market.\footnote{Another recent empirical paper is \cite{cho2018optimal}, which studies revenue management under oligopoly in the hospitality industry. Their analysis focuses on the pricing behavior of ``hotel 0'' (from which the demand data is obtained) in a competing environment.} While both papers focus on price discrimination and its welfare effects, the main goal of our paper is to quantify the potential gains and losses due to revenue management in practice. As a result, contrary to their models, ours explicitly incorporates the firm's learning behavior from the realized demand. Moreover, we do not impose strong optimality conditions on the observed prices.\footnote{See also \cite{cho2018optimal,cho2019semi} for recent examples that identify demand without imposing strong optimality conditions.} On the other hand, while \cite{lazarev2013welfare} allows travelers to be forward-looking, we abstract from any strategic considerations by consumers here, following \cite{williams2021welfare} and the operations research literature. The rationale is that in our context, and contrary to what happens in the airline industry, prices always increase. So, at least in the absence of uncertainty on the opportunity of the journey, consumers have no incentive to wait. \medskip The rest of the paper is organized as follows. In Section 2, we present the context and our data. Section 3 displays the demand model and our assumptions on the supply side. Section 4 is devoted to the identification and estimation of demand under our assumptions and given the data at our disposal. Section 5 presents the results. The appendix gathers the proofs of our identification results and the estimation with aggregated data. The Online Appendix displays the formulas for the counterfactual revenues, additional details on some robustness checks and additional proofs.
\section{Institutional Background and Data} \subsection{Revenue Management at iDTGV in 2007-2009}\label{sub:revenue management} iDTGV was a low-cost subsidiary of the French railway monopoly, SNCF, which was created in 2004 and disappeared in December 2017.\footnote{Its disappearance was due to internal strategic considerations at SNCF. It was basically replaced by Ouigo, the new low-cost service at SNCF.} It owned its trains and had a pricing strategy independent from SNCF. Prices were generally lower than the full-rate prices of SNCF, but were also associated with a slightly lower quality of services. Namely, tickets could only be bought on the Internet, they were nominative and could not be cancelled. On top of that, they could be exchanged only under some conditions and at some cost. \medskip The routes of iDTGV were all between Paris and other towns. For each of those towns and every day, one train was leaving Paris and another coming to Paris. Table \ref{DES1} presents the routes we observe in our data from May 2007 to March 2009. These routes have several stops, but to simplify the analysis, we gather them so as to form a single intermediate stop and a single final stop. We aggregate the cities according to the price schedule. For instance, we group Aix-en-Provence and Avignon together in the Paris-Marseille route since the corresponding prices are always the same. This aggregation is consistent with Assumption \ref{hyp:cons_demand} below, as our demand model remains valid after aggregation of cities. \medskip Different routes may share the same intermediate destination. For instance, Bordeaux is the intermediate destination of Paris-C\^ote basque and Paris-Toulouse. Importantly, no tickets were sold between the intermediate and the final destination, e.g. no Bordeaux-Toulouse tickets were sold on the Paris-Toulouse route. Our understanding is that this was done to avoid having to check tickets at intermediate destinations, as there were no ticket inspectors on the trains. \begin{center} \begin{threeparttable} \vspace{-1cm} \caption{\small Routes with intermediate and final destinations}\label{DES1} \begin{tabular}{lllc} \toprule Route name &Final stop(s) &Intermediate stop(s) &Nb. of trains \\ \midrule C\^ote d'Azur &Cannes, Saint-Rapha\"el, Nice & Avignon & 452 \\ Marseille &Marseille &Aix-en-Provence/Avignon & 453 \\ Perpignan & Perpignan & N\^imes, Montpellier & 689 \\ \multirow{2}{*}{C\^ote basque} &St Jean de Luz, Bayonne,& \multirow{2}{*}{Bordeaux} & \multirow{2}{*}{405} \\ & Biarritz, Hendaye & & \\ Toulouse & Toulouse & Bordeaux &411 \\ Mulhouse & Mulhouse &Strasbourg &499 \\ \midrule Total & & & 2,909 \\ \bottomrule \end{tabular} \begin{tablenotes} \footnotesize \item \emph{Notes:} We have a different number of observations for the different routes because the period we cover varies slightly from one route to another. \end{tablenotes} \end{threeparttable} \end{center} \medskip The trains are split into economy class and business class cars of fixed sizes. Revenue management was implemented almost independently between the two classes, i.e. under the sole constraint that prices in economy class are always lower than in business class. This constraint was very seldom binding in practice, so we ignore it hereafter. We focus on the economy class, which represents roughly $73\%$ of the seats. In this category, there are 12 fare classes corresponding to 12 prices sorted in ascending order.
The price of a given fare class, at a peak time or off-peak and for a given origin-destination trip (e.g. Paris-Bordeaux), remained constant for several months (e.g. from 03/01/2007 to 10/31/2007) before being adjusted marginally, mostly to account for inflation. Contrary to SNCF, iDTGV did not make any third-degree price discrimination, so there were no discounts for young people, elderly people or families. \medskip In this context, revenue management consists in deciding in real time whether to maintain the current fare class or to close it and move to the next one, resulting in a price increase. Coming back to a previous fare is impossible; thus, there are no last-minute drops in ticket prices for trains that still have several empty seats. Also, revenue managers could decide to never open the first fare classes and to start selling tickets directly in a higher fare class. Symmetrically, the last fare class may never be reached. In practice, revenue management was operated through a Computerized Reservation System (CRS). Before the beginning of sales, it fixes a seat allocation plan for all fare classes, using the history of purchases on past trains. During sales, the CRS uses the number of tickets sold so far to make recommendations on the size of subsequent fare classes. Revenue managers can nevertheless always intervene, both on the initial and on subsequent seat allocations, according to their experience on past trains.\footnote{Manager intervention in automated revenue management also exists in other industries, e.g. the hospitality industry \citep{cho2018optimal}.} \medskip Finally, and crucially for our identification strategy, the revenue management did not use separate fare classes for a given train with several destinations. For instance, in a Paris-Toulouse train, the closure of the first fare class occurred exactly at the same moment for both Bordeaux and Toulouse. Hence, price changes of Paris-Bordeaux and Paris-Toulouse tickets happened exactly at the same time, for all trains. According to discussions with people in the revenue management department, this was to limit the number of decisions to be taken at each moment. \subsection{Data and descriptive statistics} We have data on iDTGV trains between May 2007 and March 2009 in economy class and for journeys from Paris to the rest of France. We first observe basic characteristics of the trains: all the stops, departure and arrival time, day of departure (e.g. May 2, 2008) and whether it corresponds to a peak time or not. We also observe the price grid used for that train for each fare class. For each route and type of period (peak time or off-peak), there are a limited number of such grids, as the firm changed these grids only a few times over the period we observe (e.g. 3 times for Paris-Toulouse). We also observe the sales in each fare class for all trains. On the other hand, we do not observe the purchasing dates, nor the opening moments of each fare class. For a given route, capacity is defined as the maximal number $n$ such that for at least three trains, $n$ seats were sold.\footnote{We use this definition (rather than the maximal number of seats sold across all trains of a given route) to take into account rare cases of overbooked trains. With this definition, we observe 5 cases of overbooking, over the $2,909$ trains of our dataset.
Note that capacity can be assumed to be fixed for a given route because the number of coaches in economy class is fixed.} \medskip Table \ref{DES2} presents some descriptive statistics on our data. We observe a substantial amount of price dispersion within trains. For instance, on the C\^ote d'Azur line, the average maximal price (68.4\EUR{}) was about three and a half times higher than the average minimal price paid by consumers over the different trains (19.3\EUR{}). We also observe substantial variations in the average load across routes. While trains in Paris-Marseille were always nearly full, with an average load above 95\%, this was far from being the case on the C\^ote basque line, with an average load of only 65.4\%. This suggests that the actual pricing may not be fully optimal, at least for some routes. \bigskip \begin{center} \begin{threeparttable} \vspace{-1cm} \caption{\small Descriptive statistics, economy class, from Paris} \label{DES2} \begin{tabular}{lccccccc} \toprule & & Avg & \% final & \multicolumn{4}{c}{Prices} \\ Route &Capacity & Load & dest. & Avg & Avg min. & Avg max. & max/min \\ \midrule C\^ote d'Azur &324 & 85.4\% & 81.5\% & 50.3 & 19.3 & 68.4& 3.54\\ Marseille &324 & 95.5\% & 60.0\% &49.5 & 19.0 & 70.5 & 3.71\\ Perpignan &324 & 88.6\% & 27.4\% & 50.0 & 20.2 & 72.6 &3.59\\ C\^ote basque &350 & 65.4\% & 64.1\% & 37.3 & 19.7 & 53.3 &2.71\\ Toulouse &350 &87.3\% & 55.3\% & 43.6 & 19.4 & 67.2&3.46 \\ Mulhouse &238 &79.4\% & 24.1\% & 35.0 & 19.4 & 50.0 &2.58\\ \bottomrule \end{tabular} \begin{tablenotes} \scriptsize \item Notes: Prices are in \EUR{}. Avg min. and max. are the average of the minimal and maximal prices charged for each train, for the final destination. max/min is the ratio between the two previous columns. \end{tablenotes} \end{threeparttable} \end{center} \section{Theoretical model and parameters of interest}\label{sec:mode} \subsection{Demand side}\label{sec:demand_model} We consider a demand model close to that of \cite{mcafee2008dynamic}. A train $T$ is defined by its route $r(T)$ (e.g. Paris-Toulouse) and its day of departure (e.g. May 2, 2008). For each route $r$, we denote by $a_r$ the intermediate destination and by $b_r$ the final destination. To simplify notation and in the absence of ambiguity, we just denote the destinations of a train $T$ by $a$ and $b$ instead of $a_{r(T)}$ and $b_{r(T)}$. For any train $T$, tickets are sold between the normalized dates $t=0$ and $t=1$. We denote the fare classes by $k\in \{1,...,K\}$. Within fare class $k$, tickets for train $T$ and destination $d\in \{a_{r(T)},b_{r(T)}\}$ are sold at price $p_{dkT}$. We recall that $p_{dkT}$ belongs to a grid of $K$ prices that remains fixed for several months and depends only on the destination $d$ and whether the train leaves at a peak time or not. Finally, we denote by $V_{dT}(A, B)$ the number of consumers arriving during the subset $A$ of the time interval $[0,1]$ and with a valuation belonging to the subset $B$ of $[0,\infty)$. Similarly, let $D_{dT}(t,t';p_d)$ denote the demand for destination $d$ in train $T$ between dates $t$ and $t'$ (with $(t,t')\in[0,1]^2$) when the price is constant and equal to $p_d$. Then $D_{dT}(t,t';p_d)= V_{dT}([t,t'),[p_d,\infty))$. We then assume the following condition.
\begin{hyp} (Consumers' demand) For all $T$ and $d\in \{a,b\}$, there exist $\varepsilon>1$ and a random process $b_T(.)$ on $[0,1]$, continuous and satisfying $\min_{u\in[0,1]} b_T(u)>0$ almost surely, such that conditional on $\xi_{dT}$ and $b_T(.)$: \begin{enumerate} \item $V_{dT}$ is a Poisson process with intensity $I_{dT}(t,p) = \xi_{dT} b_T(t) \varepsilon p^{-1-\varepsilon}$ for $(t,p)\in [0,1]\times [0,\infty)$. Without loss of generality, we let $\int_0^1b_T(u)du=1$. \item $V_{aT}$ and $V_{bT}$ are independent. \end{enumerate} \label{hyp:cons_demand} \end{hyp} The term $\xi_{dT}$ captures train-destination specific overall demand shocks. For instance, demand to Cannes may increase a lot during the Cannes Film Festival. The term $b_T(t)$ describes the pattern of consumers' arrival time for train $T$. We do not make any restriction hereafter on this function, nor do we impose it to be constant from one train to another. On the other hand, we impose that the intensity of $V_{dT}$ takes a multiplicative form. This form has three implications. First, we assume that the arrival processes of consumers for destinations $a$ and $b$ have the same shape, as they only differ by a multiplicative destination-train-specific constant $\xi_{dT}$. This condition can be tested, an important point to which we return in Section \ref{sub:ident_demand} below. Second, we impose a specific functional dependence in $p$, of the form $p^{-1-\varepsilon}$. This particular form is not essential for our identification strategy. We do have to impose a parametric form, on the other hand, given that prices only take a few different values. When restricted to $[p_0,\infty)$ for any $p_0>0$, the intensity we consider corresponds to consumers' valuations following a Pareto distribution with parameter $\varepsilon$. \medskip Finally, by considering a multiplicative form ($I_{dT}(t,p) \propto b_T(t) \times p^{-1-\varepsilon}$), we assume that the valuation of consumers does not evolve over time. In particular, Assumption \ref{hyp:cons_demand} implies that the demand for destination $d$ on the time interval $[t_1,t_2]$ satisfies $$D_{dT}(t_1,t_2;p) \,|\, \xi_{dT}, b_T(.)\sim \mathcal{P}\left(\xi_{dT} p^{-\varepsilon} \int_{t_1}^{t_2} b_T(u) du \right).$$ Thus, as in \cite{mcafee2008dynamic}, we assume that the price elasticity does not evolve over time. This assumption could be relaxed with more detailed data. We believe it is reasonable in our context, where purchasers of the economy class tickets of these trains are already quite homogeneous. Nonetheless, we test it and consider an extended model allowing for time-varying elasticities in Section \ref{sec:robustness_checks} below. \medskip Assumption \ref{hyp:cons_demand} together with a supply-side restriction (Assumption \ref{hyp:yield_line}) turns out to be sufficient to identify $\varepsilon$, see Point 1 of Theorem \ref{thm:ident_xi_eps} below. To further identify the distribution of $(\xi_{aT},\xi_{bT})$, we consider the next assumption. Hereafter, $W_T$ denotes a vector of observed characteristics of train $T$ (e.g., whether the train operates at a peak time or not) and $X_{dT}$ denotes a vector of observed characteristics of destination $d$ served by train $T$, e.g., the travel time from Paris to $d$ by train $T$. \begin{hyp}\label{hyp:gamma} For $d\in \{a,b \}$, $\xi_{dT}$ satisfies: \begin{itemize} \item[(i).] $\xi_{dT}=\exp\{X'_{dT}\beta_0\}g_0(W_T)\eta_{dT}$ where $\eta_{aT}$, $\eta_{bT}$ and $(X_{aT},X_{bT},W_T)$ are independent. \item[(ii).]
$\eta_{dT}\sim\Gamma(\lambda_{d0},1)$. \end{itemize} \end{hyp} Assumption \ref{hyp:gamma}(i) specifies $\xi_{dT}$ as the product of a function of $X_{dT}$, $g_0(W_T)$, and a remainder term $\eta_{dT}$. It restricts $\xi_{aT}$ and $\xi_{bT}$ to be dependent only through the observed variables $(X_{aT},X_{bT},W_T)$, rather than through $(\eta_{aT},\eta_{bT})$. This is plausible as long as one includes sufficient controls in $(X_{aT},X_{bT},W_T)$. Importantly, we leave the function $g_0(.)$, which determines how train-specific characteristics affect the demand, unrestricted. Assumption \ref{hyp:gamma}(ii) imposes that, conditional on $X_{dT}$ and $g_0(W_T)$, $\xi_{dT}$ follows a gamma distribution. Since we include a $d$-specific constant term in $X_{dT}$, we can normalize the scale parameter of the gamma distribution to $1$. As detailed below, the assumption of a gamma distribution does not matter for identification. It is rather made for computational reasons: the fact that the gamma distribution is conjugate to the Poisson distribution makes it possible to simplify the computation of counterfactual revenues under incomplete information, see Online Appendix \ref{app:counter_rev} for more details. We also consider log-normality as a robustness check below, though without computing all counterfactual revenues in that case. \medskip Note that in (i), we have made the simplifying assumption that trains only have two destinations, an intermediate one $a$ and a final one $b$. But recall from Table \ref{DES1} that most of them serve more than just two cities, so $a$ or $b$ may actually correspond to more than one city. If so, we modify (i) by assuming that \begin{equation} \xi_{dT}=\left[\sum_{c\in d} \exp(X'_{cT}\beta_0)\right]g_0(W_T)\eta_{dT}, \label{eq:aggreg_cities} \end{equation} where $c$ is an index for cities belonging to either $a$ or $b$. For instance, in a train to C\^ote d'Azur, $c$ corresponds to Avignon for destination $a$, whereas it includes Cannes, Saint-Rapha\"el and Nice for destination $b$, see again Table \ref{DES1}.\footnote{Note, on the other hand, that all cities $c\in d$ are priced equally, so we do not need to take into account price variations between cities.} \subsection{Supply side} We now formalize the features of revenue management already discussed in Section \ref{sub:revenue management}. First, recall that the revenue management is operated at a route level (e.g. Paris-Toulouse) rather than for each destination of this route (e.g. Paris-Bordeaux and Paris-Toulouse for the route Paris-Toulouse). We thus make the following assumption. \begin{hyp}\label{hyp:yield_line} (Revenue management at the route level) The opening time of fare class $k \in \{1,...,K\}$, $\tau_k$, is a stopping time with respect to the process $t \mapsto N_{aT}(t)+N_{bT}(t)$, where $N_{dT}(t)$ is the number of purchases for $d$ made before $t$. \end{hyp} Assumption \ref{hyp:yield_line} states that the decision to open a new fare class depends only on past total purchases, rather than on their split between purchases for $a$ and for $b$. Such an assumption is fully in line with the fact that a single fare class is used for the two destinations of each route. It was also confirmed by discussions we had with the revenue management department. \medskip Our second assumption on the supply side is a weak optimality condition for the firm. To introduce it, let $R_T(p_a,p_b)$ denote the maximal revenue for train $T$ under a uniform pricing of $(p_a, p_b)$ for destinations $a$ and $b$ respectively.
This maximal revenue is obtained by considering the optimal quotas $C_{aT}$ and $C_{bT}$ of tickets sold for destinations $a$ and $b$ respectively, with $C_{aT}+C_{bT}=C_T$, the total capacity of train $T$ (the exact formula of $R_T(p_a,p_b)$ is displayed in Equation \eqref{eq:uniform_rev} below). Let also $p_{dkT}$ denote the price in train $T$ and fare class $k\in \{1,...,K\}$ for destination $d\in\{a,b\}$. The weak optimality condition we consider is the following: \begin{hyp}\label{hyp:weak_opt} (Weak optimality of actual revenue management) We have \begin{equation}\label{eq:moment_ineg3} \max_{k=1,...,K}\mathbb E\left[{R_T(p_{akT},p_{bkT})}|W_T\right]\leq \mathbb E\left[R^{\text{obs}}_T|W_T\right]. \end{equation} \end{hyp} By conditioning on $W_T$, which only includes coarse proxies of the true demand, we allow for the possibility that revenue managers use limited information for their pricing strategy. In reality, it seems credible that they have access to additional signals on the true demand for a specific train. For instance, they could use the past number of purchases in each fare class on previous years for the same exact train. If so, we would expect that Inequalities \eqref{eq:moment_ineg3} also hold conditional on this information. \medskip Importantly, Assumption \ref{hyp:weak_opt} does not imply that the revenue management performs better than the optimal uniform pricing, because we only impose that observed revenues exceed those of any uniform pricing strategy constrained to the grid of the 12 predetermined prices. In other words, we simply assume that observed revenues are on average higher than those one would have obtained by sticking from $t=0$ to $t=1$ to one of the fare classes. \medskip Moreover, we do not impose any optimality with respect to all dynamic strategies. We refrain from doing so for several reasons. First, such an assumption would conflict with our very objective to quantify the gains or losses of the actual revenue management, compared to alternative scenarios. By definition, assuming a strong form of optimality would mechanically result in gains against most simpler pricing strategies. Second, and related to the first point, it seems very restrictive in our setting to assume that the optimal dynamic strategy was adopted. As discussed in Section \ref{sub:revenue management}, the revenue management applied simplified rules (increasing fares from 12 predetermined fare classes), which can at best approach the optimal solution. Moreover, seat allocation decisions were also subject to the managers' manual interventions, which could be a source of suboptimality.\footnote{See \cite{cho2018optimal,cho2019semi,phillips2021pricing} for evidence of suboptimality due to human management.} Further, computing the optimal dynamic strategy under the simplified rules is still a very complicated dynamic programming problem. While \cite{Feng_Xiao_00} have proposed an algorithm for computing the solution for a homogeneous Poisson process, little has been done so far for the non-homogeneous case, to our knowledge. Finally, given that iDTGV was only created in 2004, it is doubtful that it perfectly knew the demand parameters, and in particular all the destination-train-specific effects $\xi_{dT}$. \subsection{Parameters of interest} \label{sub:parameters_of_interest} We aim to compare the current revenues with several counterfactual revenues, depending on the type of revenue management and the information the firm has access to.
We consider several possible pricing strategies, from the most basic to the most sophisticated ones. The first, uniform pricing, simply corresponds to fixing the price of each route in a given train once and for all. We let $R_u$ denote the optimal counterfactual revenues, averaged over all trains, under this pricing regime. At the other extreme, in ``full'' dynamic pricing, prices can be changed at any time. $R_f$ then corresponds to the optimal counterfactual revenues in this set-up. We also study pricing strategies, called stopping-time strategies hereafter, where prices can be changed only after a ticket is sold. The corresponding optimal revenues are then denoted by $R_s$. Finally, we consider constrained stopping-time strategies close to what was implemented in practice, by assuming that only $M$ fares, or $M$ increasing fares, are allowed. The corresponding optimal revenues are denoted by $R_{sM}$ and $R_{sM+}$, respectively. To compute these counterfactual revenues, we maintain Assumption \ref{hyp:cons_demand}. This means that for pricing strategies where prices are allowed to decrease, we rule out any anticipation of such price decreases by consumers. \medskip Hereafter, we consider two scenarios in terms of information available to the revenue managers. \begin{enumerate} \item (Complete information) Revenue managers fully know the expected demand for each train. Thus, they observe $\varepsilon$, $b_T(.)$, $\xi_{aT}$ and $\xi_{bT}$ for each train $T$; \item (Incomplete information) Revenue managers observe $\varepsilon$, $b_T(.)$ and $(X_{aT},X_{bT},W_T)$, but only the conditional distribution $f_{\xi_{aT},\xi_{bT}|X_{aT},X_{bT},W_T}$ rather than $(\xi_{aT},\xi_{bT})$ themselves. As time goes by, revenue managers update their information on $(\xi_{aT},\xi_{bT})$ according to Bayes' rule. \end{enumerate} The complete information case should be seen as a benchmark. It is useful in particular to quantify the value of information and contrast the gains of revenue management in complete and incomplete information set-ups. The case of incomplete information is probably more realistic. In this scenario, revenue managers know, for each train, the pattern of consumers' arrival over time ($b_T(.)$) but do not know exactly the aggregate demand for each destination ($\xi_{aT}$ and $\xi_{bT}$). The assumption that $b_T(.)$ is known makes sense especially if $b_T(.)$ does not depend on $T$, in which case revenue managers may have learned from previous trains how consumers arrive through time. \medskip If the scenario of incomplete information holds in practice, the differences between the counterfactual revenues and the observed ones can be interpreted as the potential gains or losses of the optimal revenue management under different constraints compared to the actual ones. Hereafter, we use the superscripts $c$ and $i$ to denote the two information set-ups. Hence, $R_u^c$ denotes for instance the counterfactual optimal revenue under uniform pricing and complete information. \medskip In all the counterfactual scenarios, we consider a separate pricing strategy for destinations $a$ and $b$, contrary to the actual practice. On the other hand, for computational reasons, we cannot consider the fully optimal pricing strategies for the two destinations. Without further restrictions, the state space is large: the optimal strategy at any time $t$ depends on the remaining seats for both destinations. To reduce this state space, we fix ex ante the total number of seats available for stop $a$ ($C_{aT}$, say) and thus for $b$ ($C_{bT}=C_T-C_{aT}$, with $C_T$ the total number of seats in train $T$).
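\medskip
The sketch below illustrates this pre-allocation step in the simplest case, uniform pricing under complete information: for each candidate split $(C_{a},C_T-C_{a})$, the expected revenue of each destination is computed separately and the best split is retained. The values of $\varepsilon$, $\xi_{aT}$, $\xi_{bT}$, the capacity and the price grid are purely illustrative; in the incomplete information case, one would in addition integrate over the gamma distribution of $\eta_{dT}$, as in \eqref{eq:uniform_rev}.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

eps = 4.0
xi_a, xi_b = 2.5e8, 5.0e8            # illustrative xi_dT (complete information)
C_T = 324                            # illustrative total capacity
price_grid = np.linspace(20.0, 80.0, 61)

def expected_sales(mu, cap):
    """E[min(D, cap)] for D ~ Poisson(mu)."""
    k = np.arange(cap)
    return np.sum(k * poisson.pmf(k, mu)) + cap * poisson.sf(cap - 1, mu)

def best_uniform_revenue(xi, cap):
    """Optimal uniform-pricing revenue for one destination with capacity cap."""
    return max(p * expected_sales(xi * p ** (-eps), cap) for p in price_grid)

# Choose C_a by maximizing the sum of the two separate one-destination problems
revenues = [best_uniform_revenue(xi_a, c) + best_uniform_revenue(xi_b, C_T - c)
            for c in range(C_T + 1)]
C_a_star = int(np.argmax(revenues))
print("pre-allocation:", C_a_star, C_T - C_a_star, "revenue:", max(revenues))
\end{verbatim}
\medskip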
Then, depending on the scenario we consider, we either choose the optimal pre-allocation $C_{aT}$ or fix it so that $C_{aT}$ matches the observed average sales for $a$. In any case, fixing $C_{aT}$ allows us to solve the optimization problem separably for each destination (given the independence of $\eta_{aT}$ and $\eta_{bT}$ imposed in Assumption \ref{hyp:gamma}(i)), rather than jointly. This greatly reduces the computational burden of the optimization problem. For this reason, our results below may be seen as lower bounds on the fully optimal counterfactual revenues. This actually reinforces some of our conclusions below. Also, we compare in Section \ref{sec:robustness_checks} below the revenues under uniform pricing with and without pre-allocation, and do not find important differences between the two. \section{Identification and estimation} In this section, we first clarify which parameters of the demand function are needed to recover the average revenues under the counterfactual scenarios described above. We also describe challenges for the identification of these parameters. Next, we show how the price elasticity and relative demand effects can be identified. We then describe the partial identification of the distribution of train effects and of counterfactual revenues. Finally, we show how to perform inference on the parameters of interest. The asymptotic framework we consider below is obtained by letting the number of trains tend to infinity (recall that in our application, we observe 2,909 trains). \subsection{A first result and challenges}\label{sub:param} The following theorem clarifies which parameters of the demand are required to identify all the counterfactual revenues we consider. \begin{thm}\label{thm:counterf_rev} Suppose that Assumptions \ref{hyp:cons_demand}-\ref{hyp:gamma} hold. Then, $R^{I}_{r}$ is a function of the distribution of $(\xi_{aT},\xi_{bT}, X_{aT}, X_{bT}, W_T)$ and $\varepsilon$, for $I\in\{c,i\}$ and $r\in\{u,f,s,sM,sM+\}$. \end{thm} We obtain the result by constructing new Poisson demand processes $\widetilde{V}_{aT}$ and $\widetilde{V}_{bT}$ with the same parameters as the true ones $V_{aT}$ and $V_{bT}$, except that they are homogeneous: $b_T(.)$ is replaced by $\widetilde{b}_T(.)=1$. We prove that the optimal revenues are the same for these new demand processes as for the original ones. This shows that the optimal revenues depend on $I_{dT}(.,.)$ only through $\xi_{aT}$ and $\xi_{bT}$: it does not matter whether consumers arrive early or late, as long as, on average, the same number of consumers eventually arrive. The result holds because, basically, all the constraints on pricing we consider are independent of time. In this sense, Theorem \ref{thm:counterf_rev} holds beyond the specific scenarios we consider here. But it would fail if time constraints were imposed on the pricing strategies, for instance if a limit on the number of price changes occurring before a given date $t^*<1$ were set. \medskip Theorem \ref{thm:counterf_rev} is crucial in our context with no information on purchasing dates. In the absence of such information, there is no way to recover $b_T(.)$. Instead, we only have to recover the price elasticity $\varepsilon$ and the conditional distribution of the destination-train-specific effects $(\xi_{aT},\xi_{bT})$ to identify the counterfactual revenues. \medskip We do not specify here the exact forms of the counterfactual revenues, as they do not have closed forms.
However, we can obtain them by induction, using the Bellman equations associated with the optimal strategies and solving some differential equations. In the incomplete information case, the gamma specification in Assumption \ref{hyp:gamma}(ii) is helpful for that purpose, as the gamma distribution is a conjugate prior for the Poisson likelihood. The induction formulas are given in Online Appendix \ref{app:counter_rev}. \medskip \cite{mcafee2008dynamic} obtains a result similar to Theorem \ref{thm:counterf_rev} for the ``full'' dynamic pricing strategy under complete information and a similar demand model. We extend their results in two directions. First, we consider other types of pricing strategies, and in particular possibly constrained stopping-time strategies, which are very common in practice and correspond to the actual revenue management. Second, we also show a similar result in an incomplete information set-up. \medskip Now, we face two main issues for recovering the demand parameters. First, demand is actually unobserved; only bounds on it can be obtained. Let $n_{dkT}$ denote the number of sales for train $T$, fare class $k\in \{1,...,K\}$ and destination $d\in\{a,b\}$. Then $$D_{dT}(p_{dkT}) \geq D_{dT}(\tau_{k,T},\tau_{k+1,T};p_{dkT})= n_{dkT},$$ where $\tau_{k,T}$ is the (random) time at which the $k$th fare class opens, which we do not observe. Hence, without further assumptions, we only observe a crude lower bound on the total demand at price $p_{dkT}$. This point was already made in similar contexts by \cite{Swan_90}, \cite{Lee_90}, and \cite{Stefanescu_12}. \medskip The second issue we face is the absence of usual instruments for prices. Prices only vary within the grid specified by revenue managers, and to our knowledge, fare classes did not close for exogenous reasons unrelated to demand. In other words, there are no exogenous variations in prices in our context. The bottom line is that usual strategies to identify the demand function do not apply here. \medskip We now show that despite these limitations, it is possible, under Assumptions \ref{hyp:cons_demand}-\ref{hyp:yield_line}, to point or partially identify the parameters $(\theta_0,g_0(.))$, where $\theta_0:=(\varepsilon,\beta_0,\lambda_{a0},\lambda_{b0})$ and $\beta_0,\lambda_{a0},\lambda_{b0}$ and $g_0(.)$ are defined in Assumption \ref{hyp:gamma}. Then, in view of Theorem \ref{thm:counterf_rev}, we obtain bounds on the counterfactual revenues. We proceed in two steps hereafter, by first showing point identification of $\theta_0$ and then partial identification of $g_0(.)$. \subsection{Point identification of $\theta_0$}\label{sub:ident_demand} We first identify $\varepsilon$ by exploiting variations in the relative prices $p_{bkT}/p_{akT}$ between the two destinations and from one fare class to another. We start from $n_{dkT} = D_{dT}(\tau_k,\tau_{k+1};p_{dkT})$. For the sake of exposition, let us first assume that $\tau_k$ and $\tau_{k+1}$ are deterministic. Then, by Assumption \ref{hyp:cons_demand}, $D_{aT}(\tau_k,\tau_{k+1};p_{akT})$ and $D_{bT}(\tau_k,\tau_{k+1};p_{bkT})$ are independent conditional on $\xi_{aT}, \xi_{bT}$ and $\int_{\tau_k}^{\tau_{k+1}} b_T(u)du$. Moreover, they both follow Poisson distributions. As a result, \begin{equation}\label{eq:binomial} n_{bkT}|n_{akT}+n_{bkT}=n,\xi_{aT},\xi_{bT} \sim \text{Binomial}\left(n, \Lambda(\ln(\xi_{bT}/\xi_{aT}) -\varepsilon \ln(p_{bkT}/p_{akT}))\right), \end{equation} where $\Lambda(x)=1/(1+\exp(-x))$.
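\medskip
The Poisson-splitting argument behind \eqref{eq:binomial} is easy to verify numerically; in the sketch below, the values of $\xi_{aT}$, $\xi_{bT}$, $\varepsilon$ and of the price ratio are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
eps, xi_a, xi_b = 4.0, 120.0, 200.0    # arbitrary illustrative values
p_a, p_b = 1.0, 1.1                    # normalized prices within one fare class

lam_a = xi_a * p_a ** (-eps)           # Poisson means of the two demands
lam_b = xi_b * p_b ** (-eps)
n_a = rng.poisson(lam_a, size=500_000)
n_b = rng.poisson(lam_b, size=500_000)

# Conditional on the total, the b-share should be Binomial with probability
# Lambda( ln(xi_b/xi_a) - eps * ln(p_b/p_a) )
keep = (n_a + n_b) == 250              # condition on one value of the total
emp = n_b[keep].mean() / 250
theory = 1.0 / (1.0 + np.exp(-(np.log(xi_b / xi_a) - eps * np.log(p_b / p_a))))
print(emp, theory)                     # the two probabilities should be close
\end{verbatim}
\medskip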
The term $\ln(\xi_{bT}/\xi_{aT})$ may be seen as a train fixed effect. Hence, this model boils down to a fixed-effect logit model, and $\varepsilon$ is identified as long as there are variations across fare classes $k$ in the relative prices $p_{bkT}/p_{akT}$. In the data, we do observe such variations. In Paris-Toulouse for instance, $p_{bkT}/p_{akT}$ varies from 1 for $k=1$ to 1.18 for $k=12$. Then, if we add Assumption \ref{hyp:gamma}(i), we can intuitively identify $\beta_0$ and $\lambda_0$ from the fact that we have a random-effects logistic model. Note that $g_0(W_T)$ cancels out in the ratio $\xi_{bT}/\xi_{aT}$. In Section \ref{sec:id_Bt} below, we use further arguments to partially identify this function. \medskip To obtain \eqref{eq:binomial}, we assumed that the stopping times $(\tau_k)_{k=1,...,K}$ were fixed, which is unrealistic. Nonetheless, the following result shows that \eqref{eq:binomial}, and hence the identification of $\theta_0$, still holds provided that these stopping times satisfy Assumption \ref{hyp:yield_line}. \begin{thm}\label{thm:ident_xi_eps} Suppose that Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line} hold and that with positive probability, $k\mapsto p_{bkT}/p_{akT}$ is not constant. Then, \begin{enumerate} \item Equation \eqref{eq:binomial} holds and $\varepsilon$ is point identified; \item If Assumptions \ref{hyp:gamma}(i) and \ref{hyp:supp_X} in the appendix further hold, $\beta_0$ and the distribution of $\eta_{bT}/\eta_{aT}$ are identified. \end{enumerate} \end{thm} Two remarks are in order. First, Equation \eqref{eq:binomial} does not hold for arbitrary random stopping times. We can easily build counterexamples by making $(\tau_k)_{k=1,...,K}$ depend solely on $N_{aT}(.)$, for instance. Such situations are however ruled out by Assumption \ref{hyp:yield_line}. Under this condition, intuitively, the stopping times will be independent of the proportion of consumers buying tickets for $a$ (versus $b$). Second, we actually prove the nonparametric identification of the distribution of $\eta_{bT}/\eta_{aT}$. This implies the identification of $\lambda_0$ under Assumption \ref{hyp:gamma}(ii). It also shows that imposing this latter condition is not necessary for identification. As mentioned above, it solely matters for the computation of counterfactual revenues. \medskip Beyond the identification of $\theta_0$, Equation \eqref{eq:binomial} can serve as a basis for testing some of the conditions we have imposed. First, the separability between $b_T(.)$ and $\xi_{dT}$ in Assumption \ref{hyp:cons_demand} implies that if $p_{bkT}=p_{akT}$ for several fare classes $k$, we should observe similar proportions $n_{bkT}/(n_{akT}+n_{bkT})$ for the corresponding $k$. Second, we can also test whether price elasticities evolve over time, by considering more general specifications than \eqref{eq:binomial}. Third, we have imposed so far that the price elasticity is constant across all routes. We made this restriction for parsimony and consistency, because several routes share common origin-destination sections (e.g. Paris-Toulouse and Paris-C\^ote basque share the Paris-Bordeaux section). But we can allow for variations according to the day and hour of departure and according to groups of routes sharing the same sections. We consider all these extensions and robustness checks in Sections \ref{sub:demand_est} and \ref{sec:robustness_checks} below.
\subsection{Partial identification of $g_0(W_T)$}\label{sec:id_Bt} To partially identify $g_0(W_T)$, which corresponds to the train-specific effect in $\xi_{dT}$, we build moment inequalities based on consumers' rationality (Assumption \ref{hyp:cons_demand}.1) and weak optimality of the actual revenue management (Assumption \ref{hyp:weak_opt}). \paragraph{Consumers' rationality} First, by Assumption \ref{hyp:cons_demand}.1, all consumers who bought a ticket for $d$ at price $p_{djT}$ for $j\geq k$ would have also bought it at price $p_{dkT}$. Therefore, for all $k=1,...,K$ and $d\in \{a,b\}$, $$D_{dT}(p_{dkT};g_0(W_T), X_{dT}) \geq \sum_{j=k}^K n_{djT},$$ where we now index total demand $D_{dT}(p_{dk})$ by $g_0(W_T)$ and $X_{dT}$. Let $C_T$ denote the capacity of train $T$. Then we also have $C_T \geq \sum_{j=k}^K n_{djT}$. Combining these inequalities and integrating conditional on $W_T$, we obtain, for all $k=1,...,K$ and $d\in \{a,b\}$, \begin{equation}\label{eq:moment_ineg1} \mathbb E\left[\sum_{j=k}^K n_{djT}-C_T \wedge D_{dT}(p_{dkT};g_0(W_T), X_{dT})\bigg|W_T\right]\leq 0. \end{equation} We assume hereafter that $X_{dT}$ is a deterministic function of $W_T$. This holds in our context where $W_T$ includes indicator of routes and $X_{dT}$ includes time-invariant destination variables and interactions between such variables and $W_T$. Then, the function $g\mapsto \mathbb{E}[C_T \wedge D_{dT}(p_{dk};g, X_{dT})|W_T]$ is strictly increasing. Denoting by $Q_k^{-1}(.;W_T, \theta_0)$ its inverse, we get \begin{equation*} g_0(W_T)\geq Q^{-1}_k\left(\mathbb E\left[\sum_{j=k}^K n_{djT}\bigg|W_T\right];W_T, \theta_0\right). \end{equation*} Then, we obtain a lower bound for $g_0(W_T)$: \begin{equation}\label{eq:lower_bound} g_0(W_T)\geq g_0^L(W_T):=\max_{\substack{d=a,b \\k=1,...,12}}\left\{ Q^{-1}_k\left(\mathbb E\left[\sum_{j=k}^K n_{djT}\bigg|W_T\right];W_T,\theta_0\right)\right\}. \end{equation} While $Q_k^{-1}$ does not have a closed form, we can compute it easily through simulations. \paragraph{Weak optimality condition} We now rely on Assumption \ref{hyp:weak_opt} to form additional moment inequalities. To exploit them, note that under Assumptions \ref{hyp:cons_demand}-\ref{hyp:gamma}, we have (see Appendix \ref{proof_thm_limited_rev}, section ``uniform pricing'', for details) \begin{align} \mathbb{E}[R_T(p_a,p_b)|W_T] =&\max_{\substack{(C_{aT},C_{bT}):\\ C_{aT}+C_{bT}=C_T}}\bigg\{\sum_{d\in\{a,b\}}p_d\int_0^\infty \mathbb{E}\bigg[D(\exp\{X_{dT}'\beta_0 \}p_d^{-\varepsilon}g_0(W_T)z) \notag \\ &\hspace{5.2cm} \wedge C_{dT}|W_T\bigg] \times g_{\lambda_{d0},1}(z)dz\bigg\}, \label{eq:uniform_rev} \end{align} where $D(u)\sim \mathcal{P}(u)$, $g_{\lambda_{d0},1}$ is the density of a $\Gamma(\lambda_{d0},1)$, and $C_{dT}$ is the total number of seats allocated to destination $d$. As a result, $$R(g_0(W_T);X_{aT},X_{bT},\theta_0):=\max_{k=1,...,K}\mathbb E\left[{R_T(p_{akT},p_{bkT})}|W_T\right]$$ is an identified function. Note that again, our notation reflects that $(X_{aT},X_{bT})$ is a deterministic function of $W_T$. Hence, the weak optimality condition \eqref{eq:moment_ineg3} rewrites as \begin{equation} R(g_0(W_T);X_{aT},X_{bT},\theta_0)\leq \mathbb E\left[R^{\text{obs}}_T|W_T\right]. \label{eq:moment_ineg3bis} \end{equation} The function $R(.;X_{aT},X_{bT},\theta_0)$ is strictly increasing. 
Denoting by $R^{-1}(.;X_{aT},X_{bT},\theta_0)$ its inverse, we obtain the following upper bound for $g_0(W_T)$: \begin{equation} \label{eq:upper_bound} g_{0}(W_T)\leq g_0^U(W_T)= R^{-1}\left(\mathbb E\left[R^{\text{obs}}_T|W_T\right];X_{aT},X_{bT},\theta_0\right). \end{equation} \subsection{Partial identification of counterfactual revenues}\label{sub:id_counterf} As shown by Theorem \ref{thm:counterf_rev}, $R_r^I$ ($I\in\{c,i\}$) is a function of the distribution of $(\xi_{aT},\xi_{bT}, W_T)$ and the price elasticity $\varepsilon$. Further, under Assumption \ref{hyp:gamma}, given $W_T$ and a pre-allocation $(C_{aT},C_{bT})$, $R_r^I$ has the following form (see Appendix \ref{app:counter_rev}):\footnote{The only exception is for revenues under uniform pricing with prices constrained to belong to the grid, for which the form is more complicated. Specifically, it corresponds to the maximum over the grid of the revenue displayed in \eqref{eq:uniform_rev}. Nonetheless, we can still easily obtain bounds on these revenues using the monotonicity of the right-hand side of \eqref{eq:uniform_rev} with respect to $g_0(W_T)$.} \begin{equation*} R_r^I(W_T,C_{aT},C_{bT})=\sum_{d=a,b}\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})\exp\{ X_{dT}'\beta_0/\varepsilon\}g_0(W_T)^{1/\varepsilon}, \end{equation*} for some non-random term $\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})$. Then, using the bounds on $g_0(W_T)$ in \eqref{eq:lower_bound} and \eqref{eq:upper_bound}, we obtain lower and upper bounds for $R_r^I(W_T,C_{aT},C_{bT})$ as: \begin{equation}\label{eq:revenue_gamma} \left[g_0^L(W_T)^{1/\varepsilon},g_0^U(W_T)^{1/\varepsilon} \right]\times \sum_{d=a,b}\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})\exp\{ X_{dT}'\beta_0/\varepsilon\}. \end{equation} Bounds on $R_r^I$ then follow by averaging \eqref{eq:revenue_gamma} over trains: \begin{equation}\label{eq:revenue_gamma_ave} \mathbb{E}\left[\left[g_0^L(W_T)^{1/\varepsilon},g_0^U(W_T)^{1/\varepsilon} \right]\times \sum_{d=a,b}\alpha_{r}^I(C_{dT},\varepsilon,\lambda_{d0})\exp\{ X_{dT}'\beta_0/\varepsilon\}\right]. \end{equation} We also consider below ratios of counterfactual revenues. Given what precedes, such ratios $r_0$ satisfy $$r_0= \frac{\mathbb E[f_1(U_T) g_0(W_T)^{1/\varepsilon}]}{\mathbb E[f_2(U_T) g_0(W_T)^{1/\varepsilon}]},$$ for two identified, positive functions $f_1$ and $f_2$. Let $\mathcal{R}$ denote the identified set for $r_0$. Then one can show that $\mathcal{R}$ is an interval $[\underline{r}, \overline{r}]$, where $\overline{r}$ and $\underline{r}$ are defined as the unique solutions of \begin{align*} \mathbb E\left[g_0^L(W_T)^{1/\varepsilon} (f_1(U_T) - \overline{r} f_2(U_T)) + (g_0^U(W_T)^{1/\varepsilon} - g_0^L(W_T)^{1/\varepsilon})(f_1(U_T) - \overline{r} f_2(U_T))_+\right] & = 0, \\ \mathbb E\left[g_0^U(W_T)^{1/\varepsilon} (f_1(U_T) - \underline{r} f_2(U_T)) + (g_0^L(W_T)^{1/\varepsilon} - g_0^U(W_T)^{1/\varepsilon})(f_1(U_T) - \underline{r} f_2(U_T))_+\right] & = 0. \end{align*} \subsection{Estimation and inference} \label{sub:estimation_and_inference} We estimate $\theta_0$ as follows. Let $Y_{jkT}=1$ if seat $j$ in fare class $k$ for train $T$ is sold for destination $b$, and $Y_{jkT}=0$ otherwise. By \eqref{eq:binomial}, we have $$\Pr(Y_{jkT}=1|\xi_{aT},\xi_{bT}) = \Lambda\left(\ln(\xi_{bT}/\xi_{aT}) - \varepsilon \ln(p_{bkT}/p_{akT})\right),$$ and the $(Y_{jkT})_{j=1,...,n_{kT}}$ (with $n_{kT}:=n_{akT}+n_{bkT}$) are independent.
Thus, we can estimate $\varepsilon$ and $\ln(\xi_{bT}/\xi_{aT})$ by maximizing the likelihood of a logit model including train fixed effects. Because the number of sales for each train is large (usually above 250), the bias related to the estimation of these fixed effects is expected to be negligible. Second, under Assumption \ref{hyp:gamma} (with the equality in (i) replaced by \eqref{eq:aggreg_cities} to account for multiple cities in each $d\in\{a,b\}$), $$\ln(\xi_{bT}/\xi_{aT})=\ln\left[\frac{\sum_{c \in b}\exp(X_{cT}'\beta_0)}{\sum_{c \in a}\exp(X_{cT}'\beta_0)}\right] +\ln\left(\frac{\eta_{bT}}{\eta_{aT}}\right), \quad \eta_{bT}/\eta_{aT} \perp \!\!\! \perp (X_{cT})_c.$$ Then, we estimate $\beta_0$ by nonlinear least squares, replacing $\ln(\xi_{bT}/\xi_{aT})$ by its estimator. Finally, we estimate $\lambda_0$ by maximum likelihood on the sample $(\widehat{\ln(\eta_{bT}/\eta_{aT})})_T$, with $$\widehat{\ln(\eta_{bT}/\eta_{aT})} = \widehat{\ln(\xi_{bT}/\xi_{aT})}-\ln\left[\frac{\sum_{c \in b}\exp(X_{cT}'\widehat{\beta})}{\sum_{c \in a}\exp(X_{cT}'\widehat{\beta})}\right].$$ In principle, we could directly estimate $\theta_0$ by maximum likelihood, as under Assumptions \ref{hyp:cons_demand}-\ref{hyp:gamma}, the distribution of $(Y_{jkT})_{j=1,...,n_{kT}, k=1,...,K}$ is fully parametric. We do not adopt this method for two reasons. First, the estimators of $\varepsilon$ and $\lambda_0$ would be sensitive to the parametric specification on $(\eta_{aT},\eta_{bT})$. Second, the corresponding estimator is much more complicated to compute, which turns out to be important when conducting inference based on the bootstrap. \medskip Next, we estimate the lower and upper bounds on $g_0(W_T)$ by the empirical counterparts of \eqref{eq:lower_bound} and \eqref{eq:upper_bound}, where the conditional expectations $\mathbb E(.|W_T)$ are replaced by empirical means (as $W_T$ is discrete in our specification, see below for its full description). Finally, we estimate bounds on $R_r^I$ by the empirical counterpart of \eqref{eq:revenue_gamma_ave}. \medskip As estimation involves multiple steps, we rely on the bootstrap for inference. We compute confidence intervals on counterfactual revenues with nominal level $1-\alpha$ as follows. The lower bound corresponds to the $\alpha/2$-th quantile of the bootstrapped lower bound in \eqref{eq:revenue_gamma_ave}, while the upper bound corresponds to the $1-\alpha/2$-th quantile of the bootstrapped upper bound in \eqref{eq:revenue_gamma_ave}. This ensures an asymptotic coverage of at least $1-\alpha$, whether the parameter is point or partially identified. \section{Results} \subsection{Demand estimation}\label{sub:demand_est} We first consider the estimation of the price elasticity ($-\varepsilon$), the coefficients of the destination-train-specific effects ($\beta_0$), and the parameters $\lambda_0$ of the gamma distribution. The variables we include in $W_T$ are route dummies, time dummies for the year and month of the train, and indicators for whether it runs during the weekend, on public holidays or on school holidays, and whether the departure time is during rush hour.
Regarding the variables $X_{dT}$ or, to be more precise, $X_{cT}$ where $c$ denotes a city (see our discussion around Equation \eqref{eq:aggreg_cities}), we include travel time to $c$ by train $T$, its square, city-specific effects $X_c$ (namely, the population of the urban area of $c$ and whether $c$ is a regional capital) and all interactions $X_{cj}\times W_{Tk}$ for all components $X_{cj}$ and $W_{Tk}$ of the vectors $X_c$ and $W_T$, respectively. \medskip The estimates of price elasticities are displayed in the top panel of Table \ref{tab:binomial}. In Column I (our baseline specification), we assume a constant price elasticity across routes and trains and obtain a price elasticity of $-4.04$. This estimate is larger (in absolute value) than those found in the literature on the transportation industry. We refer for instance to the meta-analysis by \cite{Jevons_05} and the studies of \cite{Wardman_97}, \cite{Wardman_06} and \cite{Wardman_07}, which point to price elasticities in the range $[-1.3;-2.2]$. Unlike ours, most of the studies rely on aggregated data. This is likely to bias price-elasticity estimates upwards, a point that we illustrate in Appendix \ref{app:aggregated} by running regressions based on our data aggregated at different levels. \medskip The middle panel of Table \ref{tab:binomial} reports the estimates of the components of $\beta_0$ corresponding to the travel time and city-specific effects. The effects of population size and travel time by train are as expected. Larger cities lead to higher demand, and a longer travel time by train leads to lower demand for train tickets. The effect of travel time may nonetheless be attenuated for long journeys, though the coefficient of the square of travel time is not significant. \medskip The bottom panel of Table \ref{tab:binomial} reports the estimates of the parameters $(\lambda_{a0},\lambda_{b0})$ of the gamma distribution. Intermediate destinations are estimated to have larger uncertainty on demand ($V(\eta_{dT})=\lambda_{d0}$ under the gamma specification), though the difference between the two is not statistically significant. \input{tables/Results_demand_new_v3} \medskip In Column II, we estimate the demand model by allowing the price elasticity to vary across routes and trains. We find that travelers on routes from Paris to the southwest of France (namely, the routes to C\^ote basque, Toulouse and Perpignan) are less price-sensitive than those on other routes. Travelers on weekends or national holidays have a smaller price elasticity (in absolute value) than those on other days. On the other hand, once we control for weekends and national holidays, individuals traveling during peak hours appear to have an elasticity similar to the others. \medskip For several routes, there are actually multiple intermediate or final destinations. If Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line} hold, Theorem \ref{thm:ident_xi_eps} implies that the joint distribution of the purchases for these multiple destinations, conditional on the total number of purchases on the train, is multinomial, rather than binomial as it would be if the purchases for the intermediate or final stops were aggregated. We re-estimate the demand models corresponding to Columns I and II using a multinomial model. The results are displayed in Columns III and IV, respectively. The resulting price elasticities are almost identical to those obtained before. The destination effects and estimates of $\lambda_0$ are also very similar.
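\medskip For completeness, we recall the standard splitting property of independent Poisson counts behind this multinomial structure (a textbook fact, stated here only as a reminder): if $N_1,\dots,N_J$ are independent with $N_j\sim\mathcal{P}(\mu_j)$, then $$ (N_1,\dots,N_J)\,\Big|\,\sum_{j=1}^J N_j=n \;\sim\; \text{Multinomial}\left(n,\left(\frac{\mu_1}{\sum_{j=1}^J\mu_j},\dots,\frac{\mu_J}{\sum_{j=1}^J\mu_j}\right)\right). $$ Applying this with one count per intermediate or final destination, and with the corresponding Poisson means implied by our demand model, yields the multinomial analogue of \eqref{eq:binomial} used in Columns III and IV.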
\subsection{Counterfactual revenues}\label{sec:counterfactuals} We now turn to the counterfactual revenues under different pricing strategies, namely uniform, stopping-time, and full dynamic pricing. For counterfactual revenues $R_r^{I}$ with $r\in\{u,f,s\}$ and $I\in\{c,i\} $, we simulate the revenue with the optimally pre-allocated numbers of available seats for intermediate and final stops; for $R_r^{I}$ with $r\in\{sM,sM+\}$ and $I\in\{c,i\} $, i.e. stopping-time pricing strategies with $M$ (increasing) fares, we fix the pre-allocated number of available seats for intermediate stop $a$, $C_{aT}$, to be equal to the average number of seats sold for $a$ among all the trains operated on the given route. We do this, rather than finding the optimal value of $C_{aT}$, for computational reasons. Moreover, for the other pricing strategies ($r\not\in\{sM,sM+\}$), the revenues obtained this way secure at least 99\% of the revenue based on the optimal pre-allocation, so we expect very little effect of considering this specific pre-allocation. \medskip Table \ref{tab:supply_main} summarizes the set estimates of counterfactual revenues averaged over all routes based on Column I in Table \ref{tab:binomial} -- we discuss the results based on Column II in Section \ref{sec:robustness_checks} below. When possible, we indicate the 95\% confidence intervals on the set.\footnote{Computing the set estimates of counterfactual revenues can be costly, as it involves the terms $\alpha^I_r$, which are only defined by induction and thus can take time to compute. For instance, computing the set estimates corresponding to Line s.3 takes us 77 hours. For 500 bootstrap replications executed on 10 cores, this would mean 160 days of computational time.} Below, we organize our discussion of the results along different themes. \input{tables/counter_rev_main_set_estimates} \paragraph{How does the actual strategy compare to counterfactual pricing strategies?} Recall that by Assumption \ref{hyp:weak_opt}, the actual strategy is supposed to be better than any uniform pricing strategy under incomplete information and with prices constrained to belong to the price grid. The gains are however moderate: they range between 0\% and 9.5\%. Moreover, we cannot exclude that the actual strategy actually performs worse than the same uniform pricing strategy but with unconstrained prices (see Scenario u.2). In any case, the gains would be at most 8.1\%. When turning to the most constrained dynamic pricing strategy, namely two fare classes and increasing prices, we observe a loss in revenue ranging between 3.6\% and 12.2\%. When we consider the same constraints as in the actual pricing strategy, namely 12 fare classes and increasing prices, we estimate a loss in between 6.8\% and 15.1\%. \medskip Because of fixed pre-allocations for destinations $a$ and $b$, the revenues in Table \ref{tab:supply_main} are just lower bounds on the true, optimal revenues, which still reinforces our conclusions above. To get a sense of the quantitative effect of these pre-allocations, we simulate counterfactual revenues under unconstrained uniform pricing without pre-allocating capacities among intermediate and final destinations. The corresponding formulas are in Appendices \ref{ssub:proofs} and \ref{proof_thm_limited_rev}, see the sections ``uniform pricing'' therein. In the complete information set-up, we obtain a set estimate of $[13.45, 14.68]$, corresponding to an increase in between 1.4\% and 1.8\% compared to Scenario u.4.
In the incomplete information set-up, we obtain a higher gain of around 6\%, with a set estimate of $[11.96, 14.68]$. This 6\% might be an upper bound on the possible gains from not imposing any pre-allocation, as one could expect that the effects of pre-allocation can be more easily mitigated with more flexible pricing strategies. \medskip How can we explain the suboptimality of the actual strategy, in particular compared to the optimal strategies under similar pricing constraints? First, the initial seat allocation planning determined by the CRS may sometimes be far away from the optimal allocation under complete information. Then, revenue managers may fail to adjust this initial allocation sufficiently. Second, in our counterfactuals, we have considered that revenue managers knew the true $\varepsilon$, the true effects of covariates, and the true $b_T(.)$. This may not be the case in reality. In any case, our results emphasize the importance of not imposing strong optimality conditions on the supply side. \paragraph{Does it matter to have a fixed price grid?} We look at this question by comparing the revenues obtained under optimal uniform strategies with prices either chosen optimally on $[0,\infty)$ or only within the actual price grid of the train under consideration. The effect of the grid is higher in the complete information set-up, with a gain from unconstrained optimization roughly ranging in between 4\% and 5.3\%.\footnote{These are approximations obtained by dividing each lower bound and each upper bound. The exact bounds on the ratios are hard to obtain because the revenues under uniform pricing and constrained prices do not take a simple form. The approximation we use works well on other ratios, for which we can compute the exact bounds.} This is basically because demand is very high or very low for a few trains, in which case one would like to set a price above the maximal price, or below the minimal price of the grid. On the other hand, fixing the price grid has very small effects on revenues under incomplete information, with gains in between 0.8\% and 1.3\%. \paragraph{Does it pay to complexify pricing strategies?} The answer to that question very much depends on the information set-up. In the complete information case, the answer is basically ``no'': the difference in revenue between uniform pricing with unconstrained prices and full dynamic pricing is only around 2.0\%.\footnote{This empirical finding is consistent with simulation results in operational research and empirical results in economics. For example, \cite{zhao2000optimal} shows a similar improvement of between 2.4\% and 7.3\%. \cite{williams2021welfare} estimates a revenue improvement due to optimal dynamic pricing of around 2\% in the airline industry.} This figure sharply contrasts with the 19.3\% gain we estimate under incomplete information by comparing Scenarios f.1 and u.2. \medskip Intuitively, dynamic pricing still helps in the complete information case because of the uncertainty on the demand process. But the possibility of adjusting the pricing strategy as one learns about $(\xi_{aT},\xi_{bT})$ (or, equivalently, $(\eta_{aT},\eta_{bT})$) in the incomplete information set-up plays a much more important role.
To shed light on this point, we decompose the variance of the demand under the optimal uniform pricing with incomplete information into two parts: \begin{align*} \mathbb E\left[\mathbb V(D_{dT}(0,1,p^u_{dT})|W_T) \right]= & \mathbb{E}[\mathbb V(D_{dT}(0,1,p^u_{dT})|\xi_{dT})] +\mathbb E\left[\mathbb V(\mathbb{E}[D_{dT}(0,1,p^u_{dT})|\xi_{dT}]|W_T)\right], \end{align*} where $p^u_{dT}$ is the optimal price under uniform pricing for destination $d$ and train $T$. Even though they both involve $g_0(W_T)$, one can show that the two terms in this decomposition are point identified. For intermediate and final destinations respectively, the variation of the demand process (the first term) only explains on average $1.3\%$ and $0.9\%$ of the total variance. \medskip Now, even in the incomplete information set-up, one need not consider complex pricing strategies to obtain revenues close to the optimal ones. First, restricting to stopping-time pricing strategies incurs virtually no loss, compared to ``full'' dynamic pricing. By changing prices only when a purchase is observed, the firm can secure around 99.8\% of the revenue gain from uniform pricing to dynamic pricing regimes (comparing here Scenarios s.5 and f.1). Considering pricing strategies with 12 fare classes, as in reality but with possibly decreasing prices, still yields revenues in between 98.7\% and 99.1\% of the revenues under full dynamic pricing (comparing here Scenarios s.4 and f.1). \paragraph{How fast does information accumulate?} \medskip First, the tiny difference between the gains of full dynamic pricing under complete and incomplete information shows that revenue management is an effective instrument for demand learning. By learning from consumers' purchases in a Bayesian way, the firm can gradually pin down the uncertainty on overall demand. Pricing decisions then take this updated information into account, improving total revenue. In fact, this demand learning can compensate for almost all of the revenue loss due to ex ante uncertainty on demand. The difference in revenue under optimal uniform pricing between incomplete and complete information is around 2K\EUR{} (comparing Scenarios u.4 and u.2), while this difference decreases to only around 0.03K\EUR{} under optimal dynamic pricing (see Scenarios f.2 and f.1). This finding is in line with \cite{lin2006dynamic}, who reports a similar near-optimality of demand learning in a simulation study. \medskip The reason for this very modest loss compared to the complete information set-up is that information accumulates quickly. To illustrate this point, we simulate expected revenues under a class of intermediate stopping-time pricing strategies, where the firm is only allowed to dynamically price the first $K\%$ of seats, turning to uniform pricing for the remaining seats. Thus, $K=0$ and $K=100$ correspond respectively to the optimal uniform and stopping-time pricing strategies.\footnote{As for the pricing strategies in Theorem \ref{thm:counterf_rev}, we show in Appendix \ref{app:counter_rev} that one can partially identify the optimal revenues with intermediate stopping-time pricing strategies using the procedure described in Section \ref{sub:id_counterf}.} By quantifying the revenue gain from $K$ to $K+1$, we can characterize how much can be marginally gained from being able to extract information on demand from additional purchases ($1\%$ of total seats) and to adjust pricing optimally.
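Concretely, under Assumption \ref{hyp:gamma}(ii) the learning about $\eta_{dT}$ from observed purchases is a conjugate gamma-Poisson update. The following minimal sketch (purely hypothetical numbers; ``exposure'' stands for the remaining factors of the Poisson mean, such as the price and covariate terms) illustrates how quickly the posterior on $\eta_{dT}$ concentrates as purchases accumulate.
\begin{verbatim}
# Conjugate gamma-Poisson demand learning (illustration only, hypothetical numbers).
import numpy as np

rng = np.random.default_rng(0)
lam_prior, exposure = 2.0, 0.8        # eta ~ Gamma(lam_prior, 1); per-period Poisson mean = eta*exposure
eta_true = rng.gamma(lam_prior, 1.0)

shape, rate = lam_prior, 1.0          # Gamma(shape, rate) prior on eta
for period in range(1, 21):
    sales = rng.poisson(eta_true * exposure)
    shape += sales                    # conjugate update: Gamma prior + Poisson likelihood
    rate += exposure
    if period in (1, 5, 20):
        # posterior mean and variance, versus the true eta
        print(period, shape / rate, shape / rate**2, eta_true)
\end{verbatim}
\medskip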
Figure \ref{fig:intermediate} displays the lower bounds of the optimal revenues under these intermediate pricing strategies under complete (blue) and incomplete (red) information for $K=1,...,100$.\footnote{We also simulate the upper bounds of these revenues. The obtained curve is very similar.} \begin{center} \begin{figure}[H] \caption{Revenues (lower bound) under intermediate pricing strategies}\label{fig:intermediate} \vspace{-0.45cm} \includegraphics[width=0.9\textwidth]{figures/intermediate_Pricing.pdf} \vspace{-0.4cm} \end{figure} \end{center} \vspace{-1cm} Under incomplete information, demand learning is rather quick, as we can see from the pronounced concavity of the red line. With just $K=5$, the firm already achieves a revenue equal to the observed one; by learning from $50\%$ of the seats, it obtains a revenue around $3\%$ lower than that under complete information. On the other hand, the blue line shows that the revenue gains under complete information are small. The incremental revenue from $K$ to $K+1$ is almost constant and barely reaches 3\EUR{}. This latter result could be expected, given that the difference between uniform pricing and full stopping-time pricing is small under complete information. The striking difference in the pattern of marginal gain between complete and incomplete information settings is also in line with our previous findings: in terms of revenue improvement by dynamic pricing, the effect of learning overall demand $(\xi_{aT},\xi_{bT})$ is more important than that of pinning down the uncertainty in the demand process when $(\xi_{aT},\xi_{bT})$ is fixed. \subsection{Tests and robustness checks}\label{sec:robustness_checks} In this section, we first test the plausibility of Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line}, on which the identification of $\theta_0$ relies. Next, we relax the assumption of a time-invariant price elasticity. Then, we consider alternative parametric specifications. Finally, we explore the effect of specific routes on our results. \paragraph{Test of Assumptions \ref{hyp:cons_demand} and \ref{hyp:yield_line}.} These assumptions imply that the proportions $n_{bkT}/(n_{akT}+n_{bkT})$ remain constant across fare classes $k$ satisfying $p_{bkT}=p_{akT}$. A convenient way to check this is to restrict ourselves to two routes, Paris-Marseille and Paris-Mulhouse, for which $p_{bkT}=p_{akT}$ for all $k\in\{1,...,K\}$. By taking the first fare class as a reference, we simply regress $n_{bkT}/(n_{akT}+n_{bkT})$ on the other 11 fare class dummies and train fixed effects. We then test whether the coefficients of the fare class dummies are equal to zero. \medskip The results are presented in Table \ref{tab:binomialTest}. As emphasized by the top panel, most coefficients are not significant, despite the large number of observations ($453$ and $499$ for the two routes). For Paris-Marseille, the p-value of the joint test is larger than 0.05. For Paris-Mulhouse, the p-value is lower, but it appears that this result is mostly driven by the last fare classes (the joint test for nullity of the first 10 classes has a p-value of 15\%). The coefficients of the last two fare classes are indeed positive and quite large for this route, indicating that there would be more ``late purchasers'' for Mulhouse than for Strasbourg.
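A minimal sketch of this regression-based check (with simulated data under the null of constant shares; hypothetical numbers, not our actual code) could look as follows.
\begin{verbatim}
# Separability check: regress the share of b-purchases on fare-class dummies and
# train fixed effects, then test the class dummies jointly (simulated example).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(2)
n_trains, n_classes = 450, 12
df = pd.DataFrame({
    "train": np.repeat(np.arange(n_trains), n_classes),
    "fare_class": np.tile(np.arange(1, n_classes + 1), n_trains),
})
true_share = rng.uniform(0.2, 0.5, size=n_trains)              # train-specific b-share
df["share"] = rng.binomial(40, true_share[df["train"]]) / 40   # 40 seats per class

res = smf.ols("share ~ C(fare_class) + C(train)", data=df).fit()
print(anova_lm(res, typ=2).loc["C(fare_class)"])               # joint F-test of the 11 dummies
\end{verbatim}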
\begin{comment} u2: 11.452 12.307 u4: 13.661 14.624 s5: 13.763 14.795 s10: 13.892 14.873 f1: 13.787 14.822 f2: 13.910 14.892 \end{comment} \medskip To see whether this pattern could influence our results beyond this specific route, we re-estimate $\varepsilon$ using only the first 10 fare classes. We obtain a price elasticity of $-4.86$, which is thus somewhat higher in absolute value than the baseline estimate of $-4.04$ obtained with the 12 fare classes. We then recompute the identified sets of counterfactual revenues for Scenarios u2, u4, s5, s10, f1 and f2 (as they are the simplest to compute). The optimal revenues are slightly higher but with differences never exceeding 3.3\% on the lower bounds and 1.2\% on the upper bounds. \input{tables/table_separability_check} \vspace{-0.5cm} \paragraph{Time-varying price elasticities.} One could expect that consumers purchasing their tickets earlier would be more price elastic than those buying their tickets later. For instance, the latter could include more business travelers. If so, the assumption of a time-invariant price elasticity would be violated. To test this condition, we replace $\varepsilon$ in \eqref{eq:binomial} by $\varepsilon_{\text{early}} 1\{k\leq S \} + \varepsilon_{\text{late}} 1\{k> S \}$ for some threshold $S$ that we vary. In other words, we distinguish the price elasticity of early purchasers, defined as those who purchase a ticket in a fare class lower than or equal to $S$, from that of late purchasers, who purchase in a fare class above $S$. We then compare $\varepsilon_{\text{early}}$ to $\varepsilon_{\text{late}}$ to assess the extent to which the assumption of a time-invariant price elasticity holds, and the impact of relaxing this condition on counterfactual revenues. \medskip The results are displayed in Table \ref{tab:check_constancy_time}. We consider threshold values $S$ equal to $9$, $10$ and $11$. In the three cases, ``early purchasers'' are estimated to be more price elastic than ``late purchasers'', with the estimated price elasticity of the former being greater (in absolute value) than the baseline estimate in Table \ref{tab:binomial} ($-4.04$) but still close to the upper bound of its 95\% confidence interval. \input{tables/table_constant_elas_check} \vspace{-0.3cm} Next, to assess how relaxing the time-invariance of the price elasticity affects counterfactual revenues, we simulate some scenarios by explicitly considering early and late purchasers (with $\varepsilon_{\text{early}}$ and $\varepsilon_{\text{late}}$, respectively) in the demand model. Specifically, we set $S=10$, re-estimate the demand using the procedure described in Sections \ref{sub:ident_demand} and \ref{sec:id_Bt} with $\varepsilon_{\text{early}}$ and $\varepsilon_{\text{late}}$, and compute the revenue with the optimal uniform pricing under complete information (Scenario u.4 in Table \ref{tab:supply_main}). We find that the estimates of the destination-train-specific effects, $\beta_0$, and the parameters of the gamma distribution, $\lambda_0$, are close to the baseline results. Furthermore, the simulated revenue is close to that of Scenario u.4 in Table \ref{tab:supply_main}. Both findings suggest that despite the difference in price elasticity between early and late purchasers, our results are robust to the assumption of a time-invariant price elasticity. We refer to the Online Appendix \ref{app:constant_elas_details} for more details on the estimation method and the results.
\paragraph{Alternative parametric specifications.} We conduct two robustness checks. First, we simulate counterfactual revenues with a lognormal specification on $\eta_{dT}$ instead of a gamma distribution (Assumption \ref{hyp:gamma}(ii)). The drawback of a lognormal specification is that it is not conjugate with the Poisson distribution. As a result, the updated distribution of $\eta_{dT}$ in the incomplete information set-up takes a complicated form, making it very difficult to compute counterfactual scenarios. Nevertheless, this issue does not arise for uniform pricing, nor under complete information. Table \ref{tab:supply2_benforroni} shows the results for Scenarios u.2, u.4, s.10 and f.2. Even if the bounds are wider than in the baseline specification, the results are similar. \medskip \input{tables/robustness_check_lognormal_v2} \vspace{-0.3cm} Second, we have focused so far on the demand model corresponding to Column I in Table \ref{tab:binomial}. We did so because counterfactual revenues are harder to compute under the richer specification corresponding to Column II in the same table. Nevertheless, we were able to compute counterfactual revenues with this specification for a few scenarios. The results, presented in Table \ref{tab:rev_column2}, are hardly affected. \paragraph{Effect of specific routes.} Table \ref{DES2} shows that the routes to Marseille and C\^ote basque have unusually high and low loads, so one may worry that revenue management was very different for these lines. We re-simulate the counterfactual revenues by excluding these two routes. The results are hardly affected, with changes in the bounds of at most 1\% over all scenarios. \input{tables/counter_rev_main_column2_v2} \vspace{-1cm} \section{Conclusion} Though the framework we have developed is tailored to our application, several of our results could be applied to other set-ups. The insight that many counterfactual revenues only depend on price elasticity and total demand, and not on the precise timing of consumers' arrivals, is convenient when no details on the dates of purchases are available. Similarly, the censoring issue and the absence of exogenous variations in prices may often occur. Our identification strategy, combining exogenous variations in relative prices and moment inequalities based on basic rationality on the consumers' side and weak optimality conditions on the firm's pricing strategy, could then be applied in such contexts. Our results also suggest that such moment inequalities may be quite informative in practice. \newpage
\section{Introduction} Let $\Omega\subset \mathbb R^d$ be a bounded domain with smooth boundary $\partial\Omega$. We consider the following problem \begin{align} \partial_t u - (1+D_t^{\{m\}})\Delta u & = f(u, \mathcal H u)\text{ in } \Omega, t\in (0,T),\label{e1}\\ u & = 0 \text{ on } \partial\Omega,\; t\ge 0,\label{e2}\\ u(0) & = \xi \text{ in } \Omega, \label{e3} \end{align} where $f$ is a given function, $\partial_t=\frac{\partial}{\partial t}$, $D_t^{\{m\}}$ is the nonlocal derivative of Riemann-Liouville type defined by $$ D_t^{\{m\}} v(t) = \frac{d}{dt}\int_0^t m(t-s)v(s)ds, $$ with kernel $m\in L^1_{loc}(\mathbb R^+)$, and $\mathcal H$ is the convolution operator given by $$\mathcal Hv(t)=\int_0^t \ell (t-\tau)v(\tau)d\tau,\; \ell \in L^1(0,T),$$ which expresses the dependence of $f$ on the history of the state of the system. The proposed system is a general model for some problems studied in the literature. Indeed, in the case where $m$ is a constant, \eqref{e1} is of classical diffusion type. If $m$ is a regular function, e.g. $m\in C^1(\mathbb R^+)$, then one gets \begin{equation*} \partial_t u - (1+m_0)\Delta u -\int_0^t m_1(t-s)\Delta u(s) ds = f, \end{equation*} with $m_0=m(0)$ and $m_1(t)=m'(t)$, which is a nonclassical diffusion equation. If $m(t) = m_0 t^{-\alpha}/\Gamma(\alpha)$ with $m_0>0$, then we see that \eqref{e1} is a Rayleigh-Stokes equation, i.e. \begin{equation}\label{RS-eq} \partial_t u - (1+m_0\partial_t^\alpha)\Delta u = f. \end{equation} The latter equation was derived in \cite{FJFV09,STZM06} to describe the behavior of second-grade fluids. It should be mentioned that some numerical schemes for Rayleigh-Stokes equations were developed in \cite{Bazh15,Bi18,Chen13,Chen08}. On the other hand, analytical representations for the solution of \eqref{RS-eq} in the linear case were obtained in \cite{Khan09,STZM06}, and recently, the regularity for nonlinear Rayleigh-Stokes equations has been established in \cite{Lan20,Luc21,ZW19}. For more studies related to \eqref{RS-eq}, we refer the reader to \cite{Luc19,NLATZ,Tuan19}, where the terminal value problem was studied. In this work, we are interested in the solvability and regularity analysis for problem \eqref{e1}-\eqref{e3} in the case where the nonlinearity involves the history of the state through $\mathcal H$. The term $\mathcal Hu$ arises from, e.g., the inverse source problem presented in the last section. Alternatively, this term may arise from control problems, where the feedback requires some information on the history of the system. One will find that the term $\mathcal Hu$ causes some difficulties in analyzing the regularity of solutions. As an important feature of our study, it is noted that the nonlinearity $f$ is allowed to take weak values, i.e. $f(u,\mathcal Hu)$ may belong to the dual of fractional Sobolev spaces. This enables us to consider the case when $f$ contains polynomial or gradient terms, which have connections with practical applications. This also extends the recent results established in \cite{KT2022,Lan20,Luc21}. The rest of this paper is organized as follows. In the next section, we first recall some notions and facts related to Hilbert scales and fractional Sobolev spaces. Additionally, a representation of the solution to the linear problem will be given, together with some essential estimates for the resolvent operator. Section 3 is devoted to proving the main results, including the global existence and H\"older regularity of solutions.
It is worth noting, in particular, that the H\"older regularity result in our work can not be obtained by the technique used in \cite{KT2022}, since the differentiability of resolvent operator is unavailable on dual spaces. In order to overcome this impediment, we construct a regular closed subset of solution space, which is invariant under the solution operator, and make use of fixed point arguments. The last section shows an application of the obtained results, where we demonstrate that an inverse source problem governed by Rayleigh-Stokes equations is solvable by transforming it to a prototype of problem \eqref{e1}-\eqref{e3}. \section{Preliminaries} \subsection{Functional spaces} Let $-\Delta$ be the Laplacian associated with the homogenuous Dirichlet boundary condition. Then one has a sequence of eigenfunctions $\{e_n\}$ of $-\Delta$ which forms an orthonormal basis of $L^2(\Omega)$, and we have the following representation $$-\Delta v = \sum _{n=1}^\infty \left(\lambda_n \int\limits_\Omega v(x)e_n(x)dx\right)e_n,$$ where $\lambda_n>0$ is the eigenvalue corresponding to the eigenfunction $e_n$. For $\varrho\ge 0$, we define the following functional space $$\mathbb H^\varrho:=\left\{\varphi\in L^2(\Omega) \bigg| \|\varphi\|^2_{\mathbb H^\varrho} := \sum_{n=1}^\infty \left|\lambda_n^{\frac \varrho 2}\int\limits_\Omega \varphi(x)e_n(x)dx \right |^2<\infty \right\}.$$ Let $\mathbb H^{-\varrho}$ denote the dual space of $\mathbb H^\varrho$ with respect to the dual pair $\langle \cdot,\cdot \rangle_{-\varrho,\varrho} $ on $\mathbb H^{-\varrho} \times \mathbb H^\varrho$ with the norm $$ \|\varphi\|^2_{\mathbb H^{-\varrho}} := \sum_{n=1}^\infty \left|\lambda_n^{-\frac \varrho 2} \langle \varphi,e_n \rangle_{-\varrho,\varrho} \right |^2<\infty.$$ Clearly, $\mathbb H^{\varrho_2} \hookrightarrow \mathbb H^{\varrho_1}$ and $\mathbb H^{-\varrho_1} \hookrightarrow \mathbb H^{-\varrho_2}$ with $\varrho_2\ge \varrho_1\ge 0$. The family of Hilbert spaces $\mathbb H^\varrho, \varrho\in\mathbb R$, is said to be the Hilbert scales. The fractional Laplacian $(-\Delta)^\gamma$, $\gamma\ge 0$, is defined as follows \begin{align*} (-\Delta)^\gamma: \; \mathbb H^{\gamma} &\to \mathbb H^{-\gamma};\\ (-\Delta)^\gamma \varphi &= \sum_{n=1}^\infty \lambda_n^\gamma \left(\int\limits_\Omega \varphi(x)e_n(x)dx \right)e_n. \end{align*} Then, $\|(-\Delta)^\gamma \varphi\|_{\mathbb H^{-\gamma}} = \|\varphi\|_{\mathbb H^{\gamma}}$. We now recall the notion of the fractional Sobolev spaces (see, e.g. \cite{Dem, Di} for details). For $r \in(0,1)$ and $p \in[1,+\infty)$, the functional space $$ W^{r, p}(\Omega):=\left\{u \in L^{p}(\Omega): \int_{\Omega} \int_{\Omega} \frac{|u(x)-u(y)|^{p}}{|x-y|^{d+p r}} d x d y<\infty\right\} $$ is called the fractional Sobolev space with the norm $$ \|u\|_{W^{r, p}(\Omega)}=\left(\int_{\Omega}|u|^{p} d x+\int_{\Omega} \int_{\Omega} \frac{|u(x)-u(y)|^{p}}{|x-y|^{d+p r}} d x d y\right)^{1 / p} $$ For $r \geq 1$, denote by $[r]$ the integral part of $r$, we define $$W^{r, p}(\Omega):=\left\{u \in W^{[r], p}(\Omega): D^{\eta} u \in W^{r-[r], p}(\Omega), \forall |\eta| \leq[r]\right\}$$ which is a Banach space with the norm $$ \|u\|_{W^{r, p}(\Omega)}=\left(\|u\|_{W^{[r], p}}^{p}(\Omega)+\sum_{|\eta|=[r]} \int_{\Omega} \int_{\Omega} \frac{\left|D^{\eta} u(x)-D^{\eta} u(y)\right|^{p}}{|x-y|^{d+p(r-[r])}} d x d y\right)^{1 / p}. $$ Denote $$ W_{0}^{r, p}(\Omega):=\overline{C_{c}^{\infty}(\Omega)}^{W^{r, p}(\Omega)}, H^{r}(\Omega):=W^{r, 2}(\Omega), H_{0}^{r}(\Omega):=W_{0}^{r, 2}(\Omega). 
$$ Assume that $\Omega$ is a domain with sufficiently smooth boundary such that $C_{c}^{\infty}(\Omega)$ is dense in $H^{r}(\Omega)$ with $0<r<1 / 2$, then $H_{0}^{r}(\Omega)=H^{r}(\Omega)$ (see \cite[Corollary 8.10.1]{Bha}). It follows from \cite{BSV15} that \begin{equation}\label{hr} \mathbb{H}^{r}= \begin{cases}H_{0}^{r}(\Omega), & 0 \leq r<1 / 2, \\ H_{00}^{1 / 2}(\Omega) \varsubsetneqq H_{0}^{1 / 2}(\Omega), & r=1 / 2, \\ H_{0}^{r}(\Omega), & 1 / 2<r \leq 1, \\ H_{0}^{1}(\Omega) \cap H^{r}(\Omega), & 1<r \leq 2,\end{cases} \end{equation} where $H_{00}^{1 / 2}(\Omega)$ is the Lions-Magenes space determined by $$ H_{00}^{1 / 2}(\Omega)=\left\{u \in H^{1 / 2}(\Omega): \int_{\Omega} \frac{|u(x)|^{2}}{(\operatorname{dist}(x, \partial \Omega))^{2}} d x<\infty\right\}. $$ Then the following lemma is a direct consequence of \eqref{hr}. \begin{lemma}\label{lm:em} Denote by $H^{-r}(\Omega)$ the dual space of $H_{0}^{r}(\Omega)$ with $r \geq 0$. If $0 \leq r \leq r^{\prime} \leq 2$, then $$ \mathbb{H}^{r^{\prime}} \hookrightarrow \mathbb{H}^{r} \hookrightarrow H^{r}(\Omega) \hookrightarrow L^{2}(\Omega) \hookrightarrow H^{-r}(\Omega) \hookrightarrow \mathbb{H}^{-r} \hookrightarrow \mathbb{H}^{-r^{\prime}}. $$ \end{lemma} The following lemma represents embeddings between fractional Sobolev spaces. \begin{lemma}\cite[Theorem 8.12.6]{Bha}\label{sob_emb} Let $1\le p,p'\le \infty, 0\le r,r' <\infty$ and $r'-\frac{d}{p'} \ge r-\frac{d}{p}$. Then, $$W^{r',p'}(\Omega) \hookrightarrow W^{r,p}(\Omega).$$ \end{lemma} Using Lemma \ref{lm:em} and \ref{sob_emb}, we have the following embeddings. \begin{lemma}\label{lm:em1} We have \begin{enumerate}[a)] \item $L^{p}(\Omega) \hookrightarrow H^{r}(\Omega) \hookrightarrow \mathbb{H}^{r}$ if $\left\{-\frac{d}{2}<r \leq 0, p \geq \frac{2 d}{d-2 r}\right\}$. \item $\mathbb{H}^{r} \hookrightarrow H^{r}(\Omega) \hookrightarrow L^{p}(\Omega)$ if $ \left\{0 \leq r<\frac{d}{2}, 1 \leq p \leq \frac{2 d}{d-2 r}\right\}. $ \end{enumerate} \end{lemma} \subsection{Representation of solutions to linear problem} In order to investigate problem \eqref{e1}-\eqref{e3}, we assume the following hypothesis: \begin{itemize} \it \item[(\textbf{M})] The function $m\in L^1_{loc}(\mathbb R^+)$ is nonnegative such that $a(t): = 1 + m(t)$ is completely positive. \end{itemize} Recall that the real valued function $a$ is said to be completely positive if the solutions to the following integral equations \begin{align} & s(t) + \theta \int_0^t a(t-\tau)s(\tau)d\tau = 1,\; t\ge 0,\label{ire1}\\ & r(t) + \theta \int_0^t a(t-\tau)r(\tau)d\tau = a(t),\; t>0,\label{ire2} \end{align} are nonnegative for each $\theta>0$. The theory of completely positive functions can be found in \cite{CN81,Pruss}. An equivalent condition for $a$ to be completely positive is as follows: \begin{enumerate} \item[(PC)] \it There exists a nonincreasing and nonnegative function $k\in L^1_{loc}(\mathbb R^+)$ and $\epsilon\ge 0$ such that $$ \epsilon a + k*a = 1\text{ on } (0,\infty). $$ \end{enumerate} In the present work, we assume that $m$ is unbounded on $\mathbb R^+$, which implies $\epsilon=0$. The condition (PC) is satisfied if $m$ is completely monotone, i.e. $(-1)^n m^{(n)} (t) \ge 0$ on $(0,\infty)$, for all $n\in\mathbb N$ (see \cite{Miller}). First we consider the relaxation problem: \begin{align} \omega'(t) + \lambda (1+D_t^{\{m\}}) \omega (t) & = 0,\; t> 0,\label{re1}\\ \omega (0) & = 1,\label{re2} \end{align} where the unknown $\omega$ is a scalar function, $\lambda$ is a positive parameter. 
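Before proceeding, we note that the relaxation problem \eqref{re1}-\eqref{re2} (equivalently, after integration, the Volterra equation \eqref{ire1} with $\theta=\lambda$) is straightforward to approximate numerically. The following minimal sketch (illustration only; the kernel $m(t)=m_0t^{-1/2}/\Gamma(1/2)$ is just one admissible completely monotone choice) uses an implicit product-rectangle rule and can be used to check numerically that the solution is positive and nonincreasing.
\begin{verbatim}
# Numerical sketch (illustration only): solve the scalar Volterra equation
#   w(t) + lam * int_0^t (1 + m(t-s)) w(s) ds = 1
# by an implicit product-rectangle rule, with the completely monotone kernel
# m(t) = m0 * t^{-1/2} / Gamma(1/2)  (a hypothetical choice of admissible kernel).
import numpy as np

def relaxation(lam, m0=1.0, T=1.0, n=400):
    h = T / n
    k = np.arange(1, n + 1)
    # A[k-1] = exact integral of a(s) = 1 + m(s) over [(k-1)h, kh]
    A = h + m0 * (2.0 / np.sqrt(np.pi)) * (np.sqrt(k * h) - np.sqrt((k - 1) * h))
    w = np.empty(n + 1)
    w[0] = 1.0
    for i in range(1, n + 1):
        conv = np.dot(w[1:i], A[i - 1:0:-1])   # sum_{j=1}^{i-1} w_j * A_{i-j+1}
        w[i] = (1.0 - lam * conv) / (1.0 + lam * A[0])
    return h * np.arange(n + 1), w

t, w = relaxation(lam=5.0)
print(w[0], w[-1], bool(np.all(np.diff(w) <= 1e-10)))  # w decays from 1 and is nonincreasing
\end{verbatim}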
Integrating both sides of \eqref{re1} over $(0,t)$, we obtain \begin{align} \omega(t) + \lambda \int_0^t (1+m(t-\tau))\omega(\tau) d\tau=1, \label{re3} \end{align} which is just \eqref{ire1} with $\lambda =\theta$. We denote by $\omega(t,\lambda)$ the solution of \eqref{re3} to emphasize the dependence of $\omega$ on the parameter $\lambda$. We list some properties of $\omega$ in the following proposition. \begin{proposition}\label{pp-relax-func} Let $\omega$ be the solution to \eqref{re1}-\eqref{re2}. Then \begin{enumerate} \item[(a)] $\omega$ is nonincreasing on $\mathbb R^+$ and \begin{align*} 0<\omega(t,\lambda)\le \frac{1}{1+\lambda\int_0^t (1+m(\tau))d\tau},\; \forall t\ge 0,\;\lambda>0. \end{align*} \item[(b)] The following estimate holds \begin{align*} \int_0^t \omega(\tau,\lambda)d\tau\le \lambda^{-1}(1-\omega(t,\lambda)),\;\forall t\ge 0, \lambda>0. \end{align*} \item[(c)] For each $t>0$, the function $\lambda\mapsto \omega(t,\lambda)$ is nonincreasing. \item[(d)] The function $v(t) = \omega(t,\lambda)v_0 + \int_0^t\omega(t-\tau,\lambda)g(\tau)d\tau$ is a solution to the problem \begin{align*} v'(t) + \lambda (1+D^{\{m\}}_t)v(t) &= g(t),\\ v(0)&=v_0. \end{align*} \end{enumerate} \end{proposition} \begin{proof} The properties (a) and (b) are implied from \eqref{re3}. The properties (c) and (d) were proved in \cite{KT2022}. \end{proof} Now we look for a representation of the solution to the following initial value linear problem \begin{align} \partial_t u - (1 + D_t^{\{m\}})\Delta u & = F \;\text{ in }\Omega, t\in (0,T],\label{le1}\\ u & = 0\; \text{ on } \partial\Omega,\; t\in [0,T],\label{le2}\\ u(0) & = \xi \; \text{ in }\Omega, \label{le3} \end{align} where $F\in C([0,T];L^2(\Omega))$. Assume that \begin{align*} u(t) = \sum_{n=1}^\infty u_n(t)e_n, \; F(t) = \sum_{n=1}^\infty F_n(t)e_n. \end{align*} Using these expansions in \eqref{le1}, we obtain \begin{align*} &u_n'(t) +\lambda_n (1 + D_t^{\{m\}})u_n(t) = F_n(t),\\ &u_n(0) = \xi_n := (\xi,e_n). \end{align*} Applying Proposition \ref{pp-relax-func}(d), we get $$ u_n(t) = \omega(t,\lambda_n)\xi_n + \int_0^t \omega(t-\tau,\lambda_n)F_n(\tau)d\tau. $$ Therefore, \begin{align} u(t) = S(t)\xi + \int_0^t S(t-\tau)F(\tau)d\tau,\label{sol-form} \end{align} where $S(t)$ is the \textit{resolvent operator} determined by \begin{align} S(t)\xi = \sum_{n=1}^\infty \omega(t,\lambda_n)\xi_n e_n ,\; \xi\in L^2(\Omega).\label{sol-op} \end{align} Obviously, $S(t)$ is a bounded linear operator on $L^2(\Omega)$ for all $t\ge 0$. Moreover, we have the following statements. \begin{lemma}\label{lm-sol-op} Let $\{S(t)\}_{t\ge 0}$ be the resolvent family defined by \eqref{sol-op}, $v\in L^2(\Omega)$ and $T>0$. Then, \begin{enumerate} \item[(a)] $S(\cdot)v\in C([0,T];L^2(\Omega))$ and $\|S(t)\|\le \omega(t,\lambda_1)$ for all $t\ge 0$. \item[(b)] For $g\in C([0,T];\mathbb H^{\mu-1})$, $\mu>0$, we have $S*g\in C([0,T];\mathbb H^{\mu})$. Furthermore, \begin{equation}\label{lm-sol-op-a} \|S*g(t)\|^2_{\mathbb H^\mu}\le \int_0^t \omega(t-\tau,\lambda_1)\|g(\tau)\|^2_{\mathbb H^{\mu-1}}d\tau,\text{ for all } t\ge 0. \end{equation} \item[(c)] If $m$ is nonincreasing, then $S(\cdot)v\in C^1((0,T];L^2(\Omega))$ and it holds that \begin{equation}\label{lm-sol-op-b} \|S'(t)\|\le t^{-1}\;\text{ for all } t>0. \end{equation} \item[(d)] For $\delta\in (0,1)$, $g\in C([0,T];\mathbb H^{\mu-1-\delta})$, we have \begin{align*} \|S*g(t)\|^2_{\mathbb H^\mu}\le \int_0^t (t-\tau)^{-\delta}\|g(\tau)\|^2_{\mathbb H^{\mu-1-\delta}}d\tau. 
\end{align*} \item[(e)] If $(1*m)^{-1}\in L^1(0,T)$, then for $g\in C([0,T];\mathbb H^{\mu-2})$, we have \begin{align*} \|S*g(t)\|^2_{\mathbb H^\mu}\le \int_0^t \frac{\|g(\tau)\|^2_{\mathbb H^{\mu-2}}}{(1*m)(t-\tau)}d\tau. \end{align*} \end{enumerate} \end{lemma} \begin{proof} The properties (a) and (c) were proved in \cite[Lemma 1]{KT2022}. Now we show (b). For $g\in C([0,T];\mathbb H^{\mu-1})$, we have \begin{align*} \|S*g(t)\|^2_{\mathbb H^\mu} = \sum_{n=1}^\infty \lambda^\mu_n \left(\int_0^t \omega(t-\tau,\lambda_n)g_n(\tau)d\tau\right)^2,\; g_n(\tau)=(g(\tau),e_n). \end{align*} Moreover, by the H\"older inequality, we can estimate \begin{align*} \left(\int_0^t \omega(t-\tau,\lambda_n)g_n(\tau)d\tau\right)^2 & \le \left(\int_0^t \omega(t-\tau,\lambda_n)d\tau\right)\left(\int_0^t \omega(t-\tau,\lambda_n)|g_n(\tau)|^2d\tau\right)\\ & \le \lambda_n^{-1}\int_0^t \omega(t-\tau,\lambda_1)|g_n(\tau)|^2d\tau. \end{align*} Hence, \begin{align*} \|S*g(t)\|^2_{\mathbb H^\mu} & \le \sum_{n=1}^\infty \int_0^t \omega(t-\tau,\lambda_1)\lambda_n^{\mu-1}|g_n(\tau)|^2 d\tau\\ & = \int_0^t \omega(t-\tau,\lambda_1)\|g(\tau)\|^2_{\mathbb H^{\mu-1}} d\tau. \end{align*} Next, we prove (d). Assume that $g\in C([0,T];\mathbb H^{\mu-1-\delta})$. Then \begin{align*} \|S*g(t)\|^2_{\mathbb H^\mu}=\sum_{n=1}^\infty \lambda_n^\mu \left(\int_0^t \omega(t-\tau,\lambda_n)g_n(\tau)d\tau\right)^2,\; g_n(\tau)=(g(\tau),e_n). \end{align*} Using the H\"older inequality and Proposition \ref{pp-relax-func}, we obtain \begin{align*} \left(\int_0^t \omega(t-\tau,\lambda_n)g_n(\tau)d\tau\right)^2 & \le \left(\int_0^t \omega(t-\tau,\lambda_n)d\tau\right)\left(\int_0^t \omega(t-\tau,\lambda_n)|g_n(\tau)|^2 d\tau\right)\\ & \le \lambda_n^{-1}\int_0^t \frac{|g_n(\tau)|^2}{1+\lambda_n (t-\tau)} d\tau\\ & \le \lambda_n^{-1}\int_0^t \frac{|g_n(\tau)|^2}{\lambda_n^\delta (t-\tau)^\delta} d\tau, \end{align*} here, we used the inequality $1+b\ge b^\delta$ for $b\ge 0, \delta\in (0,1)$. Therefore, \begin{align*} \|S*g(t)\|^2_{\mathbb H^\mu}&\le \sum_{n=1}^\infty \lambda_n^{\mu-1-\delta}\int_0^t (t-\tau)^{-\delta}|g_n(\tau)|^2 d\tau\\ & = \int_0^t (t-\tau)^{-\delta}\|g(\tau)\|^2_{\mathbb H^{\mu-1-\delta}}d\tau. \end{align*} The last property is proved similarly by noting that \begin{align*} \left(\int_0^t \omega(t-\tau,\lambda_n)g_n(\tau)d\tau\right)^2 & \le \left(\int_0^t \omega(t-\tau,\lambda_n)d\tau\right)\left(\int_0^t \omega(t-\tau,\lambda_n)|g_n(\tau)|^2 d\tau\right)\\ & \le \lambda_n^{-1}\int_0^t \frac{|g_n(\tau)|^2}{1+\lambda_n (1*m)(t-\tau)} d\tau\\ & \le \lambda_n^{-2}\int_0^t \frac{|g_n(\tau)|^2}{ (1*m)(t-\tau)} d\tau. \end{align*} The proof is complete. \end{proof} \section{Global solvability and H\"older regularity} In order to solve problem \eqref{e1}-\eqref{e3}, we require the following assumption on the nonlinearity: \begin{enumerate}\it \item[(F)] The function $f: \mathbb H^{\mu}\times \mathbb H^{\mu} \to \mathbb H^{-\theta}$ satisfies $f(0,0)=0$ and for $\rho, \rho'>0$, we have $$ \|f(v_1,w_1)-f(v_2,w_2)\|_{\mathbb H^{-\theta}}\le L_f(\rho)\|v_1-v_2\|_{\mathbb H^\mu}+K_f(\rho')\|w_1-w_2\|_{\mathbb H^\mu}, $$ where $\|v_1\|_{\mathbb H^\mu}, \|v_2\|_{\mathbb H^\mu}\le \rho$, $\|w_1\|_{\mathbb H^\mu}, \|w_2\|_{\mathbb H^\mu}\le \rho'$, $0<\mu<2$, $\theta >0$ , $L_f$ and $K_f$ are nonnegative functions. \end{enumerate} By the representation of the solution to the linear problem given by \eqref{sol-form}, we have the following definition of mild solution to \eqref{e1}-\eqref{e3}. \begin{definition} Let $\xi\in\mathbb H^\mu$. 
The function $u\in C([0,T];\mathbb H^\mu)$ is called a mild solution to \eqref{e1}-\eqref{e3} on the interval $[0,T]$ if the following identity holds $$ u(t) = S(t)\xi + \int_0^t S(t-\tau)f(u(\tau),\mathcal Hu(\tau))d\tau,\; t\in [0,T]. $$ \end{definition} \subsection{Global solvability} \begin{theorem}\label{th-sol} Assume that the condition (F) is satisfied with $\theta=1+\delta-\mu$, $\delta\in(0,1)$ and $$ \limsup\limits_{\rho\to 0}L_f(\rho)=L_f^*,\; \limsup\limits_{\rho'\to 0}K_f(\rho')=K_f^*, $$ such that $$ 8T^{1-\delta}(1-\delta)^{-1}({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1})<1. $$ Then there exists $\rho^*>0$ such that for $\|\xi\|_{\mathbb H^\mu}\le \frac 12 \rho^*$, problem \eqref{e1}-\eqref{e3} possesses a unique mild solution $u$ on $[0,T]$ obeying $\|u(t)\|_{\mathbb H^\mu}\le \rho^*$, for $t\in [0,T]$. \end{theorem} \begin{proof} We will show that the operator $\Phi$ determined by $$ \Phi(u)(t)=S(t)\xi + \int_0^t S(t-\tau)f(u(\tau),\mathcal Hu(\tau))d\tau,\; t\in [0,T], $$ has a fixed point in the space $C([0,T];\mathbb H^\mu)$ furnished by the norm $\|u\|_\infty=\sup\limits_{t\in [0,T]}\|u(t)\|_{\mathbb H^\mu}$. Denote by $B_\rho$ the closed ball of radius $\rho$ in $C([0,T];\mathbb H^\mu)$ centered at the origin. For $u\in B_\rho$, we have $\mathcal Hu\in B_{\rho'}$ with $\rho' = \rho\|\ell\|_{L^1}$. Then \begin{align*} \|\Phi(u)(t)\|^2_{\mathbb H^\mu} & \le 2\|S(t)\xi\|^2_{\mathbb H^\mu} + 2\|S*f(u(\cdot),\mathcal Hu(\cdot))(t)\|^2_{\mathbb H^\mu}\\ & \le 2\|S(t)\xi\|^2_{\mathbb H^\mu} + 2\int_0^t (t-\tau)^{-\delta}\|f(u(\tau),\mathcal Hu(\tau))\|^2_{\mathbb H^{\mu-1-\delta}}d\tau, \end{align*} by Lemma \ref{lm-sol-op}(d). Using the condition (F), we have \begin{align*} \|\Phi(u)(t)\|^2_{\mathbb H^\mu} & \le 2\|\xi\|^2_{\mathbb H^\mu} + 4\int_0^t (t-\tau)^{-\delta}[L_f(\rho)^2 \|u(\tau)\|^2_{\mathbb H^\mu} +K_f(\rho')^2\|\mathcal Hu(\tau)\|^2_{\mathbb H^\mu}]d\tau\\ & \le 2\|\xi\|^2_{\mathbb H^\mu} + 4\int_0^t (t-\tau)^{-\delta}({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}+\epsilon) \rho^2 d\tau\\ & \le 2\|\xi\|^2_{\mathbb H^\mu} + 4T^{1-\delta}(1-\delta)^{-1}({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}+\epsilon) \rho^2, \end{align*} where, thanks to the limsup assumptions, $\epsilon>0$ and $\rho^*>0$ are chosen such that $$ L_f(\rho)^2+K_f(\rho\|\ell\|_{L^1})^2\|\ell\|^2_{L^1}\le {L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}+\epsilon \;\text{ for all } \rho\le \rho^*, \quad\text{and}\quad 8T^{1-\delta}(1-\delta)^{-1}({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}+\epsilon)\le 1. $$ Then, for $u\in B_{\rho^*}$ and $\|\xi\|_{\mathbb H^\mu}\le \frac 12\rho^*$, we can estimate \begin{align*} \|\Phi(u)(t)\|_{\mathbb H^\mu}\le \rho^*, \text{ for all } t\in [0,T]. \end{align*} Hence $\Phi(B_{\rho^*})\subset B_{\rho^*}$. It remains to verify that $\Phi$ is a contraction on $B_{\rho^*}$. Indeed, for $u_1, u_2\in B_{\rho^*}$, we have \begin{align*} & \|f(u_1(\tau),\mathcal Hu_1(\tau))-f(u_2(\tau),\mathcal Hu_2(\tau))\|^2_{\mathbb H^{\mu-1-\delta}} \\ & \qquad \le 2 L_f(\rho^*)^2 \|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu} + 2 K_f(\rho^*\|\ell\|_{L^1})^2\|\mathcal H u_1(\tau)-\mathcal Hu_2(\tau)\|^2_{\mathbb H^\mu}\\ & \qquad \le 2({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}+\epsilon)\sup_{\tau\in [0,T]}\|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}.
\end{align*} This yields that \begin{align*} & \|\Phi(u_1)(t)-\Phi(u_2)(t)\|^2_{\mathbb H^\mu} \le \|S*[f(u_1(\cdot),\mathcal Hu_1(\cdot))-f(u_2(\cdot),\mathcal Hu_2(\cdot))](t)\|^2_{\mathbb H^\mu}\\ &\qquad \le \int_0^t (t-\tau)^{-\delta}\|f(u_1(\tau),\mathcal Hu_1(\tau))-f(u_2(\tau),\mathcal Hu_2(\tau))\|^2_{\mathbb H^{\mu-1-\delta}}d\tau\\ &\qquad \le 2T^{1-\delta}(1-\delta)^{-1}({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}+\epsilon) \sup_{\tau\in [0,T]} \|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}\\ &\qquad \le \frac 14 \sup_{\tau\in [0,T]} \|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}. \end{align*} Finally, we arrive at $$ \|\Phi(u_1)-\Phi(u_2)\|_\infty\le \frac 12 \|u_1-u_2\|_\infty. $$ The proof is complete. \end{proof} In the case where the function $f(\cdot,\cdot)$ satisfies a global Lipschitz condition, we have the following result. \begin{theorem}\label{th-sol-ad} Assume that the nonlinearity $f:\mathbb H^{\mu}\times \mathbb H^{\mu}\to \mathbb H^{\mu-1-\delta}$ is continuous and satisfies the condition $$ \|f(v_1,w_1)-f(v_2,w_2)\|_{\mathbb H^{\mu-1-\delta}}\le L^*_f\|v_1-v_2\|_{\mathbb H^\mu}+K^*_f\|w_1-w_2\|_{\mathbb H^\mu}, $$ for all $v_1, v_2, w_1, w_2\in\mathbb H^\mu$, where $L^*_f, K^*_f \ge 0$. Then, Problem \eqref{e1}-\eqref{e3} possesses a unique mild solution in the space $C([0,T];\mathbb H^\mu)$. \end{theorem} \begin{proof} In $C([0,T];\mathbb H^\mu)$, we use an equivalent norm $$ \|u\|_{\beta,\infty} = \sup_{t\in [0,T]}e^{-\beta t}\|u(t)\|_{\mathbb H^\mu}, $$ where $\beta>0$ satisfies $$ L^*:=2({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1}) \int_0^T e^{-2\beta \tau} \tau^{-\delta}d\tau <1. $$ Consider the solution operator $\Phi$ as in the proof of Theorem \ref{th-sol}. For $u_1, u_2\in C([0,T];\mathbb H^\mu)$, we have \begin{align*} \|\Phi(u_1)(t)-\Phi(u_2)(t)\|^2_{\mathbb H^\mu} & \le 2 \int_0^t (t-\tau)^{-\delta}{L^*_f}^2 \|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}d\tau\\ & \qquad + 2\int_0^t (t-\tau)^{-\delta}{K^*_f}^2\|\mathcal H u_1(\tau)-\mathcal H u_2(\tau)\|^2_{\mathbb H^\mu} d\tau, \end{align*} according to the assumption of the theorem and Lemma \ref{lm-sol-op}(d). This yields \begin{align*} e^{-2\beta t}\|\Phi(u_1)(t)&-\Phi(u_2)(t)\|^2_{\mathbb H^\mu} \\ & \le 2\int_0^t e^{-2\beta(t-\tau)} (t-\tau)^{-\delta}{L^*_f}^2 [e^{-2\beta\tau}\|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}]d\tau\\ & \qquad + 2\int_0^t e^{-2\beta(t-\tau)} (t-\tau)^{-\delta}{K^*_f}^2 [e^{-2\beta\tau}\|\mathcal H u_1(\tau)-\mathcal H u_2(\tau)\|^2_{\mathbb H^\mu}]d\tau\\ & \le 2({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1})\left(\int_0^t e^{-2\beta \tau} \tau^{-\delta}d\tau\right)\|u_1-u_2\|^2_{\beta,\infty}, \end{align*} where we have used the following estimate \begin{align*} e^{-2\beta\tau}\|\mathcal H u_1(\tau)-\mathcal H u_2(\tau)\|^2_{\mathbb H^\mu} &\le \left(\int_0^\tau e^{-\beta(\tau-z)}\ell(\tau-z)[e^{-\beta z}\|u_1(z)-u_2(z)\|_{\mathbb H^\mu}]dz \right)^2\\ & \le \left(\int_0^\tau e^{-\beta z}|\ell (z)| dz \right)^2 \|u_1-u_2\|^2_{\beta,\infty}\\ & \le \|\ell\|^2_{L^1} \|u_1-u_2\|^2_{\beta,\infty}. \end{align*} Therefore, $$ \|\Phi(u_1)-\Phi(u_2)\|_{\beta,\infty}\le \sqrt{L^*}\|u_1-u_2\|_{\beta,\infty}. $$ In other words, $\Phi$ is a contraction on $C([0,T];\mathbb H^\mu)$. This completes the proof. \end{proof} \begin{remark}\label{rm-sol-ad} (i) In Theorem \ref{th-sol-ad}, we do not require that $\|\xi\|_{\mathbb H^\mu}$ is small. Besides, we also relax the condition on $L^*_f$ and $K^*_f$ in comparison with the assumptions of Theorem \ref{th-sol}.
Moreover, the result obtained in Theorem \ref{th-sol-ad} still holds if we add to the right-hand side of equation \eqref{e1} an external force, that is, the right-hand side of \eqref{e1} has the form $f(u,\mathcal Hu) + g(t,x)$. (ii) In the case $(1*m)^{-1}\in L^1(0,T)$, we obtain a result similar to that of Theorem \ref{th-sol-ad}, assuming that the nonlinearity $f$ can take \lq weaker\rq\ values: $$ \|f(v_1,w_1)-f(v_2,w_2)\|_{\mathbb H^{\mu-2}}\le L^*_f\|v_1-v_2\|_{\mathbb H^\mu}+K^*_f\|w_1-w_2\|_{\mathbb H^\mu}, $$ for all $v_1, v_2, w_1, w_2\in\mathbb H^\mu$, where $L^*_f, K^*_f \ge 0$. In this situation, we use the estimate in Lemma \ref{lm-sol-op}(e). \end{remark} \subsection{H\"older regularity} In this part, we assume an additional condition on the kernel $m$ as follows. \begin{enumerate}\it \item[\rm (M*)] The hypothesis (M) is satisfied with a nonincreasing function $m$. \end{enumerate} Note that, under the assumption (M*), the resolvent $S(\cdot)$ is differentiable in $(0,\infty)$ and $\|S'(t)\|\le t^{-1}$ for $t>0$ by Lemma \ref{lm-sol-op}. For $\gamma\in (0,1)$, denote $$ V^{\mu,\gamma}_{\rho,\rho^*} = B_{\rho^*} \cap \{u\in C([0,T];\mathbb H^\mu): \sup_{\substack{h>0\\ t\in (0,T-h]}} \frac{t^\gamma\|u(t+h)-u(t)\|_{\mathbb H^\mu}}{h^\gamma}\le \rho\}. $$ We will show that the mild solution to problem \eqref{e1}-\eqref{e3} obtained by Theorem \ref{th-sol} is H\"older continuous on $(0,T]$ by proving that the solution mapping $\Phi$ is contractive on $V^{\mu,\gamma}_{\rho,\rho^*}$. \begin{theorem}\label{th-Holder} Assume that (M*) and all assumptions in Theorem \ref{th-sol} are satisfied. Moreover, suppose that, for some $\gamma\in (\frac 12 \delta, \frac 12)$, \begin{align*} &\ell^*_1 = \sup_{\substack{h>0\\ t\in (0,T-h]}}\left(\frac th\right)^\gamma \int_t^{t+h} |\ell(\tau)| d\tau<\infty,\\ &16 B(1-\delta, 1-2\gamma) T^{1-\delta} ( {L_f^*}^2 + {K_f^*}^2 {\ell^*_2}^2)<1, \end{align*} where $B(\cdot,\cdot)$ is the Beta function and $$ \ell^*_2 = \sup_{t\in (0,T]}t^\gamma\int_0^t \frac{|\ell(\tau)|}{(t-\tau)^\gamma}d\tau. $$ Then, the solution to problem \eqref{e1}-\eqref{e3} is H\"older continuous on $(0,T]$. \end{theorem} \begin{proof} It suffices to show that $\Phi(V^{\mu,\gamma}_{\rho,\rho^*})\subset V^{\mu,\gamma}_{\rho,\rho^*}$ for a certain $\rho>0$. Since $S(\cdot)$ is differentiable in $(0,\infty)$, using the mean value theorem, we have \begin{align} \|[S(t+h)-S(t)]\xi\|_{\mathbb H^\mu}&\le h\int_0^1\|S'(t+\zeta h)\xi\|_{\mathbb H^\mu}d\zeta\notag\\ & \le h\|\xi\|_{\mathbb H^\mu}\int_0^1\frac{d\zeta}{t+\zeta h} = \|\xi\|_{\mathbb H^\mu}\ln \left(1+\frac ht\right)\notag\\ & \le \|\xi\|_{\mathbb H^\mu}\gamma^{-1} t^{-\gamma} h^\gamma. \label{th-Holder-1} \end{align} On the other hand, for $u\in V^{\mu,\gamma}_{\rho,\rho^*}$, we have \begin{align*} & \|\mathcal H u(t+h)-\mathcal H u(t)\|_{\mathbb H^\mu} \le \int_t^{t+h} |\ell(\tau)|\|u(t+h-\tau)\|_{\mathbb H^\mu}d\tau \\ & \quad +\int_0^t |\ell(\tau)|\| u(t+h-\tau)-u(t-\tau)\|_{\mathbb H^\mu}d\tau\\ & \le t^{-\gamma}h^\gamma \rho^* \left(\frac th\right)^\gamma \int_t^{t+h} |\ell(\tau)| d\tau \\ & \quad + t^{-\gamma}h^\gamma t^\gamma\int_0^t \frac{|\ell(\tau)|}{(t-\tau)^\gamma} [(t-\tau)^\gamma h^{-\gamma}\| u(t+h-\tau)-u(t-\tau)\|_{\mathbb H^\mu}]d\tau\\ & \le t^{-\gamma}h^\gamma \rho^*\left[ \left(\frac th\right)^\gamma \int_t^{t+h} |\ell(\tau)| d\tau \right] + t^{-\gamma}h^\gamma \rho \left[t^\gamma\int_0^t \frac{|\ell(\tau)|}{(t-\tau)^\gamma}d\tau\right].
\end{align*} Hence, \begin{equation}\label{th-Holder-2} \|\mathcal H u(t+h)-\mathcal H u(t)\|_{\mathbb H^\mu} \le t^{-\gamma}h^\gamma (\rho^* \ell^*_1 + \rho\ell^*_2). \end{equation} Denote \begin{equation}\label{Dh} D_h f(u)(t) = f(u(t+h),\mathcal H u(t+h))-f(u(t),\mathcal Hu(t)). \end{equation} We have \begin{align} & \|D_h f(u)(t)\|_{\mathbb H^{\mu-1-\delta}} \le \|f(u(t+h),\mathcal H u(t+h)) - f(u(t),\mathcal H u(t))\|_{\mathbb H^{\mu-1-\delta}}\notag\\ &\quad \le L_f(\rho^*)\|u(t+h)-u(t)\|_{\mathbb H^\mu}+K_f(\rho^*\|\ell\|_{L^1})\|\mathcal H u(t+h)-\mathcal H u(t)\|_{\mathbb H^\mu}\notag\\ &\quad \le t^{-\gamma} h^\gamma L_f(\rho^*)\rho + t^{-\gamma}h^\gamma K_f(\rho^*\|\ell\|_{L^1}) (\rho^* \ell^*_1 + \rho\ell^*_2),\label{th-Holder-3} \end{align} where we employed the estimate \eqref{th-Holder-2}. Finally, it holds that \begin{align*} \|\Phi(u)(t+h)&-\Phi(u)(t)\|^2_{\mathbb H^\mu}\le 2\|[S(t+h)-S(t)]\xi\|^2_{\mathbb H^\mu} \\ &+ 4\int_0^t \tau^{-\delta}\|D_h f(u)(t-\tau)\|^2_{\mathbb H^{\mu-1-\delta}}d\tau\\ &+ 4\int_t^{t+h}\tau^{-\delta}\|f(u(t+h-\tau),\mathcal H u(t+h-\tau) )\|^2_{\mathbb H^{\mu-1-\delta}}d\tau \\ & = E_1(t) + E_2(t)+E_3(t). \end{align*} By \eqref{th-Holder-1}, we get \begin{equation*} E_1(t) \le 2\gamma^{-2}t^{-2\gamma}h^{2\gamma} \|\xi\|^2_{\mathbb H^\mu}. \end{equation*} Using \eqref{th-Holder-3}, we obtain \begin{align*} E_2(t) & \le 8 h^{2\gamma}[\rho^2 L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2(\rho^* \ell^*_1 + \rho\ell^*_2)^2] \int_0^t \tau^{-\delta}(t-\tau)^{-2\gamma} d\tau\\ & \le 8 h^{2\gamma}t^{-2\gamma}[\rho^2 L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2(\rho^* \ell^*_1 + \rho\ell^*_2)^2] B(1-\delta,1-2\gamma) T^{1-\delta}\\ & \le 16 h^{2\gamma}t^{-2\gamma} B(1-\delta,1-2\gamma) T^{1-\delta} \\ & \quad \times [ (L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2 {\ell^*_2}^2)\rho^2 +K_f(\rho^*\|\ell\|_{L^1})^2 {\rho^*}^2 {\ell^*_1}^2 ] \end{align*} where we used the identity \begin{align*} \int_0^t \tau^{-\delta}(t-\tau)^{-2\gamma} d\tau =B(1-\delta,1-2\gamma) t^{1-\delta-2\gamma}. \end{align*} We can estimate $E_3(t)$ as follows \begin{align*} E_3(t) & = 4\int_0^h (t+h-\tau)^{-\delta}\|f(u(\tau),\mathcal H u(\tau) )\|^2_{\mathbb H^{\mu-1-\delta}}d\tau\\ & \le 8{\rho^*}^2 [L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2\|\ell\|^2_{L^1}]\int_0^h (t+h-\tau)^{-\delta}d\tau. \end{align*} Noting that \begin{align*} \int_0^h (t+h-\tau)^{-\delta}d\tau & = (1-\delta)^{-1}[(t+h)^{1-\delta}-t^{1-\delta}] \\ & \le h t^{-\delta}\le h^{2\gamma} t^{-2\gamma} h^{1-2\gamma}t^{2\gamma-\delta}\le h^{2\gamma} t^{-2\gamma} T^{1-\delta}, \end{align*} we obtain \begin{equation*} E_3(t) \le 8h^{2\gamma} t^{-2\gamma} (\rho^*)^2 T^{1-\delta} [L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2\|\ell\|^2_{L^1}]. \end{equation*} Combining the estimates of $E_1(t), E_2(t)$ and $E_3(t)$ above, we conclude that \begin{align*} \left(\frac th\right)^{2\gamma}& \|\Phi(u)(t+h)-\Phi(u)(t)\|^2_{\mathbb H^\mu} \le 2\gamma^{-2} \|\xi\|^2_{\mathbb H^\mu}\\ & + 16 B(1-\delta,1-2\gamma) T^{1-\delta} [ ({L_f^*}^2 + {K_f^*}^2 {\ell^*_2}^2+\epsilon)\rho^2 +K_f(\rho^*\|\ell\|_{L^1})^2 {\rho^*}^2 {\ell^*_1}^2 ]\\ & + 8 {\rho^*}^2 T^{1-\delta} [L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2\|\ell\|^2_{L^1}], \end{align*} where $\epsilon>0$ is chosen such that $$ 16 T^{1-\delta}[L_f(\rho^*)^2 + K_f(\rho^*\|\ell\|_{L^1})^2 {\ell^*_2}^2]\le 16 T^{1-\delta}({L_f^*}^2 + {K_f^*}^2 {\ell^*_2}^2+\epsilon)< \frac1{B(1-\delta,1-2\gamma)}.
$$ Now, we take a sufficiently large $\rho> 0$ such that $$ \left(\frac th\right)^{2\gamma} \|\Phi(u)(t+h)-\Phi(u)(t)\|^2_{\mathbb H^\mu} \le \rho^2, \text{ for all } h>0, t\in (0,T-h]. $$ This implies that $\Phi(u)\in V^{\mu,\gamma}_{\rho,\rho^*}$. The proof is complete. \end{proof} We now consider the case when the nonlinearity $f(u(t),\mathcal Hu(t))$ is more regular, i.e. it takes values in $\mathbb H^{\mu-1}$. We will show the H\"older continuity of the solution without the additional assumptions on the coefficients required in Theorem \ref{th-Holder}. \begin{theorem}\label{th-solb} Assume that the conditions (M*) and (F) are satisfied with $\theta=1-\mu$, and $$ \limsup\limits_{\rho\to 0}L_f(\rho)=L_f^*,\; \limsup\limits_{\rho'\to 0}K_f(\rho')=K_f^*, $$ such that $$ 4({L^*_f}^2+{K^*_f}^2\|\ell\|^2_{L^1})<\lambda_1. $$ Then, there exists $\eta>0$ such that for $\|\xi\|_{\mathbb H^\mu}\le \eta$, problem \eqref{e1}-\eqref{e3} has a unique mild solution $u$ on $[0,T]$. Moreover, $u(\cdot)$ is H\"older continuous on $(0,T]$. \end{theorem} \begin{proof} Similarly to the proof of Theorem \ref{th-sol}, we first look for $\rho^*>0$ such that $\Phi(B_{\rho^*})\subset B_{\rho^*}$. Let $u\in B_\rho$ with $\rho>0$; by condition (F) with $\theta=1-\mu$, we have $f(u(t),\mathcal Hu(t))\in \mathbb H^{\mu-1}$. Then, using Lemma \ref{lm-sol-op}, we obtain \begin{align*} \|S*f(u(\cdot),&\mathcal Hu(\cdot))\|^2_{\mathbb H^\mu} \le \int_0^t \omega(t-\tau,\lambda_1)\|f(u(\tau),\mathcal Hu(\tau))\|^2_{\mathbb H^{\mu-1}}d\tau\\ & \le 2\int_0^t \omega(t-\tau,\lambda_1) [L_f(\rho)^2 \| u(\tau)\|^2_{\mathbb H^\mu} + K_f(\rho\|\ell\|_{L^1})^2 \| \mathcal H u(\tau)\|^2_{\mathbb H^\mu}]d\tau\\ & \le 2\int_0^t \omega(t-\tau,\lambda_1) [L_f(\rho)^2 + K_f(\rho\|\ell\|_{L^1})^2\|\ell\|^2_{L^1} ] \| u(\tau)\|^2_{\mathbb H^\mu}d\tau\\ & \le 2\int_0^t \omega(t-\tau,\lambda_1) ({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon ) \| u(\tau)\|^2_{\mathbb H^\mu}d\tau, \end{align*} where $\epsilon>0$ and $\rho^*>0$ are chosen such that $$ 4[L_f(\rho)^2 + K_f(\rho\|\ell\|_{L^1})^2\|\ell\|^2_{L^1}]\le 4({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon) \le \lambda_1, $$ for all $\rho\le \rho^*$. Then, \begin{align*} \|S*f(u(\cdot),&\mathcal Hu(\cdot))\|^2_{\mathbb H^\mu} \le 2\lambda_1^{-1}(1-\omega(t,\lambda_1)) ({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon){\rho^*}^2. \end{align*} We can estimate the solution operator $\Phi$ as follows \begin{align*} \|\Phi(u)(t)\|_{\mathbb H^\mu}^2&\le 2\omega(t,\lambda_1)^2\|\xi\|^2_{\mathbb H^\mu} + 2\|S*f(u(\cdot),\mathcal Hu(\cdot))\|^2_{\mathbb H^\mu}\\ & \le 2\omega(t,\lambda_1) \|\xi\|^2_{\mathbb H^\mu}+4 \lambda_1^{-1}(1-\omega(t,\lambda_1)) ({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon){\rho^*}^2\\ & \le 2\omega(t,\lambda_1)[\|\xi\|^2_{\mathbb H^\mu}-2 \lambda_1^{-1}({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon){\rho^*}^2]\\ & \qquad + 4 \lambda_1^{-1}({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon){\rho^*}^2. \end{align*} Take $\eta=\big(2 \lambda_1^{-1}({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon)\big)^{1/2}\rho^*$. Then, for $\|\xi\|_{\mathbb H^\mu}\le \eta$, we have \begin{align*} \|\Phi(u)(t)\|_{\mathbb H^\mu}^2&\le 4 \lambda_1^{-1}({L_f^*}^2 + {K_f^*}^2\|\ell\|_{L^1}^2+\epsilon){\rho^*}^2 \le {\rho^*}^2. \end{align*} Hence $\Phi(u)\in B_{\rho^*}$. Then, we show that $\Phi$ is contractive on $B_{\rho^*}$.
For $u_1, u_2\in B_{\rho^*}$, we have \begin{align*} \|\Phi(u_1)(t)&-\Phi(u_2)(t)\|_{\mathbb H^\mu}^2 \\ & \le \int_0^t \omega(t-\tau,\lambda_1)\|f(u_1(\tau),\mathcal Hu_1(\tau))-f(u_2(\tau),\mathcal Hu_2(\tau))\|^2_{\mathbb H^{\mu-1}}d\tau\\ & \le 2\int_0^t \omega(t-\tau,\lambda_1) ({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon)\|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}d\tau\\ & \le 2\lambda_1^{-1}({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon)\sup_{\tau\in [0,T]}\|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}\\ & \le \frac 12 \sup_{\tau\in [0,T]}\|u_1(\tau)-u_2(\tau)\|^2_{\mathbb H^\mu}. \end{align*} Therefore $$ \|\Phi(u_1)-\Phi(u_2)\|_\infty\le \frac {1}{\sqrt 2} \|u_1-u_2\|_\infty, $$ which implies that $\Phi$ is contractive on $B_{\rho^*}$, and problem \eqref{e1}-\eqref{e3} possesses a unique mild solution $u\in B_{\rho^*}$. It remains to prove that this solution is H\"older continuous on $(0, T]$. Observe that \begin{align*} u(t+h)-u(t) &= [S(t+h)-S(t)]\xi + \int_0^t S(\tau) D_h f(u) (t-\tau)d\tau\\ & + \int_t^{t+h} S(\tau) f(u(t+h-\tau),\mathcal Hu(t+h-\tau)) d\tau. \end{align*} Then \begin{align*} \|u(t+h)-u(t)\|^2_{\mathbb H^\mu}&\le 3 \|[S(t+h)-S(t)]\xi\|^2_{\mathbb H^\mu} \\ & + 3\int_0^t \|D_h f(u)(t-\tau)\|^2_{\mathbb H^{\mu-1}}d\tau\\ & + 3\int_t^{t+h}\|f(u(t+h-\tau),\mathcal Hu(t+h-\tau))\|_{\mathbb H^{\mu-1}}^2d\tau\\ & = F_1(t) + F_2(t) + F_3(t), \end{align*} where we employed Lemma \ref{lm-sol-op}(b) and the fact that $\omega(t,\lambda_1)\le 1$. By an argument analogous to that in the proof of Theorem \ref{th-Holder}, we get \begin{align*} \|[S(t+h)-S(t)]\xi\|_{\mathbb H^\mu}& \le \gamma^{-1} h^{\gamma} t^{-\gamma}\|\xi\|_{\mathbb H^\mu}, \quad \gamma\in (0,\frac 12). \end{align*} So it is clear that $$ F_1(t) \le 3 \gamma^{-2} h^{2\gamma} t^{-2\gamma}\|\xi\|_{\mathbb H^\mu}^2. $$ Moreover, \begin{align*} F_2(t) & \le 6 \int_0^t ({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon)\|u(t+h-\tau)-u(t-\tau)\|^2_{\mathbb H^\mu}d\tau\\ & = 6 \int_0^t ({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon)\|u(\tau+h)-u(\tau)\|^2_{\mathbb H^\mu}d\tau, \end{align*} \begin{align*} F_3(t)&\le 6 \int_t^{t+h} ({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon){\rho^*}^2 d\tau\\ & \le 6h({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon){\rho^*}^2\\ & \le 6T h^{2\gamma}t^{-2\gamma} ({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon){\rho^*}^2, \end{align*} since $0<h, t<T$. Putting the estimates of $F_1(t), F_2(t)$ and $F_3(t)$ together, we get \begin{align*} \|u(t+h)&-u(t)\|^2_{\mathbb H^\mu} \le C_0 h^{2\gamma} t^{-2\gamma} + \int_0^t C_1\tau^{-2\gamma} [\tau^{2\gamma}\|u(\tau+h)-u(\tau)\|^2_{\mathbb H^\mu}]d\tau \end{align*} with \begin{align*} C_0 &= 3\gamma^{-2} \|\xi\|^2_{\mathbb H^\mu}+6T ({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon){\rho^*}^2,\\ C_1 & = 6({L_f^*}^2+{K_f^*}^2\|\ell\|^2_{L^1}+\epsilon). \end{align*} Applying the Gronwall inequality, we obtain \begin{align*} t^{2\gamma}\|u(t+h)&-u(t)\|^2_{\mathbb H^\mu} \le e^{C_1(1-2\gamma)^{-1} t^{1-2\gamma}} C_0 h^{2\gamma}\text{ for } t>0. \end{align*} In other words, $u(\cdot)$ is H\"older continuous on $(0,T]$. \end{proof} \begin{example}\rm Consider the case $d\ge 3$ and $$ f(u(t),\mathcal Hu(t)) = |u(t)|^p + \int_0^t \ell (t-\tau) (\chi\cdot\nabla) u(\tau) d\tau, \quad p>1, $$ with $\chi\in [L^\infty (\Omega)]^d$. The nonlinear function contains an advection term which depends on the history of the states.
Note that, for $\theta\in (0,1)$, $q=\frac{2d}{d+2\theta}$, $\hat p=\frac{2d}{d-2}$, $\hat q=\frac{2d}{2+2\theta}$, we have $$ \frac{1}{\hat p} +\frac{1}{\hat q}=\frac 1q. $$ Applying the generalized H\"older inequality with $u\in L^{(p-1)\hat q}(\Omega)$ and $v\in L^{\hat p}(\Omega)$, we obtain \begin{align*} \||u|^{p-1}v\|_{L^q} &\le \||u|^{p-1}\|_{L^{\hat q}}\|v\|_{L^{\hat p}} \\ & = \|u\|^{p-1}_{L^{(p-1)\hat q}}\|v\|_{L^{\hat p}}. \end{align*} Assume that $(p-1)\hat q\le \hat p$, which means $$ p\le \frac{d+2\theta}{d-2}. $$ By Lemma \ref{lm:em1}, we have $$ \mathbb H^1\subset H_0^1(\Omega)\subset L^{\hat p}(\Omega)\subset L^{(p-1)\hat q}(\Omega). $$ Therefore \begin{align*} \||u|^{p-1}v\|_{L^q} \le C \|u\|^{p-1}_{\mathbb H^1}\|v\|_{\mathbb H^1}, \end{align*} where $C$ is a positive constant which does not depend on $u$ and $v$. Moreover, we have $$L^q(\Omega) \subset H^{-\theta}\subset \mathbb H^{-\theta}.$$ Hence, \begin{align} \||u|^{p-1}v\|_{\mathbb H^{-\theta}} \le C \|u\|^{p-1}_{\mathbb H^1}\|v\|_{\mathbb H^1}.\label{ex-1} \end{align} When $u,v\in C([0,T];\mathbb H^1)$, applying the inequality \eqref{ex-1}, we have \begin{align}\label{ex-2} \||u(t)|^p - |v(t)|^p\|_{\mathbb H^{-\theta}} \le C (\|u(t)\|^{p-1}_{\mathbb H^1}+\|v(t)\|^{p-1}_{\mathbb H^1})\|u(t)-v(t)\|_{\mathbb H^1}. \end{align} For $\mathcal Hu (t) = \ell*u$, we have $$ \|(\chi\cdot\nabla)[\mathcal H u(t) - \mathcal H v(t)]\| \le \|\chi\|_\infty \cdot\|\mathcal H u(t) - \mathcal H v(t)\|_{\mathbb H^1}. $$ Because $L^2(\Omega)\subset \mathbb H^{-\theta}$, we get \begin{align}\label{ex-3} \|(\chi\cdot\nabla)[\mathcal H u(t) - \mathcal H v(t)]\|_{\mathbb H^{-\theta}} \le C_\theta\|\chi\|_\infty \cdot\|\mathcal H u(t) - \mathcal H v(t)\|_{\mathbb H^1}, \end{align} where $C_\theta$ is a positive constant. Combining \eqref{ex-2} and \eqref{ex-3} leads to \begin{align*} \|f(u(t),\mathcal Hu(t))-f(v(t),\mathcal Hv(t))\|_{\mathbb H^{-\theta}} & \le C (\|u(t)\|^{p-1}_{\mathbb H^1}+\|v(t)\|^{p-1}_{\mathbb H^1})\|u(t)-v(t)\|_{\mathbb H^1} \\ & \quad + C_\theta\|\chi\|_\infty \cdot\|\mathcal H u(t) - \mathcal H v(t)\|_{\mathbb H^1}. \end{align*} Thus, the function $f$ satisfies condition (F) with $\mu=1$, $\theta=\delta$ and \begin{align*} L_f(\rho) = 2C\rho^{p-1}, \; K_f(\rho) = C_\theta\|\chi\|_\infty . \end{align*} Then, we obtain the conclusions of Theorems \ref{th-sol} and \ref{th-Holder} when $\|\chi\|_\infty$ is sufficiently small. \end{example} \section{Application} Consider the following parameter identification problem \begin{align} \partial_t u - (1+D_t^{\{m\}})\Delta u & = g(x)p(t) + f_1(u) \text{ in } \Omega, t\in (0,T),\label{pe1}\\ u & = 0 \text{ on } \partial\Omega,\; t\ge 0,\label{pe2}\\ u(0) & = \xi \text{ in } \Omega, \label{pe3}\\ \int_\Omega \kappa(x)u(t,x)dx & = \psi(t),\; t\in [0,T],\label{pe4} \end{align} where $p(t)$, $t\in [0,T]$, is an unknown parameter, $g\in L^2(\Omega)$ is given. In this model, \eqref{pe4} is the complementary measurement with $\kappa\in H_0^1(\Omega)$ such that $(g,\kappa)\ne 0$, $\psi\in W^{1,1}(0,T)$. In this problem, we require further that the kernel $m$ fulfils: \begin{enumerate}\it \item[\rm (M$^\star$)] Assumption (M) holds and $m'\in L^1(0,T)$. \end{enumerate} Then, problem \eqref{pe1} is rewritten as follows \begin{align*} \partial_t u - (1+m_0) \Delta u - m_1*\Delta u & = g(x)p(t) + f_1(u), \;m_0=m(0), m_1=m'. \end{align*} Combining with \eqref{pe4}, we obtain $$ \psi' + (1+m_0)(\nabla u, \nabla\kappa) + m_1*(\nabla u, \nabla\kappa) = (g,\kappa)p(t) + (f_1(u),\kappa).
$$ Therefore, $$ p(t) = (g,\kappa)^{-1}[\psi' + (1+m_0)(\nabla u, \nabla\kappa) + m_1*(\nabla u, \nabla\kappa)-(f_1(u),\kappa)]. $$ Set $$ f_2(u,\mathcal H u):=(1+m_0)(\nabla u, \nabla\kappa) + (\nabla (m_1*u), \nabla\kappa)-(f_1(u),\kappa), $$ where $\mathcal H$ is the convolution operator with the kernel $m_1$. Assume that $f_1:\mathbb H^1\to L^2(\Omega)$ verifies the condition $$ \|f_1(u)-f_1(v)\| \le L_1 \|u-v\|_{\mathbb H^1}, \text{ for all } u,v\in\mathbb H^1. $$ Then, for all $u,v\in C([0,T];\mathbb H^1)$, we have \begin{align*} |f_2(u,\mathcal H u)-f_2(v,\mathcal H v)| & \le (1+m_0)\|\kappa\|_{\mathbb H^1}\|u-v\|_{\mathbb H^1}\\ & \qquad + \|m_1*(u-v)\|_{\mathbb H^1}\|\kappa\|_{\mathbb H^1} + L_1\|\kappa\|\|u-v\|_{\mathbb H^1}. \end{align*} Hence, $f(u,\mathcal Hu):= g (g,\kappa)^{-1} f_2(u,\mathcal Hu) + f_1(u)$ verifies the conditions of Theorem \ref{th-sol-ad} with \begin{align*} L_f^* &=|(g,\kappa)^{-1}| \|g\| [(1+m_0)\|\kappa\|_{\mathbb H^1} + L_1\|\kappa\|] + L_1,\\ K_f^* &= |(g,\kappa)^{-1}| \|g\|\|\kappa\|_{\mathbb H^1}. \end{align*} Therefore, problem \eqref{pe1}-\eqref{pe4} is solvable in the sense that there exists a unique mild solution $(u,p)\in C([0,T];\mathbb H^1)\times C([0,T])$.
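The contraction arguments above are constructive: the mild solution can be approximated by Picard iteration of the solution operator $\Phi$. The following Python sketch mimics this on a scalar toy surrogate in which the resolvent $S(t)$ is replaced by $e^{-\lambda t}$ and $f(v,w)=av+bw$; the values of $\lambda$, $a$, $b$ and the kernel $\ell$ are illustrative choices only and are not taken from the problems studied above.
\begin{verbatim}
import numpy as np

# Toy scalar surrogate of the mild-solution fixed point
#   u(t) = S(t) xi + int_0^t S(t - s) f(u(s), (ell * u)(s)) ds,
# with S(t) replaced by exp(-lam*t) and f(v, w) = a*v + b*w.
T, n = 1.0, 200
t = np.linspace(0.0, T, n + 1)
dt = t[1] - t[0]
lam, a, b, xi = 1.0, 0.2, 0.1, 1.0
ell = np.exp(-t)                        # memory kernel ell(t) (illustrative)

def convolve(ell, u):
    # Left-endpoint quadrature of (ell * u)(t_i) = int_0^{t_i} ell(t_i - s) u(s) ds.
    out = np.zeros_like(u)
    for i in range(1, len(u)):
        out[i] = np.dot(ell[i:0:-1], u[:i]) * dt
    return out

def picard_step(u):
    # One application of the solution operator Phi from the contraction argument.
    g = a * u + b * convolve(ell, u)    # f(u, Hu)
    new = np.exp(-lam * t) * xi         # S(t) xi
    for i in range(1, n + 1):
        new[i] += np.dot(np.exp(-lam * (t[i] - t[:i])), g[:i]) * dt
    return new

u = np.full(n + 1, xi)                  # initial guess
for k in range(20):
    u_next = picard_step(u)
    print(k, np.max(np.abs(u_next - u)))  # successive differences contract
    u = u_next
\end{verbatim}
The geometric decay of the successive differences printed by this sketch mirrors the contraction estimates behind Theorems \ref{th-sol} and \ref{th-sol-ad}.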
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section*{Keywords} \hspace{4.5mm} \emph{Keywords: Image Decomposition, Variational Calculus, Image Denoising,} \\ \hspace*{25.25mm} \emph{Feature Extraction, Image Segmentation} \section{Introduction} \label{sec:Introduction} Image segmentation remains a central problem in computer vision and image processing, and an abundance of approaches has been developed to solve a wide range of problems; see, for example, \cite{DigitalImageProcessing2002,Szeliski2011}. The goal of image segmentation is to decompose the image domain into a montage of meaningful components. This has led to breakthroughs in a number of research areas such as medical imaging \cite{IntroductiontoBiomedicalImaging2003}, astronomical imaging \cite{AstronomicalImageandDataAnalysis2006}, and biometric recognition \cite{IntroductiontoBiometrics2011,ThaiHuckemannGottschlich2016,ThaiGottschlich2016G3PD}. Segmentation methods can be broadly characterized by the class of target images for which they are intended, either homogeneous (piecewise smooth) or textural. Many such methods have been suggested, including approaches based on the intensity of pixels \cite{Otsu1979, SahooWilkinsYeager1997, AlbuquerqueEsquefMelloAlbuquerque2004} and others based on curve evolution \cite{ChanVese2001,BressonEsedogluVandergheynstThiranOsher2007,ChanEsedogluNikolova2012,LieLysakerTai2006}. For homogeneous images in particular, the classical approach to segmentation is based on active contours \cite{KassWitkinTerzopoulos1988}. Given an image $f$ on a bounded domain $\Omega \subset \mathbb{R}^2$, contours are driven to object boundaries by internal and external forces in the functional \begin{equation} \label{eq:KassWitkinTerzopoulos} \inf_{C(s)} \left\{ \alpha \int_{0}^1 \abs{C'(s)}^2 ds + \beta \int_{0}^1 \abs{C''(s)}^2 ds - \lambda \int_{0}^1 \abs{\nabla f(C(s))}^2 ds \right\} \end{equation} with a curve $C(s) : [0, 1] \rightarrow \mathbb R^2$ and positive parameters $\alpha \,, \beta$ and $\lambda$. Under the classical model $f({\boldsymbol x}) = u({\boldsymbol x}) + \epsilon({\boldsymbol x})$ with ${\boldsymbol x} \in \Omega$, Mumford and Shah \cite{MumfordShah1989} proposed a solution by minimizing the energy functional \begin{equation} \label{eq:PiecewiseSmoothMS} \inf_{u, C} \left\{ \int_\Omega \big( f({\boldsymbol x}) - u({\boldsymbol x}) \big)^2 d{\boldsymbol x} ~+~ \nu \int_{\Omega \backslash C} \abs{\nabla u({\boldsymbol x})}^2 d{\boldsymbol x} + \mu \abs{C} \right\}. \end{equation} However, this piecewise smooth Mumford-Shah model is NP-hard due to the Hausdorff 1-dimensional measure ${\mathcal H}^1(\mathbb R^2)$. A simplified version for image segmentation when $f$ is assumed to be piecewise constant can be written as \begin{equation} \label{eq:PiecewiseConstantMS:indicator} \inf_{[c_n]_{n=1}^N, [\Omega_n]_{n=1}^N} \left\{ \sum_{n=1}^N \int_{\Omega} \big( f({\boldsymbol x}) - c_n \big)^2 \mathbf 1_{\Omega_n}({\boldsymbol x}) d{\boldsymbol x} ~+~ \frac{\mu}{2} \sum_{n=1}^N \int_\Omega \abs{\nabla \mathbf 1_{\Omega_n}({\boldsymbol x})} d {\boldsymbol x} \right\} \,, \end{equation} which closely resembles the Potts model \cite{Potts1952} developed decades earlier. Rudin et al.\ \cite{RudinOsherFatemi1992} proposed an alternative, more computationally efficient version of the model in (\ref{eq:PiecewiseSmoothMS}) that preserves sharp edges in the restored image.
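For later reference, we recall the ROF functional in its standard form, in which a total variation term is balanced against an $L_2$ fidelity term with weight $\lambda > 0$:
\begin{equation*}
\inf_{u} \left\{ \int_\Omega \abs{\nabla u({\boldsymbol x})} d{\boldsymbol x} ~+~ \frac{\lambda}{2} \int_\Omega \big( f({\boldsymbol x}) - u({\boldsymbol x}) \big)^2 d{\boldsymbol x} \right\}.
\end{equation*}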
These advantages led to numerous extensions including formulations in different functional spaces \cite{AujolGilboaChanOsher2006, AujolAubertFeraudChambolle2005, AujolGilboa2006, BuadesLeMorelVese2010, VeseOsher2003, AubertVese1997}, versions involving higher-order derivatives \cite{ChanMarquinaMulet2000, LysakerLundervoldTai, RahmanTaiOsher2007, HahnWuTai2012}, mean curvature \cite{ZhuChan2012}, Euler's elastica \cite{TaiHahnChung2011, ZhuTaiChan2013}, total variation of the first and second order derivatives \cite{PapafitsorosSchoenlieb2014}, and higher-order PDEs for diffusion solved by directional operator splitting schemes \cite{CalatroniDueringSchoenlieb2014}. Various techniques have been proposed for solving the resulting convex optimization problems, including Chambolle's projection \cite{Chambolle2004}, the split Bregman method \cite{GoldsteinOsher2009}, and iterative shrinkage/thresholding (IST) algorithms \cite{DaubechiesDefriseMol2004, BeckTeboulle2009, DiasFigueiredo2007}. In 2010, Wu et al.\ \cite{WuTai2010} proved the equivalence between the augmented Lagrangian method (ALM), dual methods, and the split Bregman method. Letting $p_n({\boldsymbol x})$ denote the indicator function $\mathbf 1_{\Omega_n}({\boldsymbol x})$ with ${\boldsymbol x} \in \Omega$, (\ref{eq:PiecewiseConstantMS:indicator}) can be rewritten as the non-convex constrained minimization \begin{equation} \label{eq:PiecewiseConstantMS:indicator:nonconvex} \min_{\vec{c}, \vec{p}} \left\{ \frac{\mu}{2} \sum_{n=1}^N \norm{\nabla p_n}_{L_1} + \sum_{n=1}^N \Big\langle (f - c_n)^2 \,, p_n \Big\rangle_{L_2} ~~\text{ s.t. } \sum_{n=1}^N p_n({\boldsymbol x}) = 1 \,, p_n({\boldsymbol x}) \in \{0, 1\} \right\} \end{equation} where $\vec{c} = [c_n]_{n=1}^N \,, \vec{p} = [p_n]_{n=1}^N$. Note that when $N=2$, this becomes the celebrated Chan and Vese model \cite{ChanVese2001}. Brown et al.\ \cite{BrownChanBresson2010, BrownChanBresson2012} provide a convex relaxation of (\ref{eq:PiecewiseConstantMS:indicator:nonconvex}) by relaxing the binary set to $p_n({\boldsymbol x}) \in [0, 1]$, and Bae et al.\ \cite{BaeYuanTai2010} solve this relaxed version via a smoothed primal-dual method; see \cite{GuWangTai2012, BaeLellmannTai2005, GuWangXiongChengHuangZhou2013, GuXiongWangChengHuangZhou2014} for details. The advantage of multiphase segmentation is illustrated in Figure \ref{fig:MultiphaseOver2phase} using the smoothed primal-dual method in \cite{BaeYuanTai2010} to recover an object under a spectrum of illumination. \begin{figure} \begin{center} \includegraphics[width=1\textwidth]{Fig1.png} \caption{The original image (a) is segmented by \cite{BaeYuanTai2010} with two-phase (b), three-phase (c) and four-phase segmentation (with the same notation for the parameters, $n = 4 \,, s = 0.001 \,, \delta = 0.1$). Using multiple phases allows us to recover portions of the image with different illumination (gradient change). } \label{fig:MultiphaseOver2phase} \end{center} \end{figure} Texture segmentation and analysis remain challenging problems due to the oscillatory nature of texture. Proposed methods include those based on texture descriptors \cite{SagivSochenZeevi2006, HouhouThiranBresson2009}, histogram metrics \cite{NiBressonChanEsedoglu2009}, and the extraction of other meaningful features from an observed image for classification \cite{Unser1995}.
Among the most popular approaches is the vector-valued Chan-Vese model for texture segmentation with a Gabor filter \cite{ChanSandbergVese2000}, whose convex relaxed version is defined in \cite{BrownChanBresson2010,BrownChanBresson2009}. This can be seen as a generalized version of the two-phase piecewise constant Mumford and Shah model for a vector-valued image $\vec{f} = \big[ f_m \big]_{m=1}^M$ and constant vectors $\vec{c}_1 = \big[ c_{1m} \big]_{m=1}^M \,, \vec{c}_2 = \big[ c_{2m} \big]_{m=1}^M$. The resulting minimization becomes convex by relaxing the binary constraint to $p({\boldsymbol x}) \in [0 \,, 1]$ \cite{BrownChanBresson2010}. Though these techniques have seen much success in their respective domains, many natural images, such as fingerprints and stem cell images, contain both homogeneous and textural regions, and it is important to define a technique that captures both kinds of information. The Mumford-Shah model fails on this larger class of images because it has no measure for texture; see Figure \ref{fig:Barbara:MSsegmentation} for an example in which textural regions appear in each phase. \begin{figure} \begin{center} \includegraphics[height=0.179\textheight]{Fig2.png} \caption{The original image (a) containing both texture and homogeneity is segmented into three phases by the Mumford and Shah (MS) model with $\sigma = 0, N = 3, s = 0.005, \xi = 0.001, \lambda = 10^{-3}, \tau = 0.1, \text{Iteration} = 20$. Note that both homogeneous and textural content appear in all three phases (b)-(d). } \label{fig:Barbara:MSsegmentation} \end{center} \end{figure} In this work, we provide a method for multiphase segmentation of images that simultaneously contain regions of both homogeneity and texture. An attempt at this kind of segmentation was made in \cite{LiuTaiHuangHuan2011}; importantly, our work can instead be viewed as a decomposition of the original image that approximates the image in a functional space rather than through harmonic analysis. This approach to the inverse problem allows us to obtain a piecewise constant component as well as sparse directional information; Figure \ref{fig:fingerprint:approximation} shows a preview of results obtained using the bilevel SHT method outlined in Section \ref{sec:BiLevelMinimizationScheme}. Following \cite{ThaiGottschlich2016G3PD, VeseOsher2003, AujolChambolle2005, ThaiGottschlich2016DG3PD, Gilles2012}, we adopt the idea of the discrete directional $\text{G}_S$-norm to measure texture in several directions and the dual of a generalized Besov space in the curvelet domain ${\mathcal C}$ \cite{ThaiGottschlich2016DG3PD, CandesDonoho2004, CandesDemanetDonohoYing2006, StarckDonohoCandes2003, MaPlonka2010} to measure the residual. This approach is particularly useful for many natural images, such as fingerprints, in which texture appears in many directions, and can easily be adapted to the shearlet, contourlet, steerable wavelet or 2D empirical transforms \cite{ShearletsBook,DoVetterli2005,UnserVandeville2010, GillesTranOsher2014}. Because of the curvelet transform, the residual can be either independent or correlated and need not follow a Gaussian distribution. Since our minimization involves the $\text{G}_S$-norm, we propose two alternative methods based on the two primary approaches to handling the $\text{G}_S$-norm: a multiphase SHT method based on the approach of Aujol and Chambolle \cite{AujolChambolle2005} and a bilevel SHT method based on the approach of Vese and Osher \cite{VeseOsher2003}.
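For orientation, we also recall Meyer's $\text{G}$-norm in the continuous two-dimensional setting \cite{Meyer2001}; the discrete directional $\text{G}_S$-norm defined in Section \ref{sec:prelim} can be viewed as a directional, discretized analogue of
\begin{equation*}
\norm{v}_{\text{G}} = \inf \left\{ \norm{ \sqrt{g_1^2 + g_2^2} }_{L_\infty} ~:~ v = \partial_x g_1 + \partial_y g_2 \,,~ g_1, g_2 \in L_\infty(\Omega) \right\}.
\end{equation*}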
\begin{figure} \begin{center} \includegraphics[height=0.345\textheight]{Fig3.pdf} \caption{Image representation of a fingerprint decomposed according to the bilevel SHT method described in Section \ref{sec:BiLevelMinimizationScheme}. The directional texture is shown in the bottom row.} \label{fig:fingerprint:approximation} \end{center} \end{figure} The remainder of this paper is organized as follows. In Section \ref{sec:prelim}, we define some preliminary notation and investigate a simple model where only two-phase segmentation is considered and the homogeneous portion is assumed to be piecewise constant. In Section \ref{sec:multiSHT} we generalize this set-up both to multiphase segmentation and also to the case where homogeneous regions are considered piecewise-smooth. In Section \ref{sec:BiLevelMinimizationScheme} we introduce a bilevel minimization scheme to more efficiently solve the minimization induced by the multiphase piecewise smooth context. Finally, we apply the methodology to a number of representative images and compare to related approaches in Section \ref{sec:comparisons}. For readability, mathematical details and proofs are provided in the Appendix. \section{Preliminaries and Simplified Models} \label{sec:prelim} We begin by establishing some background notation and definitions. Let $X$ be the Euclidean space with dimension given by the size of the lattice $\Omega = \big\{ {\boldsymbol k} = [k_1 \,, k_2] \in [ 0 \,, d_1-1 ] \times [ 0 \,, d_2-1 ] \subset \mathbb Z^2 \big\}$. On the bounded domain $\Omega$, we denote the coordinates of the Fourier transform as ${\boldsymbol \omega} = [\omega_1 \,, \omega_2] \in [-\pi \,, \pi]^2$ and the coordinates of the $Z$ transform (the discrete version of the Fourier transform) as ${\boldsymbol z} = \big[z_1 \,, z_2\big] = \big[e^{j\omega_1} \,, e^{j\omega_2}\big]$. Denote the discrete Fourier transform pair as \begin{align*} f[{\boldsymbol k}] ~\stackrel{{\mathcal F}}{\longleftrightarrow}~ {\mathcal F} \big\{ f[{\boldsymbol k}] \big\}(e^{j{\boldsymbol \omega}}) = F(e^{j{\boldsymbol \omega}}) = \sum_{{\boldsymbol k} \in \Omega} f[{\boldsymbol k}] e^{-j \langle {\boldsymbol k} \,, {\boldsymbol \omega} \rangle_{\ell_2}}\,. \end{align*} Given the discrete function ${\mathbf f} = \big[ f[{\boldsymbol k}] \big]_{{\boldsymbol k} \in \Omega} \in X$, a vector $\vec{{\mathbf g}} = [{\mathbf g}_l]_{l=0}^{L-1} \in X^L$ and the direction $l = 0, \ldots, L-1$, we make a few preliminary definitions. 
The {\bfseries directional forward/backward difference operators} are given, in matrix notation, by \begin{align*} &\partial_l^+ {\mathbf f} = \sin\left(\frac{\pi l}{L}\right) \BDm {\mathbf f} + \cos\left(\frac{\pi l}{L}\right) {\mathbf f} {\mathbf{D}_{\mathbf 2}^{\text T}} ~~~~~~~ \stackrel{{\mathcal F}}{\longleftrightarrow}~ \Big[ \sin\left(\frac{\pi l}{L}\right) (z_1 - 1) + \cos\left(\frac{\pi l}{L}\right) (z_2 - 1) \Big] F({\boldsymbol z}) \\ &\partial_l^- {\mathbf f} = - \Big[ \sin\left(\frac{\pi l}{L}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf f} + \cos\left(\frac{\pi l}{L}\right) {\mathbf f} {\mathbf{D_2}} \Big] ~~ \stackrel{{\mathcal F}}{\longleftrightarrow}~ -\Big[ \sin\left(\frac{\pi l}{L}\right) (z_1^{-1}-1) + \cos\left(\frac{\pi l}{L}\right) (z_2^{-1} - 1) \Big] F({\boldsymbol z}) \end{align*} with $\partial_l^\pm {\mathbf f} = \Big[ \partial_l^\pm f[{\boldsymbol k}] \Big]_{{\boldsymbol k} \in \Omega}$ and a matrix \begin{equation*} \BDm = \begin{pmatrix} -1 & 1 &0 &\hdots &0 \\ 0 &-1 &1 &\hdots &0 \\ \vdots &\vdots &\vdots &\ddots &\vdots \\ 0 &0 &0 &\hdots &1 \\ 1 &0 &0 &\hdots &-1 \\ \end{pmatrix} \in \mathbb R^{d_1 \times d_1} \,, \end{equation*} and similarly ${\mathbf{D_2}} \in \mathbb R^{d_2 \times d_2}$. Given the adjoint operators of the difference operators as $\big( \partial^\pm_x \big)^* = - \partial_x^\mp \,,~ \big( \partial^\pm_y \big)^* = - \partial_y^\mp$, the adjoint operators of their directional version are \begin{equation*} \big( \partial^\pm_l \big)^* = - \Big[ \cos\left(\frac{\pi l}{L}\right) \partial_x^\mp + \sin\left(\frac{\pi l}{L}\right) \partial_y^\mp \Big] = - \partial_l^\mp \,, \end{equation*} and we can define the {\bfseries discrete directional gradient} and {\bf divergence} as $$ \nabla_L^\pm {\mathbf f} = \Big[ \partial_l^\pm {\mathbf f} \Big]_{l=0}^{L-1} \hspace{5mm} \text{ and } \hspace{5mm} \text{div}_L^\pm \vec{{\mathbf g}} = \sum_{l=0}^{L-1} \partial_l^\pm {\mathbf g}_l $$ respectively. Note that the adjoint operator of $\nabla_L^\pm$ is $\big( \nabla_L^\pm \big)^* = - \text{div}_L^\mp$; that is \begin{equation*} \Big\langle \nabla_L^\pm {\mathbf f} \,, \vec{{\mathbf g}} \Big\rangle_{\ell_2} = - \Big\langle {\mathbf f} \,, \text{div}_L^\mp \vec{{\mathbf g}} \Big\rangle_{\ell_2} \,. \end{equation*} Given a vector of matrices $\vec{{\mathbf p}} = \big[ {\mathbf p}_l \big]_{l=0}^{L-1} \in X^L$, the derivative of the divergence operator $\text{div}^-_L \vec{{\mathbf p}}$ w.r.t. ${\mathbf p}_l$ for $l = 0, \ldots, L-1$ is given by \begin{align*} \frac{ \partial }{ \partial {\mathbf p}_l } \Big\{ \text{div}^-_L \vec{{\mathbf p}} \Big\} = - \frac{ \partial }{ \partial {\mathbf p}_l } \Big[ \sin\left(\frac{\pi l}{L}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf p}_l + \cos\left(\frac{\pi l}{L}\right) {\mathbf p}_l {\mathbf{D_2}} \Big] = \Big[ -\partial_l^+ \delta[{\boldsymbol k}] \Big]_{{\boldsymbol k} \in \Omega} \,, \end{align*} where $\delta(\cdot)$ denotes the Dirac delta function. Thus, the derivative of the directional divergence w.r.t. $\vec{{\mathbf p}}$ is \[ \frac{ \partial }{ \partial \vec{{\mathbf p}} } \Big\{ \text{div}^-_L \vec{{\mathbf p}} \Big\} = \Big[ - \big[ \partial_l^+ \delta[{\boldsymbol k}] \big]_{l=0}^{L-1} \Big]_{{\boldsymbol k} \in \Omega} = \Big[ - \nabla^+_L \delta[{\boldsymbol k}] \Big]_{{\boldsymbol k} \in \Omega} \,. 
\] \noindent Finally, the {\bfseries discrete directional $\text{G}_S$-norm} \cite{ThaiGottschlich2016DG3PD} is given by \begin{align*} \norm{{\mathbf v}}_{\text{G}_S} &= \inf \Big\{ \norm{ \abs{\vec{{\mathbf g}}} }_{\ell_\infty} ~:~ {\mathbf v} = \text{div}^-_S \vec{{\mathbf g}} \,,~ \vec{{\mathbf g}} = \big[ {\mathbf g}_s \big]_{s=0}^{S-1} \in X^S \Big\} \, \end{align*} \noindent and the indicator function on a convex set for noise in the curvelet domain ${\mathcal C}$ \cite{ThaiGottschlich2016DG3PD, CandesDonoho2004, CandesDemanetDonohoYing2006} by \begin{align*} A(\nu) = \Big\{ {\boldsymbol \epsilon} \in X ~:~ \norm{{\mathcal C} \{ {\boldsymbol \epsilon} \}}_{\ell_\infty} \leq \nu \Big\} ~~\text{and}~~ \mathscr G^*(\frac{{\boldsymbol \epsilon}}{\nu}) = \begin{cases} 0 \,, & {\boldsymbol \epsilon} \in A(\nu) \\ +\infty \,, & \text{else} \end{cases} \,. \end{align*} \noindent For a more thorough background on the mathematical preliminaries (including point-wise operators such as $\pm \,, \backslash \,, \cdot^\times \,, \max \,, \text{Shrink} \,, \mathop{\rm CST}$, etc.), we refer the reader to \cite{ThaiHuckemannGottschlich2016, ThaiGottschlich2016G3PD, ThaiGottschlich2016DG3PD, Thai2015PhD}. We point out that in the remainder of this work, we use boldface to denote a matrix, e.g.\ ${\mathbf u} \in X$, an arrow over a boldface symbol to denote a vector of matrices, e.g.\ $\vec{{\mathbf p}} = \left[ {\mathbf p}_n \right]_{n=1}^N \in X^N$, and an arrow over a plain symbol to denote a constant vector, e.g.\ $\vec{c} = \left[ c_n \right]_{n=1}^N \in \mathbb R^N$. \subsection{Two-phase piecewise constant and texture segmentation} \label{sec:2phasePiecewiseConstTextureSeg} We begin by considering the simple discrete model consisting of a two-phase piecewise constant image (described by the indicator function ${\mathbf p}$ and mean values $(c_1 \,, c_2) \in \mathbb R_+^2$) and texture ${\mathbf v}$, corrupted by i.i.d. (or weakly correlated) noise ${\boldsymbol \epsilon}$, as \begin{align*} {\mathbf f} = c_1 {\mathbf p} + c_2 (1-{\mathbf p}) + {\mathbf v} + {\boldsymbol \epsilon} \,. \end{align*} As in \cite{ThaiGottschlich2016DG3PD}, we propose the model for this segmentation as \begin{align} \label{eq:twophaseDG3PDtexture:minimization:1} \min_{({\mathbf p}, {\boldsymbol \epsilon}, c_1, c_2) \in X^{2} \times \mathbb R^2} & \bigg\{ \norm{\nabla^+_L {\mathbf p}}_{\ell_1} + \mu_1 \norm{{\mathbf v}}_{\text{G}_S} + \mu_2 \norm{{\mathbf v}}_{\ell_1} + \mathscr G^*\left(\frac{{\boldsymbol \epsilon}}{\nu}\right) \notag \\ &\text{s.t.}~~ {\mathbf f} = c_1 {\mathbf p} + c_2 (1-{\mathbf p}) + {\mathbf v} + {\boldsymbol \epsilon} \,, p[{\boldsymbol k}] \in \{0, 1\}, \forall {\boldsymbol k} \in \Omega \bigg\} \,. \end{align} In order to solve the non-convex minimization (\ref{eq:twophaseDG3PDtexture:minimization:1}), we relax the binary set $p[{\boldsymbol k}] \in \{ 0 \,, 1\}$ to $[0 \,, 1]$ and then apply ALM and the alternating direction method of multipliers (ADMM); see Proposition \ref{prop:2phasePiecewiseConst} and Algorithm 5 in the Appendix for detailed calculations. Figure \ref{fig:2phasePiecewiseConst} illustrates the results of this model. The histogram of the indicator function ${\mathbf p}$ in Figure \ref{fig:2phasePiecewiseConst} (d) shows that ${\mathbf p}$ has nearly converged to values in $\{ 0 \,, 1 \}$ after the $100^{th}$ iteration. One can use the technique in \cite{BaeYuanTai2010} to make ${\mathbf p}$ exactly binary in the solution of the minimization (\ref{eq:twophaseDG3PDtexture:minimization:1}).
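As a concrete illustration of the discrete operators above, the following Python sketch implements periodic directional forward/backward differences, the directional gradient and divergence, and the soft-thresholding operator $\mathop{\rm Shrink}$, and numerically checks the adjoint relation $\langle \nabla^+_L {\mathbf f} \,, \vec{{\mathbf g}} \rangle_{\ell_2} = - \langle {\mathbf f} \,, \text{div}^-_L \vec{{\mathbf g}} \rangle_{\ell_2}$. The array sizes and test data are arbitrary and serve only as an example.
\begin{verbatim}
import numpy as np

def d_plus(f, l, L):
    # Directional forward difference: sin(pi l/L) D1 f + cos(pi l/L) f D2^T,
    # i.e. periodic forward differences along rows and columns.
    s, c = np.sin(np.pi * l / L), np.cos(np.pi * l / L)
    return s * (np.roll(f, -1, axis=0) - f) + c * (np.roll(f, -1, axis=1) - f)

def d_minus(f, l, L):
    # Directional backward difference: -[sin(pi l/L) D1^T f + cos(pi l/L) f D2].
    s, c = np.sin(np.pi * l / L), np.cos(np.pi * l / L)
    return -(s * (np.roll(f, 1, axis=0) - f) + c * (np.roll(f, 1, axis=1) - f))

def grad_plus(f, L):
    # Discrete directional gradient: stack of d_plus over l = 0, ..., L-1.
    return np.stack([d_plus(f, l, L) for l in range(L)])

def div_minus(g, L):
    # Discrete directional divergence: sum of d_minus over the directions.
    return sum(d_minus(g[l], l, L) for l in range(L))

def shrink(x, theta):
    # Soft-thresholding (Shrink), applied componentwise.
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

rng = np.random.default_rng(0)
L = 4
f = rng.standard_normal((32, 48))
g = rng.standard_normal((L, 32, 48))
lhs = np.vdot(grad_plus(f, L), g)       # <grad^+_L f, g>
rhs = -np.vdot(f, div_minus(g, L))      # -<f, div^-_L g>
print(abs(lhs - rhs))                   # ~1e-12: the adjoint identity holds
\end{verbatim}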
\setcounter{subfigure}{0} \begin{figure}[!ht] \begin{center} \includegraphics[width=1\textwidth]{Fig4.png} \caption{ The original image ${\mathbf f}_0$ is shown in Subfigure (a). Subfigure (b) shows the same image ${\mathbf f}$ with additional i.i.d. noise added from a Gaussian distribution with mean 0 and standard deviation $\sigma = 100$. The segmented version ${\mathbf f}_\text{seg} = c_1 {\mathbf p} + c_2 (1 - {\mathbf p}) + c_3(v>0)$ of (b) shown in Subfigure (c) is obtained by solving the minimization in (\ref{eq:twophaseDG3PDtexture:minimization:1}), see Algorithm 5 in the Appendix with the parameters: $L = 150 \,, S = 9 \,, c_\delta = 0.1, \theta = 0.9 \,, c_{\mu_1} = c_{\mu_2} = 0.03 \,, \beta_4 = 0.03 \,, \beta_3 = \frac{\theta}{1-\theta}\beta_4 \,, \beta_1 = \beta_4 \,, \beta_2 = 1.3\beta_3 \,, \#\text{iteration} = 100$. The binarized texture ${\mathbf v}$ in (g) shows its sparsity by a minimization of (\ref{eq:twophaseDG3PDtexture:minimization:1}) with a percentage of non-zero coefficients in texture ${\mathbf v}$ as $\frac{\#\{{\mathbf v} \neq 0 \}}{m n}100\% = 7.75\%$. Figure (e) and (f) are the indicator function and its complement, respectively. The mean values are $c_1 = 240.42 \,, c_2 = 98.02$ and we choose $c_3 = 50$. \label{fig:2phasePiecewiseConst}} \end{center} \end{figure} \section{Multiphase Segmentation SHT} \label{sec:multiSHT} The above models consider only two-phase segmentation in images where the homogeneous region can be considered piecewise constant. We now generalize this to allow for multiphase segmentation and also allow for piecewise-smooth homogeneity. \subsection{Multiphase piecewise smooth and texture segmentation} \label{sec:CombinedModel} As before, we assume that a natural image ${\mathbf f}$ contains both texture ${\mathbf v}$ and homogeneous regions ${\mathbf u}$ as well as noise ${\boldsymbol \epsilon}$ so that $ {\mathbf f} = {\mathbf u} + {\mathbf v} + {\boldsymbol \epsilon}$. However, we now further assume that ${\mathbf u}$ consists of both a multiphase $(N)$ piecewise constant (indexed by the indicator function $\vec{{\mathbf p}} = \big[ {\mathbf p}_n \big]_{n=1}^N$ and their mean values $\vec{c} = \big[ c_n \big]_{n=1}^N$) as well as a bias field ${\mathbf b}$ \[ {\mathbf u} = {\mathbf b} + \sum_{n=1}^N c_n {\mathbf p}_n \,, \] \noindent to account for the piecewise-smooth component of ${\mathbf f}$. Following \cite{Chambolle2004}, we utilize the directional $\text{G}_S$-norm to measure the texture ${\mathbf v}$ and propose a new combined model for multiphase {\bf s}imultaneous {\bf h}omogeneous and {\bf t}exture image segmentation (the SHT model) as \begin{align} \label{eq:CombinedModel:1} & \min_{\big( \vec{c}, \vec{{\mathbf p}}, {\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}, {\mathbf b} \big) \in \mathbb R^N \times X^{N+4}} \Bigg\{ \norm{\nabla^+_L {\mathbf u}}_{\ell_1} + \mu_2 \norm{{\mathbf v}}_{\ell_1} + \mu_3 \sum_{n=1}^N \norm{\nabla^+_M {\mathbf p}_n}_{\ell_1} + \frac{\mu_4}{2} \norm{{\mathbf b}}^2_{\ell_2} + \mathscr G^*(\frac{{\boldsymbol \epsilon}}{\nu}) \notag \\& \text{s.t. } {\mathbf f} = {\mathbf u} + {\mathbf v} + {\boldsymbol \epsilon} \,, \norm{{\mathbf v}}_{\text{G}_S} \leq \mu_1 \,, {\mathbf u} = {\mathbf b} + \sum_{n=1}^N c_n {\mathbf p}_n \,, \notag \\ &\hspace{25mm} \sum_{n=1}^N {\mathbf p}_n = 1 \,, p_n [{\boldsymbol k}] \in \{0, 1\} \,, n = 1, \ldots, N \,, {\boldsymbol k} \in \Omega \Bigg\} \,. 
\end{align} Note that in contrast with (\ref{eq:twophaseDG3PDtexture:minimization:1}), we no longer assume a piecewise-constant homogeneous region and thus must also take into account the bias term ${\mathbf b}$. Note further that the directional total variation norm (DTV-norm) $\norm{\nabla^+_L \cdot}_{\ell_1}$ and $\text{G}_S$-norm are a dual pair if $L = S$; see Lemma \ref{lem:DTVDGnorm:dualpair} for details. Finally, observe that as with the ROF model in \cite{RudinOsherFatemi1992}, the process of smoothing the homogeneous areas while preserving the edge information is controlled by the DTV-norm for~ ${\mathbf u}$. \subsubsection*{Solution to the Multiphase SHT Model} In a similar fashion to \cite{BaeYuanTai2010}, the minimization in (\ref{eq:CombinedModel:1}) can be solved by a smoothed primal-dual model for the $\vec{{\mathbf p}}$-problem rather than by the Fourier approach used in the two-phase piecewise-constant model in (\ref{eq:twophaseDG3PDtexture:minimization:1}). The remainder of this section provides a sketch of the proposed algorithm; for details and proofs, see Propositions \ref{prop:DirectionalTVL2}, \ref{prop:DirectionalGL1}, \ref{prop:combinedmodel:pproblem} in the Appendix. \noindent Define the indicator function on a convex set for the $\text{G}_S$-norm as \begin{align*} &\text{G}_S(\mu_1) = \Big\{ {\mathbf v} \in X \,, \vec{{\mathbf g}} \in X^S ~:~ \norm{{\mathbf v}}_{\text{G}_S} = \norm{\vec{{\mathbf g}}}_{\ell_\infty} \leq \mu_1 \Big\} \\ &\hspace{15mm} \text{ and } J^*_S \left(\frac{{\mathbf v}}{\mu_1}\right) = \begin{cases} 0 \,, & {\mathbf v} \in \text{G}_S(\mu_1) \\ +\infty \,, & \text{otherwise} \end{cases} \,. \end{align*} By applying ALM to the equality constraint ${\mathbf f} = {\mathbf u} + {\mathbf v} + {\boldsymbol \epsilon}$ and relaxing the binary setting $p_n[{\boldsymbol k}] \in \{0, 1\}$ to the convex set $p_n[{\boldsymbol k}] \in [0, 1]$, the nonconvex minimization in (\ref{eq:CombinedModel:1}) becomes convex as \begin{align} \label{eq:CombinedModel:2} &\min_{(\vec{c}, \vec{{\mathbf p}}, {\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}) \in \mathbb R^N \times X^{N+3}} \Big\{ {\mathcal L}\big( \vec{c}, \vec{{\mathbf p}}, {\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}; \boldsymbol{\lambda} \big) \text{ s.t. } \sum_{n=1}^N {\mathbf p}_n = 1 \,, p_n [{\boldsymbol k}] > 0 \,, n = 1, \ldots, N \,, {\boldsymbol k} \in \Omega \Big\} \end{align} with \begin{align*} {\mathcal L}(\cdot; \cdot) &= \norm{\nabla^+_L {\mathbf u}}_{\ell_1} + \mu_2 \norm{{\mathbf v}}_{\ell_1} + \mu_3 \sum_{n=1}^N \norm{\nabla^+_M {\mathbf p}_n}_{\ell_1} + \frac{\mu_4}{2} \norm{{\mathbf u} - \sum_{n=1}^N c_n {\mathbf p}_n}^2_{\ell_2} + J_S^*(\frac{{\mathbf v}}{\mu_1}) + \mathscr G^*(\frac{{\boldsymbol \epsilon}}{\nu}) \\& + \frac{\beta}{2} \norm{ {\mathbf f} - {\mathbf u} - {\mathbf v} - {\boldsymbol \epsilon} + \frac{\boldsymbol{\lambda}}{\beta} }^2_{\ell_2} \,. \end{align*} Due to the multi-variable minimization, we apply ADMM to (\ref{eq:CombinedModel:2}) whose minimizer is numerically computed through iteration $t$ with updated Lagrange multiplier \begin{align} \label{eq:SHT:numericalLagrange} \big( \vec{c}^{(t)}, \vec{{\mathbf p}}^{(t)}, {\mathbf u}^{(t)}, {\mathbf v}^{(t)}, {\boldsymbol \epsilon}^{(t)} \big) = \mathop{\rm argmin} {\mathcal L}\big( \vec{c}, \vec{{\mathbf p}}, {\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon} \,;~ \boldsymbol{\lambda}^{(t-1)} \big) \,. 
\end{align} Given the initialization ${\mathbf u}^{(0)} = {\mathbf f} \,, \vec{{\mathbf p}}^{(0)} = {\mathbf v}^{(0)} = {\boldsymbol \epsilon}^{(0)} = \boldsymbol{\lambda}^{(0)} = \mathbf 0$ and $c_n = (n-1) \lfloor \frac{255}{N} \rfloor$ for $n = 1, \ldots, N$, we solve the following five subproblems before updating the Lagrange multiplier. \vspace{2mm} \\ \noindent {\bfseries The ${\mathbf u}$-problem:} Fix $\vec{c}, \vec{{\mathbf p}}, {\mathbf v}, {\boldsymbol \epsilon}$ and solve \begin{align} \label{eq:CombinedModel:uproblem:solution} &\min_{{\mathbf u} \in X} \left\{ \norm{\nabla^+_L {\mathbf u}}_{\ell_1} ~+~ \frac{\mu_4 + \beta}{2} \norm{{\mathbf u} - {\mathbf h}}^2_{\ell_2} \right\} \end{align} where ${\mathbf h} = \frac{\mu_4}{\mu_4 + \beta} \sum_{n=1}^N c_n {\mathbf p}_n + \frac{\beta}{\mu_4 + \beta} \Big[ {\mathbf f} - {\mathbf v} - {\boldsymbol \epsilon} + \frac{\boldsymbol{\lambda}}{\beta} \Big] \,. $ \noindent From Proposition \ref{prop:DirectionalTVL2} and the numerical solver in (\ref{eq:SHT:numericalLagrange}), a primal solution of the DTV-$\ell_2$ (\ref{eq:CombinedModel:uproblem:solution}) at iteration $t$ is given by \begin{align*} {\mathbf u}^{(t)} &= {\mathbf h} - \frac{1}{\mu_4 + \beta} \text{div}^-_L \vec{{\mathbf r}}^{(t)} \end{align*} with dual variable \begin{align*} \vec{{\mathbf r}}^{(t)} = \frac{ \vec{{\mathbf r}}^{(t-1)} + \tau \nabla^+_L \Big[ \text{div}^-_L \vec{{\mathbf r}}^{(t-1)} - (\mu_4 + \beta) {\mathbf h} \Big] } { 1 + \tau \abs{ \nabla^+_L \Big[ \text{div}^-_L \vec{{\mathbf r}}^{(t-1)} - (\mu_4 + \beta) {\mathbf h} \Big] } } \,. \end{align*} \noindent {\bfseries The ${\mathbf v}$-problem:} Fix $\vec{c}, \vec{{\mathbf p}}, {\mathbf u}, {\boldsymbol \epsilon}$, denote ${\mathbf h}_{\text{v}} = {\mathbf f} - {\mathbf u} - {\boldsymbol \epsilon} + \frac{\boldsymbol{\lambda}}{\beta}$ and solve \begin{equation} \label{eq:CombinedModel:vproblem:solution} \min_{{\mathbf v} \in X} \left\{ J_S^*\left(\frac{{\mathbf v}}{\mu_1}\right) + \norm{{\mathbf v}}_{\ell_1} + \frac{1}{2} \frac{\beta}{\mu_2} \norm{ {\mathbf v} - {\mathbf h}_{\text{v}} }^2_{\ell_2} \right\}. \end{equation} To simplify the problem, we apply Proposition \ref{prop:DirectionalGL1} with a quadratic penalty $(\boldsymbol{\lambda_1} = 0)$. The primal solution of the directional $\text{G}_S-\ell_1$ model (\ref{eq:CombinedModel:vproblem:solution}) at iteration $t$ is updated as \begin{align*} v^{(t)} = \mathop{\rm Shrink} \left( \frac{ \frac{\beta}{\mu_2} }{\alpha + \frac{\beta}{\mu_2}} {\mathbf h}_{\text{v}} + \frac{\alpha \mu_1}{\alpha + \frac{\beta}{\mu_2} } \text{div}^-_S \vec{{\mathbf g}}^{(t)} \,,~ \frac{1}{\alpha + \frac{\beta}{\mu_2}} \right) \,, \end{align*} with dual variable \begin{align*} \vec{{\mathbf g}}^{(t)} = \frac{ \vec{{\mathbf g}}^{(t-1)} + \tau \nabla_S^+ \Big[ \alpha \mu_1 \text{div}^-_S \vec{{\mathbf g}}^{(t-1)} - \alpha {\mathbf v}^{(t-1)} \Big] } { 1 + \tau \abs{\nabla^+_S \Big[ \alpha \mu_1 \text{div}^-_S \vec{{\mathbf g}}^{(t-1)} - \alpha {\mathbf v}^{(t-1)} \Big]} } . \end{align*} \noindent {\bfseries The ${\boldsymbol \epsilon}$-problem:} Fix $\vec{c}, \vec{{\mathbf p}}, {\mathbf u}, {\mathbf v}$ and solve \begin{align} \label{eq:CombinedModel:epsilonproblem:solution} \min_{{\boldsymbol \epsilon} \in X} \left\{ \mathscr G^*\left(\frac{{\boldsymbol \epsilon}}{\nu}\right) + \frac{\beta}{2} \norm{ {\boldsymbol \epsilon} - \Big[{\mathbf f} - {\mathbf u} - {\mathbf v} + \frac{\boldsymbol{\lambda}}{\beta} \Big] }^2_{\ell_2} \right\}. 
\end{align} In a similar fashion to \cite{ThaiGottschlich2016DG3PD}, the solution of (\ref{eq:CombinedModel:epsilonproblem:solution}) is given by \begin{equation*} {\boldsymbol \epsilon}^* = \Big[{\mathbf f} - {\mathbf u} - {\mathbf v} + \frac{\boldsymbol{\lambda}}{\beta} \Big] - \mathop{\rm CST} \Big( \Big[{\mathbf f} - {\mathbf u} - {\mathbf v} + \frac{\boldsymbol{\lambda}}{\beta} \Big] \,, \nu \Big) \,. \end{equation*} \noindent {\bfseries The $\vec{c} = \big[ c_n \big]_{n=1}^N$-problem:} Fix $\vec{{\mathbf p}}, {\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}$ and solve \begin{equation} \label{eq:CombinedModel:cproblem:solution} \min_{\vec{c} \in \mathbb R^N} \left\{ \norm{ {\mathbf u} - \sum_{n=1}^N c_n {\mathbf p}_n }^2_{\ell_2} \right\}. \end{equation} Due to its separability, the solution of (\ref{eq:CombinedModel:cproblem:solution}) is given by \begin{equation*} \label{eq:caproblem:solution} c_n = \frac{\displaystyle \sum_{{\boldsymbol k} \in \Omega} u[{\boldsymbol k}] p_n[{\boldsymbol k}] } {\displaystyle \sum_{{\boldsymbol k} \in \Omega} p_n[{\boldsymbol k}] } \,,~ n = 1, \ldots, N. \end{equation*} \noindent {\bfseries The $\vec{{\mathbf p}} = \big[ {\mathbf p}_n \big]_{n=1}^N$-problem:} Fix $\vec{c} \,, {\mathbf u} \,, {\mathbf v} \,, {\boldsymbol \epsilon}$ and solve \begin{align} \label{eq:CombinedModel:pproblem:solution} \min_{\vec{{\mathbf p}} \in X^N} & \Bigg\{ \mu_3 \sum_{n=1}^N \norm{\nabla^+_M {\mathbf p}_n}_{\ell_1} + \frac{\mu_4}{2} \norm{ {\mathbf u} - \sum_{n=1}^N c_n {\mathbf p}_n }^2_{\ell_2} \notag \\ &\text{ s.t. } \sum_{n=1}^N {\mathbf p}_n = 1, p_n [{\boldsymbol k}] \in \{0, 1\}, n = 1, \ldots, N, {\boldsymbol k} \in \Omega \Bigg\} \,. \end{align} From Proposition \ref{prop:combinedmodel:pproblem} with a smooth primal-dual model and Chambolle's projection, the primal solution of (\ref{eq:CombinedModel:pproblem:solution}) at iteration $t$ (for $n = 1, \ldots, N$) is \begin{align*} {\mathbf p}_n &= \frac{\displaystyle \exp\left\{ -\frac{1}{\xi} \Big[ \text{div}^-_M \vec{{\mathbf q}}_n + \frac{\mu_4}{2 \mu_3} \big( {\mathbf u} - c_n \big)^{.2} \Big] \right\} } {\displaystyle \sum_{i=1}^N \exp \left\{ - \frac{1}{\xi} \Big[ \text{div}^-_M \vec{{\mathbf q}}_i + \frac{\mu_4}{2 \mu_3} \big( {\mathbf u} - c_i \big)^{.2} \Big] \right\} } \\ &= \frac{\displaystyle \exp\left\{ -\frac{1}{\xi} \Big[ -\sum_{m=0}^{M-1} \Big[ \sin(\frac{\pi m}{M}) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}_{nm} + \cos(\frac{\pi m}{M}) {\mathbf q}_{nm} {\mathbf{D_2}} \Big] + \frac{\mu_4}{2 \mu_3} \big( {\mathbf u} - c_n \big)^{.2} \Big] \right\} } {\displaystyle \sum_{i=1}^N \exp \left\{ - \frac{1}{\xi} \Big[ -\sum_{m=0}^{M-1} \Big[ \sin(\frac{\pi m}{M}) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}_{im} + \cos(\frac{\pi m}{M}) {\mathbf q}_{im} {\mathbf{D_2}} \Big] + \frac{\mu_4}{2 \mu_3} \big( {\mathbf u} - c_i \big)^{.2} \Big] \right\} } \,, \end{align*} with dual variable \begin{align*} \vec{{\mathbf q}}_n^{(t)} &= \frac{ \vec{{\mathbf q}}_n^{(t-1)} + \tau \nabla^+_M {\mathbf p}_n^{(t)} }{ 1 + \tau \abs{\nabla^+_M {\mathbf p}_n^{(t)}} } ~ \in X^M \\ \Leftrightarrow~ {\mathbf q}_{nm}^{(t)} &= \frac{\displaystyle {\mathbf q}_{nm}^{(t-1)} + \tau \left[ \sin(\frac{\pi m}{M}) \BDm {\mathbf p}_n^{(t)} + \cos(\frac{\pi m}{M}) {\mathbf p}_n^{(t)} {\mathbf{D}_{\mathbf 2}^{\text T}} \right] } {\displaystyle 1 + \tau \left[ \sum_{m=0}^{M-1} \big[ \sin(\frac{\pi m}{M}) \BDm {\mathbf p}_n^{(t)} + \cos(\frac{\pi m}{M}) {\mathbf p}_n^{(t)} {\mathbf{D}_{\mathbf 2}^{\text T}} \big]^{.2} 
\right]^{.\frac{1}{2}} } \,,~ m = 0, \ldots, M-1 \,. \end{align*} \vspace{2mm} \noindent Finally, we update the Lagrange multiplier as \begin{equation*} \boldsymbol{\lambda}^{(t+1)} = \boldsymbol{\lambda}^{(t)} + \beta \big( {\mathbf f} - {\mathbf u} - {\mathbf v} - {\boldsymbol \epsilon} \big) \,. \end{equation*} This solution is summarized in Algorithm 1. Figures \ref{fig:Barbara:CombinedModel:NoNoise} and \ref{fig:Barbara:CombinedModel:Noise} depict the segmented results without noise and with independent Gaussian noise, respectively. In both cases, our proposed method provides good segmented results, though some large-scale texture (e.g.\ the books shown in the upper left-hand corner) still remains in the piecewise constant images; see Figures \ref{fig:Barbara:CombinedModel:NoNoise} (f) and \ref{fig:Barbara:CombinedModel:Noise} (f). This is likely due to the minimizer obtained by the primal-dual method with Chambolle's projection \cite{Chambolle2004} and to the absence of a shrinkage step that would produce sparse signals in some transform domain. Similar to (\ref{eq:twophaseDG3PDtexture:minimization:1}), one can use the technique in \cite{BaeYuanTai2010} to obtain exactly binary indicator functions. As in \cite{ThaiGottschlich2016G3PD, ThaiGottschlich2016DG3PD}, the convergence of the algorithm is monitored by a relative error on the log scale \begin{align} \label{eq:relativeError:u} \text{Err}_{{\mathbf u}}(t) = \log \frac{\norm{{\mathbf u}^{(t)} - {\mathbf u}^{(t-1)}}_{\ell_2}}{\norm{{\mathbf u}^{(t-1)}}_{\ell_2}} \,,~ t = 1, 2, \ldots \, \end{align} \begin{figure}[!ht] \begin{center} \includegraphics[height=0.58\textheight]{Fig5.png} \caption{The original image ${\mathbf f}$ (a) is decomposed into a bias field ${\mathbf b}$ (e), a piecewise constant image ${\mathbf f}_\text{seg}$ (f), small scale objects (residual) ${\boldsymbol \epsilon}$ (g), and sparse texture ${\mathbf v}$ (h) with binarized version ${\mathbf v}_\text{bin}$ in (l). A piecewise smooth image ${\mathbf u}$ (b) is obtained as the sum of ${\mathbf b}$ (e) and ${\mathbf f}_\text{seg}$ (f). Subfigure (c) shows segmented contours superimposed on ${\mathbf u}$. The relative error of ${\mathbf u}$ is shown in (d). The indicator functions for phases 1, 2, and 3 are shown in Subfigures (i), (j), and (k), respectively. The parameters are $\nu = 10, N = 3, L = S = M = 2, \tau = 0.1, \xi = 0.001, \alpha = \mu_1 = \mu_2 = \mu_3 = 0.1, c_{\mu_1} = 0.14, \mu_4 = 0.01, \beta=0.04, \#\text{iteration} = 10000$. The mean square error of the original image ${\mathbf f}$ and a reconstructed image ${\mathbf f}_\text{re} = {\mathbf b} + {\mathbf f}_\text{seg} + {\mathbf v} + {\boldsymbol \epsilon}$ is $\text{MSE} = 9.02 \times 10^{-5}$.} \label{fig:Barbara:CombinedModel:NoNoise} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[height=0.58\textheight]{Fig6.png} \caption{Original image ${\mathbf f}$ with added i.i.d. noise from $\mathcal N(0 \,, 20^2)$ is shown in (a). With the addition of noise, we choose $\nu = 16$ with the remaining parameters set as in Figure \ref{fig:Barbara:CombinedModel:NoNoise}. The QQ-plot in (c) for the noise ${\boldsymbol \epsilon}$ in (g) shows that $\nu = 16$ can separate most of the noise and some texture information. The MSE is $7.47 \times 10^{-5}$. Note that increasing $L$ will not make ${\mathbf u}$ (b) smoother due to the lack of a sparsity constraint in Chambolle's projection. The algorithm still performs well with sparse texture ${\mathbf v}$ as illustrated in (l).
} \label{fig:Barbara:CombinedModel:Noise} \end{center} \end{figure} \begin{algorithm} \label{alg:CombinedModel} \caption{The SHT model} \begin{algorithmic} \small \STATE{ {\bfseries Initialization:} ${\mathbf u}^{(0)} = {\mathbf f} \,, {\mathbf v}^{(0)} = {\boldsymbol \epsilon}^{(0)} = \vec{{\mathbf r}}^{(0)} = \vec{{\mathbf g}}^{(0)} = \vec{{\mathbf p}}^{(0)} = \vec{{\mathbf q}}^{(0)} =\boldsymbol 0 \,,~ c_n^{(0)} = (n-1) \lfloor \frac{255}{N} \rfloor \,, n = 1, \ldots, N$. } \STATE{} \FOR{$t = 1 \,, \ldots \,, T$} \STATE { {\bfseries I. Compute $\big( \vec{c}, \vec{{\mathbf r}}, {\mathbf u}, \vec{{\mathbf g}}, {\mathbf v}, {\boldsymbol \epsilon}, \vec{{\mathbf p}}, \vec{{\mathbf q}} \big) \in \mathbb R^N \times X^{L+S+N+NM+3}$:} \begin{align*} &\text{1.}~ {\mathbf h}^{(t)} = \frac{\mu_4}{\mu_4 + \beta} \sum_{n=1}^N c_n^{(t-1)} {\mathbf p}_n^{(t-1)} + \frac{\beta}{\mu_4 + \beta} \Big[ {\mathbf f} - {\mathbf v}^{(t-1)} - {\boldsymbol \epsilon}^{(t-1)} + \frac{\boldsymbol{\lambda}^{(t-1)}}{\beta} \Big] \\&\text{2.}~ \vec{{\mathbf r}}^{(t)} = \frac{ \vec{{\mathbf r}}^{(t-1)} + \tau \nabla^+_L \Big[ \text{div}^-_L \vec{{\mathbf r}}^{(t-1)} - (\mu_4 + \beta) {\mathbf h}^{(t)} \Big] } { 1 + \tau \abs{ \nabla^+_L \Big[ \text{div}^-_L \vec{{\mathbf r}}^{(t-1)} - (\mu_4 + \beta) {\mathbf h}^{(t)} \Big] } } \\&\text{3.}~ {\mathbf u}^{(t)} = {\mathbf h}^{(t)} - \frac{1}{\mu_4 + \beta} \text{div}^-_L \vec{{\mathbf r}}^{(t)} \\&\text{4.}~ \vec{{\mathbf g}}^{(t)} = \frac{ \vec{{\mathbf g}}^{(t-1)} + \tau \nabla_S^+ \Big[ \alpha \mu_1 \text{div}^-_S \vec{{\mathbf g}}^{(t-1)} - \boldsymbol{\lambda}^{(t-1)} - \alpha {\mathbf v}^{(t)} \Big] } { 1 + \tau \abs{\nabla^+_S \Big[ \alpha \mu_1 \text{div}^-_S \vec{{\mathbf g}}^{(t-1)} - \boldsymbol{\lambda}^{(t-1)} - \alpha {\mathbf v}^{(t)} \Big]} } \\&\text{5.}~ {\mathbf v}^{(t)} = \mathop{\rm Shrink} \Big( \underbrace{ \frac{ \frac{\beta}{\mu_2} }{\alpha + \frac{\beta}{\mu_2}} \big[ {\mathbf f} - {\mathbf u}^{(t)} - {\boldsymbol \epsilon}^{(t-1)} + \frac{\boldsymbol{\lambda}^{(t-1)}}{\beta} \big] + \frac{\alpha \mu_1}{\alpha + \frac{\beta}{\mu_2} } \text{div}^-_S \vec{{\mathbf g}}^{(t)} }_{:= {\mathcal T}_{\mathbf v}} , \frac{1}{\alpha + \frac{\beta}{\mu_2}} \Big) \,, \\&\qquad \mu_2 = \frac{\displaystyle \beta c_{\mu_2} \max_{{\boldsymbol k} \in \Omega} \abs{{\mathcal T}_v[{\boldsymbol k}]}}{\displaystyle 1 - \alpha c_{\mu_2} \max_{{\boldsymbol k} \in \Omega} \abs{{\mathcal T}_v[{\boldsymbol k}]}} \\&\text{6.}~ {\boldsymbol \epsilon}^{(t)} = \Big[{\mathbf f} - {\mathbf u}^{(t)} - {\mathbf v}^{(t)} + \frac{\boldsymbol{\lambda}^{(t-1)}}{\beta} \Big] - \mathop{\rm CST} \Big( \Big[{\mathbf f} - {\mathbf u}^{(t)} - {\mathbf v}^{(t)} + \frac{\boldsymbol{\lambda}^{(t-1)}}{\beta} \Big] \,, \nu \Big) \\&\text{7.}~ {\mathbf p}_n^{(t)} = \frac{\displaystyle \exp\Big\{ -\frac{1}{\xi} \Big[ \text{div}^-_M \vec{{\mathbf q}}_n^{(t-1)} + \frac{\mu_4}{2 \mu_3} \big( {\mathbf u}^{(t)} - c_n^{(t-1)} \big)^{\cdot 2} \Big] \Big\} } {\displaystyle \sum_{i=1}^N \exp \Big\{ - \frac{1}{\xi} \Big[ \text{div}^-_M \vec{{\mathbf q}}_i^{(t-1)} + \frac{\mu_4}{2 \mu_3} \big( {\mathbf u}^{(t)} - c_i^{(t-1)} \big)^{\cdot 2} \Big] \Big\} } \,,~ n = 1, \ldots, N \\&\text{8.}~ \vec{{\mathbf q}}_n^{(t)} = \frac{ \vec{{\mathbf q}}_n^{(t-1)} + \tau \nabla^+_M {\mathbf p}_n^{(t)} }{ 1 + \tau \abs{\nabla^+_M {\mathbf p}_n^{(t)}} }\,,~ n = 1, \ldots, N \\&\text{9.}~ c_n^{(t)} = \frac{\displaystyle \sum_{{\boldsymbol k} \in \Omega} u^{(t)}[{\boldsymbol k}] p_n^{(t)}[{\boldsymbol k}] } {\displaystyle 
\sum_{{\boldsymbol k} \in \Omega} p_n^{(t)}[{\boldsymbol k}] } \,,~ n = 1, \ldots, N \end{align*} {\bfseries II. Update $\boldsymbol{\lambda} \in X$:} \begin{equation*} \boldsymbol{\lambda}^{(t)} = \boldsymbol{\lambda}^{(t-1)} + \beta \Big( {\mathbf f} - {\mathbf u}^{(t)} - {\mathbf v}^{(t)} - {\boldsymbol \epsilon}^{(t)} \Big) \end{equation*} } \ENDFOR \end{algorithmic} \end{algorithm} \section{The Bilevel-SHT Model} \label{sec:BiLevelMinimizationScheme} We now propose an alternative to the multiphase SHT model. As above, we assume that an image ${\mathbf f}$ is composed of a homogeneous region (consisting of a bias field ${\mathbf b}$ and piecewise-constant with mean values $\vec{c}$ and indicator functions $\vec{{\mathbf p}}$) as well as texture ${\mathbf v}$ and residual ${\boldsymbol \epsilon}$, but we now consider a bilevel scheme for decomposing the image into these base components. Specifically, we consider the decomposition and segmentation as separate levels: \begin{itemize} \item {\bfseries Level 1:} Image decomposition \begin{equation*} {\mathbf f} = {\mathbf u} + {\mathbf v} + {\boldsymbol \epsilon} \end{equation*} \item {\bfseries Level 2:} Multiphase piecewise-smooth image segmentation \begin{equation*} {\mathbf u} = {\mathbf b} + \sum_{n=1}^N c_n {\mathbf p}_n \,,~ \sum_{n=1}^N {\mathbf p}_n = 1 \,,~ p_n [{\boldsymbol k}] \in \{0, 1\} \,,~ n = 1, \ldots, N \,,~ {\boldsymbol k} \in \Omega \, . \end{equation*} \end{itemize} The bilevel minimization scheme for simultaneously homogeneous and textural (SHT) image segmentation is defined as \begin{align} \label{eq:BiLevelMinimization:1} \min_{\big( \vec{c} \,, \vec{{\mathbf p}} \,, {\mathbf b} \big) \in \mathbb R^N \times X^{N+1}} \Bigg\{ &\min_{({\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}) \in X^3} \bigg\{ \mathcal I_1 \big({\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon} \big) \text{ s.t. } {\mathbf f} = {\mathbf u} + {\mathbf v} + {\boldsymbol \epsilon} \bigg\} + \mathcal I_2 \big( \vec{{\mathbf p}}, {\mathbf b} \big) \text{ s.t. } \mathcal S \big( \vec{c}, \vec{{\mathbf p}}, {\mathbf b}; {\mathbf u} \big) \Bigg\} \end{align} with set $\mathcal S$ \begin{equation*} \mathcal S \big( \vec{c} \,, \vec{{\mathbf p}} \,, {\mathbf b} \,; {\mathbf u} \big) = \left\{ {\mathbf u} = {\mathbf b} + \sum_{n=1}^N c_n {\mathbf p}_n \,,~ \sum_{n=1}^N {\mathbf p}_n = 1 \,,~ p_n [{\boldsymbol k}] \in \{0, 1\} \,,~ n = 1, \ldots, N \,,~ {\boldsymbol k} \in \Omega \right\} \end{equation*} and energy functions \begin{align*} \mathcal I_1({\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}) &~=~ \norm{\nabla^+_L {\mathbf u}}_{\ell_1} + \mu_1 \norm{{\mathbf v}}_{\text{G}_S} + \mu_2 \norm{{\mathbf v}}_{\ell_1} + \mathscr G^*\left(\frac{{\boldsymbol \epsilon}}{\nu}\right) ~~ \text{and} \\ \mathcal I_2 \big( \vec{{\mathbf p}} \,, {\mathbf b} \big) &~=~ \sum_{n=1}^N \norm{\nabla^+_M {\mathbf p}_n}_{\ell_1} + \frac{\mu_3}{2} \norm{ {\mathbf b} }^2_{\ell_2}. \end{align*} Similar to the above multiphase SHT model, this bilevel-SHT model also measures a bias field ${\mathbf b}$ via $\ell_2$ distance as data fidelity term in the regularization. In contrast with \cite{GuWangXiongChengHuangZhou2013, GuXiongWangChengHuangZhou2014}, we enforce the constraint for the smoothness in ${\mathbf u}$ with $\norm{ \nabla^+_L {\mathbf u} }_{\ell_1}$. 
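To make the Level 2 representation above concrete, the following minimal NumPy sketch builds a toy piecewise-constant image from binary indicator functions, adds a smooth bias field, and recovers ${\mathbf b} = {\mathbf u} - \sum_{n} c_n {\mathbf p}_n$; all array sizes and values are illustrative only and do not correspond to any experiment reported in this paper.
\begin{verbatim}
import numpy as np

N = 2
labels = np.zeros((8, 8), dtype=int)
labels[:, 4:] = 1                                  # left half: phase 1, right half: phase 2
c = np.array([50.0, 200.0])                        # phase mean values (illustrative)
p = np.stack([(labels == n).astype(float) for n in range(N)])  # binary indicators p_n
assert np.allclose(p.sum(axis=0), 1.0)             # partition constraint: sum_n p_n = 1

bias = np.tile(np.linspace(0.0, 10.0, 8), (8, 1))  # a smooth, slowly varying bias field b
u = bias + np.tensordot(c, p, axes=1)              # Level 2 model: u = b + sum_n c_n p_n
b = u - np.tensordot(c, p, axes=1)                 # recover the bias field from u, c, p
print(np.allclose(b, bias))                        # True
\end{verbatim}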
\subsection{Solution of the bilevel-SHT model} We now describe a numerical algorithm to obtain the solution of the bilevel-SHT model (\ref{eq:BiLevelMinimization:1}): \begin{itemize} \item {\bfseries Level 1:} {\bfseries D}irectional {\bfseries G}lobal {\bfseries T}hree-{\bfseries p}art {\bfseries D}ecomposition (DG3PD) \begin{equation} \label{eq:BiLevelMinimization:2:step1} ({\mathbf u}^*, {\mathbf v}^*, {\boldsymbol \epsilon}^*) ~= \mathop{\rm argmin}_{({\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}) \in X^3} \Big\{ \mathcal I_1 \big({\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon} \big) \text{ s.t. } {\mathbf f} = {\mathbf u} + {\mathbf v} + {\boldsymbol \epsilon} \Big\} \end{equation} \item {\bfseries Level 2:} {\bfseries S}imultaneously {\bfseries H}omogeneous and {\bfseries T}exture {\bfseries M}ultiphase {\bfseries S}egmentation (SHTMS) \begin{equation} \label{eq:BiLevelMinimization:2:step2} \big( \vec{c}^* \,, \vec{{\mathbf p}}^* \,, {\mathbf b}^* \big) ~= \mathop{\rm argmin}_{\big( \vec{c}, \vec{{\mathbf p}}, {\mathbf b} \big) \in \mathbb R^N \times X^{N+1}} \Big\{ \mathcal I_2 \big( \vec{{\mathbf p}}, {\mathbf b} \big) \text{ s.t. } \mathcal S \big( \vec{c} \,, \vec{{\mathbf p}} \,, {\mathbf b} \,; {\mathbf u}^* \big) \Big\} \end{equation} \end{itemize} As alluded to above, we first decompose the original image ${\mathbf f}$ into piecewise-smooth, texture, and residual components ${\mathbf u}$, ${\mathbf v}$, and ${\boldsymbol \epsilon}$. We then segment the piecewise-smooth image ${\mathbf u}$ into multiphase $(N)$ piecewise-constant images and a bias field ${\mathbf b}$. Sparse (or segmented) texture ${\mathbf v}$ is measured by $\norm{{\mathbf v}}_{\ell_1}$ and $\norm{{\mathbf v}}_{\text{G}_S}$ and we note that $\text{G}_S$ is a generalized version of the Banach space G in the discrete setting \cite{AujolChambolle2005, ThaiGottschlich2016DG3PD, Meyer2001}. Note also that though we assume the original image ${\mathbf f}$ contains both texture and homogeneous areas, only the homogeneous areas are segmented by level 2. \subsubsection{Solution of Level 1 - DG3PD} The solution of the convex minimization in (\ref{eq:BiLevelMinimization:2:step1}) is defined in \cite[Algorithm 1]{ThaiGottschlich2016DG3PD} and \cite{ThaiGottschlich2016G3PD} and is solved by introducing new variables and applying ALM and ADMM. In an effort to make this paper self-contained, the kernel of the DG3PD method is provided in Algorithm 3. Note that DG3PD approximates $\norm{ \vec{{\mathbf g}} }_{\ell_\infty}$ in the $\text{G}_S$-norm by $\norm{ \vec{{\mathbf g}} }_{\ell_1}$; see \cite{VeseOsher2003} for details. This approximation in $\ell_1$-norm enforces sparsity of $\vec{{\mathbf g}}$ (in our case, the texture ${\mathbf v}$). \subsubsection{Solution of Level 2 - SHTMS} Note that since the bias field is defined as \[ {\mathbf b} = {\mathbf u} - \sum_{n=1}^N c_n {\mathbf p}_n \] with the binary set $p_n[{\boldsymbol k}] \in \{ 0 \,, 1 \}$, we can rewrite the $\ell_2$-norm and recast the non-convex minimization in (\ref{eq:BiLevelMinimization:2:step2}) as \begin{align} \label{eq:BiLevelMinimization:2:step2:1} \min_{(\vec{c}, \vec{{\mathbf p}}) \in \mathbb R^N \times X^N} \bigg\{ & \sum_{n=1}^N \norm{\nabla^+_M {\mathbf p}_n}_{\ell_1} + \frac{\mu_3}{2} \sum_{n=1}^N \Big\langle \big({\mathbf u} - c_n \big)^{.2} \,, {\mathbf p}_n \Big\rangle_{\ell_2} \,, \notag \\ &\hspace{10mm} \sum_{n=1}^N p_n[{\boldsymbol k}] = 1 , p_n [{\boldsymbol k}] \in \{0, 1\} , n = 1, \ldots, N , {\boldsymbol k} \in \Omega \bigg\} \,. 
\end{align} As in \cite{ThaiGottschlich2016DG3PD, ThaiGottschlich2016G3PD}, a solution of the multivariate minimization (\ref{eq:BiLevelMinimization:2:step2:1}) can be obtained by alternating between solving the following two subproblems: {\bfseries a. The $\vec{c} = \big[ c_n \big]_{n=1}^N$ problem:} Fix $\vec{{\mathbf p}}$ and solve \begin{equation} \label{eq:BiLevelMinimization:level2:cproblem} \min_{\vec{c} \in \mathbb R^N} \bigg\{ \mathcal L (\vec{c}) = \frac{\mu_3}{2} \sum_{n=1}^N \Big\langle \big({\mathbf u} - c_n \big)^{.2} \,, {\mathbf p}_n \Big\rangle_{\ell_2} \bigg\}. \end{equation} Due to its separability, the solution of (\ref{eq:BiLevelMinimization:level2:cproblem}) is given by \begin{equation} \label{eq:caproblem:solution} c_n = \frac{\displaystyle \sum_{{\boldsymbol k} \in \Omega} u[{\boldsymbol k}] p_n[{\boldsymbol k}] } {\displaystyle \sum_{{\boldsymbol k} \in \Omega} p_n[{\boldsymbol k}] } \,,~ n = 1, \ldots, N. \end{equation} {\bfseries b. The $\vec{{\mathbf p}} = \big[ {\mathbf p}_n \big]_{n=1}^N$ problem:} Fix $\vec{c}$ and find $\vec{{\mathbf p}}$. As in Section \ref{sec:CombinedModel}, the nonconvex minimization in (\ref{eq:BiLevelMinimization:2:step2:1}) w.r.t. $\vec{{\mathbf p}}$ is relaxed and made convex by setting a binary set $p_n[{\boldsymbol k}] \in \{0 \,, 1\}$ to $[0 \,, 1]$. Following \cite{BaeYuanTai2010, GuXiongWangChengHuangZhou2014}, we apply a smoothed dual formulation by introducing the primal, primal-dual, and dual models: {\bfseries The primal model:} Solve \begin{equation} \label{eq:BiLevelMinimization:2:step2:3} \min_{\vec{{\mathbf p}} \in \mathcal Q_+} \bigg\{ \mathcal L^\text{P} (\vec{{\mathbf p}}) = \sum_{n=1}^N \norm{\nabla^+_M {\mathbf p}_n}_{\ell_1} + \frac{\mu_3}{2} \sum_{n=1}^N \Big\langle \big({\mathbf u} - c_n \big)^{.2} \,, {\mathbf p}_n \Big\rangle_{\ell_2} \bigg\} \end{equation} over the convex set \begin{equation} \label{eq:BiLevelMinimization:Level2:pproblem:PrimalModel} \mathcal Q_+ = \Big\{ \big[{\mathbf p}_n\big]_{n=1}^N \in X^N ~:~ \sum_{n=1}^N p_n[{\boldsymbol k}] = 1 \,,~ p_n [{\boldsymbol k}] > 0 \,,~ n = 1, \ldots, N \,,~ {\boldsymbol k} \in \Omega \Big\}. \end{equation} {\bfseries The primal-dual model:} Denote a convex set $K_M(1) = \Big\{ \vec{{\mathbf q}}_n \in X^M ~:~ \norm{\vec{{\mathbf q}}_n}_{\ell_\infty} \leq 1 \Big\}$ with a dual variable $\vec{{\mathbf q}} = \big[ \vec{{\mathbf q}}_n \big]_{n=1}^N = \big[ {\mathbf q}_{nm} \big]_{n=[1,N]}^{m=[0,M-1]} \in X^{N M}$ of a primal variable $\vec{{\mathbf p}} = \big[ {\mathbf p}_n \big]_{n=1}^N \in X^N$. From Lemma \ref{lem:DTVDGnorm:dualpair} in the Appendix (for the dual formulation of the directional total variation norm) and the minimax theorem as found in \cite[Chapter 6]{EkelandTeman1999} and \cite{BaeYuanTai2010}, the primal-dual model is defined as \begin{equation} \label{eq:BiLevelMinimization:2:step2:primaldual} \max_{ \vec{{\mathbf q}} \in \big[K_M(1)\big]^N } \min_{ \vec{{\mathbf p}} \in \mathcal Q_+} \bigg\{ \mathcal L^\text{PD}(\vec{{\mathbf p}}; \vec{{\mathbf q}}) = \sum_{n=1}^N \Big\langle {\mathbf p}_n \,, \frac{\mu_3}{2} \big({\mathbf u} - c_n \big)^{.2} + \text{div}^-_M \vec{{\mathbf q}}_n \Big\rangle_{\ell_2} \bigg\}. 
\end{equation} {\bfseries The smoothed primal-dual model:} Solve \begin{equation} \label{eq:BiLevelMinimization:2:step2:SmoothedPrimalDual1} \max_{ \vec{{\mathbf q}} \in \big[K_M(1)\big]^N } \underbrace{ \min_{ \vec{{\mathbf p}} \in \mathcal Q_+} \bigg\{ \mathcal L^\text{PD}_{\xi>0}(\vec{{\mathbf p}}; \vec{{\mathbf q}}) = \sum_{n=1}^N \Big\langle {\mathbf p}_n \,, \frac{\mu_3}{2} \big({\mathbf u} - c_n \big)^{.2} + \text{div}^-_M \vec{{\mathbf q}}_n \Big\rangle_{\ell_2} + \xi \sum_{n=1}^N \Big\langle {\mathbf p}_n \,, \log {\mathbf p}_n \Big\rangle_{\ell_2} \bigg\} }_{\displaystyle = {\mathcal L}^\text{D}_{\xi>0} (\vec{{\mathbf q}}) := -\xi \sum_{{\boldsymbol k} \in \Omega} \log \Big[ \sum_{n=1}^N \exp \Big\{ -\frac{1}{\xi} \Big[ \frac{\mu_3}{2} (u[{\boldsymbol k}] - c_n)^2 + \text{div}^-_M \vec{q}_n[{\boldsymbol k}] \Big] \Big\} \Big] } \,. \end{equation} From Proposition \ref{prop:bilevel:smoothedprimaldualproblem} in the Appendix, the solution of the primal $\vec{{\mathbf p}}$-problem \begin{equation*} \min_{ \vec{{\mathbf p}} \in \mathcal Q_+} \mathcal L^\text{PD}_{\xi>0}(\vec{{\mathbf p}}; \vec{{\mathbf q}}) \end{equation*} is given by (with $n = 1, \ldots, N$) \begin{align*} {\mathbf p}^*_n &= \frac{\displaystyle \exp\bigg[ - \frac{1}{\xi} \Big[ \frac{\mu_3}{2} \big({\mathbf u} - c_n \big)^{.2} + \text{div}^-_M \vec{{\mathbf q}}_n \Big]\bigg] } {\displaystyle \sum_{i=1}^N \exp\bigg[ - \frac{1}{\xi} \Big[ \frac{\mu_3}{2} \big({\mathbf u} - c_i \big)^{.2} + \text{div}^-_M \vec{{\mathbf q}}_i \Big]\bigg] } \\& = \frac{\displaystyle \exp\bigg[ - \frac{1}{\xi} \Big[ \frac{\mu_3}{2} \big({\mathbf u} - c_n \big)^{.2} - \sum_{m=0}^{M-1} \big[ \sin\left(\frac{\pi m}{M}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}_{nm} + \cos\left(\frac{\pi m}{M}\right) {\mathbf q}_{nm} {\mathbf{D_2}} \big] \Big]\bigg] } {\displaystyle \sum_{i=1}^N \exp\bigg[ - \frac{1}{\xi} \Big[ \frac{\mu_3}{2} \big({\mathbf u} - c_i \big)^{.2} - \sum_{m=0}^{M-1} \big[ \sin\left(\frac{\pi m}{M}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}_{im} + \cos\left(\frac{\pi m}{M}\right) {\mathbf q}_{im} {\mathbf{D_2}} \big] \Big]\bigg] } \,. \end{align*} Due to its separability, we consider the dual $\vec{{\mathbf q}}$-problem \begin{equation} \label{eq:BiLevelMinimization:2:step2:SmoothedDual1} \max_{ \vec{{\mathbf q}} \in \big[K_M(1)\big]^N } {\mathcal L}^\text{D}_{s>0} (\vec{{\mathbf q}}) \end{equation} at $n = 1, \ldots, N$. Given $\displaystyle \vec{{\mathbf q}}_n = \big[ {\mathbf q}_{nm} \big]_{m=0}^{M-1}$ and $\nabla^+_M {\mathbf p}_n = \big[ \partial^+_m {\mathbf p}_n \big]_{m=0}^{M-1}$, the solution of (\ref{eq:BiLevelMinimization:2:step2:SmoothedDual1}) which is solved by Chambolle's projection \cite{Chambolle2004} at each iteration $t$ is \begin{align*} \vec{{\mathbf q}}_n^{(t+1)} = \frac{ \vec{{\mathbf q}}_n^{(t)} + \tau \nabla^+_M {\mathbf p}_n^{(t)} }{ 1 + \tau \abs{\nabla^+_M {\mathbf p}_n^{(t)}} } \,,~~ n = 1, \ldots, N \end{align*} and its element form is (with $m = 0, \ldots, M-1$) \begin{align*} {\mathbf q}_{nm}^{(t+1)} = \frac{\displaystyle {\mathbf q}_{nm}^{(t)} + \tau \big[ \sin\left(\frac{\pi m}{M}\right) \BDm {\mathbf p}_n + \cos\left(\frac{\pi m}{M}\right) {\mathbf p}_n {\mathbf{D}_{\mathbf 2}^{\text T}} \big] } {\displaystyle 1 + \tau \Bigg[ \sum_{m=0}^{M-1} \bigg[ \sin\left(\frac{\pi m}{M}\right) \BDm {\mathbf p}_n + \cos\left(\frac{\pi m}{M}\right) {\mathbf p}_n {\mathbf{D}_{\mathbf 2}^{\text T}} \bigg]^{.2} \Bigg]^{.\frac{1}{2}} } \,. 
\end{align*} The numerical solution of the bilevel SHT model is described in Algorithms 2-4. Figure \ref{fig:Barbara:BilevelModel} shows the bilevel SHT model applied to the same noisy image as in Figure \ref{fig:Barbara:CombinedModel:Noise}. Note, by comparing the upper left-hand corners of subfigures (l) and (k) of Figures \ref{fig:Barbara:CombinedModel:Noise} and \ref{fig:Barbara:BilevelModel} respectively, that the bilevel SHT model does a better job of fully segmenting the large scale texture from the homogeneous regions. However, these binarized versions also reveal that the bilevel SHT model is slightly oversensitive as some small artifacts are introduced. Finally, in Figure \ref{fig:galaxy:BilevelModel:2} we apply the bilevel SHT model to an image of a galaxy with many stars in the background. Although the stars may constitute small-scale texture, in cases such as these we may set the texture component (${\mathbf v}$) to 0, thereby treating this fine texture as noise. \begin{algorithm} \label{alg:SHTMS} \caption{The Bilevel-SHT model} \begin{algorithmic} \small \STATE{ {\bfseries Denote parameters:} $\kappa_{\text{d}} = \Big[ L, S, c_{\mu_1}, c_{\mu_2}, \big[\beta_i\big]_{i=1}^4, \nu \Big]$ and $\kappa_{\text{s}} = \Big[ M, N, \xi, \mu_3, \tau \Big]$ } \STATE{ {\bfseries Denote variables:} \begin{align*} \theta = \bigg[ \big[ {\mathbf r}_l \big]_{l=0}^{L-1} \,, \big[ {\mathbf w}_s \big]_{s=0}^{S-1} \,, \big[ {\mathbf g}_s \big]_{s=0}^{S-1} \,, \big[\boldsymbol{\lambda}_{\mathbf{1}l}\big]_{l=0}^{L-1} \,, \big[\boldsymbol{\lambda}_{\mathbf{2}a}\big]_{a=0}^{S-1} \,, \boldsymbol{\lambda}_{\mathbf 3} \,, \boldsymbol{\lambda}_{\mathbf 4} \bigg] \end{align*} } \STATE{ {\bfseries Initialization:}$ \, {\mathbf u}^{(0)} = {\mathbf f} \,, {\mathbf v}^{(0)} = {\boldsymbol \epsilon}^{(0)} = \theta^{(0)} = \vec{{\mathbf p}}^{(0)} = \vec{{\mathbf q}}^{(0)} =\boldsymbol 0 \,,~ c_n^{(0)} = (n-1) \lfloor \frac{255}{N} \rfloor \,, n = 1, \ldots, N$. 
} \STATE{} \FOR{$t_1 = 1 \,, \ldots \,, T_1$} \STATE { \STATE{{\bfseries Level 1 (Decomposition):} $({\mathbf u}, {\mathbf v}, {\boldsymbol \epsilon}) \in X^3$} \FOR{$t_2 = 1 \,, \ldots \,, T_2$} \STATE { \begin{equation*} \Big[ {\mathbf u}^{(t_2)} \,, {\mathbf v}^{(t_2)} \,, {\boldsymbol \epsilon}^{(t_2)} \,, \theta^{(t_2)} \Big] = \text{DG3PD} \Big( {\mathbf u}^{(t_2-1)} \,, {\mathbf v}^{(t_2-1)} \,, {\boldsymbol \epsilon}^{(t_2-1)} \,, \theta^{(t_2-1)} \,;~ {\mathbf f} \,, \kappa_\text{d} \Big) \end{equation*} } \ENDFOR \STATE{} \STATE{{\bfseries Level 2 (Multiphase Segmentation of ${\mathbf u}$)}: $(\vec{c} \,, \vec{{\mathbf p}} \,, \vec{{\mathbf q}}) \in \mathbb R^N \times X^{N + NM}$} \begin{equation*} \Big[ \vec{c}^{(t_1)} \,, \vec{{\mathbf p}}^{(t_1)} \,, \vec{{\mathbf q}}^{(t_1)} \Big] = \text{SHTMS} \Big( \vec{c}^{(t_1-1)} \,, \vec{{\mathbf p}}^{(t_1-1)} \,, \vec{{\mathbf q}}^{(t_1-1)} \,;~ {\mathbf u}^{(T_2)} \,, \kappa_\text{s} \Big) \end{equation*} } \ENDFOR \STATE{} \STATE{} \STATE{{\bfseries Global minimizer of (\ref{eq:BiLevelMinimization:1}):} \begin{align*} {\mathbf u}^* &= {\mathbf u}^{T_1 T_2} \,, {\mathbf v}^* = {\mathbf v}^{T_1 T_2} \,, {\boldsymbol \epsilon}^* = {\boldsymbol \epsilon}^{T_1 T_2} \,, \vec{{\mathbf q}}^* = \vec{{\mathbf q}}^{T_1} \,, \vec{c}^* = \vec{c}^{T_1} \,, ~ \text{and} \\ {\mathbf p}_h^* &= \begin{cases} 1 \,, & h = \displaystyle \mathop{\rm argmin}_{1 \leq n \leq N} \bigg\{ \underbrace{ -\sum_{m=0}^{M-1} \big[ \sin\left(\frac{\pi m}{M}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}^*_{nm} + \cos\left(\frac{\pi m}{M}\right) {\mathbf q}^*_{nm} {\mathbf{D_2}} \big] }_{ = \text{div}^-_M \vec{{\mathbf q}}_n^*} + \frac{\beta_5}{2} \big( {\mathbf u}^* - c_n^* \big)^{.2} \bigg\} \\ 0 \,, & \text{else} \end{cases} \,, h = 1, \ldots, N \end{align*} } \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:Level1:DG3PD:Part1} \caption{Level 1: The Discrete DG3PD Model \cite{ThaiGottschlich2016DG3PD}} \begin{algorithmic} \small \STATE { \begin{align*} \Big[ {\mathbf u}^{(\text{new})} \,, {\mathbf v}^{(\text{new})} \,, {\boldsymbol \epsilon}^{(\text{new})} \,, \theta^{(\text{new})} \Big] = \text{DG3PD} \Big( {\mathbf u}^{(\text{old})} \,, {\mathbf v}^{(\text{old})} \,, {\boldsymbol \epsilon}^{(\text{old})} \,, \theta^{(\text{old})} \,;~ {\mathbf f} \,, \kappa_\text{d} \Big) \end{align*} } \STATE { {\bfseries 1. 
Compute} $\Big( \big[ {\mathbf r}_b^{(t)} \big]_{b=0}^{L-1} \,, \big[ \mathbf{w}_a^{(t)} \big]_{a=0}^{S-1} \,, \big[ \mathbf{g}_a^{(t)} \big]_{a=0}^{S-1} \,, {\mathbf v}^{(t)} \,, {\mathbf u}^{(t)} \,, {\boldsymbol \epsilon}^{(t)} \Big) \in X^{L+2S+3} $: \begin{align*} {\mathbf r}_b^{(t)} &~=~ \mathop{\rm Shrink} \Big( \sin\left( \frac{\pi b}{L} \right) \BDm {\mathbf u}^{(t-1)} + \cos\left( \frac{\pi b}{L} \right) {\mathbf u}^{(t-1)} {\mathbf{D}_{\mathbf 2}^{\text T}} - \frac{\boldsymbol{\lambda}_{\boldsymbol{1} b}^{(t-1)}}{\beta_1} \,, \frac{1}{\beta_1} \Big) \,,~ b = 0 \,, \ldots \,, L-1 \\ \mathbf{w}_a^{(t)} &~=~ \mathop{\rm Shrink} \Big( \mathbf{t}_{\mathbf{w}_a} ~:=~ \mathbf{g}_a^{(t-1)} - \frac{ \boldsymbol{\lambda}_{\boldsymbol{2}a}^{(t-1)} }{\beta_2} \,, \frac{\mu_1}{\beta_2} \Big) \,,~ \quad a = 0 \,, \ldots \,, S-1 \\ \mathbf{g}_a^{(t)} &~=~ \,{\rm Re} \bigg[ \mathcal F^{-1} \Big\{ \mathcal A^{(t)}({\boldsymbol z}) \cdot \mathcal B^{(t)}({\boldsymbol z}) \Big\} \bigg] [{\boldsymbol k}] \Big |_{{\boldsymbol k} \in \Omega} \,,~ a = 0 \,, \ldots \,, S-1 \\ {\mathbf v}^{(t)} &= \mathop{\rm Shrink} \bigg( \mathbf{t_v} ~:=~ \frac{\beta_3}{\beta_3 + \beta_4} \bigg( \sum_{s=0}^{S-1} \Big[ \sin\left( \frac{\pi s}{S} \right) \BDm {\mathbf g}_s^{(t)} + \cos\left( \frac{\pi s}{S} \right) {\mathbf g}_s^{(t)} {\mathbf{D}_{\mathbf 2}^{\text T}} \Big] - \frac{\boldsymbol{\lambda}_{\boldsymbol 3}^{(t-1)}}{\beta_3} \bigg) \\& \qquad \qquad \qquad \qquad + \frac{\beta_4}{\beta_3 + \beta_4} \bigg( {\mathbf f} - {\mathbf u}^{(t-1)} - {\boldsymbol \epsilon}^{(t-1)} + \frac{\boldsymbol{\lambda}_{\boldsymbol 4}^{(t-1)}}{\beta_4} \bigg) \,,~ \frac{\mu_2}{\beta_3 + \beta_4} \bigg) \\ {\mathbf u}^{(t)} &~=~ \,{\rm Re} \bigg[ \mathcal F^{-1} \Big\{ \mathcal X^{(t)}({\boldsymbol z}) \cdot \mathcal Y^{(t)}({\boldsymbol z}) \Big\} \bigg] [{\boldsymbol k}] \Big |_{{\boldsymbol k} \in \Omega} \\ {\boldsymbol \epsilon}^{(t)} &~=~ \Big( {\mathbf f} - {\mathbf u}^{(t)} - {\mathbf v}^{(t)} + \frac{\boldsymbol{\lambda}_{\boldsymbol 4}^{(t-1)}}{\beta_4} \Big) ~-~ \mathop{\rm CST} \big( {\mathbf f} - {\mathbf u}^{(t)} - {\mathbf v}^{(t)} + \frac{\boldsymbol{\lambda}_{\boldsymbol 4}^{(t-1)}}{\beta_4} \,, \nu \big) \end{align*} {\bfseries 2. 
Update} $\Big( \big[\boldsymbol{\lambda}_{\mathbf{1}b}^{(t)}\big]_{b=0}^{L-1} \,, \big[\boldsymbol{\lambda}_{\mathbf{2}a}^{(t)}\big]_{a=0}^{S-1} \,, \boldsymbol{\lambda}_{\boldsymbol 3}^{(t)} \,, \boldsymbol{\lambda}_{\boldsymbol 4}^{(t)} \Big) \in X^{L+S+2}$: \begin{align*} \boldsymbol{\lambda}_{\mathbf{1}b}^{(t)} &~=~ \boldsymbol{\lambda}_{\mathbf{1}b}^{(t-1)} ~+~ \gamma \beta_1 \Big( \mathbf{r}_b^{(t)} - \sin\left( \frac{\pi b}{L} \right) \BDm {\mathbf u}^{(t)} - \cos\left( \frac{\pi b}{L} \right) {\mathbf u}^{(t)} {\mathbf{D}_{\mathbf 2}^{\text T}} \Big) \,, \quad b = 0 \,, \ldots \,, L-1 \\ \boldsymbol{\lambda}_{\mathbf{2} a}^{(t)} &~=~ \boldsymbol{\lambda}_{\mathbf{2} a}^{(t-1)} ~+~ \gamma \beta_2 \Big( \mathbf{w}_a^{(t)} - \mathbf{g}_a^{(t)} \Big) \,, \quad a = 0 \,, \ldots \,, S-1 \\ \boldsymbol{\lambda}_{\mathbf{3}}^{(t)} &~=~ \boldsymbol{\lambda}_{\mathbf{3}}^{(t-1)} ~+~ \gamma \beta_3 \Big( {\mathbf v}^{(t)} - \sum_{s=0}^{S-1} \big[ \sin\left( \frac{\pi s}{S} \right) \BDm \mathbf{g}_s^{(t)} + \cos\left(\frac{\pi s}{S} \right) \mathbf{g}_s^{(t)} {\mathbf{D}_{\mathbf 2}^{\text T}} \big] \Big) \\ \boldsymbol{\lambda}_{\mathbf{4}}^{(t)} &~=~ \boldsymbol{\lambda}_{\mathbf{4}}^{(t-1)} ~+~ \gamma \beta_4 \big( {\mathbf f} - {\mathbf u}^{(t)} - {\mathbf v}^{(t)} - {\boldsymbol \epsilon}^{(t)} \big) \end{align*} } \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:Level1:DG3PD:Part2} \begin{algorithmic} \small \STATE {\bfseries (continued Algorithm 3)} \begin{align*} &\mathcal A({\boldsymbol z}) ~=~ \Bigg[ \beta_2 \mathbf{1_{mn}} + \beta_3 \abs{ \sin \left( \frac{\pi a}{S} \right) (z_1 - 1) + \cos \left( \frac{\pi a}{S} \right) (z_2 - 1) }^2 \Bigg]^{-1} \,, \\ &\mathcal B({\boldsymbol z}) ~=~ \beta_2 \Big[ W_a({\boldsymbol z}) + \frac{\Lambda_{2a}({\boldsymbol z}) }{\beta_2} \Big] ~+~ \beta_3 \Big[ \sin\left( \frac{\pi a}{S} \right) (z_1^{-1} - 1) + \cos\left( \frac{\pi a}{S} \right) (z_2^{-1} - 1) \Big] \times \\& \bigg[ V({\boldsymbol z}) - \sum_{s=[0\,,S-1] \backslash \{a\} } \Big[ \sin\left( \frac{\pi s}{S} \right) (z_1 - 1) + \cos\left( \frac{\pi s}{S} \right) (z_2 - 1) \Big] G_s({\boldsymbol z}) + \frac{\Lambda_3({\boldsymbol z})}{\beta_3} \bigg] \,, \\ &\mathcal X({\boldsymbol z}) = \Bigg[ \beta_4 \mathbf{1_{mn}} + \beta_1 \sum_{l=0}^{L-1} \abs{ \sin\left( \frac{\pi l}{L} \right) (z_1 - 1) + \cos\left( \frac{\pi l}{L} \right) (z_2 - 1) }^2 \Bigg]^{-1} \,, \\ &\mathcal Y({\boldsymbol z}) = \beta_4 \Big[ F({\boldsymbol z}) - V({\boldsymbol z}) - \mathcal{E}({\boldsymbol z}) + \frac{\Lambda_4({\boldsymbol z})}{\beta_4} \Big] + \beta_1 \sum_{l=0}^{L-1} \Big[ \sin \left( \frac{\pi l}{L} \right) (z_1^{-1} - 1) + \cos \left( \frac{\pi l}{L} \right) (z_2^{-1} -1) \Big] \Big[ R_l({\boldsymbol z}) + \frac{\Lambda_{1l}({\boldsymbol z})}{\beta_1} \Big] . \end{align*} {\bfseries Choice of Parameters} \begin{align*} \mu_1 &= c_{\mu_1} \beta_2 \cdot \max_{{\boldsymbol k} \in \Omega} \big( \abs{t_{{\mathbf w}_a}[{\boldsymbol k}]} \big) \,,~ \mu_2 = c_{\mu_2} (\beta_3 + \beta_4) \cdot \max_{{\boldsymbol k} \in \Omega} \big( \abs{t_{\mathbf v}[{\boldsymbol k}]} \big) \text{ and } \beta_2 = c_{\beta_{2}} \beta_3 \,, \beta_3 = \frac{\theta}{1 - \theta} \beta_4 \,, \beta_1 = c_{\beta_{1}} \beta_4. 
\end{align*} \end{algorithmic} \end{algorithm} \begin{algorithm} \label{alg:Level2:MultiphaseSegmentation} \caption{Level 2: The SHTMS} \begin{algorithmic} \small \STATE { \begin{equation*} \Big[ \vec{c}^{(t+1)} \,, \vec{{\mathbf p}}^{(t+1)} \,, \vec{{\mathbf q}}^{(t+1)} \Big] = \text{SHTMS} \Big( \vec{c}^{(t)} \,, \vec{{\mathbf p}}^{(t)} \,, \vec{{\mathbf q}}^{(t)} \,;~ {\mathbf u} \,, \kappa_{\text{s}} \Big) \end{equation*} } \STATE{ {\bfseries Compute} $\big( \vec{c} \,, \vec{{\mathbf p}} \,, \vec{{\mathbf q}} \big) \in \mathbb R^N \times X^{N + NM}$: \begin{align*} c_n^{(t+1)} &= \frac{\displaystyle \sum_{{\boldsymbol k} \in \Omega} u[{\boldsymbol k}] p_n^{(t)}[{\boldsymbol k}] } {\displaystyle \sum_{{\boldsymbol k} \in \Omega} p_n^{(t)}[{\boldsymbol k}] } \,,~ n = 1, \ldots, N \\ {\mathbf p}^{(t+1)}_n &= \frac{\displaystyle \exp\bigg[ - \frac{1}{\xi} \Big[ -\sum_{m=0}^{M-1} \big[ \sin\left(\frac{\pi m}{M}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}_{nm}^{(t)} + \cos\left(\frac{\pi m}{M}\right) {\mathbf q}_{nm}^{(t)} {\mathbf{D_2}} \big] + \frac{\mu_3}{2} \big({\mathbf u} - c_n^{(t+1)} \big)^{.2} \Big]\bigg] } {\displaystyle \sum_{i=1}^N \exp\bigg[ - \frac{1}{\xi} \Big[- \sum_{m=0}^{M-1} \big[ \sin\left(\frac{\pi m}{M}\right) {\mathbf{D}_{\mathbf 1}^{\text T}} {\mathbf q}_{im}^{(t)} + \cos\left(\frac{\pi m}{M}\right) {\mathbf q}_{im}^{(t)} {\mathbf{D_2}} \big] + \frac{\mu_3}{2} \big({\mathbf u} - c_i^{(t+1)} \big)^{.2} \Big]\bigg] } \,,~ n = 1, \ldots, N \\ {\mathbf q}_{nm}^{(t+1)} &= \frac{\displaystyle {\mathbf q}_{nm}^{(t)} + \tau \big[ \sin\left(\frac{\pi m}{M}\right) \BDm {\mathbf p}_n^{(t+1)} + \cos\left(\frac{\pi m}{M}\right) {\mathbf p}_n^{(t+1)} {\mathbf{D}_{\mathbf 2}^{\text T}} \big] } {\displaystyle 1 + \tau \Big[ \sum_{m=0}^{M-1} \big[ \sin\left(\frac{\pi m}{M}\right) \BDm {\mathbf p}_n^{(t+1)} + \cos\left(\frac{\pi m}{M}\right) {\mathbf p}_n^{(t+1)} {\mathbf{D}_{\mathbf 2}^{\text T}} \big]^{.2} \Big]^{.\frac{1}{2}} } \,,~~ n = 1, \ldots, N \,,~ m = 0, \ldots, M-1 \end{align*} } \end{algorithmic} \end{algorithm} \begin{figure}[!ht] \begin{center} \includegraphics[height=0.58\textheight]{Fig7.png} \caption{The bilevel SHT model applied to the noisy image in Figure \ref{fig:Barbara:CombinedModel:Noise} with parameters $\sigma = 20 \,, L = S = 9 \,, c_{\mu_1} = c_{\mu_2} = 0.03 \,, \theta = 0.9 \,, c_1 = 1, c_2 = 1.3 \,, \beta_4 = 0.04 \,, \beta_3 = \frac{\theta}{1-\theta}\beta_4 \,, \beta_1 = c_{\beta_{1}} \beta_4 \,, \beta_2 = c_{\beta_{2}} \beta_3 \,, \nu = 16 \,, M = 2 \,, N = 3 \,, \xi = 0.001 \,, \tau = 0.1 \,, \beta_5 = 100 \,, T_1 = T_2 = 100. $ } \label{fig:Barbara:BilevelModel} \end{center} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[height=0.6\textheight]{Fig8.png} \caption{The bilevel SHT model applied to the galaxy image with no additional noise added. The background stars represent small-scale texture which can be treated as noise and measured by $\norm{ {\mathcal C}\{ {\boldsymbol \epsilon} \} }_{\ell_\infty}$. The constant values are $\vec{c} = [40.1 \,, 106.81 \,, 208.21]$ and $\text{MSE} = 3.8 \times 10^{-7}$. Parameters were chosen as in Figure \ref{fig:Barbara:BilevelModel} with the exception of $\nu = 40 \,, T_1 = T_2 = 50.$ } \label{fig:galaxy:BilevelModel:2} \end{center} \end{figure} \section{Comparison with Alternative Approaches} \label{sec:comparisons} We now apply our approach to several images in order to demonstrate and compare the performance with alternative approaches.
The proficiency of and some properties of our models were demonstrated in Figures \ref{fig:Barbara:CombinedModel:NoNoise}, \ref{fig:Barbara:CombinedModel:Noise} and \ref{fig:Barbara:BilevelModel}. Here we focus on more subtle properties and compare our approach with existing methods. \setcounter{subfigure}{0} \begin{figure}[!ht] \begin{center} \includegraphics[width=1\textwidth]{Fig9.png} \caption{The Chan-Vese model (rows one and two) produces different segmentations for different initializations of the level set function $\phi({\boldsymbol x})$. The Chan-Vese model parameters were chosen as $\text{Iteration} = 50 \,, \mu = \lambda_1 = \lambda_2 = \epsilon = 1 \,, \text{timestep} = 0.1 \,, v = 0$. The third and fourth rows show reconstructed images from the 2- and 3-phase bilevel SHT models with ${\mathbf u}_\text{re} = {\mathbf b} + \sum_{n=1}^{N} c_n {\mathbf p}_n$. The MSE for ${\mathbf u}_\text{re}$ in the 3-phase bilevel SHT model is $3.15 \times 10^{-6}$ and the relative error in Subfigures (k) and (n) is measured by (\ref{eq:relativeError:u}). The bilevel model parameters were selected as in Figure \ref{fig:Barbara:BilevelModel} with $\nu = 0$ and $T_1 = T_2 = 50$. } \label{fig:comparison:ChanVese_Bilevel} \end{center} \end{figure} Figure \ref{fig:comparison:ChanVese_Bilevel} depicts a homogeneous image of an airplane where we compare our bilevel SHT model to the classic Chan-Vese model \cite{ChanVese2001}. The Chan-Vese model is applied in the first two rows; note that for different initial conditions -- Subfigures (a) and (e) -- the model produces very different segmentations. In contrast, our bilevel SHT model solves a convex minimization and, as a result, produces a nearly unique segmentation. The bilevel SHT model is applied in rows three and four with 2 and 3 phases, respectively. Note that the 3-phase model is able to segment the sun, sky, and airplane, whereas the 2-phase model combines the sky and airplane. Subfigures (k) and (n) show convergence on the log scale, thus implying very fast convergence on the linear scale. \setcounter{subfigure}{0} \begin{figure}[!ht] \begin{center} \includegraphics[width=1\textwidth]{Fig10.png} \caption{A comparison with the methods in \cite{BaeYuanTai2010} and \cite{BrownChanBresson2009}. The segmented image in Subfigure (d) is taken from \cite{BrownChanBresson2009}. The results in row two were based on the segmentation of Subfigure (a); those in rows three and four were applied to the noisy version of the image in Subfigure (b). The bilevel model parameters are the same as in Figure \ref{fig:Barbara:BilevelModel} with $\nu = 0, 5, \text{ and } 10$ as noted in Subfigures (e), (f), and (g), respectively. The relative error of ${\mathbf u}$ is shown in Subfigures (k) and (p).} \label{fig:comparison:BaeYuanTai2010_Bilevel:brain} \end{center} \end{figure} Figure \ref{fig:comparison:BaeYuanTai2010_Bilevel:brain} contains (entirely) homogeneous images of a brain that were analyzed in \cite{BrownChanBresson2012}. Subfigure (a) contains the original image with no noise added, and in Subfigure (b) we add i.i.d.\ Gaussian noise with standard deviation 20. The second row (Subfigures (c) - (f)) shows the resulting segmented images following the procedures in \cite{BaeYuanTai2010, BrownChanBresson2009}, and the bilevel SHT method with $\nu = 0$ and $\nu = 5$, respectively. We note that Subfigure (d) was taken directly from \cite{BrownChanBresson2009}, whereas Subfigure (c) was produced by our own implementation of the method in \cite{BaeYuanTai2010}.
Note that in Subfigure (e) with $\nu = 0$, some small-scale residual still remains, but when we increase the threshold to $\nu = 5$ in Subfigure (f), this residual is removed, resulting in a smoother segmented image. Thus, our procedure compares favorably even to other methods that apply only to homogeneous images. Further note that when noise is added (Subfigure (b)), our procedure is able not only to filter out the additional noise from the segmented images -- see Subfigures (l) - (o) -- but also to produce a decomposition with well-separated, meaningful components. Note also that by examining Subfigure (p), we see that with $\nu = 10$, almost all of the resulting noise was that which was added (i.e.\ very little of the information from the original image was classified as noise). As in Figure \ref{fig:comparison:ChanVese_Bilevel}, Subfigure (k) of Figure \ref{fig:comparison:BaeYuanTai2010_Bilevel:brain} shows convergence on the log scale, implying very fast convergence on the linear scale. \setcounter{subfigure}{0} \begin{figure}[!ht] \begin{center} \includegraphics[width=1\textwidth]{Fig11.png} \caption{Animal images with textural regions of interest. Row one shows the original images and segmentations taken from \cite{NiBressonChanEsedoglu2009}. Rows 2 - 5 show the various components of our bilevel SHT model; see main text for details. The bilevel model parameters are the same as in Figure \ref{fig:Barbara:BilevelModel} with the following exceptions: $T_2 = 1$ for each of the three images; $L = 40, 40, 12$ and $\nu = 30, 25, 15$ for the images in columns 1, 2, and 3, respectively. } \label{fig:comparison:texture} \end{center} \end{figure} Finally, we move on to consider images that contain only a textural region of interest. The images in Figure \ref{fig:comparison:texture} depict various animals, each with well-defined textural markings; the first row of images is taken directly from \cite{NiBressonChanEsedoglu2009}. We begin by noting that many methods already exist to define a region of texture; see for example the methodology in \cite{NiBressonChanEsedoglu2009}. Our SHT procedures were not designed for this goal, though extracting such a region is possible with our bilevel SHT model. Row three of Figure \ref{fig:comparison:texture} shows the texture component of the bilevel SHT decomposition. This texture component was then binarized and a morphological operator applied to obtain the textural boundaries shown in row 2. Rows four and five show the piecewise-smooth and piecewise-constant bilevel SHT components, respectively. Note that our bilevel model, though not designed for this purpose, still does an admirable job of capturing the textural boundary. One advantage of our approach is that instead of only defining this boundary, our procedure also allows one to separate the texture inside from the remainder of the image. \section{Conclusion} \label{sec:conclusion} This work provides algorithms to simultaneously decompose and segment images containing regions of both texture and homogeneity. This can be seen as an extension of the Mumford and Shah model to a much larger class of natural images. Two approaches are presented, corresponding to two alternative solutions of the $\text{G}_S$-norm for texture ${\mathbf v}$: the multiphase SHT approach based on the $\text{G}_S$-norm solution provided by Aujol and Chambolle \cite{AujolChambolle2005}, and the bilevel SHT approach based on the solution of Vese and Osher \cite{VeseOsher2003}.
In practice we find that the bilevel SHT algorithm is better able to discriminate between the homogeneous and textural regions, and thus we focus on this approach in Section 5 and recommend it for practical applications. The likely reason for the superior performance of the bilevel model is that the Vese and Osher \cite{VeseOsher2003} approach to solving the $\text{G}_S$-norm utilized in the bilevel SHT model approximates $\norm{ \vec{{\mathbf g}} }_{\ell_\infty}$ with $\norm{\vec{{\mathbf g}}}_{\ell_1}$. This enhances the sparsity of $\vec{{\mathbf g}}$: although the original image is typically not sparse, it is sparse in some transform domain, and this sparsity is usually measured by the $\ell_0$-norm (or its convex relaxation, the $\ell_1$-norm). One shortcoming of our models is the large number of required parameters, and we hope to reduce the size of the parameter set as well as to analyze the convergence of the proposed minimization in future work. \section*{Acknowledgements} The authors thank Professors Len Stefanski, David Banks and Ingrid Daubechies for their helpful comments. This material was based upon work partially supported by the National Science Foundation under Grant DMS-1127914 to the Statistical and Applied Mathematical Sciences Institute. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
\section{Introduction} The $\delta$ Scuti stars are a class of pulsating stars falling in the HR diagram where the main sequence overlaps the lower extension of the Cepheid instability strip (Breger 2000; Aerts et al. 2010). They are in the core hydrogen burning or shell hydrogen burning stage (Aerts et al. 2010), with masses from 1.5 $M_{\odot}$ to 2.5 $M_{\odot}$ (Aerts et al. 2010) and pulsation periods from 0.5 to 6 hours (Breger 2000). Their pulsations are driven by the $\kappa$ mechanism (Baker $\&$ Kippenhahn 1962, 1965; Zhevakin 1963; Li $\&$ Stix 1994) in the second partial ionization zone of helium. Some $\delta$ Scuti stars show multi-periodic pulsations, such as 4 Cvn (Breger et al. 1999), FG Vir (Breger et al. 2005), and HD 50870 (Mantegazza et al. 2012), and are therefore good candidates for asteroseismological studies. HD 50844 was discovered to be a $\delta$ Scuti variable star by Poretti et al. (2005) during their preparatory work for the CoRoT mission. The basic physical parameters of HD 50844 were also obtained by Poretti et al. (2005) from Str$\ddot{o}$mgren photometry. They are listed as follows: $T_{\rm eff}=$ 7500 $\pm$ 200 K, log$g$=3.6 $\pm$ 0.2, and [Fe/H] = $-0.4$ $\pm$ 0.2. High-resolution spectroscopic observations obtained with the FEROS instrument mounted on the 2.2-m ESO/MPI telescope at La Silla resulted in the value of $\upsilon\sin i$ = 58 $\pm$ 2 km $\rm s^{-1}$ and the inclination angle $i=$ 82 $\pm$ 4 deg (Poretti et al. 2009). HD 50844 was observed from 2 February 2007 to 31 March 2007 ($\Delta t$ = 57.61 d) by CoRoT during the initial run (IR01). Detailed frequency analysis of the observed timeseries by Poretti et al. (2009) revealed very dense frequency signals in the range of 0-30 ${\rm d}^{-1}$. In particular, they identified the frequency 6.92 ${\rm d}^{-1}$ with the largest amplitude as the fundamental radial mode by combining spectroscopic and photometric data. Meanwhile, very high-degree oscillation modes (up to $l=14$) were identified by Poretti et al. (2009) with the software FAMIAS (Zima 2008) to fit the line profile variations (Mantegazza 2000). Based on an independent analysis, Balona (2014) arrived at the conclusion that a normal mode density might exist for the CoRoT timeseries of HD 50844. He extracted a total of 59 significant oscillation modes from the CoRoT timeseries. Asteroseismology is a powerful tool to investigate the internal structure of pulsating stars that show rich pulsation modes in observations, such as 44 Tau (Civelek et al. 2001; K{\i}rb{\i}y{\i}k et al. 2003; Garrido et al. 2007; Lenz et al. 2008, 2010) and $\alpha$ Oph (Zhao et al. 2009; Deupree et al. 2012). Mode identifications are very important for asteroseismic studies of $\delta$ Scuti stars. Any eigenmode of stellar nonradial oscillations can be characterized by its radial order $k$, spherical harmonic degree $l$, and azimuthal number $m$ (Christensen-Dalsgaard 2003). For $\delta$ Scuti stars, there are usually only a few observed modes to be identified, such as FG Vir (Daszy$\acute{\rm n}$ska-Daszkiewicz et al. 2005; Zima et al. 2006) and 4 Cvn (Castanheira et al. 2008; Schmid et al. 2014). Observed frequencies of a pulsating star can be compared with the results of theoretical models only if their ($l$, $m$) values have been determined in advance. For a rotating star, a non-radial oscillation mode will split into $2l+1$ different components.
According to the asymptotic theory of stellar oscillations, the $2l+1$ components of one mode of ($k$, $l$) are separated by almost the same spacing for a slowly rotating star. In our work, we try to identify the observed frequencies obtained by Balona (2014) based on the rotational splitting law of g modes. Then we compute a grid of theoretical models to examine whether the computed stellar models can provide a reasonable fit to the observed frequencies. In Section 2, we analyse the rotational splitting of the observational data. In Section 3, we describe our stellar models, including input physics in Section 3.1 and details of the grid of stellar models in Section 3.2. Our fitting results are analysed in Section 4. Finally, we summarize our results in Section 5. \section{Analysis of rotational splitting} As already pointed out by Poretti et al. (2005), HD 50844 is in the post-main sequence evolution stage with a contracting helium core and an expanding envelope. The steep gradient of chemical abundance in the hydrogen burning shell will result in a large Brunt-V$\ddot{a}$is$\ddot{a}$l$\ddot{a}$ frequency $N$ there. As a result, there are two propagation zones inside the star: one for g modes in the helium core and the other for p modes in the stellar envelope. As already pointed out by Poretti et al. (2009), most of the observed pulsation modes for HD 50844 should be gravity and mixed modes. The mixed modes show two characteristic features: they have pronounced g-mode character in the helium core and p-mode character in the stellar envelope. In our work, we pay more attention to those modes having frequencies near or above that of the fundamental radial mode ($\nu > 75\,\mu$Hz). We list 40 frequencies obtained by Balona (2014) in Table 1. The errors of the observed frequencies obtained by Balona (2014) are very small (less than 0.0015 $\mu$Hz) and are therefore not listed in Table 1. The approximate expression relating the rotational splitting ($\delta \nu_{k,l}$) to the rotation period ($P_{\rm rot}$) for g modes was derived by Dziembowski $\&$ Goode (1992) as \begin{equation} \nu_{k,l,m}-\nu_{k,l,0}=m\delta\nu_{k,l}=\frac{m}{P_{\rm rot}}(1-\frac{1}{L^{2}})+\frac{m^{2}}{P_{\rm rot}^{2}\nu_{k,l,0}}\frac{4L^{2}(2L^{2}-3)-9}{2L^{4}(4L^{2}-3)}. \end{equation} In Eq. (1), $L^{2}=l(l+1)$, and $m$ ranges from $-l$ to $l$, resulting in $2l+1$ different values. Considering $\upsilon\sin i$ = 58 $\pm$ 2 km $\rm s^{-1}$ of HD 50844 (Poretti et al. 2009), the second term on the right-hand side of Eq. (1) is very small (e.g., less than 1.6$\%$ of the value of the first term for $\frac{1}{P_{\rm rot}}$ = 5 $\mu$Hz and $\nu_{k,l,0}$ = 100 $\mu$Hz). Therefore, only the first-order effect is considered in our work. According to Eq. (1), modes with $l=1$ constitute a triplet, modes with $l=2$ constitute a quintuplet, and modes with $l=3$ constitute a septuplet. Meanwhile, the rotational splittings of modes with $l=1$, $l=2$, and $l=3$ are in a fixed proportion, i.e., $\delta\nu_{k,l=1} : \delta\nu_{k,l=2} : \delta\nu_{k,l=3} = \frac{3}{5} : 1 : \frac{11}{10}$ (Winget et al. 1991). Furthermore, the values of the rotational splitting are very close for modes with $l\ge3$; e.g., the difference between the splittings of modes with $l=4$ and those with $l=3$ is only about $\frac{0.033}{P_{\rm rot}}$. If one complete nontuplet is identified, these modes can be identified as modes with $l=4$. Beyond that, however, it is difficult to distinguish multiplets of $l=4$ and higher values from those of $l=3$.
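As a quick numerical check, the first-order coefficients $1-1/L^{2}$ entering Eq. (1) can be evaluated directly; the short Python snippet below (illustrative only) reproduces the ratio $\frac{3}{5} : 1 : \frac{11}{10}$ and the $\approx 0.033/P_{\rm rot}$ gap between the $l=3$ and $l=4$ splittings quoted above.
\begin{verbatim}
# First-order splitting coefficients (1 - 1/L^2) of Eq. (1), with L^2 = l(l+1).
coeff = {l: 1.0 - 1.0 / (l * (l + 1)) for l in (1, 2, 3, 4)}
print(coeff)                # {1: 0.5, 2: 0.8333..., 3: 0.9166..., 4: 0.95}
print(coeff[1] / coeff[2])  # ~0.6    -> ratio 3/5
print(coeff[3] / coeff[2])  # ~1.1    -> ratio 11/10
print(coeff[4] - coeff[3])  # ~0.0333 -> the ~0.033/P_rot gap between l = 4 and l = 3
\end{verbatim}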
Modes of high spherical harmonic degree have been detected in spectroscopy of several $\delta$ Scuti stars, such as HD 101158 (Mantegazza 1997) and BV Cir (Mantegazza et al. 2001). Since no complete nontuplet is identified for HD 50844, modes with $l\ge4$ are not considered in our work. Based on the above considerations, the frequency splitting due to rotation has two properties. Firstly, the $2l+1$ split frequencies of one mode are separated by nearly equal spacings. Secondly, rotational splittings derived from modes with different spherical harmonic degree $l$ are in a specific proportion. We therefore search the observed frequencies for frequency differences ranging from 1 $\mu$Hz to 20 $\mu$Hz. Possible multiplets due to rotational splitting are listed in Table 2. It can be found in Table 2 that ($f_{21}$, $f_{22}$, $f_{23}$) and ($f_{27}$, $f_{29}$, $f_{33}$) constitute two multiplets with an averaged frequency difference $\delta\nu_{1}$ of 2.434 $\mu$Hz, and ($f_{9}$, $f_{11}$, $f_{14}$) constitute another multiplet with an averaged frequency difference $\delta\nu_{2}$ of 8.017 $\mu$Hz. It is worth noting that the ratio of $\delta\nu_{1}$ to $\delta\nu_{2}/2$ is 0.607, which agrees well with the property of g-mode rotational splitting (Winget et al. 1991). Therefore, ($f_{21}$, $f_{22}$, $f_{23}$) and ($f_{27}$, $f_{29}$, $f_{33}$) are identified as two complete triplets, which are denoted as Multiplet 1 and Multiplet 2 in Table 2. Poretti et al. (2009) performed mode identifications with the FAMIAS method. They identified $f_{21}$ as ($l=3$, $m=3$), $f_{23}$ as ($l=3$, $m=2$), and $f_{33}$ as ($l=3$, $m=1$). Besides, ($f_{9}$, $f_{11}$, $f_{14}$) is identified as modes with $l=2$ on the basis of the ratio of $\delta\nu_{1}$ to $\delta\nu_{2}/2$. The values of the azimuthal number $m$ for $f_{9}$, $f_{11}$, and $f_{14}$ are then identified to be $m=(-2, 0, +2)$. Poretti et al. (2009) identified $f_{9}$ as a low $l$ mode. Moreover, the value of $\delta\nu_{k,l=1}$ is estimated to be 2.434 $\mu$Hz, $\delta\nu_{k,l=2}$ to be 4.009 $\mu$Hz, and $\delta\nu_{k,l=3}$ to be 4.462 $\mu$Hz. The frequencies $f_{24}$ and $f_{25}$ may constitute Multiplet 3 with a frequency difference of 2.431 $\mu$Hz, which is close to $\delta\nu_{k,l=1}$. We may identify their spherical harmonic degree as $l=1$. When identifying their azimuthal number $m$, there are two possibilities, i.e., corresponding to modes of $m=(-1,0)$ or $m=(0,+1)$. Poretti et al. (2009) identified $f_{24}$ as ($l=5$, $m=3$). The frequencies $f_{1}$ and $f_{5}$ may constitute Multiplet 5 with a frequency difference of 8.072 $\mu$Hz, which is about twice $\delta\nu_{k,l=2}$. We may identify their spherical harmonic degree as $l=2$. When identifying their azimuthal number $m$, there are three possibilities, i.e., corresponding to modes of $m=(-2,0)$, $m=(0,+2)$, or $m=(-1,+1)$. Besides, the frequency difference between $f_{15}$ and $f_{18}$ is 3.939 $\mu$Hz, and the frequency difference between $f_{35}$ and $f_{36}$ is 3.983 $\mu$Hz. Both are close to $\delta\nu_{k,l=2}$. We may identify their spherical harmonic degree as $l=2$. There are four possible identifications for their azimuthal number $m$, i.e., corresponding to modes of $m=(-2,-1)$, $m=(-1,0)$, $m=(0,+1)$, or $m=(+1,+2)$. Poretti et al. (2009) identified $f_{15}$ as ($l=8$, $m=5$), and $f_{35}$ as ($l=12$, $m=10$). It can be noticed in Table 2 that ($f_{2}$, $f_{7}$), ($f_{12}$, $f_{20}$), and ($f_{38}$, $f_{39}$) may constitute three independent multiplets.
The frequency difference between $f_{2}$ and $f_{7}$ is 12.228 $\mu$Hz, the frequency difference between $f_{12}$ and $f_{20}$ is 11.825 $\mu$Hz, and the frequency difference between $f_{38}$ and $f_{39}$ is 11.890 $\mu$Hz. All of these frequency differences are about three times $\delta\nu_{k,l=2}$. We may identify their spherical harmonic degree as $l=2$. When identifying their azimuthal number $m$, there are two possibilities, i.e., corresponding to modes of $m=(-2,+1)$ or $m=(-1,+2)$. Poretti et al. (2009) identified $f_{12}$ as ($l=3$, $m=1$), and $f_{39}$ as ($l=14$, $m=12$). The three frequencies $f_{3}$, $f_{6}$, and $f_{8}$ may constitute Multiplet 11. The frequency difference of 8.985 $\mu$Hz between $f_{3}$ and $f_{6}$ is about twice $\delta\nu_{k,l=3}$, and the frequency difference of 13.412 $\mu$Hz between $f_{6}$ and $f_{8}$ is about three times $\delta\nu_{k,l=3}$. Their spherical harmonic degree $l$ can be determined to be $l=3$. There are two possible identifications for their azimuthal number $m$, i.e., corresponding to modes of $m=(-3,-1,+2)$ or $m=(-2,0,+3)$. Another three frequencies, $f_{10}$, $f_{17}$, and $f_{19}$, may constitute Multiplet 12. The frequency difference of 17.253 $\mu$Hz between $f_{10}$ and $f_{17}$ is about four times $\delta\nu_{k,l=3}$, and the frequency difference of 4.302 $\mu$Hz between $f_{17}$ and $f_{19}$ is close to $\delta\nu_{k,l=3}$. We may identify their spherical harmonic degree as $l=3$. When identifying their azimuthal number $m$, there are two possibilities, i.e., corresponding to modes of $m=(-3,+1,+2)$ or $m=(-2,+2,+3)$. Poretti et al. (2009) identified $f_{10}$ as ($l=5$, $m=0$), and $f_{17}$ as ($l=11$, $m=7$). The four frequencies $f_{26}$, $f_{28}$, $f_{34}$, and $f_{37}$ may constitute Multiplet 13. It can be noticed in Table 2 that consecutive pairs among $f_{26}$, $f_{28}$, and $f_{34}$ are separated by about $\delta\nu_{k,l=3}$. The frequency difference of 13.331 $\mu$Hz between $f_{34}$ and $f_{37}$ is about three times $\delta\nu_{k,l=3}$. We may identify their spherical harmonic degree as $l=3$. There are two possible identifications for their azimuthal number $m$, i.e., corresponding to modes of $m=(-3,-2,-1,+2)$ or $m=(-2,-1,0,+3)$. It can be noticed in Table 2 that there are slight differences between the rotational splittings of different multiplets. This may be due to deviations from the asymptotic formula (e.g., Multiplet 5 and Multiplet 6). There are also slight differences within the same multiplet (e.g., in Multiplet 2). The present frequency resolution of 0.2 $\mu$Hz may be the main reason for this difference. Besides, there are only two components in Multiplets 3, 5, 6, 7, 8, 9, and 10. Different physical origins, such as the large separation produced by the so-called island modes (Garc{\'{\i}}a Hern{\'a}ndez et al. 2013; Ligni{\`e}res et al. 2006), are also possible, given the assumptions adopted in our approach. Six frequencies ($f_{13}$, $f_{16}$, $f_{30}$, $f_{31}$, $f_{32}$, and $f_{40}$) remain unidentified because no frequency splitting is found for them. It can be noticed in Table 1 that $f_{40}$ is far from the other observed frequencies, so it cannot be identified on the basis of the rotational splitting law. Besides, the frequency difference between $f_{13}$ and $f_{15}$ is 4.460 $\mu$Hz, which agrees with $\delta\nu_{k,l=3}$. It can be noticed in Table 2 that the frequency difference between $f_{15}$ and $f_{18}$ is 3.939 $\mu$Hz, which agrees with $\delta\nu_{k,l=2}$.
There are thus two possible identifications for $f_{15}$, i.e., as a mode with $l=3$ or $l=2$. There are six possibilities for the former case, i.e., corresponding to modes of $m=(-3,-2)$, $(-2,-1)$, $(-1,0)$, $(0,+1)$, $(+1,+2)$, or $(+2,+3)$. There are four possibilities for the latter case, as listed in Table 2. The frequency difference between $f_{16}$ and $f_{21}$ is 13.184 $\mu$Hz, which is about three times $\delta\nu_{k,l=3}$. It can be found in Table 2 that $f_{21}$ is identified as one component of a complete triplet (Multiplet 1). In Multiplet 1, the frequency difference between $f_{21}$ and $f_{22}$ agrees well with the difference between $f_{22}$ and $f_{23}$. Besides, the large differences in amplitude among $f_{21}$, $f_{22}$, and $f_{23}$ agree well with the inclination angle $i=$ 82 $\pm$ 4 deg (Poretti et al. 2009) according to the relation derived by Gizon $\&$ Solanki (2003). Furthermore, the spherical harmonic degree $l$ represents the number of nodal lines on the spherical surface. For higher spherical harmonic degree $l$, the sphere is divided into more zones. Due to the geometrical cancellation, modes with low degree $l$ are more easily observed. The four frequencies $f_{30}$, $f_{31}$, $f_{32}$, and $f_{33}$ are very close to each other. It can be noticed in Table 2 that $f_{33}$ has already been identified as one component of a triplet (Multiplet 2). The frequency spacing between theoretical pulsation modes with $l=1$ is very large; thus $f_{30}$, $f_{31}$, and $f_{32}$ cannot be identified as modes with $l=1$. Besides, only one component of each is observed, which makes it difficult to identify their spherical harmonic degree $l$. In the following theoretical calculation, we compare them with modes with $l=$ 0, 2, and 3. Based on the above analyses, our mode identifications differ from those of Poretti et al. (2009). Poretti et al. (2009) identified the observed frequencies with the FAMIAS method by fitting the line profile variations. The uniqueness of the solution of this multiparameter fit to the line profile variations is still uncertain. Our mode identifications are based on the properties of g-mode rotational splitting. For the $\delta$ Scuti star HD 50844, the rotational splittings of modes with $l\ge3$ are very close according to Eq. (1). Distinguishing multiplets of $l=4$ or higher spherical harmonic degree $l$ from those of $l=3$ is very difficult. In spectroscopy, the behaviour of amplitudes and phases across the line profiles supplies information on both the spherical harmonic degree $l$ and the azimuthal number $m$. For instance, phase diagrams of six detected modes of BV Cir clearly show that they are prograde modes with high azimuthal number $-14 \leq m \leq -12$ (Mantegazza et al. 2001). \section{Stellar Models} \subsection{Input Physics} In our work, we use the package ``pulse'' from version 6596 of the Modules for Experiments in Stellar Astrophysics (MESA) (Paxton et al. 2011, 2013) to compute stellar evolutionary models and to calculate their pulsation frequencies (Christensen-Dalsgaard 2008). Our theoretical models for HD 50844 are constructed from the ZAMS to the post-main sequence stage, fully covering the observed ranges of gravitational acceleration and effective temperature. The OPAL equation-of-state tables (Rogers $\&$ Nayfonov 2002) are used. The OPAL opacity tables from Iglesias $\&$ Rogers (1996) are used in the high-temperature region, and opacity tables from Ferguson et al. (2005) are used in the low-temperature region.
The $T - \tau$ relation of the Eddington grey atmosphere is adopted in the atmosphere integration. The mixing-length theory (B$\ddot{o}$hm-Vitense 1958) is adopted to treat convection. Effects of convective overshooting and element diffusion are not considered in our calculations. \subsection{Details of Model Grids} The calibrated value of $\alpha=1.77$ for the Sun is adopted in our stellar evolutionary models. The evolutionary track of a star on the HR diagram is determined by the stellar mass $M$ and the initial chemical composition $(X,Y,Z)$. In our calculations, we set the initial helium fraction $Y=0.275$ as a constant. Then we choose the range of the mass fraction of heavy elements $Z$ from 0.005 to 0.018, in order to cover the observational value of $\rm [Fe/H]=-0.40\pm0.20$ (Poretti et al. 2005). A grid of stellar models is computed with MESA, with $M$ ranging from 1.5 $M_{\odot}$ to 2.2 $M_{\odot}$ with a step of 0.01 $M_{\odot}$, and $Z$ ranging from 0.005 to 0.018 with a step of 0.001. Figure 1 shows the grid of evolutionary tracks with various sets of $M$ and $Z$. The error box corresponds to the effective temperature range of 7300 K $<$ $T_{\rm eff}$ $<$ 7700 K and to the gravitational acceleration range of 3.40 $<$ log$g$ $<$ 3.80. For each stellar model falling in the error box, we calculate its frequencies of pulsation modes with $l$ = 0, 1, 2, and 3, and fit them to the observed frequencies according to \begin{equation} \chi^{2}=\frac{1}{N}\sum_{i}(|\nu_{\rm obs}^{i}-\nu_{\rm mod}^{i}|^{2}). \end{equation} In Eq. (2), $\nu_{\rm obs}^{i}$ is the observed frequency, $\nu_{\rm mod}^{i}$ is the calculated pulsation frequency, and $N$ is the total number of observed frequencies. Based on numerical simulations, most of the uncertainties of the calculated pulsation frequencies are less than 0.03 $\mu$Hz, except for a few that reach up to 0.06 $\mu$Hz. \section{Analysis of Results} \subsection{Best-fitting model} In Section 2, we give our possible mode identifications for the observed frequencies obtained by Balona (2014) based on the rotational splitting law. When fitting the models, we try to use the calculated frequencies of each model to fit four identified modes, including two modes with $l=1$ ($f_{22}$ and $f_{29}$), one mode with $l=2$ ($f_{11}$), and the fundamental radial mode ($f_{4}$). Poretti et al. (2009) suggested that $f_{4}$ might be a mode with $l$ = 0 based on the mode identifications with the FAMIAS method. Besides, Poretti et al. (2009) detected $f_{4}$ in the variations of equivalent width and radial velocity and identified $f_{4}$ as the mode with $l=0$. Moreover, the photometric identifications made by Poretti et al. (2009) on the basis of the colour information of the multi-colour photometric data show $f_{4}$ as the fundamental radial mode. This is in accordance with the property that $f_{4}$ has the highest amplitude in the variations of both equivalent width and radial velocity. In our work, we adopt the identification of $f_{4}$ as the fundamental radial mode. Figure 2 shows the change of $1/\chi^{2}$ as a function of the effective temperature $T_{\rm eff}$ for all considered models. In Fig. 2, each curve corresponds to one evolutionary track. It can be noticed in Fig. 2 that the value of $1/\chi^{2}$ is very large in a very small parameter space, i.e., $M=1.80 - 1.81$ $M_{\odot}$ and $Z=0.008 - 0.009$. Their physical parameters are very close. The physical parameters of HD 50844 are obtained based on these models. They are listed in Table 3.
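For concreteness, the following minimal Python sketch illustrates the grid-based $\chi^{2}$ selection of Eq. (2); the frequency values, the two-point model grid, and the nearest-frequency matching per degree $l$ are placeholders chosen only for illustration, not the actual data of Table 1 or the exact fitting procedure used here.
\begin{verbatim}
import numpy as np

def chi2(nu_obs, nu_mod):
    # Eq. (2): mean squared difference between each observed frequency and the
    # closest theoretical frequency of the same degree l (nearest-frequency
    # matching is a simplifying assumption of this sketch).
    diffs = []
    for l, nu in nu_obs:
        diffs.append((nu - nu_mod[l][np.argmin(np.abs(nu_mod[l] - nu))]) ** 2)
    return float(np.mean(diffs))

# Placeholder inputs: (l, frequency in muHz) pairs and a toy grid keyed by (M, Z).
nu_obs = [(0, 80.1), (1, 115.0), (1, 150.3), (2, 102.7)]
models = {
    (1.80, 0.009): {0: np.array([79.5, 104.0]),
                    1: np.array([114.2, 151.1]),
                    2: np.array([103.4, 130.8])},
    (1.81, 0.008): {0: np.array([80.0, 104.2]),
                    1: np.array([114.9, 150.5]),
                    2: np.array([102.5, 131.0])},
}
best = min(models, key=lambda mz: chi2(nu_obs, models[mz]))
print(best, chi2(nu_obs, models[best]))   # grid point with the smallest chi^2
\end{verbatim}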
We select the theoretical model with the minimum value of $\chi^{2}$, corresponding to $(M=1.81, Z=0.008)$, as our best-fitting model, which is marked with the filled circle in Fig. 2. The theoretical frequencies of our best-fitting model are listed in Table 4, where $n_{p}$ is the number of radial nodes in the p-mode propagation region, and $n_{g}$ the number of radial nodes in the g-mode propagation region. In particular, $\beta_{k,l}$ is a parameter measuring the size of the rotational splitting for rigid-body rotation in the general formula of rotational splitting derived by Christensen-Dalsgaard (2003): \begin{equation} \beta_{k,l}=\frac{\int_{0}^{R}(\xi_{r}^{2}+l(l+1)\xi_{h}^{2}-2\xi_{r}\xi_{h}-\xi_{h}^{2})r^{2}\rho dr} {\int_{0}^{R}(\xi_{r}^{2}+l(l+1)\xi_{h}^{2})r^{2}\rho dr}. \end{equation} In Eq. (3), $\xi_{r}$ is the radial displacement, $\xi_{h}$ the horizontal displacement, and $\rho$ the local density. Therefore, the effect of rotation is basically determined by the value of $\beta_{k,l}$. For high-order g modes, the terms containing $\xi_{r}$ can be neglected, thus \begin{equation} \beta_{k,l} \backsimeq 1 - \frac{1}{l(l+1)}, \end{equation} which is in agreement with Eq. (1). Figure 3 shows the plot of $\beta_{k,l}$ versus the theoretical frequency $\nu$ for the best-fitting model. It can be seen that most of the values of $\beta_{k,l}$ in Fig. 3 agree well with the value of 0.5 for $l=1$ modes, 0.833 for $l=2$ modes, or 0.917 for $l=3$ modes derived from Eq. (1). These results indicate that the corresponding modes have pronounced g-mode characteristics. On the other hand, the $\beta_{k,l}$ of several $l=1$, $l=2$, and $l=3$ modes deviate considerably from the values derived from Eq. (1), indicating that they also possess significant p-mode characteristics. Table 5 lists the results of the frequency comparisons for those modes in Table 2, where the $m\neq0$ modes in the columns labelled $\nu_{\rm mod}$ are derived from the $m=0$ modes based on $P_{\rm rot}$ and $\beta_{k,l}$. The filled circles in Fig. 3 denote the corresponding $m=0$ modes in Table 5. It can be seen clearly in Fig. 3 that the values of $\beta_{k,l}$ for the $m=0$ modes corresponding to $f_{22}$, $f_{29}$, $f_{11}$, $f_{1}$, $f_{15}$, and $f_{34}$ agree well with those derived from Eq. (1). In Multiplets 7, 8, 9, 11, and 12, the $m=0$ components are not observed. The values of $\beta_{k,l}$ for the corresponding $m=0$ components in Multiplets 7, 9, 11, and 12 are also in good agreement with those derived from Eq. (1). The mode corresponding to $f_{25}$ in Multiplet 3 and the corresponding $m=0$ component in Multiplet 10 have slightly larger values of $\beta_{k,l}$ than those derived from Eq. (1). It can be noticed in Table 5 that there are two possible identifications for Multiplet 8, i.e., corresponding to modes of $m$ = (-1, +2) derived from 80.189 $\mu$Hz (2, 0, -78, 0), or $m$ = (-2, +1) derived from 84.368 $\mu$Hz (2, 0, -74, 0). The filled squares in Fig. 3 denote these two possible $m = 0$ modes in Multiplet 8. It can be seen in Fig. 3 that the values of $\beta_{k,l}$ for both choices are in good agreement with Eq. (1). These results confirm our approach of using Eq. (1) to search for rotational splitting in Section 2. Based on the best-fitting model, possible identifications for $f_{13}$, $f_{16}$, $f_{30}$, $f_{31}$, $f_{32}$, and $f_{40}$ are listed in Table 6. It can be noticed in Table 6 that $f_{30}$, $f_{31}$, $f_{32}$, and $f_{40}$ may be identified as four modes with $l=3$. Poretti et al. (2009) identified $f_{30}$ as $(l=4,m=2)$ and $f_{31}$ as $(l=4, m=3)$.
Considering the uncertainties of the spectroscopic observations of Poretti et al. (2009), the spherical harmonic degrees $l$ of our suggested identifications agree with those of Poretti et al. (2009). For $f_{16}$, there are three possible identifications, i.e., the modes with $(l,m)$ = $(2,-2)$, $(3,+2)$, or $(3,0)$. Besides, there are two possible identifications for $f_{15}$ based on the analyses in Section 2. If $f_{15}$ and $f_{13}$ are identified as two components of one incomplete septuplet, $(130.081, 134.465)$ derived from 138.850 $\mu$Hz $(3,2,-62,0)$ or $(130.035,134.392)$ derived from 143.104 $\mu$Hz $(3,2,-60,0)$ are the two possibilities. If $f_{15}$ and $f_{18}$ are identified as two components of one incomplete quintuplet, the results of the comparisons are listed in Table 5. Poretti et al. (2009) identified $f_{15}$ as a mode with ($l=8$, $m=5$). The spherical harmonic degree $l$ in both of our cases ($l=3$ or $2$) is lower than the value of Poretti et al. (2009). \subsection{Discussions} An important question is why the physical parameters of HD 50844 are so well constrained by only four identified pulsation modes. We have found a possible explanation for this result. First of all, it should be pointed out that the four identified modes consist of two $l=1$ modes ($f_{22}$ and $f_{29}$), one $l=2$ mode ($f_{11}$), and the fundamental radial mode ($f_{4}$). Table 4 shows that most of the pulsation modes are mixed modes. Figure 4 shows the propagation diagram for the best-fitting model. According to the parameter settings of MESA (Paxton et al. 2011, 2013), the boundary of the helium core is set to the position where the hydrogen fraction $X_{cb}$ = 0.01. The vertical lines in Fig. 4 and Fig. 5 denote the position of the boundary of the helium core: the inner zone is the helium core and the outer zone is the stellar envelope. It can be seen in Fig. 4 that the Brunt-V\"{a}is\"{a}l\"{a} frequency $N$ has a peak inside the helium core, which corresponds to the hydrogen burning shell. Figure 5 shows the distributions of the radial displacement for the fundamental radial mode and the three nonradial modes that we have considered. It can be seen clearly in Fig. 5 that the fundamental radial mode propagates mainly in the stellar envelope and thus probes the properties of the stellar envelope. The three nonradial modes, however, propagate as g modes in the helium core and as p modes in the stellar envelope. This confirms that they are mixed modes, and they can therefore probe the properties of the helium core. In order to fit those four modes at the same time, both the stellar envelope and the helium core must be matched to the considered star. The acoustic radius $T_{\rm h}$ is defined as $T_{\rm h}$ = $\int_{0}^{R}c_{s}^{-1}dr$ (Aerts et al. 2010), where $c_{s}$ is the adiabatic sound speed, so $T_{\rm h}$ is mainly determined by the distribution of $c_{s}$ inside the star. Since the sound speed $c_{s}$ is much smaller in the stellar envelope than in the helium core, the integral is dominated by the envelope, and $T_{\rm h}$ mainly reflects the properties of the stellar envelope. According to the asymptotic theory of g modes, there is an equation for the period separation (Unno et al.
1979; Tassoul 1980): \begin{equation} \Delta\bar{P}(l) = \frac{\Pi_{0}}{\sqrt{l(l+1)}} = \frac{2\pi^{2}(\int_{r_{1}}^{r_{2}}\frac{N}{r}dr)^{-1}}{\sqrt{l(l+1)}}, \end{equation} where $r_{1}$ is the inner boundary of the region where gravity waves propagate, $r_{2}$ is the outer boundary, and $N$ is the Brunt-V\"{a}is\"{a}l\"{a} frequency. In Eq. (5), $\Pi_{0}$=$2\pi^{2}(\int_{r_{1}}^{r_{2}}N/rdr)^{-1}$, which is mainly determined by the distribution of the Brunt-V\"{a}is\"{a}l\"{a} frequency $N$ in the helium core. Therefore, $\Pi_{0}$ can be used to probe the properties of the helium core. Figure 6 shows the period spacing $\Pi_{0}$ versus the acoustic radius $T_{\rm h}$ for theoretical models with the same initial metallicity $Z$ but different stellar masses $M$. Figure 7 shows the same plot for theoretical models with the same stellar mass $M$ but different initial metallicities $Z$. The filled circle corresponds to our best-fitting model, while the filled triangles correspond to the stellar models having the minimum values of $\chi^{2}$ on the corresponding evolutionary tracks. It can be noticed in Fig. 6 that the acoustic radii $T_{\rm h}$ of the stellar models marked by the filled triangles clearly deviate from the value of our best-fitting model, which indicates that the stellar envelopes of these models cannot match the actual structure of the considered star. In contrast, the period spacings $\Pi_{0}$ of the stellar models marked by the filled triangles in Fig. 7 clearly deviate from the value of our best-fitting model, which indicates that the helium cores of these models cannot match the actual structure of the considered star. Based on the above arguments, the size of the helium core of the $\delta$ Scuti star HD 50844 is thus determined for the first time. The corresponding physical parameters are listed in Table 3. Gizon $\&$ Solanki (2003) investigated in detail the relation between the stellar oscillation amplitude and the inclination angle $i$ of the stellar rotation axis. It can be noticed in Table 1 that the $m=$ 0 component $f_{22}$ of Multiplet 1 has an amplitude that is about 9 times smaller than that of the $m=-1$ component $f_{21}$ and about 7 times smaller than that of the $m=+1$ component $f_{23}$. Such large differences correspond to an inclination angle $i\approx76 ^{\circ}$ according to the relation given by Gizon $\&$ Solanki (2003). This is roughly in agreement with the value of $82^{\circ}$ (Poretti et al. 2009). The $m=0$ component in Multiplet 4 has the lowest amplitude, and the $m=$ 0 component in Multiplet 2 also has a relatively small amplitude. The rotational period $P_{\rm rot}$ is determined to be 2.44$^{+0.13}_{-0.08}$ days according to Eq. (1). It can be noticed in Table 3 that the theoretical radius of HD 50844 is $R$ = 3.300 $\pm$ 0.023 $R_{\odot}$. The rotational velocity at the equator is then derived as $\upsilon_{\rm rot}=$ 68.33$^{+2.34}_{-3.70}$ km $\rm s^{-1}$ according to $\upsilon_{\rm rot}$ = 2$\pi R/P_{\rm rot}$. Assuming the inclination angle $i=$ 82 $\pm$ 4 deg (Poretti et al. 2009), $\upsilon_{\rm rot}\sin i$ is estimated to be 66.86 $\pm$ 3.64 km $\rm s^{-1}$, which is higher than the value of $\upsilon\sin i$ = 58 $\pm$ 2 km $\rm s^{-1}$ (Poretti et al. 2009). It has been discussed in Sect. 4.1 that most of the considered frequencies are mixed modes with pronounced g-mode characteristics.
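The diagnostics used in this subsection reduce to simple quadratures and arithmetic. A minimal Python sketch is given below; the profile arrays (radius, sound speed, Brunt-V\"{a}is\"{a}l\"{a} frequency) are assumed to come from the stellar model, and all function and variable names are our own:
\begin{verbatim}
import numpy as np

def trapz(y, x):
    # simple trapezoidal rule, to avoid depending on a particular numpy version
    y, x = np.asarray(y, float), np.asarray(x, float)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def acoustic_radius(r, cs):
    # T_h = integral dr / c_s over the model (Aerts et al. 2010)
    return trapz(1.0 / cs, r)

def period_spacing(r, N, l):
    # Eq. (5): Pi_0 = 2*pi^2 / int(N/r dr) over the g-mode cavity,
    # Delta P(l) = Pi_0 / sqrt(l(l+1))
    Pi0 = 2.0 * np.pi**2 / trapz(N / r, r)
    return Pi0 / np.sqrt(l * (l + 1.0))

# Equatorial rotation velocity from the seismic radius and rotation period;
# R = 3.300 R_sun and P_rot = 2.44 d give roughly 68 km/s, as quoted above.
R_sun_km = 6.957e5                              # nominal solar radius in km
v_rot = 2.0 * np.pi * 3.300 * R_sun_km / (2.44 * 86400.0)   # km/s
v_sini = v_rot * np.sin(np.radians(82.0))       # i = 82 deg (Poretti et al. 2009)
print(v_rot, v_sini)
\end{verbatim}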
The corresponding rotational velocity derived from the rotational splitting of these modes therefore mainly reflects the rotational properties of the helium core. The $\delta$ Scuti star HD 50844 is a slightly evolved star. As the star evolves into the post-main-sequence stage, the core shrinks and the envelope expands. By conservation of angular momentum, the rotational angular velocity of the core should then be larger than that of the envelope. The spectroscopic value of $\upsilon\sin i$ (Poretti et al. 2009) mainly reflects the rotation of the envelope. This may be the reason why our rotational velocity is higher than that of Poretti et al. (2009). \section{Summary} In our work, we have analysed the observed frequencies given by Balona (2014) for possible rotational splitting, and carried out numerical model fittings for the $\delta$ Scuti star HD 50844. We summarize our results as follows: 1. We identify two complete triplets ($f_{21}$, $f_{22}$, $f_{23}$) and ($f_{27}$, $f_{29}$, $f_{33}$) as modes with $l=1$, and one incomplete quintuplet ($f_{9}$, $f_{11}$, $f_{14}$) as modes with $l=2$, as well as one more incomplete triplet ($f_{24}$, $f_{25}$) as modes with $l=1$ and six more incomplete quintuplets ($f_{1}$, $f_{5}$), ($f_{15}$, $f_{18}$), ($f_{35}$, $f_{36}$), ($f_{2}$, $f_{7}$), ($f_{12}$, $f_{20}$), and ($f_{38}$, $f_{39}$) as modes with $l=2$. Besides, three incomplete septuplets $(f_{3}, f_{6}, f_{8})$, $(f_{10}, f_{17}, f_{19})$, and $(f_{26}, f_{28}, f_{34}, f_{37})$ are identified as modes with $l=3$. Based on the frequency differences of the above multiplets, the corresponding rotational period of HD 50844 is found to be 2.44$^{+0.13}_{-0.08}$ days. 2. Based on our model calculations, we compare the theoretical pulsation modes with four identified observed modes, including three nonradial modes ($f_{11}$, $f_{22}$, $f_{29}$) and the fundamental radial mode ($f_{4}$). The physical parameters of HD 50844 are thereby constrained to a small region. Based on the fitting results, the theoretical model with $M=1.81$ $M_{\odot}$ and $Z=$ 0.008 is suggested as the best-fitting model. 3. Based on our best-fitting model, we find that the values of $\beta_{k,l}$ for most of the calculated modes are in good agreement with the asymptotic values for g modes, while some modes have values of $\beta_{k,l}$ that are considerably higher than the asymptotic values. However, the values of $\beta_{k,l}$ for the $m=0$ modes in the identified triplets, quintuplets, and septuplets are in good agreement with the asymptotic values for g modes, which confirms that our approach of searching for rotational splitting based on the rule for g modes is self-consistent. 4. Based on comparisons of all observed frequencies with their theoretical counterparts, we find that most of the considered frequencies may belong to mixed modes. The radial fundamental mode $f_{4}$ probes the properties of the stellar envelope, while the three nonradial modes $f_{11}$, $f_{22}$, and $f_{29}$ probe the properties of the helium core. These features require that both the stellar envelope and the helium core be matched to the actual structure of the star in order to fit those four oscillation modes. Finally, the mass of the helium core of HD 50844 is estimated to be 0.173 $\pm$ 0.004 $M_{\odot}$. \begin{acknowledgements} We are sincerely grateful to an anonymous referee for instructive advice and productive suggestions, which greatly helped us to improve the manuscript. This work is funded by the NSFC of China (Grant No.
11333006, 11521303, and 11403094) and by the foundation of the Chinese Academy of Sciences (Grant No. XDB09010202 and the ``Light of West China'' Program). We gratefully acknowledge the computing time granted by the Yunnan Observatories and provided on the facilities of the Yunnan Observatories Supercomputing Platform. We are also very grateful to J.-J. Guo, G.-F. Lin, Q.-S. Zhang, Y.-H. Chen, and J. Su for their kind discussions and suggestions. \end{acknowledgements}
\section{Introduction} The chiral ring of a four dimensional $\mathcal{N}=1$ theory plays a crucial role in understanding the dynamics of the theory. In particular, the chiral ring can be used to determine the structure of the moduli space of vacua and the phase structure \cite{Cachazo:2002ry}. Moreover, the chiral ring structure seems to be quite important even if the theory has a unique vacuum \cite{Xie:2016hny,Buican:2016hnq}. However, little is known about the general structure of the chiral ring of an $\mathcal{N}=1$ theory. The purpose of this paper is to study the chiral ring of a superconformal field theory (SCFT). We ask the following question: when is a chiral ring ${\cal R}$ the chiral ring of a SCFT? An obvious necessary condition is that the chiral ring has to be graded, since for a SCFT there is always a $U(1)_R$ symmetry which acts non-trivially on all of the chiral operators. In particular, we really should start with a polarized chiral ring $({\cal R}, \zeta)$ and ask whether it is the chiral ring of a SCFT with $U(1)_R$ symmetry $\zeta$. On the other hand, it is also known that the existence of a grading is not sufficient. A second motivation for asking the above question is the following: in many studies of supersymmetric field theory we start with an asymptotically free gauge theory ${\cal T}$ and assume that it flows to a SCFT ${\cal T}_0$ at a certain point of the moduli space (often the most singular point). We can compute the chiral ring ${\cal R}$ of the theory ${\cal T}$, and let us denote by ${\cal R}_{0}$ the chiral ring of ${\cal T}_0$. Many interesting quantities of the SCFT ${\cal T}_0$ can be computed if ${\cal R}= {\cal R}_0$. For example, we can use $a$-maximization to determine the $U(1)_R$ symmetry \cite{Intriligator:2003jj} of ${\cal T}_0$. In general, however, ${\cal R}_0$ can differ from ${\cal R}$, for example: \begin{itemize} \item[(a)] It is believed that the chiral ring ${\cal R}$ of $\mathcal{N}=1$ $SU(N_c)$ SQCD with ${3\over 2}N_c<N_f<3N_c$ is the chiral ring ${\cal R}_0$ of the SCFT at the origin of the moduli space \cite{Seiberg:1994bz,Intriligator:1995au}; \item[(b)] If $N_c<N_f\leq {3\over 2}N_c$, the chiral ring ${\cal R}$ of SQCD is not the chiral ring ${\cal R}_0$ of the SCFT at the origin, as the mesons become free at the SCFT point \cite{Seiberg:1994bz,Intriligator:1995au}; \item[(c)] A trivial example is a chiral scalar $\phi$ with a cubic superpotential; the chiral ring ${\cal R}$ of this theory is defined by the ideal $\phi^2=0$. However, the superpotential is marginally irrelevant at the SCFT point, so the IR SCFT is free and its chiral ring ${\cal R}_0$ is freely generated by the operator $\phi$, which is different from ${\cal R}$. \end{itemize} From the above examples, we learn that the possible reasons for ${\cal R}$ failing to be the chiral ring of a SCFT are: \begin{itemize} \item Some operators hit the unitarity bound and become free at the SCFT point, and this modifies the chiral ring. \item Certain superpotential terms are irrelevant at the SCFT point, and we should not impose the corresponding constraints on the chiral operators of the SCFT ${\cal T}_0$ \footnote{Notice that we cannot ignore such superpotential terms for ${\cal T}$, as they could be relevant at other vacua; such terms are called dangerously irrelevant operators in \cite{Kutasov:1995ss}.}; \item There might be some other unknown dynamics that would lead to a different chiral ring for the SCFT. We do not have a systematic way to detect such effects.
\end{itemize} We can learn several interesting lessons from the above examples. Firstly, ${\cal R}_0$ has more symmetries than ${\cal R}$: there is a new symmetry generator acting on ${\cal O}$ alone if ${\cal O}$ hits the unitarity bound and becomes free, and if a superpotential term formed by an operator ${\cal O}$ becomes irrelevant, there could be a new symmetry acting on this operator ${\cal O}$ alone. Secondly, ${\cal R}_0$ leads either to a higher central charge $a$ \footnote{This does not violate the $a$ theorem, as ${\cal R}$ is not the chiral ring of a SCFT.} or, as evidenced by example (c), to the same central charge, but never to a smaller one. Motivated by the above examples, we introduce a notion of stability for chiral rings which characterizes whether ${\cal R}={\cal R}_0$; this notion also gives a method to define the chiral ring of a SCFT. The definition involves two basic elements: the test chiral ring and generalized $a$-maximization. Let's first discuss the test chiral ring. From the above examples, if the chiral ring ${\cal R}$ fails to be the chiral ring of a SCFT, there is an associated different chiral ring ${\cal R}_0$: for instance, ${\cal R}_0$ can be derived by forgetting some of the superpotential terms if ${\cal R}$ is derived from a quiver gauge theory. More generally, ${\cal R}_0$ should have more symmetries, and it should satisfy a certain continuity condition with respect to ${\cal R}$. Based on those observations, we propose: \begin{definition} A test chiral ring ${\cal R}_0$ can be derived from ${\cal R}$ by using a symmetry generator $\eta$ on ${\cal R}$ and taking a \textbf{flat} limit. \label{def1} \end{definition} Let's discuss more precisely what this definition means. Assume that the chiral ring ${\cal R}$ is given by \begin{equation} {\cal R}={C[x_0,x_1,\ldots, x_n]\over I}, \end{equation} where $x_i,i=0,\ldots,n$ are the generators of the chiral ring and $I=(f_1,f_2,\ldots, f_m)$ is the ideal which encodes the chiral ring relations among the generators. Now consider a one parameter subgroup $\eta(t)$ of $C^{n+1}$ and define its action on the elements of the ideal $I$ as \begin{equation} f(t)=\eta(t)\cdot f=f(\eta(t)\cdot(x_0,x_1,\ldots, x_n)). \end{equation} So we have a family of rings ${\cal R}_t={C[x_0,x_1,\ldots, x_n]\over I_t}$ parameterized by $t$. The flat limit $I_0 = \lim_{t\rightarrow 0} I_t$ is defined as follows. We can decompose any $f \in I$ as $f = f_1+\ldots+ f_k$ into elements in distinct weight spaces for the $C^{*}$ action $\eta$ on $C[x_0,\ldots, x_n]$. Let us write $in(f)$ for the element $f_i$ with the smallest weight, which we can think of as the ``initial term'' of $f$. Then $I_0$ is the ideal generated by the set of initial terms $\{in(f)|f \in I\}$. The test chiral ring is defined as ${\cal R}_0={C[x_0,x_1,\ldots, x_n]\over I_0}$. The test chiral ring has the following crucial properties: (a) the flat limit is the same if we use the symmetry generator $s\eta$ with $s>0$; (b) ${\cal R}_0$ is invariant with respect to the symmetries of ${\cal R}$ and $\eta$; (c) the Hilbert series of ${\cal R}$ and ${\cal R}_0$ are the same for the symmetries of ${\cal R}$; this is the continuity condition on the test configuration. Using the above procedure, we can generate an infinite number of test chiral rings.
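To make the flat-limit operation concrete, here is a minimal Python/sympy sketch (function names are ours; the sample polynomial is a $k=3$ instance of the hypersurface example revisited in the K-stability discussion below). It extracts the initial term of a single polynomial under a diagonal $C^*$ action; for a principal ideal this already gives $I_0$, while in general one must take the initial terms of all elements of $I$, as in the definition above:
\begin{verbatim}
import sympy as sp

def initial_term(f, gens, weights):
    # in(f): the piece of f of smallest weight under the C* action that
    # scales gens[i] with weight weights[i]
    poly = sp.Poly(f, *gens)
    pieces = {}
    for monom, coeff in poly.terms():
        wt = sum(m * w for m, w in zip(monom, weights))
        pieces[wt] = pieces.get(wt, sp.Integer(0)) \
            + coeff * sp.prod(g**m for g, m in zip(gens, monom))
    return pieces[min(pieces)]

x, y, z, w = sp.symbols('x y z w')
f = x**2 + y**2 + z**2 + w**3          # illustrative hypersurface, k = 3
# eta acts on w only (weight 1 on w, weight 0 on x, y, z):
print(initial_term(f, (x, y, z, w), (0, 0, 0, 1)))    # -> x**2 + y**2 + z**2
# the opposite generator -eta keeps the other piece instead:
print(initial_term(f, (x, y, z, w), (0, 0, 0, -1)))   # -> w**3
\end{verbatim}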
The criteria for determining whether a test chiral ring ${\cal R}_0$ destabilizes a polarized ring $({\cal R},\zeta)$ is \begin{definition} A test chiral ring ${\cal R}_0$ destabilizes $({\cal R},\zeta)$ if ${\cal R}_0$ gives \textbf{no less} central charge $a$ with respect to the space of possible $U(1)_R$ symmetries $a \zeta+s\eta,~s\geq 0$. \label{def2} \end{definition} It is crucial that $s\geq 0$ so we have the same test chiral ring using the symmetry generator $s\eta$ on ${\cal R}$. Now we state the definition of stable chiral ring: \begin{definition} A polarized chiral ring $({\cal R},\zeta)$ is called stable if there is no destabilizing test chiral ring. \end{definition} This definition can be thought of as the \textbf{generalized a maximization} procedure. For the original \textbf{a maximization} procedure \cite{Intriligator:2003jj}, we do not change the chiral ring, namely we only use the symmetry generator of ${\cal R}$ to generate the test chiral ring, and the flat limit ${\cal R}_0$ is the same as ${\cal R}$. The hidden assumption in this process is that the ring ${\cal R}$ is already the ring of a SCFT, and we would like to determine the correct $U(1)_R$ symmetry. Once we define the notion of stability of chiral ring, we would like to state the main conjecture of this paper: \begin{conjecture} A polarized chiral ring $({\cal R},\zeta)$ is the chiral ring of a SCFT if and only if it is stable. \end{conjecture} This conjecture answers the question when a chiral ring can be that of a SCFT. We would like to test the above conjecture for general class of $\mathcal{N}=1$ theories. However, we face several difficulties. First, it is usually not easy to derive the full chiral ring of a theory, and so examples are in short supply. Secondly, we do not know how to characterize the $U(1)_R$ like symmetry from the chiral ring itself. Thirdly, it is not known how to determine the trial central charge $a(\zeta)$ for a symmetry $\zeta$ from the chiral ring itself. However, these problems are solved for a class of models arising from string theory. Precisely, it is possible to determine the exact chiral ring for $\mathcal{N}=1$ theories derived from $N$ D3 branes probing a three dimensional singularity \cite{Klebanov:1998hh}: the 3d singularity can be defined by an affine variety $X$ with coordinate ring ${\cal H}_{X} := \mathbb{C}[x_1,\ldots, x_n]/I$, which determines the chiral ring ${\cal R}$ of field theory. We can characterize $U(1)_R$ like symmetries by requiring the top form $\Omega$ \footnote{The existence of such form puts restriction on the singularity type.} on $X$ having charge two. In the large N limit, the central charge can be computed from the Hilbert series of $X$ \cite{Bergman:2001qi,Martelli:2005tp,Martelli:2006yb}. We would like to determine whether the chiral ring $({\cal R},\zeta)$ of above field theory model is stable or not. Assuming our conjecture relating stability of chiral ring and SCFT, the stability of the chiral ring of these models has the following geometric consequence: If the chiral ring is stable, then according to AdS/CFT dictionary \cite{Maldacena:1997re,Witten:1998qj,Gubser:1998bc}, in the large N limit the IR SCFT is dual to type IIB string theory on $AdS_5\times L_5$ \cite{Klebanov:1998hh, Morrison:1998cs}, where $L_5$ is a five manifold, defined as the link of a 3d singularity $X$, which carries a Sasaki-Einstein (SE) metric. The $U(1)_R$ symmetry $\zeta$ is identified with the Reeb vector field on $L_5$. 
In other words, the stability of the chiral ring is equivalent to the existence of a Sasaki-Einstein metric on $L_5$, or equivalently to the existence of a Ricci-flat conic metric on $X$. The existence of Sasaki-Einstein metrics has been studied extensively in the recent mathematics literature. Briefly, in the setting of Fano K\"ahler manifolds, the Yau-Tian-Donaldson conjecture predicted that the existence of K\"ahler-Einstein metrics with positive scalar curvature is equivalent to the algebro-geometric notion of K-stability \cite{donaldson2002scalar}, which is an improvement of the original conjecture of Yau \cite{yau1993open}. This conjecture was recently proved by Chen-Donaldson-Sun \cite{chen2015kahler,chen2015kahler1,chen2015kahler2}. In our more general context, a notion of K-stability and its implications for the existence of Sasaki-Einstein metrics was studied by the first author and Sz\'ekelyhidi in \cite{collins2012k,collins2015sasaki}. The notion of K-stability involves the construction of so-called test configurations ${\cal X}$, and the criterion for determining whether ${\cal X}$ destabilizes $X$ is the sign of the so-called Donaldson-Futaki invariant. One of the major points of this paper is to provide an interpretation of the Donaldson-Futaki invariant as a version of generalized $a$-maximization: \begin{theorem} The K-stability of the affine variety $X$ is equivalent to the stability of the chiral ring of the corresponding field theory. \end{theorem} The paper is organized as follows: section two reviews some basic facts about $\mathcal{N}=1$ chiral rings; section three studies theories engineered using D3 branes probing certain three dimensional singularities $X$, and the K-stability of $X$ is interpreted as the generalized $a$-maximization procedure introduced above; section four discusses some physical consequences of K-stability; finally, a conclusion is given in section five. \section{Generality of chiral ring} Consider a four dimensional $\mathcal{N}=1$ supersymmetric field theory. A chiral operator ${\cal O}_i$ is defined as an operator annihilated by the supercharges $\bar{Q}_{\dot{\alpha}}$, and is defined modulo cohomology of $\bar{Q}_{\dot{\alpha}}$: ${\cal O}_i\sim {\cal O}_i+[\bar{Q}_{\dot{\alpha}}, \chi]$ \cite{Cachazo:2002ry}. Chiral operators have some interesting properties: \begin{itemize} \item The sum of two chiral operators is still a chiral operator, and the product of two chiral operators is still a chiral operator. \item There is an identity operator. \item The expectation value of a product of chiral operators is independent of their positions, and they have the simple OPE structure ${\cal O}_i{\cal O}_j=\sum C_{ij}^k {\cal O}_k$ with $C_{ij}^k$ constant. \end{itemize} These properties imply that the chiral operators form a commutative ring with an identity. To solve an $\mathcal{N}=1$ theory, one would like to determine the full set of chiral operators, that is, to find the generators and relations determining the chiral ring. For example, for $SU(N)$ gauge theory with an adjoint matter field $\Phi$, the generators of single trace chiral operators are $\text{Tr}(\Phi^k)$, $\text{Tr}(W_\alpha \Phi^k)$, and $\text{Tr}(W_\alpha W^\alpha\Phi^k)$; the full chiral ring relations are determined in \cite{Cachazo:2002ry}. The chiral ring relations are usually much harder to determine.
Typically, we have the following classical chiral ring relations: \begin{itemize} \item Chiral ring relations coming from the finite size of matrices: we have the Cayley-Hamilton equation for a matrix. For example, for a chiral field in the adjoint representation of the gauge group $SU(N)$, the chiral operators $\text{Tr}(\phi^i), i> N$ can be expressed in terms of $\text{Tr}(\phi^j)$ with $j\leq N$. \item Chiral ring relations coming from the constraints of the superpotential. For example, consider $\mathcal{N}=4$ $SU(N)$ gauge theory: this theory has three chiral fields $X, Y, Z$ and a superpotential $W=\text{Tr}XYZ-\text{Tr} X Z Y$. The $F$-term equations from the superpotential are \begin{equation} [X,Y]=[Y,Z]=[Z,X]=0, \end{equation} so the matrices $X,Y,Z$ commute, which leads to chiral ring relations of the type $\text{Tr}(XYZ)=\text{Tr}(XZY)$ and so on. \end{itemize} These classical chiral ring relations can be modified by quantum effects such as instantons, Konishi anomalies and strongly coupled dynamics, and we then have the quantum chiral ring. The determination of the generators of the chiral ring and of the quantum chiral ring relations is a central task in the study of supersymmetric gauge theory. If we assume that the generators of the chiral ring are $x_1, x_2, \ldots x_s$ and that the chiral ring relations are generated by polynomial relations, then the quantum chiral ring is isomorphic to \begin{equation} \mathbb{C}[x_1, x_2,\ldots, x_s]/I, \end{equation} where $\mathbb{C}[x_1, x_2,\ldots, x_s]$ is the ring of polynomials with complex coefficients, and $I$ is the ideal generated by the chiral ring relations. In general, the parameters of our theory, such as the dynamically generated scale $\Lambda$ and masses $m_i$, should be included among the generators and relations of the chiral ring. From now on, by the chiral ring we will always mean the quantum chiral ring. \textbf{Example}: Consider $\mathcal{N}=1$ $SU(N)$ SQCD with $N_f=N$ quarks $Q_i, \tilde{Q}_j$; the gauge invariant chiral operators are \begin{align} & M_{ij}=Q_i\tilde{Q}_j, \nonumber\\ &B=\epsilon_{\alpha_1 \alpha_2 \ldots \alpha_N} Q_1^{\alpha_1}\ldots Q_N^{\alpha_N}, \nonumber\\ &\tilde{B}=\epsilon_{\alpha_1 \alpha_2 \ldots \alpha_N} \tilde{Q}_1^{\alpha_1}\ldots \tilde{Q}_N^{\alpha_N}. \end{align} The classical chiral ring relation is $\text{Det}(M)-B\tilde{B}=0$, but quantum mechanically the ring relation is changed to \begin{equation} f=\text{Det}(M)-B\tilde{B}-\Lambda^{2N_c}=0. \end{equation} Here $\Lambda$ is the dynamical scale of the theory. The chiral ring is then $C[M, B, \tilde{B},\Lambda]/f$ \cite{Intriligator:1995au}. We are interested in the chiral ring of a SCFT. The $\mathcal{N}=1$ SCFT has a distinguished $U(1)_R$ symmetry, and the scaling dimension of a chiral operator is related to its $U(1)_R$ charge by \begin{equation} D({\cal O})={3\over 2} R({\cal O}). \end{equation} In general, it is not easy to determine the $U(1)_R$ symmetry of a SCFT. Intriligator and Wecht found a remarkable $a$-maximization procedure to determine the $R$-symmetry \cite{Intriligator:2003jj}: the correct $R$-symmetry maximizes the central charge $a$. One usually defines a SCFT as the IR limit of a UV quiver gauge theory, and the $U(1)_R$ symmetry of the IR SCFT can then be determined as follows. First, find all the anomaly free $U(1)$ symmetries in the UV and define a trial R symmetry $U(1)_{trial}=\sum_I s_I F_I$.
Second, compute the trial central charge using the formula \begin{equation} a_{trial}(s_I)={3\over 32} (3 \text{Tr}(R_{trial}^3)-\text{Tr}(R_{trial})). \end{equation} The true $U(1)_R$ symmetry is found by maximizing the trial central charge, and this fixes the coefficients $s_I$. A crucial assumption of the above procedure is that all the symmetries of the IR SCFT are manifest in the UV description. However, this is often not the case. For example, two possible scenarios are \begin{itemize} \item The violation of the unitarity bound: if a gauge invariant operator ${\cal O}$ violates the unitarity bound after doing $a$-maximization, it is argued that this field becomes free \cite{Seiberg:1994bz}, and that there is an accidental $U(1)$ symmetry acting on this operator ${\cal O}$ only. \item Even if there is no violation of the unitarity bound, accidental symmetries are still possible as a result of some unknown dynamical effect. \end{itemize} \textbf{Example}: Let's illustrate the above point with an example. Consider $\mathcal{N}=1$ $SU(N_c)$ SQCD with $N_f$ fundamental flavors. There is a unique $U(1)_R$ type symmetry such that the quarks $Q$ and antiquarks $\tilde{Q}$ have the following charges \begin{equation} R_{Q}=R_{\tilde{Q}}={N_f-N_c\over N_f}. \end{equation} We require $N_f>N_c$ so that the $R$ charge is positive. The mesons have $R$ charge $R(M)=2{N_f-N_c\over N_f}$, and the baryons and anti-baryons have $R$ charge $R(B)=R(\tilde{B})={N_c(N_f-N_c)\over N_f}$. So, using this candidate $U(1)_R$ symmetry for the IR SCFT, we have $\Delta(M)={3\over2}R(M)<1$ if $N_c<N_f<{3\over 2} N_c$, and it is argued that these mesons become free in the IR \cite{Seiberg:1994bz}. The baryons do not violate the unitarity bound; however, they become free if $N_f=N_c+1$, as can be seen from the Seiberg dual description \cite{Seiberg:1994pq}. In fact, the appearance of accidental symmetries implies that the chiral ring of the IR SCFT is different from that of the UV theory, which means that the chiral ring of the UV theory is not stable. Since it is difficult to detect the appearance of accidental symmetries, it is also difficult to tell whether the UV chiral ring is stable or not. In the next section, we will consider a class of $\mathcal{N}=1$ models where we relate the stability of the chiral ring to a problem in geometry. \section{K-stability and stability of chiral ring} \subsection{The chiral ring, Hilbert series and the central charge $a$} Consider an $\mathcal{N}=1$ theory on the world volume of $N$ D3 branes probing a graded three dimensional normal, Kawamata log-terminal (klt), Gorenstein singularity $X$ (see figure \ref{d2}). The 3d singularity is defined by an affine ring \begin{equation} {\cal H}_X=\mathbb{C}[x_1, x_2, \ldots, x_r]/I, \end{equation} where $\mathbb{C}[x_1, x_2, \ldots, x_r]$ is the polynomial ring and $I$ is an ideal. Let's explain the meaning of the various terms characterizing our singularity: \textbf{normal} means that the codimension of the singular locus $P$ of $X$ is no less than two; \textbf{graded} means that there is at least one $C^*$ action on $X$; \textbf{Gorenstein} means that the canonical sheaf $K_X$ is a line bundle and one has a non-vanishing top form $\Omega$ on $X\setminus P$; \textbf{Kawamata log-terminal} can be characterized by the condition that the volume form $\Omega\wedge\overline{\Omega}$ has finite mass near the singularities of $X$ (see \cite{collins2015sasaki}).
We have the following map between the properties of the singularity $X$ and the field theory: \begin{itemize} \item The automorphism group $G$ of $X$ gives the (complexified) anomaly free symmetries of the field theory, and a possible $U(1)_R$ symmetry $\zeta$ generates a one parameter subgroup of $G$. \item The coordinate ring of the moduli space of vacua is the coordinate ring of the variety $M_{N}=X^N/S_N$. In the large N limit, the single trace operators parameterizing $M_\infty$ can be identified with the ring elements of $X$ \footnote{Notice that in the large N limit the ring structure of $X$ is not the chiral ring structure of the field theory. In the large N limit, the chiral ring structure is trivial, namely the product of two single trace operators defines a multiple trace operator.}: namely, the holomorphic functions on $X$ give the chiral scalar operators of the field theory in the large N limit. So $X$ essentially determines the nontrivial part of the chiral ring \footnote{There are other types of scalar chiral operators which do not get expectation values, and also chiral baryonic operators. These operators do not seem to affect the stability issue of our model.}. \item $X$ has a canonical $(3,0)$ form $\Omega$, and it has charge 2 under the \textbf{possible} $U(1)_R$ symmetry $\zeta$: \begin{equation} [\Omega]=2, \end{equation} (in fact, this condition is equivalent to $X$ being klt). We also require that the $U(1)_R$ charges of the coordinates $x_i$ are positive. \end{itemize} \begin{figure}[h] \centering \includegraphics{brane.eps} \caption{One can engineer four dimensional $\mathcal{N}=1$ theories using D3 branes probing a three dimensional klt Gorenstein singularity $X$. For the singularity $X$, one can define a link $L$ which is a five dimensional Sasakian manifold.} \label{d2} \end{figure} Consider a possible $U(1)_R$ symmetry $\zeta$ which is realized as an automorphism of $X$. The trial central charge $a(\zeta)$ (of order $N^2$) of the field theory can be computed from the Hilbert series of the ring $X$ \cite{Bergman:2001qi,Martelli:2005tp,Martelli:2006yb,Eager:2010yu}. The Hilbert series of $X$ with respect to $\zeta$ is defined by \begin{equation} Hilb(X,\zeta,t)=\sum (dim H_{\alpha}) t^\alpha, \end{equation} where $H_\alpha$ is the subspace of the ring ${\cal H}_X$ with charge $\alpha$ under the action of $\zeta$. The Hilbert series has a Laurent series expansion around $t=1$ obtained by setting $t=e^{-s}$ and expanding \begin{equation} Hilb(X,\zeta, e^{-s})={a_0(\zeta)\over s^3}+{a_1(\zeta)\over s^2}+\ldots \end{equation} The coefficients $(a_0(\zeta), a_1(\zeta))$ have the following properties: \begin{itemize} \item $a_0$ is proportional to the volume of the link $L_5$ of the singularity, and the trial central charge $a(\zeta)$ (the order $N^2$ term) is related to $a_0$ as \begin{equation} a(\zeta)={27 N^2\over 32} {1\over a_0(\zeta)}. \label{cent} \end{equation} \item $a_0=a_1$, which is due to the condition that $\Omega$ has charge $2$. \item $a_0$ is a convex function of the symmetry generators \cite{Martelli:2006yb}. \end{itemize} For the singularity $X$, one can define a 5 dimensional link $L_5$ with Sasakian structure \cite{boyer2008sasakian}. If there is a Sasaki-Einstein metric on the link $L_5$, one can find the true $U(1)_R$ symmetry by minimizing $a_0$, and the field theory central charge is given by the formula~\eqref{cent}. In the large N limit, the SCFT on the D3 branes is dual to Type IIB string theory on the following geometry \begin{equation} AdS_5\times L_5.
\end{equation} The existence of the SE metric on $L_5$ is also equivalent to the existence of a Ricci-flat conic metric on $X$. \textbf{Example}: Consider the conifold singularity defined by the principal ideal $f(z)=z_0^2+z_1^2+z_2^2+z_3^2=0$; it is known that the link $L_5$ is the manifold $T^{1,1}$ and has a Sasaki-Einstein metric. There is a $C^*$ action $\zeta$ on this singularity, $f(\lambda^{q_i} z_i)=\lambda f(z_i)$, with weights $({1\over 2},{1\over 2},{1\over 2},{1\over 2})$. The canonical three form is $\Omega={dz_0\wedge d z_1\wedge d z_2\wedge dz_3\over df}$. $\Omega$ has charge $1$ under the symmetry $\zeta$, so the possible $U(1)_R$ symmetry is actually $\zeta^{'}=2\zeta$, in order to ensure that $\Omega$ has charge two. The Hilbert series of $X$ with respect to the symmetry generator $\zeta^{'}$ is \begin{equation} Hilb(t)={(1-t^2)\over (1-t)^4}|_{t=e^{-s}}={2\over s^3}+{2\over s^2}+\ldots \end{equation} Using formula \eqref{cent}, we find that the central charge is equal to $a={27\over 64} N^2$, which agrees with the result derived from field theory \cite{Klebanov:1998hh}. \subsection{K Stability and generalized $a$-maximization} Now the question is whether the link $L_5$ has a Sasaki-Einstein metric. This question reduces to studying the K-stability of the ring $X$ \cite{collins2012k,collins2015sasaki}. On the other hand, $X$ essentially determines the chiral ring of the field theory, and if the chiral ring of the field theory is stable, i.e. it is the chiral ring of a SCFT, the field theory is dual to type IIB string theory on the background $AdS_5\times L_5$, where $L_5$ has a Sasaki-Einstein metric. From this AdS/CFT correspondence, one can see that K-stability should be equivalent to the stability of the chiral ring of the field theory defined in the introduction. In this subsection, we will discuss two crucial ingredients of K-stability: test configurations and the Donaldson-Futaki invariant. We will also give a physical interpretation of these two elements and show that K-stability is equivalent to the stability of the chiral ring. \subsubsection{Test configurations} Let's first describe the definition of a test configuration arising in K-stability, which actually motivated our definition of the test chiral ring in the introduction. In the K-stability context, one constructs a test configuration by constructing a flat family $\pi: {\cal X} \rightarrow \mathbb{C}$ (for a simple illustration of flat and non-flat families, see figure \ref{flat}). This flat family is generated by a one dimensional symmetry generator $\eta$, and for $t\neq 0$, the ring ${\cal H}_{X_t}$ corresponding to the fiber $X_t = \pi^{-1}(t)$ is isomorphic to the original ring ${\cal H}_{X}$. At $t=0$, the ring degenerates into a different ring which we call ${\cal H}_{X_0}$; the fiber $X_0$ is also called the central fibre. The flat limit is a quite common concept in algebraic geometry, but its definition is somewhat involved and we do not give a detailed introduction here; the interested reader may consult section 6 of \cite{eisenbud2013commutative}. Here, we just want to point out several important features of the flat family constructed above. \begin{itemize} \item[(a)] The Hilbert series is not changed if we use the same symmetry generator for the new ring ${\cal H}_{X_0}$. In particular, $X_0$ has the same dimension as $X$. \item[(b)] The maximal torus in the automorphism group of the central fibre $X_0$ has one more dimension of symmetry, generated by $\eta$, unless $X_0 \cong X$.
\end{itemize} We require that the degeneration be normal (which implies that the codimension of the singular locus is not less than two). The new singularity $X_0$ is still Gorenstein and klt and, in the non-trivial case, possesses an extra one-dimensional symmetry. \begin{figure}[h] \centering \includegraphics{test.eps} \caption{Left: A flat family of rings. At $t\neq 0$, there are two points, and the configuration degenerates into one point at $t=0$, which is the central fibre of this flat limit. Right: A non-flat family of rings. At $t\neq 0$, the ring is zero dimensional, but at $t=0$, the ring is one dimensional. } \label{flat} \end{figure} \textbf{Example}: Consider the ring $X$ defined by the ideal $x^2+y^2+z^2+w^k=0$, and consider a $\mathbb{C}^*$ action $\eta$ which acts only on the coordinate $w$, with the action $\eta(w)=t w$. We then get a family of rings parametrized by the coordinate $t$: \begin{equation} x^2+y^2+z^2+t^kw^k=0. \end{equation} The flat limit of this family over $t=0$ is found (in this case) by keeping the terms of lowest weight. The central fiber of this test configuration is then cut out by the equation \begin{equation} x^2+y^2+z^2=0. \end{equation} Notice that $l\eta$ with $l>0$ gives the same degeneration limit $X_0$. On the other hand, $l\eta$ with $l<0$ gives a different degeneration limit: we get the ring generated by the ideal $w^k=0$, which is not normal! \subsubsection{Futaki invariant and generalized $a$-maximization} Now let's start with a ring $X$ with symmetry $\zeta$, and let us also choose generators $t_i, i=1,\ldots, n$ for the Lie algebra $\mathfrak{t}$ of the maximal torus in the automorphism group $G$ of $X$. Let us write $\zeta=\sum_{i=1}^n \zeta_it_i$, and we may as well assume that $\zeta$ minimizes the volume over all the possible $U(1)_R$ symmetries parametrized by $\mathfrak{t}$. Consider a test configuration ${\cal X}$ generated by a symmetry generator $\eta$ and let $X_0$ denote the central fibre. We would like to determine whether or not $X_0$ destabilizes $X$. The crucial ingredient is the Donaldson-Futaki invariant defined in \cite{donaldson2002scalar}. The ring $(X_0, \zeta, \eta)$ is still Gorenstein and klt, and has a symmetry group of dimension at least two, generated by $\zeta$ and $\eta$. There is only a one dimensional family of possible $U(1)_R$ symmetries, as we need to impose the following two conditions: \begin{itemize} \item[(a)] The charges of the coordinates $x_i$ are positive; \item[(b)] The $(3,0)$ form has charge 2. \end{itemize} The second condition can be imposed by computing the Hilbert series of $X_0$ with respect to the symmetry generator and requiring $a_0=a_1$. This one dimensional candidate $U(1)_R$ symmetry can be parameterized as \begin{equation} \zeta(\epsilon)=\zeta+\epsilon(\eta-a\zeta). \end{equation} Notice that we require $\epsilon>0$ so that the symmetry $\epsilon(\eta-a\zeta)$ generates a test configuration with the same central fibre as the original one. Substituting the above parameterization into the equation $a_0=a_1$ and expanding to first order in $\epsilon$, we have \begin{equation} a_0(\zeta+\epsilon(\eta-a\zeta))=a_1(\zeta+\epsilon(\eta-a\zeta)) \rightarrow a_0(\zeta)+\epsilon (\eta-a \zeta)\cdot a_0^{'} = a_1(\zeta)+\epsilon (\eta-a \zeta)\cdot a_1^{'}. \end{equation} Here $a_0^{'}$ and $a_1^{'}$ are the vectors defined by the derivative ${da_i(\vec{x})\over d {\vec{x}}}|_{\vec{x}=\zeta}$, and $\vec{x}=\sum_{i=1}^ns_i t_i+b \eta$.
Using the result $a_0(\zeta)=a_1(\zeta)$, we have \begin{equation} a={\eta\cdot (a_0^{'}-a_1^{'}) \over \zeta \cdot (a_0^{'}-a_1^{'})}={\eta\cdot (a_1^{'}-a_0^{'}) \over a_0}={1\over a_0(\zeta)}({da_1(\zeta+\epsilon \eta) \over d \epsilon }-{d a_0(\zeta+\epsilon \eta) \over d\epsilon })|_{\epsilon=0}. \label{constant} \end{equation} We also use the fact $\zeta\cdot a_0^{'}=3 a_0(\zeta),~\zeta\cdot a_1^{'}=2a_1(\zeta)=2a_0(\zeta)$. Now the Futaki invariant is defined to be \begin{equation} F(X,\zeta, \eta)=D_\epsilon a_0(\zeta(\epsilon))|_{\epsilon=0}. \end{equation} This definition does not take the form of the original Futaki invariant defined in \cite{donaldson2002scalar}; however, we will now show that our definition is equivalent to the original one (see also \cite{collins2015sasaki} for more discussion). We have \begin{align} & F(X,\zeta,\eta)=D_\epsilon a_0(\zeta+\epsilon(\eta-a\zeta))=(\eta-a \zeta) \cdot a_0^{'} \nonumber\\ &=D_\epsilon a_0(\zeta+\epsilon \eta)-{\eta\cdot (a_0^{'}-a_1^{'}) \over a_0}D_\epsilon a_0(\zeta+\epsilon \zeta) \nonumber\\ &=D_\epsilon a_0(\zeta+\epsilon \eta)+3a_0(\zeta){\eta\cdot (a_0^{'}-a_1^{'}) \over a_0} \nonumber \\ &= D_\epsilon a_0(\zeta+\epsilon \eta)+3a_0(\zeta)D_{\epsilon}{a_1(\zeta+\epsilon \eta)\over a_0(\zeta+\epsilon \eta)}. \label{futaki} \end{align} In passing from the first line to the second we use the definition of $a$, and from the second line to the third we use the fact that $D_\epsilon a_0(\zeta+\epsilon \zeta)=-3 a_0(\zeta)$ (which can be found using the definition of the Hilbert series). The formula in the last line is precisely the Futaki invariant defined in \cite{collins2015sasaki}. Having defined the Futaki invariant, we can now state the definition of K-stability. \begin{definition} A polarized ring $(X,\zeta)$ is stable if for any non-trivial test configuration generated by the symmetry $\eta$, the Futaki invariant satisfies \begin{equation} F(X, \zeta, \eta)>0, \end{equation} while for the trivial test configuration, namely when the central fibre $X_0$ is the same as $X$, the Futaki invariant satisfies \begin{equation} F(X, \zeta, \eta)\geq0. \end{equation} \end{definition} We now provide a physical interpretation of the Futaki invariant $F$. Since $a_0$ is inversely proportional to the central charge of the coordinate ring of the central fiber, the Futaki invariant is directly related to the maximization of the central charge. The shape of the function $a_0$ with respect to $\epsilon$ is drawn in figure \ref{fig:futaki}. $F<0$ implies that $a_0(\epsilon)$ is minimized at $\epsilon>0$, and the new ring gives a larger central charge $a$! When $F=0$ and $X_0$ is different from $X$, the two rings give the same central charge $a$, but the central fiber $X_0$ has a strictly larger symmetry group, which then destabilizes $X$. When $F>0$, the new ring gives a smaller central charge over the allowed space of symmetries ($\epsilon>0$). In summary, the Futaki invariant implements generalized $a$-maximization: a test configuration $X_0$ destabilizes $X$ if it gives no smaller central charge! \begin{figure}[h] \centering \includegraphics{futaki.eps} \caption{Three situations for the Futaki invariant. $F>0$: $a_0(\epsilon)>a_0(0)$ for $\epsilon>0$; $F=0$: the minimum of $a_0$ is achieved at $\epsilon=0$; $F<0$: the minimum of $a_0$ is achieved for $\epsilon>0$.
Notice that we only need to look at $a_0$ for $\epsilon>0$.} \label{fig:futaki} \end{figure} \textbf{Example}: Consider the ring $X$ cut out by the equation $x^2+y^2+z^2+w^k=0$; this ring has a symmetry $\zeta$ with charges $({2k\over k+2},{2k\over k+2},{2k\over k+2},{4\over k+2})$ on the coordinates $(x,y,z,w)$. This symmetry is chosen such that the $(3,0)$ form $\Omega={dx \wedge dy \wedge dz \wedge dw\over df}$ has charge two. The Hilbert series for $\zeta$ is \begin{equation} Hilb(X,\zeta, t)=\frac{1-t^{\frac{4 k}{k+2}}}{\left(1-t^{\frac{4}{k+2}}\right) \left(1-t^{\frac{2 k}{k+2}}\right)^3}. \end{equation} Expanding around $t=1$, we find $a_0(\zeta)=a_1(\zeta)={(2+k)^3\over 8k^2}$. Now consider the test configuration generated by the symmetry $\eta$ with charges $(0,0,0,1)$. In this case, the central fibre $X_0$ is cut out by the equation $x^2+y^2+z^2=0$. Using formula \eqref{constant}, the one parameter family of possible $U(1)_R$ symmetries is \begin{equation} \zeta(\epsilon)=\zeta+\epsilon(\eta-{1\over 2}\zeta). \end{equation} The Hilbert series with respect to the above symmetry is \begin{equation} Hilb(X_0,\zeta(\epsilon),t)={1-t^{(1-{\epsilon\over 2}){4k\over k+2}}\over (1-t^{(1-{\epsilon\over 2}){2k\over k+2}})^3(1-t^{(1-{\epsilon\over 2}){4\over k+2}+\epsilon})}. \end{equation} Substituting $t=\exp(-s)$ and expanding the Hilbert series around $s=0$, we get \begin{equation} a_0(\zeta(\epsilon))=a_1(\zeta(\epsilon))=\frac{2 (k+2)^3}{(\epsilon-2)^2 k^2 (\epsilon k+4)}. \end{equation} The Futaki invariant is computed as \begin{equation} F=D_\epsilon a_0(\zeta(\epsilon))|_{\epsilon=0}=\frac{(4-k) (k+2)^3}{32 k^2}. \end{equation} So $F\leq 0$ for $k\geq 4$. Since $X_0$ is clearly not isomorphic to $X$, we conclude that $X_0$ destabilizes $X$ for $k\geq 4$. A physical interpretation of this result will be given in the next section. \subsubsection{Some discussions} Checking K-stability involves two steps: first finding a test configuration, and then computing the Futaki invariant. While the computation of the Futaki invariant is straightforward, the set of possible test configurations is in principle infinite. Thus, in order to check K-stability one needs to reduce the set of possible test configurations. There are several simplifications we can make: \begin{itemize} \item The first simplification has already been used, namely we require the central fibre to be normal, Gorenstein and klt. This is simply because the central fibre should describe the chiral ring of an $\mathcal{N}=1$ field theory. \item If the symmetry group of the ring $X$ is $G$, then one only needs to consider the flat families generated by symmetries which commute with $G$ \cite{datar2015k, collins2015sasaki}. This fact is quite useful for singularities with many symmetries. In particular, if the variety has a three dimensional symmetry group (in other words, $X$ is toric), then there are no non-trivial test configurations, and hence checking stability reduces to volume minimization (or $a$-maximization). \end{itemize} \section{Some physical consequences} \subsection{$a$-maximization} Let's assume that the ring $X$ is stable and has more than a one dimensional space of possible $U(1)_R$ symmetries. The determination of the $U(1)_R$ symmetry is then achieved by $a$-maximization \cite{Intriligator:2003jj}, or equivalently volume minimization \cite{Martelli:2005tp,Martelli:2006yb}. We now show that $a$-maximization can be explained using K-stability.
Consider a test configuration generated by the symmetry vector $\eta$ such that the central fibre $X_0$ is the same as $X$. The Futaki invariant is \begin{equation} F=D_\epsilon a_0(\zeta+\epsilon \eta)|_{\epsilon=0}=\eta\cdot a_0^{'}(\zeta). \end{equation} If $F(X, \zeta, \eta)>0$, the test configuration $(X_0, \zeta, \eta)$ does not destabilize $X$. But since $\eta$ preserves $X$, we can use the symmetry generator $-\eta$ to generate a test configuration with the same central fibre $X$, and the Futaki invariant now is $F(X, \zeta, -\eta)=-F(X, \zeta, \eta)<0$, which would make the ring unstable. So K-stability implies that the symmetry generator has to satisfy \begin{equation} a_0^{'}(\zeta)=0. \end{equation} Notice that since $a_0$ is a convex function, the solution of the above equation is a minimum, and therefore the central charge $a$ is maximized. \subsection{Unitarity bound} One can always generate a test chiral ring by using a symmetry acting on a single coordinate $x$ only. The central fibre $X_0$ is a new ring with $x$ free \footnote{Mathematically such a test configuration is generated by the so-called Rees algebra.}. The Futaki invariant is computed in \cite{collins2012k}, and the answer is \begin{equation} F\propto (dim(x)-1), \end{equation} where we ignore a positive constant, and $dim(x)$ is the scaling dimension of the chiral scalar operator $x$. $X$ is not destabilized by this particular test configuration if \begin{equation} dim(x)>1. \end{equation} This is nothing but the unitarity bound on the scalar operator represented by $x$. \subsection{Singularity with more than one dimensional symmetries} Consider a toric Gorenstein singularity, for which the rank of the symmetry group is $3$. As we discussed above, there is no non-trivial test configuration, and so a toric singularity is stable provided we choose the $U(1)_{R}$ symmetry which minimizes the volume. On the other hand, the existence of Sasaki-Einstein metrics on the link of a toric singularity was established using analytic methods in \cite{futaki2009transverse}. For a ring $X$ with a two dimensional symmetry group, one only needs to check a finite number of test configurations, see \cite{collins2015sasaki}. \textbf{Example}: Consider the ring defined by the ideal $x^2+y^2+z^p+w^q=0$. This ring has a two dimensional symmetry group. One can characterize all the test chiral rings, and the ring is proven to be stable if $(p,q)$ satisfies the following condition \cite{collins2015sasaki}: \begin{equation} p<2q~\text{and}~q<2p. \end{equation} Notice that this is just the requirement of the unitarity bound on the operators represented by $z$ and $w$. However, as we will see later, the unitarity bound is not the only obstruction which can appear, even in the case of hypersurface singularities. \subsection{Hypersurface singularity} Consider an isolated three-fold hypersurface singularity $f:(C^4,0)\rightarrow (C,0)$ with a $C^*$ action $\vec{\zeta}$: \begin{equation} f(\lambda^{w_i}z_i)=\lambda f(z_i). \end{equation} Here all the charges $w_i$ are positive. The canonical three-form is \begin{equation} \Omega={dz_0\wedge dz_1\wedge dz_2\wedge dz_3\over d f}. \end{equation} This form has charge $\sum w_i-1$, and the candidate $U(1)_R$ symmetry is found by requiring $\Omega$ to have charge two: \begin{equation} (\sum w_i-1)\delta=2\rightarrow \delta={2\over \sum w_i-1}, \end{equation} so the candidate $U(1)_R$ symmetry is $\zeta^{'}=\delta \zeta$.
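As a quick symbolic check of this normalization, one can redo the conifold computation given earlier: all weights equal ${1\over 2}$, so $\delta=2$, and expanding the Hilbert series around $s=0$ reproduces $a_0=a_1=2$ and the central charge ${27\over 64}N^2$. A minimal sympy sketch (variable names are ours):
\begin{verbatim}
import sympy as sp

s, N = sp.symbols('s N', positive=True)
t = sp.exp(-s)

# Conifold z0^2+z1^2+z2^2+z3^2 = 0 with delta = 2: all coordinates have
# U(1)_R charge 1 and the defining equation has charge 2.
hilb = (1 - t**2) / (1 - t)**4

ser = sp.series(hilb, s, 0, 1).removeO()
a0 = ser.coeff(s, -3)
a1 = ser.coeff(s, -2)
print(a0, a1)                            # 2, 2: a0 = a1, as required by [Omega] = 2
print(sp.Rational(27, 32) * N**2 / a0)   # 27*N**2/64, the field-theory central charge
\end{verbatim}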
To make the coordinates $z_i$ have positive $R$ charge, we require $\sum w_i-1>0$, which implies that the singularity is a rational Gorenstein (and hence klt) singularity. Such rational hypersurface singularities have been classified by Yau and Yu \cite{yau2003classification}. The Hilbert series of a hypersurface singularity is easy to compute; it takes the following form \begin{equation} Hilb(f,t,\zeta^{'})={1-t^{\delta}\over (1-t^{w_0 \delta})(1-t^{w_1 \delta})(1-t^{w_2 \delta})(1-t^{w_3 \delta})}. \end{equation} Now let's consider a test configuration which is derived by using a one parameter transformation $\eta$. For simplicity, let's assume that the action $\vec{\eta}$ is diagonal on the coordinates, with charges $(v_1,v_2,v_3,v_4)$. In the flat limit, we get a new polynomial $f_0$ which does not necessarily define an isolated singularity. $f_0$ has a two dimensional symmetry group generated by $(\zeta^{'}, \eta)$; however, only a one dimensional family of symmetries can serve as the possible $U(1)_R$ symmetry. This one parameter symmetry group can be parameterized as \begin{equation} \zeta(\epsilon)=\zeta^{'}+\epsilon(\eta-a \zeta^{'}), \end{equation} and $a$ can be computed using the formula \ref{constant}. The Futaki invariant can be computed using formula \ref{futaki}, and we have \begin{align} &F(f,\zeta^{'}, \eta)=D_{\epsilon} a_0(\zeta(\epsilon))|_{\epsilon=0}= \nonumber\\ &-[v_4w_1w_2w_3(w_1+w_2+w_3-2w_4-1)+v_3 w_1 w_2 w_4(w_1+w_2+w_4-2w_3-1)+ \nonumber\\ &v_2w_1w_3w_4(w_1+w_3+w_4-2w_2-1)+v_1w_2w_3w_4(w_2+w_3+w_4-2w_1-1)]. \label{hyper} \end{align} In the following, we are going to use this formula to test whether a hypersurface singularity is stable or not. \subsubsection{Irrelevance of superpotential term} Recall that we have already studied the singularity \begin{equation} f=z_0^2+z_1^2+z_2^2+z_3^{2k} \end{equation} from the K-stability perspective, and we showed that this ring is unstable for $k\geq 2$. The destabilizing configuration has the central fibre $X_0= \{z_0^2+z_1^2+z_2^2=0\}$; see section $3.2.2$. Let's interpret this result from the field theory point of view. The quiver gauge theory description is found in \cite{Cachazo:2001sg}; see figure \ref{quiver} below. \begin{figure}[h] \centering \includegraphics{quiver.eps} \caption{Quiver gauge theory description for D3 branes probing the singularity defined by $z_0^2+z_1^2+z_2^2+z_3^{2k}=0$. The superpotential is described in \eqref{super}.} \label{quiver} \end{figure} We have the following superpotential: \begin{equation} W=\text{Tr} (\phi_1(A_1B_1+A_2B_2))-\text{Tr}(\phi_2 (B_1A_1+B_2 A_2))-2{\text{Tr}\phi_1^{k+1}\over k+1}+2{\text{Tr}\phi_2^{k+1}\over k+1}. \label{super} \end{equation} The $U(1)_R$ charges are fixed such that the NSVZ $\beta$ function is zero and each term in the superpotential $W$ has charge two: \begin{align} &{1\over2}[(R(A_1)-1)+(R(A_2)-1)+(R(B_1)-1)+(R(B_2)-1)]+(R(\phi_1)-1)+1=0, \nonumber\\ &{1\over2}[(R(A_1)-1)+(R(A_2)-1)+(R(B_1)-1)+(R(B_2)-1)]+(R(\phi_2)-1)+1=0, \nonumber\\ &R(A_1)+R(B_1)+R(\phi_1)=2,~~R(A_2)+R(B_2)+R(\phi_1)=2, \nonumber\\ &R(A_1)+R(B_1)+R(\phi_2)=2,~~R(A_2)+R(B_2)+R(\phi_2)=2, \nonumber\\ &R(\phi_1)=R(\phi_2)={2\over k+1}. \end{align} Using the symmetry of the quiver (or $a$-maximization), we find the following $R$ charges: $R(A_1)=R(A_2)=R(B_1)=R(B_2)={k\over k+1},~R(\phi_1)=R(\phi_2)={2\over k+1}$.
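Before writing down the $F$-term relations, we note that the instability threshold $k\geq 2$ quoted above can be recovered directly from formula \ref{hyper}; only the sign of $F$ matters for the stability question. A minimal sympy sketch (the function name is ours):
\begin{verbatim}
import sympy as sp

def futaki_hypersurface(w, v):
    # The hypersurface Futaki formula from the text, for weights w of the
    # quasi-homogeneous polynomial f(z0,...,z3) and a diagonal test symmetry
    # with charges v on the coordinates.
    w1, w2, w3, w4 = w
    v1, v2, v3, v4 = v
    return -(v4*w1*w2*w3*(w1 + w2 + w3 - 2*w4 - 1)
             + v3*w1*w2*w4*(w1 + w2 + w4 - 2*w3 - 1)
             + v2*w1*w3*w4*(w1 + w3 + w4 - 2*w2 - 1)
             + v1*w2*w3*w4*(w2 + w3 + w4 - 2*w1 - 1))

k = sp.symbols('k', positive=True)
# f = z0^2 + z1^2 + z2^2 + z3^(2k): weights (1/2, 1/2, 1/2, 1/(2k)),
# with the test symmetry acting on z3 only.
w = (sp.Rational(1, 2), sp.Rational(1, 2), sp.Rational(1, 2), 1/(2*k))
v = (0, 0, 0, 1)
F = sp.simplify(futaki_hypersurface(w, v))
print(F)   # proportional to (2 - k)/(16*k), so F <= 0 precisely for k >= 2
\end{verbatim}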
The $F$-term relations from the superpotential are: \begin{align} & {\partial W\over \partial A_1}=0:~~B_1\phi_1-\phi_2 B_1=0, \nonumber\\ &{\partial W\over \partial B_1}=0:~~\phi_1 A_1-A_1\phi_2=0, \nonumber\\ &{\partial W\over \partial A_2}=0:~~B_2\phi_1-\phi_2 B_2=0, \nonumber\\ &{\partial W\over \partial B_2}=0:~~\phi_1 A_2-A_2\phi_2=0, \nonumber\\ &{\partial W\over \partial \phi_1}=0:~~A_1B_1+A_2B_2-2\phi_1^k=0, \nonumber\\ &{\partial W\over \partial \phi_2}=0:~~B_1A_1+B_2A_2-2\phi_2^k=0. \end{align} The scalar chiral ring of this theory (which is related to the holomorphic functions on $X$) is generated by the loops in the quiver, subject to the above relations. The single trace scalar chiral operators are generated by the simple loops, such as $\text{Tr}(A_iB_j)$, and one can order them by their $U(1)_R$ charge. For the singularity $X$ defined by the equation $f=z_0^2+z_1^2+z_2^2+z_3^{2k}$, the unique candidate $U(1)_R$ symmetry $\zeta^{'}$ has charges $({2k\over k+1},{2k\over k+1},{2k\over k+1}, {2\over k+1})$, which are identified with the field theory $U(1)_R$ charges. We can make a holomorphic change of coordinates to write $f$ as $f=U^2+V^2+(-W+Z^k)(W+Z^k)$. The holomorphic functions on $X$ can then be identified with the field theory chiral operators as follows: \begin{align} \text{Tr}A_1B_2&=U,\qquad &\text{Tr}A_2 B_1&=V, \nonumber\\ \text{Tr}A_1B_1&=-W+Z^k,\qquad &\text{Tr}A_2B_2&=W+Z^k, \nonumber\\ \text{Tr}\phi_1&=Z. \end{align} It can be checked that the full set of scalar chiral operators of the field theory which can get expectation values is captured by the ring $X$. The properties of the IR SCFT can be derived as follows. The quiver without the superpotential terms $\text{Tr}(\phi_1^{k+1})$ and $\text{Tr}(\phi_2^{k+1})$ defines a four dimensional $\mathcal{N}=2$ SCFT ${\cal T}_0$, and all of the elementary fields $A_i, B_i, \phi_i$ are free with $U(1)_R$ charge ${2\over 3}$. Our $\mathcal{N}=1$ theory can be thought of as a deformation of the $\mathcal{N}=2$ SCFT ${\cal T}_0$ by the superpotential terms involving the adjoint chiral superfields $\phi_1$ and $\phi_2$. The scaling dimensions of the superpotential terms ${\cal O}_1=\text{Tr}(\phi_1^{k+1})$ and ${\cal O}_2=\text{Tr}(\phi_2^{k+1})$ are $\Delta[{\cal O}_i]=k+1$, so they are irrelevant for $k>2$, and the IR SCFT is just the original SCFT ${\cal T}_0$. For $k=2$, the superpotential term is marginally irrelevant \cite{Green:2010da}, and the IR SCFT is again the original SCFT ${\cal T}_0$. We have used K-stability to check that the ring is unstable for $k\geq 2$, and the destabilizing configuration has a central fibre $X_0: f=z_0^2+z_1^2+z_2^2$. It is interesting to note that the IR SCFT (the affine $A_1$, $\mathcal{N}=2$ SCFT) associated with the ring $X$ is actually described by $D3$ branes probing the singularity $X_0$. This fact supports our claim that the central fibre $X_0$ describes the possible chiral ring of the IR SCFT, and the result from K-stability is in agreement with the field theory result! \subsubsection{Further obstructions} The unstable examples considered so far have been caused by the irrelevance of superpotential terms or the violation of the unitarity bound. We now give an example where the instability of the chiral ring is more subtle. Consider the singularity \begin{equation} f=z_0^2+z_1^2+z_2^p+z_2z_3^q. \end{equation} The only possible $U(1)_R$ symmetry has charges $({pq\over p+q-1},{pq\over p+q-1},{2q\over p+q -1},{2(p-1)\over p+q-1})$.
Using the relation $\Delta({\cal O})={3\over2}R({\cal O})$, the scaling dimensions of $z_2$ and $z_3$ are \begin{equation} [z_2]={3q\over p+q-1},~~[z_3]={3(p-1)\over p+q-1}. \end{equation} The unitarity bound on the scalar operators requires $[z_2]>1$ and $[z_3]>1$, and we find \begin{equation} p<2q+1~\&~q<2p-2. \label{uni} \end{equation} The unitarity bound can also be found using the test configuration generated by the symmetry acting on the coordinates $z_2$ and $z_3$ only. Consider a test configuration generated by the symmetry $\eta$ with charge $(0,0, 1, -1/q)$. We have the following family generated by $\eta$: \begin{equation} z_0^2+z_1^2+t^pz_2^p+z_2z_3^q=0. \end{equation} The flat limit over $t=0$ is described by the equation $z_0^2+z_1^2+z_2 z_3^q=0$. The Futaki invariant can be computed using the formula \ref{hyper}: \begin{equation} F(X_0, \zeta, \eta)=-\frac{(p+q-1)^2 \left(p^2-2 p q+q-1\right)}{2 (p-1)^2 q^2}. \end{equation} So the original ring is stable if \begin{equation} q(2p-1)-(p^2-1)>0 \rightarrow q>{p^2-1\over 2p-1}. \label{stronger} \end{equation} This bound is stronger than the unitarity bound~\eqref{uni} for a certain range of the parameters. Let's set $p=6$; the unitarity bound from~\eqref{uni} implies that \begin{equation} {5\over 2}<q<10. \end{equation} The bound from~\eqref{stronger} implies that \begin{equation} q>35/11, \end{equation} which gives a stronger lower bound: $q=3$ satisfies the unitarity bound, but the corresponding ring is unstable due to some other dynamical reason. The chiral ring of the IR SCFT is described by $z_0^2+z_1^2+z_2 z_3^q=0$ for $q=3$, which is also a three dimensional quotient singularity. \section{Conclusion} We introduce a notion of stability for $\mathcal{N}=1$ chiral rings, and conjecture that a chiral ring is the chiral ring of a SCFT if and only if it is stable. We test our stability notion for models engineered using D3 branes probing a 3-fold singularity, and show that the notion of K-stability for the existence of a Ricci-flat conic metric is equivalent to the field theory stability. This notion can be used to explain $a$-maximization, an operator becoming free if it violates the unitarity bound, the irrelevance of superpotential terms, etc. In general, our stability notion explains the consequences of accidental symmetries appearing in the study of quiver gauge theories: the chiral ring of the IR SCFT is different from that of the UV theory if there are accidental symmetries. Accidental symmetries cause many problems in studying supersymmetric field theories with four supercharges \cite{Kutasov:2003iy,Buican:2011ty}. Our study shows the importance of the chiral ring, and that the generalized notion of $a$-maximization plays a key role. A similar generalized $a$-maximization idea has already been used by Intriligator to settle some interesting IR phase questions \cite{Intriligator:2005if}. It would be interesting to use our stability notion to reconsider those models. The stability notion proposed here can be generalized to three dimensional $\mathcal{N}=2$ theories. Although one does not have the notion of a central charge in this context, we may replace it by the so-called $F$-function \cite{Jafferis:2010un}. For theories engineered by M2 branes probing a four-fold singularity, one still has the notion of K-stability for the four-fold singularity, and much of the theory is similar. We leave the details to the interested reader. 
Similarly, one can also define a notion of stability for two dimensional $(0,2)$ theories, and we hope that the accidental symmetries for $(0,2)$ theories studied in \cite{Bertolini:2014ela} can be put into the stability framework. There are some further questions about the stability of $\mathcal{N}=1$ chiral rings. Of crucial importance is to understand the constraints on the set of possible test chiral rings. At present, unless a large symmetry group intervenes, there is an infinite number of possible test rings, and it seems computationally impossible to check all of them, even in basic examples. It would be nice to have some physical input which could shed some light on this issue. In this paper, we only studied the models engineered using D3 branes, and it will be of great interest to study other $\mathcal{N}=1$ theories. Our primary focus has been on testing whether a chiral ring is the chiral ring of a SCFT. If the chiral ring is unstable, it is important to determine the ring of the IR SCFT. Our study shows that the central fibre of the destabilizing test configuration should be the candidate chiral ring of the IR SCFT, and it is interesting to determine the special destabilizing test configuration which would give the chiral ring of the IR SCFT. We hope to come back to this question in the future. \section*{Acknowledgments} The work of S.T. Yau is supported by NSF grant DMS-1159412, NSF grant PHY-0937443, and NSF grant DMS-0804454. T.C. Collins is supported by NSF grant DMS-1506652. The work of D. Xie is supported by the Center for Mathematical Sciences and Applications at Harvard University, and in part by the Fundamental Laws Initiative of the Center for the Fundamental Laws of Nature, Harvard University. \bibliographystyle{JHEP}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} Let $\mathbf{X}=\{X_t\}_{t\geq 0}$ be a L\'{e}vy process in $\mathbb{R} ^d$ with the distribution $\mathbb{P}$ such that $X_0=0$. We denote by $p_t(\mathrm{d} x)$ the distribution of the random variable $X_t$ and we use the standard notation $\mathbb{P}_x$ for the distribution related to the process $\mathbf{X}$ started at $x\in \mathbb{R}^d$. The characteristic exponent $\psi (x)$, $x\in \mathbb{R} ^d$, of the process $\mathbf{X}$ is given by the formula \begin{align}\label{charact_expo} \psi(x) = \sprod{x}{Ax} - i\sprod{x}{\gamma } - \int_{\mathbb{R} ^d}\left(e^{i\sprod{x}{y}} - 1 - i\sprod{x}{y} \textbf{1}_{\{\norm{y} \leq 1\}} \right) \nu(\mathrm{d} y) , \end{align} where $A$ is a symmetric non-negative definite $d\times d$ matrix, $\gamma\in \mathbb{R} ^d$ and $\nu$ is a L\'{e}vy measure, that is \begin{align}\label{Levy_measure} \nu(\{0\})=0\quad \mathrm{and}\quad \int_{\mathbb{R} ^d} \left( 1\wedge \norm{y}^2 \right)\, \nu(\mathrm{d} y) <\infty . \end{align} Let $\Omega$ and $\Omega _0$ be two non-empty subsets of $\mathbb{R} ^d$ such that $\Omega$ is open and its Lebesgue measure $|\Omega|$ is finite. We consider the following quantity associated with the process $\mathbf{X}$, \begin{align*} H _{\Omega , \Omega _0} (t) = \int_{\Omega}\mathbb{P}_{x} (X_t\in \Omega _0)\, \mathrm{d} x = \int_{\Omega}\int_{\Omega_0-x}p_t( \mathrm{d} y)\mathrm{d} x \end{align*} and we use the notation \begin{align}\label{heat_cont-H} H_\Omega (t)= H_{\Omega , \Omega } (t)\quad \mathrm{and}\quad H(t)=H _{\Omega , \Omega^{c}} (t). \end{align} The main goal of the present article is to study the asymptotic behaviour of $H_\Omega (t)$ as $t$ goes to zero. We observe that \begin{align*} H_\Omega (t) = |\Omega| - H(t), \end{align*} and thus it suffices to work with the function $H(t)$. The function $u(t,x) = \int_{\Omega-x}p_t(\mathrm{d} y)$ is the weak solution of the initial value problem \begin{align*} \frac{\partial}{\partial t}u(t,x) &= -\mathcal{L}\, u(t,x),\quad t>0,\, x\in \mathbb{R}^d,\\ u(0,x) &= \textbf{1}_{\Omega}(x), \end{align*} where $\mathcal{L}$ is the infinitesimal generator of the process $\mathbf{X}$, see \cite[Section 31]{Sato}. Therefore, $H_\Omega (t)$ can be interpreted as the amount of \textit{heat} in $\Omega$ if its initial temperature is one whereas the initial temperature of $\Omega^c$ is zero. In paper \cite{vanDenBerg1_POT}, the author calls the quantity $H_\Omega (t)$ \textit{heat content} and we will use the same terminology. There are a lot of articles where bounds and asymptotic behaviour of the heat content related to Brownian motion, either on $\mathbb{R}^d$ or on compact manifolds, were studied, see \cite{vanDenBerg1_POT}, \cite{vanDenBerg1}, \cite{vanDenBerg2}, \cite{vanDenBerg3}, \cite{vanDenBerg_4}, \cite{vanDenBerg5}. Recently Acu\~{n}a Valverde \cite{Valverde1} investigated the heat content for isotropic stable processes in $\mathbb{R}^d$, see also \cite{Valverde2} and \cite{Valverde3}. In this paper we study the small time behaviour of the heat content associated with rather general L\'{e}vy processes in $\mathbb{R}^d$. Before we state our results we recall the notion of perimeter. 
Following \cite[Section 3.3]{Ambrosio_2000}, for any measurable set\footnote{All sets in the paper are assumed to be Lebesgue measurable.} $\Omega \subset \mathbb{R} ^d$ we define its perimeter $\mathrm{Per}(\Omega)$ as \begin{align}\label{Perimeter_def} \mathrm{Per}(\Omega) = \sup \left\{ \int_{\mathbb{R}^d}\textbf{1}_{\Omega}(x)\mathrm{div}\, \phi (x)\, \mathrm{d} x:\, \phi \in C_c^1(\mathbb{R}^d,\mathbb{R}^d),\, \norm{\phi}_{\infty}\leq 1 \right\}. \end{align} We say that $\Omega$ is of finite perimeter if $\mathrm{Per}(\Omega)<\infty$. It was shown in \cite{Miranda1, Miranda2, Preunkert} that if $\Omega$ is an open set in $\mathbb{R}^d$ with finite Lebesgue measure and of finite perimeter then \begin{align*} \mathrm{Per}(\Omega) = \pi^{1/2}\lim_{t\to 0} t^{-1/2}\int_{\Omega}\int_{\Omega^c}p_t^{(2)}(x,y)\, \mathrm{d} y\, \mathrm{d} x, \end{align*} where \begin{align*} p_t^{(2)}(x,y) = (4\pi t)^{-d/2}e^{-\norm{x-y}^2/4t} \end{align*} is the transition density of the Brownian motion $B_{t}$ in $\mathbb{R}^d$. We also notice that for a non-empty and open set $\Omega$, $\mathrm{Per}(\Omega)>0$. Recall that for the L\'{e}vy process $\mathbf{X}$ with the transition probability $p_t(\mathrm{d} x)$ and the L\'{e}vy measure $\nu$ we have \begin{align*} \lim_{t\to 0} t^{-1} p_t(\mathrm{d} x) = \nu (\mathrm{d} x),\quad \text{vaguely on } \mathbb{R} ^d\setminus\{0\}. \end{align*} Therefore, we introduce the perimeter $\mathrm{Per}_{\mathbf{X}}(\Omega)$ related to the process $\mathbf{X}$ by setting \begin{align}\label{X_perimeter} \mathrm{Per}_{\mathbf{X}}(\Omega)= \int_{\Omega}\int_{\Omega ^c-x}\nu (\mathrm{d} y)\, \mathrm{d} x . \end{align} For instance, if $\mathbf{X}$ is the isotropic (rotationally invariant) $\alpha$-stable process, denoted by $S^{(\alpha)}=(S^{(\alpha)}_t)_{t\geq 0}$, we obtain the well-known $\alpha$-perimeter, which for $0<\alpha <1$ is given by \begin{align*} \mathrm{Per}_{S^{(\alpha)}}(\Omega) = \int_{\Omega}\int_{\Omega ^c} \frac{\mathrm{d} y\, \mathrm{d} x}{\norm{x-y}^{d+\alpha}}. \end{align*} It was proved in \cite{Fusco} that if $\Omega$ has finite Lebesgue measure and is of finite perimeter then $\mathrm{Per}_{S^{(\alpha)}}(\Omega)$ is finite. In the present paper we prove (see Lemma \ref{Lemma_Per_X}) that for any L\'{e}vy process with finite variation, cf. \cite[Section 21]{Sato}, and for $\Omega$ of finite measure and of finite perimeter $\mathrm{Per}(\Omega) <\infty$, the quantity $\mathrm{Per}_{\mathbf{X}}(\Omega)$ is finite as well. Following Pruitt \cite{Pruitt}, we consider the following function related to the L\'{e}vy process $\mathbf{X}$. For any $r>0$, \begin{align} \begin{aligned}\label{Pruitt_Function} h(r)= \norm{A}r^{-2} &+ r^{-1} \Big\lvert \gamma +\int_{\mathbb{R} ^d} y \left(\textbf{1}_{\norm{y} < r}-\textbf{1}_{\norm{y} < 1}\right) \nu(\mathrm{d} y)\Big\rvert \\ & +\int_{\mathbb{R} ^d} \left( 1\wedge \norm{y}^2r^{-2}\right) \, \nu(\mathrm{d} y), \end{aligned} \end{align} where $(A,\gamma ,\nu)$ is the triplet from \eqref{charact_expo} and $\Vert A\Vert = \max_{\Vert x\Vert =1} \norm{Ax}$. Our first result gives a general upper bound for the heat content related to any L\'{e}vy process in $\mathbb{R}^d$. \begin{theorem}\label{Thm_d>1} Let $\Omega \subset \mathbb{R}^d$ be an open set of finite measure $|\Omega| $ and of finite perimeter $\mathrm{Per} (\Omega)$, and set $R=2|\Omega|/\mathrm{Per}(\Omega)$. Let $\mathbf{X}$ be a L\'{e}vy process in $\mathbb{R}^d$. 
Then there is a constant $C_1=C_1(d) >0$ which does not depend on the set $\Omega$ such that \begin{align*} H(t) &\leq C_1\, t\, \mathrm{Per} (\Omega) \int_{\frac{R}{2} \wedge h^{-1}(1/t)}^{R} h(r)\, \mathrm{d} r,\quad t>0. \end{align*} \end{theorem} In Proposition \ref{H_lower_bound} we also prove a similar lower bound for a class of isotropic L\'{e}vy processes with characteristic exponent satisfying the so-called upper scaling condition, see \cite{BGR}. Let us recall that a L\'{e}vy process $\mathbf{X}$ is isotropic if the measure $p_t(\mathrm{d} x)$ is radial (rotationally invariant) for each $t > 0$; equivalently, the matrix $A=\eta I$ for some $\eta\geq0$, the L\'{e}vy measure $\nu$ is rotationally invariant and $\gamma =0$. In the next theorem we present the asymptotic behaviour of the heat content under the assumption that the L\'{e}vy process $\mathbf{X}$ is isotropic and its characteristic exponent is a regularly varying function at infinity with index greater than one. We say that a function $f(r)$ is regularly varying of index $\alpha$ at infinity, denoted by $f\in \calR_{\alpha} $, if for any $\lambda >0$, \begin{align*} \lim_{r\to \infty}\frac{f(\lambda r)}{f(r)} = \lambda ^\alpha . \end{align*} \begin{theorem}\label{Thm_alpha>1} Let $\Omega \subset \mathbb{R}^d$ be an open set of finite measure $|\Omega| $ and finite perimeter $\mathrm{Per} (\Omega)$. If $\mathbf{X}$ is an isotropic L\'{e}vy process in $\mathbb{R}^d$ with the characteristic exponent $\psi$ such that $\psi \in \calR_{\alpha} $, for some $\alpha \in (1,2]$, then\footnote{Here $\psi ^-=(\psi^*)^-$ is the generalized left inverse of the non-decreasing function $\psi^*(u) = \sup_{s \in [0, u]} \psi(s)$, see Subsection \ref{sec_Levy}.} \begin{align*} \lim_{t\to 0} \psi^{-}(1/t) H(t) = \pi^{-1}\Gamma(1-1/\alpha) \mathrm{Per} (\Omega) . \end{align*} \end{theorem} The following theorem deals with L\'{e}vy processes with finite variation. Recall that according to \cite[Theorem 21.9]{Sato} a L\'{e}vy process $\mathbf{X}$ has finite variation on any interval $(0,t)$ if and only if \begin{align}\label{Levy_bdd_var_cond} A=0\quad \mathrm{and}\quad \int_{\norm{x}\leq 1}\norm{x}\nu (\mathrm{d} x)<\infty . \end{align} In this case the characteristic exponent has the following simple form \begin{align*} \psi (x) = i\sprod{x}{\gamma _0} + \int_{\mathbb{R}^d}\left( 1-e^{i\sprod{x}{y}}\right)\nu (\mathrm{d} y), \end{align*} where \begin{align}\label{gamma_0} \gamma _0 = \int_{\norm{y}\leq 1}y\, \nu (\mathrm{d} y) - \gamma . \end{align} We notice that for symmetric L\'{e}vy processes with finite variation we have $\int_{\norm{y}\leq 1}y\, \nu (\mathrm{d} y) =0$. Thus, for any symmetric L\'{e}vy process with finite variation we have $\gamma _0 = 0$. Moreover, for such processes the related function $h$ defined at \eqref{Pruitt_Function} is Lebesgue integrable on every bounded interval. As we mentioned before (see also Lemma \ref{Lemma_Per_X}), the quantity $\mathrm{Per}_{\mathbf{X}}(\Omega)$ appearing in the following theorem is finite. For the definition of a directional derivative we refer the reader to Subsection \ref{sec_geom}. \begin{theorem}\label{Thm_X_bdd_variation} Let $\mathbf{X}$ be a L\'{e}vy process in $\mathbb{R}^d$ with finite variation. Let $\Omega \subset \mathbb{R}^d$ be an open set of finite measure $|\Omega| $ and finite perimeter $\mathrm{Per} (\Omega)$. 
Then \begin{align*} \lim_{t\to 0}t^{-1}H(t) = \mathrm{Per}_{\mathbf{X}}(\Omega) + \frac{\norm{\gamma_0}}{2} V_{\frac{\gamma_0}{\norm{\gamma_0}}}(\Omega) \textbf{1}_{\mathbb{R}^d\setminus \{0\}}(\gamma _0), \end{align*} where $V_u(\Omega)$ is the directional derivative of the indicator function $\textbf{1}_{\Omega}$ in the direction $u$ on the unit sphere in $\mathbb{R}^d$. \end{theorem} The rest of the paper is organized as follows. We start with a paragraph which gives a list of examples. In Section \ref{sec_Prelim} we present all the necessary facts and tools that we use in the proofs. Section \ref{sec_Proofs} is devoted to the proofs of the above theorems. \subsubsection*{Notation} We write $a\wedge b$ for $\min \{a,b\}$ and $a\vee b=\max\{a,b\}$. Positive constants are denoted by $C_1, C_2$ etc. If additionally $C$ depends on some $M$, we write $C=C(M)$. We use the notation $f(x) = O(g(x))$ if there is a constant $C>0$ such that $f(x)\leq C g(x)$; $f(x)\asymp g(x)$ if $f(x)=O(g(x))$ and $g(x)=O(f(x))$; $f(x)\sim g(x)$ at $x_0$ if $\lim_{x\to x_0}f(x)/g(x)=1$. $\mathbb{S}^{d-1}$ stands for the unit sphere in $\mathbb{R}^d$ and $\sigma (\mathrm{d} u) = \sigma ^{d-1}(\mathrm{d} u)$ is the surface measure. \vspace*{0.2cm} \subsection{Examples} First we consider the isotropic (rotationally invariant) $\alpha$-stable process in $\mathbb{R}^d$. The following example shows that our theorems can be regarded as extensions of the results contained in the papers \cite{Valverde1} and \cite{Valverde2}. \begin{example} Let $S^{(\alpha)} = (S_t^{(\alpha)})_{t\geq 0}$ be the isotropic $\alpha$-stable process in $\mathbb{R}^d$ with $\alpha \in (0,2)$. We recall that the characteristic exponent of $S^{(\alpha )}$ is $x\mapsto c\norm{x}^{\alpha}$, for some $c>0$, see \cite[Theorem 14.14]{Sato}. The L\'{e}vy measure $\nu$ of $S^{(\alpha)}$ has the form \begin{align*} \nu (\mathrm{d} x) = \frac{c_1\, \mathrm{d} x}{\norm{x}^{d+\alpha}},\quad \mathrm{for\ some}\ c_1>0. \end{align*} The related function $h$ defined in \eqref{Pruitt_Function} becomes $h(r) = c_2/r^{\alpha}$, for some $c_2>0$. Let $\Omega \subset \mathbb{R}^d$ be an open set of finite measure $|\Omega| $ and finite perimeter $\mathrm{Per} (\Omega)$ and let $R=2|\Omega|/\mathrm{Per}(\Omega)$. Then, by Theorem \ref{Thm_d>1}, for any $\alpha \in (0,2)$, \begin{align*} H(t)\leq C_1 \mathrm{Per}(\Omega)\, t\int_{\frac{R}{2} \wedge t^{1/\alpha}}^{R}r^{-\alpha}\, \mathrm{d} r ,\ \mathrm{for\ all}\ t>0, \end{align*} and, by Proposition \ref{H_lower_bound}, for $\alpha\in[1,2)$ and $t$ small enough, \begin{align*} H(t)\geq C_2\mathrm{Per}(\Omega)\, t\int_{t^{1/\alpha}}^{R}r^{-\alpha}\, \mathrm{d} r . \end{align*} In particular, for $\alpha = 1$ we get \begin{align*} \limsup_{t\to 0}\frac{H(t)}{t\log (1/t)}\leq C_1\mathrm{Per}(\Omega)\quad \mathrm{and}\quad \liminf_{t\to 0}\frac{H(t)}{t\log(1/t)}\geq C_2 \mathrm{Per}(\Omega). \end{align*} For $\alpha \in (1,2)$, by Theorem \ref{Thm_alpha>1}, \begin{align*} \lim_{t\to 0} t^{-1/\alpha}H(t) = \pi^{-1}\Gamma(1-1/\alpha) \mathrm{Per} (\Omega) \end{align*} and for $\alpha \in (0,1)$, by Theorem \ref{Thm_X_bdd_variation}, \begin{align*} \lim_{t\to 0}t^{-1}H(t) = \mathrm{Per}_{S^{(\alpha)}}(\Omega). \end{align*} Here $\gamma _0 = 0$ according to the comments following equation \eqref{gamma_0}. \end{example} \begin{example} Let $\mathbf{X}$ be a pure jump (i.e. 
$A=0$ and $\gamma=0$ in \eqref{charact_expo}) isotropic L\'{e}vy process in $\mathbb{R}^d$ such that its L\'{e}vy measure $\nu$ has the form \begin{align}\label{Levy_meas_reg} \nu (dx ) = ||x||^{-d}g(1/||x||)dx,\quad \mathrm{for\ some}\ g \in \mathcal{R}_{\alpha},\ \alpha \in (0,2). \end{align} By \cite[Proposition 5.1]{CGT}, we conclude that $\psi \in \calR_{\alpha} $. Hence for such processes, for $1<\alpha <2$, \begin{align*} \lim_{t\to 0} \psi^{-}(1/t) H(t) = \pi^{-1}\Gamma(1-1/\alpha) \mathrm{Per} (\Omega) , \end{align*} and, for $0<\alpha <1$, \begin{align*} \lim_{t\to 0}t^{-1}H(t) = \mathrm{Per}_{\mathbf{X}}(\Omega). \end{align*} Typical examples of isotropic L\'{e}vy processes satisfying \eqref{Levy_meas_reg} are \begin{enumerate} \item truncated stable process: $g(r)=r^{-\alpha} \textbf{1}_{(0, 1)}(r)$; \item tempered stable process: $g(r)=r^{-\alpha} e^{-r}$; \item isotropic Lamperti stable process: $g(r)=re^{\delta r}(e^r-1)^{-\alpha-1}$, $\delta<\alpha+1$; \item layered stable process: $g(r)=r^{-\alpha} \textbf{1}_{(0,1)}(r) + r^{-\alpha_1} \textbf{1}_{[1, \infty)}(r),\ \alpha_1\in(0,2)$. \end{enumerate} \end{example} \begin{example} Let $\mathbf{X}$ be a L\'{e}vy process which is the independent sum of the Brownian motion and the isotropic $\alpha$-stable process in $\mathbb{R}^d$. Then $\mathbf{X}$ is isotropic and its characteristic exponent is $\psi (x) = \eta ||x||^2+c||x||^\alpha$, for some $\eta ,c >0$ and $\alpha \in (0,2)$. Clearly we have $\psi \in \mathcal{R}_2$ and hence, by Theorem \ref{Thm_alpha>1}, \begin{align*} \lim_{t\to 0} t^{-1/2} H(t) = \sqrt{\frac{\eta}{\pi}} \mathrm{Per} (\Omega) . \end{align*} \end{example} \begin{example} Take $\alpha \in (0,2)$ and let $\mathbf{X}$ be a symmetric L\'{e}vy process in $\mathbb{R}$ which is the independent sum of the isotropic $\alpha $-stable process and a L\'{e}vy process whose L\'{e}vy measure $\nu$ has the form \begin{align}\label{leevy_m} \nu(\mathrm{d} x)=\sum_{k=1}^{\infty} 2^{k\alpha /2}\left(\delta_{2^{-k}}( \mathrm{d} x)+\delta_{-2^{-k}}(\mathrm{d} x)\right), \end{align} where $\delta _x$ stands for the Dirac measure at $x$. According to Subsection \ref{sec_Levy}, the characteristic exponent $f(x)$ of the process related to $\nu(\mathrm{d} x)$ has the form \begin{align*} f(x) = 2\int_0^\infty \left(1-\cos(xu)\right)\nu (\mathrm{d} u). \end{align*} The characteristic exponent of the isotropic $\alpha$-stable process is $x\mapsto c|x|^\alpha$, for some $c>0$, see \cite[Theorem 14.14]{Sato}, and hence, by independence, the characteristic exponent of $\mathbf{X}$ equals \begin{align*} \psi (x) = c|x|^\alpha + f(x),\quad c>0. \end{align*} Since $1-\cos (v)\asymp v^2$, for $0<v<1$, we have for $x>0$, \begin{align}\label{f_estimate} C x^2\int^{1/x}_0 u^2\nu(\mathrm{d} u) \leq f(x)\leq 4\nu \left((1/x,\infty)\right) + x^2\int^{1/x}_0 u^2\nu(\mathrm{d} u), \end{align} where $\nu \left((1/x,\infty)\right)$ is the $\nu$-measure of the half-line $(1/x,\infty )$. Using formula \eqref{leevy_m} we obtain that for $x\geq 1$, \begin{align*} \int^{1/x}_0 u^2\nu(\mathrm{d} u) = \sum_{k\geq \log_2 x}\!\! 2^{(\alpha /2-2)k} \asymp 2^{(\alpha /2-2)\log_2 x} = x^{\alpha /2 -2}. \end{align*} Similarly we have, for $x> 2$, \begin{align*} \nu \left( (1/x,\infty) \right) = \sum _{1\leq k<\log_2 x}2^{\alpha k/2}\leq \sum _{1\leq k\leq [\log_2 x]}2^{\alpha k/2} =\frac{1-2^{\alpha ([\log_2x ]+1)/2}}{1-2^{\alpha /2}} \asymp x^{\alpha /2}, \end{align*} where $[x]$ stands for the integer part of $x$. 
Hence, by \eqref{f_estimate}, $f(x)\asymp |x|^{\alpha /2}$, for $|x|> 2$. We obtain that $\psi (x) \sim c|x|^\alpha$ at infinity and thus, for $\alpha >1$, \begin{align*} \lim_{t\to 0}t^{-1/\alpha}H(t) = c^{1/\alpha} \pi^{-1}\Gamma(1-1/\alpha) \mathrm{Per} (\Omega) \end{align*} and, for $\alpha <1$, \begin{align*} \lim_{t\to 0}t^{-1}H(t) = \mathrm{Per}_{\mathbf{X}}(\Omega). \end{align*} \end{example} The next example shows that when $\mathbf{X}$ is not isotropic, the constant in Theorem \ref{Thm_alpha>1} may depend on the process. \begin{example} For $\alpha>1$ and $\ell \in \mathcal{R}_0$ we consider a L\'{e}vy process $\mathbf{X}$ in $\mathbb{R}$ with the L\'{e}vy measure $\nu$ of the form \begin{align*} \nu(\mathrm{d} x)=\left( c_1f(1/x)x^{-1}\textbf{1}_{\{x>0\}} + c_2f(1/|x|)|x|^{-1}\textbf{1}_{\{x<0\}}\right)\mathrm{d} x, \end{align*} where $f(r)=r^\alpha\ell(r)$ for $r\geq 1$, $f(r)=r^\alpha$ for $r<1$, and $c_1,c_2\geq 0$ are constants such that $c_1+c_2>0$. We denote the corresponding characteristic exponent by $\psi$. Let $S$ be the non-symmetric $\alpha$-stable distribution in $\mathbb{R}$ with the L\'{e}vy measure given by $\left( c_1x^{-1-\alpha}\textbf{1}_{\{x>0\}} + c_2|x|^{-1-\alpha}\textbf{1}_{\{x<0\}}\right)\mathrm{d} x$ and with the characteristic exponent $\psi ^{(\alpha)}$. We observe that $f^{-1}(1/t)X_t$ converges in law to $S$. Indeed, it is enough to prove the convergence of characteristic functions and this holds since we easily get that for any $x$, \begin{align*} \lim_{t\to 0}t\psi(xf^{-1}(1/t))=\psi^{(\alpha)}(x). \end{align*} For $\Omega=(a,b)$ we have \begin{align*} H(t) = \int_a^b \mathbb{P} (X_t\leq a-x)\, \mathrm{d} x + \int_a^b \mathbb{P} (X_t\geq b-x)\, \mathrm{d} x. \end{align*} A suitable change of variable in both integrals yields \begin{align*} H(t) = \int_0^{R} \mathbb{P} (|X_t|\geq x)\, \mathrm{d} x . \end{align*} Hence, \begin{align*} \lim_{t\to 0}f^{-1}(1/t)H(t)=\lim_{t\to 0} \int^{(b-a)f^{-1}(1/t)}_0\!\!\!\! \mathbb{P}(f^{-1}(1/t)|X_t|>u)\mathrm{d} u = \int^\infty_0\mathbb{P}(|S|>u)\mathrm{d} u =\mathbb{E}|S|. \end{align*} \end{example} 
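For the reader's convenience, we spell out the elementary change of variables used in the last example (an elaboration added here). For $\Omega=(a,b)$ we have $|\Omega|=b-a$ and $\mathrm{Per}(\Omega)=2$, so that $R=2|\Omega|/\mathrm{Per}(\Omega)=b-a$; substituting $u=x-a$ in the first integral and $u=b-x$ in the second one gives
\begin{align*}
H(t) = \int_0^{b-a} \mathbb{P} (X_t\leq -u)\, \mathrm{d} u + \int_0^{b-a} \mathbb{P} (X_t\geq u)\, \mathrm{d} u = \int_0^{R} \mathbb{P} (|X_t|\geq u)\, \mathrm{d} u,
\end{align*}
since for $u>0$ the two events are disjoint and their union is $\{|X_t|\geq u\}$.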
We mention that, by \cite[Proposition 3.62]{Ambrosio_2000}, for any open $\Omega$ with Lipschitz boundary $\partial \Omega$ and finite Hausdorff measure $\sigma (\partial \Omega)$ we have \begin{align*} \mathrm{Per}(\Omega) = \sigma (\partial \Omega). \end{align*} For any $\Omega \subset\mathbb{R}^d$ with finite Lebesgue measure $|\Omega|$ we define the covariance function $g_\Omega$ of $\Omega$ as follows \begin{align}\label{g_omega_defn} g_\Omega (y)=|\Omega\cap (\Omega + y)|=\int_{\mathbb{R} ^d}\,\textbf{1}_{\Omega}(x)\,\textbf{1}_{\Omega}(x-y) \mathrm{d} x,\quad y\in \mathbb{R}^d. \end{align} The next proposition collects all the necessary facts concerning the covariance function, following the presentation of \cite{Galerne}. This also reveals the link between directional derivatives and covariance functions. \begin{proposition}{\cite[Proposition 2, Theorem 13 and Theorem 14]{Galerne}}\label{g_properties} Let $\Omega \subset \mathbb{R} ^d$ have finite measure. Then \begin{enumerate} \item[(i)] For all $y\in \mathbb{R} ^d$, $0\leq g_\Omega(y)\leq g_\Omega(0)=|\Omega|$. \item[(ii)] For all $y\in \mathbb{R} ^d$, $ g_\Omega(y)= g_\Omega(-y)$. \item[(iii)] $g_\Omega$ is uniformly continuous in $\mathbb{R} ^d$ and $\lim_{y\to \infty}g_\Omega (y)=0$. \end{enumerate} Moreover, if $\Omega$ is of finite perimeter $\mathrm{Per}(\Omega)<\infty$ then \begin{enumerate} \item[(iv)] the function $g_\Omega$ is Lipschitz, \begin{align*} 2\norm{g_\Omega}_{\mathrm{Lip}} = \sup_{u\in \mathbb{S}^{d-1}}V_u(\Omega)\leq \mathrm{Per}(\Omega) \end{align*} and \begin{align}\label{g_lip_limit} \lim_{r\to 0}\frac{g_\Omega(0) - g_\Omega(ru)}{|r|} = \frac{V_u(\Omega)}{2}. \end{align} \item[(v)] For all $u\in \mathbb{S}^{d-1}$ the limit $\lim_{r\to 0^+}\frac{g_\Omega(0)-g_\Omega(ru) }{r}$ exists, is finite and \begin{align*} \mathrm{Per}(\Omega) = \frac{\Gamma((d+1)/2)}{\pi^{(d-1)/2}} \int_{\mathbb{S}^{d-1}}\lim_{r\to 0^+}\frac{ g_\Omega(0)- g_\Omega(ru)}{r} \sigma (\mathrm{d} u). \end{align*} \end{enumerate} In particular, (i) and the fact that $g_\Omega$ is Lipschitz imply that there is a constant $C=C(\Omega)>0$ such that \begin{align} 0\leq g_\Omega (0)- g_\Omega (y) \leq C (1\wedge \norm{y}).\label{g_Omega_bound} \end{align} \end{proposition} \subsection{Regular variation}\label{sec_RV} A function $\ell : [x_0, +\infty) \rightarrow (0, \infty)$, for some $x_0 > 0$, is called slowly varying at infinity if for each $\lambda > 0$ \[ \lim_{x \to \infty} \frac{\ell(\lambda x)}{\ell(x)} = 1. \] We say that $f: [x_0, +\infty) \rightarrow (0, +\infty)$ is regularly varying of index $\alpha \in \mathbb{R}$ at infinity, if $f(x) x^{-\alpha}$ is slowly varying at infinity. The set of regularly varying functions of index $\alpha$ at infinity is denoted by $\calR_{\alpha} $. In particular, if $f \in \calR_{\alpha} $ then \[ \lim_{x \to \infty} \frac{f(\lambda x)}{f(x)} =\lambda^\alpha,\quad \lambda>0. \] The following property of regularly varying functions, the so-called \textit{Potter bounds}, will be very useful, see \cite[Theorem 1.5.6]{bgt}. For every $C > 1$ and $\epsilon > 0$ there is $x_0=x_0(C,\epsilon)>0$ such that for all $ x, y \geq x_0$ \begin{equation} \label{eq:14} \frac{f(x)}{f(y)}\leq C \left( (x/y)^{\alpha -\epsilon} \vee (x/y)^{\alpha +\epsilon}\right). \end{equation} \subsection{L\'{e}vy processes}\label{sec_Levy} Throughout the paper $\mathbf{X}$ always denotes a L\'{e}vy process, that is a c\`{a}dl\`{a}g stochastic process with stationary and independent increments. 
The characteristic function of $X_t$ has the form $\mathbb{E} e^{i\sprod{X_t}{\xi}} = e^{-t\psi (\xi)}$, where the characteristic exponent $\psi$ is given by \eqref{charact_expo} with the corresponding L\'{e}vy measure $\nu $, cf. \eqref{Levy_measure}. We recall that $\mathbf{X}$ is isotropic if the measures $p_t(\mathrm{d} x)$ are all radial. This is equivalent to the radiality of the L\'{e}vy measure and the characteristic exponent. For isotropic processes the characteristic exponent has the simpler form \begin{align*} \psi (x) = \int_{\mathbb{R}^d}\left( 1- \cos \sprod{x}{y}\right)\nu (\mathrm{d} y) + \eta \norm{x}^2, \end{align*} for some $\eta \geq 0$. We usually abuse notation by setting $\psi(r)$ to be equal to $\psi(x)$ for any $x \in \mathbb{R}^d$ with $\norm{x} = r>0$. Since the function $\psi$ is not necessarily monotone, it is more convenient to work with the non-decreasing function $\psi^*$ defined by \begin{equation*} \psi^*(u) = \sup_{s \in [0, u]} \psi(s),\quad u \geq 0. \end{equation*} We denote by $\psi ^-$ the generalized inverse of the function $\psi^*$, that is $\psi^-(u) = \inf \{x\geq 0:\, \psi^*(x)\geq u\}$. By \cite[Theorem 1.5.3]{bgt}, if $\psi \in \calR_{\alpha} $, for some $\alpha>0$, then $\psi ^* \in \calR_{\alpha} $ and thus $\psi ^- \in \mathcal{R}_{1/\alpha}$, which implies that $\lim_{t\to 0}\psi^-(1/t) = \infty$. To any L\'{e}vy process $\mathbf{X}$ we associate the function $h$ defined at \eqref{Pruitt_Function}. According to \cite[Formula (3.2)]{Pruitt}, there is some positive constant $C=C(d)$ such that for any $r>0$, \begin{align}\label{estimate_Pruitt} \mathbb{P} \left( \norm{X_t}\geq r \right) \leq \mathbb{P} \left(\sup_{0\leq s\leq t } \norm{X_s}\geq r \right)\leq C t h(r) . \end{align} We mention that the function $h$ is decreasing and satisfies the doubling property \begin{align}\label{doubling:h} h(2x)\geq h(x)/4,\quad x>0. \end{align} For a symmetric L\'{e}vy process $\mathbf{X}$ the function $h$ has the simplified form \begin{align*} h(r)= \norm{A}r^{-2} + \int_{\mathbb{R} ^d} \left( 1\wedge \norm{y}^2r^{-2} \right) \, \nu(\mathrm{d} y) \end{align*} and for these processes, see \cite[Corollary 1]{Grzywny1}, \begin{equation}\label{psi_star_estimate} \frac{1}{2} \psi^*(r^{-1}) \leq h(r) \leq 8(1 + 2d) \psi^*(r^{-1}). \end{equation} In the paper we also deal with L\'{e}vy processes which have finite variation on any interval $(0,t)$, for $t>0$. This holds if and only if condition \eqref{Levy_bdd_var_cond} is satisfied. It turns out that for such processes the quantity $\mathrm{Per}_{\mathbf{X}}(\Omega)$ defined at \eqref{X_perimeter} is finite. \begin{lemma}\label{Lemma_Per_X} Assume that $\mathbf{X}$ has finite variation. Then for any $\Omega \subset \mathbb{R} ^d$ of finite measure and finite perimeter $\mathrm{Per} (\Omega)<\infty$ we have $\mathrm{Per}_{\mathbf{X}}(\Omega)<\infty$. \end{lemma} \begin{proof} Using \eqref{g_Omega_bound} we can write \begin{align*} \mathrm{Per}_{\mathbf{X}}(\Omega)&= \int_{\Omega}\int_{\Omega ^c-x}\nu (\mathrm{d} y)\, \mathrm{d} x = \int \int \textbf{1}_{\Omega}(x)\textbf{1}_{\Omega ^c}(y+x)\nu (\mathrm{d} y)\mathrm{d} x\\ &= \int_{\mathbb{R} ^d}\left( g_\Omega(0) - g_\Omega(y)\right) \nu (\mathrm{d} y)\leq C \int_{\mathbb{R} ^d}\left( 1\wedge \norm{y}\right) \nu (\mathrm{d} y). 
\end{align*} Further, \begin{align*} \int_{\mathbb{R} ^d}\left( 1\wedge \norm{y}\right) \nu (\mathrm{d} y) = \int_{\norm{y}<1}\norm{y}\nu (\mathrm{d} y) + \int_{\norm{y}\geq 1}\nu (\mathrm{d} y), \end{align*} where both integrals on the right hand side are finite due to \eqref{Levy_bdd_var_cond} and \eqref{Levy_measure}, and the proof is finished. \end{proof} For a detailed discussion of infinitesimal generators of semigroups related to L\'{e}vy processes we refer the reader to \cite[Section 31]{Sato} or \cite[Section 3.3]{Appl}. We recall that the heat semigroup $\{P_t\}_{t\geq 0}$ related to the L\'{e}vy process $\mathbf{X}$ is given by \begin{align*} P_tf(x) = \int_{\mathbb{R}^d} f(x+y)p_t(\mathrm{d} y),\quad f\in C_0(\mathbb{R}^d), \end{align*} where $C_0(\mathbb{R}^d)$ is the set of all continuous functions which vanish at infinity. The generator $\mathcal{L}$ of the process $\mathbf{X}$ is a linear operator defined by \begin{align}\label{gener} \mathcal{L}f(x) = \lim_{t\to 0}\frac{ P_tf(x) - f(x) }{t}, \end{align} with the domain $\mathrm{Dom}(\mathcal{L})$ which is the set of all $f$ such that the right hand side of \eqref{gener} exists. By \cite[Theorem 31.5]{Sato}, we have $C_0^2(\mathbb{R}^d)\subset \mathrm{Dom}(\mathcal{L})$ and for any $f\in C_0^2(\mathbb{R}^d)$ it has the form \begin{align*} \mathcal{L} f(x) &= \sum_{j,k=1}^d A_{jk}\partial^2_{jk} f(x)+\sprod{\gamma}{\nabla f(x)} \\ &\quad + \int \left(f(x+z)-f(x) -\textbf{1}_{\norm{z}<1}\sprod{z}{\nabla f(x)} \right) \nu(\mathrm{d} z), \end{align*} where $(A,\gamma ,\nu)$ is the triplet from \eqref{charact_expo}. For L\'{e}vy processes with finite variation we have the following. \begin{lemma}\label{Lemma_2} Let $\textbf{X}^0$ be a L\'{e}vy process with finite variation and such that $\gamma _0 =0$, cf. \eqref{gamma_0}. Let $f$ be a Lipschitz function (with constant $L$) in $\mathbb{R} ^d$ with $\lim _{x\to \infty}f(x) =0$. Then $f$ belongs to the domain of the generator $\mathcal{L}^0$ of the process $\textbf{X}^0$, i.e. $f\in \mathrm{Dom}(\mathcal{L}^0)$, and \begin{align*} \mathcal{L}^0 f(x) = \int_{\mathbb{R}^d} (f(x+y)- f(x))\nu (\mathrm{d} y). \end{align*} \end{lemma} \begin{proof} We take a function $\phi \in C_c^\infty (\mathbb{R}^d)$ such that $\phi (0)=1$, $\norm{\phi}_{L^1} =1$ and $\mathrm{supp}(\phi) \subset \{x\in\mathbb{R}^d:\, \norm{x}\leq 1\}$. We set $\phi _{\epsilon} (x) = \epsilon ^{-d}\phi (\epsilon ^{-1} x)$. It is well known that then the function $f_{\epsilon} = \phi _{\epsilon}\ast f$ belongs to $C_0^\infty (\mathbb{R}^d)$. Moreover, we have $\lim _{\epsilon \to 0} \norm{f_{\epsilon} - f}_{\infty} = 0 $. Indeed, for any $\delta >0$, \begin{align*} |f_{\epsilon}(x) - f(x)| &\leq \int _{\norm{y}<\delta}|\phi _{\epsilon}(y)||f(x-y)- f(x)|\, \mathrm{d} y + \int _{\norm{y}\geq \delta}|\phi _{\epsilon}(y)||f(x-y)- f(x)|\, \mathrm{d} y \\ &\leq L \delta \int _{\norm{y}<\delta}|\phi _{\epsilon}(y)|\, \mathrm{d} y + 2\norm{f}_{\infty} \int _{\norm{y}\geq \delta}|\phi _{\epsilon}(y)|\, \mathrm{d} y \leq L \delta \norm{\phi}_{L^1} + 2\norm{f}_{\infty} \delta, \end{align*} for $\epsilon$ small enough. Taking $\delta $ small as well, we get the claim. 
Moreover, since $\gamma_0=0$, \begin{align*} \mathcal{L}^0 f_{\epsilon}(x) &= \sprod{\gamma}{\nabla f_{\epsilon}(x)} + \int \left(f_{\epsilon}(x+z)-f_{\epsilon}(x) -\textbf{1}_{\norm{z}<1}\sprod{z}{\nabla f_{\epsilon}(x)} \right) \nu(\mathrm{d} z)\\&=-\sprod{\gamma_0}{\nabla f_{\epsilon}(x)} + \int \left(f_{\epsilon}(x+z)-f_{\epsilon}(x) \right) \nu(\mathrm{d} z) \\ &=\int_{\mathbb{R}^d} (f_{\epsilon}(x+y) - f_{\epsilon}(x))\, \nu (\mathrm{d} y) \end{align*} and we deduce that \begin{align*} \lim _{\epsilon \to 0}\mathcal{L}^0 f_{\epsilon}(x) = \int_{\mathbb{R}^d} (f(x+y) - f(x))\, \nu (\mathrm{d} y). \end{align*} Finally, since $\mathcal{L}^0$ is closed, we get that $f\in \mathrm{Dom}(\mathcal{L}^0)$ and \begin{align*} \mathcal{L}^0 f(x) = \int_{\mathbb{R}^d} (f(x+y) - f(x))\, \nu (\mathrm{d} y) \end{align*} which finishes the proof. \end{proof} \section{Proofs}\label{sec_Proofs} \subsection{Proof of Theorem \ref{Thm_d>1}} Before we prove Theorem \ref{Thm_d>1} we establish the following auxiliary lemma. \begin{lemma}\label{Lemma_1} Let $\mathbf{X}$ be a L\'{e}vy process in $\mathbb{R}^d$. Then \begin{enumerate} \item[(i)] There is a constant $C=C(d) >0$ such that for any $R>0$, \begin{align}\label{H formula1} \int_0^{R} \mathbb{P} (\norm{X_t} \geq x)\, \mathrm{d} x &\leq C t\int_{ h^{-1}(1/t)\wedge \frac{R}{2}}^{R} h(r)\, \mathrm{d} r ,\quad t>0. \end{align} \item[(ii)] The related function $H(t)$ introduced in \eqref{heat_cont-H} has the following form \begin{align}\label{H_formula} H(t) = \int _{\mathbb{R} ^d}\left( g_\Omega (0) -g_\Omega (y)\right) p_t(\mathrm{d} y). \end{align} \end{enumerate} \end{lemma} \begin{proof} We start with the proof of (i). Using \eqref{estimate_Pruitt} we clearly get that for some $C>0$ and any $t>0$, \begin{align*} \mathbb{P} (\norm{X_t}\geq x)\leq C ( t h(x) \wedge 1) . \end{align*} Observe that $th(x)\geq 1$ if and only if $x\leq h^{-1}(1/t)$ and thus we set $\beta = h^{-1}(1/t)$. For any $R>0$, we have \begin{align}\label{est:1} \int_0^{R} \mathbb{P} (\norm{X_t} \geq x)\, \mathrm{d} x \leq C\left( \int_{0}^{\beta \wedge \frac{R}{2}}\! \mathrm{d} x + t \int_{\beta \wedge \frac{R}{2}}^R \! h(x)\mathrm{d} x \right). \end{align} First we consider the case $\beta \leq R/2$, which is equivalent to $t\leq 1/h(R/2)$. We estimate the second integral in \eqref{est:1} as follows \begin{align*} t \int_{\beta }^R \! h(x)\mathrm{d} x \geq t \int_{\beta}^{2\beta} \! h(x)\mathrm{d} x\geq th(2\beta )\beta \geq \beta/4. \end{align*} In the last inequality we used the doubling property \eqref{doubling:h}. We obtain that \begin{align*} \int_0^{R} \mathbb{P} (\norm{X_t} \geq x)\, \mathrm{d} x \leq 5 C t \int_\beta ^R h(x)\mathrm{d} x \end{align*} as desired. Next, assume that $R/2<\beta $. By monotonicity of $h$, we have \begin{align*} t \int_{R/2}^R \! h(x)\mathrm{d} x\geq \frac{tR}{2} h(R)\geq \frac{tR}{2} h(2\beta )\geq R/8. \end{align*} This together with \eqref{est:1} implies \begin{align*} \int_0^{R} \mathbb{P} (\norm{X_t} \geq x)\, \mathrm{d} x \leq 5 C t\int_{R/2}^R h(x)\mathrm{d} x \end{align*} and this gives (i). 
For (ii) we write \begin{align*} H(t) &= \int_{\mathbb{R} ^d}\textbf{1}_{\Omega}(x)\left( 1-\mathbb{P} (X_t \in \Omega -x) \right) \mathrm{d} x \\ &= |\Omega| - \int_{\mathbb{R} ^d}\int_{\mathbb{R} ^d} \textbf{1}_{\Omega}(x)\textbf{1}_{\Omega}(x+y)\, \mathrm{d} x \, p_t(\mathrm{d} y) \\ &= g_\Omega(0) - \int_{\mathbb{R} ^d} g_\Omega (-y)\, p_t(\mathrm{d} y), \end{align*} and using symmetry of $g_\Omega$ (see (ii) of Proposition \ref{g_properties}) we get \eqref{H_formula}. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm_d>1}] We set $R = 2|\Omega|/\mathrm{Per}(\Omega)$ and split the integral in \eqref{H_formula} into two parts \begin{align*} H(t) = \int_{\norm{y} >R}(g_\Omega(0) - g_\Omega(y))\, p_t(\mathrm{d} y) + \int_{\norm{y} \leq R}(g_\Omega(0) - g_\Omega(y)) \, p_t(\mathrm{d} y) = \mathrm{I}_1 + \mathrm{I}_2. \end{align*} Using (i) of Proposition \ref{g_properties} we estimate $\mathrm{I}_1$ as follows \begin{align*} \mathrm{I}_1\leq |\Omega|\, \mathbb{P} (||X_t||>R)\leq \frac{\mathrm{Per}(\Omega)}{2}\int^R_{0} \mathbb{P}(||X_t||>s)\mathrm{d} s. \end{align*} Next, by (iv) of Proposition \ref{g_properties} we obtain \begin{align*} \mathrm{I}_2&\leq \frac{\mathrm{Per}(\Omega)}{2}\int_{||y||\leq R} ||y||\,p_t(\mathrm{d} y)= \frac{\mathrm{Per}(\Omega)}{2}\int_{||y||\leq R} \int^{||y||}_0\mathrm{d} s\, p_t(\mathrm{d} y)\\& = \frac{\mathrm{Per}(\Omega)}{2}\int^R_0\int_{s<||y||\leq R} p_t(\mathrm{d} y)\,\mathrm{d} s\leq \frac{\mathrm{Per}(\Omega)}{2}\int^R_0\mathbb{P}(||X_t||>s) \mathrm{d} s. \end{align*} These estimates together imply \begin{align*} H(t)\leq \mathrm{Per}(\Omega)\int^R_0\mathbb{P}(||X_t||>s) \mathrm{d} s. \end{align*} Hence, applying (i) of Lemma \ref{Lemma_1} we deduce the result. \end{proof} In the following Proposition \ref{H_lower_bound}, we provide a lower bound for the heat content related to an isotropic L\'{e}vy process with the characteristic exponent satisfying some scaling condition. We start with a useful lemma. \begin{lemma}\label{lem:14} Let $\textbf{X}$ be an isotropic L\'{e}vy process with the radial characteristic exponent $\psi$. Suppose that there is a constant $C>0$ such that for some $\alpha \in (0,2)$, \begin{align}\label{WUSC_condition1} C^{-1}\psi(x)\leq \psi(y) \leq C \left(\frac{y}{x}\right)^{\alpha} \psi(x),\quad 1<x<y. \end{align} Then there exists $c>0$ such that \begin{align*} \mathbb{P}(||X_t||>r) &\geq c\, (1-e^{-t \,h(r)}), \quad t,\,r<1. \end{align*} \end{lemma} \begin{proof}We observe that the left hand side inequality in \eqref{WUSC_condition1} implies that \begin{align}\label{psi_psi*} \psi (x)\geq C^{-1}\psi^*(x),\quad x>1. \end{align} Thus, proceeding exactly in the same fashion as in the proof of \cite[Lemma 14]{BGR}, we obtain \begin{align*} \mathbb{P}(||X_t||>r) &\geq C_1 (1-e^{-t \,\psi^*(1/r)}), \quad t,\,r<1. \end{align*} Finally, inequality \eqref{psi_star_estimate} yields \begin{align*} \psi^*(r)\geq ch(1/r),\quad r>0, \end{align*} and the proof is finished. \end{proof} \begin{proposition}\label{H_lower_bound} Let $\textbf{X}$ be an isotropic L\'{e}vy process in $\mathbb{R}^d$ with the radial characteristic exponent $\psi$ which satisfies condition \eqref{WUSC_condition1}. Assume also that the related function $h$ (see \eqref{Pruitt_Function}) is not Lebesgue integrable around zero. 
Then, for any open $\Omega \subset \mathbb{R}^d$ with finite measure and of finite perimeter, there exists $C>0$ which does not depend on the set $\Omega$ such that, for $t$ small enough, \begin{align*} H(t) &\geq C \,t\,\mathrm{Per}(\Omega)\, \int_{h^{-1}(1/t)}^{R} h(r)\, \mathrm{d} r , \end{align*} where $R=2|\Omega|/\mathrm{Per}(\Omega)$. \end{proposition} \begin{proof} We first consider the case $d\geq2$. Since $h$ is not integrable around $0$, it is unbounded and so is $\psi$, due to inequality \eqref{psi_star_estimate}. Therefore, $\textbf{X}$ is not a compound Poisson process. Hence, by \cite[(4.6)]{Zabczyk}, all the transition probabilities $p_t(\mathrm{d} x)$ are absolutely continuous with respect to the Lebesgue measure. Since $p_t$ are radial, we have $p_t(x) = p_t(\norm{x}e_d)$ with $e_d=(0,\ldots ,0,1)$ and as a result by polar coordinates we get that, for any $u_1,u_2\in [0 , +\infty ]$, \begin{align}\label{polar_formula} \mathbb{P} \left( u_1< \norm{X_t} < u_2\right) = \int_{\mathbb{R}^d}\textbf{1}_{(u_1,u_2)}(\norm{w})p_t(w)\mathrm{d} w = \sigma (\mathbb{S}^{d-1})\int_{u_1}^{u_2}r^{d-1}p_t(re_d)\, \mathrm{d} r. \end{align} Applying \eqref{polar_formula} in \eqref{H_formula} we obtain that for any $\delta >0$, \begin{align*} H(t)\geq \int_0^\delta r^{d-1} p_t(r e_d) \int_{\mathbb{S}^{d-1}} \left( g_\Omega (0) - g_\Omega (ru)\right)\, \sigma (\mathrm{d} u)\, \mathrm{d} r = \int_0^\delta r^{d} p_t(r e_d) \, \mathcal{M}_{\Omega}(r)\, \mathrm{d} r, \end{align*} where \begin{align*} \mathcal{M}_{\Omega}(r) = \int_{\mathbb{S}^{d-1}} \frac{g_\Omega (0) - g_\Omega (ru)}{r}\, \sigma (\mathrm{d} u). \end{align*} Using (v) of Proposition \ref{g_properties} and applying Fatou's lemma, we get that $\mathcal{M}_{\Omega}(r) \geq C\mathrm{Per}(\Omega)$, for some positive constant $C=C(d)$ and for $r$ small enough. Hence, for $\delta$ small enough, \begin{align*} H(t)\geq C\mathrm{Per}(\Omega)\int_0^\delta r^{d} p_t(r e_d) \, \mathrm{d} r &= C\mathrm{Per}(\Omega)\int_0^\delta \int_0^r \mathrm{d} u\, r^{d-1} p_t(r e_d) \, \mathrm{d} r \\ &= \frac{C\mathrm{Per}(\Omega)}{\sigma (\mathbb{S}^{d-1})}\int_0^\delta \int_{u<||y||<\delta}p_t(y)\mathrm{d} y\, \mathrm{d} u \\ &= \frac{C\mathrm{Per}(\Omega)}{\sigma (\mathbb{S}^{d-1})}\int_0^\delta \mathbb{P}(u<\norm{X_t}<\delta ) \, \mathrm{d} u \\ &= \frac{C\mathrm{Per}(\Omega)}{\sigma (\mathbb{S}^{d-1})}\left(\int_0^\delta \mathbb{P}(\norm{X_t} >u ) \, \mathrm{d} u - \delta\mathbb{P}(\norm{X_t} > \delta)\right), \end{align*} where in the second equality we used \eqref{polar_formula}. By Lemma \ref{lem:14}, there is a constant $C_1=C_1(d)>0$ such that, for $u$ small enough, \begin{align*} \mathbb{P}(\norm{X_t} >u )\geq C_1 t h(u). \end{align*} This and \eqref{estimate_Pruitt} imply that, for $\delta$ small enough, \begin{align}\label{int_234} H(t)\geq C_2\, t\,\mathrm{Per}(\Omega)\, \left( \int_{h^{-1}(1/t)}^\delta h(u)\, \mathrm{d} u - C_3 \delta h(\delta) \right). \end{align} Since $\psi$ is continuous and $\psi(0)=0$ we have by \eqref{psi_star_estimate}, \begin{align*} \lim_{t\to 0}h^{-1}(1/t) = 0. \end{align*} Further, since $h$ is not integrable around zero, the integral on the right hand side of \eqref{int_234} tends to infinity for any $\delta >0$, as $t$ goes to zero. 
This implies that, for $t$ small enough, \begin{align*} \int_{h^{-1}(1/t)}^{\delta} h(u)\, \mathrm{d} u - C_3 \delta h(\delta)&= \int_{h^{-1}(1/t)}^{R} h(u)\, \mathrm{d} u- \int_{\delta}^{R} h(u)\, \mathrm{d} u- C_3 \delta h(\delta)\\ &= \int_{h^{-1}(1/t)}^{R} h(u)\, \mathrm{d} u \left( 1-\frac{\int_{\delta}^{R} h(u)\, \mathrm{d} u+ C_3 \delta h(\delta)}{ \int_{h^{-1}(1/t)}^{R} h(u)\, \mathrm{d} u} \right)\\ & \geq \frac{1}{2} \int_{h^{-1}(1/t)}^{R} h(u)\, \mathrm{d} u . \end{align*} Using this and \eqref{int_234} we obtain that there is some $C_4>0$ which does not depend on $\Omega$ such that, for $t$ small enough, \begin{align*} H(t)\geq C_4 \, t\,\mathrm{Per}(\Omega)\, \int_{h^{-1}(1/t)}^{R} h(u)\, \mathrm{d} u, \end{align*} and the proof is finished for $d\geq 2$. Finally, in the case $d=1$ we use \eqref{H_formula}, \begin{align*} H(t)\geq \int_0^\delta \frac{g_\Omega (0) - g_\Omega (x)}{x}xp_t(\mathrm{d} x), \end{align*} and an application of (v) of Proposition \ref{g_properties} with $d=1$ gives that, for $x>0$ small enough, \begin{align*} \frac{g_\Omega (0) - g_\Omega (x)}{x}\geq C\mathrm{Per}(\Omega). \end{align*} Thus, for $\delta$ small enough, by symmetry of $\textbf{X}$, \begin{align*}H(t)&\geq C\mathrm{Per}(\Omega) \int_0^\delta xp_t(\mathrm{d} x) = C\mathrm{Per}(\Omega) \int _0^\delta \int_0^{x}\mathrm{d} u\, p_t(\mathrm{d} x)\\ & = C\mathrm{Per}(\Omega) \int _0^\delta \int_{u}^{\delta} p_t(\mathrm{d} x)\, \mathrm{d} u\\ & = \frac{C}{2}\mathrm{Per}(\Omega) \int _0^\delta \mathbb{P}(u<|X_t|<\delta)\, \mathrm{d} u. \end{align*} The result is concluded by the same reasoning as for $d\geq2$. \end{proof} \subsection{Proof of Theorem \ref{Thm_alpha>1}} We start with the following auxiliary lemma. \begin{lemma}\label{lemma444} Let $\mathbf{X}$ be an isotropic L\'{e}vy process in $\mathbb{R}^d$ with the radial transition probability $p_t(\mathrm{d} x)$. Assume that its characteristic exponent $\psi \in \calR_{\alpha} $ with $\alpha \in (1,2]$. Then $p_t(\mathrm{d} x)=p_t(x)\mathrm{d} x$ and \begin{align}\label{limit_stable} \lim_{t\to 0} \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right)}{(\psi^{-}(1/t) )^d} = p_1^{(\alpha)}(se_d), \end{align} where $p_t^{(\alpha)}(x)$ is the transition density of the isotropic $\alpha$-stable process in $\mathbb{R}^d$ when $1<\alpha <2$ and $p_t^{(2)}(x)$ is the transition density of the Brownian motion in $\mathbb{R}^d$. \end{lemma} \begin{proof} Since $\psi \in \calR_{\alpha} $, $\alpha \in (1,2]$, we have \begin{align*} \lim_{r\to \infty}\frac{\psi (r)}{\log (1+r)}=\infty, \end{align*} and this implies that $p_t(\mathrm{d} x) = p_t(x)\,\mathrm{d} x$ with the density $p_t\in L_1(\mathbb{R}^d)\cap C_0(\mathbb{R}^d)$, see e.g. \cite[Theorem 1]{Knopova_Schilling}. By the Fourier inversion formula, see \cite[Section 3.3]{Appl}, \begin{align}\label{integral_1} \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right)}{(\psi^{-}(1/t) )^d} = \frac{1}{(2\pi)^d} \int_{\mathbb{R} ^d}\cos \sprod{se_d}{\xi} e^{-t\psi \left( \psi^{-}(1/t) \xi\right)}\mathrm{d} \xi . \end{align} Since $\psi$ is continuous, $\psi(\psi^{-}(1/t))=1/t$. Hence \begin{align*} \frac{\psi \left( \psi^{-}(1/t) \xi\right)}{1/t}= \frac{\psi \left( \psi^{-}(1/t) \xi\right)}{\psi \left( \psi^{-}(1/t) \right)}\sim \norm{\xi}^\alpha,\quad t\to 0, \end{align*} and this leads to \begin{align*} \lim_{t\to 0}e^{-t\psi \left( \psi^{-}(1/t) \xi\right)} = e^{-\norm{\xi}^\alpha}. \end{align*} Therefore, to finish the proof we apply the Dominated convergence theorem. 
We split the integral in \eqref{integral_1} into two parts. According to the Potter bounds \eqref{eq:14} there is $r_0>0$ such that, for $t$ small enough and $\norm{\xi}\geq r_0$, \begin{align*} t\psi \left( \psi^{-}(1/t) \xi\right)=\frac{\psi \left( \psi^{-}(1/t) \xi\right)}{\psi \left( \psi^{-}(1/t) \right)}\geq \frac{1}{2} \norm{\xi}^{\alpha/2}. \end{align*} This implies that $e^{-t\psi \left( \psi^{-}(1/t) \xi\right)}\leq e^{-\norm{\xi}^{\alpha/2}/2}$, for $\norm{\xi}\geq r_0$ and $t$ small enough. For $\norm{\xi}< r_0$ we bound $e^{-t\psi \left( \psi^{-}(1/t) \xi\right)}$ by one. The Dominated convergence theorem followed by the Fourier inversion formula proves \eqref{limit_stable}. \end{proof} \begin{proof}[Proof of Theorem \ref{Thm_alpha>1}.] By \eqref{H_formula}, \begin{align*} H(t) = \int_{\mathbb{R} ^d} p_t(x)\left( g_\Omega (0) - g_\Omega (x)\right) \mathrm{d} x. \end{align*} Since $g_\Omega (x) \leq g_\Omega (0) = |\Omega|$, $x\in \mathbb{R} ^d$ (see Proposition \ref{g_properties} (i)), for any given $\delta >0$, we can split the integral into two parts \begin{align}\label{split_delta} H(t) &= \int_{\norm{x}\leq \delta} p_t(x)\left( g_\Omega (0) - g_\Omega (x)\right) \mathrm{d} x + \int_{\norm{x}> \delta} p_t(x) \left( g_\Omega (0) - g_\Omega (x)\right) \mathrm{d} x \\ &= \mathrm{I}_1+\mathrm{I}_2.\nonumber \end{align} We estimate $\mathrm{I}_2$ using \eqref{estimate_Pruitt}, \begin{align*} \int_{\norm{x}> \delta} p_t(x) \left( g_\Omega (0) - g_\Omega (x)\right) \mathrm{d} x \leq |\Omega| \mathbb{P} (\norm{X_t} >\delta) = O(t). \end{align*} Since $\psi \in \calR_{\alpha} $, $1<\alpha \leq 2$, \cite[Theorem 1.5.12]{bgt} yields $\psi ^{-}\in \mathcal{R}_{1/\alpha}$ and thus $\psi^{-}(1/t) \,\mathrm{I}_2\to 0$ as $t$ tends to zero. We are left to study the integral $\mathrm{I}_1$. Recall that the radiality of $p_t$ implies that $p_t(x) = p_t(re_d)$, where $\norm{x} = r$ and $e_d=(0,\ldots,0,1)$. Changing variables into polar coordinates we obtain \begin{align*} \psi^{-}(1/t) \, \mathrm{I}_1 = \psi^{-}(1/t) \int_0^\delta r^{d-1} p_t(re_d) \int_{\mathbb{S}^{d-1}} \left( g_\Omega (0) - g_\Omega (ru)\right)\, \sigma (\mathrm{d} u)\, \mathrm{d} r. \end{align*} Making the substitution $r=s/\psi^{-}(1/t)$ we get \begin{align*} \psi^{-}(1/t) \, \mathrm{I}_1 = \int_0^{\delta \psi^{-}(1/t) }\!\! s^{d}\, \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right)}{(\psi^{-}(1/t) )^d}\, \mathcal{M}_{\Omega}(t,s)\, \mathrm{d} s, \end{align*} where \begin{align*} \mathcal{M}_{\Omega}(t,s) = \int_{\mathbb{S}^{d-1}} \frac{g_\Omega (0) - g_\Omega \left(\frac{s}{\psi^{-}(1/t)}u\right)}{s/\psi^{-}(1/t)}\, \sigma (\mathrm{d} u). \end{align*} We claim that for any fixed $M>0$, \begin{align}\label{claim111} \lim_{t\to 0} \int_0^{M }\!\! s^{d}\, \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right) }{(\psi^{-}(1/t) )^d}\, \mathcal{M}_{\Omega}(t,s)\, \mathrm{d} s = \frac{\pi^{(d-1)/2}}{\Gamma\left((d+1)/2\right)}\mathrm{Per} (\Omega) \int_0^M s^d p_1^{(\alpha)}(se_d)\mathrm{d} s. \end{align} To show the claim we use the Dominated convergence theorem. By Proposition \ref{g_properties} (iv-v), \begin{align*} 0\leq \mathcal{M}_{\Omega}(t,s)\leq \frac{1}{2} \mathrm{Per}(\Omega)\, \sigma (\mathbb{S}^{d-1}) \end{align*} and, for any $s>0$, \begin{align*} \lim_{t\to 0}\mathcal{M}_{\Omega}(t,s) = \frac{\pi^{(d-1)/2}}{\Gamma\left((d+1)/2\right)}\mathrm{Per} (\Omega). 
\end{align*} Next, by \cite[Formula (23)]{BGR}, for $s\leq M$, \begin{align*} \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right)}{(\psi^{-}(1/t) )^d} \leq \frac{p_t(0)}{(\psi^{-}(1/t) )^d} \leq C(M) \end{align*} and, by Lemma \ref{lemma444}, \begin{align*} \lim_{t\to 0} \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right)}{(\psi^{-}(1/t) )^d} = p_1^{(\alpha)}(se_d). \end{align*} Hence the Dominated convergence theorem implies \eqref{claim111}. Further, we have \begin{multline*} \int_M^{\delta \psi^{-}(1/t) }\!\! s^{d}\, \frac{p_t\left( \frac{s}{\psi^{-}(1/t)}e_d \right)}{(\psi^{-}(1/t) )^d}\, \mathrm{d} s = \psi^{-}(1/t) \int_{M/\psi^{-}(1/t)}^{\delta }s^d p_t(s e_d)\, \mathrm{d} s\\ = \psi^{-}(1/t) \int_{M/\psi^{-}(1/t)}^{\delta }\int_0^s \mathrm{d} u\, s^{d-1} p_t(s e_d)\, \mathrm{d} s = \psi^{-}(1/t) \int_0^\delta \int_{(M/\psi^{-}(1/t) )\vee u}^{\delta }\!\!\!\! s^{d-1} p_t(s e_d)\, \mathrm{d} s\, \mathrm{d} u\\ \leq \psi^{-}(1/t) \int_0^\delta \mathbb{P} \left( \norm{X_t}> (M/\psi^{-}(1/t) )\vee u\right)\mathrm{d} u \\ \leq M \mathbb{P} \left( \norm{X_t}> M/\psi^{-}(1/t) \right) + \psi^{-}(1/t) \int_{M/\psi^{-}(1/t)}^\delta \mathbb{P} \left( \norm{X_t}> u \right)\mathrm{d} u . \end{multline*} Now, we notice that combining \eqref{estimate_Pruitt} and \eqref{psi_star_estimate}, we obtain $\mathbb{P}(\norm{X_t} > r) \leq C\, t\psi^{*}(1/r) $. Thus, using Potter bounds with $\epsilon <\alpha -1$, we estimate, for $t$ small enough, the first term as follows \begin{align*} M \mathbb{P} \left( \norm{X_t}> M/\psi^{-}(1/t) \right) &\leq Mt\psi^*\left( \psi^{-}(1/t) /M \right)\\ &\leq C_1 M \frac{\psi^*\left( \psi^{-}(1/t) /M \right)}{\psi^*\left( \psi^{-}(1/t) \right)} \leq C_2 M^{1-\alpha +\epsilon}. \end{align*} We proceed similarly with the second term. Applying Karamata's theorem \cite[Proposition 1.5.8]{bgt} and Potter bounds we obtain that for $t$ small enough \begin{multline*} \psi^{-}(1/t) \int_{M/\psi^{-}(1/t)}^\delta \mathbb{P} \left( \norm{X_t}> u \right)\mathrm{d} u \leq t\psi^{-}(1/t) \int_{M/\psi^{-}(1/t)}^\delta \psi^*(u^{-1})\, \mathrm{d} u \\ \leq C_3 M(\alpha +1)^{-1} t\, \psi^*\left( \psi^{-}(1/t) /M \right)\\ \leq C_4 M(\alpha +1)^{-1} \frac{\psi^*\left( \psi^{-}(1/t) /M \right)}{\psi^*\left( \psi^{-}(1/t) \right)}\leq C_5 (\alpha +1)^{-1}M^{1-\alpha +\epsilon}. \end{multline*} Finally, letting $M$ tend to infinity we obtain \begin{align*} \lim_{t\to 0}\psi^{-}(1/t) \,\mathrm{I}_1 = \frac{\pi^{(d-1)/2}}{\Gamma\left((d+1)/2\right)}\mathrm{Per} (\Omega)\int_{0}^\infty s^d p_1^{(\alpha)}(se_d)\mathrm{d} s . \end{align*} It is known that, see e.g. \cite[Lemma 4.1]{Valverde2}, \begin{align*} \int_{0}^\infty s^d p_1^{(\alpha)}(se_d)\mathrm{d} s = \pi^{-(d+1)/2}\Gamma ((d+1)/2)\Gamma(1-1/\alpha) \end{align*} and we conclude the result. \end{proof} \subsection{Proof of Theorem \ref{Thm_X_bdd_variation}} \begin{proof}[Proof of Theorem \ref{Thm_X_bdd_variation}] We consider two cases: the first is $\gamma_0 =0$. Then $X_t = X_t^0$, where $\mathbf{X}^0$ is as in Lemma \ref{Lemma_2} and we have \begin{align}\label{eq222} t^{-1}H(t) = \int _{\Omega}\frac{1 - \mathbb{P} (X_t +x \in \Omega)}{t}\, \mathrm{d} x = t^{-1}(g_\Omega(0) - P_tg_\Omega (0)), \end{align} which converges to $ -\mathcal{L}g_\Omega(0) = \mathrm{Per} _{\mathbf{X}}(\Omega)$ according to Subsection \ref{sec_Levy} and Lemma \ref{Lemma_2}. 
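To make the last identification explicit (a short elaboration added here for clarity): in this case $X_t=X_t^0$, so $\mathcal{L}=\mathcal{L}^0$; moreover, $g_\Omega$ is Lipschitz and vanishes at infinity by Proposition \ref{g_properties}, so Lemma \ref{Lemma_2} applies to $g_\Omega$ and, together with the computation in the proof of Lemma \ref{Lemma_Per_X}, it gives
\begin{align*}
\mathcal{L}g_\Omega(0) = \int_{\mathbb{R}^d} \left( g_\Omega (y)- g_\Omega(0)\right)\nu (\mathrm{d} y) = -\mathrm{Per}_{\mathbf{X}}(\Omega).
\end{align*}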
In the case $\gamma _0 \neq 0$, we write $X_t = X_t^0 +t \gamma_0 $, where $\mathbf{X}^0$ is again as in Lemma \ref{Lemma_2}, and thus \begin{align*} t^{-1}H(t) &= \int _{\Omega}\frac{1 - \mathbb{P} (X^0_t + t \gamma _0 +x \in \Omega)}{t}\, \mathrm{d} x \\ &= \int _{\Omega}\frac{1 - \mathbb{P} (X^0_t +x \in \Omega)}{t}\, \mathrm{d} x + \int _{\Omega}\frac{\mathbb{P} (X^0_t +x \in \Omega) - \mathbb{P} (X^0_t + t \gamma _0 +x \in \Omega)}{t}\, \mathrm{d} x . \end{align*} By \eqref{eq222} we obtain \begin{align}\label{eq222a} \lim_{t\to 0} \int _{\Omega}\frac{1 - \mathbb{P} (X^0_t +x \in \Omega)}{t}\, \mathrm{d} x = \mathrm{Per} _{\mathbf{X}}(\Omega). \end{align} We denote by $p^0_t(\mathrm{d} x)$ and $h^0$ the transition probabilities and the function introduced in \eqref{Pruitt_Function}, respectively, corresponding to the process $\mathbf{X}^0$. For the second integral we proceed as follows \begin{multline*} \int _{\Omega}\frac{\mathbb{P} (X^0_t +x \in \Omega) - \mathbb{P} (X^0_t + t \gamma _0 +x \in \Omega)}{t}\, \mathrm{d} x \\ = \int_{\mathbb{R}^d} \textbf{1}_{\Omega}(x)t^{-1}\int_{\mathbb{R}^d}\left( \textbf{1}_{\Omega}(y+x) - \textbf{1}_{\Omega}(y+x+t \gamma_0 ) \right)\, p_t^0(\mathrm{d} y)\, \mathrm{d} x\\ = \int_{\mathbb{R}^d}\frac{g_\Omega(y) - g_\Omega(y + t \gamma _0 )}{t}\, p_t^0(\mathrm{d} y). \end{multline*} We take $\epsilon >0$. Using (iv) of Proposition \ref{g_properties} and \eqref{estimate_Pruitt} we write \begin{align}\label{eq111} \begin{aligned} \Big| \int_{\norm{y}>\epsilon t} \frac{g_\Omega(y) - g_\Omega(y + t \gamma _0 )}{t}\, p_t^0(\mathrm{d} y) \Big| &\leq \norm{\gamma_0}\,\mathbb{P}\big(\norm{X^0_t}>\epsilon t\big)\leq C \norm{\gamma_0}\,t h^0(\epsilon t)\\ = C \norm{\gamma_0}t \int _{\mathbb{R}^d} \left( 1\wedge \frac{\norm{y}^2}{(\epsilon t)^2} \right)\,\nu (\mathrm{d} y) &= C \norm{\gamma_0}t \int _{\mathbb{R}^d} \left( 1\wedge \frac{\norm{y}}{\epsilon t} \right)^2 \,\nu (\mathrm{d} y) \\ \leq C \norm{\gamma_0}t \int _{\mathbb{R}^d} \left( 1\wedge \frac{\norm{y}}{\epsilon t} \right) \,\nu (\mathrm{d} y) &= C \norm{\gamma_0} \int _{\mathbb{R}^d} \left( t\wedge \frac{\norm{y}}{\epsilon } \right) \,\nu (\mathrm{d} y) . \end{aligned} \end{align} By the Lebesgue dominated convergence theorem the last quantity tends to zero as $t$ goes to zero. For the other part of the integral we have \begin{multline*} \int_{\norm{y}\leq \epsilon t} \frac{g_\Omega(y) - g_\Omega(y + t \gamma _0 )}{t}\, p_t^0(\mathrm{d} y) = \int_{\norm{y}\leq \epsilon t} \frac{g_\Omega(y) - g_\Omega(0 )}{t}\, p_t^0(\mathrm{d} y) \\ + \int_{\norm{y}\leq \epsilon t} \frac{g_\Omega(0) - g_\Omega(t \gamma _0 )}{t}\, p_t^0(\mathrm{d} y) + \int_{\norm{y}\leq \epsilon t} \frac{g_\Omega(\gamma_0 t) - g_\Omega(y + t \gamma _0 )}{t}\, p_t^0(\mathrm{d} y) = I_1 + I_2 +I_3. \end{multline*} Handling $I_1$ is easy thanks to condition (iv) of Proposition \ref{g_properties}. Indeed, \begin{align*} |I_1| \leq \int_{\norm{y}\leq \epsilon t} \frac{|g_\Omega(y) - g_\Omega(0)|}{\norm{y}}\cdot \frac{\norm{y}}{t}\, p_t^0(\mathrm{d} y)\leq L\epsilon \int_{\norm{y}\leq \epsilon t} p_t^0(\mathrm{d} y) \leq L\epsilon . \end{align*} Similarly we estimate $I_3$. The integral $I_2$ equals \begin{align*} I_2 = \frac{g_\Omega(0) - g_\Omega\left(( \norm{\gamma _0}t ) \frac{ \gamma _0}{\norm{\gamma _0}}\right)}{\norm{\gamma _0 } t}\norm{\gamma _0} \int_{\norm{y}\leq \epsilon t} p_t^0(\mathrm{d} y). 
\end{align*} Using \eqref{g_lip_limit} we obtain \begin{align}\label{eq333} \lim_{t\to 0}\frac{g_\Omega(0) - g_\Omega\left((\norm{\gamma _0} t) \frac{\gamma _0}{\norm{\gamma _0}} \right)}{\norm{\gamma _0}t} = \frac{V_{\frac{\gamma _0}{\norm{\gamma _0}}}(\Omega)}{2}. \end{align} Moreover, we claim that \begin{align*} \lim_{t\to 0}\int_{\norm{y}\leq \epsilon t} p_t^0(\mathrm{d} y) =1. \end{align*} Indeed, we have \begin{align*} \int_{\norm{y}\leq \epsilon t} p_t^0(\mathrm{d} y) = 1- \mathbb{P}\left( \norm{X_t^0}>\epsilon t \right). \end{align*} Proceeding in the same fashion as in \eqref{eq111} we show that $\mathbb{P}\left( \norm{X_t^0}>\epsilon t \right)$ tends to zero as $t$ goes to zero, which gives the claim. Finally, equations \eqref{eq222a} and \eqref{eq333} imply the result. \end{proof} \bibliographystyle{plain}
\section{Introduction} A central problem in Mathematics is the classification problem. Given a set of objects and an equivalence relation, loosely speaking, the problem asks how to find an accessible way to tell whether two objects are in the same equivalence class. A general approach to this problem is to find a complete set of (geometric, analytic or algebraic) invariants. In the subject of Several Complex Variables and Complex Geometry, a fundamental problem is to classify complex manifolds or, more generally, normal complex spaces under the action of biholomorphic transformations. When the normal complex spaces are open and have strongly pseudo-convex boundary, by the Fefferman-Bochner theorem, one needs only to classify the corresponding boundary strongly pseudoconvex CR manifolds under the action of CR diffeomorphisms. The celebrated Chern-Moser theory gives two different constructions of a complete set of invariants for such a classification problem. Among various aspects of the Chern-Moser theory (especially the geometric aspect of the theory), the Chern-Moser-Weyl tensor plays a key role. However, this trace-free tensor is defined in a very complicated manner, which makes it hard to use in applications. The first several sections of this article mainly survey work done in the papers of Chern-Moser [CM], Huang-Zhang [HZh], and Huang-Zaitsev [HZa]. Here, we give a simple and more accessible account of the Chern-Moser-Weyl tensor. We also make an immediate application of the monotonicity property of this tensor to the study of the CR embedding problem in the positive signature case. In the last section of this paper, we present new material. We will show that the family of compact strongly pseudo-convex algebraic hypersurfaces constructed in [HLX] cannot be locally holomorphically embedded into a sphere of any dimension. The argument is based on the rationality result established in [HLX] and the Segre geometry associated with such a family. This gives a negative answer to a long-standing folklore conjecture concerning the embeddability of compact strongly pseudo-convex algebraic hypersurfaces into a sphere of sufficiently high dimension. For an extensive discussion of the history of CR embeddability into spheres, we refer the reader to the introduction of a recent joint paper of the first author with Zaitsev [HZa]. \bigskip \section{Chern-Moser-Weyl tensor for a Levi non-degenerate hypersurface} In this article, we assume that the CR manifolds under consideration are already embedded as hypersurfaces in complex Euclidean spaces. We first consider the case where the manifolds are Levi non-degenerate. We use $(z,w)\in \mathbb{C}^n\ \times\mathbb{C}$ for the coordinates of $\mathbb{C}^{n+1}$. We always assume that $n\ge 2$, for otherwise the Chern-Moser-Weyl tensor is identically zero. In that case, one has to consider the Cartan curvature functions instead, which we will not touch in this article. Let $M$ be a smooth real hypersurface.
We say that $M$ is Levi non-degenerate at $p\in M$ with signature $\ell\le n/2$ if there is a local holomorphic change of coordinates, that maps $p$ to the origin, such that in the new coordinates, $M$ is defined near $0$ by an equation of the form: \begin{equation} r=v-|z|^2_\ell+o(|z|^2+|zu|)=0 \label{001} \end{equation} Here, we write $u=\Re w, v=\Im w$ and $<a,\bar b>_\ell=-\sum_{j\le \ell} a_j \bar b_j+\sum_{j=\ell+1}^n a_j \bar b_j, |z|_\ell^2=<z,\bar z>_\ell.$ When $\ell=0$, we regard $\sum_{j\le \ell} a_j=0$. Assume that $M$ is Levi non-degenerate with the same signature $\ell$ at any point in $M$. For a point $p\in M$, a real non-vanishing 1-form $\theta_p$ at $p\in M$ is said to be appropriate contact form at $p$ if $\theta_p$ annihilates $T_p^{(1,0)}+ T_p^{(0,1)}M$ and the Levi form $L_{\theta_p}$ associated with $\theta_p$ at $p\in M$ has $\ell$ negative eigenvalues and $n-\ell$ positive eigenvalues. Here we recall the definition of the Levi-form $L_{\theta_p}$ at $p$ as follows: We first extend $\theta_p$ to a smooth 1-form $\theta$ near $p$ such that $\theta|_q$ annihilates $T_q^{(1,0)}+ T_q^{(0,1)}M$ at any point $q\approx p$. For any $ X_{\alpha}, X_{\beta} \in T_p^{(1,0)}$, we define \begin{equation} L_{\theta_p}(X_\alpha,X_\beta):=-i<d\theta|_p, X_{\alpha}\wedge\overline X_\beta>. \label{eqn:000} \end{equation} One can easily verify that $L_{\theta_p}$ is a well-defined Hermitian form in the tangent space of type $(1,0)$ of $M$ at $p$, which is independent of the choice of the extension of the 1-form $\theta$. In the literature, any smooth non-vanishing 1-form $\theta$ along $M$ is called a smooth contact form, if $\theta|_q$ annihilates $T_q^{(1,0)}M$ for any $q\in M$. If $\theta|_q$ is appropriate at $q\in M$, we call $\theta$ an appropriate smooth contact 1-form along $M$. Write $E_p$ for the set of appropriate contact 1-forms at $p$ defined above, and $E$ for the disjoint union of $E_p$. Then two elements in $E_p$ are proportional by a positive constant for the case of $\ell<n/2$; and are proportional by a non zero constant when $\ell=n/2.$ There is a natural smooth structure over $E$ which makes $E$ into a $R^{+}$ fiber bundle over $M$ when $\ell<n/2$, or a $R^{*}$-bundle over $M$ when $\ell=n/2$. When $M$ is defined near $0$ by an equation of the form as in (\ref{001}), then $i\partial r$ is an appropriate contact form of $M$ near $0$. In particular, for any appropriate contact 1-form $\theta_0$ at $0\in M$, there is a constant $c\not =0$ such that $\theta_0=ic\partial r|_0.$ And $c>0$ when $\ell<n/2$. Applying further a holomorphic change of coordinates $(z,w)\ra ( \sqrt{|c|}z,cw)$ and the permutation transformation $(z_1,\cdots,z_n,w)\ra (z_n,\cdots, z_1,w)$ if necessary, we can simply have $\theta_0=i \partial r|_0.$ Assign the weight of $z,\-{z}$ to be $1$ and that of $u,v,w$ to be $2$. We say $h(z,\-{z},u)=o_{wt}(k)$ if $\frac{h(tz,\-{tz},t^2u)}{t^k}\ra 0$ uniformly on compact sets in $(z,u)$ near the origin. We write $h^{(k)}(z,w)$ for a weighted homogeneous holomorphic polynomial of weighted degree $k$ and $h^{(k)}(z,\-{z},u)$ for a weighted homogeneous polynomial of weighted degree $k$. We first have the following special but crucial case of the Chern-Moser normalization theorem: \bigskip {\proposition {\it Let $M\subset {\mathbb C}^n\times {\mathbb C}$ be a smooth Levi non-degenerate hypersurface. Let $\theta_p\in E_p$ be an appropriate real 1-form at $p\in M$. 
Then there is a biholomorphic map $F$ from a neighborhood of $p$ to a neighborhood of $0$ such that $F(p)=0$ and $F(M)$ near $0$ is defined by an equation of the following normal form (up to fourth order): \begin{equation} r=v-|z|_\ell^2+\frac{1}{4}s (z,\bar{z})+R(z,\-{z},u)=v-|z|_\ell^2+\frac{1}{4}\sum s^0_{\alpha\bar {\beta}\gamma \bar{\delta}}z_{\alpha} {\bar z_\beta}z_{\gamma}{\bar z_\delta}+R(z,\-{z},u)=0. \label{eqn:002} \end{equation} Here $s(z,\-{z})=\sum s^0_{\alpha\bar \beta \gamma\bar{\delta}}z_{\alpha} {\bar z_\beta}z_{\gamma}{\bar z_\delta}$,$\ s^0_{\alpha\bar {\beta}\gamma \bar{\delta}}= s^0_{\gamma \bar {\beta}\alpha\bar{\delta}}= s^0_{\gamma \bar{\delta}\alpha\bar {\beta}},\ \overline {s^0_{\alpha\bar {\beta}\gamma \bar{\delta}}}=s^0_{\beta\bar{\alpha}\delta\bar{\gamma}}$ and \begin{equation} \sum_{\alpha, \beta=1}^n s^0_{\alpha\bar {\beta}\gamma \bar{\delta}}g_0^{\bar \beta \alpha}=0 \label{eqn:002-1} \end{equation} where $g_0^{\bar \beta \alpha}=0$ for $\beta\neq\alpha$, $g_0^{\bar\beta\beta}=1$ for $\beta>\ell, g_0^{\bar\beta\beta}=-1$ for $\beta\leq \ell$. Also $R(z,\-{z},u)=o_{wt}(|(z,u)|^4)\cap o(|(z,u)|^4)$. Moreover, we have $i\p r|_0=(F^{-1})^*\theta_p.$}} \bigskip {\it Proof of Proposition 2.1}: By what we discussed above, we can assume that $p=0$ and $M$ near $p=0$ is defined by an equation of the form as in (\ref{001}). We first show that we can get rid of all weighted third order degree terms. For this purpose, we choose a transformation of the form $f=z+f^{(2)}(z,w)$ and $g=w+g^{(3)}(z,w)$. Suppose that $F=(f_1,\cdots,f_n,g)=(f,g)$ maps $(M,p=0)$ to a hypersurface near $0$ defined by an equation of the form as in (\ref{001}) but without weighted degree $3$ terms in the right hand side. Substituting $F$ into the new equation and comparing terms of weighted degree three, we get $$\Im \left( g^{(3)}-2i<\-{z},f^{(2)}>_\ell\right)|_{w=u+i|z|_\ell}=G^{(3)}(z,\-{z},u)$$ where $G^{(3)}$ is a certain given real-valued polynomial of weighted degree $3$ in $(z,\-{z},u)$. Write $G^{(3)}(z,\-{z},u)=\Im\{a^{(1)}(z)w+\sum_{j=1}^{n}b^{(2)}_{j}(z)\-{z_j}\}.$ Choosing $g^{(3)}=a^{(1)}(z)w$ and $f_j^{(2)}=\frac{i}{2}b_j^{(2)}(z),$ it then does our job. Next, we choose a holomorphic transformation of the form $f=z+f^{(3)}(z,w)$ and $g=w+g^{(4)}(z,w)$ to simplify the weighted degree $4$ terms in the defining equation of $(M,p=0)$. Suppose that $M$ is originally defined by $$r=v-|z|^2_\ell+A^{(4)}(z,\-{z},u)+o_{wt}(4)=0 \label{001-n}$$ and is transformed to an equation of the form: $$r=v-|z|^2_\ell+N^{(4)}(z,\-{z},u)+o_{wt}(4)=0 \label{001-n}.$$ substituting the map $F$ and collecting terms of weighted degree $4$, we get the equation: $$\Im \left( g^{(4)}-2i<\-{z},f^{(3)}>_\ell\right)|_{w=u+i|z|_\ell}=N^{(4)}(z,\-{z},u)-A^{(4)}(z,\-{z},u).$$ Now, we like to make $N^{(4)}$ as simple as possible by choosing $F$. Write $$-A^{(4)}=\Im\{ b^{(4)}(z)+b^{(2)}(z)u+b^{(0)}u^2+\sum_{j=1}^{n}c_j^{(3)}(z)\-{z_j}+\sum_{|\a|=|\b|=2}\wt{c_{\a\-{\b}}}z^\a\-{z^\b}\}.$$ Let $$X^{(4)}(z,w)=b^{(4)}(z)+b^{(2)}(z)w+b^{(0)}w^2,\ -2i\delta_{j\ell}Y_j^{(3)}(z,w)=c_j^{(3)}(z)- ib^{(2)}(z)z_j-2ib^{(0)}z_jw,$$ $$ Y^{(3)}=(Y_1^{(3)},\cdots, Y_n^{(3)}),$$ where $\delta_{j\ell}$ is $1$ for $j>\ell$ and is $-1$ otherwise. 
Then $\Im \left( Y^{(4)}-2i<\-{z},X^{(3)}>_\ell\right)+A^{(4)}(z,\-{z},u)=-\Im({b^{(0)}})|z|_\ell^4+\sum_{|\a|=|\b|=2}d_{\a\-{\b}}z^\a\-{z^\b}.$ By the Fischer decomposition theorem ([SW]), we can write, in a unique way, $$-\Im({b^{(0)}})|z|_\ell^4+\sum_{|\a|=|\b|=2}d_{\a\-{\b}}z^\a\-{z^\b}=h^{(2)}(z,\-{z})|z|_{\ell}^2+h^{(4)}(z,\-{z}).$$ Here $h^{(2)}(z,\-{z})$ and $h^{(4)}(z,\-{z})$ are real-valued, bi-homogeneous in $(z,\overline{z})$ and $\Delta_\ell h^{(4)}(z,\-{z})=0$. Here, we write $\triangle_\ell=-\sum_{j\le \ell} \frac{\p^2}{\p z_j\p\bar z_j} +\sum_{j=\ell+1}^n \frac{\p^2}{\p z_j\p\bar z_j}$. Noticing that $h^{(2)}$ has no harmonic terms, we can find $Z^{(1)}(z)$ such that $ \Re(<\-{z}, Z^{(1)}(z)>_\ell)=0$ and $\Im(2<\-{z},Z^{(1)}>)=h^{(2)}(z,\-{z}).$ Finally, if we define $f=z+X^{(4)}(z,w)+Z^{(1)}(z)w$ and $g^{(4)}=w+Y^{(4)}$, then $(f,g)$ maps $(M,0)$ to a hypersurface with $R(z,\-{z},u)=o_{wt}(4)\cap O(|(z,u)|^3)$. Now suppose that the terms with non-weighted degree $3$ or $4$ in $R$ are uniquely written as $ub^{(3)}(z,\ov{z})+u^2\Im{(b^{(1)}(z))+b^{(0)}u^3+c^{(0)}u^4}$ with $b^{(3)}(z,\ov{z})=\Im{(c^{(3)}(z)+\sum_{|\a|=2,|\b|=1}d_{\a\ov{\b}}z^\a\ov z^\b)}.$ Then we need to make a further change of variables as follows to make $R=o_{wt}(4)\cap o(|(z,u)|^4)$ without changing $N^{(4)}(z,\-{z})$: $$w'=w+wc^{(3)}(z)+w^2b^{(1)}(z)+ib^{(0)}w^3+ic^{(0)}w^4,$$ $$z'_j=z_j+\delta_{j,\ell}wb^{(1)}(z)z_j+\frac{i}{2}\sum_{|\a|=2}wd_{\a,\overline{j}}z^\a+\delta_{j,\ell}\frac{3i}{2}w^2z_jb^{(0)}.$$ Now, the trace-free condition in (\ref{eqn:002-1}) is equivalent to the following condition: \begin{equation*} \triangle_\ell s(z,\bar z)\equiv 0. \end{equation*} Indeed, this follows from the following fact: Let $ \Delta_{H}=\sum_{l,k=1}^{n}h^{l\-{k}}\partial_l\-{\partial}_k$ with $\-{h^{l\-{k}}}=h^{k\-{l}}$ for any $l,k$. Then \begin{equation} \Delta_{H}s^0 (z,\-{z})=4\sum_{\gamma,\delta=1}^{n}\sum_{\a, \b=1}^{n}h^{\a\-{\b}}s^0_{\a\-{\b}\gamma\-{\delta}}z_\gamma\-{z_\delta}. \end{equation} This proves the proposition. $\endpf$ \bigskip We assume the notation and conclusion in Proposition 2.1. The Chern-Moser-Weyl tensor at $p$ associated with the appropriate 1-form $\theta_p$ is defined as the 4th order tensor $S_{\theta_p}$ acting over $T_p^{(1,0)}M\otimes T_p^{(0,1)}M\otimes T_p^{(1,0)}M\otimes T_p^{(0,1)}M$. More precisely, for each $X_p,Y_p,Z_p,W_p\in T_p^{(1,0)}M$, we have the following definition: Let $F$ be the biholomorphic map sending $M$ near $p$ to the normal form as in Proposition 2.1 with $F(p)=0$, and write $F_*(X_p)=\sum_{j=1}^{n}a^j\frac{\partial}{\partial z_j}|_0:=X_p^0$, $F_*(Y_p)=\sum_{j=1}^{n}b^j\frac{\partial}{\partial z_j}|_0:=Y_p^0,$ $F_*(Z_p)=\sum_{j=1}^{n}c^j\frac{\partial}{\partial z_j}|_0:=Z_p^0,$ and $F_*(W_p)=\sum_{j=1}^{n}d^j\frac{\partial}{\partial z_j}|_0:=W_p^0.$ Then \begin{equation} S_{\theta_p}(X_p,\-{Y_p},Z_p,\-{W_p}):= \sum_{\alpha,\beta,\gamma,\delta=1}^{n}s^0_{\alpha\-{\beta}\gamma\-{\delta}}a^\alpha\-{b^\beta}c^{\gamma} \-{d^\delta},\ \ \hbox{which is denoted by}~ S_{\theta_0}(X^0_p,\-{Y^0_p},Z^0_p,\-{W^0_p}). \label{eqn:002-02} \end{equation} Since the normalization map $F$ is not unique, we have to verify that the tensor $S_{\theta_p}$ is well-defined; namely, we need to show that it is independent of the choice of the normal coordinates. We do this in the next section. For the rest of this section, we assume this fact and derive some basic properties of the tensor.
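As a concrete illustration of the trace-free condition $\triangle_\ell s(z,\bar z)\equiv 0$ above, the following small symbolic computation (an illustrative sketch added here for the reader's convenience; it is not part of the original argument) verifies that, for $n=2$ and $\ell=0$, the quartic $s(z,\bar z)=|z_1|^2|z_2|^2-\frac{1}{4}\left(|z_1|^4+|z_2|^4\right)$ is annihilated by $\triangle_0=\sum_{j}\frac{\p^2}{\p z_j\p\bar z_j}$, treating $z_j$ and $\bar z_j$ as independent variables in the sense of Wirtinger calculus:
\begin{verbatim}
# Symbolic sanity check (illustrative only): for n = 2, ell = 0, the quartic
#   s(z, zbar) = |z1|^2 |z2|^2 - (1/4)(|z1|^4 + |z2|^4)
# satisfies Delta_0 s = 0, i.e. it is trace-free in the sense above.
import sympy as sp

z1, z2, z1b, z2b = sp.symbols('z1 z2 z1b z2b')  # zjb plays the role of conj(z_j)

s = z1*z1b*z2*z2b - sp.Rational(1, 4)*((z1*z1b)**2 + (z2*z2b)**2)

# Delta_ell with ell = 0 reduces to sum_j d^2/(dz_j d zbar_j)
delta_s = sp.diff(s, z1, z1b) + sp.diff(s, z2, z2b)

print(sp.simplify(delta_s))   # prints 0
\end{verbatim}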
For a basis $\{X_{\alpha}\}_{\alpha=1}^n$ of $ T_p^{(1,0)}M$ with $p\in M$, write $({ S_{{\theta}_p}})_{\alpha\bar {\beta}\gamma \bar{\delta}}= S_{\theta_p}(X_{\alpha}, \overline X_{\beta}, X_{\gamma}, \overline X_{\delta})$. From the definition, we then have the following symmetry properties: \begin{eqnarray*} &(S_{{\theta}_p})_{\alpha\bar {\beta}\gamma \bar{\delta}}=(S_{\theta_p})_{\gamma\bar {\beta}\alpha \bar{\delta}}=(S_{\theta_p})_{\gamma \bar{\delta}\alpha\bar {\beta}}\\ &\overline{(S_{\theta_p})_{\alpha \bar{\beta}\gamma \bar{\delta}}} =(S_{\theta_p})_{\beta\bar{\alpha} \delta\bar{\gamma}}, \end{eqnarray*} and the following trace-free condition: \begin{equation} \sum_{\beta,\alpha=1}^n g^{\bar \beta \alpha}(S_{\theta_p})_{\alpha\bar {\beta}\gamma \bar{\delta}}=0.\label{eqn:000-05} \end{equation} Here \begin{equation} g_{\alpha\bar\beta}=L_{\theta|_p}(X_\alpha,X_\beta):=-i<(d\theta)|_p, X_{\alpha}\wedge\overline X_\beta> \label{eqn:000-01} \end{equation} is the Levi form of $M$ associated with $\theta_p$, and $\theta$ is a smooth extension of $\theta_p$ as a proper contact form of $M$ near $p$. Also, $(g^{\bar \beta \alpha})$ is the inverse matrix of $(g_{\alpha\bar \beta}).$ In the following, we write $\wt{\theta}=(F^{-1})^*(\theta).$ To see the trace-free property in (\ref{eqn:000-05}), we write $F_*(X_\alpha)=\sum_{k=1}^{n}a_\alpha^k\frac{\partial}{\partial z_k}|_0.$ Then $g_{\alpha\bar\beta}=L_{\theta_p}(X_\alpha,X_\beta)=-i<(d\theta)|_p, X_{\alpha}\wedge\overline X_\beta>=-i<(dF^*(\wt{\theta}))|_p, X_{\alpha}\wedge\overline X_\beta>=-i<(i\-{\partial}{\partial}r)|_0, F_*(X_{\alpha})\wedge\overline {F_*(X_\beta)}>=(g_0)_{k\-{l}}a_\alpha^k\-{a_\beta^l}.$ Here $(g_0)_{k\-{l}}$ is defined as before. Write $G=(g_{\a\b}), G^0=(g_0)_{\a\b}, A=(a_k^l), B=A^{-1}:=(b_k^l)$. Then we have the matrix relation $G=AG^0\-{A}^t$. Thus $G^{-1}=(\-{A^t})^{-1}(G^0)^{-1}A^{-1},$ from which we have $g^{\gamma \-{\b}}=\-{b^{\b}_{l}}(g_0)^{j\-{l}}b_j^\gamma.$ Thus, $$g^{\a\-{\b}}S_{\a\-{\b}\gamma\-{\delta}}=\-{b^{\b}_{l}}(g_0)^{j\-{l}}b_j^\a s^0_{\wt{k}\-{\wt{j}}\wt{l}\-{\wt{m}}}a_\alpha^{\wt{k}}\-{a_\beta^{\wt{j}}}a_{\gamma}^{\wt{l}} \-{a_\delta^{\wt{m}}}=(g_0)^{j\-{l}}s^0_{j\-{l}\wt{l}\-{\wt{m}}}a_{\gamma}^{\wt{l}} \-{a_\delta^{\wt{m}}}=0.$$ We should mention that the above argument can also be easily adapted to show the biholomorphic invariance of appropriateness. Namely, if $F$ is a CR diffeomorphism between two Levi non-degenerate hypersurfaces $M$ and $\wt{M}$ of signature $\ell$ and $\wt{\theta_q}$ is an appropriate contact 1-form at $q\in \wt{M}$, then $F^*(\wt{\theta_q})$ is also an appropriate contact 1-form at $F^{-1}(q)\in M$. For smooth vector fields $X, Y, Z, W$ of type $(1,0)$ and an appropriate smooth contact form $\theta$ along $M$, ${\cal S}_{\theta}(X,\-{Y},Z,\-{W})$ is also a smooth function along $M$. One easy way to see this is to use the Webster-Chern-Moser-Weyl formula obtained in [We1] through the curvature tensor of the Webster pseudo-Hermitian metric, whose construction involves only algebraic operations and differentiations applied to the defining function of $M$. Another, more direct, way is to trace the dependence of the tensor on the base point under the above normalization procedure. \medskip Assume that $\ell>0$ and define \begin{equation*} \mathcal{C}_\ell=\{z\in\CC^n: |z|_\ell=0\}. \end{equation*} Then $\mathcal C_\ell$ is a real algebraic variety of real codimension $1$ in $\CC^n$ with its only singularity at $0$.
For each $p\in M$, write $\s C_\ell T_p^{(1,0)}M=\{v_p\in T_p^{(1,0)}M:\ <(d\theta)|_p, v_p\wedge{\bar v_p}>=0\}.$ Apparently, $\s C_lT_p^{(1,0)}M$ is independent of the choice of $\theta_p$. Let $F$ be a CR diffeomorphism from $M$ to $M'$. We also have $F_{*}(\s C_\ell T_p^{(1,0)}M)=C_\ell T_{F(p)}^{(1,0)}M'$. Write $\s C_\ell T^{(1,0)}M=\coprod_{p\in M} \s C_\ell T_p^{(1,0)}M$ with the natural projection $\pi$ to $M$. We say that $X$ is a smooth section of $\s C_\ell T^{(1,0)}M$ if $X$ is a smooth vector field of type $(1,0)$ along $M$ such that $X|_p\in \s C_\ell T_p^{(1,0)}M$ for each $p\in M$. $\s C_\ell T^{(1,0)}M$ is a kind of smooth bundle with each fiber isomorphic to $\s C_\ell$. $\s C_\ell$ is obviously a uniqueness set for holomorphic functions. The following lemma shows that it is also a uniqueness set for the Chern-Moser-Weyl curvature tensor. (For the proof, see Lemma 2.1 of [HZh].) \begin{proposition} (Huang-Zhang [HZh]) (I). Suppose that $H(z,\bar z)$ is a real real-analytic function in $(z, \bar z)$ near $0$. Assume that $\triangle_\ell H(z, \bar z)\equiv 0$ and $H(z, \bar z)|_{\s C_\ell}=0$. Then $H(z, \bar z)\equiv 0$ near $0$. (II). Assume the above notation and $\ell>0$. If $ S_{\theta_p}(X,\-X, X, \- X)=0$ for any $X\in \s C_\ell T^{(1,0)}_pM$, then $ S_{\theta|_p}\equiv 0.$ \end{proposition} \bigskip \section {Transformation law for the Chern-Moser-Weyl tensor} We next show that the Chern-Moser-Weyl tensor defined in the previous section is well-defined by proving a transformation law. We follow the approach and expositions developed in Huang-Zhang [HZh]. Let $\widetilde M \subset \CC^{N+1}=\{(z, w)\in \CC^n\times \CC\}$ be also a Levi non-degenerate real hypersurface near $0$ of signature $\ell\ge 0$ defined by an equation of the form: \begin{equation} \w{r}=\Im \w{w}-|\w{z}|^2_\ell+o(|\w{z}|^2+|\w{z}\w{u}|)=0. \label{002} \end{equation} Let $F:=(f_1, \ldots, f_n,\phi, g): M\rightarrow \widetilde M$ be a smooth CR diffeomorphism. Then, as in [Hu1] and \cite{BH}, we can write \begin{eqnarray}\label{eqn:022} \begin{aligned} \tilde z&=\tilde f(z, w)=(f_1(z, w), \ldots, f_n(z, w))=\lambda z U+\vec{a}w+ O(|(z, w)|^2)\\ \tilde w&=g(z, w)=\sigma\lambda^2w+O(|(z, w)|^2). \end{aligned} \end{eqnarray} Here $U\in SU(n, \ell)$. (Namely $<X U, Y\overline{ U}>_\ell=<X, Y>_\ell$ for any $X, Y\in \CC^n$). Moreover, $\ \vec{a}\in \CC^n,\ \lambda >0$ and $\sigma=\pm1$ with $\sigma= 1$ for $\ \ell<\frac{n}{2}$. When $\sigma=-1$, by considering $F\circ\tau_{n/2}$ instead of $F$, where $\tau_{\frac{n}{2}}(z_1,\ldots, z_{\frac{n}{2}},z_{\frac{n}{2}+1},\ldots, z_n, w)=(z_{\frac{n}{2}+1},\ldots, z_n,z_1,\ldots, z_{\frac{n}{2}},-w),$ we can make $\sigma=1$. Hence, we will assume in what follows that $\sigma=1$. Write $r_0=\frac{1}{2}\Re \{g^{''}_{ww}(0)\},\ q(\tilde z, \tilde w)=1+2i<\tilde z, \lambda^{-2}\overline{\vec a}>_\ell+\lambda^{-4}(r_0-i|\vec a|_\ell^2)\tilde w$, \begin{equation}\label{eqn:033} T(\tilde z, \tilde w)=\frac{(\lambda^{-1}(\tilde z- \lambda^{-2}\vec a \tilde w) U^{-1}, \lambda^{-2}\tilde w)}{q(\tilde z, \tilde w)}. \end{equation} Then \begin{equation} F^{\sharp}(z,w)=(\tilde f^{\sharp}, g^{\sharp})(z,w): =T\circ F(z, w)=(z, w)+O(|(z, w)|^2) \label{eqn:044} \end{equation} with $\Re \{g^{\sharp''}_{ww}(0)\}=0$. Assume that $\widetilde M$ is also defined in the Chern-Moser normal form up to the 4th order: \begin{equation} \tilde r=\Im \tilde w-|\tilde z|_\ell^2+\frac{1}{4}\tilde {s}(\tilde z, \bar{\tilde z})+o_{wt}(|(\tilde z,\wt{u})|^4)=0. 
\label{eqn:003} \end{equation} Then $M^\sharp= T(\widetilde M)$ is defined by \begin{equation} r^{\sharp}=\Im w^{\sharp}-|z^{\sharp}|_\ell^2+\frac{1}{4} s^{\sharp}(z^{\sharp}, \bar{z^{\sharp}})+o_{wt}(|(z^{\sharp},w^{\sharp})|^4)=0 \label{eqn:004} \end{equation} with $s^{\sharp}(z^{\sharp}, \bar{z^{\sharp}})= \lambda^{-2}\tilde {s}(\lambda z^{\sharp}U, \lambda \overline{z^{\sharp} U})$. One can verify that \begin{equation} (-\sum_{j=1}^\ell \frac{\p^2}{\p z_j^{\sharp}\p\bar z_j^{\sharp}} +\sum_{j=\ell+1}^N \frac{\p^2}{\p z_j^{\sharp}\p\bar z_j^{\sharp}}) s^{\sharp}(z^{\sharp}, \overline {z^{\sharp}})=0. \label{eqn:005} \end{equation} Therefore (\ref{eqn:004}) is also in the Chern-Moser normal form up to the 4th order. Write $F^{\sharp}(z, w)=\sum_{k=1}^{\infty}F^{\sharp (k)}(z, w)$. Since $F^{\sharp}$ maps $M$ into $M^{\sharp}=T(\widetilde M)$, we get the following \begin{eqnarray}\label{22} \begin{aligned} \Im\{\sum_{k\geq2}g^{\sharp(k+1)}(z, w)-2i\sum_{k\geq2}<f^{\sharp(k)}(z, w), \bar z>_\ell\}&\\ =\sum_{k_1,\ k_2\geq2}<f^{\sharp(k_1)}(z, w), \overline{ f^{\sharp(k_2)}(z, w)}>_\ell&+\frac{1}{4}( s(z, \bar z)- s^{\sharp}(z,\overline{z}))+o_{wt}(4) \end{aligned} \end{eqnarray} over $\Im w =|z|_\ell^2$. Here, we write $F^{\sharp}(z, w)=(f^{\sharp}(z,w), g^{\sharp}(z, w))$.\\ Collecting terms of weighted degree 3 in (\ref{22}), we get \begin{equation*} \Im\{g^{\sharp(3)}(z, w)-2i<f^{\sharp(2)}(z, w), \bar z>_\ell\}=0\ \ \text{on} \ \ \Im w =|z|_\ell^2. \end{equation*} By \cite{Hu}, we get $g^{\sharp(3)}\equiv 0, f^{\sharp(2)}\equiv 0$.\\ Collecting terms of weighted degree 4 in (\ref{22}), we get \begin{equation*} \Im \{g^{\sharp (4)}(z,w)-2i<f^{\sharp(3)}(z, w), \bar z>_\ell\}= \frac{1}{4}( s(z, \bar z)- s^{\sharp}(z, \overline{z})). \end{equation*} Similar to the argument in \cite{Hu} and making use of the fact that $\Re\{\frac{\p ^2 g^{\sharp(4)}}{\p w^2}(0)\}=0$, we get the following: \begin{eqnarray}\label{33} \begin{aligned} &g^{\sharp(4)}\equiv 0,\ f^{\sharp(3)}(z,w)=\frac{i}{2}a^{(1)}(z)w, \\ <a^{(1)}(z), \bar z>_\ell|z|_\ell^2=&\frac{1}{4}( s(z, \bar z) -s^{\sharp}(z, \overline {z}))=\frac{1}{4}( s(z, \bar z) -\lambda^{-2}\widetilde {s}(\lambda z U, \overline{\lambda z U})). \end{aligned} \end{eqnarray} Since the right hand side of the above equation is annihilated by $\Delta_\ell$ and the left hand side of the above equation is divisible by $|z|^2_\ell$. We conclude that $f^{\sharp(3)}(z,w)=0$ and \begin{equation} \label{eqn:33-001} s(z, \bar z)=\lambda^{-2}\widetilde {s}(\lambda z U, \overline{\lambda z U}). \end{equation} Write $\theta_0=i\partial r|_0$ and $\wt{\theta_0}=i\partial \wt{r}|_0$. Then $F^*(\wt{\theta_0})=\lambda^2\theta_0.$ For any $X=\sum_{j=1}^{n} z_j\frac{\partial}{\partial z_j}|_0,$ $F_*(X)=\lambda (z_1\frac{\partial}{\partial \wt{z_1}}|_0,\cdots, z_n\frac{\partial}{\partial \wt{z_n}}|_0) U.$ Under this notation, (\ref{eqn:33-01}) can be written as $$S^0_{F^*(\wt{\theta_0})}(X,\-{X},X,\-{X})=S^0_{\wt{\theta_0}}(F_*(X),\-{F_*(X)},F_*(X),\-{F_*(X)}).$$ This immediately gives the following transformation law and thus the following theorem, too. \begin{equation} \label{eqn:33-01} S^0_{F^*(\wt{\theta_0})}(X,\-{Y},Z,\-{W})=S^0_{\wt{\theta_0}}(F_*(X),\-{F_*(Y)},F_*(Z),\-{F_*(W)}), \hbox{ for } X, Y, Z, W\in T_0^{(1,0)}M. \end{equation} \bigskip {\theorem {\it (1). The Chern-Moser-Weyl tensor defined in the previous section is independent of the choice of the normal coordinates and thus is a well-defined fourth order tensor. (2). 
Let $F$ be a CR diffeomorphism between two Levi non-degenerate hypersurfaces $M, \wt{M}\subset {\CC}^{n+1}$. Suppose $F(p)=q$. Then, for any appropriate contact 1-form $\wt{\theta_q}$ of $\wt{M}$ at $q$ and vectors $v_1,v_2,v_3,v_4\in T_p^{(1,0)}M,$ we have the following transformation formula for the corresponding Chern-Moser-Weyl tensor: \begin{equation} \tilde { S}_{\tilde \theta_q}(F_*(v_1), \overline {F_*(v_2)}, F_*(v_3), \overline {F_*(v_4)})= S_{F^*(\tilde{\theta}_q)}(v_1, \-{ v_2}, v_3, \-{ v_4}). \label{eqn:303} \end{equation} }} \bigskip {\it Proof}: Let $\theta_p$ be an appropriate contact form of $M$ at $p$, and let $F_1, F_2$ be two normalizations (up to fourth order) of $M$ at $p$. Suppose that $F_1(M)$ and $F_2(M)$ are defined near $0$ by equations $r_1=0$ and $r_2=0$ as in (\ref{001}), respectively. Write $\Phi=F_2\circ F_1^{-1}$ and $\theta^1_0=i\partial r_1$, $\theta^2_0=i\partial r_2$. We also assume that $F_1^*(\theta^1_0)=\theta_p$ and $F_2^*(\theta^2_0)=\theta_p$. Then for any $X_p, Y_p,Z_p,W_p\in T_p^{(1,0)}M$, we have $$S^1_{\theta_p}(X_p, \-{Y_p},Z_p,\-{W_p})=S^1_{\theta^1_0}((F_1)_{*}(X_p), \-{(F_1)_{*}(Y_p)}, (F_1)_{*}(Z_p), \-{(F_1)_{*}(W_p)})$$ if we define the tensor at $p$ by applying $F_1$. We also have $$S^2_{\theta_p}(X_p, \-{Y_p},Z_p,\-{W_p})=S^2_{\theta^2_0}((F_2)_{*}(X_p), \-{(F_2)_{*}(Y_p)}, (F_2)_{*}(Z_p), \-{(F_2)_{*}(W_p)}),$$ if we define the tensor at $p$ by applying $F_2$. Since $\theta^2_0=\Phi^*(\theta_0^1)$ and $\Phi_*((F_1)_{*}(X_p))=(F_2)_*(X_p)$, the transformation law obtained in (\ref{eqn:33-01}) yields the proof of Part (1) of the theorem. The proof of Part (2) of the theorem also follows easily from the formula in (\ref{eqn:33-01}). \section {A monotonicity theorem for the Chern-Moser-Weyl tensor} We now let $M_\ell\subset {{\mathbb C}^{n+1}} $ be a Levi non-degenerate hypersurface with signature $\ell>0$ defined in the normal form as in (\ref{eqn:002}). Let $F=(f_1,\cdots,f_N,g)$ be a CR-transversal CR embedding from $M_\ell$ into ${\mathbb H}^{N+1}_\ell$ with $N\ge n$. Then again, as in Section 3, a simple linear algebra argument ([HZh]) shows that after a holomorphic change of variables, we can make $F$ into the following preliminary normal form: \begin{eqnarray}\label{eqn:022} \begin{aligned} \tilde z&=\tilde f(z, w)=(f_1(z, w), \ldots, f_N(z, w))=\lambda z U+\vec{a}w+ O(|(z, w)|^2)\\ \tilde w&=g(z, w)=\sigma\lambda^2w+O(|(z, w)|^2). \end{aligned} \end{eqnarray} Here $U$ can be extended to an $N\times N$ matrix $\widetilde U\in SU(N, \ell)$. Moreover, $\ \vec{a}\in \CC^N,\ \lambda >0$ and $\sigma=\pm1$ with $\sigma= 1$ for $\ \ell<\frac{n}{2}$. When $\sigma=-1$, as discussed before, by considering $F\circ\tau_{n/2}$ instead of $F$, where $\tau_{\frac{n}{2}}(z_1,\ldots, z_{\frac{n}{2}},z_{\frac{n}{2}+1},\ldots, z_n, w)=(z_{\frac{n}{2}+1},\ldots, z_n,z_1,\ldots, z_{\frac{n}{2}},-w),$ we can make $\sigma=1$. Hence, we will assume that $\sigma=1$. Write $r_0=\frac{1}{2}\Re \{g^{''}_{ww}(0)\},\ q(\tilde z, \tilde w)=1+2i<\tilde z, \lambda^{-2}\overline{\vec a}>_\ell+\lambda^{-4}(r_0-i|\vec a|_\ell^2)\tilde w$, \begin{equation}\label{eqn:033} T(\tilde z, \tilde w)=\frac{(\lambda^{-1}(\tilde z- \lambda^{-2}\vec a \tilde w)\widetilde U^{-1}, \lambda^{-2}\tilde w)}{q(\tilde z, \tilde w)}. \end{equation} Then \begin{equation} F^{\sharp}(z,w)=(\tilde f^{\sharp}, g^{\sharp})(z,w): =T\circ F(z, w)=(z, 0, w)+O(|(z, w)|^2) \label{eqn:044} \end{equation} with $\Re \{g^{\sharp''}_{ww}(0)\}=0$. Now, $T({\mathbb H}^{N+1}_\ell)={\mathbb H}^{N+1}_\ell$.
With the same argument as in the previous section, we also arrive at the following: \begin{eqnarray}\label{33} \begin{aligned} g^{\sharp(3)}=g^{\sharp(4)}\equiv 0,\ f^{\sharp(3)}(z,w)=\frac{i}{2}a^{(1)}(z)w, & \\ <a^{(1)}(z), \bar z>_\ell|z|_\ell^2=|\phi^{\sharp(2)}(z)|^2+&\frac{1}{4} s(z, \bar z). \end{aligned} \end{eqnarray} In the above equation, if we let $z$ be such that $|z|_\ell=0$, we see that $s(z, \overline z)\le 0$. Now, if $F$ is not CR transversal but is not totally degenerate, in the sense that $F$ does not map an open subset of ${\mathbb C}^{n+1}$ into ${\mathbb H}^{N+1}_\ell$ (see [HZh]), then one can apply this result on a dense open subset of $M$ [BER] where $F$ is CR transversal and then take a limit as was done in [HZh]. Then we have the following special case of the monotonicity theorem for the Chern-Moser-Weyl tensor obtained in Huang-Zhang [HZh]: \begin{theorem} ([HZh]) \label {thm111} Let $M_\ell\subset \CC^{n+1}$ be a Levi non-degenerate real hypersurface of signature $\ell$. Suppose that $F$ is a holomorphic mapping defined in a (connected) open neighborhood $U$ of $M_\ell$ in ${\mathbf C}^{n+1}$ that sends $M_\ell$ into ${\mathbf H}_\ell ^{N+1}\subset \CC^{N+1}$. Assume that $F(U)\not \subset {\mathbf H}_\ell^{N+1}$. Then when $\ell<\frac{n}{2}$, the Chern-Moser-Weyl curvature tensor with respect to any appropriate contact form $\theta$ is pseudo semi-negative in the sense that for any $p\in M_\ell$, the following holds: \begin{equation} \s{S}_{ \theta|_{p}}(v_p, \overline{v_p}, v_p, \overline {v_p})\le 0,\ \ \hbox{for}\ v_p\in {\s C}_\ell T_p^{(1,0)}M_\ell. \end{equation} When $\ell=\frac{n}{2}$, along a certain contact form $\theta$, $\s S_{\theta}$ is pseudo negative. \end{theorem} \section {Counter-examples to the embeddability problem for compact algebraic Levi non-degenerate hypersurfaces with positive signature into hyperquadrics} In this section, we apply Theorem \ref{thm111} to construct a compact Levi-nondegenerate hypersurface in a projective space, no open piece of which can be holomorphically embedded into a hyperquadric of any dimension with the same signature. This section is based on the work in the last section of Huang-Zaitsev [HZa]. Let $n,\ell$ be two integers with $1<\ell\le n/2$. For any $\epsilon$, define $${M_\epsilon}:= \left\{[z_0,\cdots,z_{n+1}]\in {\PP}^{n+1}: |z|^2 \left(-\sum_{j=0}^{\ell} |z_j|^2 + \sum_{j=\ell+1}^{n+1}|z_j|^2\right) +\epsilon\left(|z_1|^4-|z_{n+1}|^4\right) =0 \right\}.$$ Here $|z|^2=\sum_{j=0}^{n+1}|z_j|^2$ as usual. For $\epsilon =0$, ${M_\epsilon}$ reduces to the generalized sphere with signature $\ell$, which is the boundary of the generalized ball $${\BB}^{n+1}_\ell:= \left\{[z_0,\cdots,z_{n+1}]\in {\PP}^{n+1}: - \sum_{j=0}^{\ell}|z_j|^2 + \sum_{j=\ell+1}^{n+1}|z_j|^2 <0 \right\}.$$ The boundary $\p{{\BB}_\ell^{n+1}}$ is locally holomorphically equivalent to the hyperquadric $ {\mathbb H}^{n+1}_\ell\subset {\CC}^{n+1}$ of signature $\ell$ defined by $\Im{z_{n+1}}=-\sum_{j=1}^{\ell}|z_j|^2+ \sum_{j=\ell+1}^{n}|z_j|^2,$ where $(z_1,\cdots, z_{n+1})$ are the coordinates of ${\CC}^{n+1}$. For $0<\epsilon\ll 1$, ${M_\epsilon}$ is a compact smooth real-algebraic hypersurface with non-degenerate Levi form of the same signature $\ell$. \begin{theorem}\label{thm222} ([HZa]) There is an $\epsilon_0>0$ such that for $0<\epsilon<\epsilon_0$, the following holds: (i) ${M_\epsilon}$ is a smooth real-algebraic hypersurface in ${\PP}^{n+1}$ with non-degenerate Levi form of signature $\ell$ at every point.
(ii) There does not exist any holomorphic embedding from any open piece of ${M_\epsilon}$ into $ {\mathbb H}_\ell^{N+1}$. \end{theorem} When $0<\epsilon\ll 1$, since ${M_\epsilon}$ is a small algebraic deformation of the generalized sphere, we see that ${M_\epsilon}$ must also be a compact real-algebraic Levi non-degenerate hypersurface in ${\PP}^{n+1}$ with signature $\ell$, diffeomorphic to the generalized sphere, which is the boundary of the generalized ball ${\BB}^{n+1}_\ell\subset {\PP}^{n+1}$. \medskip {\it Proof of Theorem \ref{thm222}}: The proof uses the following algebraicity theorem of the first author: \begin{theorem}[Hu2, Corollary in $\S 2.3.5$] \label{thm333} Let $M_1\subset {\CC }^n$ and $M_2\subset {\CC}^N$ with $N\ge n\ge 2$ be two Levi non-degenerate real-algebraic hypersurfaces. Let $p\in M_1$ and $U_p$ be a small connected open neighborhood of $p$ in ${\mathbf C}^n$ and $F$ be a holomorphic map from $U_p$ into ${\mathbf C}^N$ such that $F(U_p\cap M_1)\subset M_2$ and $F(U_p)\not \subset M_2$. Suppose that $M_1$ and $M_2$ have the same signature $\ell$ at $p$ and $F(p)$, respectively. Then $F$ is algebraic in the sense that each component of $F$ satisfies a nontrivial holomorphic polynomial equation. \end{theorem} Next, we compute the Chern-Moser-Weyl tensor of $M_\epsilon$ at the point $$P_0:=[\xi^0_0,\cdots,\xi^0_{n+1}], \quad \xi^0_j=0 \text{ for } j\not = 0,\ell+1, \quad \xi^0_0=1,\quad \xi_{\ell +1}^0=1,$$ and consider the coordinates $$\xi_0=1,\quad \xi_j=\frac{\eta_j}{1+\sigma}, \quad j=1,\cdots, \ell, \quad \xi_{\ell+1}=\frac{1-\sigma}{1+\sigma}, \quad \xi_{j+1}=\frac{\eta_j}{1+\sigma},\quad j=\ell+1 ,\cdots, n.$$ Then in the $(\eta,\sigma)$-coordinates, $P_0$ becomes the origin and $M_\epsilon$ is defined near the origin by an equation of the form: \begin{equation}\label{26} \rho=-4\Re{\sigma}-\sum_{j=1}^{\ell}|\eta_j|^2+\sum_{j=\ell+1}^{n}|\eta_j|^2 +{a}(|\eta_1|^4-|\eta_{n}|^4)+o(|\eta|^4)=0, \end{equation} for some $a>0$. Now, let $Q(\eta,\-\eta)=-a(|\eta_1|^4-|\eta_n|^4)$ and make a standard $\ell$-harmonic decomposition [SW]: \begin{equation}\label {33} Q(\eta,\-\eta)=N^{(2,2)}(\eta,\-\eta)+ A^{(1,1)}(\eta,\-\eta)|\eta|^2_{\ell}. \end{equation} Here $N^{(2,2)}(\eta,\-\eta)$ is a $(2,2)$-homogeneous polynomial in $(\eta,\-{\eta})$ such that $\Delta_\ell N^{(2,2)}(\eta,\-\eta)=0$ with $\Delta_\ell$ as before. Now $N^{(2,2)}$ is the Chern-Moser-Weyl tensor of $M_\epsilon$ at $0$ (with respect to an obvious contact form) with $N^{(2,2)}(\eta,\-\eta)=Q(\eta,\-\eta)$ for any $\eta\in{\mathcal C}_\ell T^{(1,0)}_0 M_\epsilon$. Now the Chern-Moser-Weyl tensor takes a negative value at $X_1=\frac{\p }{\p \eta_1}+\frac{\p }{\p \eta_{\ell+1}}|_0$ and a positive value at $X_2=\frac{\p }{\p \eta_{2}}+\frac{\p }{\p \eta_{n}}|_0$. If $\ell>1$, then both $X_1$ and $X_2$ are in ${\mathcal C}_\ell T^{(1,0)}_0 M_\epsilon$. We see that the Chern-Moser-Weyl tensor cannot be pseudo semi-definite near the origin in such a coordinate system. Next, suppose an open piece $U$ of ${M_\epsilon}$ can be holomorphically and transversally embedded into ${\mathbf H}_\ell^{N+1}$ for $N>n$ by $F$. Then by the algebraicity result in Theorem \ref{thm333}, $F$ is algebraic.
Since the branching points of $F$ and the points where $F$ is not defined (poles or points of indeterminacy of $F$) are contained in a complex-algebraic variety of codimension at most one, $F$ extends holomorphically along a smooth curve $\gamma$ starting from some point in $U$ and ending up at some point $p^* (\approx 0)\in M_\epsilon$ in the $(\eta,\sigma)$-space where the Chern-Moser-Weyl tensor of $M_\epsilon$ is not pseudo-semi-definite. By the uniqueness of real-analytic functions, the extension of $F$ must also map an open piece of $M_\epsilon$ near $p^*$ into $ {\bf H}^{N+1}_\ell$. The extension is not totally degenerate. By Theorem \ref{thm111}, we get a contradiction. $\endpf$ \bigskip \section{Non-embeddability of compact strongly pseudo-convex real algebraic hypersurfaces into spheres} As discussed in the previous sections, spheres serve as the model of strongly pseudoconvex real hypersurfaces where the Chern-Moser-Weyl tensor vanishes. An immediate application of the invariance property of the Chern-Moser-Weyl tensor is that only very few strongly pseudoconvex real hypersurfaces can be biholomorphically mapped to a unit sphere. Motivated by various embedding theorems in geometry (the Nash and Remmert embedding theorems, etc.), a natural question to pursue in Several Complex Variables is to determine when a real hypersurface in $\mathbb{C}^n$ can be holomorphically embedded into the unit sphere $\mathbb{S}^{2N-1}=\{Z \in \mathbb{C}^N: ||Z||^2=1\}.$ By a holomorphic embedding of $M\subset {\mathbb C}^n$ into $M' \subset {\mathbb C}^N$, we mean a holomorphic embedding of an open neighborhood $U$ of $M$ into a neighborhood $U'$ of $M'$, sending $M$ into $M'$. We also say $M$ is locally holomorphically embeddable into $M'$ at $p \in M$ if there is a neighborhood $V$ of $p$ and a holomorphic embedding $F: V \rightarrow \mathbb{C}^{N}$ sending $M \cap V$ into $M'$. A real hypersurface holomorphically embeddable into a sphere is necessarily strongly pseudoconvex and real-analytic. However, due to results by Forstneri\'c [For1] (see the recent work [For2] for further results) and Faran [Fa], not every strongly pseudoconvex real-analytic hypersurface can be embedded into a sphere. Explicit examples of non-embeddable strongly pseudoconvex real-analytic hypersurfaces were constructed much later in [Za1]. Despite a vast literature devoted to the embeddability problem, the following has remained a long-standing open question. Here we recall that a smooth real hypersurface in an open subset $U$ of $\mathbb{C}^n$ is called real-algebraic if it has a real-valued polynomial defining function. \begin{question} \label{question} Is every compact real-algebraic strongly pseudoconvex real hypersurface in $\mathbb{C}^n$ holomorphically embeddable into a sphere of sufficiently large dimension? \end{question} Part of the motivation to study this embeddability problem is a well-known result due to Webster [We2], which states that every real-algebraic Levi-nondegenerate hypersurface admits a transversal holomorphic embedding into a non-degenerate hyperquadric in a complex space of sufficiently large dimension. (See also [KX] for further study along this line.) Notice that in [HZa], the authors showed that there are many compact real-algebraic pseudoconvex real hypersurfaces with just one weakly pseudoconvex point satisfying the following property: any open piece of them cannot be holomorphically embedded into any compact real-algebraic strongly pseudoconvex hypersurface, a class which, in particular, includes spheres.
Many other related results can be found in the work of Ebenfelt-Son [ES], Fornaess [Forn], etc. In [HLX], the authors constructed the following family of compact real-algebraic strongly pseudoconvex real hypersurfaces: \begin{equation} \label{equhym} M_{\epsilon}=\{(z,w)\in\mathbb{C}^2: \varepsilon_0(|z|^8+c\mathrm{Re}|z|^2z^6)+|w|^2+ |z|^{10}+ {\epsilon} |z|^2-1=0\},~~ 0 < \epsilon < 1. \end{equation} Here, $2<c<\frac{16}{7}$ and $\varepsilon_0 >0$ is a sufficiently small number such that $M_\epsilon$ is smooth for all $0 \leq \epsilon <1$. An easy computation shows that for any $0 < \epsilon < 1$, $M_{\epsilon}$ is strongly pseudoconvex. $M_{\epsilon}$ is indeed a small algebraic deformation of the boundary of the famous Kohn-Nirenberg domain [KN]. It is shown in [HLX] that for any integer $N,$ there exists a small number $0 < \epsilon(N) < 1$ such that for any $0 < \epsilon < \epsilon(N)$, $M_{\epsilon}$ cannot be locally holomorphically embedded into the unit sphere $\mathbb{S}^{2N-1}$ in $\mathbb{C}^N$. More precisely, any holomorphic map sending an open piece of $M_{\epsilon}$ to $\mathbb{S}^{2N-1}$ must be a constant map. We will write $$\rho_{\epsilon}=\rho_{\epsilon}(z, w, \overline{z}, \overline{w}):=\varepsilon_0(|z|^8+c\mathrm{Re}|z|^2z^6)+|w|^2+ |z|^{10}+ {\epsilon} |z|^2-1.$$ We first fix some notation. Let $M \subset \mathbb{C}^{n}$ be a real-algebraic subset defined by a family of real-valued polynomials $\{\rho_{\alpha}(Z,\overline{Z})=0\},$ where $Z$ denotes the coordinates of $\mathbb{C}^{n}.$ Then the complexification $\mathcal{M}$ of $M$ is the complex-algebraic subset in $\mathbb{C}^n \times \mathbb{C}^n$ defined by $\rho_{\alpha}(Z,W)=0$ for each $\alpha$, where $(Z, W) \in \mathbb{C}^n \times \mathbb{C}^n.$ Then for $p \in \mathbb{C}^{n},$ the Segre variety of $M$ associated with the point $p$ is defined by $Q_{p}:=\{Z \in \mathbb{C}^n:(Z,\overline{p}) \in \mathcal{M}\}.$ The geometry of Segre varieties of a real-analytic hypersurface has been used in many works since the work of Segre [S] and Webster [We]. In this note, fundamentally based on our previous joint work with Li [HLX], we show that $M_{\epsilon}$ cannot be locally holomorphically embedded into any unit sphere. The other important observation we need is the fact that for some $p\in M_\epsilon$, the associated Segre variety $Q_p$ cuts $M_\epsilon$ along a one-dimensional real-analytic subvariety inside $M_\epsilon$. The geometry related to the intersection of the Segre variety with the boundary plays an important role in the study of many problems in Several Complex Variables. (We mention, in particular, the work of D'Angelo-Putinar [DP] and Huang-Zaitsev [HZa].) This then provides a counter-example to a long-standing open question---Question \ref{question}. (See [HZa] for more discussion of this matter.) \begin{theorem}\label{t0} There exist compact real-algebraic strongly pseudoconvex real hypersurfaces in $\mathbb{C}^2$, diffeomorphic to the sphere, that are not locally holomorphically embeddable into any sphere. In particular, for sufficiently small positive $\varepsilon_0, \epsilon$, $M_{\epsilon}$ cannot be locally holomorphically embedded into any sphere. More precisely, a local holomorphic map sending an open piece of $M_{\epsilon}$ to a unit sphere must be a constant map. \end{theorem} Write $D_{\epsilon}= \{ \rho_{\epsilon} < 0 \}$ for the interior domain enclosed by $M_{\epsilon}.$ Since $M_{\epsilon}$ is a small smooth deformation of $\{ |z|^{10} + |w|^2=1 \}$ for sufficiently small $\varepsilon_0$ and $\epsilon$,
it is diffeomorphic to the unit sphere $\mathbb{S}^3$. Consequently, $M_{\epsilon}$ separates ${\CC}^2$ into two connected components $D_\epsilon$ and ${\CC }^2\sm \overline{D_\epsilon}$. \begin{proposition}\label{prop1e} Let $p_0=(0,1) \in M_{\epsilon}.$ Let $Q_{p_0}$ be the Segre variety of $M_{\epsilon}$ associated to $p_0.$ There exists $\widetilde{\epsilon}>0$ such that for each $0 < \epsilon < \widetilde{\epsilon}$, $Q_{p_0} \cap M_{\epsilon}$ is a real-analytic subvariety of dimension one. \end{proposition} {\it Proof of Proposition \ref{prop1e}}: It suffices to show that there exists $q \in Q_{p_{0}}$ such that $q \in D_{\epsilon}.$ Note that $ Q_{p_0}=\{(z, w): w=1 \}$. Set $$\psi(z, \epsilon)=\varepsilon_0(|z|^8 + c\mathrm{Re} |z|^2 z^6)+|z|^{10}+ \epsilon |z|^2,~0\leq \epsilon < 1.$$ Note $q=(\mu_0, 1) \in D_{\epsilon}$ if and only if $\psi(\mu_0, \epsilon)< 0.$ Now, set $\phi(\lambda, \epsilon)=\varepsilon_0 \lambda^8 (1 -c)+ \lambda^{10}+ \epsilon \lambda^2, 0 \leq \epsilon < 1.$ First we note that there exists a small $\lambda'> 0$ such that $\phi(\lambda', 0)< 0$. Consequently, we can find $\widetilde{\epsilon} >0$ such that for each $0 < \epsilon \leq \widetilde{\epsilon}$, $\phi(\lambda', \epsilon) <0.$ Write $\mu_0=\lambda' e^{i\frac{\pi}{6}}.$ It is easy to see that $\psi(\mu_0, \epsilon)< 0$ if $0 < \epsilon \leq \widetilde{\epsilon}$. This establishes Proposition \ref{prop1e}. $\endpf$ \bigskip \bigskip \begin{proposition}\label{prpt01} Let $M:= \{Z \in \mathbb{C}^n: \rho(Z, \overline{Z})=0\},n \geq 2, $ be a compact, connected, strongly pseudo-convex real-algebraic hypersurface. Assume that there exists a point $p \in M$ such that the associated Segre variety $Q_{p}$ of $M$ is irreducible and $Q_{p}$ intersects $M$ at infinitely many points. Let $F$ be a holomorphic rational map sending an open piece of $M$ to the unit sphere $\mathbb{S}^{2N-1}$ in some $\mathbb{C}^N.$ Then $F$ is a constant map. \end{proposition} {\it Proof of Proposition \ref{prpt01}}: Let $D$ be the interior domain enclosed by $M.$ From the assumption and a theorem of Chiappari [Ch], we know $F$ is holomorphic in a neighborhood $U$ of $\overline{D}$ and sends $M$ to $\mathbb{S}^{2N-1}.$ Consequently, if we write $\mathcal{S}$ for the singular set of $F$, then $\mathcal{S}$ does not intersect $U$. Write $Q'_q$ for the Segre variety of $\mathbb{S}^{2N-1}$ associated to $q \in \mathbb{C}^N$. We first conclude by complexification that for a small neighborhood $V$ of $p,$ \begin{equation}\label{eqnpq} F(Q_p \cap V) \subset Q'_{F(p)}. \end{equation} Note that $\mathcal{S} \cap Q_p$ is a Zariski closed proper subset of $Q_p$. Notice that $Q_p$ is connected as it is irreducible. We conclude by unique continuation that if $\widetilde{p} \in Q_p$ and $F$ is holomorphic at $\widetilde{p}$, then $F(\widetilde{p}) \in Q'_{F(p)}$. In particular, if $\widetilde{p} \in Q_p \cap M,$ then $F(\widetilde{p}) \in Q'_{F(p)} \cap \mathbb{S}^{2N-1}=\{ F(p) \}.$ That is, $F(\widetilde{p})=F(p).$ \smallskip Notice by assumption that $Q_{p} \cap M$ is a compact set and contains infinitely many points. Let $\hat{p}$ be an accumulation point of $Q_{p} \cap M.$ Clearly, by what we argued above, $F$ is not one-to-one in any neighborhood of $\hat{p}.$ This shows that $F$ is constant. Indeed, suppose $F$ is not a constant map.
We then conclude that $F$ is a holomorphic embedding near $\hat{p}$ by a standard Hopf lemma type argument (see [Hu2], for instance), since both $M$ and ${\mathbb S}^{2N-1}$ are strongly pseudo-convex. This completes the proof of Proposition \ref{prpt01}. $\endpf$ \medskip {\it Proof of Theorem \ref{t0}:} Pick $p_{0}=(0,1) \in M_{\epsilon}.$ Notice that the associated Segre variety $Q_{p_0}=\{(z,1): z \in \mathbb{C} \}$ is an irreducible complex variety in $\mathbb{C}^2$. Let $\epsilon, \varepsilon_0$ be sufficiently small such that Proposition \ref{prop1e} holds. Now, let $F$ be a holomorphic map defined in a small neighborhood $U$ of some point $q \in M_{\epsilon}$ that sends an open piece of $M_{\epsilon}$ into $\mathbb{S}^{2N-1}, N \in \mathbb{N}$. It is shown in [HLX] that $F$ is a rational map. Then it follows from Proposition \ref{prpt01} that $F$ is a constant map. We have thus established Theorem \ref{t0}. $\endpf$
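As a quick numerical sanity check of the proof of Proposition \ref{prop1e}, recall that for $\mu_0=\lambda' e^{i\pi/6}$ one has $\mathrm{Re}(|\mu_0|^2\mu_0^6)=-\lambda'^8$, so that $\psi(\mu_0,\epsilon)=\phi(\lambda',\epsilon)$. The short script below (an illustrative sketch; the values $\varepsilon_0=0.01$, $c=2.2$ and $\lambda'=0.1$ are assumptions chosen only for illustration, not the constants required in [HLX]) confirms that $\phi(\lambda',\epsilon)<0$ for all sufficiently small $\epsilon>0$:
\begin{verbatim}
# Illustrative check that phi(lam, eps) = eps0*lam^8*(1 - c) + lam^10 + eps*lam^2
# is negative for a small fixed lam' and all sufficiently small eps > 0.
# The constants eps0, c, lam_p are assumed illustrative values, not from [HLX].

def phi(lam, eps, eps0=0.01, c=2.2):
    return eps0 * lam**8 * (1.0 - c) + lam**10 + eps * lam**2

lam_p = 0.1
print(phi(lam_p, 0.0))                # negative, since lam_p^2 < eps0*(c - 1)
for eps in (1e-12, 1e-11, 1e-10):
    print(eps, phi(lam_p, eps) < 0)   # stays negative for eps below ~2e-9
\end{verbatim}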
\section{Introduction} \label{sec:Intro} Dark matter, which constitutes $\sim 26\%$ of the total energy density \citep{Planck2015}, plays an important role in the structure formation of the Universe. The conventional cold dark matter (CDM) model has been successful at explaining observations on large scales, such as the cosmic microwave background (CMB) and the large-scale galaxy structure resulting from primordial fluctuations \citep{Tegmark2006, Cole2005, Frenk&White2012, Bennett2013}. However, several discrepancies between observations and dissipationless CDM simulations on the galactic scale still persist. There are three major problems. (i) ``missing satellite problem'': CDM simulations predict an order of magnitude more subhalos than observed satellite galaxies in the Milky Way \citep{Klypin1999, Moore1999}. (ii) ``too-big-to-fail problem'': the average central densities of the most massive CDM subhalos are significantly higher than those of the most luminous dwarf spheroidal (dSph) galaxies derived from their kinematic data \citep{Boylan2012, Boylan2011, Tollerud2012}. (iii) ``cusp-core problem'': CDM simulations predict a universal Navarro-Frenk-White (NFW) halo density profile \citep{Navarro1997} with a cuspy central density, while observations of dSphs \citep{Moore1994, Flores1994, Kleyna2002, Goerdt2006,Walker2011, Amorisco2013} and low surface brightness galaxies \citep{deBlok2001, deBlok2002, deBlok2005, Burkert1995, Borriello2001, Oh2008} are, in general, better described by a cored density profile. Generally speaking, there are two categories of solutions to these small-scale issues. The first class of approaches considers additional baryonic physics in CDM simulations, such as feedback from supernova explosions and stellar winds \citep{Navarro1996, Read2005, Governato2010, Governato2012, Mashchenko2008, Pontzen2014}, energy transfer from dynamical friction of compact baryonic objects \citep{El-Zant2001}, ram pressure stripping \citep{Arraki2014}, tidal stripping \citep{Brooks2013}, suppression of the star formation rate by photoionization due to early reionization \citep{Bullock2000, Benson2002, Somerville2002}, blazar heating \citep{Pfrommer2012}, and cosmic-ray heating \citep{Wadepuhl2011}. See \citet{Weinberg2013} for a comprehensive review. The second class of approaches adopts alternative dark matter models, such as warm dark matter \citep{Colin2000, Maccio2012, Lovell2012}, self-interacting dark matter \citep{Spergel2000, Vogelsberger2012}, and axion/scalar field dark matter \citep{Turner1983, Khlopov1985, Sin1994, Guzman2000, Peebles2000, Goodman2000, Hu2000, Matos2000,Sahni2000, Bohmer2007, Sikivie2009, Chavanis2011,Robles2013,Guth2015,Davidson2015}, which naturally produce constant-density cores and a cutoff in the matter power spectrum. In this work, we focus on scalar field dark matter composed of extremely light bosons with negligible self-interaction, known as \emph{$\psiDM$} \citep{Schive2014a, Marsh&Silk2013} or \emph{fuzzy} dark matter \citep{Hu2000}. In this scenario, dark matter consists of axion-like particles proposed by string theory \citep{Arvanitaki2010, Svrcek2006} or non-QCD axions \citep{Chiueh2014}. For a dark matter particle mass of $m_{\psi} \sim 10^{-22}\eV$, the de Broglie wave nature becomes manifest on astrophysical scales.
Here uncertainty principle counters gravity below a Jeans scale, simultaneously leading to kpc-scale central density cores in dSph-sized subhalos and suppressing the abundance of subhalos with masses below $\sim 10^{10}\Msun$ \citep{Hu2000, Marsh2010, Schive2014a, Schive2014b}. It thus provides a plausible solution to the small-scale issues in CDM. Moreover, on large scales $\psiDM$ is statistically indistinguishable from CDM \citep{Woo2009, Schive2014a, Marsh&Silk2013, Bozek2015, Widrow1993}. For these reasons, $\psiDM$ has become a viable candidate for dark matter. One of the key features in $\psiDM$ is that it has only one free parameter, $m_{\psi}$, the dark matter particle mass. It is thus crucial to validate whether the estimates of $m_{\psi}$ from various independent observational constraints are consistent with each other. Using CMB and galaxy clustering data, \citet{Hlozek2015} obtained $m_{\psi}>10^{-24}\eV$. \citet{Schive2016} used high-redshift galaxy luminosity function to derive $m_{\psi}>1.2\times10^{-22}\eV$ (see also \citet{Bozek2015,Corasaniti2016}). \citet{Sarkar2016} used damped Lyman-$\alpha$ observations and found $m_{\psi}>10^{-23}\eV$. \citet{Marsh&Pop2015} explored multiple stellar subpopulations in Fornax and Sculptor and estimated $m_{\psi}<1.1\times 10^{-22}\eV$. \citet{Lora2012, Lora2014} used the longevity of the cold clumps in Ursa Minor and Sextans and found $m_{\psi} \sim 0.3 \-- 1\times10^{-22}\eV$ and $m_{\psi} \sim 0.12 \-- 8\times10^{-22}\eV$, respectively. Using newly discovered ultra-faint dSphs, \citet{Calabrese2016} estimated $m_{\psi} \sim 3.7 \-- 5.6\times10^{-22}\eV$. According to these studies, which cover a variety of astrophysical probes, the viable $\psiDM$ particle mass generally lies in the range $10^{-23} \-- 10^{-21}\eV$. See \citet{Marsh2015a} for a comprehensive review. DSph galaxies are the most dark-matter-dominated systems with mass-to-light ratios exceeding $>100$ \citep{Mateo1998,Kleyna2005} and with little gas and no recent star formation \citep{Smecker-Hane1994, Tolstoy2003, Venn2004}. Large spectroscopic surveys provide rich stellar kinematics and metallicity data of dSphs, allowing detailed studies on dark matter properties from dynamical modeling. We refer to \citet{Battaglia2013} for a comprehensive review on this subject. Several works suggest that the dark matter density profiles are cored rather than cuspy \citep{Kleyna2003,Goerdt2006,Sanchez-Salcedo2006, Battaglia2008, Jardel2012, Walker2011, Amorisco&Evans2012a}, although debate still remains \citep{Strigari2010, Strigari2014, Jardel2013}. In this work, we apply Jeans analysis to the kinematic data of eight classical dSphs to determine $m_{\psi}$. Most importantly, we investigate (i) whether the estimates of $m_{\psi}$ from different dSphs are in good agreement with each other, and (ii) whether the combined constraint of $m_{\psi}$ from the eight classical dSphs is consistent with other independent observational constraints described previously. The paper is structured as follows. We describe the procedure of our Jeans analysis in \sref{sec:Jeans} and show results in \sref{sec:Results}. In \sref{sec:Discussion}, we address various uncertainties in the analysis and discuss some extended models. Finally, we summarize our findings in \sref{sec:Conclusion}. 
\section{Jeans Analysis} \label{sec:Jeans} In order to constrain the dark matter particle mass $m_{\psi}$ by the stellar kinematics of dSphs, we regard stars as tracers of the gravitational potential dominated by the dark matter and assume that tidal disturbances are negligible. Dynamical equilibrium of stars in dSphs is supported by velocity dispersion with negligible rotation \citep{Mateo1998}. Assuming spherical symmetry, the Jeans equation \citep{GalacticDynamics2008} relates the stellar phase-space distribution to the dark matter halo as \begin{equation} \frac{1}{\nu}\frac{d}{dr}(\nu \bar{v_r^2})+2\frac{\beta\bar{v_r^2}}{r}=-\frac{GM(r)}{r^2}, \label{eq:jeans} \end{equation} where $\nu(r)$, $\bar{v_r}(r)$, $\bar{v_t}(r)$, and $\beta(r)\equiv 1-\bar{v_t^2}/2\bar{v_r^2}$ are the stellar number density, radial velocity dispersion, tangential velocity dispersion, and orbital anisotropy, respectively. $M(r)$ is the enclosed mass of the dark matter. By assuming $\beta = \rm{constant}$, \eref{eq:jeans} has the solution \citep{Lokas&Mamon2003}: \begin{equation} \nu\bar{v^2_r}=Gr^{-2\beta}\int_r^{\infty}s^{2\beta-2}\nu(s)M(s)ds. \label{eq:jeanssolution} \end{equation} By projecting \eref{eq:jeanssolution} along the line of sight, we get \citep{GalacticDynamics2008} \begin{equation} \sigma_p^2(R)=\frac{2}{\Sigma(R)}\int_{R}^{\infty}\biggl (1-\beta\frac{R^2}{r^2}\biggr ) \ \frac{\nu \bar{v_r^2}r}{\sqrt{r^2-R^2}}dr, \label{eq:jeansproject} \end{equation} where $R$, $\Sigma(R)$, and $\sigma_p(R)$ are the projected radius, stellar surface density, and line-of-sight velocity dispersion, respectively. In the following, we address in detail (1) the dark matter mass profile in the $\psiDM$ model, (2) the stellar density and velocity dispersion profiles in the Jeans analysis, and (3) the Markov chain Monte Carlo algorithm for constraining the posterior distribution of $m_{\psi}$. \subsection{Wave Dark Matter Halo} \label{subsec:wave dark matter model} From numerical simulations, \citet{Schive2014a} found a gravitationally self-bound, constant-density core at the center of each $\psiDM$ halo, which connects to an NFW profile at a larger radius. This cored profile satisfies a soliton solution and can be well fitted by \citep{Schive2014a} \begin{equation} \rho_{\rm{soliton}}(r) = \frac{1.9~(m_\psi/10^{-23}\eV)^{-2}(r_c/\pc)^{-4}} {[1+9.1\times10^{-2}(r/r_c)^2]^8}~10^{12}\Msun\pc^{-3}, \label{eq:SolitonFit} \end{equation} which has two free parameters, $m_{\psi}$ and $r_c$, where $r_c$ is the core radius defined as the radius at which the density drops to one-half its peak value. The main goal of this work is thus to determine both $r_c$ and $m_{\psi}$ for each dSph. Note that the mass profile, $M(r)= 4\pi\int_{0}^{r}s^2\rho(s)ds$, can be calculated analytically from \eref{eq:SolitonFit}. See Appendix \ref{sec:SolitonMassProfile} for the explicit form. \citet{Schive2014a} show that, in general, the transition radius between the inner soliton and the outer NFW halo is greater than $3\,r_c$, which is a few times greater than the observed half-light radii of dSphs. It is therefore reasonable, as a first approximation, to assume that all stars reside within the central soliton and to ignore the outer NFW halo when conducting the Jeans analysis. For dSphs with stars possibly extending beyond $3\,r_c$ (e.g., Fornax), we also extend the main analysis to include outer NFW halos (see \sref{subsec:SolitonNFW}).
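To make the halo model concrete, the short script below (an illustrative sketch, not the actual analysis code of this work) evaluates the soliton profile of \eref{eq:SolitonFit} and computes the enclosed mass $M(r)=4\pi\int_{0}^{r}s^2\rho_{\rm{soliton}}(s)ds$ by direct numerical quadrature instead of the analytic expression of Appendix \ref{sec:SolitonMassProfile}; the core parameters in the example are merely representative values of the order of those listed in \tref{table:fit}:
\begin{verbatim}
# Illustrative sketch: soliton density profile (Eq. eq:SolitonFit) and enclosed mass.
# Units: r, r_c in pc; m23 = m_psi / 1e-23 eV; density in Msun/pc^3; mass in Msun.
import numpy as np
from scipy.integrate import quad

def rho_soliton(r, rc, m23):
    """Soliton density [Msun/pc^3] at radius r [pc] for core radius rc [pc]."""
    return 1.9e12 * m23**-2 * rc**-4 / (1.0 + 9.1e-2 * (r / rc)**2)**8

def mass_soliton(r, rc, m23):
    """Enclosed mass [Msun] within radius r [pc], by numerical quadrature."""
    integrand = lambda s: 4.0 * np.pi * s**2 * rho_soliton(s, rc, m23)
    M, _ = quad(integrand, 0.0, r)
    return M

# Example: a Fornax-like core, rc ~ 600 pc, m_psi ~ 1.8e-22 eV (m23 = 18);
# these inputs are only roughly representative of Table 1.
rc, m23 = 600.0, 18.0
print(rho_soliton(0.0, rc, m23))        # central density
print(mass_soliton(3.0 * rc, rc, m23))  # mass inside the transition radius ~ 3 r_c
\end{verbatim}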
\subsection{Stellar Density and Velocity Dispersion} \label{subsec:stellar density and velocity dispersion} The stellar surface densities of dSphs are commonly fitted by King, Sersic, or Plummer models \citep[e.g.,][]{Irwin1995}. Following \citet{Walker2009b}, we first adopt a Plummer profile, $I(R)=L(\pi R_h^2)^{-1}[1+R^2/R_h^2]^{-2}$, where $L$ is the total luminosity and $R_h$ is the radius enclosing $0.5\,L$. We use $R_h$ derived from \citet{Walker2009b}. Further assuming a constant mass-to-light ratio, the three-dimensional density profile can be derived from the surface density profile by the Abel transform as $\nu(r)=3L(4\pi R_h^3)^{-1}[1+r^2/R_h^2]^{-5/2}$. We also investigate other stellar density models in \sref{subsubsec:StellarDens} so as to consolidate the results. For the stellar velocity dispersion, we first adopt the data of eight classical dSphs in \citet{Walker2007} and \citet{Walker2009b, Walker2013} (see Figs. \ref{fig:vel_disp} and \ref{fig:vel_disp_w07}), where the error bars indicate $1\sigma$ uncertainties. We shall also discuss the results using different observational data sets in \sref{subsubsec:VelDisp}. \subsection{Markov Chain Monte Carlo} \label{subsec:MCMC} To constrain the range of $m_{\psi}$, we follow \citet{Walker2009b,Salucci2012} and compare the empirical line-of-sight velocity dispersion, $\sigma_{V_0}(R_i)$, with the projected velocity dispersion, $\sigma_p(R_i)$, estimated from our Jeans analysis using \eref{eq:jeansproject}. We explore a three-dimensional parameter vector $\vec{\theta} \equiv \left \{ -\log_{10}(1-\beta),\,\log_{10}[r_c/\rm{pc}],\, \log_{10}[m_\psi /10^{-23}\rm{eV}] \right \}$ with uniform priors over the ranges $-1\leq-\log_{10}(1-\beta)\leq1$, $1\leq \log_{10}[r_c/\rm{pc}]\leq4$, and $0\leq \log_{10}[m_\psi /10^{-23}\rm{eV}]\leq3$. We take the likelihood function \begin{equation} L(\vec{\theta}) = \prod_{i=1}^N \frac{1}{\sqrt{2\pi\rm{Var}[\sigma_{V_0}(R_i)]}} \exp \biggl [ -\frac{1}{2}\frac{(\sigma_{V_0}(R_i)-\sigma_p(R_i))^2}{\rm{Var}[\sigma_{V_0} (R_i)]} \biggr ], \label{eq:likelihood} \end{equation} where $\rm{Var}[\sigma_{V_0}(R_i)]$ is the square of the observational uncertainty associated with $\sigma_{V_0}(R_i)$ and $N$ is the number of bins in the velocity dispersion profile. We obtain the posterior probability distribution of each model parameter using a Markov chain Monte Carlo (MCMC) analysis with the Metropolis-Hastings algorithm \citep{Metropolis1953, Hastings1970}. We use the MCMC engine \emph{CosmoMC} \citep{Lewis2002ah,Lewis1999bs} as a generic sampler and run four parallel chains. The code computes the R-statistic of Gelman and Rubin \citep{An98stephenbrooks} as the convergence criterion and stops iterations when this value is less than 1.01 for each model parameter. The first $30\%$ of the accepted steps are discarded as burn-in. To account for the observational uncertainty of $R_h$, we follow \citet{Salucci2012} and draw $R_h$ randomly from a Gaussian distribution with mean and standard deviation given by the observations \citep{Irwin1995}. We also investigate the case with a fixed $R_h$ in \sref{subsubsec:StellarDens}. \section{Results} \label{sec:Results} We apply the Jeans analysis described above to the observational data of eight classical dSphs.
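Before turning to the detailed results, we give a minimal numerical sketch of how \eref{eq:jeanssolution}, \eref{eq:jeansproject}, and \eref{eq:likelihood} can be evaluated for a given halo mass profile (this is only an illustration of the equations above; the actual analysis uses the \emph{CosmoMC} sampler as described in \sref{subsec:MCMC}). The substitution $u^2=r^2-R^2$ removes the integrable singularity of \eref{eq:jeansproject} at $r=R$:
\begin{verbatim}
# Illustrative sketch of the Jeans-analysis ingredients (Eqs. eq:jeanssolution,
# eq:jeansproject, eq:likelihood); not the CosmoMC-based pipeline of this work.
import numpy as np
from scipy.integrate import quad

G = 4.302e-3  # gravitational constant in pc Msun^-1 (km/s)^2

def nu_plummer(r, Rh):
    # 3D Plummer stellar density; the total luminosity L cancels in sigma_p, so L = 1
    return 3.0 / (4.0 * np.pi * Rh**3) * (1.0 + r**2 / Rh**2)**-2.5

def sigma_plummer(R, Rh):
    # projected Plummer surface density (again with L = 1)
    return 1.0 / (np.pi * Rh**2) * (1.0 + R**2 / Rh**2)**-2

def nu_vr2(r, beta, Rh, mass):
    # Eq. (eq:jeanssolution) with constant anisotropy beta; mass(r) is M(r) in Msun
    integrand = lambda s: s**(2.0 * beta - 2.0) * nu_plummer(s, Rh) * mass(s)
    val, _ = quad(integrand, r, np.inf)
    return G * r**(-2.0 * beta) * val

def sigma_p2(R, beta, Rh, mass):
    # Eq. (eq:jeansproject); substitute r = sqrt(R^2 + u^2) so r dr / sqrt(r^2 - R^2) = du
    def integrand(u):
        r = np.hypot(R, u)
        return (1.0 - beta * R**2 / r**2) * nu_vr2(r, beta, Rh, mass)
    val, _ = quad(integrand, 0.0, np.inf)
    return 2.0 / sigma_plummer(R, Rh) * val

def log_likelihood(R_bins, sig_obs, sig_err, beta, Rh, mass):
    # Gaussian likelihood of Eq. (eq:likelihood), written as a log-likelihood
    sig_model = np.array([np.sqrt(sigma_p2(R, beta, Rh, mass)) for R in R_bins])
    return np.sum(-0.5 * ((sig_obs - sig_model) / sig_err)**2
                  - 0.5 * np.log(2.0 * np.pi * sig_err**2))
\end{verbatim}
Here \texttt{mass} can be, for instance, the \texttt{mass\_soliton} function from the previous sketch (wrapped as \texttt{lambda r: mass\_soliton(r, rc, m23)}), and the resulting log-likelihood can be passed to any Metropolis-Hastings sampler.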
\fref{fig:vel_disp} shows the empirical velocity dispersions of \citet[][hereafter W09]{Walker2009b} and the estimated best-fit dispersions, which correspond to the maximum likelihood points in our MCMC chains with $m_{\psi}$ confined to the $2\sigma$ range of the combined constraint from eight dSphs. Clearly, the soliton core profile in the $\psiDM$ model provides satisfactory fits to the observations. The reduced chi-square, $\chi_{\rm{red}}^2$, for each dSph assuming three free parameters lies in the range $0.26 \-- 1.45$. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig1.pdf} \caption{ Projected velocity dispersion profiles for the eight classical dSphs in the observational data set of \citet{Walker2009b}. Error bars show the empirical profiles with $1\sigma$ uncertainties. Solid lines show the best-fit profiles obtained in our Jeans analysis using the soliton density profile in $\psiDM$ (\eref{eq:SolitonFit}), which provide satisfactory fits to the observations. The reduced chi-square, $\chi_{\rm{red}}^2$, of each dSph is also shown. The confidence intervals of the model parameters for each dSph are listed in \tref{table:fit}. } \label{fig:vel_disp} \end{figure} \fref{fig:mb_rc_contour_w09} shows the $1\sigma\,(68\%)$ and $2\sigma\,(95\%)$ contours for the posterior distributions of $m_{\psi}$ and $r_{c}$. The corresponding values of $\beta$ are shown in the colored scatter plot. Notice that there is a clear correlation between $m_{\psi}$ and $r_c$. This correlation results from the fact that a significant fraction of stars are located within the soliton core radius, where the dark matter density is roughly a constant and thus $m_{\psi}$ and $r_c$ become degenerate as $m_{\psi} \propto r_c^{-2}$ (see \eref{eq:SolitonFit}). Breaking this degeneracy therefore relies on stars outside $r_c$. The mean correlation is found to be $m_{\psi} \propto r_c^{-1.4}$, shallower than the fully degenerate case. This also explains the tendency for the tangential anisotropy to increase with decreasing $r_c$ and increasing $m_{\psi}$. By inserting $m_{\psi} \propto r_c^{-1.4}$ into \eref{eq:SolitonFit}, we have $\rho_{\rm{soliton}} \propto r_c^{-1.2}$. Therefore, a smaller $r_c$ corresponds to a higher core density, which requires a larger tangential anisotropy to counter gravity and match the observed velocity dispersion profile. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig2.pdf} \caption{ Posterior distributions of $m_{\psi}$ and $r_{c}$ colored by $\beta$ for each dSph in our MCMC analysis. Contours show the $1\sigma$ and $2\sigma$ confidence regions. The confidence intervals of the model parameters for each dSph are also listed in \tref{table:fit}. } \label{fig:mb_rc_contour_w09} \end{figure} \tref{table:fit} lists the means, $1\sigma$, and $2\sigma$ confidence intervals of $r_c$, $m_{\psi}$, and $\beta$ for each dSph using the one-dimensional posterior distribution obtained from MCMC chains. We estimate the combined constraint by multiplying the one-dimensional posterior distributions of all dSphs based on the assumption that each dSph gives an independent constraint on $m_{\psi}$. This leads to a $1\sigma\,(2\sigma)$ confidence interval of $m_{\psi}=1.79_{-0.17(-0.33)}^{+0.17(+0.35)}\times10^{-22}\eV$, with a corresponding reduced chi-square for the particle mass of $\chi_{\rm{red}}^{2}=1.2$. We also estimate the combined constraint by fitting all eight dSphs simultaneously, and the results are very consistent with the values given above.
See Appendix \ref{sec:Joint analysis} for details. Note that the combined constraint is dominated by Fornax since it has the smallest variance among the eight dSphs. Also note that the anisotropy parameters $\beta$ of some dSphs have only $1\sigma$ constraints. We have adopted a larger range for the uniform prior of $\beta$ and also a different anisotropy prior distribution (see \sref{subsubsec:Anisotropy}) and verified that the estimates of $m_{\psi}$ are consistent with \tref{table:fit}. \begin{table*} \begin{center} \caption{Constraints on model parameters from MCMC analysis. Values represent the means and $1\sigma$ ($2\sigma$) confidence intervals. The first and second rows for each dSph represent the results estimated from the kinematic data sets of W09 and W07, respectively. The anisotropy parameters of some dSphs have only $1\sigma$ constraints.} \label{table:fit} \begin{tabular}{llrr} \hline Galaxy & $\log_{10}[m_\psi /10^{-23}\rm{eV}]$ & $\log_{10}[r_c/\rm{pc}]$ & $-\log_{10}(1-\beta)$ \\ \hline Carina & $1.29^{+0.29(+0.50)}_{-0.18(-0.56)}$ & $2.76^{+0.13(+0.35)}_{-0.18(-0.33)}$ & $\hphantom{-}0.24^{+0.15(+0.35)}_{-0.18(-0.33)}$ \\ & $1.09^{+0.18(+0.32)}_{-0.14(-0.35)}$ & $2.87^{+0.09(+0.22)}_{-0.11(-0.21)}$ & $\hphantom{-}0.29^{+0.10(+0.22)}_{-0.11(-0.20)}$ \\ Draco & $0.91^{+0.20(+0.40)}_{-0.20(-0.40)}$ & $2.86^{+0.12(+0.25)}_{-0.12(-0.24)}$ & $\hphantom{-}0.53^{+0.35\hphantom{(+0.00)}}_{-0.24\hphantom{(-0.00)}}$ \\ & $1.05^{+0.17(+0.31)}_{-0.15(-0.34)}$ & $2.75^{+0.10(+0.21)}_{-0.10(-0.20)}$ & $\hphantom{-}0.16^{+0.13(+0.31)}_{-0.16(-0.29)}$ \\ Fornax & $1.21^{+0.06(+0.10)}_{-0.05(-0.11)}$ & $2.79^{+0.05(+0.10)}_{-0.05(-0.09)}$ & $\hphantom{-}0.07^{+0.06(+0.11)}_{-0.06(-0.11)}$ \\ & $0.99^{+0.07(+0.13)}_{-0.07(-0.15)}$ & $2.90^{+0.05(+0.11)}_{-0.05(-0.10)}$ & $\hphantom{-}0.07^{+0.06(+0.12)}_{-0.06(-0.12)}$ \\ Leo I & $1.18^{+0.25(+0.42)}_{-0.18(-0.46)}$ & $2.71^{+0.13(+0.31)}_{-0.16(-0.29)}$ & $\hphantom{-}0.33^{+0.22(+0.57)}_{-0.32(-0.48)}$ \\ & $1.08^{+0.27(+0.44)}_{-0.17(-0.49)}$ & $2.78^{+0.12(+0.31)}_{-0.17(-0.29)}$ & $\hphantom{-}0.41^{+0.18(+0.50)}_{-0.28(-0.33)}$ \\ Leo II & $1.71^{+0.39(+1.03)}_{-0.42(-0.85)}$ & $2.30^{+0.37(+0.71)}_{-0.24(-0.94)}$ & $\ge 0.21^{\hphantom{+0.00}\hphantom{(+0.00)}}_{\hphantom{+0.00}\hphantom{(+0.00)}}$\\ & $1.61^{+0.30(+0.64)}_{-0.29(-0.70)}$ & $2.45^{+0.20(+0.45)}_{-0.21(-0.48)}$ & $\hphantom{-}0.40^{+0.46\hphantom{(+0.00)}}_{-0.28\hphantom{(+0.00)}}$ \\ Sculptor & $1.31^{+0.10(+0.18)}_{-0.08(-0.19)}$ & $2.65^{+0.06(+0.14)}_{-0.07(-0.13)}$ & $\hphantom{-}0.09^{+0.08(+0.18)}_{-0.09(-0.17)}$ \\ & $1.11^{+0.14(+0.26)}_{-0.11(-0.26)}$ & $2.77^{+0.08(+0.17)}_{-0.09(-0.17)}$ & $\hphantom{-}0.22^{+0.10(+0.27)}_{-0.14(-0.25)}$ \\ Sextans & $1.79^{+0.33(+0.53)}_{-0.19(-0.58)}$ & $2.41^{+0.31(+0.61)}_{-0.30(-0.64)}$ & $-0.31^{+0.39(+0.46)}_{-0.19(-0.63)}$ \\ & $1.33^{+0.22(+0.38)}_{-0.13(-0.44)}$ & $2.78^{+0.12(+0.35)}_{-0.17(-0.32)}$ & $-0.12^{+0.13(+0.25)}_{-0.12(-0.26)}$ \\ Ursa Minor & $1.39^{+0.29(+0.55)}_{-0.24(-0.65)}$ & $2.59^{+0.18(+0.42)}_{-0.19(-0.44)}$ & $-0.01^{+0.19(+0.39)}_{-0.14(-0.40)}$ \\ \hline \end{tabular} \end{center} \centering \end{table*} \begin{table*} \begin{center} \caption{Summary of various stellar models applied to individual dSphs. The results generally agree with \tref{table:fit} and reveal insensitivity to these variations. 
} \label{table:discussion} \begin{tabular}{llll} \hline Galaxy & $\log_{10}[m_\psi /10^{-23}\rm{eV}]$ & $\log_{10}[r_c/\rm{pc}]$ & Description \\ \hline Fornax & $1.09^{+0.07}_{-0.06}$ & $2.85^{+0.05}_{-0.06}$ & velocity dispersion (Amorisco and Evans 2012b) \\ & $1.15^{+0.07}_{-0.06}$ & $2.84^{+0.05}_{-0.05}$ & stellar density (generalized Plummer) \\ & $1.23^{+0.05}_{-0.07}$ & $2.78^{+0.06}_{-0.05}$ & soliton + NFW model \\ & $0.91^{+0.26}_{-0.09}$ & $2.96^{+0.07}_{-0.19}$ & stellar subpopulations \\ Leo I & $1.19^{+0.32}_{-0.20}$ & $2.70^{+0.14}_{-0.20}$ & velocity dispersion (tidal effect) \\ & $0.92^{+0.42}_{-0.21}$ & $2.86^{+0.13}_{-0.24}$ & stellar density (generalized Plummer) \\ Draco & $0.85^{+0.34}_{-0.19}$ & $2.90^{+0.12}_{-0.19}$ & stellar density (generalized Plummer) \\ Sextans & $0.51^{+0.23}_{-0.44}$ & $3.31^{+0.25}_{-0.14}$ & velocity dispersion (VLT) \\ & $1.64^{+0.48}_{-0.14}$ & $2.55^{+0.24}_{-0.44}$ & stellar density (exponential profile) \\ Carina & $1.41^{+0.44}_{-0.18}$ & $2.67^{+0.14}_{-0.28}$ & velocity dispersion (tidal effect) \\ Sculptor & $1.08^{+0.28}_{-0.22}$ & $2.78^{+0.14}_{-0.14}$ & stellar subpopulations (OM anisotropy) \\ & $1.21^{+0.17}_{-0.15}$ & $2.70^{+0.10}_{-0.11}$ & stellar subpopulations (constant anisotropy) \\ \hline \end{tabular} \end{center} \end{table*} To further ascertain whether our estimate of $m_{\psi}$ is sensitive to the adopted observational data sets, we also apply the same Jeans analysis to the data of seven dSphs in \citet[][hereafter W07]{Walker2007}, which does not include Ursa Minor. The results are shown in Figures \ref{fig:vel_disp_w07} and \ref{fig:mb_rc_contour_w07}. The reduced chi-square of the velocity dispersion fit of each dSph lies in the range $0.61 \-- 1.66$. Note that the estimates of $m_{\psi}$ are, in general, lower than those obtained from W09, which arises from the generally higher velocity dispersions in W07. We address this discrepancy in more detail in \sref{subsubsec:VelDisp}. By multiplying the one-dimensional posterior distributions of the seven dSphs in W07, we get $1\sigma\,(2\sigma)$ confidence intervals of $m_{\psi}=1.18_{-0.13(-0.24)}^{+0.14(+0.28)}\times10^{-22}\eV$ (see Appendix \ref{sec:Joint analysis} for the results of fitting all dSphs in W07 simultaneously). This estimate is marginally consistent with the lower limit from W09. The corresponding reduced chi-square of particle mass is $\chi_{\rm{red}}^2=0.92$. The estimated means, $1\sigma$, and $2\sigma$ confidence intervals of $r_c$, $m_{\psi}$, and $\beta$ for each dSph are also listed in \tref{table:fit}. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig3.pdf} \caption{ Same as \fref{fig:vel_disp} but for the observational data set of \citet{Walker2007}. The confidence intervals of the model parameters for each dSph are listed in \tref{table:fit}. } \label{fig:vel_disp_w07} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig4.pdf} \caption{ Same as \fref{fig:mb_rc_contour_w09} but for the observational data set of \citet{Walker2007}. The confidence intervals of the model parameters for each dSph are listed in \tref{table:fit}. 
} \label{fig:mb_rc_contour_w07} \end{figure} \section{Discussion} \label{sec:Discussion} \subsection{Model Uncertainties} \label{subsec:ModelUncertainty} In order to consolidate the results of our Jeans analysis, we test how sensitive $m_{\psi}$ is to different models adopted in the MCMC calculations, including different observational data sets of the velocity dispersion profiles, different stellar density models, and a different orbital anisotropy prior distribution. Results are summarized in \tref{table:discussion}. \subsubsection{Velocity Dispersion} \label{subsubsec:VelDisp} As shown in \tref{table:fit}, the estimates of $m_{\psi}$ based on the velocity dispersion profiles of W07 are, in general, lower than those obtained from W09. This results from the generally higher velocity dispersions in W07. This difference is most significant in Fornax, where only about half of the data points in W07 and W09 overlap within the $1\sigma$ range (see Fig. \ref{fig:fornax_dispersion}). For comparison, we also show the velocity dispersion profile of \citet{Amorisco&Evans2012b}, which lies in between W07 and W09. The inverse-variance-weighted means of the velocity dispersions of Fornax in W07 and W09 are $\bar{\sigma}_{V_0}=11.6\pm 0.2\kms$ and $\bar{\sigma}_{V_0}=9.4\pm 0.1\kms$, respectively. The lower velocity dispersion in W09 mainly results from a more restrictive selection of member stars, which only includes stars with a membership probability greater than $95\%$ \citep{Walker2009b, Walker2009c}. The discarded stars can have velocities $3\sigma$ away from the mean velocity (see Figure 1 and Table 1 of \citet{Walker2009c}), and thus discarding them would lower the velocity dispersion noticeably. Also, the new samples of Fornax in W09 lead to a gently declining velocity dispersion profile at large projected radii $(\geq 1 \kpc)$ as compared with W07. Since Fornax provides the most stringent constraints on $m_{\psi}$, any difference in the observed velocity dispersion of Fornax would directly affect the estimate of $m_{\psi}$. \fref{fig:fornax_fit} shows a comparison of the one-dimensional and two-dimensional posterior distributions of $m_{\psi}$, $r_{c}$, and $\beta$ from the three different velocity dispersion profiles of Fornax in \fref{fig:fornax_dispersion}. It clearly shows that the estimate of $m_{\psi}$ is sensitive to the adopted velocity dispersion profile, where a lower and steeper profile leads to a higher estimate of $m_{\psi}$. The data sets of W07 and W09 thus likely bracket the uncertainty of the estimates of $m_{\psi}$, which lies in the range $m_{\psi} \sim 1 \-- 2\times10^{-22}\eV$. We emphasize that this value is in good agreement with other independent estimates from the stellar subpopulations in dSphs \citep{Schive2014a, Marsh&Pop2015}, the high-redshift luminosity functions \citep{Bozek2015,Schive2016,Corasaniti2016}, and the Thomson optical depth to the cosmic microwave background \citep{Bozek2015,Schive2016}. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig5.pdf} \caption{ Comparison of the projected velocity dispersion profiles of Fornax given by \citet{Walker2007} (circles), \citet{Walker2009b} (triangles), and \citet{Amorisco&Evans2012b} (squares). } \label{fig:fornax_dispersion} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig6.pdf} \caption{ Posterior distributions of $m_{\psi}$, $r_{c}$, and $\beta$ in our MCMC analysis for the three different velocity dispersion data sets of Fornax shown in \fref{fig:fornax_dispersion}.
Solid curves in the diagonal panels show the one-dimensional marginalized distributions. Filled contours in the corner panels show the $1\sigma$ and $2\sigma$ confidence regions. } \label{fig:fornax_fit} \end{figure} We also notice that the estimate of $m_{\psi}$ from Sextans in W09 is higher than those obtained from other dSphs in both W07 and W09. For example, by using the data of Sextans in W07, we find a $1\sigma$ confidence interval of $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 1.33_{-0.13}^{+0.22}$, significantly lower than the value determined using W09 but more consistent with other dSphs (see \tref{table:fit}). This is mainly because the inverse-variance-weighted mean of the velocity dispersion of W07 is $\bar{\sigma}_{V_0}=7.1\pm 0.3\kms$, apparently higher than that of W09, $\bar{\sigma}_{V_0}=6.1\pm 0.3\kms$. To look into this discrepancy, we further apply the same MCMC algorithm to the more recent Very Large Telescope (VLT) observation of Sextans \citep{Battaglia2011}, which is in better agreement with W07 than with W09. We find $r_{c} \sim 1.5-3.5\kpc$, consistent with \citet{Battaglia2011} using a pseudo-isothermal model for the dark matter density profile. However, we get a $1\sigma$ confidence interval of $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 0.51_{-0.44}^{+0.23}$, apparently lower than the estimates from both W07 and W09. The inconsistency between $m_{\psi}$ determined from the three different observations therefore suggests that the current data of Sextans provide a poor constraint on $m_{\psi}$. One caveat about the VLT data is that they extend as far as $\sim 2.5\kpc$, which lies beyond $3\,r_c$ for the $r_c$ estimated from W07. It thus seems reasonable to consider an NFW halo outside the central soliton. However, the NFW halo introduces two additional free parameters, making the Jeans analysis infeasible due to the very few data points. There is evidence of tidal stripping in the outermost parts of both Carina and Leo I \citep{Munoz2006,Mateo2008,Battaglia2012}. Therefore, for these two dSphs we also apply the Jeans analysis after discarding the outermost data point of the velocity dispersion profile. We obtain $1\sigma$ confidence intervals of $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]=1.41_{-0.18}^{+0.44}$ for Carina and $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]=1.19_{-0.20}^{+0.32}$ for Leo I, both consistent with \tref{table:fit}. However, we notice the relatively larger ranges in the revised estimates of $m_{\psi}$, which arise from the $m_{\psi}\--r_c$ degeneracy. For example, for Carina the outermost data point has a projected radius of $R\sim 870\pc$, well beyond the estimated mean core radius, $r_c = 570\pc$. For comparison, the second outermost data point has a projected radius of $R\sim 530\pc$, comparable to the core radius. Therefore, the outermost data point is important for constraining the dark matter mass profile and breaking the $m_{\psi}\--r_c$ degeneracy. \subsubsection{Stellar Density} \label{subsubsec:StellarDens} It is important to investigate the impact of different stellar density models. First, in addition to having $R_h$ randomly sampled from a Gaussian distribution with a given mean and variance, we also experiment with fixing $R_h$ to the mean value and validate that the estimates of $m_{\psi}$ are largely unchanged in all dSphs.
Second, we adopt a generalized Plummer model \citep{Mashchenko2015}, $I(R)=L_{0} \left[ 1 + (R/b)^2\right]^{-(\alpha-1)/2}$, where $L_{0}$ is the central luminosity, $b$ is the core radius, and $\alpha$ is an integer which must be greater than three for a finite total stellar mass. The standard Plummer model corresponds to $\alpha=5$. This model has been shown to fit recent observations of Fornax \citep{Coleman2005}, Leo I \citep{Smolcic2007}, and Draco \citep{Odenkirchen2001} well, even at the tidal radius. The corresponding estimates of particle mass are $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 1.15_{-0.06}^{+0.07}$ for Fornax, $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 0.92_{-0.21}^{+0.42}$ for Leo I, and $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 0.85_{-0.19}^{+0.34}$ for Draco, in good agreement with \tref{table:fit}. A relatively large difference is found in Leo I. This is because the data of \citet{Smolcic2007}, when fitted with a King profile, suggest a $50\%$ larger King core radius but a similar tidal radius compared to the data of \citet{Irwin1995}. Finally, for Sextans, we also apply an exponential profile \citep{Irwin1995}. It leads to $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 1.64_{-0.14}^{+0.48}$, consistent with \tref{table:fit}. \subsubsection{Orbital Anisotropy} \label{subsubsec:Anisotropy} For the orbital anisotropy, we also investigate a different parameterization with a uniform prior on $\eta=(\bar{v_r^2}-\bar{v_\theta^2})/(\bar{v_r^2}+\bar{v_\theta^2})$ \citep{Mashchenko2006}. This form has the advantage of being symmetric between tangential and radial anisotropies, with $\eta=-1$ for purely tangential orbits, $\eta=0$ for isotropic orbits, and $\eta=1$ for purely radial orbits. We validate that the resulting $m_{\psi}$ is very consistent with \tref{table:fit}. One caveat of the Jeans analysis is that the choice of functional form for the stellar anisotropy may affect the estimate of the enclosed mass due to the mass-anisotropy degeneracy \citep[e.g.,][]{Gonzles2016}. Using more flexible dynamical modeling methods that extract more information from the line-of-sight velocity or stellar distributions, such as modeling higher moments \citep{Lokas2002}, phase-space analyses \citep{Kleyna2002,Amorisco&Evans2012a}, and Schwarzschild's orbit superposition methods \citep{Jardel2012,Breddels2013}, can help reduce the degeneracy and provide a more robust constraint on $m_{\psi}$. We plan to address this issue in the future. \subsection{Soliton + NFW model} \label{subsec:SolitonNFW} To justify the assumption that all stars are located within the central soliton core, we investigate the case where the soliton core connects to an NFW halo at a larger radius. The overall density profile, as shown by \citet{Schive2014a,Schive2014b,Marsh&Pop2015}, can be modeled as \begin{equation} \rho(r)=\Theta (r_\epsilon -r)\rho_{\rm{soliton}}(r) + \Theta (r-r_\epsilon)\rho_{\rm{NFW}}(r), \label{eq:soliton_plu_NFW} \end{equation} where \begin{equation} \rho_{\rm{NFW}}(r)=\frac{\rho_0}{(\frac{r}{r_s})(1+\frac{r}{r_s})^{2}} \label{eq:NFW} \end{equation} is the NFW profile, $\rho_{\rm{soliton}}(r)$ is given in \eref{eq:SolitonFit}, $\Theta$ is the Heaviside step function, and $r_\epsilon$ is the transition radius between the soliton and the NFW halo given by $\rho_{\rm{NFW}}(r_\epsilon) = \rho_{\rm{soliton}}(r_\epsilon) = \epsilon\rho_{\rm{soliton}}(0)$.
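As a minimal sketch (with our own, hypothetical function names, not code from this work), the matching condition above determines $r_\epsilon$ and $\rho_0$ as follows:
\begin{verbatim}
import numpy as np

def rho_soliton(r, m23, rc):
    """Soliton density [Msun/pc^3]; m23 = m_psi/1e-23 eV, r and rc in pc."""
    return 1.9e12 * m23**-2 * rc**-4 / (1.0 + 9.1e-2 * (r / rc)**2)**8

def rho_soliton_nfw(r, m23, rc, eps, rs):
    """Piecewise soliton + NFW density.

    r_eps solves rho_soliton(r_eps) = eps * rho_soliton(0), and rho_0 follows
    from continuity at r_eps: rho_NFW(r_eps) = rho_soliton(r_eps).
    """
    # invert [1 + 9.1e-2 (r_eps/rc)^2]^8 = 1/eps  for r_eps
    r_eps = rc * np.sqrt((eps**-0.125 - 1.0) / 9.1e-2)
    x_eps = r_eps / rs
    rho0 = rho_soliton(r_eps, m23, rc) * x_eps * (1.0 + x_eps)**2
    x = np.asarray(r, dtype=float) / rs
    nfw = rho0 / (x * (1.0 + x)**2)
    return np.where(r < r_eps, rho_soliton(r, m23, rc), nfw)

# e.g. density at 1 kpc for m_psi = 1e-22 eV, r_c = 600 pc, eps = 1e-2, r_s = 2 kpc (illustrative)
print(rho_soliton_nfw(1.0e3, 10.0, 600.0, 1.0e-2, 2.0e3))
\end{verbatim}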
Since $\rho_0$ can be uniquely determined from a given $r_\epsilon$ and $r_s$, this model introduces two additional free parameters, $\left \{ \log_{10}\epsilon, \log_{10}[r_s/\rm{pc}]\right \}$, which we model with uniform priors over the ranges $-5 \leq \log_{10}\epsilon \leq \log_{10}0.5$ and $1 \leq \log_{10}[r_s/\rm{pc}] \leq 4$. Note that the soliton core mass $(M_c)$ has been found to correlate with halo virial mass $(M_h)$ in the mass range $\sim 10^8 \-- 5\times10^{11}\Msun$ \citep{Schive2014b} as $M_c = 0.25(M_h/M_{\rm{min},0})^{1/3} M_{\rm{min},0}$, where $M_{\rm{min},0} \sim 4.4\times10^7 (m_{\psi}/10^{-22}\eV)^{-3/2}\Msun$ is the predicted minimum $\psiDM$ halo mass at the present day. Using this relation can, in principle, eliminate one free parameter. However, it is unclear whether this relation, which is determined from isolated galaxies, can be applied to satellite dSph galaxies that have undergone a complex evolution history. In order to make our analysis more robust, we thus do not take this core-halo mass relation into account when conducting the Jeans analysis. We apply this soliton + NFW model to Fornax since it provides the strongest constraint on $m_{\psi}$ and has around half of the observation points lying outside the core radius determined from the soliton-only model. To properly constrain the central soliton density profile that has two free parameters, we set the lower limit of $r_c$ to the position of the second innermost observation point, which is $\sim 100\pc$. This is a very weak constraint since the core size of Fornax is typically found to be $\sim 1\kpc$ \citep[e.g.,][]{Amorisco2013, Walker2011}. We obtain $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]=1.23_{-0.07(-0.14)}^{+0.05(+0.13)}$ and $\log_{10}[r_c/\rm{pc}]=2.78_{-0.05(-0.11)}^{+0.06(+0.11)}$ (also listed in \tref{table:discussion}), consistent with the soliton-only model (see \tref{table:fit}). The transition radius between the soliton and the NFW halo is found to be greater than $\sim 2.5\,r_c$, where the soliton density has dropped by a factor of 30. The scale radius $r_s$ is unconstrained, which is expected since most stars still reside within the transition radius. The slightly larger $m_{\psi}$ and smaller $r_c$ arise from the additional gravitational support from the external NFW halo for stars outside the transition radius. \subsection{Stellar Subpopulations} \label{subsec:subpopulation} Several dSphs have been found to exhibit more than one stellar subpopulation, each with distinct metallicity and kinematics. Modeling different subpopulations separately in the same gravitational potential can further break the mass-anisotropy degeneracy in the Jeans analysis \citep{Battaglia2008}. We model the stellar density and velocity dispersion profiles of the metal-rich (MR) and metal-poor (MP) subpopulations in Sculptor \citep{Battaglia2008}. The MR subpopulation is known to be better described by radially biased orbits in the outer region due to its rapidly declining velocity dispersion profile \citep{Battaglia2008, Strigari2014}. Therefore, we follow \citet{Battaglia2008} and adopt the Osipkov-Merritt \citep[OM,][]{Osipkov1979,Merritt1985} anisotropy profile for the MR subpopulation, and use either constant or OM anisotropy for the MP subpopulation. The OM anisotropy profile is given by $\beta(r)=r^{2}/(r^{2}+r_{a}^{2})$, where $r_a$ is the anisotropy radius with $\beta \rightarrow 0$ for $r \ll r_a$ and $\beta \rightarrow 1$ for $r \gg r_a$.
We discard the last observation data point of the MP subpopulation because of its large uncertainty. The $1\sigma$ ranges for constant and OM anisotropy for the MP subpopulation are $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]=1.21_{-0.15}^{+0.17}$ and $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]=1.08_{-0.22}^{+0.28}$, respectively (also listed in \tref{table:discussion}). Both are in good agreement with \tref{table:fit}. The empirical and the best-fit velocity dispersion profiles in our MCMC chains are shown in \fref{fig:Sculptor_dispersion_2population}, which have chi-squares of $\chi^2=6.8$ (with $\chi_{\rm{MP}}^{2}=5.4,\,\chi_{\rm{MR}}^{2}=1.4$) and $\chi^2=4.3$ (with $\chi_{\rm{MP}}^{2}=3.8,\,\chi_{\rm{MR}}^{2}=0.5$) for constant and OM anisotropy for the MP subpopulation, respectively. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig7.pdf} \caption{ Projected velocity dispersion profiles of the two stellar subpopulations in Sculptor. Error bars show the observational $1\sigma$ uncertainties \citep{Battaglia2008}. Dashed (dotted) lines show the best-fit $\psiDM$ model under the hypothesis of an OM (constant) anisotropy profile for the MP subpopulation and an OM anisotropy profile for the MR subpopulation. } \label{fig:Sculptor_dispersion_2population} \end{figure} Fornax also has three distinct stellar subpopulations \citep{Amorisco&Evans2012c}. The intrinsic rotation detected in the MP and intermediate-metallicity (IM) subpopulations is ${\Omega}_{\rm{int}}\sim 1-2\kms\kpc^{-1}$, which is negligible compared with the velocity dispersion. For example, at the outermost kinematic data point ($\sim 1\kpc$), the ratio between rotation velocity and velocity dispersion is only $\sim 0.3$ \citep{Amorisco&Evans2012c}. Therefore, we neglect rotation and apply the Jeans analysis to the three subpopulations using the stellar density and velocity dispersion profiles in \citet{Amorisco2013}. Note that \citet{Amorisco2013} applied an empirical relation between $M(<1.67R_h)$, $R_h$, and $\bar{\sigma}_{V_0}$ to the three subpopulations of Fornax to estimate the total mass profile (contours in \fref{fig:fornax_fit_3population}). This empirical mass estimator has been derived to describe a wide family of models based on the Michie-King phase-space distribution function \citep{Amorisco&Evans2012a}, where the velocity distribution is isotropic in the center and nearly radial in the outer region, similar to the OM anisotropy model. Therefore, in order to compare with their result, we adopt OM anisotropy profiles for all three stellar subpopulations. In addition, since the kinematic data have observational uncertainties in both $\sigma_{V_0}(R_i)$ and $R_i$, we follow \citet{Ma2013} and convert the uncertainty associated with $R_i$ into an effective variance in $\sigma_{V_0}(R_i)$ as $\rm{Var}[\sigma_{V_0}(R_i)]+{\sigma}'_p(R_i)^2\,\rm{Var}[R_i]$, where ${\sigma}'_p(R_i)$ is the derivative of the estimated velocity dispersion profile with respect to the projected radius. We obtain $1\sigma$ confidence intervals of $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]= 0.91_{-0.09}^{+0.26}$ and $\log_{10}[r_c/\rm{pc}]=2.96_{-0.19}^{+0.07}$. \fref{fig:fornax_dispersion_3population} shows the empirical velocity dispersions of the three subpopulations and our estimated velocity dispersions, which correspond to the maximum likelihood point within the $1\sigma$ range. \fref{fig:fornax_fit_3population} shows the corresponding $1\sigma$ total enclosed mass, $M(<1.67R_h)$, where $R_h$ is the half-light radius of each subpopulation.
This result is in good agreement with the $1\sigma$ estimate of \citet[][contours in \fref{fig:fornax_fit_3population}]{Amorisco2013}. It is also consistent with the estimate of \citet[][solid line in the figure]{Schive2014a} using only the IM subpopulation, which gives $m_{\psi}=8.1_{-1.7}^{+1.6}\times10^{-23}\eV$ and $r_c=0.92_{-0.11}^{+0.15}\kpc$. Note that we find another local maximum of the likelihood function around $\log_{10}[m_{\psi}/10^{-23}\rm{eV}]=0.59_{-0.15}^{+0.18}$ and $\log_{10}[r_c/\rm{pc}]= 3.18_{-0.11}^{+0.10}$, with anisotropy radii of the IM and MP subpopulations close to their $R_h$. However, this local peak only covers a small volume of the five-dimensional posterior distribution and is negligible in the one-dimensional marginalized distributions of $m_{\psi}$ and $r_c$. Also note that the MR subpopulation, which has the lowest velocity dispersion, has only two observation points and thus gives a relatively weak constraint compared with the IM and MP subpopulations. Accordingly, the estimate of $m_{\psi}$ is more consistent with that using W07, which has a higher velocity dispersion, closer to those of the IM and MP subpopulations than to that of the MR subpopulation. \begin{figure} \centering \includegraphics[width=\columnwidth]{fig8.pdf} \caption{ Projected velocity dispersion profiles of the three stellar subpopulations in Fornax. Error bars show the observational $1\sigma$ uncertainties \citep{Amorisco2013}, and lines show the best-fit profiles within the $1\sigma$ ranges of our estimated $m_{\psi}$ and $r_c$ based on the soliton density profile in $\psiDM$. } \label{fig:fornax_dispersion_3population} \end{figure} \begin{figure} \centering \includegraphics[width=\columnwidth]{fig9.pdf} \caption{ Total enclosed mass estimated from the three stellar subpopulations shown in \fref{fig:fornax_dispersion_3population}. Error bars represent the $1\sigma$ confidence intervals estimated in this work. For comparison, solid contours show the $1\sigma$ estimate of \citet{Amorisco2013}, and the solid line shows the best-fit soliton model using only the intermediate-metallicity subpopulation \citep{Schive2014a}. These results are in good agreement with each other. } \label{fig:fornax_fit_3population} \end{figure} \section{Conclusion} \label{sec:Conclusion} Wave dark matter ($\psiDM$), characterized by a single parameter, the dark matter particle mass $m_{\psi}$, predicts a central soliton core in every galaxy. In this work, we have applied the Jeans equation to the empirical velocity dispersion profiles of eight classical dSphs in the Milky Way so as to constrain $m_{\psi}$. We find combined $1\sigma (2\sigma)$ confidence intervals of $m_{\psi}=1.18_{-0.13(-0.24)}^{+0.14(+0.28)}\times10^{-22}\eV$ and $m_{\psi}=1.79_{-0.17(-0.33)}^{+0.17(+0.35)}\times10^{-22}\eV$ using the observational data sets of \citet{Walker2007} and \citet{Walker2009b}, respectively. The discrepancy between W07 and W09 suggests that a more elaborate star membership determination for calculating velocity dispersions would further improve the constraint on $m_{\psi}$. This combined constraint of $m_{\psi}$ is dominated by Fornax but is consistent with the estimates from individual dSphs. It is also in good agreement with other independent constraints from, for instance, the stellar subpopulations in dSphs \citep{Schive2014a,Marsh&Pop2015}, the high-redshift luminosity functions \citep{Bozek2015,Schive2016}, and the Thomson optical depth to the CMB \citep{Bozek2015,Schive2016}, which all suggest $m_{\psi} \sim 10^{-22}\eV$.
To consolidate the results, we have investigated a variety of models in the MCMC calculations, including different velocity dispersion data, stellar density profiles, and orbital anisotropy priors. We have also extended the soliton-only model to account for an NFW halo at a larger radius, and further considered distinct stellar subpopulations in both Sculptor and Fornax. It is demonstrated that these factors have only a minor effect on the estimate of $m_{\psi}$. Finally, we emphasize that the existence of large cores in dSphs is still under debate \citep[e.g.,][]{Strigari2010}. The methodology of our study is to assume a soliton core profile, \eref{eq:SolitonFit}, and ascertain whether the resulting constraint on $m_{\psi}$ is consistent with other independent constraints mentioned above. In other words, we focus on validating the self-consistency of $\psiDM$, but not on falsifying the NFW profile or the CDM model. In principle, the latter can be addressed by extending the soliton + NFW model to allow for a much smaller soliton component, and we plan to explore this in the future. \section{Acknowledgement} This work is supported in part by the National Science Council of Taiwan under the grant MOST 103-2112-M-002-020-MY3. \bibliographystyle{mnras}
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} In this article, we make two remarks about the \emph{infinite wedge representation}. To describe what we do, let $\mathbf{gl}(\infty)$ denote the Lie algebra of $\ZZ\times \ZZ$ \emph{band infinite matrices}. Then $\mathbf{gl}(\infty)$ consists of those matrices $A =(a_{ij})_{i,j\in\ZZ}$ with $a_{ij}\in \CC$ and $a_{ij}=0$ for all $|i-j| \gg 0$. Next let $\widehat{\mathbf{gl}}(\infty)$ denote the Lie algebra determined by the $2$-cocycle $\mathrm{c}(\cdot,\cdot)$ of $\mathbf{gl}(\infty)$ with values in the trivial $\mathbf{gl}(\infty)$-module $\CC$: $$\mathrm{c}(A,B):= \sum_{i \leq 0, k>0} a_{ik}b_{ki} - \sum_{i>0,k\leq 0} a_{ik}b_{ki} \text{,} $$ see for instance \cite[p. 12]{Bloch:Okounkov:2000} or \cite[p. 115]{Kac:infinite:Lie:Algebras}. The \emph{infinite wedge representation} is a suitably defined Lie algebra representation (see \S \ref{4} for precise details) $\rho : \widehat{\mathbf{gl}}(\infty) \rightarrow \End_\CC(F)\text{;}$ here $F$ is the \emph{infinite wedge space}, that is, the $\CC$-vector space determined by the set $\mathscr{S}$ which consists of those ordered strictly decreasing sequences of integers $S=(s_1,s_2,\dots)$, $s_i \in \ZZ$, with the property that $s_i = s_{i-1}-1$ for all $i \gg 0$. To describe our first theorem, let $g$ be a finite dimensional semi-simple complex Lie algebra, $g[t,t^{-1}]$ its loop algebra and $\widehat{g}$ the universal extension of $g[t,t^{-1}]$ in the sense of Garland \cite[\S 2]{Garland:1980} (see also \cite[\S 7.9]{Weibel}). In Theorem \ref{universal:extension:theorem} we show how the representation $\rho$ is related to $\widehat{g}$. Our second theorem, Theorem \ref{bosonic:Extension:theorem}, gives an elementary proof of the \emph{boson-fermion correspondence}, in the sense of Kac-Raina-Rozhkovskaya \cite[Lecture 5, p. 46]{kac-raina-rozhkovskaya}. To place this theorem in its proper context, let $\mathfrak{s}$ denote the \emph{oscillator algebra}, which is the universal extension of $\CC[t,t^{-1}]$, the loop algebra of the abelian Lie algebra $\CC$. The Lie algebra $\mathfrak{s}$ is faithfully represented in $\End_\CC(F)$ and also in $\End_\CC(B)$, where $B$ denotes the polynomial ring in countably many variables with coefficients in the ring of Laurent polynomials. The boson-fermion correspondence, as formulated in \cite[Lecture 5, p. 46]{kac-raina-rozhkovskaya} (compare also with \cite[\S 14.9--14.10]{Kac:infinite:Lie:Algebras}), concerns extending these representations to all of $\widehat{\mathbf{gl}}(\infty)$ in such a way that an evident $\CC$-linear isomorphism $F\rightarrow B$ becomes an isomorphism of $\widehat{\mathbf{gl}}(\infty)$-modules; see \S \ref{6} for more precise details. The traditional approach for proving this result is by way of vertex-operators, see \cite{Kac:infinite:Lie:Algebras} and \cite[Lecture 6, p. 46]{kac-raina-rozhkovskaya}. The key point of our approach, which does not require the use of vertex-operators, is a combinatorial construction related to partitions, see \S \ref{fermi:gl}, together with the Murnaghan-Nakayama rule which we recall in \S \ref{2.6}. \noindent {\bf Acknowledgements.} This work benefitted from discussions with Jacques Hurtubise and John Harnad. It was completed while I was a postdoctoral fellow at McGill University and also while I was a postdoctoral fellow at the University of New Brunswick, where I was financially supported by an AARMS postdoctoral fellowship.
\section{Preliminaries}\label{2} In this section, to fix notation and terminology for what follows, we recall a handful of combinatorial and Lie theoretic concepts. For the most part we use combinatorial terminology and conventions similar to those of \cite[I \S 1 -- 5]{Macdonald:Sym:Func} and Lie theoretic terminology and conventions similar to those of \cite[\S 7 and \S 14]{Kac:infinite:Lie:Algebras}. \np{}\label{2.1} Let $\mathscr{P}$ denote the set of partitions. Then $\mathscr{P}$ consists of those infinite weakly decreasing sequences of non-negative integers $\lambda=(\lambda_1,\lambda_2,\dots)$ with the property that at most finitely many of the $\lambda_i$ are nonzero. If $\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P}$, then the number $\operatorname{weight}(\lambda):= \sum_{i=1}^{\infty}\lambda_i$ is called the \emph{weight of $\lambda$} and we denote by $\mathscr{P}_d$ the set of partitions of weight $d$. If $\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P}$, then we often identify $\lambda$ with the finite weakly decreasing sequence $(\lambda_1,\dots, \lambda_r)$, where $r = \operatorname{length}(\lambda) := \max \{ i : \lambda_i \not = 0 \}$. \np{}\label{2.2} The \emph{Young diagram} of a partition $\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P}$ is defined to be the set of points $(i,j) \in \ZZ^2$ such that $i \geq 1$ and $1 \leq j \leq \lambda_i$. When drawing the Young diagram associated to a partition we use the convention that the first coordinate is the row index, which starts at $1$ and increases downward. Similarly, the second coordinate is the column index, which starts at $1$ and increases from left to right. We refer to the elements $(i,j)$ of a Young diagram as the \emph{boxes} of the associated partition and the entries $i$ and $j$ as the sides of the box. \np{}\label{2.3} If $\lambda, \mu \in \mathscr{P}$ and $\lambda \supseteq \mu$, so $\lambda_i \geq \mu_i$ for all $i \geq 1$, then the set theoretic difference of the Young diagrams corresponding to $\lambda$ and $\mu$ is denoted by $\lambda \backslash \mu$ and is called a \emph{skew diagram}. If $\lambda, \mu \in \mathscr{P}$ and $\lambda \supseteq \mu$, then let $\theta := \lambda \backslash \mu$ denote the skew diagram that they determine. By a \emph{path} in $\theta$, we mean a sequence $x_0,x_1,\dots,x_m$ with $x_i \in \theta$, such that $x_{i-1}$ and $x_i$ have a common side for $1\leq i\leq m$. A subset $\nu \subseteq \theta$ is said to be \emph{connected} if every two boxes in $\nu$ can be connected by a path in $\nu$. The \emph{length} of $\theta$ is defined to be the number of boxes that appear in its diagram and is denoted by $\#\theta$. We say that $\theta$ is a \emph{border strip} if it is connected and if it contains no $2 \times 2$ block of boxes. Finally, if $\theta$ is a border strip, then we denote its \emph{height} by $\operatorname{height}(\theta)$ and define it to be one less than the number of rows that it occupies. \np{}\label{2.4} The symmetric group $S_n$ acts on the polynomial ring $\CC[x_1,\dots,x_n]$ by permuting the variables and we let $\Lambda_n$ denote the subring of invariants. We then have that $\Lambda_n = \bigoplus_{k\geq 0} \Lambda_n^k$ where $\Lambda_n^k\subseteq \Lambda_n$ is the subspace of symmetric polynomials of degree $k$.
If $k\in \ZZ_{\geq 0}$, $m,n\in \ZZ_{\geq 1}$ and $m\geq n$, we have evident restriction maps $ \rho_{m,n}^k : \Lambda_m^k \rightarrow \Lambda_n^k\text{;}$ let $ \Lambda^k = \idlim \Lambda^k_n$ and $ \Lambda = \bigoplus_{k\geq 0} \Lambda^k\text{.}$ Then $\Lambda = \CC[h_1,h_2,\dots]$ where the $h_k$ are such that their image in $\Lambda^k_n$ is the $k$th complete symmetric function in the variables $x_1,\dots, x_n$. \np{}\label{2.5} Let $H(Z) := \sum_{k\geq 0} h_k Z^k \in \Lambda[[Z]]$ and define $p_k \in \Lambda$ by the coefficient of $Z^{k-1}$ in the power series $P(Z) := H'(Z)/H(Z)\text{.} $ The image of each $p_k$ in $\Lambda^k_n$ is the $k$th power sum in the variables $x_1,\dots,x_n$ and the $h_k$ can be expressed in terms of the $p_k$ via the equality of power series $H(Z) = \exp\left(\sum_{k\geq 1} t_k Z^k \right)\text{;} $ here $t_k = p_k/k$. The Schur functions $s_\lambda$, defined for all partitions $\lambda = (\lambda_1,\lambda_2,\dots) \in \mathscr{P}$, are defined by $s_\lambda := \operatorname{det} \left( h_{\lambda_i - i + j}\right)_{1\leq i,j \leq n} \text{,}$ where $n = \operatorname{length}(\lambda)$, and form a $\CC$-basis for $\Lambda$. In what follows we let $\langle \cdot , \cdot \rangle$ denote the symmetric bilinear form on $\Lambda$ for which the Schur polynomials are orthonormal. In particular, $ \langle s_\lambda, s_\mu \rangle = \delta_{\lambda,\mu} \text{.}$ \np{}\label{2.6} By abuse of notation, we let $p_k \in \End_\CC(\Lambda)$ be the $\CC$-linear endomorphism given by multiplication by $p_k$. The adjoint of $p_k$ with respect to $\langle \cdot, \cdot \rangle$, which we denote by $p_k^\perp$, is the $\CC$-linear endomorphism given by the differential operator $k \frac{\partial}{\partial p_k}$, \cite[p. 76]{Macdonald:Sym:Func}. The effect of the operator $p_k$ in the basis of Schur polynomials is given by the Murnaghan-Nakayama rule: \begin{equation}\label{Murnaghan:Nakayama} p_k s_\lambda = \sum_{\substack{\nu \supseteq \lambda, \\ \nu \backslash \lambda \text{ is a border strip} \\ \text{of length $k$}} }(-1)^{\operatorname{height}(\nu \backslash \lambda)} s_\nu \text{,} \end{equation} \cite[p. 601]{Okounkov:Vershik:1996}. Using \eqref{Murnaghan:Nakayama}, in conjunction with \cite[I.V. Ex. 3, p. 75]{Macdonald:Sym:Func}, we deduce the adjoint form of the Murnaghan-Nakayama rule: \begin{equation}\label{Murnaghan:Nakayama:adjoint} p_k^{\perp} s_\lambda = \sum_{\substack{ \lambda \supseteq \nu, \\ \lambda \backslash \nu \text{ is a border strip} \\ \text{of length $k$}}} (-1)^{\operatorname{height}(\lambda \backslash \nu)} s_\nu\text{.} \end{equation} \np{}\label{2.7} We let $\mathrm{Mat}(\infty)$ denote the $\CC$-vector space of $\ZZ \times \ZZ$ matrices with entries in $\CC$. If $A=\left(a_{ij}\right)_{i,j\in\ZZ}, B=\left(b_{ij}\right)_{i,j\in\ZZ} \in \mathrm{Mat}(\infty)$ and $a_{ik} b_{kj} = 0$, for all $i,j\in\ZZ$ and almost all $k \in \ZZ$, then their product is given by $C = AB := \left( c_{ij}\right)_{i,j \in \ZZ}\text{,} $ where $c_{ij} = \sum_{k \in \ZZ} a_{ik}b_{kj}\text{.} $ \np{}\label{2.8} We let $\E_{ij}$ denote the element of $\mathrm{Mat}(\infty)$ with $i,j$ entry equal to $1$ and all other entries equal to zero. We say that a matrix $A=(a_{ij})_{i,j\in\ZZ} \in \operatorname{Mat}(\infty)$ is a \emph{band infinite matrix} if $a_{ij}=0$ for all $|i-j|\gg0$. 
We denote the collection of band infinite matrices by ${\bf gl}(\infty)$ and regard it as a Lie algebra with Lie bracket given by $[A,B]=AB-BA \text{.}$ We often express elements of ${\bf gl}(\infty)$ as infinite sums of matrices. For example, the identity matrix $1_{\ZZ \times \ZZ}=\left( \delta_{ij} \right)_{i,j\in \ZZ}$ can be expressed as $1_{\ZZ \times \ZZ} = \sum_{p\in \ZZ} \mathrm{E}_{pp}\text{.} $ Also, every element of ${\bf gl}(\infty)$ can be written as a finite linear combination of matrices of the form $\sum_i a_i \E_{i,i+k}$, where $k \in \ZZ$ and $a_i \in \CC$. \np{}\label{2.9} Let $\mathfrak{gl}_N[t,t^{-1}] :=\CC[t,t^{-1}]\otimes_\CC \mathfrak{gl}_N(\CC)$, which we regard as a Lie algebra with Lie bracket determined by $$[f(t)\otimes A, g(t) \otimes B ]= f(t)g(t) \otimes [A,B] \text{.}$$ If $t^m \otimes e_{ij}$, for $i,j = 1,\dots, N$ and $m \in \ZZ$, denote the standard basis elements of $\mathfrak{gl}_N[t,t^{-1}]$, we then have that $$[t^m \otimes e_{ij}, t^n \otimes e_{k \ell}]= t^{m+n} \otimes \left(\delta_{jk}e_{i\ell} - \delta_{\ell i} e_{kj} \right), $$ and that the map \begin{equation}\label{loop:incl}\iota_N : \mathfrak{gl}_N[t,t^{-1}] \rightarrow \mathbf{gl}(\infty)\text{,} \end{equation} determined by $$t^m \otimes e_{ij} \mapsto \sum_{k \in \ZZ} \E_{N(k-m)+i,Nk+j}\text{,}$$ is a monomorphism of Lie algebras. The image of $\iota_N$ is the Lie algebra of \emph{$N$-periodic band infinite matrices}, that is, those $\ZZ \times \ZZ$ band infinite matrices $A = (a_{ij})_{i,j \in \ZZ}$ for which $a_{i+N,j+N}=a_{ij}\text{, for all $i,j \in \ZZ$.} $ \np{}\label{2.10} To define the Lie algebra $\widehat{\mathbf{gl}}(\infty)$, first let $$J := \sum_{m \leq 0} \mathrm{E}_{mm} - \sum_{m > 0} \mathrm{E}_{mm} \in {\bf gl}(\infty)$$ and observe that if $A$ and $B$ are elements of ${\bf gl}(\infty)$, then the matrix $[J,A]B$ has at most finitely many nonzero diagonal elements. Hence the expression $ \frac{1}{2}\operatorname{tr}\left([J,A]B \right)$ is a well-defined element of $\CC$. In particular, we have \begin{equation}\label{eqn2.4} \frac{1}{2}\operatorname{tr}\left([J,A]B \right) = \sum_{i\leq 0, k>0} a_{ik}b_{ki} - \sum_{i>0, k\leq 0} a_{ik}b_{ki}, \end{equation} and we define the Lie algebra $\widehat{\mathbf{gl}}(\infty)$ to be the central extension determined by the following $2$-cocycle of $\mathbf{gl}(\infty)$ with values in the trivial $\mathbf{gl}(\infty)$-module $\CC$: \begin{equation}\label{eqn2.5}\mathrm{c}(A,B):= \frac{1}{2}\operatorname{tr}\left([J,A]B \right) = \sum_{i \leq 0, k>0} a_{ik}b_{ki} - \sum_{i>0,k\leq 0} a_{ik}b_{ki} \text{.} \end{equation} As a special case of \eqref{eqn2.5}, we have that \begin{equation}\label{eqn2.6} \mathrm{c}(\mathrm{E}_{ij},\mathrm{E}_{k\ell}) = \begin{cases} -1 & \text{ $i=\ell > 0$, $j=k \leq 0$} \\ 1 & \text{ $i=\ell \leq 0$, $j=k>0$} \\ 0 & \text{otherwise,} \end{cases} \end{equation} for $i,j,k,\ell \in \ZZ$; compare with \cite[p. 12]{Bloch:Okounkov:2000} or \cite[p. 115 and p. 313]{Kac:infinite:Lie:Algebras}. Explicitly, as a $\CC$-vector space $$\widehat{{\bf gl}}(\infty) = \CC \oplus {\bf gl}(\infty), $$ and the Lie bracket is defined by $$[(a,x),(b,y)]=(\mathrm{c}(x,y),[x,y])\text{,}$$ for all $(a,x),(b,y) \in \widehat{{\bf gl}}(\infty)$. \np{}\label{2.11} We regard $\CC[t,t^{-1}]$, the ring of Laurent polynomials, as the loop algebra of the abelian Lie algebra $\CC$.
The \emph{oscillator algebra} is the Lie algebra $\mathfrak{s}$ determined by the $2$-cocycle with values in the trivial $\CC[t,t^{-1}]$-module $\CC$ given by: $$\omega : \CC[t,t^{-1}] \times \CC[t,t^{-1}] \rightarrow \CC\text{,}$$ $$\omega\left(f(t),g(t) \right) := \operatorname{res}\left(\frac{df}{dt} g \right). $$ Concretely, $$\mathfrak{s} = \CC \oplus \CC[t,t^{-1}] \text{,}$$ and the bracket is given by $$[(a,t^m),(b,t^n)]=(m\delta_{m,-n},0),$$ for all $a,b \in \CC$ and $m,n\in \ZZ$. \np{}\label{2.12} As in \cite[p. 313]{Kac:infinite:Lie:Algebras}, we realize the oscillator algebra $\mathfrak{s}$ as a subalgebra of $\widehat{\mathbf{gl}}(\infty)$ by the monomorphism of Lie algebras \begin{equation}\label{oscillator algebra:mono} \delta_0 :\mathfrak{s} \rightarrow \widehat{\mathbf{gl}}(\infty), \end{equation} defined by $$(a,t^m) \mapsto \left(a,\sum_{j \in \ZZ} \E_{j,j+m}\right).$$ \section{The Lie algebra $\widehat{\mathbf{gl}}(\infty)$ and universal extensions}\label{3} In this section we establish Theorem \ref{universal:extension:theorem}, which shows how the Lie algebra $\widehat{\mathbf{gl}}(\infty)$ is related to the Lie algebra $\widehat{g}$, which we define to be the \emph{universal extension} of $g[t,t^{-1}]$, the loop algebra of a complex finite dimensional semi-simple Lie algebra $g$. \np{}\label{3.1} Let $g$ be a complex finite dimensional semi-simple Lie algebra and $\kappa(\cdot,\cdot)$ its Killing form. We denote by $\widehat{g}$ the \emph{universal extension} of $g[t,t^{-1}]$. Then $\widehat{g}$ is the central extension determined by the $2$-cocycle \begin{equation}\label{eqn3.1} u(\cdot,\cdot) : g[t,t^{-1}]\times g[t,t^{-1}] \rightarrow \CC \end{equation} defined by \begin{equation}\label{eqn3.2} u\left(\sum t^i\otimes x_i, \sum t^j \otimes y_j\right) :=\sum i \kappa(x_i,y_{-i})\text{,} \end{equation} \cite[\S 2]{Garland:1980}; see also \cite[\S 7.9]{Weibel}, especially \cite[\S 7.9.6, p. 250]{Weibel}. To relate $\widehat{g}$ and $\widehat{\mathbf{gl}}(\infty)$, we choose a basis for $g$ and then consider its \emph{extended adjoint representation}: \begin{equation}\label{eqn3.3} 1\otimes \operatorname{ad} : g[t,t^{-1}] \rightarrow \mathbf{gl}(\infty)\text{,} \end{equation} see \eqref{eqn3.4} below. The morphism $1\otimes \operatorname{ad}$, given by \eqref{eqn3.3}, allows us to compare the pullback of $\widehat{\mathbf{gl}}(\infty)$, with respect to $1\otimes \operatorname{ad}$, and the universal extension $\widehat{g}$. In \S \ref{Sec3.3}, we prove: \begin{theorem}\label{universal:extension:theorem} The universal extension of $g[t,t^{-1}]$ is the pull-back of $\widehat{\mathbf{gl}}(\infty)$ via $1\otimes \operatorname{ad}$, the extended adjoint representation of $g$.
\end{theorem} \np{}\label{Sec3.2} Before proving Theorem \ref{universal:extension:theorem} we first observe: \begin{proposition}\label{cocycle:calc} The pullback of $\operatorname{c}(\cdot,\cdot)$ to $\mathfrak{gl}_N[t,t^{-1}]$ via $\iota_N$ is given by: \begin{equation}\label{eqn3.4a} \operatorname{c}(\iota_N(t^m\otimes x),\iota_N(t^n\otimes y)) = m \delta_{m,-n}\operatorname{tr}(xy)\text{.} \end{equation} \end{proposition} \begin{proof} In light of the map \eqref{loop:incl}, it suffices to check that, for fixed $N \in \ZZ_{\geq 1}$, $1 \leq i,j,k,\ell \leq N$, $m,n\in\ZZ$, we have $$\mathrm{c} \left( \sum_{p \in \ZZ} \E_{N(p-m)+i, Np+j}, \sum_{q \in \ZZ} \E_{N(q-n)+k, Nq+\ell} \right) = \begin{cases} m & \text{if $j=k$, $i=\ell$ and $m=-n$.} \\ 0 & \text{ otherwise.} \end{cases} $$ To compute $$\mathrm{c} \left( \sum_{p \in \ZZ} \E_{N(p-m)+i, Np+j}, \sum_{q \in \ZZ} \E_{N(q-n)+k, Nq+\ell} \right)\text{,} $$ considering \eqref{eqn2.5}, it is clear that we need to understand the quantity: \begin{equation}\label{important:quantity} \sum_{\substack{p,q \in \ZZ, \\ Np+j > 0, \\ k+N(q-n)>0, \\ i+N(p-m)\leq 0, \\ Nq+\ell \leq 0}} \delta_{Np+j,N(q-n)+k} \delta_{N(p-m)+i,Nq+\ell} - \sum_{\substack{p,q \in \ZZ, \\ Np+j \leq 0, \\ k+N(q-n) \leq 0, \\ i+N(p-m) > 0, \\ Nq+\ell > 0}} \delta_{Np+j,N(q-n)+k} \delta_{N(p-m)+i,Nq+\ell}\text{.} \end{equation} To this end, we make the following deductions: \begin{enumerate} \item{if \eqref{important:quantity} is nonzero, then $m=-n$;} \item{if $m\geq 0$ the first sum appearing in \eqref{important:quantity} is nonzero if and only if $j=k$ and $i=\ell$, while the second sum is zero; the nonzero summands appearing in \eqref{important:quantity}, when $j=k$ and $i=\ell$, are in bijection with the set of pairs $(p,q) \in \ZZ \times \ZZ$ satisfying $p=q-n$, $-1\leq p \leq m$ and $n\leq q < 0$;} \item{if $m<0$, the second sum appearing in \eqref{important:quantity} is nonzero if and only if $j=k$ and $i=\ell$, while the first sum is zero; the nonzero summands appearing in \eqref{important:quantity}, when $j=k$ and $i=\ell$, are in bijection with the set of pairs $(p,q) \in \ZZ \times \ZZ$ satisfying $p=q-n$, $m\leq p < 0$ and $0\leq q < n$.} \end{enumerate} The conclusion of Proposition \ref{cocycle:calc} follows immediately from these deductions. \end{proof} \np{}\label{Sec3.3} We now establish Theorem \ref{universal:extension:theorem}. To do so, first consider an arbitrary semi-simple Lie algebra $g$ and its adjoint representation $$ \operatorname{ad} : g \rightarrow \operatorname{End}_\CC(g)\text{.}$$ Let $N=\dim_\CC g$ and fix a basis for $g$. By composition we obtain a representation $$ \operatorname{ad} : g \rightarrow \operatorname{End}_\CC(g) \xrightarrow{\sim} \mathfrak{gl}_N(\CC),$$ which we can use to define the \emph{extended adjoint representation} of $g$ \begin{equation}\label{eqn3.4} g[t,t^{-1}] \xrightarrow{1\otimes \operatorname{ad}} \mathfrak{gl}_N(\CC)[t,t^{-1}] \xrightarrow{\iota_N} \mathbf{gl}(\infty)\text{.} \end{equation} The homomorphism \eqref{eqn3.4} allows us to compare the pull-back of $\widehat{\mathbf{gl}}(\infty)$, via $1 \otimes \operatorname{ad}$, with $\widehat{g}$.
\begin{proof}[Proof of {Theorem \ref{universal:extension:theorem}}] It is enough to show that $$u\left(\sum t^i \otimes x_i, \sum t^j \otimes y_j \right) :=\sum i \kappa(x_i,y_{-i}) $$ equals $$\mathrm{c}\left(\sum t^i\otimes \operatorname{ad}x_i, \sum t^j \otimes \operatorname{ad}y_j \right) \text{.} $$ That this equality holds true follows from the fact that $\kappa(x,y):=\operatorname{tr}(\operatorname{ad}x\operatorname{ad}y)$ and from Proposition \ref{cocycle:calc}. \end{proof} \section{Semi-infinite monomials and the infinite wedge representation}\label{4} In this section we study certain subsequences of $\ZZ$ which we refer to as \emph{semi-infinite monomials}, see \S \ref{4.1}. We then describe the \emph{infinite wedge space} and the \emph{infinite wedge representation} of the Lie algebra $\widehat{\mathbf{gl}}(\infty)$, see \S \ref{4.6} and \S \ref{4.8} respectively. What we do here is influenced heavily by what is done in \cite{Kac:infinite:Lie:Algebras}, \cite{Segal:Wilson:85}, \cite{jimbo:miwa:1983} and \cite{Miwa:Jimbo:Date}. We give proofs of all assertions for completeness and because they are needed in our proof of Theorem \ref{bosonic:Extension:theorem}. \np{}\label{4.1} By a \emph{semi-infinite monomial} we mean an ordered strictly decreasing sequence of integers $S=(s_1,s_2,\dots)$, $s_i \in \ZZ$, with the properties that $s_i = s_{i-1}-1$ for all $i \gg 0$. We let $\mathscr{S}$ denote the set of semi-infinite monomials. If $S \in \mathscr{S}$, then define strictly decreasing sequences of integers $S_+$ and $S_-$ by $S_+:= S \backslash \ZZ_{\leq 0}$ and $S_- := \ZZ_{\leq 0} \backslash S$. \np{}\label{4.2}If $S = (s_1,s_2,\dots) \in \mathscr{S}$, then there exists a unique integer $m$ with the property that $s_i = m-i+1$ for all $i \gg 0$. We refer to this number as the \emph{charge} of $S$ and denote it by $\operatorname{charge}(S)$, compare with \cite[p. 12]{Segal:Wilson:85}, \cite[p. 310]{Kac:infinite:Lie:Algebras}, and \cite[A.3]{Okounkov:2001:partitions} for instance. If $m \in \ZZ$, then let $\mathscr{S}_m := \{ S \in \mathscr{S} : \operatorname{charge}(S) = m \}$. We record the following proposition for later use. \begin{proposition}\label{prop5.1} The following assertions hold true: \begin{enumerate} \item{If $S \in \mathscr{S}$, then $\operatorname{charge}(S) = \# S_+ - \# S_-$;} \item{Let $m \in \ZZ$. The map $\lambda: \mathscr{S}_m \rightarrow \mathscr{P}$ defined by $$S=(s_1,s_2,\dots) \mapsto \lambda(S) = (\lambda_1,\lambda_2,\dots)\text{,} $$ where \begin{equation} \lambda_j := s_j - m + j - 1, \end{equation} is a bijection.} \end{enumerate} \end{proposition} \begin{proof} To prove (a), let \begin{equation}\label{prop5.1.0} S := (s_1,s_2,\dots) \in \mathscr{S}, S_+ := (s_1,\dots, s_\ell) \text{ and } S_- := (n_1,\dots, n_r); \end{equation} here $s_1 > s_2 > \dots > s_\ell$ and $0 \geq n_1 > n_2 > \dots > n_r$. Considering the definitions of $S_+$ and $S_-$ we deduce that \begin{equation}\label{prop5.1.1} s_{n+k} = n_r - k \text{ for $n := \ell-n_r-r+1$ and $k \geq 1$.} \end{equation} Now suppose that $m := \ell - r = \# S_+ - \# S_-$ and let $i = n + k$ for $k \geq 1$. We then have $ m-i+1= n_r - k$ which equals $s_i$ by \eqref{prop5.1.1}. Conversely, suppose that $s_i = m-i+1$ for all $i \gg 0$. We then have for all $k \gg0$ that \begin{equation}\label{prop5.1.2} s_{n+k} = m-n-k+1 = m-\ell+n_r+r-k. 
\end{equation} Combining \eqref{prop5.1.1} and \eqref{prop5.1.2}, we then have \begin{equation} n_r-k = m-\ell +n_r+r-k \end{equation} and so $m=\ell-r$ as desired. For (b), first note that the map $\lambda$ is clearly injective. To see that it is surjective, if $\lambda = (\lambda_1,\lambda_2,\dots) \in \mathscr{P}$, then define an element $S = (s_1,s_2,\dots) \in \mathscr{S}_m$ by $s_j = \lambda_j + m - j + 1$. By construction $S \in \mathscr{S}$. To see that $S \in \mathscr{S}_m$ note that $s_j = m-j + 1$ for $j > \operatorname{length}(\lambda)$. \end{proof} \np{}\label{4.3}{\bf Remark.} When we express $S \in \mathscr{S}$ as in \eqref{prop5.1.0}, the length of the partition $\lambda(S)$ equals the number $n$ defined in \eqref{prop5.1.1}. Also the weight of the partition $\lambda(S)$ is sometimes referred to as the \emph{energy} of $S$, \cite[p. 310]{Kac:infinite:Lie:Algebras}. \np{}\label{4.4}{\bf Example.} We can use the approach of \cite[\S 7.2]{Mike:covers} to give a graphical interpretation of Proposition \ref{prop5.1} for the case $m=0$. The case $m \not = 0$ can be handled similarly with a shift. As an example, the Young diagram associated to the partition $\lambda = (4,4,3,3,2,2,1) \in \mathscr{P}_{19}$ is: $$ \begin{Young} &&&\cr \\ &&&\cr \\ &&\cr \\ &&\cr \\ &\cr \\ &\cr \\ \cr \end{Young} $$ If we cut this Young diagram along the main diagonal then there are $3$ rows in the top piece and $3$ columns in the bottom piece. Let $u_i$, $i=1,2,3$, denote the number of boxes in the $i$-th row of the top piece and let $v_i$, $i=1,2,3$, denote the number of boxes in the $i$-th column of the bottom piece. Then, $u_1 = 3.5$, $u_2 = 2.5$, $u_3 = .5$ and $v_1 = 6.5$, $v_2 = 4.5$, and $v_3 = 1.5$. If $S$ is the charge zero semi-infinite monomial corresponding to $\lambda$, then $S$ is determined by the condition that $$S_+=(u_1+.5, u_2 + .5, u_3 + .5) = (4,3,1)$$ and $$S_-=(-v_3+.5, -v_2 + .5, -v_1 + .5) = (-1,-4, -6)\text{.}$$ In other words, $$S=(4,3,1,0,-2,-3,-5,-7,-8,\dots)$$ is the element of $\mathscr{S}_0$ corresponding to the partition $\lambda = (4,4,3,3,2,2,1)$. We can also relate the set $S$ to the \emph{code}, in the sense of \cite[\S 2]{Carrell:Goulden:2010}, of the partition $\lambda$. Specifically, if $n \in \ZZ$, $n\geq 1$ and $n\not \in S$, then $n$ corresponds to an $R$; if $n \geq 1$ and $n \in S$, then $n$ corresponds to a $U$. If $n \in \ZZ$, $n\leq0$ and $n \in S$, then $n$ corresponds to a $U$; if $n \leq 0$ and $n \not \in S$, then $n$ corresponds to an $R$. The string consisting of these $R$'s and $U$'s is the code corresponding to $\lambda$ and our set $S$. \np{}\label{4.5} Let $\lambda:\mathscr{S} \rightarrow \mathscr{P}$ denote the extension of the bijections $\lambda : \mathscr{S}_m \rightarrow \mathscr{P}$ described in Proposition \ref{prop5.1} (b). Also, to keep track of various minus signs which appear in what follows, we make the following definition: if $S \in \mathscr{S}$ and $j \in \ZZ$, then define $\operatorname{count}(j,S)$ to be the number of elements of $S$ that are strictly greater than $j$, that is: \begin{equation}\label{5.5} \operatorname{count}(j,S) := \# \{s \in S : j < s \} \text{.} \end{equation} \np{}\label{4.6} The \emph{infinite wedge space} is the $\CC$-vector space $F:=\bigoplus_{S \in \mathscr{S}} \CC$ determined by the set $\mathscr{S}$, see for instance \cite[\S 14.15]{Kac:infinite:Lie:Algebras} or \cite[p. 76]{Okounkov:2001:partitions}. 
In particular, $$F = \operatorname{span}_\CC \{v_S : S \in \mathscr{S} \} $$ where $v_S =(r_T)_{T \in \mathscr{S}}$ denotes the element of $F$ given by $r_T=0$ for $T \not = S$ and $r_S=1$. If $m \in \ZZ$, then let $F^{(m)} := \Span_\CC \{ v_S : S \in \mathscr{S}_m \}\text{.} $ We then have $F= \bigoplus_{m \in \ZZ} F^{(m)}$, compare with \cite[p. 310]{Kac:infinite:Lie:Algebras}. \np{}\label{fermi:creation:ann:1} We now recall the definition of \emph{wedging} and \emph{contracting} operators. Our approach here is only notationally different from that of \cite[p. 311]{Kac:infinite:Lie:Algebras}. On the other hand, we find our approach useful for relating these operators to our combinatorial construction on partitions, see \S \ref{fermi:gl} and especially Proposition \ref{prop6.2}. To begin with, if $S = (s_1,s_2,\dots)$ is an ordered strictly decreasing sequence of integers and $j \in \ZZ$, then we use the notations $S\bigcup \{j\}$ and $S \backslash \{j\}$ to denote the ordered strictly decreasing sequences of integers determined by the sets $\{s_1,s_2,\dots\} \bigcup\{j\}$ and $\{s_1,s_2,\dots\} \backslash \{j\}$ respectively. Next, given $j \in \ZZ$, define elements $f_j, f_j^* \in \operatorname{End}_\CC(F)$ by: \begin{equation}\label{creation1} f_j(v_S) := \begin{cases} (-1)^{\operatorname{count}(j,S)} v_{S \cup \{j\}} & \text{if $j\not \in S$} \\ 0 & \text{ if $j \in S$} \end{cases} \end{equation} and \begin{equation}\label{ann1} f^*_j(v_S) := \begin{cases} (-1)^{\operatorname{count}(j,S)}v_{S\backslash \{j\}} & \text{ if $j \in S$} \\ 0 & \text{ if $j \not \in S$,} \end{cases} \end{equation} and extending $\CC$-linearly, compare with \cite[\S 14.17]{Kac:infinite:Lie:Algebras}, \cite[p. 12]{Bloch:Okounkov:2000}, and \cite[\S A]{Bloch:Okounkov:2000}. These endomorphisms have the properties that \begin{equation}\label{eq4.9} \text{ $f_if_j^*+f_j^*f_i = \delta_{ij}$, $f_if_j+f_jf_i=0$, $f_i^*f_j^*+f_j^*f_i^* =0$} \end{equation} and \begin{equation}\label{eq4.10} [f_if_j^*, f_\ell f^*_k] = \delta_{j\ell}f_if_k^*-\delta_{ik}f_\ell f_j^*, \end{equation} for all $i,j,k,\ell \in \ZZ$, see \cite[p. 311]{Kac:infinite:Lie:Algebras} for example. For completeness, we note that \eqref{eq4.9} follows immediately from the definitions given in \eqref{creation1} and \eqref{ann1}. On the other hand, \eqref{eq4.10} is a consequence of \eqref{eq4.9}. Indeed, first note: $$ [f_if_j^*, f_\ell f_k^*]=f_if_j^* f_\ell f_k^* - f_i f_\ell f_k^* f_j^*+f_if_\ell f_k^* f_j^* - f_\ell f_k^* f_if_j^*$$ which can be rewritten using the second and third properties of \eqref{eq4.9} as: \begin{equation}\label{fermi:calc:1} f_i(f_j^*f_\ell + f_\ell f_j^*)f_k^*-f_\ell(f_if_k^*+f_k^*f_i)f_j^*\text{.} \end{equation} Applying the first property given in \eqref{eq4.9} to \eqref{fermi:calc:1} yields the right-hand side of \eqref{eq4.10}. Note also that the operators $f_i$, for $i\in\ZZ$, map $F^{(m)}$ to $F^{(m+1)}$, the operators $f_i^*$, for $i\in\ZZ$, map $F^{(m)}$ to $F^{(m-1)}$, whereas the operators $f_i f_j^*$, for $i,j\in\ZZ$, map $F^{(m)}$ to $F^{(m)}$.
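The sign conventions in \eqref{creation1} and \eqref{ann1}, as well as the relations \eqref{eq4.9}, can be checked mechanically on examples. The following minimal computational sketch is included here purely as an illustration (it is not part of the constructions above, and the function names are ad hoc): it encodes a semi-infinite monomial $S$ by the finite data $(S_+,S_-)$ of \S\ref{4.1}, implements $\operatorname{count}(j,S)$, $f_j$ and $f_j^*$, and verifies the first relation of \eqref{eq4.9} on the monomial of \S\ref{4.4}.
\begin{verbatim}
import itertools

def count(j, sp, sm):
    # number of entries of S strictly greater than j, where S is the
    # semi-infinite monomial with S_+ = sp and S_- = sm
    c = sum(1 for s in sp if s > j)
    if j < 0:
        c += (-j) - sum(1 for s in sm if s > j)
    return c

def member(j, sp, sm):
    # is j an entry of S?
    return (j in sp) if j > 0 else (j not in sm)

def f(j, state):
    # wedging operator f_j on a basis vector; returns (sign, state) or None
    sp, sm = state
    if member(j, sp, sm):
        return None
    sign = (-1) ** count(j, sp, sm)
    return (sign, (sp | {j}, sm)) if j > 0 else (sign, (sp, sm - {j}))

def fstar(j, state):
    # contracting operator f_j^*; returns (sign, state) or None
    sp, sm = state
    if not member(j, sp, sm):
        return None
    sign = (-1) ** count(j, sp, sm)
    return (sign, (sp - {j}, sm)) if j > 0 else (sign, (sp, sm | {j}))

def compose(op1, op2, state):
    # apply op2 first, then op1, to a basis vector; return {state: coefficient}
    r = op2(state)
    if r is None:
        return {}
    c2, s2 = r
    r = op1(s2)
    if r is None:
        return {}
    c1, s1 = r
    return {s1: c1 * c2}

def add(d1, d2):
    out = dict(d1)
    for key, val in d2.items():
        out[key] = out.get(key, 0) + val
        if out[key] == 0:
            del out[key]
    return out

# the monomial S = (4,3,1,0,-2,-3,-5,-7,...) of charge 0,
# encoded by S_+ = {4,3,1} and S_- = {-1,-4,-6}
v = (frozenset({4, 3, 1}), frozenset({-1, -4, -6}))

# check  f_i f_j^* + f_j^* f_i = delta_{ij}  on this basis vector
for i, j in itertools.product(range(-6, 6), repeat=2):
    lhs = add(compose(lambda s: f(i, s), lambda s: fstar(j, s), v),
              compose(lambda s: fstar(j, s), lambda s: f(i, s), v))
    assert lhs == ({v: 1} if i == j else {})
\end{verbatim}
The other two relations in \eqref{eq4.9} can be checked with the same pattern.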
\np{}\label{4.8} The \emph{infinite wedge representation} is the Lie algebra homomorphism $$\rho :\widehat{\mathbf{gl}}(\infty) \rightarrow \End_\CC(F)$$ determined by the conditions that \begin{equation}\label{4.8'} \rho((0,\E_{ij})) = \begin{cases} f_if_j^* & \text{ if $i\not = j$ or $i=j>0$} \\ f_if_i^* - \operatorname{id}_F & \text{ if $j=i \leq 0$} \end{cases} \end{equation} and \begin{equation}\label{4.8''} \rho((a,0))=a \operatorname{id}_F, \end{equation} for $a \in \CC$, compare with \cite[p. 313]{Kac:infinite:Lie:Algebras} for instance. The fact that the above conditions \eqref{4.8'} and \eqref{4.8''} determine a representation of Lie algebras is deduced easily from property \eqref{eq4.10} above together with the definition of the $2$-cocycle $\operatorname{c}(\cdot,\cdot)$, given in \eqref{eqn2.6}, and the fact that every element of $\mathbf{gl}(\infty)$ can be written as a finite linear combination of matrices of the form $\sum_{i\in\ZZ}a_i \E_{i,i+k}\text{,}$ where $k \in \ZZ$ and $a_i \in \CC$. \np{}\label{4.9} In what follows we refer to the restriction of $\rho$ to the image of the morphism \eqref{oscillator algebra:mono} as the \emph{infinite wedge representation of the oscillator algebra} $\mathfrak{s}$. \section{Combinatorial properties of the operators $f_i f_j^*$}\label{fermi:gl} In this section we define and study certain operators on partitions. This construction will be used in our definition of the bosonic representation of the Lie algebra $\widehat{\mathbf{gl}}(\infty)$, see \S \ref{6}. Our main result is Proposition \ref{prop6.2} which describes the combinatorics encoded in the vector \begin{equation}\label{psi:p:q:S:eqn1} f_if_j^*(v_S) = (-1)^{\alpha} v_{T} \text{;} \end{equation} here $$S := (s_1,s_2,\dots) \in \mathscr{S},$$ $i,j \in \ZZ$, are such that $$\text{$j \in S$ and $i \not \in S \backslash \{j\}$,}$$ $$T := (S \backslash \{j \}) \bigcup \{i\},$$ and $$\alpha := \operatorname{count}(i, S \backslash \{j\}) - \operatorname{count}(j,S).$$ As it turns out the combinatorics encoded in \eqref{psi:p:q:S:eqn1} are related to a certain skew diagram associated to the partition determined by $S$, see Proposition \ref{prop6.1} and Proposition \ref{prop6.2}. \np{}\label{5.1} Let $m,i\in\ZZ$, let $\mathscr{P}_{m,i}$ denote the set $$\mathscr{P}_{m,i} := \{\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P} : \lambda_k \not = i -m+k-1 \text{ for all $k$} \} \text{,}$$ and let $\mathscr{P}_{m,i}^*$ denote the set $$ \mathscr{P}_{m,i}^* := \{ \lambda = (\lambda_1,\lambda_2,\dots) \in \mathscr{P} : \lambda_k = i -m+k-1 \text{ for some $k$}\}.$$ Given $\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P}$, define \begin{equation}\label{6.1} \operatorname{count}_m(i,\lambda) := \# \{k : \lambda_k > i-m+k-1 \}\text{.} \end{equation} The main idea behind \eqref{6.1} is that if $\lambda = \lambda(S)$ is the partition corresponding to a charge $m$ semi-infinite monomial $S \in \mathscr{S}_m$, then \begin{equation}\label{6.2} \operatorname{count}_m(i,\lambda) = \operatorname{count}(i,S), \end{equation} where $\operatorname{count}(i,S)$ denotes the number of elements of $S$ which are strictly greater than $i$, see \eqref{5.5}. That \eqref{6.2} holds true is easy to check using \eqref{5.5} and Proposition \ref{prop5.1} (b). \np{}\label{5.2} We now use \eqref{6.1} to define certain combinatorial operators on partitions. 
Precisely, if $\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P}_{m,i}$, then define $p_{m,i}(\lambda)$ to be the partition $\mu=(\mu_1,\mu_2,\dots)$ where: \begin{equation}\label{6.2'} \mu_j = \begin{cases} \lambda_j - 1 & \text{ for $j \leq \operatorname{count}_m(i,\lambda)$} \\ i-m+\operatorname{count}_m(i,\lambda) - 1 & \text{ for $j=\operatorname{count}_m(i,\lambda)+1$} \\ \lambda_{j-1} & \text{ for $j >\operatorname{count}_m(i,\lambda)+1$.} \end{cases} \end{equation} On the other hand, if $\lambda=(\lambda_1,\lambda_2,\dots) \in \mathscr{P}_{m,i}^*$, then define $p_{m,i}^*(\lambda)$ to be the partition $\mu=(\mu_1,\mu_2,\dots)$ where: \begin{equation}\label{6.2''}\mu_j = \begin{cases} \lambda_j + 1 & \text{for $j \leq \operatorname{count}_m(i,\lambda)$} \\ \lambda_{j+1} & \text{for $j > \operatorname{count}_m(i,\lambda)$. }\end{cases} \end{equation} \np{} The following proposition is used in the proof of Proposition \ref{prop6.2} which relates the combinatorial operators defined in \S \ref{5.2} to the operators $f_i f_j^*$ described in \S \ref{fermi:creation:ann:1} and \eqref{psi:p:q:S:eqn1}. \begin{proposition}\label{prop6.1} Fix $m,i,j\in \ZZ$, $\lambda = (\lambda_1,\lambda_2,\dots) \in \mathscr{P}_{m,j}^*$, let $\mu := p_{m,j}^*(\lambda)$, assume that $\mu \in \mathscr{P}_{m-1,i}$ and let $\nu := p_{m-1,i}(\mu) = p_{m-1,i}p_{m,j}^*(\lambda)$. The following assertions hold true: \begin{enumerate} \item{if $i<j$, then $\nu \subseteq \lambda$, the skew diagram $\lambda \backslash \nu$ is a border strip, $\# (\lambda \backslash \nu) = j - i$, and $\operatorname{height}(\lambda \backslash \nu) = \operatorname{count}_{m-1}(i,\mu) - \operatorname{count}_m(j,\lambda)$;} \item{if $i > j$, then $\lambda \subseteq \nu$, the skew diagram $\nu \backslash \lambda$ is a border strip, $\#(\nu \backslash \lambda) = i-j$, and $\operatorname{height}(\nu \backslash \lambda) = \operatorname{count}_m(j,\lambda) - \operatorname{count}_{m-1}(i,\mu)$.} \item{if $i = j$, then $\nu = \lambda$ and the skew diagrams $\nu \backslash \lambda $ and $\lambda \backslash \nu$ are empty.} \end{enumerate} \end{proposition} \begin{proof} By assumption we have \begin{equation} \mu := p_{m,j}^*(\lambda) \end{equation} and \begin{equation} \nu := p_{m-1,i}(\mu) = p_{m-1,i}p_{m,j}^*(\lambda)= (\nu_1,\nu_2,\dots); \end{equation} set \begin{equation}\label{6.4} \alpha := \operatorname{count}_m(j,\lambda) \end{equation} and \begin{equation}\label{6.4'} \beta := \operatorname{count}_{m-1}(i,\mu). \end{equation} For (a), we have $i < j$. As a consequence, using the definitions \eqref{6.2'} and \eqref{6.2''}, we deduce that the partition $\nu = (\nu_1,\nu_2,\dots)$ has the form: \begin{equation}\label{6.5} \nu_k = \begin{cases} \lambda_k & \text{ for $1 \leq k \leq \alpha$ } \\ \lambda_{k+1} - 1 & \text{ for $\alpha + 1 \leq k \leq \beta$} \\ i-m+\beta & \text{ for $k = \beta + 1$} \\ \lambda_k & \text{ for $k \geq \beta + 2$.} \end{cases} \end{equation} Considering \eqref{6.5}, it is clear that $\nu \subseteq \lambda$, that $\theta := \lambda \backslash \nu$ is a border strip, and that the number of rows of $\theta$ equals \begin{equation}\label{6.5'}\#[\alpha+1,\beta+1] = \beta - \alpha + 1; \end{equation} it follows from \eqref{6.5'} that \begin{equation}\label{6.5''} \operatorname{height}(\theta) = \beta - \alpha. \end{equation} Next if $\theta_k$ denotes the number of elements in the $k$th row of $\theta$, then $\theta_k = 0$ for $k \leq \alpha$ and $k \geq \beta+2$.
We also have: \begin{equation}\label{6.6} \theta_k = \lambda_k - \lambda_{k+1} + 1, \end{equation} for $\alpha + 1 \leq k \leq \beta$, \begin{equation}\label{6.7} \theta_{\beta + 1} = \lambda_{\beta+1} - i -\beta +m\text{,} \end{equation} and \begin{equation}\label{6.8} \lambda_{\alpha+1} = j + \alpha - m. \end{equation} Thus, using \eqref{6.6}, \eqref{6.7}, and \eqref{6.8}, we have: $$\sum_{k=\alpha+1}^{\beta+1} \theta_k = j + \alpha -m - i - \beta + m + \#[\alpha+1,\beta] = j - i, $$ whence $$\# \theta = j-i. $$ For (b), we have $i > j$. As a consequence, using the definitions \eqref{6.2'} and \eqref{6.2''}, we deduce that the partition $\nu = (\nu_1,\nu_2,\dots)$ is defined by: \begin{equation}\label{6.11} \nu_k = \begin{cases} \lambda_k & \text{ for $1 \leq k \leq \beta$} \\ i+ \beta - m & \text{ for $k = \beta + 1 $} \\ \lambda_{k-1}+1 & \text{ for $\beta + 1 < k \leq \alpha+1$} \\ \lambda_k & \text{ for $k \geq \alpha+2$.} \end{cases} \end{equation} Considering \eqref{6.11}, it is clear that $\lambda \subseteq \nu$, that $\theta := \nu \backslash \lambda$ is a border strip, and that the number of rows of $\theta$ equals \begin{equation}\label{6.12} \#[\beta+1,\alpha+1]=\alpha-\beta+1. \end{equation} Thus \begin{equation}\label{6.13} \operatorname{height}(\theta) = \alpha - \beta. \end{equation} Next let $\theta_k$ denote the number of elements in the $k$th row of $\theta$. Then $\theta_k = 0$ for $k \leq \beta$ and $k > \alpha + 1$. We also have: \begin{equation}\label{6.14} \theta_{\beta+1} = i + \beta -m - \lambda_{\beta+1}, \end{equation} \begin{equation}\label{6.14'} \theta_k = \lambda_{k-1}+1-\lambda_k \text{,} \end{equation} for $\beta+1 < k \leq \alpha+1$, and \begin{equation}\label{6.14''} \lambda_{\alpha+1} = j + \alpha - m. \end{equation} Using \eqref{6.14}, \eqref{6.14'}, and \eqref{6.14''}, it follows that $$\sum_{k = \beta + 1}^{\alpha+1} \theta_k = i + \beta - m - j - \alpha + m + \#[\beta+2,\alpha+1] = i - j $$ so that $$\# \theta = i - j. $$ Assertion (c) is trivial. \end{proof} \subsection{}\label{ex6.2}{\bf Example.} Recall, see \S \ref{4.4}, that $$S = (4,3,1,0,-2,-3,-5,-7,-8,\dots)$$ is the element of $\mathscr{S}_0$ corresponding to the partition $$\lambda = (4,4,3,3,2,2,1) \in \mathscr{P}_{19},$$ whose Young diagram is pictured in \S \ref{4.4}. To compute $f_{-1}f_3^*(v_S)$ note that $$T = (S\backslash \{3\}) \bigcup \{-1\} = (4,1,0,-1,-2,-3,-5,-7,-8,\dots)\text{,}$$ $\operatorname{count}(3,S)=1$ and $\operatorname{count}(-1,S\backslash \{3\})=3$. We conclude \begin{equation}\label{skew:diagram:eg1} f_{-1}f_3^*(v_S) = (-1)^{3-1}v_{T} = v_{T}\text{.} \end{equation} To see the combinatorics encoded in \eqref{skew:diagram:eg1} first note that if $\nu := \lambda(T)$, the partition corresponding to $T$, then $\nu = (4,2,2,2,2,2,1)$ which has Young diagram $$ \begin{Young} &&&\cr \\ &\cr \\ &\cr \\ &\cr \\ &\cr \\ &\cr \\ \cr \end{Young}$$ and $\nu \subseteq \lambda$. The skew diagram $\theta := \lambda \backslash \nu$ is the set $\{ \{2,3\}, \{2,4\}, \{3,3\}, \{4,3\} \} $ which can be represented pictorially as: $$\begin{Young} &\cr \\ \cr \\ \cr \end{Young} $$ Note that the skew diagram $\theta$ is a border strip and $\operatorname{height}(\theta) =2$. If we now identify $S$ with $\lambda$ and $T$ with $\nu$, then \eqref{skew:diagram:eg1} takes the form \begin{equation}\label{skew:diagram:eg1'} f_{-1}f_{3}^*(v_{\lambda}) = (-1)^{\operatorname{height}(\theta)} v_{\nu} \text{.} \end{equation} Suppose now that we wish to compute $f_{-1}f_{-3}^*(v_{S})$.
In this case, $\operatorname{count}(-3,S)=5$, $\operatorname{count}(-1,S\backslash \{-3\})=4$ and hence \begin{equation}\label{skew:diagram:eg2} f_{-1}f_{-3}^*(v_S)=-v_{T}\text{,} \end{equation} where $T = (4,3,1,0,-1,-2,-5,-7,-8,\dots)$. The combinatorics encoded in \eqref{skew:diagram:eg2} is similar to that encoded in \eqref{skew:diagram:eg1}, but there is one difference, which amounts to the fact that here $i=-1$ is greater than $j=-3$, whereas before $i=-1$ was less than $j=3$. In more detail, if $\nu := \lambda(T)$, then $\nu = (4,4,3,3,3,3,1)$, $\lambda \subseteq \nu$, and the skew diagram $\theta := \nu \backslash \lambda$ is $\{ \{5,3\}, \{6,3\} \}$ which is a border strip. The border strip $\theta$ can be represented pictorially as: $$ \begin{Young} \cr \\ \cr \end{Young} $$ and has height equal to one. If we identify $S$ with $\lambda$ and $T$ with $\nu$, then \eqref{skew:diagram:eg2} takes the form $$f_{-1}f_{-3}^*(v_\lambda) = (-1)^{\operatorname{height}(\theta)} v_{\nu}\text{.} $$ \np{} Example \ref{ex6.2} generalizes: \begin{proposition}\label{prop6.2} Suppose that $S = (s_1,s_2,\dots) \in \mathscr{S}$ and $i,j \in \ZZ$. Then $f_if_j^*(v_S) \not = 0$ if and only if $j \in S$ and $i \not \in S \backslash \{j\}$. In addition assume that $f_i f_j^*(v_S) \not = 0$, let $T := (S \backslash \{j\})\bigcup \{i\}$, let $\lambda$ and $\nu$ be the partitions determined by $S$ and $T$ respectively, and denote $v_S$ by $v_\lambda$ and $v_{T}$ by $v_\nu$. The following assertions hold true: \begin{enumerate} \item{If $i < j$, then $\nu \subseteq \lambda$, the skew diagram $\lambda \backslash \nu$ is a border strip of length $j-i$ and $$f_i f_j^*(v_\lambda) = (-1)^{\operatorname{height}(\lambda \backslash \nu)} v_{\nu} \text{;} $$ } \item{If $j<i$, then $\lambda \subseteq \nu$, the skew diagram $\nu \backslash \lambda$ is a border strip of length $i-j$ and $$f_if_j^*(v_\lambda) = (-1)^{\operatorname{height}(\nu \backslash \lambda)}v_\nu \text{.} $$ } \end{enumerate} \end{proposition} \begin{proof} The proposition is a consequence of Proposition \ref{prop5.1}, the discussion surrounding \eqref{psi:p:q:S:eqn1} and Proposition \ref{prop6.1}. In particular, using Proposition \ref{prop5.1} (b) in conjunction with \eqref{6.5} and \eqref{6.11}, depending on whether $i<j$ or $j<i$, we compute that $$\nu = p_{m-1,i}p_{m,j}^*(\lambda).$$ The conclusion of Proposition \ref{prop6.2} then follows from Proposition \ref{prop6.1}, \eqref{6.1} and \eqref{psi:p:q:S:eqn1}. \end{proof} \section{The bosonic representation of $\widehat{\mathbf{gl}}(\infty)$}\label{6} We now provide an application of our combinatorial construction given in \S \ref{fermi:gl}. Indeed, we use this construction to prove the boson-fermion correspondence which we state as Theorem \ref{bosonic:Extension:theorem}. \np{}\label{Sec6.1} To begin with, let $A:=\CC[z,z^{-1}]$ and $B=A\otimes_\CC \Lambda :=\CC[z,z^{-1},h_1,h_2,\dots]$. The \emph{bosonic representation of the oscillator algebra} is the Lie algebra homomorphism \begin{equation}\label{eqn6.1} \xi_0: \mathfrak{s} \rightarrow \End_\CC(B) \end{equation} determined by: $$ \xi_0((0,t^k))=p_k^\perp = k \frac{\partial}{\partial p_k} \text{, for $k > 0$;}$$ $$\xi_0((0,t^k))=p_{-k} \text{, for $k<0$;} $$ $$ \xi_0((0,1))=z\frac{\partial}{\partial z} \text{;}$$ and $$ \xi_0((1,0))=1\text{,}$$ compare with \cite[p. 314]{Kac:infinite:Lie:Algebras} or \cite[Lecture 5, p. 46]{kac-raina-rozhkovskaya}.
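As an aside, the displayed formulas for $\xi_0$ can be tested directly on polynomials in finitely many of the variables $p_k$: for $k,l>0$ one has $[\,k\,\partial/\partial p_k,\; p_l\,\cdot\,]=k\,\delta_{kl}$, which recovers the oscillator-algebra relations up to the normalization of the $2$-cocycle fixed earlier. The following sketch is added here only as an illustration; it assumes the \texttt{sympy} library and truncates to the variables $p_1,\dots,p_6$.
\begin{verbatim}
import itertools
import sympy

N = 6
p = sympy.symbols('p1:%d' % (N + 1))   # p1, ..., p6

def xi(k):
    # action of (0, t^k) on polynomials in p1..pN, following the formulas
    # above: k * d/dp_k for k > 0, multiplication by p_{-k} for k < 0
    if k > 0:
        return lambda expr: k * sympy.diff(expr, p[k - 1])
    return lambda expr: p[-k - 1] * expr

def commutator(A, B, expr):
    return sympy.expand(A(B(expr)) - B(A(expr)))

test = (p[0] + 2 * p[1] ** 2) * p[2] + p[3] ** 3

for k, l in itertools.product(range(1, N + 1), repeat=2):
    lhs = commutator(xi(k), xi(-l), test)
    rhs = (k if k == l else 0) * test
    assert sympy.simplify(lhs - rhs) == 0
\end{verbatim}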
\np{}\label{Sec6.2} The first step to proving Theorem \ref{bosonic:Extension:theorem} is to define operators $b_i \in \operatorname{End}_{\CC}(B)$ by the rule: \begin{equation}\label{eqn6.2} b_i(z^m s_\lambda) = \begin{cases} (-1)^{\operatorname{count}_m(i,\lambda)} z^{m+1}s_{p_{m,i}(\lambda)} & \text{ for $\lambda \in \mathscr{P}_{m,i}$} \\ 0 & \text{ for $\lambda \not \in \mathscr{P}_{m,i}$.} \end{cases} \end{equation} Similarly define operators $b_i^* \in \operatorname{End}_{\CC}(B)$ by the rule \begin{equation}\label{eqn6.3'} b_i^*(z^m s_\lambda) = \begin{cases} (-1)^{\operatorname{count}_m(i,\lambda)} z^{m-1}s_{p_{m,i}^*(\lambda)} & \text{ for $\lambda \in \mathscr{P}^*_{m,i}$} \\ 0 & \text{ for $\lambda \not \in \mathscr{P}^*_{m,i}$.} \end{cases} \end{equation} As in \eqref{eq4.9} and \eqref{eq4.10}, we have the relations \begin{equation}\label{eqn6.3} \text{ $b_i b_j^* + b_j^* b_i = \delta_{ij}$, $b_i b_j + b_j b_i = 0$, $b^*_i b_j^* + b_j^* b_i^* = 0$,} \end{equation} and \begin{equation}\label{eqn6.4} [b_i b^*_j, b_{\ell} b^*_k] = \delta_{j \ell} b_i b_k^* - \delta_{i k} b_\ell b^*_j, \end{equation} for all $i,j,k,\ell \in \ZZ$. Indeed, as in \eqref{eq4.9}, \eqref{eqn6.3} follows immediately from the definitions while \eqref{eqn6.4} is deduced from \eqref{eqn6.3}. \np{}\label{Sec6.3} {\bf Example.} As in \S \ref{4.4}, if $\lambda=(4,4,3,3,2,2,1)$, then $\operatorname{count}_0(3,\lambda)=1$ and $$b^*_3(s_\lambda) = -z^{-1}s_\mu\text{,}$$ where $\mu$ is the partition $\mu = (5,3,3,2,2,1)$. Also, $\operatorname{count}_{-1}(-1,\mu) = 3$, $$b_{-1}(z^{-1}s_{\mu})=-s_{\nu}\text{,}$$ where $\nu = (4,2,2,2,2,2,1)$, and $$b_{-1}b_3^*(s_\lambda) = s_\nu\text{.}$$ \np{}\label{Sec6.4} The key point in the proof of Theorem \ref{bosonic:Extension:theorem} is the following observation which is a consequence of Proposition \ref{prop6.2}. The point is that if $S \in \mathscr{S}$, $k$ is a nonzero integer and $\mathfrak{s}_k := (0,t^k) \in \mathfrak{s}$, then \begin{equation}\label{7.1} \delta_0(\mathfrak{s}_k)(v_S) = \sum_{\mathrm{finite}} f_\ell f^*_{\ell + k}(v_S) \end{equation} and we now give a combinatorial description of the finitely many nonzero summands: \begin{proposition}\label{prop7.2} Suppose that $S \in \mathscr{S}_m$. The following assertions hold true: \begin{enumerate} \item{ If $k>0$, then $$ \mathfrak{s}_k(v_S) = \sum_{\mathrm{finite}} (-1)^{\operatorname{height}(\lambda(S) \backslash \lambda(T))}v_{T} \text{,}$$ where the finite sum is taken over all $T \in \mathscr{S}_m$ with the property that $\lambda(T)\subseteq \lambda(S)$ and $\lambda(S) \backslash \lambda(T)$ is a border strip of length $k$. } \item{If $k<0$, then $$\mathfrak{s}_k(v_S) = \sum_{\mathrm{finite}} (-1)^{\operatorname{height}(\lambda(T) \backslash \lambda(S))}v_{T} $$ where the finite sum is taken over all $T \in \mathscr{S}_m$ with the property that $\lambda(S) \subseteq \lambda(T)$ and $\lambda(T) \backslash \lambda(S)$ is a border strip of length $|k|$.} \end{enumerate} \end{proposition} \begin{proof} To begin with, note that for both (a) and (b), Proposition \ref{prop6.2} implies that each summand of \eqref{7.1} contributes a summand of the desired form. To establish Proposition \ref{prop7.2} it thus remains to show that, conversely, each border strip of the shape asserted in the proposition appears as a summand of \eqref{7.1}. To this end, consider the case that $k>0$.
Let $\lambda$ be the partition corresponding to $S$, and suppose that $\nu \subseteq \lambda$ is such that $\theta:= \lambda \backslash \nu$ is a border strip of length $k$. Let $\theta_{n'}$ denote the number of elements in the $n'$th row of $\theta$. Let $n:= \min \{ j : \nu_j \not = \lambda_j \}$. Then $\theta_{n'} = 0$ for $n'<n$ and $\theta_n \not = 0$; set \begin{equation}\label{prop7.2.1} \ell := \theta_n - k - n + m + \lambda_{n+1}. \end{equation} We then compute, using the definitions \eqref{6.2'} and \eqref{6.2''} together with the fact that $\theta$ is a border strip, that \begin{equation}\label{prop7.2.2} \nu = p_{m-1,\ell}p^*_{m,\ell+k}(\lambda); \end{equation} compare with \eqref{6.6} and \eqref{6.7}. Thus if $T$ is the element of $\mathscr{S}_m$ corresponding to $\nu$, then \begin{equation}\label{prop7.2.3} (-1)^{\operatorname{height}(\lambda \backslash \nu)} v_T = f_\ell f^*_{\ell+k}(v_S), \end{equation} by Proposition \ref{prop6.2} (a). Next suppose that $k<0$. Again let $\lambda$ be the partition corresponding to $S$, suppose that $\nu \supseteq \lambda$ is such that $\theta := \nu \backslash \lambda$ is a border strip of length $|k|$, let $T$ be the element of $\mathscr{S}_m$ corresponding to $\nu$ and let $n := \min \{ j : \nu_j \not = \lambda_j \}$. Let $\theta_{n'}$ denote the number of elements in the $n'$th row of $\theta$ and set \begin{equation}\label{prop7.2.4} \ell := \theta_n - (n-1)+\lambda_n + m. \end{equation} We then compute, using the definitions \eqref{6.2'} and \eqref{6.2''} together with the fact that $\theta$ is a border strip, that: \begin{equation}\label{prop7.2.5} \nu = p_{m-1,\ell} p_{m,\ell+k}^*(\lambda); \end{equation} compare with \eqref{6.14} and \eqref{6.14'}. Thus if $T$ is the element of $\mathscr{S}_m$ corresponding to $\nu$, then \begin{equation}\label{prop7.2.6} (-1)^{\operatorname{height}(\nu \backslash \lambda)} v_T = f_\ell f_{\ell+k}^*(v_S), \end{equation} by Proposition \ref{prop6.2} (b). \end{proof} \subsection{}\label{Sec6.5} Using the theory we have developed thus far we can prove the boson-fermion correspondence. \begin{theorem}\label{bosonic:Extension:theorem} The bosonic representation $$ \xi_0 : \mathfrak{s} \rightarrow \End_\CC(B),$$ namely \eqref{eqn6.1}, of the oscillator algebra extends to a representation $$\xi:\widehat{\mathbf{gl}}(\infty) \rightarrow \End_\CC(B)$$ of the Lie algebra $\widehat{\mathbf{gl}}(\infty)$. More precisely, the Lie algebra $\widehat{\mathbf{gl}}(\infty)$ admits a representation $ \xi:\widehat{\mathbf{gl}}(\infty) \rightarrow \End_\CC(B)$ with the property that the diagram $$\xymatrix{ \mathfrak{s} \ar[dr]^-{\xi_0} \ar[d]^-{\delta_0} & \\ \widehat{\mathbf{gl}}(\infty) \ar[r]^-{\xi}& \End_\CC(B) }$$ commutes. In addition, the $\CC$-linear isomorphism $$\sigma : F\rightarrow B, \qquad v_S \mapsto z^m s_{\lambda(S)}, $$ for $m = \operatorname{charge}(S)$ and $\lambda(S)$ the partition determined by the semi-infinite monomial $S$, is an isomorphism of $\widehat{\mathbf{gl}}(\infty)$-modules. \end{theorem} \begin{proof} Consider the representation $\xi :\widehat{\mathbf{gl}}(\infty) \rightarrow \End_\CC(B)$ determined by the conditions that: $$\xi((0,\E_{ij})) = \begin{cases} b_ib_j^* & \text{ if $i\not = j$ or $i=j>0$} \\ b_ib_i^* - \operatorname{id}_B & \text{ if $j=i \leq 0$} \end{cases} $$ and $$\xi((a,0))=a \operatorname{id}_B,$$ for $a \in \CC$. The fact that $\xi$ is a representation follows from the relations given in \eqref{eqn6.3}.
The fact that $\xi$ extends the representation $\xi_0$ follows from Proposition \ref{prop7.2} and the Murnaghan-Nakayama rule \eqref{Murnaghan:Nakayama} and \eqref{Murnaghan:Nakayama:adjoint}. For the second assertion, fix $i,j\in \ZZ$ and assume that $f_if_j^*(v_S) \not = 0$. We then have that $$f_if_j^*(v_S)=(-1)^\alpha v_T,$$ where $$T := (S \backslash \{j\})\cup \{i\},$$ and $$\alpha := \operatorname{count}(i,S\backslash \{j\}) - \operatorname{count}(j,S);$$ let $\lambda = \lambda(S)$ be the partition corresponding to $S$ and let $\nu = \lambda(T)$ be the partition corresponding to $T$. In this setting, the operator $p_{m,j}^*$ is defined on the partition $\lambda$ and the operator $p_{m-1,i}$ is defined on the partition $p_{m,j}^*(\lambda)$. In addition $$\nu = p_{m-1,i}p_{m,j}^*(\lambda).$$ On the other hand we have that $$\sigma(v_S)=z^m s_{\lambda}.$$ Considering the definitions of the operators $b_i$ and $b_j^*$, we then deduce that $$b_i b_j^*(z^m s_\lambda)=(-1)^\alpha z^m s_\nu$$ which is what we wanted to show. \end{proof}
\section{Introduction} Wave turbulence and solitonic turbulence are two major facets of the propagation of random and weakly nonlinear waves. The dynamics of incoherent dispersive waves has been studied in the framework of the statistical theory of Weak Turbulence since the pioneering work of Zakharov, Benney and Newell in the 1960s (see \cite{Nazarenko,newell_wave_2011} for recent reviews). Based on the hypothesis of weak nonlinearity, the theory predicts an energy transfer through resonant interactions among waves. For forced turbulence it leads most often to a direct energy cascade from the forced large scales to the small scales at which dissipation acts. The Weak Turbulence Theory (WTT) provides analytical predictions of the statistical properties of turbulence such as the stationary wave spectrum. This theory has been applied to many physical systems such as plasmas \cite{sagdeev19791976}, optics \cite{picozzi2014optical}, thin elastic plates~\cite{During} or geophysical flows~\cite{Hasselmann,Galtier}. In particular, it was applied to waves propagating at the surface of deep water, either in the gravity regime~\cite{Hasselmann} or the capillary regime~\cite{Filonenko}. In the last decades, wave turbulence at the surface of a fluid has been the object of several detailed experimental investigations in either regime or at the gravity-capillary crossover~\cite{Lukaschuk,Cobelli,Falconrev,aubourg_nonlocal_2015}, which triggered a new dynamics in the field. When the waves are weakly dispersive, weakly nonlinear waves may also develop into solitons, i.e.\ localized structures that propagate at a constant velocity while keeping their shape unchanged~\cite{Dauxois}. They were discovered by Russell~\cite{russell1845report} and modeled by the Korteweg-de Vries (KdV) equation~\cite{korteweg1895xli}. The interaction between solitons is elastic and appears as a phase shift between pre- and post-collision solitons. When a large number of solitons evolves in a medium, they develop a statistical state called a soliton gas. While soliton gases have long been observed experimentally in optics \cite{schwache1997properties,mitschke1999soliton}, their observation at the surface of fluids has been reported only recently~\cite{costa_soliton_2014,perrard2015capillary}. Soliton gases develop a turbulent state quite different from weak turbulence, due to the integrability of the KdV equation~\cite{Zakharov_TIS}, for which an infinite number of invariants exist in addition to the energy. Here we report the observation of a transition between weak turbulence and a solitonic regime. We focus on the case of weakly nonlinear waves propagating at the surface of a water layer at scales typical of the gravity-capillary crossover (i.e.\ wavelengths at the centimeter scale). In this configuration weak wave turbulence has been unambiguously reported for a deep enough layer and a weak enough forcing using space- and time-resolved measurements~\cite{aubourg_nonlocal_2015,aubourg2016investigation,Cobelli}. Such measurements are required to accurately characterize the wave propagation and identify the nonlinear regimes. At stronger forcing another wave turbulence regime has been identified that is more strongly nonlinear and involves coherent structures~\cite{CobelliPrz,Berhanu}. Shallow water weak wave turbulence was also observed numerically by di Leoni {\it et al.}~\cite{di2014wave} for gravity waves. Furthermore, gravity-capillary solitons are known to propagate at finite depth~\cite{falcon_observation_2003}.
We report an investigation of the influence of the water depth and of the intensity of the forcing on the statistical properties of wave turbulence. Changing the water depth alters the dispersivity of the waves. We observe and characterize the transition between a weak turbulence regime (low forcing and/or large depth) and a solitonic regime (larger forcing and/or small depth) that may evolve into a soliton gas. \section{Experimental Set-up} The experimental set-up is similar to that of Aubourg $\&$ Mordant~\cite{aubourg_nonlocal_2015}. It consists of a rectangular $57\times37$~cm$^2$ Plexiglas tank filled with water. The depth of water at rest is changed in the range $[0.6,6]$~cm, going from finite-depth regimes to deep-water regimes for the centimetric wavelengths considered. Great care is taken to prevent surface contamination in order to avoid dissipation by conversion to Marangoni waves~\cite{Przadka}. The waves are generated by continuously shaking the container horizontally (along its longest axis) with an oscillating table driven by a sinusoidal voltage whose frequency is randomly modulated within an interval of $\pm$0.5~Hz around a central frequency of 2~Hz. The deformation of the free surface is reconstructed using Fourier Transform Profilometry \cite{cobelli_global_2009,maurel_experimental_2009} (see reference \cite{aubourg2016investigation} for more details on the set-up). The water is mixed with titanium dioxide particles at a volume fraction of 1$\%$ to improve its optical diffusivity, making it possible to project a grayscale sinusoidal pattern directly onto the surface. These particles are chemically neutral and do not alter the surface tension of water, as stated by Przadka {\it et al.}~\cite{Przadka}. As the waves propagate, the pattern seen by a high speed camera is deformed. The deformation of the pattern can be inverted to provide the elevation field of the waves. In this way we obtain a fully space- and time-resolved measurement of the wave field. The images are recorded at 250 frames/s with a resolution of $1024\times1024$~pixels$^{2}$ covering a 20~cm$^{2}$ area at the center of the tank. \begin{figure}[!htb] \centering (a) \includegraphics[clip,width=8cm]{random_1cm_2Hz.eps} \hfill (b) \includegraphics[clip,width=8cm]{soliton_1cm_2Hz.eps} \caption{Elevation field of the waves for a height of water at rest equal to 1~cm. The excitation central frequency is 2~Hz. (a) amplitude of forcing equal to 0.9~mm. The {\it rms} slope is 1.6\% and this regime corresponds to wave turbulence. (b) amplitude of forcing equal to 2.5~mm. The {\it rms} slope is 4.1\% and a soliton can clearly be identified. In both cases the magnitude of the waves has been strongly magnified vertically.} \label{surface} \end{figure} We perform experiments with various magnitudes of the forcing and several values of the water depth at rest. Figure \ref{surface} shows the surface reconstruction in the case of a weak and a strong forcing amplitude. At weak forcing, fig.~\ref{surface}(a) shows a random distribution of the waves. The deformation seems to involve waves coming from all directions. At stronger forcing (fig.~\ref{surface}(b)) we observe a localized coherent structure. These soliton-like coherent structures propagate only along the direction of oscillation of the table ($x$ axis). Note that there is about a factor of ten between the amplitudes of the reconstructed waves in the two cases.
In this regime the system contains actually several solitons which propagate in both directions along the $x$-axis and collide as represented in a time space representation in fig.~\ref{h_t_x}. \begin{figure}[!htb] \centering \includegraphics[width=12cm]{h_t_x.eps} \hfill \caption{Space-time plot of the solitonic regime of fig.~\ref{surface}(b). The excitation central frequency is 2~Hz and the amplitude of forcing equal to 2.5~mm. The colormap is the wave height in meters.} \label{h_t_x} \end{figure} \section{Spatio-temporal spectral Analysis} \begin{figure}[!htb] (a) \includegraphics[clip,width=8cm]{turb_faible_kx_1cm.eps} \hfill (b) \includegraphics[clip,width=8cm]{soliton_kx_1cm_2Hz_faible.eps} \caption{Space-time Fourier spectrum $E^v(\mathbf k,\omega)=\langle |v(\mathbf k,\omega)|^2\rangle$ of the velocity field $v=\frac{\partial \eta}{\partial t}$ of the waves. The height of water at rest is 1~cm. The forcing frequency is centered around 2~Hz. (a) $E^v(k_x,k_y=0,\omega)$ for an amplitude of forcing equal to 0.9 mm. (b) same quantity for an amplitude of forcing equal to 2.5 mm. In both (a), (b), the solid black line is the theoretical finite depth linear dispersion relation for gravity-capillary waves in pure water with surface tension $\gamma=72$~mN/m. In (b) the dashed red line is a straight line of slope $1/c_{s}$ with $c_{s}=\sqrt{gh}$ the celerity of long waves. In both subfigures the color is the log of $E^v (\mathbf k,\omega)$. } \label{spectra} \end{figure} \begin{figure}[!htb] \includegraphics[clip,width=16cm]{isotropy_10Hz_15Hz.eps} \caption{Cut of the energy spectrum $E^v(k_{x},k_{y},\omega)$ at a given value of the frequency $F=\omega/2\pi$ and shown as a function of $(k_{x},k_{y})$. The solid black line corresponds to the finite depth water gravity-capillary dispersion relation~(\ref{gc_disp}). The top line corresponds to $F=10$~Hz and the bottom line corresponds to $F=15$~Hz. The left column comes from a weakly non-linear regime in deep water $h=6$~cm where the system is quite isotropic, the central column comes from the WTT experiment in fig.~\ref{spectra}(a) and the right column corresponds to the soliton experiment of fig.~\ref{spectra}(b).} \label{isotropy} \end{figure} In order to identify the spectral content of the wave field, we compute the full frequency-wavenumber Fourier spectrum. For a better visualization of the spectra, we actually compute the spectrum of $\frac{\partial \eta}{\partial t}$, which can be interpreted as the vertical velocity due to the weak magnitude of the nonlinearity ($\eta(x,y,t)$ is the instantaneous height of the free surface). The spectrum is noted $E^v(\mathbf k,\omega)$ and is shown in fig.~\ref{spectra} for the two experiments of fig.~\ref{surface}. At weak forcing (fig.~\ref{spectra}(a)), the energy is concentrated on the dispersion relation of gravity-capillary waves at finite depth: \begin{equation} \omega = \sqrt{\left(gk + \frac{\gamma}{\rho}k^3\right)\tanh(kh)} \label{gc_disp} \end{equation} where $\gamma= 72$~mN/m is the surface tension of pure water, $\rho$ is the water density = $10^3$~kg/m$^{3}$, and $g=9.81$~m/s$^2$ is the acceleration of gravity. At large wavelength $kh\ll1$, $\omega\approx\sqrt{gh}k$ and the waves are weakly dispersive. The concentration of energy on this linear dispersion relation is the clear indication of the presence of weak turbulence as reported in~\cite{aubourg2016investigation,aubourg_nonlocal_2015}. 
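For reference, the quantities entering this analysis are straightforward to evaluate numerically. The short sketch below is included only as an illustration (it is not part of the measurement or processing chain described above); it evaluates the finite-depth gravity-capillary dispersion relation~(\ref{gc_disp}), the long-wave celerity $c_{s}=\sqrt{gh}$ of the solitonic line discussed below, and the depth at which the Bond number $\gamma/\rho g h^{2}$ reaches $1/3$, using the fluid parameters quoted above.
\begin{verbatim}
import numpy as np

gamma = 0.072   # surface tension of pure water (N/m)
rho   = 1.0e3   # water density (kg/m^3)
g     = 9.81    # acceleration of gravity (m/s^2)

def omega(k, h):
    # finite-depth gravity-capillary dispersion relation of the text
    return np.sqrt((g * k + gamma * k**3 / rho) * np.tanh(k * h))

def soliton_line(k, h):
    # straight solitonic line omega = c_s * k, with c_s = sqrt(g h)
    return np.sqrt(g * h) * k

h = 0.01                              # water depth at rest (m), 1 cm case
k = np.linspace(1.0, 4000.0, 4000)    # wavenumbers (rad/m)

f_lin = omega(k, h) / (2 * np.pi)         # linear branch (Hz)
f_sol = soliton_line(k, h) / (2 * np.pi)  # solitonic line (Hz)

print('c_s =', np.sqrt(g * h), 'm/s')
print('Bond number =', gamma / (rho * g * h**2))
# depth at which the Bond number equals 1/3
print('h(Bo=1/3) =', np.sqrt(3 * gamma / (rho * g)), 'm')
\end{verbatim}
Comparing \texttt{f\_lin} and \texttt{f\_sol} shows where the solitonic line lies relative to the linear branch, which is precisely the comparison made in figs.~\ref{spectra} and \ref{soliton_6mm}.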
At stronger forcing (fig.~\ref{spectra}(b)), when localized structures are present, the energy is concentrated around a straight line of slope $1/c_s$, where $c_s=\sqrt{gh}$ is the velocity expected for solitons. A similar spectral signature has also been reported for solitons in nonlinear optics \cite{laurie2012one}. In our case the energy is concentrated mostly at large wavelengths ($\lambda\gtrsim 2$~cm, corresponding to the size of the core of the soliton $\approx5$~cm) but it extends down to millimeter wavelengths due to the capillary precursors (reported for gravity-capillary solitons by Falcon {\it et al.}~\cite{falcon_observation_2003}) that are expected when the Bond number $Bo = \frac{\gamma}{\rho gh^2}$ lies between 0 and 1/3 (which is the case for all our experiments)~\cite{grimshaw1993solitary}. At low frequency the spectrum is tangent to the shallow water non-dispersive part of the linear dispersion relation but departs clearly from it at smaller wavelengths. Figure~\ref{isotropy} displays cuts of the spectrum $E^v(\mathbf k,\omega)$ at constant frequency. For deep water ($h=6$~cm, left column), the spectrum is quite isotropic although the forcing is acting along the $x$ axis. This case is very similar to the one previously reported by Aubourg \& Mordant~\cite{aubourg2016investigation,aubourg_nonlocal_2015}. It shows a strong angular redistribution of energy. At lower depth and weak forcing ($h=1$~cm, weak turbulence regime, central column, shown in fig.~\ref{surface}(a) \& \ref{spectra}(a)), the spectra are more anisotropic, with energy lying mostly in the $x$ direction and a much weaker angular spreading. At the same depth but stronger forcing (soliton regime, right column, shown in fig.~\ref{surface}(b) \& \ref{spectra}(b)), energy is only visible on the $x$ axis, along the direction of propagation of the solitons. Furthermore the peak of energy clearly lies off the linear dispersion relation, as already observed in fig.~\ref{spectra}. A first consequence of reducing the dispersivity of the waves thus seems to be that the angular transfer of energy becomes less efficient, until the transition to the soliton regime, at which no angular transfer is observed. \begin{figure}[!htb] (a) \includegraphics[clip,width=8cm]{spectre_integre_transitoire_film2_3cm_0_7Hz_25_11_16.eps} (b) \includegraphics[clip,width=8cm]{spectre_integre_film4_3cm_0_7Hz_25_11_16.eps} \caption{Space-time Fourier spectrum $E^v(k,\omega)$ of the velocity field of the waves for experiments where the height of water at rest is $h =3$~cm with a forcing centered around 0.7~Hz, in an intermediate regime between weak turbulence and the solitonic regime. (a) amplitude of table oscillation equal to 6.6~mm. The {\it rms} slope is 6$\%$. (b) amplitude of table oscillation equal to 9.2~mm. The {\it rms} slope is 6.9$\%$. The solid black line corresponds to the gravity-capillary linear dispersion relation in finite depth for pure water. The red dashed line corresponds to the celerity of the soliton $c_{s}=\sqrt{gh}$.} \label{transient} \end{figure} We also observe intermediate states between the weak turbulence and the solitonic regimes, represented in fig.~\ref{transient}, where some energy is observed in between the linear dispersion relation and the straight solitonic dispersion at the celerity $c_{s}=\sqrt{gh}$. This intermediate stage could be seen only for experiments with water depths larger than 1~cm and a forcing frequency lower than 2~Hz. In fig.~\ref{transient}(a) energy is observed on the linear dispersion relation and below it, but not yet on the soliton line.
Extra energy lines, corresponding to non-dispersive structures propagating at a different velocity, can be seen, notably in fig.~\ref{transient}(b). This state corresponds to the beginning of the development of solitons, which may not be stable enough to resist the influence of the large-scale stationary modes of the tank, which possibly tend to make them lose their coherence very quickly. When the amplitude of the forcing is increased the solitonic state is then reached and energy is concentrated around the expected soliton dispersion relation. \begin{figure}[!htb] \includegraphics[clip,width=9cm]{spectre_integre_6mm_25_10_2016_film3.eps} \caption{ Space-time Fourier spectrum $E^v(k,\omega)$ of the velocity field of the waves for experiments where the height of water at rest is $h = 6$~mm with a forcing centered around 2~Hz and an amplitude of forcing of 3~mm. The {\it rms} slope is measured to be 6.4$\%$. The solid black line corresponds to the finite depth water gravity-capillary linear dispersion relation. The red dashed line corresponds to the celerity of the soliton $c_{s}=\sqrt{gh}$.} \label{soliton_6mm} \end{figure} As shown in the previous paragraph, the spectral signature of solitons in $(k_x,\omega)$ space corresponds to a straight line that goes through the origin and is tangent to the shallow water gravity-capillary dispersion relation at the lowest frequencies. For a water depth equal to 1~cm (fig.~\ref{spectra}(b)) the line is located below the dispersion relation (for frequencies lower than 60~Hz). For a water depth equal to 6~mm, the energy line representing the soliton is above the dispersion relation as shown in fig.~\ref{soliton_6mm}. This represents a transition from a soliton that has a higher celerity than the linear waves to a soliton that is slower than the linear waves. This is called the bifurcation between the subcritical branch and the supercritical branch, as explained by Kuznetsov \& Dias in their review \cite{kuznetsov2011bifurcations}. Analytically the inflection point disappears when the Bond number reaches $1/3$, i.e.\ when $h=\sqrt{\dfrac{3\gamma}{\rho g}}=4.7$~mm for pure water. In fig.~\ref{soliton_6mm} the Bond number is slightly below $1/3$ so that the inflection point is barely visible (compared to the spread of energy around the dispersion relation). The dispersion relation is close to linear in a large interval up to about 12~Hz. The soliton energy line is superimposed on the dispersion relation in this interval and then lies above it at higher frequencies. In the previous case of fig.~\ref{spectra}(b) the soliton energy is above the dispersion relation only above 60~Hz. In this configuration we are still in the subcritical case but we clearly see the evolution to the supercritical case.
At the lowest order it involves 3 waves, which then have to satisfy resonance conditions such as $\mathbf{k_{1}}=\mathbf{k_{2}}+\mathbf{k_{3}}$ and $\omega_{1}=\omega_{2}+\omega_{3}$. This is the case for pure capillary waves on infinite depth water~\cite{Filonenko}. In the case of gravity waves the 3-wave resonance conditions do not admit solutions because of the negative curvature of the infinite-depth dispersion relation $\omega = \sqrt{gk}$. One must take into account the next order, which involves 4 waves. The spectra of the surface deformation $\eta$ are thus predicted to be \begin{equation} E(k) \propto P^{1/3}gk^{-5/2} \end{equation} for deep water pure gravity waves and \begin{equation} E(k) \propto P^{1/2}k^{-7/4} \end{equation} for deep water pure capillary waves~\cite{Nazarenko}. Using the dispersion relations for gravity waves and capillary waves in deep water one can change the variable from $k$ to $\omega$ in the predicted spectra and obtain \begin{equation} E(\omega) \propto P^{1/3}g\omega^{-4} \end{equation} for gravity waves and \begin{equation} E(\omega) \propto P^{1/2}\left(\frac{\gamma}{\rho}\right)^{1/6}\omega^{-17/6} \end{equation} for capillary waves. Laboratory experiments fail to reproduce these predictions for the gravity waves. In large or small wave tanks, the spectral exponent of the gravity waves is seen to vary strongly with the forcing intensity and to be close to the WTT predictions only at the highest forcing magnitudes, in contradiction with the weak nonlinearity hypothesis \cite{nazarenko2010statistics,falcon2007observation}. For capillary waves, experiments seem to follow the predictions of the WTT better \cite{wright1997imaging,deike2012decay}. \begin{figure}[!htb] \includegraphics[clip,width=11cm]{pente_integre_1024_19_10_film3-6.eps} \caption{Time Fourier spectrum $E^v(\omega)$ of the surface deformation for experiments where the height of water at rest is 1~cm with a forcing centered around 2~Hz and an amplitude of forcing equal to 2.5~mm. The vertical dashed line corresponds to the frequency of the crossover between the gravity and the capillary regime for pure water. The injection scale corresponds to the frequency of the forcing.} \label{fspec} \end{figure} Figure~\ref{fspec} shows the frequency spectra for various experimental configurations. The spectra corresponding to a weak turbulence regime (at 1~cm and 6~cm depth) are very similar to those of Aubourg \& Mordant~\cite{aubourg2016investigation}. At frequencies below the gravity-capillary crossover (at 14.5~Hz), thus corresponding to gravity waves, the spectra are steeper than the WTT prediction, the steepest being observed at low depth. In the capillary range, the spectra are also steeper than the WTT prediction, consistently with Aubourg \& Mordant~\cite{aubourg2016investigation}, but at odds with the observations of Falcon {\it et al.}~\cite{Falconrev}, who reported a $\omega^{-3}$ scaling close to the WTT prediction. In the solitonic regime, the spectrum is not changed much in the gravity range. In the capillary range the spectra are changed and are closer to the WTT prediction, and even less steep at 1~cm depth. This may suggest that the $\omega^{-3}$ scaling observed by Falcon {\it et al.} may correspond to the strong regime reported by Cobelli {\it et al.}~\cite{CobelliPrz}. In their figure 1 (top, strong regime) the surface velocity shows coherent structures that may be some sort of gravity-capillary solitons generated by the paddles, even if their water depth is not that small (5~cm).
The $\omega^{-3}$ scaling may be due to the solitons, as is the case in our experiment at 2~cm water depth, rather than to a fulfillment of the WTT prediction. \section{Phase Diagram} The solitons that have been observed in our work are expected to follow the Korteweg-de Vries equation: \begin{equation} \eta_{t} + \frac{3}{2}\frac{c_{s}}{h} \eta\eta_{\xi} + \frac{1}{6}c_{s}h^{2}(\frac{1}{3} - Bo)\eta_{\xi\xi\xi} = 0 \label{kdv} \end{equation} with $c = c_{s}( 1 + \frac{\eta_{0}}{2h})$ and $\xi = x -ct$, where $\eta_0$ is the amplitude of the soliton. Note that $\eta_{0} < 0$ when $Bo > \frac{1}{3}$ and $\eta_{0} > 0$ when $0 \leq Bo < \frac{1}{3}$. Equation \eqref{kdv} can be derived only if nonlinear effects are small and have the same order of magnitude as dispersive ones. Dispersion is quantified by $\mu = (\frac{h}{L})^{2}$ and the nonlinearity is quantified using the typical {\it rms} steepness of the waves \begin{equation} \epsilon \equiv \sigma = \left\langle \sqrt{\frac{1}{S}\int_S \parallel \nabla \eta(x,y,t) \parallel ^{2}\,\mathrm dx dy} \right\rangle \, . \end{equation} \begin{figure}[!htb] \center \includegraphics[width= 10cm]{steepness_h_2Hz.eps} \caption {Solitonic transition phase diagram represented as a function of the {\it rms} steepness of the waves and the depth of water at rest, for experiments where the central frequency of excitation is 2~Hz. The dashed red line delimits the area where localized structures are predominant from the one where random waves are predominant.} \label{A_h_2Hz} \end{figure} Figures \ref{surface} and \ref{spectra} pointed out major changes in the spatial and spectral signatures of the studied states when only the amplitude of the forcing was tuned. Figure~\ref{A_h_2Hz} displays a phase diagram of the weak turbulence regime and the solitonic one as a function of the water depth and the measured steepness of the waves. The central frequency is kept constant and equal to 2~Hz for all the experiments. At a given water depth, the central frequency indirectly imposes the size of the soliton, although predicting the actual size involves computing the inverse scattering transform of the excitation~\cite{trillo2016experimental}. This computation is beyond the scope of this article but we observe that the size of the soliton changes only weakly with the experimental parameters. Thus the water depth is a measure of the dispersivity of the waves while the steepness is a measure of the nonlinearity. Each case is categorized as either solitonic or weakly turbulent by the observation of the Fourier spectrum. A border is clearly observed between the two regimes. The border rises with the water depth: for deeper water, stronger waves must be forced in order to observe solitons. Equality between nonlinearity and dispersion is expected at the border, since it is the condition required for the development of solitary waves. To verify this condition we display in table \ref{tableau_soliton} the minimal values of nonlinearity and dispersion for which solitary waves were observed for different values of the water depth. Globally we observe that the balance between nonlinearity and dispersion is fulfilled at each depth of water. Dispersion is defined as $\mu = (\frac{h}{L})^{2}$ where $L$ is the length of the core of the soliton. As $L$ changes only weakly, the dispersion becomes stronger when the depth of water is increased. The condition of balance between dispersion and nonlinearity then imposes that the nonlinearity should be stronger as well in order to observe the solitons.
In the deepest case ($h=4$~cm) it was not possible to observe solitons as the nonlinearity was becoming very high and the waves were close to overturning. \begin{table}[!htb] \center \begin{tabular}{|l|c|c|c|c|c|c|} \hline h (cm) & 0.6 & 1 & 1.5 & 2 & 3 & 4 \\ \hline $\epsilon$ (\%) & 3.2 & 4.6 & 7.3 & 7.6 & 8.8 & $\nexists$ \\ $\mu$ (\%) & 3.8 & 4 & 7.2 & 7.3 & 9 & $\nexists$ \\ \hline \end{tabular} \caption{Values of the nonlinearity $\epsilon$ and the dispersion $\mu$ for different depths of water with a frequency of excitation $\in \left[1.5,\, 2.5\right]$~Hz. $\epsilon$ is estimated by directly measuring the surface averaged {\it rms} steepness of the waves. $\mu = (\frac{h}{L})^{2}$ is computed by measuring the depth of water at rest $h$ and $L$ the typical size of the core of the soliton.} \label{tableau_soliton} \end{table} \begin{figure}[!htb] \center \includegraphics[width= 10cm]{Etransv_Elong_2Hz_faible_6cm.eps} \caption {Evolution of the ratio $R(\omega) = \frac{E^{v}(k_x=0,k_y(\omega))}{E^{v}(k_x(\omega),k_y=0)}$ of the energy perpendicular to the direction of the forcing oscillation to the energy in the direction of the forcing, for different depths $h$ of water. The energy level is taken on the linear dispersion relation $k(\omega)$. The central frequency of excitation is equal to 2 Hz. The {\it rms} steepnesses of the surface for the water depths $h = [ 6,\, 3,\, 2,\, 1.5,\, 1]$~cm are respectively $[ 4.4,\, 6.5,\, 5,\, 6.4,\, 4.1]\%$. In all cases the wave steepnesses are thus close to 5\%.} \label{Et_El} \end{figure} \section{Angular Energy Transmission} Although the transition between the two regimes seems very abrupt, the observation of the wave spectrum showed a progressive change in the angular transfer of energy in the weak turbulence regime when decreasing the water depth. For truly non-dispersive waves (acoustic waves), the resonant manifold involves only waves propagating in the same direction. Thus Newell \& Rumpf asked the question of the evolution of an initially anisotropic distribution of waves~\cite{newell_wave_2011}. Would higher order terms lead to angular transfer of energy or would the energy be condensed on rays that would evolve into shocks? Here we progressively decrease the dispersion of the waves and we clearly see that the efficiency of the angular transfer is altered. In order to be more quantitative we define $R$ as the ratio of the energy of the waves transverse to the forcing ($k_x=0$) to the energy of the longitudinal waves propagating in the direction of the forcing ($k_y=0$): \begin{equation} R(\omega) = \frac{E^{v}(k_x=0,k_y(\omega))}{E^{v}(k_x(\omega),k_y=0)} \label{R} \end{equation} where $k(\omega)$ is the linear dispersion relation. Aubourg \& Mordant~\cite{aubourg_nonlocal_2015} reported for deep water that, although the energy injection is strongly anisotropic, as mostly waves propagating in the $x$ direction are forced, the nonlinear interactions between the waves redistribute this energy isotropically in a very efficient way. We show the evolution of $R(\omega)$ in fig.~\ref{Et_El}. We chose datasets for which the wave steepness (a measure of the nonlinearity) is almost the same and close to 5\% in order to show only the effect of changing the dispersion of the waves. Consistently with the observation of Aubourg \& Mordant, $R$ is close to one in the deepest case. $R$ decays strongly when the water depth decreases from 6 down to 1~cm. This decay is roughly the same at all frequencies.
When the water depth is $1.5$~cm, the ratio $R$ has decayed by a factor of 100. At 1~cm water depth the system has transitioned into the soliton regime. In that regime the ratio is down to $10^{-3}$, showing that almost no directional transfer of energy is occurring. Answering Newell \& Rumpf, for finite-depth gravity-capillary waves no angular transfer is induced by the higher-order closure; rather, the system develops solitons instead of shocks, as a weak dispersion remains present in our case. \section{Conclusions} In summary, we investigated the impact of nonlinearity and wave dispersion on wave turbulence. For strong dispersion of the waves (large water depth) and weak nonlinearity we observe a state of weak turbulence. When the dispersion of the waves is weak, the strength of nonlinearity can match that of dispersion. Under these conditions, we observe the generation of solitons and thus a change of statistical regime from weak turbulence to a soliton gas. The fate of the energy is thus expected to be quite different. For weak turbulence, through resonant interactions, energy cascades to very small scales at which it is dissipated. In the absence of dissipation, solitons propagate without changing their shape and their interactions are elastic. An ensemble of solitons evolves into a state of integrable turbulence~\cite{Zakharov_TIS, Randoux} whose properties are distinct from those of weak turbulence. Although in fiber optics the soliton gas is restricted to 1D propagation, for gravity-capillary waves solitons can have a 2D propagation and this may be the case for the state observed by Cobelli {\it et al.}~\cite{Cobelli} at their strongest forcing. Note that Zakharov {\it et al.} predicted in \cite{zakharov_one-dimensional_2004} that for unidirectional propagation of gravity waves in deep water, weak turbulence should be unstable and coexist with soliton-like structures. In our case no coexistence of solitons and weak turbulence has been observed except very near the transition. We observe two very distinct regimes. \begin{acknowledgements} This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 647018-WATU). We thank V. Govart for his technical assistance. \end{acknowledgements}
\section*{Introduction} Let $G$ be a connected reductive algebraic group, defined over an algebraically closed field $k$. Given a Borel subgroup $B \subseteq G$ with unipotent radical $U$, in this paper we investigate two closely related varieties associated with the Lie algebra $\mathfrak{u}:=\Lie(U)$: The commuting variety $\mathcal{C}_2(\mathfrak{u})$, given by \[ \mathcal{C}_2(\mathfrak{u}):=\{(x,y) \in \mathfrak{u}\!\times\!\mathfrak{u} \ ; \ [x,y]=0\},\] and the variety \[ \mathbb{A}(2,\mathfrak{u}) := \{ \mathfrak{a} \in \Gr_2(\mathfrak{u}) \ ; \ [\mathfrak{a},\mathfrak{a}]=(0)\},\] of two-dimensional abelian subalgebras of $\mathfrak{u}$, which is a closed subset of the Grassmannian $\Gr_2(\mathfrak{u})$ of $2$-planes of $\mathfrak{u}$. For $\Char(k)\!=\!0$, the authors proved in \cite{GR} that $\mathcal{C}_2(\mathfrak{u})$ is equidimensional if and only if the adjoint action of $B$ on $\mathfrak{u}$ affords only finitely many orbits. Being built on methods developed in \cite[\S2]{Pr} for $\Char(k)=0$, their arguments don't seem to readily generalize to fields of positive characteristic. In fact, most of Premet's paper \cite{Pr} is devoted to the technically more involved case pertaining to fields of positive characteristic. The purpose of this note is to extend the main result of \cite{GR} by employing techniques that work in good characteristics. For arbitrary $G$, this comprises the cases $\Char(k)\!=\!0$ as well as $\Char(k)\!\ge\!7$. Letting $Z(G)$ and $\modd(B;\mathfrak{u})$ denote the center of $G$ and the modality of $B$ on $\mathfrak{u}$, respectively, our main result reads as follows: \bigskip \begin{thm*} Suppose that $\Char(k)$ is good for $G$. Then \[ \dim \mathcal{C}_2(\mathfrak{u})=\dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u}).\] Moreover, $\mathcal{C}_2(\mathfrak{u})$ is equidimensional if and only if $B$ acts on $\mathfrak{u}$ with finitely many orbits. \end{thm*} \bigskip \noindent If $\modd(B;\mathfrak{u})\!=\!0$, then, by a theorem of Hille-R\"ohrle \cite{HR}, the almost simple components of the derived group $(G,G)$ of $G$ are of type $(A_n)_{n\le 4}$, or $B_2$. As in \cite{Pr} and \cite{GR}, the irreducible components are parametrized by the so-called distinguished orbits. Our interest in $\mathcal{C}_2(\mathfrak{u})$ derives from recent work \cite{CF} on the variety $\mathbb{E}(2,\mathfrak{u})$ of $2$-dimensional elementary abelian $p$-subalgebras of $\mathfrak{u}$, which coincides with $\mathbb{A}(2,\mathfrak{u})$ whenever $\Char(k)\!\ge\!h(G)$, the Coxeter number of $G$. \bigskip \begin{cor*} Suppose that $\Char(k)$ is good for a reductive group $G$ of semisimple rank $\ssrk(G)\!\ge\!2$. Then the following statements hold: \begin{enumerate} \item $\dim \mathbb{A}(2,\mathfrak{u})=\dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u})\!-\!4$. \item $\mathbb{A}(2,\mathfrak{u})$ is equidimensional if and only if $\modd(B;\mathfrak{u})=0$. \item $\mathbb{A}(2,\mathfrak{u})$ is irreducible if and only if every component of $(G,G)$ has type $A_1$ or $A_2$. \end{enumerate} \end{cor*} \bigskip \noindent For the reader's convenience, we begin by collecting a number of subsidiary results in the first two sections, some of which are variants of results in the literature. Throughout this paper, all vector spaces over $k$ are assumed to be finite-dimensional. 
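\bigskip \noindent For orientation, we record a small example; it is included here purely as an illustration of the formulas above and is not needed in the sequel. Let $G=\mathrm{SL}_3(k)$, so that $\dim B=5$, $\dim Z(G)=0$ and, $B$ acting on $\mathfrak{u}$ with only finitely many orbits, $\modd(B;\mathfrak{u})=0$. Here $\mathfrak{u}=\langle e_{12},e_{23},e_{13}\rangle$ is the three-dimensional Heisenberg algebra, where $e_{rs}$ denotes the corresponding elementary matrix. Writing $x=a e_{12}+b e_{23}+c e_{13}$ and $y=a' e_{12}+b' e_{23}+c' e_{13}$, we obtain \[ [x,y]=(ab'-a'b)\,e_{13},\] so that $\mathcal{C}_2(\mathfrak{u})\subseteq \mathfrak{u}\!\times\!\mathfrak{u}\cong k^6$ is the irreducible quadric hypersurface $\{ab'-a'b=0\}$; it has dimension \[5=\dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u}),\] in accordance with the Theorem. Likewise, a $2$-plane $\mathfrak{a}\subseteq\mathfrak{u}$ is abelian precisely when it contains the one-dimensional center $ke_{13}$, so that $\mathbb{A}(2,\mathfrak{u})$ is isomorphic to the projective line of such planes; it is irreducible of dimension $1=\dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u})\!-\!4$, as predicted by the Corollary.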
\bigskip \noindent {\bf Acknowledgment.} I would like to thank Simon Goodwin for several helpful remarks, for pointing out a mistake in an earlier version, and for bringing references \cite{Go06} and \cite{GMR} to my attention. \bigskip \section{Preliminaries} Let $\mathfrak{g}$ be a finite-dimensional Lie algebra over $k$, $\Aut(\mathfrak{g})$ be its automorphism group. The commuting variety $\mathcal{C}_2(\mathfrak{g})$ is a conical closed subset of $\mathfrak{g}\!\times\!\mathfrak{g}$. Given a variety $X$, we denote by $\Irr(X)$ the set of irreducible components of $X$. Thus, each $C \in \Irr(\mathcal{C}_2(\mathfrak{g}))$ is a conical closed subset of the affine space $\mathfrak{g}\!\times\!\mathfrak{g}$. Recall that the group $\GL_2(k)$ acts on the affine space $\mathfrak{g}\!\times\!\mathfrak{g}$ via \[ \left(\begin{smallmatrix}\alpha & \beta \\ \gamma & \delta \end{smallmatrix}\right)\boldsymbol{.} (x,y) := (\alpha x\!+\!\beta y, \gamma x\!+\!\delta y),\] with $\mathcal{C}_2(\mathfrak{g})$ being a $\GL_2(k)$-stable subset. In particular, the group $k^\times:=k\!\smallsetminus\!\{0\}$ acts on $\mathcal{C}_2(\mathfrak{g})$ via \[ \alpha \boldsymbol{.} (x,y) := \left(\begin{smallmatrix} 1 & 0 \\ 0 & \alpha \end{smallmatrix}\right)\boldsymbol{.} (x,y) = (x,\alpha y).\] We denote the two surjective projection maps by \[ \pr_i : \mathcal{C}_2(\mathfrak{g}) \longrightarrow \mathfrak{g} \ \ \ \ \ \ \ (i \in \{1,2\}).\] Given $x \in \mathfrak{g}$, we let $C_\mathfrak{g}(x)$ be the centralizer of $x$ in $\mathfrak{g}$. Since \[ \pr_1^{-1}(x)=\{x\}\!\times\!C_\mathfrak{g}(x)\] for all $x \in \mathfrak{g}$, the surjection $\pr_1: \mathcal{C}_2(\mathfrak{g}) \longrightarrow \mathfrak{g}$ is a linear fibration $(\mathcal{C}_2(\mathfrak{g}),\pr_1)$ with total space $\mathcal{C}_2(\mathfrak{g})$ and base space $\mathfrak{g}$. For any (not necessarily closed) subvariety $X \subseteq \mathfrak{g}$, we denote by $\mathcal{C}_2(\mathfrak{g})|_X$ the subfibration given by $\pr_1 : \pr_1^{-1}(X) \longrightarrow X$. \bigskip \begin{Lemma} \label{Pre1} Let $X \subseteq \mathfrak{g}$ be a subvariety. Suppose that $C \subseteq \mathcal{C}_2(\mathfrak{g})|_X$ is a $k^\times$-stable, closed subset. Then $\pr_1(C)$ is a closed subset of $X$. \end{Lemma} \begin{proof} We consider the morphism \[ \iota : X \longrightarrow \mathcal{C}_2(\mathfrak{g})|_X \ \ ; \ \ x \mapsto (x,0).\] Given $x \in \pr_1(C)$, we find $y \in \mathfrak{g}$ such that $(x,y) \in C$. By assumption, the map \[ f : k \longrightarrow \mathcal{C}_2(\mathfrak{g})|_X \ \ ; \ \ \alpha \mapsto (x,\alpha y)\] is a morphism such that $f(k^\times) \subseteq C$. Hence \[ (x,0) = f(0) \in f(\overline{k^\times}) \subseteq \overline{f(k^\times)} \subseteq C,\] so that $x \in \iota^{-1}(C)$. As a result, $\pr_1(C)=\iota^{-1}(C)$ is closed in $X$. \end{proof} \bigskip \begin{Lemma} \label{Pre2} Let $C \in \Irr(\mathcal{C}_2(\mathfrak{g}))$. Then the following statements hold: \begin{enumerate} \item $\GL_2(k)\boldsymbol{.} C = C$. \item The set $\pr_i(C)$ is closed. \end{enumerate} \end{Lemma} \begin{proof} (1) This well-known fact follows from $\GL_2(k)$ being connected. (2) As $C$ is $\GL_2(k)$-stable, Lemma \ref{Pre1} ensures that $\pr_1(C)$ is closed. 
By the same token, the map $(x,y) \mapsto (y,x)$ stabilizes $C$, so that $\pr_2(C)$ is closed as well.\end{proof} \bigskip \noindent We next compute the dimension of $\mathcal{C}_2(\mathfrak{g})$ in terms of a certain invariant, that will be seen to coincide with the modality of certain group actions in our cases of interest. Given $n \in \mathbb{N}_0$, lower semicontinuity of ranks ensures that \[ \mathfrak{g}_{(n)} := \{ x \in \mathfrak{g} \ ; \ \rk(\ad x)\!=\!n\}\] is a (possibly empty) locally closed subspace of $\mathfrak{g}$. We put $\mathbb{N}_0(\mathfrak{g}):=\{n \in \mathbb{N}_0 \ ; \ \mathfrak{g}_{(n)} \ne \emptyset\}$ and define \[ \modd(\mathfrak{g}) := \max_{n \in \mathbb{N}_0(\mathfrak{g})} \dim \mathfrak{g}_{(n)}\!-\!n.\] \bigskip \noindent Our next result elaborates on \cite[(2.1)]{GR}. \bigskip \begin{Proposition} \label{Pre3}The following statements hold: \begin{enumerate} \item Let $n \in \mathbb{N}_0(\mathfrak{g})$. \begin{enumerate} \item $(\mathcal{C}_2(\mathfrak{g})|_{\mathfrak{g}_{(n)}},\pr_1)$ is a vector bundle of rank $\dim_k\mathfrak{g}\!-\!n$ over $\mathfrak{g}_{(n)}$. In particular, the morphism $\pr_1 : \mathcal{C}_2(\mathfrak{g})|_{\mathfrak{g}_{(n)}} \longrightarrow \mathfrak{g}_{(n)}$ is open. \item If $X \in \Irr(\mathfrak{g}_{(n)})$, then $\overline{\pr_1^{-1}(X)} \subseteq \mathcal{C}_2(\mathfrak{g})$ is irreducible of dimension $\dim X\!+\!\dim_k\mathfrak{g}\!-\!n$. \end{enumerate} \item We have $\dim \mathcal{C}_2(\mathfrak{g})=\dim_k\mathfrak{g}\!+\!\modd(\mathfrak{g})$. \item If $C \in \Irr(\mathcal{C}_2(\mathfrak{g}))$, then \[\dim C = \dim \pr_1(C)\!+\!\dim_k\mathfrak{g}\!-\!n_C,\] where $n_C := \max\{n \in \mathbb{N}_0 \ ; \ \mathfrak{g}_{(n)}\cap\pr_1(C) \ne \emptyset\}$. \item Let $X \in \Irr(\mathfrak{g}_{(n)})$ be such that $\overline{\pr_1^{-1}(X)} \in \Irr(\mathcal{C}_2(\mathfrak{g}))$. Then we have \[ C_\mathfrak{g}(x) \subseteq \overline{X} \subseteq \overline{\mathfrak{g}_{(n)}} \subseteq \bigsqcup_{m\le n} \mathfrak{g}_{(m)} \ \ \ \ \text{for all} \ x \in X.\] \item If $n \in \mathbb{N}_0$ is such that $\modd(\mathfrak{g})=\dim\mathfrak{g}_{(n)}\!-\!n$, then $\overline{\pr_1^{-1}(X)} \in \Irr(\mathcal{C}_2(\mathfrak{g}))$ for every $X \in \Irr(\mathfrak{g}_{(n)})$ such that $\dim X = \dim \mathfrak{g}_{(n)}$. \end{enumerate} \end{Proposition} \begin{proof} (1a) If $V,W$ are $k$-vector spaces and $\Hom_k(V,W)_{(n)} := \{ f \in \Hom_k(V,W) \ ; \ \rk(f)\!=\!n\}$, then the map \[ \Hom_k(V,W)_{(n)} \longrightarrow \Gr_{\dim_kV-n}(V) \ \ ; \ \ f \mapsto \ker f\] is a morphism. Consequently, \[ C_\mathfrak{g} : \mathfrak{g}_{(n)} \longrightarrow \Gr_{\dim_k\mathfrak{g}-n}(\mathfrak{g}) \ \ ; \ \ x \mapsto C_\mathfrak{g}(x)\] is a morphism as well and general theory implies that \[ E_{C_\mathfrak{g}} := \{(x,y) \in \mathfrak{g}_{(n)}\!\times\!\mathfrak{g} \ ; \ y \in C_\mathfrak{g}(x)\}\] is a vector bundle of rank $\dim_k\mathfrak{g}\!-\!n$ over $\mathfrak{g}_{(n)}$, which coincides with $\mathcal{C}_2(\mathfrak{g})|_{\mathfrak{g}_{(n)}}$, see \cite[(VI.1.2)]{Sh}. (1b) Given an irreducible component $X \in \Irr(\mathfrak{g}_{(n)})$, we consider the subbundle $\mathcal{C}_2(\mathfrak{g})|_X = \mathcal{C}_2(\mathfrak{g})\cap(X\!\times\!\mathfrak{g})$ together with its surjection $\pr_1 : \mathcal{C}_2(\mathfrak{g})|_X \longrightarrow X$. Let $C \in \Irr(\mathcal{C}_2(\mathfrak{g})|_X)$ be an irreducible component. Since $\mathcal{C}_2(\mathfrak{g})|_X$ is $k^\times$-stable, so is $C$. 
In view of Lemma \ref{Pre1}, we conclude that $\pr_1(C)$ is closed in $X$. It now follows from \cite[(1.5)]{Fa04} that the variety $\pr_1^{-1}(X)$ is irreducible. Hence its closure enjoys the same property. Consequently, \[ \pr_1 : \overline{\pr_1^{-1}(X)} \longrightarrow \overline{X} \] is a dominant morphism of irreducible affine varieties such that $\dim \pr_1^{-1}(x)=\dim_k\ker(\ad x) = \dim_k\mathfrak{g}\!-\!n$ for every $x \in X$. Since $X$ is locally closed, it is an open subset of $\overline{X}$. The fiber dimension theorem thus yields \[ \dim \overline{\pr_1^{-1}(X)} = \dim \overline{X}\!+\!\dim_k\mathfrak{g}\!-\!n = \dim X\!+\!\dim_k\mathfrak{g}\!-\!n,\] as desired. (2) We have \[ (\ast) \ \ \ \ \ \ \ \ \ \mathcal{C}_2(\mathfrak{g}) = \bigcup_{n\in \mathbb{N}_0(\mathfrak{g})} \bigcup_{X \in \Irr(\mathfrak{g}_{(n)})} \overline{\pr_1^{-1}(X)},\] whence \[ \dim \mathcal{C}_2(\mathfrak{g}) = \max_{n\in \mathbb{N}_0(\mathfrak{g})}\max_{X \in \Irr(\mathfrak{g}_{(n)})} \dim X\!+\!\dim_k \mathfrak{g}\!-\!n = \max_{n\in \mathbb{N}_0(\mathfrak{g})} \dim \mathfrak{g}_{(n)}\!+\!\dim_k\mathfrak{g}\!-\!n = \dim_k\mathfrak{g}\!+\!\modd(\mathfrak{g}),\] as asserted. (3) In view of (1b) and ($\ast$), there are $n_C \in \mathbb{N}_0$ and $X_C \in \Irr(\mathfrak{g}_{(n_C)})$ such that \[ C = \overline{\pr_1^{-1}(X_C)}.\] Since $\pr_1$ is surjective, we have $X_C = \pr_1(\pr_1^{-1}(X_C))$. Consequently, $\pr_1(C) = \pr_1(\overline{\pr_1^{-1}(X_C)}) \subseteq \overline{X_C}$, while $X_C \subseteq \pr_1(C)$ in conjunction with Lemma \ref{Pre1} yields $\overline{X_C} \subseteq \pr_1(C)$. Thus, lower semicontinuity of the rank function yields \[ \pr_1(C) \subseteq \overline{\mathfrak{g}_{(n_C)}} \subseteq \bigsqcup_{n \le n_C} \mathfrak{g}_{(n)},\] so that $\max\{n \in \mathbb{N}_0 \ ; \ \pr_1(C)\cap \mathfrak{g}_{(n)}\ne \emptyset\} \le n_C$. On the other hand, $\emptyset \ne X_C \subseteq \pr_1(C)\cap \mathfrak{g}_{(n_C)}$ implies $n_C \le \max\{n \in \mathbb{N}_0 \ ; \ \pr_1(C)\cap \mathfrak{g}_{(n)}\ne \emptyset\}$. Hence we have equality and (1b) yields \[ \dim C = \dim X_C\!+\!\dim_k\mathfrak{g}\!-\!n_C = \dim \overline{X_C}\!+\!\dim_k\mathfrak{g}\!-\!n_C = \dim \pr_1(C)\!+\!\dim_k\mathfrak{g}\!-\!n_C,\] as desired. (4) Let $x \in X$. Then we have $\{x\}\!\times\!C_\mathfrak{g}(x) = \pr_1^{-1}(x) \subseteq \overline{\pr_1^{-1}(X)}$. By assumption, the latter set is $\GL_2(k)$-stable, so that in particular $C_\mathfrak{g}(x)\!\times\!\{x\} \subseteq \overline{\pr_1^{-1}(X)}$. It follows that \[ C_\mathfrak{g}(x) \subseteq \overline{X} \ \ \ \ \ \ \forall \ x \in X.\] Since $\overline{X} \subseteq \overline{\mathfrak{g}_{(n)}} \subseteq \bigsqcup_{m \le n} \mathfrak{g}_{(m)}$, our assertion follows. (5) This follows from (1b) and (2). \end{proof} \bigskip \begin{Corollary} \label{Pre4}The following statements hold: \begin{enumerate} \item The subset $\overline{\pr_1^{-1}(\mathfrak{g}_{(\max \mathbb{N}_0(\mathfrak{g}))})}$ is an irreducible component of $\mathcal{C}_2(\mathfrak{g})$ of dimension $2\dim_k\mathfrak{g}\!-\!\max \mathbb{N}_0(\mathfrak{g})$. \item Suppose that $\mathcal{C}_2(\mathfrak{g})$ is equidimensional. Then we have $\modd(\mathfrak{g})=\dim_k\mathfrak{g}\!-\!\max\mathbb{N}_0(\mathfrak{g})$. \item Suppose that $\mathcal{C}_2(\mathfrak{g})$ is irreducible. Then we have $\dim\mathfrak{g}_{(n)}\!-\!n=\modd(\mathfrak{g})$ if and only if $n=\max\mathbb{N}_0(\mathfrak{g})$. \end{enumerate} \end{Corollary} \begin{proof} (1) Let $n_0:= \max \mathbb{N}_0(\mathfrak{g})$. 
By lower semicontinuity of the function $x \mapsto \rk(\ad x)$, $\mathfrak{g}_{(n_0)}$ is an open, and hence irreducible and dense subset of $\mathfrak{g}$. Hence $\pr_1^{-1}(\mathfrak{g}_{(n_0)})$ is open in $\mathcal{C}_2(\mathfrak{g})$, and Proposition \ref{Pre3} shows that $C_{(n_0)}:= \overline{\pr_1^{-1}(\mathfrak{g}_{(n_0)})}$ is irreducible of dimension $\dim \mathfrak{g}_{(n_0)}\!+\!\dim_k\mathfrak{g}\!-\!n_0 = 2\dim_k\mathfrak{g}\!-\!n_0$. Let $C \in \Irr(\mathcal{C}_2(\mathfrak{g}))$ be such that $C_{n_0} \subseteq C$. Then $\pr_1^{-1}(\mathfrak{g}_{(n_0)})$ is a non-empty open subset of $C$, so that $C_{n_0}= C \in \Irr(\mathcal{C}_2(\mathfrak{g}))$. (2) This follows directly from (1) and Proposition \ref{Pre3}(2). (3) Suppose that $n \in \mathbb{N}_0(\mathfrak{g})$ is such that $\modd(\mathfrak{g})=\dim\mathfrak{g}_{(n)}\!-\!n$. Let $X \in \Irr(\mathfrak{g}_{(n)})$ be an irreducible component such that $\dim X = \dim\mathfrak{g}_{(n)}$. Thanks to Proposition \ref{Pre3}(5), $C_X:=\overline{\pr_1^{-1}(X)}$ is an irreducible component of $\mathcal{C}_2(\mathfrak{g})$, so that $C_X = \mathcal{C}_2(\mathfrak{g})$. Consequently, \[ \mathfrak{g} = \pr_1(\mathcal{C}_2(\mathfrak{g})) = \pr_1(C_X) \subseteq \overline{X} \subseteq \bigcup_{m\le n} \mathfrak{g}_{(m)},\] so that $\max\mathbb{N}_0(\mathfrak{g})\le n$. Hence we have equality. \end{proof} \bigskip \noindent In general, the value of $\modd(\mathfrak{g})$ is hard to compute. For certain Lie algebras of algebraic groups and for those having suitable filtrations, the situation is somewhat better. \bigskip \begin{Example} Let $\Char(k)=p\ge 5$ and consider the $p$-dimensional Witt algebra $W(1):=\Der_k(k[X]/(X^p))$, see \cite[(IV.2)]{SF} for more details. This simple Lie algebra affords a canonical descending filtration \[ W(1)=W(1)_{-1} \supseteq W(1)_0 \supseteq \cdots \supseteq W(1)_{p-2} \supseteq (0),\] where $\dim_kW(1)_i = p\!-\!1\!-\!i$. By way of illustration, we shall verify the following statements: \begin{enumerate} \item The variety $\mathcal{C}_2(W(1))$ has dimension $p\!+\!1$ and is not equidimensional, with \[\Irr(\mathcal{C}_2(W(1))) = \{\overline{\pr_1^{-1}(W(1)_{(\ell)})} \ ; \ \frac{p\!+\!1}{2}\! \le \! \ell \! \le p\!-\!1\}.\] \item Let $\mathfrak{b}:=W(1)_0$. The variety $\mathcal{C}_2(\mathfrak{b})$ has pure dimension $p$, with \[\Irr(\mathcal{C}_2(\mathfrak{b})) = \{\overline{\pr_1^{-1}(\mathfrak{b}_{(\ell)})} \ ; \ \frac{p\!-\!1}{2}\!\le \!\ell\!\le \! p\!-\!2\}.\] \item (cf.\ \cite[(4.3)]{YC}) Let $\mathfrak{u}:= W(1)_1$. The variety $\mathcal{C}_2(\mathfrak{u})$ has pure dimension $p$, with \[\Irr(\mathcal{C}_2(\mathfrak{u})) = \{\overline{\pr_1^{-1}(\mathfrak{u}_{(\ell)})} \ ; \ \frac{p\!-\!3}{2}\le\!\ell\!\le\!p\!-\!4\}.\] \item (cf.\ \cite[(3.6)]{YC}) Let $\mathcal{N}:=\{x \in W(1) \ ; \ (\ad x)^p=0\}$ be the $p$-nilpotent cone of $W(1)$. The variety $\mathcal{C}_2(\mathcal{N}):= \mathcal{C}_2(W(1))\cap(\mathcal{N}\!\times\!\mathcal{N})$ has pure dimension $p$ with \[\Irr(\mathcal{C}_2(\mathcal{N}))=\{\overline{\pr_1^{-1}(W(1)_{(\ell)})} \ ; \ \ell \in \{\frac{p\!+\!1}{2},\ldots,\!p\!-\!2\}\}\cup \{\overline{\pr_1^{-1}(W(1)_{(p-1)}\cap\mathcal{N})}\}.\] \end{enumerate} \begin{proof} (1) Let $x \in W(1)\!\smallsetminus\!\{0\}$ and consider the Jordan-Chevalley-Seligman decomposition $x=x_s\!+\!x_n$, with $x_s$ semisimple, $x_n$ $p$-nilpotent and $[x_s,x_n] =0$, (cf.\ \cite[(II.3.5)]{SF}). 
Since every maximal torus $\mathfrak{t} \subseteq W(1)$ is one-dimensional and self-centralizing, the assumption $x_s \ne 0$ entails $x_n \in C_{W(1)}(x_s)=kx_s$, so that $x_n=0$. As a result, every $x \in W(1)\!\smallsetminus\!\{0\}$ is either $p$-nilpotent or semisimple, and \cite[(2.3)]{YC} implies \[ \ker (\ad x) = \left\{ \begin{array}{ccc} W(1)_{p-1-i} & x \in W(1)_i\!\smallsetminus\! W(1)_{i+1} & \frac{p-1}{2}\!\le\!i\!\le\! p\!-\!2 \\ kx\!\oplus\! W(1)_{p-1-i} & x \in W(1)_{i}\!\smallsetminus\!W(1)_{i+1} & 1\!\le\!i\!\le\!\frac{p-3}{2} \\ kx & x \in W(1)\!\smallsetminus\!W(1)_1. & \end{array} \right.\] This in turn yields \[ W(1)_{(\ell)} = \left\{ \begin{array}{cc} W(1)_{p-\ell}\!\smallsetminus\! W(1)_{p-\ell+1} & 2\!\le\!\ell\!\le\! \frac{p-1}{2} \\ W(1)_{\frac{p-3}{2}}\!\smallsetminus\!W(1)_{\frac{p+1}{2}} & \ell\!=\!\frac{p+1}{2} \\ W(1)_{p-\ell-1}\!\smallsetminus\!W(1)_{p-\ell} & \frac{p+3}{2}\!\le\!\ell\le\!p\!-\!2 \\ W(1)\!\smallsetminus\!W(1)_1 & \ell\!=\!p\!-\!1\\ \{0\} & \ell\!=\!0\\ \emptyset & \text{else.} \end{array} \right.\] We thus have $\modd(W(1))\!=\!1$, so that $\dim \mathcal{C}_2(W(1))\!=\!p\!+\!1$. Moreover, each of the varieties $W(1)_{(\ell)}$ is irreducible, with $\overline{W(1)_{(\ell)}} = W(1)_{p-\ell}$ for $2\!\le\!\ell\!\le\!\frac{p-1}{2}$. Proposition \ref{Pre3}(4) in conjunction with the above now shows that $\overline{\pr_1^{-1}(W(1)_{(\ell)})} \not \in \Irr(\mathcal{C}_2(W(1)))$ for $2\!\le\!\ell\!\le\!\frac{p-1}{2}$. Consequently, \[ (\ast) \ \ \ \ \ \ \ \ \mathcal{C}_2(W(1)) = \bigcup_{\frac{p+1}{2}\le \ell\le p-1} \overline{\pr_1^{-1}(W(1)_{(\ell)})}.\] According to Corollary \ref{Pre4}, \[\overline{\pr_1^{-1}(W(1)_{(p-1)})} = \overline{\bigcup_{x \in W(1)\smallsetminus W(1)_1}\{x\}\!\times\!kx} \subseteq \{(x,y) \in \mathcal{C}_2(W(1)) \ ; \ \dim_k kx\!+\!ky\!\le\!1\}\] is an irreducible component of dimension $p\!+\!1$. Let $\ell \in \{\frac{p+1}{2}, \ldots, p\!-\!2\}$. Given $x \in W(1)_{(\ell)}$, it thus follows that \[ \{x\}\!\times\!C_{W(1)}(x) \subseteq \overline{\pr_1^{-1}(W(1)_{(\ell)})} \ \ \text{while} \ \ \{x\}\!\times\!C_{W(1)}(x) \not \subseteq \overline{\pr_1^{-1}(W(1)_{(p-1)})},\] whence \[ \overline{\pr_1^{-1}(W(1)_{(\ell)})} \not \subseteq \overline{\pr_1^{-1}(W(1)_{(p-1)})}.\] Thanks to Proposition 1.3(3) we have \[\dim \overline{\pr_1^{-1}(W(1)_{(\ell)})} = \dim_kW(1)_{p-\ell-1}\!+\!\dim_kW(1)\!-\!\ell =p,\] so that there are no containments among the irreducible sets $(\overline{\pr_1^{-1}(W(1)_{(\ell)})})_{\frac{p+1}{2}\le \ell \le p-2}$. As a result, ($\ast$) is the decomposition of $\mathcal{C}_2(W(1))$ into its irreducible components. (2) We now consider the ``Borel subalgebra'' $\mathfrak{b}:= W(1)_0$ of dimension $p\!-\!1$. Writing $W(1)=ke_{-1}\!\oplus\!\mathfrak{b}$ with $C_{W(1)}(e_{-1})=ke_{-1}$, we have $(\ad x)(W(1))= k[x,e_{-1}]\!\oplus\!(\ad x)(\mathfrak{b})$ for all $x \in \mathfrak{b}$, whence $\mathfrak{b}_{(\ell)} = W(1)_{(\ell+1)}$ for $1\!\le\!\ell\!\le\!p\!-\!3$, while $\mathfrak{b}_{(p-2)} = \mathfrak{b}\!\smallsetminus\!W(1)_1$. Consequently, \[ \dim \mathfrak{b}_{(\ell)} = \left\{ \begin{array}{cc} \ell & 1\!\le\!\ell\!\le\! \frac{p-3}{2} \\ \ell\!+\!1 & \frac{p-1}{2}\!\le\!\ell\le\!p\!-\!2\\ 0 & \ell\!=\!0\\ -1 & \text{else,} \end{array} \right.\] where we put $\dim\emptyset = -1$. Thus, $\modd(\mathfrak{b})=1$ and $\dim \mathcal{C}_2(\mathfrak{b})=p$. The arguments above show that $\overline{\pr_1^{-1}(\mathfrak{b}_{(\ell)})} \not \in \Irr(\mathcal{C}_2(\mathfrak{b}))$, whenever $1\!\le\!\ell\!\le\! 
\frac{p-3}{2}$. In view of the irreducibility of $\mathfrak{b}_{(\ell)}$, Proposition \ref{Pre3}(5) shows that $\overline{\pr_1^{-1}(\mathfrak{b}_{(\ell)})}$ is an irreducible component of dimension $p$ for $\ell \in \{\frac{p-1}{2},\ldots, p\!-\!2\}$. (3) We next consider $\mathfrak{u}:=W(1)_1$ and observe that $\mathfrak{u}_{(\ell)} = \mathfrak{b}_{(\ell+1)}\cap\mathfrak{u}$ for $0\!\le\!\ell\!\le\!p\!-\!3$. Consequently, \[ \dim \mathfrak{u}_{(\ell)} = \left\{ \begin{array}{cc} \ell\!+\!1 & 0\!\le\!\ell\!\le\! \frac{p-5}{2} \\ \ell\!+\!2 & \frac{p-3}{2}\!\le\!\ell\le\!p\!-\!4\\ -1 & \text{else,} \end{array} \right.\] so that $\modd(\mathfrak{u})=2$ and $\dim \mathcal{C}_2(\mathfrak{u})=p$. The remaining assertions follow as in (2). (4) In view of \cite[(2.3)]{YC}, we have $C_\mathfrak{g}(x) \subseteq \mathcal{N}$ for all $x \in \mathcal{N}\!\smallsetminus\!\{0\}$. This implies \[\mathcal{C}_2(\mathcal{N}) = \bigcup_{2 \le \ell \le p\!-\!1} \overline{\pr_1^{-1}(W(1)_{(\ell)}\cap \mathcal{N})} = \bigcup_{2 \le \ell \le p\!-\!2} \overline{\pr_1^{-1}(W(1)_{(\ell)})}\cup\overline{\pr_1^{-1}(W(1)_{(p-1)}\cap\mathcal{N})}.\] By the arguments above, we have $\pr_1^{-1}(W(1)_{(\ell))} \subseteq \bigcup_{\frac{p+1}{2}\le n \le p-2}\overline{\pr_1^{-1}(W(1)_{(n)})}$ for $\ell \in \{1,\ldots,\frac{p\!-\!1}{2}\}$, so that \[ \mathcal{C}_2(\mathcal{N}) = \bigcup_{\frac{p+1}{2}\le \ell\le p-2} \overline{\pr_1^{-1}(W(1)_{(\ell)})}\cup (\overline{\pr_1^{-1}(W(1)_{(p-1)}\cap \mathcal{N})}.\] By work of Premet \cite{Pr0}, the variety $\mathcal{N}$ is irreducible of dimension $\dim \mathcal{N}=p\!-\!1$. It follows that the dense open subset $W(1)_{(p-1)}\cap\mathcal{N}$ is irreducible as well. Lemma \ref{Pre1} implies that $\pr_1(C)$ is closed in $W(1)_{(p-1)}\cap\mathcal{N}$ for every $C \in \Irr(\mathcal{C}_2(\mathcal{N})|_{W(1)_{(p-1)}\cap\mathcal{N}})$. Using \cite[(1.5)]{Fa04}, we conclude that the variety \[ \pr_1^{-1}(W(1)_{(p-1)}\cap \mathcal{N})= \mathcal{C}_2(\mathcal{N})|_{W(1)_{(p-1)}\cap\mathcal{N}}\] is irreducible of dimension $p$. \end{proof} \end{Example} \bigskip \begin{Remarks} (1) \ In \cite[(Thm.5)]{Le} P.\ Levy has shown that commuting varieties of Lie algebras of reductive algebraic groups are irreducible, provided the characteristic of $k$ is good for $\mathfrak{g}$. For $p\!=\!3$, we have $W(1) \cong \mathfrak{sl}(2)$, so that $\mathcal{C}_2(W(1))$ is in fact irreducible. Our example above shows that commuting varieties of Lie algebras, all whose maximal tori are self-centralizing, may not even be equidimensional. In contrast to $W(1)$, the ``Borel subalgebra'' $\mathfrak{b} \subseteq W(1)$, whose maximal tori are also self-centralizing, is an algebraic Lie algebra. (2) \ A consecutive application of (4) and \cite[(2.5.1),(2.5.2)]{CF} implies that the variety $\mathbb{E}(2,W(1))$ of two-dimensional elementary abelian subalgebras of $W(1)$ has pure dimension $p\!-\!4$ as well as $|\Irr(\mathbb{E}(2,W(1)))|$ $=\frac{p-3}{2}$. \end{Remarks} \bigskip \section{Algebraic Lie algebras} Let $\mathfrak{g}=\Lie(G)$ be the Lie algebra of a connected algebraic group $G$. The adjoint representation \[ \Ad : G \longrightarrow \Aut(\mathfrak{g})\] induces an action \[ g\boldsymbol{.}(x,y) := (\Ad(g)(x), \Ad(g)(y))\] of $G$ on the commuting variety $\mathcal{C}_2(\mathfrak{g})$ such that the surjections \[ \pr_i : \mathcal{C}_2(\mathfrak{g}) \longrightarrow \mathfrak{g}\] are $G$-equivariant. In the sequel, we will often write $g\boldsymbol{.} x:= \Ad(g)(x)$ for $g \in G$ and $x \in \mathfrak{g}$. 
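For instance, if $G=\GL_n(k)$, then $\Ad(g)(x)=gxg^{-1}$, so that $G$ acts on $\mathcal{C}_2(\mathfrak{gl}_n(k))$ via simultaneous conjugation, $g\boldsymbol{.}(x,y)=(gxg^{-1},gyg^{-1})$.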
Let $T \subseteq G$ be a maximal torus with character group $X(T)$, \[ \mathfrak{g} = \mathfrak{g}^T\!\oplus\!\bigoplus_{\alpha \in R_T} \mathfrak{g}_\alpha\] be the root space decomposition of $\mathfrak{g}$ relative to $T$. Here $R_T \subseteq X(T)\!\smallsetminus\!\{0\}$ is the set of roots of $G$ relative to $T$, while $\mathfrak{g}^T := \{x \in \mathfrak{g} \ ; \ t\boldsymbol{.} x = x \ \ \forall \ t \in T\}$ denotes the subalgebra of points of $\mathfrak{g}$ that are fixed by $T$. Given $x = x_0\!+\! \sum_{\alpha \in R_T}x_\alpha \in \mathfrak{g}$, we let \[ \supp(x):=\{\alpha \in R_T \ ; \ x_\alpha \ne 0\}\] be the {\it support} of $x$. For any subset $S \subseteq X(T)$, we denote by $\mathbb{Z} S$ the subgroup of $X(T)$ generated by $S$. The group $\mathbb{Z} R_T$ is called the {\it root lattice} of $G$ relative to $T$. If $H\subseteq G$ is a closed subgroup and $x \in \mathfrak{g}$, then $C_H(x):=\{ h \in H \ ; \ h\boldsymbol{.} x = x\}$ is the centralizer of $x$ in $H$. \bigskip \subsection{Centralizers, supports, and components} \begin{Lem} \label{CSC1} Let $T \subseteq G$ be a maximal torus, $x \in \mathfrak{g}$. Then we have \[ \dim C_T(x) = \dim T\!-\!\rk(\mathbb{Z}\supp(x)).\] \end{Lem} \begin{proof} Writing \[ x = \sum_{\alpha \in R_T\cup\{0\}} x_\alpha,\] we see that $C_T(x)=\bigcap_{\alpha \in \supp(x)}\ker\alpha = \bigcap_{\alpha \in \mathbb{Z}\supp(x)}\ker\alpha$. Since $T$ is a torus, its coordinate ring $k[T]$ is the group algebra $kX(T)$ of $X(T) \subseteq k[T]^\times$. By the above, the centralizer $C_T(x)$ coincides with the zero locus $Z(\{ \alpha\!-\!1 \ ; \ \alpha \in \mathbb{Z}\supp(x)\})$. Thus, letting $(k\mathbb{Z}\supp(x))^\dagger$ denote the augmentation ideal of $k\mathbb{Z}\supp(x)$, we obtain the ensuing equalities of Krull dimensions \begin{eqnarray*} \dim k[C_T(x)] & = & \dim k[T]/k[T]\{ \alpha\!-\!1 \ ; \ \alpha \in \mathbb{Z}\supp(x)\} = \dim kX(T)/kX(T)(k\mathbb{Z}\supp(x))^\dagger \\ & =& \dim k(X(T)/\mathbb{Z}\supp(x)), \end{eqnarray*} so that \cite[(3.2.7)]{Sp98} yields \[\dim C_T(x) = \dim k[C_T(x)] = \rk(X(T)/\mathbb{Z}\supp(x)) = \dim T\!-\!\rk(\mathbb{Z}\supp(x)),\] as desired. \end{proof} \bigskip \noindent Let $\mathfrak{g}:= \Lie(G)$ be the Lie algebra of a connected algebraic group $G$, $\mathfrak{n} \subseteq \mathfrak{g}$ be a $G$-stable subalgebra. Then $\mathcal{C}_2(\mathfrak{n}) \subseteq \mathcal{C}_2(\mathfrak{g})$ is a closed, $G$-stable subset. For $x \in \mathfrak{n}$, we define \[ \mathfrak{C}(x):= \overline{G\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{n}(x))} \subseteq \mathcal{C}_2(\mathfrak{n}).\] Then $\mathfrak{C}(x)=\overline{\pr_1^{-1}(G\boldsymbol{.} x)}$ is a closed irreducible subset of $\mathcal{C}_2(\mathfrak{n})$ such that $\mathfrak{C}(x)=\mathfrak{C}(g\boldsymbol{.} x)$ for all $g \in G$. It will be convenient to have the following three basic observations at our disposal. \bigskip \begin{Lem} \label{CSC2} Let $\mathfrak{x},\mathfrak{y} : k \longrightarrow \mathfrak{n}$ be morphisms, $\mathcal{O} \subseteq k$ be a non-empty open subset such that \begin{enumerate} \item[(a)] $[\mathfrak{x}(\alpha),\mathfrak{y}(\alpha)] = 0 \ \ \ \ \ \ \forall \ \alpha \in k$, and \item[(b)] $\mathfrak{x}(\alpha) \in G\boldsymbol{.} \mathfrak{x}(1) \ \ \ \ \ \ \ \ \forall \ \alpha \in \mathcal{O}$. \end{enumerate} Then we have $(\mathfrak{x}(0),\mathfrak{y}(0)) \in \mathfrak{C}(\mathfrak{x}(1))$.
\end{Lem} \begin{proof} In view of (a), there is a morphism \[ \varphi : k \longrightarrow \mathcal{C}_2(\mathfrak{n}) \ \ ; \ \ \alpha \mapsto (\mathfrak{x}(\alpha),\mathfrak{y}(\alpha)).\] Let $\alpha \in \mathcal{O}$. Then (b) provides $g \in G$ such that $\mathfrak{x}(\alpha) = g\boldsymbol{.} \mathfrak{x}(1)$. Thus, \[ \varphi(\alpha) = g\boldsymbol{.} (\mathfrak{x}(1),g^{-1}\boldsymbol{.} \mathfrak{y}(\alpha)) \in \mathfrak{C}(\mathfrak{x}(1)) \ \ \ \ \ \ \forall \ \alpha \in \mathcal{O},\] so that \[ (\mathfrak{x}(0),\mathfrak{y}(0))=\varphi(0) \in \varphi(\overline{\mathcal{O}}) \subseteq \overline{\varphi(\mathcal{O})} \subseteq \mathfrak{C}(\mathfrak{x}(1)),\] as desired. \end{proof} \bigskip \begin{Lem} \label{CSC3} Let $T \subseteq G$ be a maximal torus, $x \in \mathfrak{n}$. Suppose that $c \in \mathfrak{n}\cap \mathfrak{g}_{\alpha_0}$ (for some $\alpha_0 \in R_T$) is such that \begin{enumerate} \item[(a)] $\rk(\mathbb{Z}\supp(x\!+\!c))\! >\! \rk(\mathbb{Z}\supp(x))$, and \item[(b)] $k[c,x]=[c, C_\mathfrak{n}(x)]$. \end{enumerate} Then $\mathfrak{C}(x)\subseteq \mathfrak{C}(x\!+\!c)$. \end{Lem} \begin{proof} Note that \[ x\!+\! \alpha_0(t)c = t\boldsymbol{.} (x\!+\!c) \in G\boldsymbol{.} (x\!+\!c) \ \ \ \ \ \ \ \ \forall \ t \in C_T(x).\] In view of Lemma \ref{CSC1}, condition (a) ensures that $\dim C_T(x\!+\!c)^\circ\!<\!\dim C_T(x)^\circ$, so that $\dim \overline{\im \alpha_0(C_T(x)^\circ)}$ $=1$. Chevalley's Theorem (cf.\ \cite[(I.\S8)]{Mu}) thus provides a dense open subset $\mathcal{O} \subseteq k$ such that $\mathcal{O}\subseteq \alpha_0(C_T(x)^\circ)$. As a result, \[ (\ast) \ \ \ \ \ \ \ x\!+\!\lambda c \in G\boldsymbol{.} (x\!+\!c) \ \ \ \ \ \text{for all} \ \lambda \in \mathcal{O}.\] Condition (b) provides a linear form $\eta \in C_\mathfrak{n}(x)^\ast$ such that \[ [y,c] = \eta(y)[x,c] \ \ \ \ \ \ \ \forall \ y \in C_\mathfrak{n}(x).\] Given $y \in C_\mathfrak{n}(x)$, we define morphisms $\mathfrak{x}, \mathfrak{y} : k \longrightarrow \mathfrak{n}$ via \[ \mathfrak{x}(\alpha) = x\!+\!\alpha c \ \ \ \text{and} \ \ \ \mathfrak{y}(\alpha) := \left\{\begin{array}{cl} y\!+\!\eta(x)^{-1}\eta(y)\alpha c & \eta(x)\!\ne\!0 \\ y & \eta(x)\!=\!0. \end{array} \right.\] In view of ($\ast$), we may apply Lemma \ref{CSC2} to obtain \[ (x,y) = (\mathfrak{x}(0),\mathfrak{y}(0)) \in \mathfrak{C}(x\!+\!c).\] As a result, $\{x\}\!\times\!C_\mathfrak{n}(x) \subseteq \mathfrak{C}(x\!+\!c)$, whence $\mathfrak{C}(x) \subseteq \mathfrak{C}(x\!+\!c)$. \end{proof} \bigskip \begin{Lem} \label{CSC4} Given $x \in \mathfrak{n}$, let $\mathfrak{v} \subseteq \mathfrak{n}$ be a $G$-submodule such that $G\boldsymbol{.} x \subseteq \mathfrak{v}$. Then the following statements hold: \begin{enumerate} \item If $\mathfrak{C}(x) \in \Irr(\mathcal{C}_2(\mathfrak{n}))$, then $C_\mathfrak{n}(x) \subseteq \mathfrak{v}$. \item If $C_\mathfrak{n}(\mathfrak{v}) \not \subseteq \mathfrak{v}$, then $\mathfrak{C}(x) \not \in \Irr(\mathcal{C}_2(\mathfrak{n}))$. \end{enumerate} \end{Lem} \begin{proof} (1) Since the component $\mathfrak{C}(x)$ is $\GL_2(k)$-stable, we have $C_\mathfrak{n}(x)\!\times\!\{x\} \subseteq \mathfrak{C}(x)$. Thus, \[ C_\mathfrak{n}(x) \subseteq \pr_1(\mathfrak{C}(x)) \subseteq \overline{G\boldsymbol{.} x} \subseteq \mathfrak{v}.\] (2) Let $y \in C_\mathfrak{n}(\mathfrak{v})\!\smallsetminus\!\mathfrak{v}$. Since $x \in \mathfrak{v}$, we have $y \in C_\mathfrak{n}(x)\!\smallsetminus\!\mathfrak{v}$, and our assertion follows from (1). 
\end{proof} \bigskip \subsection{Distinguished elements} Let $\mathfrak{g}\!=\!\Lie(G)$ be the Lie algebra of a connected algebraic group $G$. In the following, we denote by $T(G)$ the maximal torus of $Z(G)$. Note that $T(G)$ is contained in any maximal torus $T \subseteq G$. An element $x \in \mathfrak{g}$ is {\it distinguished} ({\it for $G$}), provided every torus $T \subseteq C_G(x)$ is contained in $T(G)$. If $x$ is distinguished, so is every element of $G\boldsymbol{.} x$. In that case, we say that $G\boldsymbol{.} x$ is a {\it distinguished orbit}. \bigskip \begin{Lem} \label{DE1} Let $x \in \mathfrak{g}$. Then $x$ is distinguished if and only if $C_T(x)^\circ\!=\!T(G)$ for every maximal torus $T \subseteq G$. \end{Lem} \begin{proof} Suppose that $x$ is distinguished. If $T \subseteq G$ is a maximal torus, then $C_T(x)^\circ \subseteq C_G(x)$ is a torus, so that $C_T(x)^\circ \subseteq T(G)$. On the other hand, we have $T(G)\subseteq T$, whence $T(G) \subseteq C_T(x)^\circ$. For the reverse direction, we let $T' \subseteq C_G(x)$ be a torus. Then there is a maximal torus $T\supseteq T'$ of $G$, so that \[ T' \subseteq C_T(x)^\circ = T(G).\] Hence $x$ is distinguished. \end{proof} \bigskip \begin{Lem} \label{DE2} Let $B \subseteq G$ be a Borel subgroup with unipotent radical $U$. We write $\mathfrak{b}:=\Lie(B)$ and $\mathfrak{u}:=\Lie(U)$. \begin{enumerate} \item If $x \in \mathfrak{b}$ is distinguished for $G$, then it is distinguished for $B$. \item If $\mathcal{O} \subseteq \mathfrak{g}$ is a distinguished $G$-orbit, then $\mathcal{O}\cap\mathfrak{u}$ consists of distinguished elements for $B$. \end{enumerate} \end{Lem} \begin{proof} (1) Since $B$ is a Borel subgroup, \cite[(6.2.9)]{Sp98} yields $Z(G)^\circ = Z(B)^\circ$, whence $T(G)=T(B)$. Let $T' \subseteq C_B(x)$ be a torus. Since $x$ is distinguished for $G$, we obtain $T' \subseteq T(G)=T(B)$, so that $x$ is also distinguished for $B$. (2) This follows directly from (1). \end{proof} \bigskip \begin{Lem} \label{DE3} Let $G$ be a connected algebraic group with maximal torus $T$ such that $Z(G) = \bigcap_{\alpha \in R_T}\ker \alpha$. \begin{enumerate} \item If $x \in \mathfrak{g}$ is distinguished, then $\rk(\mathbb{Z}\supp(x))=\rk (\mathbb{Z} R_T)$. \item If $(T\cap C_G(x))^\circ$ is a maximal torus of $C_G(x)$ and $\rk(\mathbb{Z} \supp(x))=\rk (\mathbb{Z} R_T)$, then $x$ is distinguished. \end{enumerate} \end{Lem} \begin{proof} Let $\hat{x} \in \mathfrak{g}$ be an element such that $\supp(\hat{x})=R_T$. By assumption, we have $Z(G)=C_T(\hat{x})$, and Lemma \ref{CSC1} implies \[ \dim Z(G) = \dim T\!-\!\rk(\mathbb{Z} R_T).\] By the same token, \[ \dim C_T(x)\!-\!\dim Z(G) = \rk(\mathbb{Z} R_T)\!-\!\rk(\mathbb{Z} \supp(x))\] for every $x \in \mathfrak{g}$. (1) Let $x \in \mathfrak{g}$ be distinguished. Observing $Z(G) \subseteq T$, we have $Z(G)^\circ = C_T(x)^\circ$. Hence $\rk(\mathbb{Z} R_T)\!=\!\rk(\mathbb{Z} \supp(x))$. (2) We put $\hat{T}:= (T\cap C_G(x))^\circ$. Since $\hat{T} \subseteq C_T(x)^\circ$, we obtain $\hat{T}=C_T(x)^\circ$. Hence $\rk(\mathbb{Z}\supp(x))=\rk (\mathbb{Z} R_T)$ yields $\hat{T}=Z(G)^\circ$, so that $Z(G)^\circ$ is a maximal torus of $C_G(x)$. As a result, the element $x$ is distinguished. \end{proof} \bigskip \noindent Recall that the semisimple rank $\ssrk(G)$ of a reductive group $G$ coincides with the rank of its derived group $(G,G)$. \bigskip \begin{Cor} \label{DE4} Let $B \subseteq G$ be a Borel subgroup of a reductive group $G$, $T \subseteq B$ be a maximal torus. 
If $x \in \mathfrak{b}$ is distinguished for $B$, then \[ \rk(\mathbb{Z} \supp(x))= \ssrk(G).\] \end{Cor} \begin{proof} Let $T \subseteq B$ be a maximal torus. Then $T$ is a maximal torus for $G$ such that $Z(G)= \bigcap_{\alpha \in R_T} \ker\alpha$, cf.\ \cite[(\S 26, Ex.4)]{Hu81}. In view of \cite[(6.2.9)]{Sp98}, we have $\dim Z(G)^\circ = \dim Z(B)^\circ$. Lemma \ref{CSC1} implies \begin{eqnarray*} \dim C_T(x)\!-\!\dim Z(B) & = & \dim C_T(x)\!-\!\dim Z(G) = \rk(\mathbb{Z} R_T)\!-\!\rk(\mathbb{Z} \supp(x)) \\ &= & \ssrk(G)\!-\!\rk(\mathbb{Z}\supp(x)) \end{eqnarray*} for every $x \in \mathfrak{b}$, cf.\ \cite[(II.1.6)]{Ja03}. Let $x \in \mathfrak{b}$ be distinguished for $B$. Then $Z(B)^\circ \subseteq T$ is a maximal torus of $C_B(x)$ and $Z(B)^\circ \subseteq C_T(x) \subseteq C_B(x)$. Thus, $C_T(x)^\circ=Z(B)^\circ$, and the identity above yields $\rk(\mathbb{Z}\supp(x))=\ssrk(G)$. \end{proof} \bigskip \subsection{Modality} \label{S:Mod} Let $G$ be a connected algebraic group acting on an algebraic variety $X$. Given $i \in \mathbb{N}_0$, we put \[ X_{[i]} := \{x \in X \ ; \ \dim G.x = i\}.\] Since $X_{[i]}=\emptyset$ whenever $i\!>\!\dim X$, the set $\mathbb{N}_0(X):=\{ i \in \mathbb{N}_0 \ ; \ X_{[i]}\ne \emptyset\}$ is finite. The set $X_{[i]}$ is locally closed and $G$-stable. If $x \in X_{[i]}$, then $G.x$ is closed in $X_{[i]}$. \bigskip \noindent Suppose that $G$ acts on $X$. Then \[ \modd (G;X):= \max_{i \in \mathbb{N}_0(X)} \dim X_{[i]}\!-\!i\] is called the {\it modality of $G$ on $X$}. For ease of reference, we record the following well-known fact. \bigskip \begin{Lem} \label{Mod1}Suppose that the connected algebraic group $G$ acts on $X$. Then $\modd(G;X)=0$ if and only if $G$ acts on $X$ with finitely many orbits. In this case, $X_{[i]}$ has pure dimension $i$ for every $i \in \mathbb{N}_0(X)$. \end{Lem} \bigskip \begin{Prop} \label{Mod 2}Let $G$ be a connected algebraic group with Lie algebra $\mathfrak{g}$ and such that $\Lie(C_G(x)) = C_\mathfrak{g}(x)$ for all $x \in \mathfrak{g}$. Then we have \[ \dim \mathcal{C}_2(\mathfrak{g})= \dim G\!+\!\modd(G;\mathfrak{g}).\] \end{Prop} \begin{proof} Given $x \in \mathfrak{g}$, the identity $\Lie(C_G(x)) = C_\mathfrak{g}(x)$ implies that the differential \[ \mathfrak{g} \longrightarrow T_x(G\boldsymbol{.} x) \ \ ; \ \ y \mapsto [y,x]\] of the orbit map $g \mapsto g\boldsymbol{.} x$ is surjective, cf. \cite[(2.2)]{Ja04}. In particular, $\rk(\ad x)= \dim G\boldsymbol{.} x$, so that \[ \mathfrak{g}_{(n)} = \mathfrak{g}_{[n]}.\] Hence $\modd(\mathfrak{g})=\modd(G;\mathfrak{g})$, and our assertion follows from Proposition \ref{Pre3}(2). \end{proof} \bigskip \section{Springer Isomorphisms} The technical condition of Proposition \ref{Mod 2} automatically holds in case $\Char(k)\!=\!0$. In this section, we are concerned with its verification for the unipotent radicals of Borel subgroups for good characteristics of $G$. Throughout, we assume that $G$ is a connected reductive group. Following \cite[(2.6)]{Ja04} we say that the characteristic $\Char(k)$ is {\it good for $G$}, provided $\Char(k)=0$ or the prime $p:= \Char(k)\!>\!0$ is a good prime for $G$, see loc.\ cit.\ for more details. \bigskip \begin{Lemma} \label{SpI1} Let $G$ be semisimple with almost simple factors $G_1, \ldots, G_n$. For $i \in \{1,\ldots, n\}$, we let $B_i=U_i\!\rtimes\!T_i$ be a Borel subgroup of $G_i$ with unipotent radical $U_i$ and maximal torus $T_i$. 
Then the following statements hold: \begin{enumerate} \item $B\!:=\!B_1\cdots B_n$ is a Borel subgroup of $G$ with unipotent radical $U\!:=\!U_1\cdots U_n$ and maximal torus $T\!:=\! T_1\cdots T_n$. \item The product morphism \[ \mu_U : \prod_{i=1}^n U_i \longrightarrow U \ \ ; \ \ (u_1,\ldots, u_n) \mapsto u_1\cdot u_2\cdots u_n\] is an isomorphism of algebraic groups. \end{enumerate}\end{Lemma} \begin{proof} We consider the direct product $\hat{G} := \prod_{i=1}^n G_i$ along with the multiplication \[ \mu_G : \hat{G} \longrightarrow G \ \ ; \ \ (g_1,\ldots, g_n) \mapsto g_1\cdot g_2\cdots g_n.\] Since $(G_i,G_j)=\{1\}$ for $i\ne j$, it follows that $\mu_G$ is a surjective homomorphism of algebraic groups, cf.\ \cite[(27.5)]{Hu81}. (1) We put $\hat{B}:= \prod_{i=1}^n B_i$, $\hat{U}:= \prod_{i=1}^n U_i$, and $\hat{T}:= \prod_{i=1}^n T_i$. These three subgroups of $\hat{G}$ are closed and connected. Moreover, they are solvable, unipotent and diagonalizable, respectively. Direct computation shows that $\hat{U}$ is normal in $\hat{B}$ as well as $\hat{B}=\hat{U}\!\rtimes\! \hat{T}$. Let $H\supseteq \hat{B}$ be a connected, closed solvable subgroup of $\hat{G}$. Since the $i$-th projection $\pr_i : \hat{G} \longrightarrow G_i$ is a homomorphism of algebraic groups for $1\!\le\!i\!\le\!n$, it follows that $H_i := \pr_i(H)\supseteq B_i$ is a closed, connected, solvable subgroup of $G_i$. Hence $H_i=B_i$, so that \[ H \subseteq \prod_{i=1}^nH_i = \hat{B}.\] As a result, $\hat{B}$ is a Borel subgroup of $\hat{G}$. In view of \cite[(21.3C)]{Hu81}, $B=\mu_G(\hat{B})$ is a Borel subgroup of $G$. Similarly, $T=\mu_G(\hat{T})$ is a maximal torus of $B$. In addition, $B = \mu_G(\hat{B})=\mu_G(\hat{U}\!\rtimes\!\hat{T})=U\cdot T$. It follows that the unipotent closed normal subgroup $U=\mu_G(\hat{U}) \unlhd B$ is the unipotent radical of $B$. (2) According to \cite[(27.5)]{Hu81}, the product morphism \[ \mu_G : \hat{G} \longrightarrow G\] has a finite kernel. Since $\hat{G}$ is connected, it follows that $\ker \mu_G \subseteq Z(\hat{G})$, while $\hat{G}$ being semisimple forces $Z(\hat{G})$ to be diagonalizable, cf.\ \cite[(II.1.6)]{Ja03}. As a result, the kernel $\ker \mu_U$ is diagonalizable and unipotent, so that $\ker \mu_U\!=\!\{1\}$. Since $\mu_U$ is surjective, the map $\mu_U$ is a bijective morphism of algebraic varieties. Note that $\Lie(\hat{U})\!=\!\bigoplus_{i=1}^n\Lie(U_i)$ and that the differential $\mathsf{d}(\mu_U) : \Lie(\hat{U}) \longrightarrow \Lie(U)$ is given by \[ (x_1,\ldots, x_n) \mapsto \sum_{i=1}^n x_i.\] Let $i\!\ne\!j$. Since $(T_i,U_j)\!=\!\{1\}$, we have $\Ad(t_i)|_{\Lie(U_j)}\!=\!\id_{\Lie(U_j)} \ \ \forall \ t_i \in T_i$. Thus, if $(x_1,\ldots, x_n) \in \ker\mathsf{d}(\mu_U)$, then $\Ad(t)(x_i)\!=\!x_i$ for all $t \in T$ and $i \in \{1,\ldots, n\}$. Using the root space decomposition of $\Lie(U)$ relative to $T$, we conclude that $x_i\!=\!0$ for $i \in \{1,\ldots,n\}$. As a result, the map $\mathsf{d}(\mu_U)$ is injective. Since $\mu_U$ is bijective, we have \[ \dim_k\Lie(U) = \dim_k\Lie(\hat{U}) = \sum_{i=1}^n\dim_k\Lie(U_i),\] so that $\mathsf{d}(\mu_U)$ is an isomorphism. We may now apply \cite[(5.3.3)]{Sp98} to conclude that $\mu_U$ is an isomorphism as well. \end{proof} \bigskip \noindent Let $B \subseteq G$ be a Borel subgroup with unipotent radical $U \unlhd B$. A $B$-equivariant isomorphism \[ \varphi : U \longrightarrow \Lie(U)\] will be referred to as a {\it Springer isomorphism for $B$}.
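\bigskip \noindent By way of illustration, let $G=\SL_n(k)$ and let $B$ and $U$ be the subgroups of upper triangular and upper unitriangular matrices, respectively, so that $\Lie(U)$ is the space of strictly upper triangular matrices. Since $\Ad(b)$ is given by conjugation, the map \[ \varphi : U \longrightarrow \Lie(U) \ \ ; \ \ u \mapsto u\!-\!1\] satisfies $\varphi(bub^{-1})=b(u\!-\!1)b^{-1}=\Ad(b)(\varphi(u))$ for all $b \in B$ and $u \in U$, while $x \mapsto 1\!+\!x$ provides an inverse morphism. Hence $\varphi$ is a Springer isomorphism for $B$.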
\bigskip \noindent Springer isomorphisms first appeared in \cite{Sp69} in the context of semisimple algebraic groups, providing a homeomorphism between the unipotent variety of a group and the nilpotent variety of its Lie algebra. Our next result extends \cite[(2.2),(4.2)]{Go06} to the context of reductive groups. \bigskip \begin{Proposition} \label{SpI2} Suppose that $\Char(k)$ is good for $G$. Let $B \subseteq G$ be a Borel subgroup with unipotent radical $U$ and put $\mathfrak{u}:= \Lie(U)$. \begin{enumerate} \item There is a Springer isomorphism $\varphi : U \longrightarrow \mathfrak{u}$. \item We have $\Lie(C_U(x))=C_\mathfrak{u}(x)$ for every $x \in \mathfrak{u}$. \end{enumerate}\end{Proposition} \begin{proof} (1) We first assume that $G$ is semisimple, so that $G=G_1\cdots G_n$, where $G_i \unlhd G$ is almost simple. As before, we put $\hat{G} := \prod_{i=1}^nG_i$. Then every Borel subgroup of $\hat{G}$ is of the form $\hat{B}= \prod_{i=1}^nB_i$ for some Borel subgroups $B_i \subseteq G_i$. Hence \cite[(21.3C)]{Hu81} ensures that there exist Borel subgroups $B_i = U_i\!\rtimes\!T_i$ of $G_i$ such that $B=B_1\cdots B_n$ and $U=U_1\cdots U_n$. We put $\mathfrak{u}_i:=\Lie(U_i)$. As noted in \cite[(2.2)]{Go06}, there are Springer isomorphisms $\varphi_i : U_i \longrightarrow \mathfrak{u}_i$ for $1\!\le\!i\!\le\!n$. We define $\hat{B}$ and $\hat{U}$ as in the proof of Lemma \ref{SpI1} and consider the product morphisms \[ \mu_B : \hat{B} \longrightarrow B \ \ \text{and} \ \ \mu_U : \hat{U} \longrightarrow U.\] Then $\Lie(\hat{U})=\bigoplus_{i=1}^n\mathfrak{u}_i$ and \[ \hat{\varphi} : \hat{U} \longrightarrow \Lie(\hat{U}) \ \ ; \ \ (u_1,\ldots,u_n) \mapsto (\varphi_1(u_1),\ldots,\varphi_n(u_n))\] is a $\hat{B}$-equivariant isomorphism of varieties. Lemma \ref{SpI1} implies that $\mu_U: \hat{U} \longrightarrow U$ is an isomorphism of algebraic groups such that \[ \mu_U(\hat{b}\hat{u}\hat{b}^{-1}) = \mu_B(\hat{b})\mu_U(\hat{u})\mu_B(\hat{b})^{-1} \] for all $\hat{b} \in \hat{B}$ and $\hat{u} \in \hat{U}$. Moreover, the differential \[\mathsf{d}(\mu_U): \Lie(\hat{U}) \longrightarrow \mathfrak{u}\] is an isomorphism such that \[ \mathsf{d}(\mu_U)(\Ad\hat{b}(x)) = \Ad(\mu_B(\hat{b}))(\mathsf{d} (\mu_U)(x))\] for all $\hat{b} \in \hat{B}$ and $x \in \Lie(\hat{U})$. Consequently, $\varphi := \mathsf{d}(\mu_U)\circ \hat{\varphi} \circ \mu_U^{-1}$ defines an isomorphism \[ \varphi : U \longrightarrow \mathfrak{u}.\] For $b=\mu_B(\hat{b}) \in B$ and $u \in U$, we obtain, writing $b\boldsymbol{.} x:= \Ad(b)(x)$, \[ \varphi(bub^{-1}) = (\mathsf{d}(\mu_U) \circ \hat{\varphi})(\hat{b}\mu_U^{-1}(u)\hat{b}^{-1}) = \mathsf{d}(\mu_U) (\hat{b}\boldsymbol{.} \hat{\varphi}(\mu_U^{-1}(u))) = b\boldsymbol{.}\varphi(u),\] as desired. Now let $G$ be reductive. Then $G':=(G,G)$ is semisimple, while $G=G'\cdot Z(G)^\circ$, with $Z(G)^\circ$ being a torus. Let $B \subseteq G$ be a Borel subgroup. Since $Z(G)^\circ \subseteq B$, we obtain $B=(B\cap G')Z(G)^\circ$, and $B$ being connected implies that $B=(B\cap G')^\circ Z(G)^\circ$. Let $B'\supseteq (B\cap G')^\circ$ be a Borel subgroup of $G'$. Then $B'Z(G)^\circ$ is a closed, connected, solvable subgroup of $G$ containing $B$, whence $B=B'Z(G)^\circ$. As a result, $B'\subseteq B\cap G'$, so that $B'=(B\cap G')^\circ$. Let $U$ be the unipotent radical of $B$. Since $Z(G)^\circ \twoheadrightarrow G/G'$ is onto, the latter group is diagonalizable, so that the canonical morphism $U \longrightarrow G/G'$ is trivial. 
As a result, $U \subseteq G'$, whence $U\subseteq (B\cap G')^\circ$. If $U'$ is the unipotent radical of $(B\cap G')^\circ$, then $B=(B\cap G')^\circ Z(G)^\circ$ implies that $U'$ is normal in $B$, whence $U'\subseteq U$. It follows that $U$ is the unipotent radical of the Borel subgroup $(B\cap G')^\circ$ of $G'$. The first part of the proof now provides a $(B\cap G')^\circ$-equivariant isomorphism $\varphi : U \longrightarrow \mathfrak{u}$. Since $Z(G)$ acts trivially on both spaces, this map is also $B$-equivariant. (2) In view of (1), the arguments of \cite[(4.2)]{Go06} apply. \end{proof} \bigskip \section{Commuting varieties of unipotent radicals} Throughout this section, $G$ denotes a connected reductive algebraic group. If $B$ is a Borel subgroup of $G$ with unipotent radical $U$, then $B$ acts on $\mathfrak{u}:=\Lie(U)$ via the adjoint representation. Hence $B$ also acts on the commuting variety $\mathcal{C}_2(\mathfrak{u})$, and for every $x \in \mathfrak{u}$ we consider \[ \mathfrak{C}(x):=\overline{B\boldsymbol{.} (\{x\}\!\times\!C_\mathfrak{u}(x))}.\] As observed earlier, we have \[ \mathfrak{C}(x) = \mathfrak{C}(b\boldsymbol{.} x) \ \ \ \ \ \ \ \ \ \forall \ b \in B, x \in \mathfrak{u}.\] \bigskip \subsection{The dimension formula} \begin{Lem} \label{Df1} Let $B \subseteq G$ be a Borel subgroup with unipotent radical $U \subseteq B$, $x \in \mathfrak{u}:= \Lie(U)$. \begin{enumerate} \item There exists a maximal torus $T \subseteq B$ such that \begin{enumerate} \item $C_B(x)^\circ=C_U(x)^\circ\!\rtimes\!C_T(x)^\circ$, and \item $\mathfrak{C}(x)$ is irreducible of dimension \[ \dim \mathfrak{C}(x) = \dim B\!-\!\dim C_T(x)\] whenever $\Char(k)$ is good for $G$. \end{enumerate} \item If $\Char(k)$ is good for $G$, then we have \[ \dim \mathfrak{C}(x)=\dim B\!-\!\dim Z(G)\] if and only if $x$ is distinguished for $B$. \end{enumerate} \end{Lem} \begin{proof} (1a) Let $T' \subseteq C_B(x)^\circ$ be a maximal torus, $T\supseteq T'$ be a maximal torus of $B$. We write $B=U\!\rtimes\!T$ and recall that $U=B_u$ is the set of unipotent elements of $B$, see \cite[(6.3.3),(6.3.5)]{Sp98}. Thus, $C_U(x)^\circ= C_B(x)^\circ_u=B_u\cap C_B(x)^\circ$ is the unipotent radical of $C_B(x)^\circ$. Since $T'\subseteq C_T(x)^\circ$, while the latter group is a torus of $C_B(x)^\circ$, it follows that $T'=C_T(x)^\circ$. General theory (cf.\ \cite[(6.3.3),(6.3.5)]{Sp98}) now yields \[ C_B(x)^\circ = C_B(x)^\circ_u\!\rtimes\!T' = C_U(x)^\circ\!\rtimes\!C_T(x)^\circ,\] as asserted. (1b) Since $\{x\}\!\times\!C_\mathfrak{u}(x)$ is irreducible, so is the closure $\mathfrak{C}(x)$ of its $B$-saturation. Consider the dominant morphism \[ \omega : B\!\times\!C_\mathfrak{u}(x) \longrightarrow \mathfrak{C}(x) \ \ ; \ \ (b,y) \mapsto (b\boldsymbol{.} x,b\boldsymbol{.} y).\] We fix $(b_0\boldsymbol{.} x,b_0\boldsymbol{.} y_0) \in \im \omega$. Then \[ \zeta : C_B(x) \longrightarrow \omega^{-1}(b_0\boldsymbol{.} x,b_0\boldsymbol{.} y_0) \ \ ; \ \ c \mapsto (b_0c,c^{-1}\boldsymbol{.} y_0)\] is a morphism with inverse morphism \[ \eta : \omega^{-1}(b_0\boldsymbol{.} x,b_0\boldsymbol{.} y_0) \longrightarrow C_B(x) \ \ ; \ \ (b,y) \mapsto b_0^{-1}b.\] As a result, $\dim\omega^{-1}(b_0\boldsymbol{.} x,b_0\boldsymbol{.} y_0)=\dim C_B(x)$, and the fiber dimension theorem gives \[ \dim \mathfrak{C}(x) = \dim B\!+\!\dim C_\mathfrak{u}(x)\!-\!\dim C_B(x).\] In view of Proposition \ref{SpI2}(2), we have $\Lie(C_U(x))=C_\mathfrak{u}(x)$. 
Consequently, \[ \dim \mathfrak{C}(x) = \dim B\!+\!\dim C_U(x)^\circ\!-\!\dim C_B(x)^\circ,\] and the assertion now follows from (1a). (2) Suppose that $\dim \mathfrak{C}(x)\!=\!\dim B\!-\!\dim Z(G)$. Part (1) provides a maximal torus $T \subseteq B$ such that $\dim C_T(x)\!=\!\dim Z(G)$. This readily implies $C_T(x)^\circ\!=\!Z(G)^\circ$, so that $C_B(x)^\circ\!=\!Z(G)^\circ\!\ltimes\!C_U(x)^\circ$. In particular, $Z(G)^\circ$ is the unique maximal torus of $C_B(x)^\circ$, so that $x$ is distinguished for $B$. Suppose that $x$ is distinguished for $B$. Let $T\subseteq B$ be a maximal torus such that $C_T(x)^\circ$ is a maximal torus of $C_B(x)^\circ$. It follows that $C_T(x)^\circ\! =\!Z(G)^\circ$, whence $\dim \mathfrak{C}(x)\!=\!\dim B\!-\!\dim Z(G)$. \end{proof} \bigskip \begin{Thm} \label{Df2} Suppose that $\Char(k)$ is good for $G$ and let $B \subseteq G$ be a Borel subgroup of $G$, $U \subseteq B$ be its unipotent radical, $\mathfrak{u}:=\Lie(U)$. Then we have \[ \dim\mathcal{C}_2(\mathfrak{u})=\dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u}).\] \end{Thm} \begin{proof} We first assume that $G$ is almost simple, so that $\dim Z(G)\!=\!0$. Thanks to \cite[Thm.10]{GMR}, we have \[ \modd(U;\mathfrak{u})=\modd(B;\mathfrak{u})\!+\!\rk(G),\] so that a consecutive application of Proposition \ref{SpI2} and Proposition \ref{Mod 2} implies \[ \dim \mathcal{C}_2(\mathfrak{u})= \dim U\!+\!\modd(U;\mathfrak{u}) = \dim U\!+\!\rk(G)\!+\!\modd(B;\mathfrak{u}) = \dim B\!+\!\modd(B;\mathfrak{u}).\] Next, we assume that $G$ is semisimple with almost simple constituents $G_1,\ldots, G_n$, say. There are Borel subgroups $B_i \subseteq G_i$ of $G_i$ with unipotent radicals $U_i$ such that $B\!=\!B_1\cdots B_n$ and $U\!=\!U_1\cdots U_n$. Let $\mathfrak{u}:=\Lie(U)$ and $\mathfrak{u}_i:=\Lie(U_i)$. Lemma \ref{SpI1} provides an isomorphism $U\cong \prod_{i=1}^n U_i$, so that $\mathfrak{u} = \bigoplus_{i=1}^n \mathfrak{u}_i$. If $x=\sum_{i=1}^nx_i \in \mathfrak{u}$, then $B\boldsymbol{.} x =\prod_{i=1}^n B_i\boldsymbol{.} x_i$, so that $\dim B\boldsymbol{.} x = \sum_{i=1}^n \dim B_i\boldsymbol{.} x_i$. This readily implies \[\mathfrak{u}_{[j]} := \{x \in \mathfrak{u} \ ; \ \dim B\boldsymbol{.} x\!=\!j\} = \bigcup_{\{m \in \mathbb{N}_0^n ; |m|=j\}} \prod_{i=1}^n(\mathfrak{u}_i)_{[m_i]} \ \ \ \ \ \ \forall \ j \in \mathbb{N}_0,\] where we put $|m|\!:=\!\sum_{i=1}^n m_i$ for $m \in \mathbb{N}_0^n$. Consequently, \[ \dim \mathfrak{u}_{[j]} = \max\{ \sum_{i=1}^n \dim (\mathfrak{u}_i)_{[m_i]} \ ; \ m \in \mathbb{N}_0^n \ , \ |m|\!=\!j \} \ \ \ \ \forall \ j \in \mathbb{N}_0.\] As a result, \begin{eqnarray*} \modd(B;\mathfrak{u}) &=& \max_{j\ge 0} \max \{ \sum_{i=1}^n \dim (\mathfrak{u}_i)_{[m_i]} \ ; \ m \in \mathbb{N}_0^n \ ; \ |m|\!=\!j\}\!-\!j \\ &= & \max_{j\ge 0} \max\{ \sum_{i=1}^n \dim (\mathfrak{u}_i)_{[m_i]} \!-\!m_i \ ; \ m \in \mathbb{N}_0^n \ ; \ |m|\!=\!j\} \\ &= & \max _{m \in \mathbb{N}_0^n} \sum_{i=1}^n (\dim (\mathfrak{u}_i)_{[m_i]} \!-\!m_i) = \sum_{i=1}^n \max_{m_i \ge 0} (\dim (\mathfrak{u}_i)_{[m_i]} \!-\!m_i) = \sum_{i=1}^n \modd(B_i;\mathfrak{u}_i). \end{eqnarray*} Since $\mathcal{C}_2(\mathfrak{u}) \cong \prod_{i=1}^n \mathcal{C}_2(\mathfrak{u}_i)$, we arrive at \[ \dim \mathcal{C}_2(\mathfrak{u}) = \sum_{i=1}^n\dim\mathcal{C}_2(\mathfrak{u}_i) = \sum_{i=1}^n \dim B_i\!+\!\modd(B_i;\mathfrak{u}_i) = \dim B\!+\!\modd(B;\mathfrak{u}),\] as desired. If $G$ is reductive, then $G=Z(G)^\circ G'$, with $G':=(G,G)$ being semisimple and $Z(G)^\circ$ being a torus. 
By the arguments of Proposition \ref{SpI2}, $B':=(B\cap G')^\circ$ is a Borel subgroup of $G'$ with unipotent radical $U$ and such that $B=B'Z(G)^\circ$ with $Z(G)\cap B'$ being finite. It follows that \[ B\boldsymbol{.} x= B'\boldsymbol{.} x\] for all $x \in \mathfrak{u}$, and the identities \[ \dim \mathcal{C}_2(\mathfrak{u}) = \dim B'\!+\!\modd(B';\mathfrak{u})= \dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u})\] verify our claim. \end{proof} \bigskip \noindent We denote by $\mathcal{O}_{\rm reg} \subseteq \mathfrak{g}$ the regular nilpotent $G$-orbit. \bigskip \begin{Lem} \label{Df3} Suppose that $\Char(k)$ is good for $G$. Given $x \in \mathcal{O}_\mathrm{reg}\cap\mathfrak{u}$, $\mathfrak{C}(x)$ is an irreducible component of $\mathcal{C}_2(\mathfrak{u})$ of dimension $\dim B\!-\!\dim Z(G)$. \end{Lem} \begin{proof} By general theory, $\mathcal{O}_{\rm reg}\cap\mathfrak{u}$ is an open $B$-orbit of $\mathfrak{u}$, cf.\ \cite[(5.2.3)]{Ca}. Consequently, $\mathcal{O}_{\rm reg}\cap \mathfrak{u}_{(\max \mathbb{N}_0(\mathfrak{u}))}$ is a non-empty subset of $\mathfrak{u}$. Since $\mathfrak{u}_{(\max\mathbb{N}_0(\mathfrak{u}))}$ is a $B$-stable subset of $\mathfrak{u}$, it follows that $\mathcal{O}_{\rm reg}\cap\mathfrak{u} \subseteq \mathfrak{u}_{(\max \mathbb{N}_0(\mathfrak{u}))}$. Let $x \in \mathcal{O}_{\rm reg}\cap\mathfrak{u}$. Then $B\boldsymbol{.} x \subseteq \mathfrak{u}_{(\max\mathbb{N}_0(\mathfrak{u}))}$ is open in $\mathfrak{u}$, so that $\pr_1^{-1}(B\boldsymbol{.} x)$ is open in $\mathcal{C}_2(\mathfrak{u})$. Corollary \ref{Pre4} now shows that $\pr_1^{-1}(B\boldsymbol{.} x)$ is an open subset of the irreducible component $\overline{\pr_1^{-1}(\mathfrak{u}_{(\max\mathbb{N}_0(\mathfrak{u}))})}$ of $\mathcal{C}_2(\mathfrak{u})$. Consequently, \[ \mathfrak{C}(x) = \overline{\pr_1^{-1}(B\boldsymbol{.} x)} = \overline{\pr_1^{-1}(\mathfrak{u}_{(\max\mathbb{N}_0(\mathfrak{u}))})}\] is an irreducible component of $\mathcal{C}_2(\mathfrak{u})$. Since the element $x$ is distinguished for $G$, Lemma \ref{DE2} shows that it is also distinguished for $B$. We may now apply Lemma \ref{Df1} to see that $\dim \mathfrak{C}(x)\!=\!\dim B\!-\!\dim Z(G)$. \end{proof} \bigskip \begin{Remarks} (1) The foregoing result in conjunction with Theorem \ref{Df2} implies that $\mathcal{C}_2(\mathfrak{u})$ is equidimensional only if $B$ acts on $\mathfrak{u}$ with finitely many orbits. (2) It also follows from the above and Corollary \ref{Pre4} that $\max\mathbb{N}_0(\mathfrak{u})\!=\!\dim_k\mathfrak{u}\!-\!\ssrk(G)$. \end{Remarks} \bigskip \subsection{Minimal supports} As before, we let $G$ be a connected reductive algebraic group, with Borel subgroup $B\! =\! U\!\rtimes\!T$. The corresponding Lie algebras will be denoted $\mathfrak{g}$, $\mathfrak{b}$ and $\mathfrak{u}$. Let $R_T$ be the root system of $G$ relative to $T$, $\Delta:=\{\alpha_1,\ldots,\alpha_n\} \subseteq R_T$ be a set of simple roots.
Given $\alpha = \sum_{i=1}^nm_i\alpha_i \in R_T$, we denote by $\height(\alpha)=\sum_{i=1}^nm_i$ the {\it height} of $\alpha$ (relative to $\Delta$), and put for $x \in \mathfrak{u}\!\smallsetminus\!\{0\}$ \[ \deg(x):= \min\{ \height(\alpha) \ ; \ \alpha \in \supp(x)\}\] as well as \[ \msupp (x):= \{\alpha \in \supp(x) \ ; \ \height(\alpha)=\deg(x)\}.\] Given $n \in \mathbb{N}_0$, we put \[ \mathfrak{u}^{(\ge n)} := \langle \{ x \in \mathfrak{u} \ ; \ \deg(x) \ge n\}\rangle.\] \bigskip \begin{Lem} \label{Ms1} Given $x \in \mathfrak{u}\!\smallsetminus\!\{0\}$, we have $\deg(b\boldsymbol{.} x) =\deg(x)$ and $\msupp(b\boldsymbol{.} x) = \msupp(x)$ for all $b \in B$. \end{Lem} \begin{proof} For $u \in U$ we consider the morphism \[ \Phi_u : U \longrightarrow U \ \ ; \ \ v \mapsto [u,v],\] where $[u,v]:=uvu^{-1}v^{-1}$ denotes the commutator of $u$ and $v$. According to \cite[(4.4.13)]{Sp98}, we have \[ \mathsf{d}(\Phi_u)(x) = u\boldsymbol{.} x\!-\!x \ \ \ \ \ \ \forall \ x \in \mathfrak{u}.\] Given a positive root $\alpha \in R_T^+$, we consider the root subgroup $U_\alpha$ of $U$. For $u \in U_\alpha$ and $\beta \in R^+_T$, an application of \cite[(8.2.3)]{Sp98} shows that \[ \Phi_u(U_\beta) \subseteq \prod_{i,j>0} U_{i\alpha+j\beta}.\] Let $x \in \mathfrak{u}\!\smallsetminus\!\{0\}$ and put $d:= \deg(x)$. Since $\mathfrak{u}_\beta = \Lie(U_\beta)$, the foregoing observations in conjunction with \cite[(8.2.1)]{Sp98} yield \[ \Ad(u)(x) \equiv x \ \ \ \ \ \modd(\mathfrak{u}^{(\ge d+1)}) \ \ \ \ \ \ \ \ \ \forall \ u \in U.\] Thus, $\mathfrak{u}^{(\ge n)}$ is a $U$-submodule of $\mathfrak{u}$ for all $n\!\ge\!1$, and $U$ acts trivially on $\mathfrak{u}^{(\ge n)}/\mathfrak{u}^{(\ge n+1)}$. Now write $x = \sum_{\alpha \in \msupp(x)} x_\alpha\!+\!x'$, where $x' \in \mathfrak{u}^{(\ge d+1)}$. Given $b \in B$, there are $t \in T$ and $u \in U$ such that $b=tu$. By the above, we obtain \[ b\boldsymbol{.} x \equiv \sum_{\alpha \in \msupp(x)} \alpha(t)x_\alpha \ \ \ \ \ \ \ \modd(\mathfrak{u}^{(\ge d+1)}),\] whence $\deg(b\boldsymbol{.} x) = \deg(x)$ and $\msupp(b\boldsymbol{.} x) = \msupp(x)$. \end{proof} \bigskip \noindent Let $\mathcal{O} \subseteq \mathfrak{u}$ be a $B$-orbit. In view of Lemma \ref{Ms1}, we may define \[ \msupp(\mathcal{O}) := \msupp(x) \ \ \ \ \ \ (x \in \mathcal{O}).\] \bigskip \subsection{The case $\mathbf{\modd(B;\mathfrak{u})\!=\!0}$} The case where $B$ acts on $\mathfrak{u}$ with finitely many orbits is governed by the Theorem of Hille-R\"ohrle \cite[(1.1)]{HR}, which takes on the following form in our context: \bigskip \begin{Prop} \label{fmod1} Suppose that $\Char(k)$ is good for $G$. Then $\modd(B;\mathfrak{u})=0$ if and only if every almost simple constituent of $(G,G)$ is of type $(A_n)_{n\le4}$ or $B_2$. \end{Prop} \begin{proof} Returning to the proof of Theorem \ref{Df2}, we let $G_1, \ldots, G_n$ be the almost simple constituents of $(G,G)$ and pick Borel subgroups $B_i$ of $G_i$, with unipotent radicals $U_i$. Then \[ B:= Z(G)^\circ B_1\cdots B_n \] is a Borel subgroup of $G$ with unipotent radical $U:= U_1\cdots U_n$. Setting $\mathfrak{u}:=\Lie(U)$ and $\mathfrak{u}_i:=\Lie(U_i)$, we have \[ \modd(B;\mathfrak{u}) = \sum_{i=1}^n\modd(B_i;\mathfrak{u}_i),\] so that \cite[(1.1)]{HR} yields the result. \end{proof} \bigskip \begin{Lem} \label{fmod2} Suppose that $\modd(B;\mathfrak{u})\!=\!0$.
If $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$, then there is a unique orbit $\mathcal{O}_C \subseteq \pr_1(C)$ such that \begin{enumerate} \item[(a)] $\mathcal{O}_C$ is dense and open in $\pr_1(C)$, and \item[(b)] $C= \mathfrak{C}(x)$ for all $x \in \mathcal{O}_C$. \end{enumerate} \end{Lem} \begin{proof} Since the component $C$ is $B$-stable, so is the closed subset $\pr_1(C) \subseteq \mathfrak{u}$, cf.\ Lemma \ref{Pre2}. By assumption, $B$ thus acts with finitely many orbits on the irreducible variety $\pr_1(C)$. Hence there is a $B$-orbit $\mathcal{O}_C \subseteq \pr_1(C)$ such that $\overline{\mathcal{O}}_C = \pr_1(C)$. Consequently, $\mathcal{O}_C$ is open in $\pr_1(C)$. The unicity of $\mathcal{O}_C$ follows from the irreducibility of $\pr_1(C)$. Let $x \in \mathcal{O}_C$, so that $\mathcal{O}_C=B\boldsymbol{.} x$. Then there is $y \in \mathfrak{u}$ such that $(x,y) \in C$. In particular, $y \in C_\mathfrak{u}(x)$, so that $(x,y) \in B\boldsymbol{.} (\{x\}\!\times\!C_\mathfrak{u}(x))= \pr_1^{-1}(\mathcal{O}_C)$. Thanks to (a), $\pr_1^{-1}(\mathcal{O}_C)$ is open in $\pr_1^{-1}(\pr_1(C))$. It follows that $(B\boldsymbol{.} (\{x\}\!\times\!C_\mathfrak{u}(x)))\cap C$ is a non-empty open subset of $C$, so that \[ C = \overline{(B\boldsymbol{.} (\{x\}\!\times\!C_\mathfrak{u}(x)))\cap C} \subseteq \mathfrak{C}(x).\] Since the latter set is irreducible, while $C$ is a component, we have equality. \end{proof} \bigskip \begin{Remarks} (1) \ The Lemma holds more generally for each $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$ with $\modd(B;\pr_1(C))\!=\!0$. (2) \ Suppose that $\modd(B;\mathfrak{u})\!=\!0$. In view of Theorem \ref{Df2} and Lemma \ref{Df1}, each distinguished $B$-orbit $B\boldsymbol{.} x$ gives rise to an irreducible component $\mathfrak{C}(x)$ of maximal dimension. \end{Remarks} \bigskip \noindent Suppose that $\modd(B;\mathfrak{u})\!=\!0$. Using Lemma \ref{fmod2} we define \[ \msupp(C) = \msupp(\mathcal{O}_C)\] for every $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$. \bigskip \section{Almost simple groups}\label{S:Asg} The purpose of this technical section is the proof of the following result, which extends \cite[\S3]{GR} to good characteristics. \bigskip \begin{Proposition} \label{Asg1} The following statements hold: \begin{enumerate} \item If $G$ has type $(A_n)_{n\le 4}$, then $\mathcal{C}_2(\mathfrak{u})$ is equidimensional and \[ |\Irr(\mathcal{C}_2(\mathfrak{u}))| = \left\{ \begin{array}{cc} 5 & n\!=\!4 \\ 2 & n\!=\!3 \\ 1 & \text{else.} \end{array} \right.\] \item If $\Char(k)\!\ne\!2$ and $G$ has type $B_2$, then $\mathcal{C}_2(\mathfrak{u})$ is equidimensional and $|\Irr(\mathcal{C}_2(\mathfrak{u}))|=2$. \end{enumerate} \end{Proposition} \bigskip \noindent For $G$ as above, the Borel subgroup $B \subseteq G$ acts on $\mathfrak{u}$ with finitely many orbits. We let $\mathfrak{R} \subseteq \mathfrak{u}$ be a set of orbit representatives, so that \[ \mathcal{C}_2(\mathfrak{u}) = \bigcup_{ x \in \mathfrak{R}} \mathfrak{C}(x)\] is a finite union of closed irreducible subsets. We will determine in each case the set $\{ x \in \mathfrak{R} \ ; \ \mathfrak{C}(x) \in \Irr(\mathcal{C}_2(\mathfrak{u}))\}$. A list of orbit representatives is given in \cite[\S 3]{GR} and we will follow the notation established there. \bigskip \subsection{Special linear groups}\label{S:SL} Let $G=\SL_{n+1}(k)$ and $\mathfrak{g} = \mathfrak{sl}_{n+1}(k)$, where $1\!\le\!n\!\le\!4$.
Moreover, $B,T,U$ denote the standard subgroups of upper triangular, diagonal, and upper unitriangular matrices, respectively. For $i\!\le\!j \in \{1,\ldots, n\!+\!1\}$, we let $E_{i,j}$ be the $(i,j)$-elementary matrix, so that \[ \mathfrak{u}:= \bigoplus_{i<j}kE_{i,j}\] is the Lie algebra of the unipotent radical $U$ of $B$. We denote the set of simple roots by $\Delta :=\{\alpha_1,\ldots,\alpha_n\}$. Let $i\!<\!j\!\le\! n\!+\!1$. Then $E_{i,j}$ is the root vector corresponding to the root $\alpha_{i,j}:=\sum_{\ell=i}^{j-1}\alpha_\ell$. We therefore have $\alpha_i=\alpha_{i,i+1}$ for $1\!\le\!i\!\le \!n$ and \[ R_T^+ := \{ \alpha_{i,j} \ ; \ 1\!\le\!i\!<\!j\!\le\!n\!+\!1\}\] is the set of roots of $\mathfrak{u}$ relative to $T$. (The set of positive roots of $\mathfrak{sl}_{n+1}(k)$.) Recall that \[ E_{i,j}E_{r,s} = \delta_{j,r}E_{i,s},\] as well as \[ [E_{i,j},E_{r,s}] = \delta_{j,r}E_{i,s}\!-\!\delta_{s,i}E_{r,j} \ \ \ \ \ \ \text{for all} \ i,j,r,s \in \{1,\ldots,n\!+\!1\}. \] Let $\alpha = \alpha_{i,j}$ be a positive root. Then \[ U_\alpha := \{ 1\!+\! aE_{i,j} \ ; \ a \in k\}\] is the corresponding root subgroup of $U$, and the formula above implies \[ \Ad(1\!+\! aE_{i,j})(x) = (1\!+\! aE_{i,j})x(1\!-\! aE_{i,j}) = x\!+\!a[E_{i,j},x]\] for all $x \in \mathfrak{u}$. Note that $A:=\{(a_{ij}) \in \Mat_{n+1}(k) \ ; \ a_{ij}=0 \ \text{for} \ i\!>\!j\}$ is a subalgebra of the associative algebra $\Mat_{n+1}(k)$. We consider the linear map \[ \zeta : A \longrightarrow A \ \ ; \ \ E_{i,j} \mapsto E_{n+2-j,n+2-i}.\] Then we have \begin{enumerate} \item[(a)] $\zeta(ab) = \zeta(b)\zeta(a)$ for all $a,b \in A$, and \item[(b)] $\det(\zeta(a)) = \det(a)$ for all $a \in A$. \end{enumerate} There results a homomorphism \[ \tau : B \longrightarrow B \ \ ; \ \ a \mapsto \zeta(a)^{-1}\] of algebraic groups such that $\tau(U)=U$. We write $\mathfrak{b}:=\Lie(B)$ and put $\Upsilon := \mathsf{d}(\tau)|_\mathfrak{u}$. As $\zeta$ is linear, \cite[(4.4.12)]{Sp98} implies that \[\Upsilon(E_{i,j}) = -E_{n+2-j,n+2-i} \ \ \ \ \ \ 1\!\le\!i\!<\!j\!\le\!n\!+\!1.\] Thus, $\Upsilon$ is an automorphism of $\mathfrak{u}$ of order $2$ such that \[ \Upsilon(\mathfrak{u}_{\alpha_{ij}})= \mathfrak{u}_{\alpha_{n+2-j,n+2-i}}.\] Since $\Delta$ is a basis for the root lattice $\mathbb{Z} R_T^+ = \mathbb{Z} R_T$, there is an automorphism $\sigma : \mathbb{Z} R_T^+ \longrightarrow \mathbb{Z} R_T^+$ of order $2$ such that \[\sigma(\alpha_i) = \alpha_{n+1-i} \ \ \ \ \ \ \ 1\!\le\!i\!\le\!n.\] Thus, $\sigma(R_T^+) = R_T^+$, and \[ \Upsilon(\mathfrak{u}_\alpha)= \mathfrak{u}_{\sigma(\alpha)} \ \ \ \ \ \ \forall \ \alpha \in R_T^+.\] We denote by $(\mathfrak{u}^n)_{n \in \mathbb{N}}$ the descending series of the nilpotent Lie algebra $\mathfrak{u}$, which is inductively defined via $\mathfrak{u}^1:=\mathfrak{u}$ and $\mathfrak{u}^{n+1} := [\mathfrak{u},\mathfrak{u}^n]$. Note that $\mathfrak{u}^n = \mathfrak{u}^{(\ge n)}$ for all $n\!\ge\!1$. \bigskip \begin{Lem} \label{SL1} Let $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$. Then we have \[ \msupp([\Upsilon\!\times\!\Upsilon](C))=\sigma(\msupp(C)).\] \end{Lem} \begin{proof} We put $\mathcal{O}_C = B\boldsymbol{.} x$. 
In view of $\Upsilon = \mathsf{d}(\tau)|_\mathfrak{u}$, we have \[ \Upsilon(b\boldsymbol{.} x) = \tau(b)\boldsymbol{.} \Upsilon(x) \ \ \ \ \ \ \ \forall \ b \in B, x \in \mathfrak{u}.\] Consequently, \[ \Upsilon(\mathcal{O}_C) = \Upsilon(B\boldsymbol{.} x) = B\boldsymbol{.} \Upsilon(x)\] is an open orbit of $\Upsilon(\pr_1(C)) = \pr_1([\Upsilon\!\times\!\Upsilon](C))$, so that \[ \mathcal{O}_{[\Upsilon\!\times\!\Upsilon](C)} = \Upsilon(\mathcal{O}_C).\] Setting $d\!:=\!\deg(x)$, we have \[ x \equiv \sum_{\alpha \in \msupp(x)} x_\alpha \ \ \ \ \ \ \modd \mathfrak{u}^{(\ge d+1)}.\] Thus, \[ \Upsilon(x) \equiv \sum_{\alpha \in \msupp(x)} -x_{\sigma(\alpha)} \ \ \ \ \ \modd \mathfrak{u}^{(\ge d+1)},\] whence \[\msupp([\Upsilon\!\times\!\Upsilon](C)) = \msupp(\Upsilon(x)) = \sigma(\msupp(x)) = \sigma(\msupp(C)),\] as desired. \end{proof} \bigskip \begin{Remark} The list of orbit representatives for the case $A_4$ given in \cite[(3.4)]{GR} contains some typographical errors, which we correct as follows: \begin{enumerate} \item[(a)] In the form stated loc.\ cit., the element $e_3$ satisfies $\rk(\mathbb{Z}\supp(e_3))\!=\!3$, so that it is not distinguished, see Corollary \ref{DE4}. We write $e_3 = 11010{\bf 1} 0000$, so that $e_3 = \Upsilon(e_7)$. \item[(b)] In \cite[(3.4)]{GR}, we have $e_4\!=\!e_5$. We put $e_4:=1101000000$ (the element, $e_3$ of \cite[(3.4)]{GR}), so that $e_4=\Upsilon(e_8)$. \end{enumerate} \end{Remark} \bigskip \begin{Lem} \label{SL2} Suppose that $\Char(k)\!\ne\!2$. Let $G\!=\!\SL_5(k)$. Then $\mathcal{C}_2(\mathfrak{u})$ is equidimensional and $|\Irr(\mathcal{C}_2(\mathfrak{u}))|\!=\!5$. \end{Lem} \begin{proof} Let $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$ be a component and pick $x \in \mathcal{O}_C$, so that $C=\mathfrak{C}(x)$, cf.\ Lemma \ref{fmod2}. We consider \[ S_C := \msupp(C)\cup\msupp([\Upsilon\!\times\!\Upsilon](C)) = \msupp(x)\cup\msupp(\Upsilon(x)).\] According to Lemma \ref{SL1}, $S_C$ is a $\sigma$-stable subset of $R_T^+$. We will repeatedly apply Lemma \ref{CSC4} to $B$-submodules of $\mathfrak{u}$. \medskip (a) {\it We have $x \not\in \bigcup_{i=1}^3 kE_{i,i+2}\!\oplus\!\mathfrak{u}^3$}. \smallskip \noindent Suppose that $x \in kE_{i,i+2}\!\oplus\!\mathfrak{u}^3$ for some $i \in \{1,2,3\}$. Since $\mathfrak{u}^3\!=\!kE_{1,4}\!\oplus\!kE_{2,5}\!\oplus\!kE_{1,5}$, we have $[E_{2,3},\mathfrak{u}^3]\!=\!(0)$. It thus follows from Lemma \ref{CSC4} that $\deg(x)\!\le\!2$. Consequently, $\deg(x)\!=\!2$ and $|\msupp(x)|\!=\!1$. If $|S_C|\!=\!1$, then $i\!=\!2$. Since $[E_{2,3},kE_{2,4}\!+\!\mathfrak{u}^3]\!=\!(0)$, we may apply Lemma \ref{CSC4} to $\mathfrak{v}\!:=\!kE_{2,4}\!+\!\mathfrak{u}^3$ to obtain a contradiction. Alternatively, we may assume that $i\!=\!1$. As $[E_{2,4},kE_{1,3}\!+\!\mathfrak{u}^3]\!=\!(0)$, another application of Lemma \ref{CSC4} rules out this case. \hfill $\diamond$ \medskip (b) {\it We have $\deg(x)\!=\!1$ and $|S_C|\!=\! 2,4$}. \smallskip \noindent Suppose that $\deg(x)\!\ge\!2$. In view of (a), we have $\deg(x)\!=\!2$ and $|\msupp(x)|\!\ge \! 2$. If $|\msupp(x)|\!=\!2=\!|S_C|$, then $\msupp(x)\!=\!S_C$ is $\sigma$-stable, so that $\msupp(x) = \{\alpha_{1,3},\alpha_{3,5}\}$. Thus, $B\boldsymbol{.} x \subseteq \mathfrak{v}:= kE_{1,3}\!\oplus\!kE_{3,5}\!\oplus\!\mathfrak{u}^3$ (see also Lemma \ref{Ms1}). Since $E_{2,4} \in C_\mathfrak{u}(\mathfrak{v})$, Lemma \ref{CSC4} yields a contradiction. 
If $|\msupp(x)| \!= \!2$ and $|S_C|\!=\!3$, then $\msupp(x)\cap\msupp(\Upsilon(x))$ contains a fixed point of $\sigma$, and we may assume that $\msupp(x) = \{\alpha_{1,3}, \alpha_{2,4}\}$. In view of \cite[(3.4)]{GR}, we may assume that $x= e_{48}=E_{1,3}\!+\!E_{2,4}$. Since $B\boldsymbol{.} x \subseteq \mathfrak{v}:= kE_{1,3}\!\oplus\!kE_{2,4}\!\oplus\!\mathfrak{u}^3$, while $E_{1,2}\!+\!E_{3,4} \in C_\mathfrak{u}(x)$, Lemma \ref{CSC4} yields a contradiction. We thus assume that $|\msupp(x)|\!=\!3$. Then \cite[(3.4)]{GR} in conjunction with Lemma \ref{Ms1} gives $x\!=\!e_{47}\!=\!E_{1,3}\!+\!E_{2,4}\!+\!E_{3,5}$. Since $E_{1,2}\!+\!E_{3,4} \in C_\mathfrak{u}(x)$, while $B\boldsymbol{.} x \subseteq \mathfrak{u}^2$, this contradicts Lemma \ref{CSC4}. Consequently, $\deg(x)\!=\!1$, so that $\msupp(x) \subseteq \Delta$. Since $\sigma$ acts without fixed points on $\Delta$, every $\sigma$-orbit of $\Delta$ has two elements. As $S_C \subseteq \Delta$ is a disjoint union of $\sigma$-orbits, we obtain $|S_C|\!=\!2,4$. \hfill $\diamond$ \medskip (c) {\it We have $|\msupp(x)|\!\ge\!2$}. \smallskip \noindent Alternatively, (b) provides $i \in \{1,\ldots,4\}$ such that $B\boldsymbol{.} x \subseteq \mathfrak{v}:= kE_{i,i+1}\!+\!\mathfrak{u}^2$. Applying $\Upsilon$, if necessary, we may assume that $i \in \{1,2\}$. Suppose that $i\!=\!1$. Then Lemma \ref{Ms1} in conjunction with \cite[(3.4)]{GR} implies that we have to consider the following cases: \begin{eqnarray*} & x = e_{16} = E_{1,2}\!+\!E_{2,4}\!+\!E_{3,5} \ ; \ x=e_{17} = E_{1,2}\!+\!E_{2,4} \ ; \ x=e_{18} = E_{1,2}\!+\!E_{3,5}\!+\!E_{2,5} \ ; \\ & x=e_{19} = E_{1,2}\!+\!E_{3,5} \ ; \ x = e_{20} = E_{1,2}\!+\!E_{2,5} \ ; \ x=e_{21} = E_{1,2}. \end{eqnarray*} Consequently, $E_{3,4} \in C_\mathfrak{u}(x)\!\smallsetminus\!\mathfrak{v}$, which contradicts Lemma \ref{CSC4}. Suppose that $i\!=\!2$. Then \cite[(3.4)]{GR} implies \begin{eqnarray*} & x = e_{29} = E_{2,3}\!+\!E_{3,5}\!+\!E_{1,4} \ ; \ x=e_{30} = E_{2,3}\!+\!E_{3,5} \ ; \ x=e_{31} = E_{2,3}\!+\!E_{1,4} \ ; \\ & x=e_{32} = E_{2,3}\!+\!E_{1,5} \ ; \ x=e_{33} = E_{2,3}. \end{eqnarray*} Since $E_{4,5} \in [C_\mathfrak{u}(e_{30})\cap C_\mathfrak{u}(e_{32})\cap C_u(e_{33})]\!\smallsetminus\!\mathfrak{v}$, Lemma \ref{CSC4} rules out these possibilities. In view of $E_{4,5}\!+\!E_{1,3} \in C_\mathfrak{u}(e_{29})\!\smallsetminus\!\mathfrak{v}$, it remains to discuss the case where $x\!=\!e_{31}$. We consider the morphism \[ \mathfrak{x} : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto e_{31}\!+\!\alpha E_{3,5}.\] Then we have $\mathfrak{x}(\alpha) \in B\boldsymbol{.} e_{29}$ for all $\alpha \in k^\times$, while $\mathfrak{x}(0)=e_{31}$. Direct computation shows that \[ C_\mathfrak{u}(e_{31}) = kE_{2,3}\!\oplus\!kE_{1,3}\!\!\oplus\!kE_{2,4}\!\oplus\!\mathfrak{u}^3.\] For $y = a E_{2,3}\!+\!bE_{1,3}\!+\!cE_{2,4}\!+\!z \in C_\mathfrak{u}(e_{31})$, where $z \in \mathfrak{u}^3$, we consider the morphism \[ \mathfrak{y} : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto y\!+\!b\alpha E_{4,5}\!+\!a\alpha E_{3,5}.\] Since $[\mathfrak{x}(\alpha),\mathfrak{y}(\alpha)]\!=\!0$ for all $\alpha \in k^\times$, Lemma \ref{CSC2} yields \[ (e_{31},y) = (\mathfrak{x}(0),\mathfrak{y}(0)) \in \mathfrak{C}(\mathfrak{x}(1))=\mathfrak{C}(e_{29}).\] Consequently, $\mathfrak{C}(e_{31}) \subseteq \mathfrak{C}(e_{29})$. Since $\mathfrak{C}(e_{29}) \not \in \Irr(\mathcal{C}_2(\mathfrak{u}))$, we again arrive at a contradiction. \hfill $\diamond$ \medskip (d) { \it We have $|S_C|\!=\!4$}. 
\smallskip \noindent Suppose that $|S_C|\!\ne\!4$. Then (b) implies $|S_C|\!=\!2$ and (c) shows that $\msupp(x) \subseteq \Delta$ is $\sigma$-stable with $2$ elements. Consequently, $\msupp(x)=\{\alpha_1,\alpha_4\}$, or $\msupp(x)=\{\alpha_2,\alpha_3\}$. If $x\!=\!E_{2,3}\!+\!E_{3,4}\!+\!y$, where $y \in \mathfrak{u}^2$, then \cite[(3.4)]{GR} yields $x \in B\boldsymbol{.} e_{23}\cup B\boldsymbol{.} e_{24}$, where $e_{23} := E_{2,3}\!+\!E_{3,4}\!+\!E_{1,5}$ and $e_{24} := E_{2,3}\!+\!E_{3,4}$. We may invoke Lemma \ref{CSC3} to see that $\mathfrak{C}(e_{24}) \subseteq \mathfrak{C}(e_{23})$. It was shown in \cite[(3.4)]{GR}, that $\mathfrak{C}(e_{23})\subseteq \mathfrak{C}(e_1)$. Hence $\mathfrak{C}(x)$ is not a component, a contradiction. It follows that $\msupp(x)=\{\alpha_1,\alpha_4\}$, so that \cite[(3.4)]{GR} implies \[x \in B\boldsymbol{.} e_{13} \cup B\boldsymbol{.} e_{14} \cup B\boldsymbol{.} e_{15},\] where $e_{15}:=E_{1,2}\!+\!E_{4,5}$, $e_{14} := e_{15}+E_{2,5}$ and $e_{13} := e_{15}\!+\!E_{2,4}$. In view of $C_\mathfrak{u}(e_{15}) \subseteq kE_{1,2}\!\oplus\!kE_{4,5}\!\oplus\!\mathfrak{u}^2$, we have $[C_\mathfrak{u}(e_{15}),E_{2,5}] \subseteq k[E_{1,2},E_{2,5}] =k[e_{15},E_{2,5}]$. Lemma \ref{CSC3} thus shows that $\mathfrak{C}(e_{15}) \subseteq \mathfrak{C}(e_{14})$. In \cite[(3.4)]{GR} it is shown that $\mathfrak{C}(e_{14}) \subseteq \mathfrak{C}(e_3)$. According to (b), the latter set is not a component, so that $\mathfrak{C}(e_{14})$ isn't either. It remains to dispose of the case $x\!=\!e_{13}$. For $(\alpha, \beta) \in k^2$, we consider the elements \[ e_1(\alpha,\beta) := E_{1,2}\!+\!\alpha E_{2,3}\!+\!\beta E_{3,4}\!+\!E_{4,5} \ \ \text{and} \ \ e_{13}(\alpha,\beta) := e_1(\alpha,\beta)\!+\!E_{2,4}\] of $\mathfrak{u}$. Let $u_{i,j}(t):=1\!+\!tE_{i,j} \in U \ \ (t \in k)$, so that $u_{i,j}(t)\boldsymbol{.} x = x\!+\!t[E_{i,j},x]$ for all $x \in \mathfrak{u}$. We thus obtain $e_{13}(\alpha,\beta) = u_{2,3}(\beta^{-1})u_{1,2}(\alpha^{-1}\beta^{-1})\boldsymbol{.} e_1(\alpha,\beta)$ for $\alpha\beta \ne 0$. As a result, \[ e_{13}(\alpha,\beta) \in B\boldsymbol{.} e_1 \ \ \ \ \text{for} \ \alpha\beta \ne 0,\] where $e_1\!=\!e_1(1,1)$. Direct computation shows that \[ C_\mathfrak{u}(e_{13}) = ke_{13}\!\oplus\!kE_{1,3}\!\oplus\!kE_{3,5}\!\oplus\!k(E_{1,4}\!+\!E_{2,5})\!\oplus\!kE_{1,5}.\] Let $y= a e_{13}\!+\!bE_{1,3}\!+\!c E_{3,5}\!+\!d(E_{1,4}\!+\!E_{2,5})\!+\!e E_{1,5} \in C_\mathfrak{u}(e_{13})$ be such that $b,c \ne 0$. We consider the morphisms \[ \mathfrak{x} : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto e_{13}(\alpha,\alpha cb^{-1}) \ \ \ \text{and} \ \ \ \mathfrak{y} : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto y\!+\!\alpha a E_{2,3}\!+\!\alpha acb^{-1}E_{3,4}\!+\!\alpha c E_{2,4}\] and observe that \begin{enumerate} \item[(a)] $\mathfrak{x}(\alpha) \in B\boldsymbol{.} \mathfrak{x}(1)$ for all $\alpha \in k^\times$, and \item[(b)] $[\mathfrak{x}(\alpha),\mathfrak{y}(\alpha)] = 0$ for all $\alpha \in k$. \end{enumerate} Thus, Lemma \ref{CSC2} implies that $(e_{13},y) = (\mathfrak{x}(0),\mathfrak{y}(0)) \in \mathfrak{C}(\mathfrak{x}(1))=\mathfrak{C}(e_1)$. Since the set of those $y$ with $bc\ne 0$ lies dense in $C_\mathfrak{u}(e_{13})$, it follows that $\mathfrak{C}(e_{13}) \subseteq \mathfrak{C}(e_1)$, a contradiction. \hfill $\diamond$ \medskip \noindent If $\msupp(x)\!=\!S_C$, (d) shows that $\deg(x)\!=\!1$ and $|\msupp(x)|\!=\!4$. Hence $x$ is regular and $\mathfrak{C}(x)\!=\!\mathfrak{C}(e_1)$ is an irreducible component. If $|\msupp(x)|\! =\! 
2$, then $S_C = \msupp(x) \sqcup \sigma(\msupp(x))$ and we only need to consider the cases \[ \msupp(x)=\{\alpha_1,\alpha_2\} \ ; \ \{\alpha_1,\alpha_3\}. \] If $\msupp(x)=\{\alpha_1,\alpha_2\}$, then Lemma \ref{Ms1} yields $B\boldsymbol{.} x \subseteq \mathfrak{v}:=kE_{1,2}\!+\!kE_{2,3}\!+\!\mathfrak{u}^2$, while \cite[(3.4)]{GR} implies \[ x= e_5 = E_{1,2}\!+\!E_{2,3}\!+\!E_{3,5} \ ; \ x = e_6 = E_{1,2}\!+\!E_{2,3}.\] Consequently, $E_{4,5} \in C_\mathfrak{u}(x)\!\smallsetminus\!\mathfrak{v}$, a contradiction. If $\msupp(x)=\{\alpha_1,\alpha_3\}$, then $B\boldsymbol{.} x \subseteq \mathfrak{v}:= kE_{1,2}\!+\!kE_{3,4}\!+\mathfrak{u}^2$ and \cite[(3.4)]{GR} implies \begin{eqnarray*} x= e_9 = E_{1,2}\!+\!E_{3,4}\!+\!E_{2,4}\!+\!E_{2,5} \ ; \ x= e_{10} = E_{1,2}\!+\!E_{3,4}\!+\!E_{2,4} \ ; \\ x= e_{11} = E_{1,2}\!+\!E_{3,4}\!+\!E_{2,5} \ ; \ x= e_{12} = E_{1,2}\!+\!E_{3,4}. \end{eqnarray*} Given $(\alpha,\beta) \in k^2$, we put \[ x(\alpha,\beta) = E_{1,2}\!+\!E_{3,4}\!+\!\alpha E_{2,4}\!+\! \beta E_{2,5}.\] Note that \[ x(\alpha,\beta) \in B\boldsymbol{.} x(1,1)=B\boldsymbol{.} e_9 \ \ \ \ \ \ \text{for} \ \alpha, \beta \ne 0.\] We put $\mathfrak{w}:= kE_{3,4}\!\oplus\!k(E_{1,3}\!+\!E_{2,4})\!\oplus\!kE_{3,5}\!\oplus\!kE_{1,4}\!\oplus\!kE_{1,5}$. Direct computation shows that \[ C_\mathfrak{u}(x(\alpha,\beta)) = k(E_{1,2}\!+\!\alpha E_{2,4}\!+\!\beta E_{2,5})\!\oplus\! \mathfrak{w}\] for all $(\alpha,\beta) \in k^2$. We have $e_i = x(\delta_{i,10},\delta_{i,11})$ for $i \in \{10,11,12\}$. Thus, if $y = a(E_{1,2}\!+\!\delta_{i,10}E_{2,4}\!+\!\delta_{i,11}E_{2,5})\!+\!w \in C_\mathfrak{u}(e_i)$, where $a \in k$ and $w \in \mathfrak{w}$, then \[ y(\alpha,\beta) = y\!+\!(a\alpha\!-\!a)\delta_{i,10}E_{2,4}+\!(a\beta\!-\!a)\delta_{i,11}E_{2,5} \in C_\mathfrak{u}(x(\alpha,\beta)).\] Let $i \in \{10,11,12\}$. Then the morphisms \[ \mathfrak{x}_i : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto x(\alpha(\delta_{i,11}\!+\!\delta_{i,12})\!+\!\delta_{i,10}, \alpha(\delta_{i,10}\!+\!\delta_{i,12})\!+\!\delta_{i,11})\] and \[ \mathfrak{y}_i : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto y(\alpha(\delta_{i,11}\!+\!\delta_{i,12})\!+\!\delta_{i,10}, \alpha(\delta_{i,10}\!+\!\delta_{i,12})\!+\!\delta_{i,11})\] Fulfill the conditions of Lemma \ref{CSC2}, so that \[ (e_i, y) = (\mathfrak{x}_i(0),\mathfrak{y}_i(0)) \in \mathfrak{C}(\mathfrak{x}_i(1)) = \mathfrak{C}(e_9).\] As a result, $\mathfrak{C}(e_i) \subseteq \mathfrak{C}(e_9)$ for $10\!\le\!i\!\le\!12$. We have $\dim_k \im (\ad e_9)(\mathfrak{b})\!=\!\dim_k \im (\ad e_9)\!+\!4$, so that $C_\mathfrak{u}(e_9)\!=\!C_\mathfrak{b}(e_9)$. Thus, Proposition \ref{SpI2} implies \[ \dim C_B(e_9) \le \dim_k C_\mathfrak{b}(e_9) = \dim_k C_\mathfrak{u}(e_9) = \dim C_U(e_9),\] so that $C_B(e_9)^\circ \!= \! C_U(e_9)^\circ$. Consequently, the element $e_9$ is distinguished and $\mathfrak{C}(e_9)$ is a component. Hence $\Upsilon(e_9)$ is also distinguished and \cite[(3.4)]{GR} in conjunction with Corollary \ref{DE4} implies that $\mathfrak{C}(e_{25})$ is also a component. It remains to consider the case, where $|\msupp(x)|\!=\!3$. Then $\msupp(x)\cap\sigma(\msupp(x))$ is a $\sigma$-stable subset of $\Delta$ of cardinality $2$, so that \[\msupp(x)\cap\sigma(\msupp(x)) = \{\alpha_1,\alpha_4\} \ ; \ \{\alpha_2,\alpha_3\}.\] Suppose that $\msupp(x)\cap\sigma(\msupp(x)) = \{\alpha_1,\alpha_4\}$. Then we may assume that $\msupp(x) = \{\alpha_1,\alpha_2,\alpha_4\}$. Thanks to \cite[(3.4)]{GR} this yields $x=e_3, e_4$. 
The above methods show that $\mathfrak{C}(e_4) \subseteq \mathfrak{C}(e_3)$, while $e_3$ is a distinguished element. Hence $\mathfrak{C}(e_3)$ and $\Upsilon(\mathfrak{C}(e_3))=\mathfrak{C}(e_7)$ are components of $\mathcal{C}_2(\mathfrak{u})$. We finally consider $\msupp(x)\cap\sigma(\msupp(x)) = \{\alpha_2,\alpha_3\}$ and assume that $\msupp(x)=\{\alpha_1,\alpha_2,\alpha_3\}$. By \cite[(3.4)]{GR}, this implies \[x=e_2 = E_{1,2}\!+\!E_{2,3}\!+\!E_{3,4}.\] As $\mathfrak{C}(e_2) \subseteq \mathfrak{C}(e_1)$, this case yields no additional components. It follows that \[ \Irr(\mathcal{C}_2(\mathfrak{u}))=\{\mathfrak{C}(e_1),\mathfrak{C}(e_3),\mathfrak{C}(e_7),\mathfrak{C}(e_9),\mathfrak{C}(e_{25})\},\] so that $|\Irr(\mathcal{C}_2(\mathfrak{u}))|\!=\!5$. \end{proof} \bigskip \begin{Lem} \label{SL3} Let $G\!=\!\SL_4(k)$. Then $\mathcal{C}_2(\mathfrak{u})$ is equidimensional with $|\Irr(\mathcal{C}_2(\mathfrak{u}))|\!=\!2$. \end{Lem} \begin{proof} We consider $\GL_n(k)\!=\! \SL_n(k)Z(\GL_n)$ along with its standard Borel subgroup $B_n\!=\! U_n\!\rtimes\!T_n$ of upper triangular matrices, where $U_n$ and $T_n$ are the groups unitriangular and diagonal matrices, respectively. The $B$ orbits of $\mathfrak{u}_n\!:=\!\Lie(U_n)$ coincide with those of the standard Borel subgroup $B_n\cap \SL_n(k)$ of $\SL_n(k)$. We consider $G'\!:=\! \GL_5(k)$ along with its commuting variety $\mathcal{C}_2(\mathfrak{u}')$. In view of Lemma \ref{SL2}, we have \[ \Irr(\mathcal{C}_2(\mathfrak{u}'))=\{\mathfrak{C}(e'_1),\mathfrak{C}(e'_3),\mathfrak{C}(e'_7),\mathfrak{C}(e'_9),\mathfrak{C}(e'_{25})\}.\] Let $A'$ and $A$ be the associative algebras of upper triangular $(5\!\times\!5)$-matrices and upper triangular $(4\!\times\!4)$-matrices, respectively. Then \[ \pi : A' \longrightarrow A \ \ ; \ \ (a_{ij}) \mapsto (a_{ij})_{1\le i \le j \le 4}\] is homomorphisms of $k$-algebras. Thus, if we identify $G:=\GL_4(k)$ with a subgroup of the Levi subgroup of $G'$, given by $\Delta_4:= \{\alpha'_1,\alpha'_2,\alpha'_3\}$, then the restriction \[ \pi : B' \longrightarrow B\] is a homomorphism of groups such that $\pi|_B\! =\! \id_B$. It follows that the differential \[ \mathsf{d}(\pi) : \mathfrak{u}' \longrightarrow \mathfrak{u}\] of the restriction $\pi|_{U'} : U' \longrightarrow U$ is split surjective such that \[ \mathsf{d}(\pi)(b'\boldsymbol{.} x') = \pi(b')\boldsymbol{.} \mathsf{d}(\pi)(x') \ \ \ \ \ \ \ \text{for all} \ b' \in B', x' \in \mathfrak{u}'.\] As a result, the morphism \[ [\mathsf{d}(\pi)\!\times\!\mathsf{d}(\pi)] : \mathcal{C}_2(\mathfrak{u}') \longrightarrow \mathcal{C}_2(\mathfrak{u}) \] is surjective and such that \[ [\mathsf{d}(\pi)\!\times\!\mathsf{d}(\pi)](B'\boldsymbol{.} (\{x'\}\!\times\!C_{\mathfrak{u}'}(x')) \subseteq B\boldsymbol{.} (\{\mathsf{d}(\pi)(x')\}\!\times\! 
C_{\mathfrak{u}}(\mathsf{d}(\pi)(x'))),\] whence \[ [\mathsf{d}(\pi)\!\times\!\mathsf{d}(\pi)](\mathfrak{C}(x')) \subseteq \mathfrak{C}(\mathsf{d}(\pi)(x')) \ \ \ \ \ \ \ \ \text{for all} \ x' \in \mathfrak{u}'.\] Consequently, \[\Irr(\mathcal{C}_2(\mathfrak{u})) \subseteq \{\mathfrak{C}(\mathsf{d}(\pi)(e'_1)),\mathfrak{C}(\mathsf{d}(\pi)(e'_3)),\mathfrak{C}(\mathsf{d}(\pi)(e'_7)),\mathfrak{C}(\mathsf{d}(\pi)(e'_9)),\mathfrak{C}(\mathsf{d}(\pi)(e'_{25}))\}.\] Thanks to \cite[(3.3),(3.4)]{GR}, we obtain \[ \mathsf{d}(\pi)(e'_1)=e_1 \ ; \ \mathsf{d}(\pi)(e'_3) \in B\boldsymbol{.} e_2 \ ; \ \mathsf{d}(\pi)(e'_7) = e_3 \ ; \ \mathsf{d}(\pi)(e'_9) = e_3 \ ; \ \mathsf{d}(\pi)(e'_{25}) = e_8.\] In \cite[(3.3)]{GR}, the authors show that $\mathfrak{C}(e_8) \subseteq \mathfrak{C}(e_1)$. By applying Lemma \ref{CSC2} to the morphism \[ \mathfrak{x} : k \longrightarrow \mathfrak{u} \ \ ; \ \ \alpha \mapsto E_{1,2}\!+\!E_{2,3}\!+\!\alpha E_{3,4}\] we obtain $\mathfrak{C}(e_2) \subseteq \mathfrak{C}(e_1)$. Since the element $e_1$ is regular, it is distinguished. As $\dim_k (\ad e_3)(\mathfrak{b}) = \dim_k(\ad e_3)(\mathfrak{u})\!+\!3=5$, we obtain, observing Proposition \ref{SpI2}, \[ \dim C_B(e_3) \le \dim_kC_\mathfrak{b}(e_3)=\dim_k C_\mathfrak{u}(e_3)= \dim C_U(e_3),\] so that $C_B(e_3)^\circ\! =\! C_U(e_3)^\circ$. Hence $e_3$ is distinguished for $B$, and $\Irr(\mathcal{C}_2(\mathfrak{u}))\! =\!\{\mathfrak{C}(e_1),\mathfrak{C}(e_3)\}$. \end{proof} \bigskip \noindent The same method readily shows: \bigskip \begin{Lem} \label{SL4} Let $G\!=\!\SL_n(k)$, where $n\!=\!2,3$. Then $\mathcal{C}_2(\mathfrak{u})$ is irreducible. \end{Lem} \bigskip \subsection{Symplectic groups} The following result disposes of the remaining case: \bigskip \begin{Lem} \label{Sp1} Suppose that $\Char(k)\!\ne\!2$. Let $G\!=\!\Sp(4)$ be of type $B_2\!=\!C_2$. Then $\mathfrak{C}_2(\mathfrak{u})$ is equi-dimensional with $|\Irr(\mathcal{C}_2(\mathfrak{u}))|\!=\!2$. \end{Lem} \begin{proof} Recall that $R_T^+:=\{\alpha, \beta, \alpha\!+\!\beta, \alpha\!+\!2\beta\}$ is a system of positive roots, where $\Delta=\{\alpha,\beta\}$. Suppose that $\mathfrak{C}(x)$ is a component. Since $[\mathfrak{u}_\alpha, \mathfrak{u}^{(\ge 2)}]=(0)$, Lemma \ref{CSC4} implies $\deg(x)\!=\!1$. Suppose that $|\msupp(x)|\!=\!1$. If $\msupp(x)\!=\!\{\alpha\}$, then \cite[(3.5)]{GR} yields $x \in B\boldsymbol{.} x_\alpha\! \cup B\boldsymbol{.} (x_\alpha\!+\!x_{\alpha+2\beta})$, while Lemma \ref{CSC3} gives $\mathfrak{C}(x_\alpha) \subseteq \mathfrak{C}(x_\alpha\!+\!x_{\alpha+2\beta})$. Alternatively, $x \in B\boldsymbol{.} x_\beta$. Since $C_\mathfrak{u}(x_\beta)= kx_\beta\!\oplus\!kx_{\alpha+2\beta}$, we have $[x_\alpha, C_\mathfrak{u}(x_\beta)] = k [x_\alpha, x_\beta]$ and Lemma \ref{CSC3} implies $\mathfrak{C}(x_\beta) \subseteq \mathfrak{C}(x_\alpha\!+\!x_\beta)$. As a result, \[\mathcal{C}_2(\mathfrak{u})=\mathfrak{C}(x_{\alpha}\!+\!x_{\beta})\cup\mathfrak{C}(x_\alpha\!+\!x_{\alpha+2\beta}).\] Since $\Char(k)\!\ne\!2$, the arguments of Lemma \ref{SL3} show that these elements are distinguished. Consequently, $\Irr(\mathcal{C}_2(\mathfrak{u}))=\{\mathfrak{C}(x_{\alpha}\!+\!x_{\beta}), \mathfrak{C}(x_\alpha\!+\!x_{\alpha+2\beta})\}$. \end{proof} \bigskip \subsection{Proof of Proposition \ref{Asg1}} \begin{proof} (1) Let us first consider an almost simple group $G$ of type $A_n$ for $n \in \{1,\ldots, 4\}$. In view of \cite[(II.1.13),(II.1.14)]{Ja03}, there is a covering $\pi :\SL_{n+1}(k) \longrightarrow G$. 
Hence $\pi$ is surjective and $\ker \pi \subseteq Z(G)$ is diagonalizable. Let $B_{n+1} \subseteq \SL_{n+1}(k)$ be a Borel subgroup, $U_{n+1} \unlhd B_{n+1}$ be its unipotent radical with Lie algebra $\mathfrak{u}_{n+1}$. Then $B\!:=\! \pi(B_{n+1})$ is a Borel subgroup of $G$ with unipotent radical $U\!:=\!\pi(U_{n+1})$. Since $\ker \pi\cap U_{n+1} = \{1\}$, it follows that $\pi|_{U_{n+1}}$ is a closed embedding, so that $\pi|_{U_{n+1}} : U_{n+1} \longrightarrow U$ is an isomorphism. Consequently, its differential \[ \mathsf{d}(\pi) : \mathfrak{u}_{n+1} \longrightarrow \mathfrak{u}\] is an isomorphism of Lie algebras such that \[\pi(b)\boldsymbol{.} \mathsf{d}(\pi)(x) = \mathsf{d}(\pi)(b\boldsymbol{.} x) \ \ \ \ \ \ \ \ \ \forall \ x \in \mathfrak{u}_{n+1}, b \in B_{n+1}.\] Thanks to Section \ref{S:SL}, the variety $\mathcal{C}_2(\mathfrak{u}_{n+1})\!\cong\!\mathcal{C}_2(\mathfrak{u})$ is equidimensional with $|\Irr(\mathcal{C}_2(\mathfrak{u}))|\!=\!|\Irr(\mathcal{C}_2(\mathfrak{u}_{n+1}))|$. (2) Since $\Sp(4)$ is simply connected, we may use the foregoing arguments in conjunction with Lemma \ref{Sp1}. \end{proof} \bigskip \subsection{Irreducibility and equidimensionality of $\mathcal{C}_2(\mathfrak{u})$} We record the following direct consequence of Proposition \ref{Asg1}: \bigskip \begin{Cor} \label{Irr1} Let $G$ be connected, reductive such that $\Char(k)$ is good for $G$. Suppose that $B \subseteq G$ is a Borel subgroup with unipotent radical $U$, whose Lie algebra is denoted $\mathfrak{u}$. \begin{enumerate} \item If $\modd(B;\mathfrak{u})\!=\!0$, then $\mathcal{C}_2(\mathfrak{u})$ is equidimensional. \item $\mathcal{C}_2(\mathfrak{u})$ is irreducible if and only if every almost simple component of $(G,G)$ is of type $A_1$ or $A_2$. \end{enumerate} \end{Cor} \begin{proof} Let $G_1, \ldots, G_n$ be the almost simple components of $G$. As before, we may write \[ B=Z(G)^\circ B_1\cdots B_n,\] where $B_i \subseteq G_i$ is a Borel subgroup. Letting $U_i$ be the unipotent radical of $B_i$ and setting $\mathfrak{u}_i := \Lie(U_i)$, we have $\mathcal{C}_2(\mathfrak{u}) \cong \prod_{i=1}^n\mathcal{C}_2(\mathfrak{u}_i)$. This shows that \[ \Irr(\mathcal{C}_2(\mathfrak{u})) = \{ \prod_{i=1}^n C_i \ ; \ C_i \in \Irr(\mathcal{C}_2(\mathfrak{u}_i)) \ \ 1\!\le\!i\!\le\!n\}.\] (1) The Theorem of Hille-R\"ohrle shows that each $G_i$ is of type $(A_n)_{n\le 4}$ or $B_2$. Thanks to Proposition \ref{Asg1}, each $\mathcal{C}_2(\mathfrak{u}_i)$ is equidimensional. Hence $\mathcal{C}_2(\mathfrak{u})$ enjoys the same property. (2) If $\mathcal{C}_2(\mathfrak{u})$ is irreducible, then so is each $\mathcal{C}_2(\mathfrak{u}_i)$, and a consecutive application of Theorem \ref{Df2}, Lemma \ref{Df3}, \cite[(1.1)]{HR} and Proposition \ref{Asg1} ensures that each almost simple group $G_i$ is of type $A_1$ or $A_2$. The reverse direction follows directly from Proposition \ref{Asg1}. \end{proof} \bigskip \begin{Remark} Suppose that $G$ is almost simple of type $A-D$. If $p\!\ge\!h(G)$ is good for $G$, then \cite[(1.7),(1.8)]{SFB} in conjunction with the foregoing result implies that the variety $V(U_2)$ of infinitesimal one-paramenter subgroups of the second Frobenius kernel $U_2$ of $U$ is irreducible if and only if $G$ is of type $A_1$ or $A_2$. \end{Remark} \bigskip \section{The variety $\mathbb{A}(2,\mathfrak{u})$} Let $\mathfrak{u}\!:=\!\Lie(U)$ be the Lie algebra of the unipotent radical $U$ of a Borel subgroup $B$ of a connected reductive group $G$. 
In this section, we are interested in the projective variety \[ \mathbb{A}(2,\mathfrak{u}) := \{ \mathfrak{a} \in \Gr_2(\mathfrak{u}) \ ; \ [\mathfrak{a},\mathfrak{a}]=(0)\}\] of two-dimensional abelian subalgebras of $\mathfrak{u}$. Recall that \[ \mathcal{O}_2(\mathfrak{u}):=\{(x,y) \in \mathcal{C}_2(\mathfrak{u}) \ ; \ \dim_kkx\!+\!ky=2\}\] is an open, $\GL_2(k)$-stable subset of $\mathcal{C}_2(\mathfrak{u})$, while the map \[ \varphi : \mathcal{O}_2(\mathfrak{u}) \longrightarrow \mathbb{A}(2,\mathfrak{u}) \ \ ; \ \ (x,y) \mapsto kx\!+\!ky\] is a surjective morphism such that $\varphi^{-1}(\varphi(x,y)) = \GL_2(k)\boldsymbol{.} (x,y)$ for all $(x,y) \in \mathcal{O}_2(\mathfrak{u})$. Note that $\GL_2(k)$ acts simply on $\mathcal{O}_2(\mathfrak{u})$, so that each fiber of $\varphi$ is $4$-dimensional. The Borel subgroup $B$ acts on $\mathbb{A}(2,\mathfrak{u})$ via \[ b\boldsymbol{.} \mathfrak{a} := \Ad(b)(\mathfrak{a}) \ \ \ \ \ \ \ \ \forall \ b \in B, \mathfrak{a} \in \mathbb{A}(2,\mathfrak{u}).\] Moreover, the set $\mathcal{O}_2(\mathfrak{u})$ is $B$-stable and $\varphi : \mathcal{O}_2(\mathfrak{u}) \longrightarrow \mathbb{A}(2,\mathfrak{u})$ is $B$-equivariant. \bigskip \begin{Lemma} \label{Var1} Suppose that $\ssrk(G)\!\ge\!2$. Then the following statements hold: \begin{enumerate} \item Given $x \in \mathfrak{u}\!\smallsetminus\!\{0\}$, there is $y \in \mathfrak{u}$ such that $(x,y) \in \mathcal{O}_2(\mathfrak{u})$. \item $\mathcal{O}_2(\mathfrak{u})$ lies dense in $\mathcal{C}_2(\mathfrak{u})$. \end{enumerate} \end{Lemma} \begin{proof} (1) Let $z \in C(\mathfrak{u})\!\smallsetminus\!\{0\}$. If $x \in \mathfrak{u}\!\smallsetminus\!kz$, then $(x,z) \in \mathcal{O}_2(\mathfrak{g})$. Alternatively, $x \in kz\!\smallsetminus\!\{0\}$. Since $\ssrk(G)\!\ge\!2$, we have $\dim_k\mathfrak{u}\!>\!1$, so that there is $y \in \mathfrak{u}\!\smallsetminus\!kx$. It follows that $(x,y) \in \mathcal{O}_2(\mathfrak{u})$. (2) Let $x \in \mathfrak{u}\!\smallsetminus\!\{0\}$. By (1), there is $y \in \mathfrak{u}$ such that $(x,y) \in \mathcal{O}_2(\mathfrak{u})$. Given $\beta \in k$, we consider the morphism \[ f_\beta : k \longrightarrow \mathcal{C}_2(\mathfrak{u}) \ \ ; \ \ \alpha \mapsto (x,\beta x\!+\!\alpha y).\] Then we have $f_\beta(k^\times) \subseteq \mathcal{O}_2(\mathfrak{u})$, so that $f(k) \subseteq \overline{\mathcal{O}_2(\mathfrak{u})}$. In particular, $(x,\beta x) = f(0) \in \overline{\mathcal{O}_2(\mathfrak{u})}$. Setting $\beta=0$, we obtain $(x,0) \in \overline{\mathcal{O}_2(\mathfrak{u})}$. Using the $\GL_2(k)$-action, we conclude that $(0,x) \in \overline{\mathcal{O}_2(\mathfrak{u})}$. Since \[ g : k \longrightarrow \mathcal{C}_2(\mathfrak{u}) \ \ ; \ \ \alpha \mapsto (\alpha x,0)\] is a morphism such that $g(k^\times) \subseteq \overline{\mathcal{O}_2(\mathfrak{u})}$, we conclude that $(0,0) \in \overline{\mathcal{O}_2(\mathfrak{u})}$. As a result, $\mathcal{C}_2(\mathfrak{u})\!=\!\overline{\mathcal{O}_2(\mathfrak{u})}$. \end{proof} \bigskip \begin{Lemma} \label{Var2} Suppose that $\Char(k)$ is good for $G$ and that $\ssrk(G)\!\ge\!2$. Let $\mathcal{O} \subseteq \mathfrak{u}\!\smallsetminus\!\{0\}$ be a $B$-orbit, $x \in \mathcal{O}$. \begin{enumerate} \item We have $\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))\!=\!\{\mathfrak{a} \in \mathbb{A}(2,\mathfrak{u}) \ ; \ \mathfrak{a}\cap \mathcal{O}\ne \emptyset\}$. 
\item If $\mathcal{O}\!=\!\mathcal{O}_{\rm reg}\cap\mathfrak{u}$, then $\overline{\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))}$ is an irreducible component of $\mathbb{A}(2,\mathfrak{u})$ of dimension $\dim B\!-\!\dim Z(G)\!-\!4$. \end{enumerate} \end{Lemma} \begin{proof} (1) We put $\mathbb{A}(2,\mathfrak{u})_\mathcal{O} := \{\mathfrak{a} \in \mathbb{A}(2,\mathfrak{u}) \ ; \ \mathfrak{a}\cap \mathcal{O}\ne \emptyset\}$. Let $y \in C_\mathfrak{u}(x)$ be such that $(x,y) \in \mathcal{O}_2(\mathfrak{u})$.Then $x \in \varphi(x,y)\cap \mathcal{O}$, so that $\varphi(x,y) \in \mathbb{A}(2,\mathfrak{u})_\mathcal{O}$. Since $\mathbb{A}(2,\mathfrak{u})_\mathcal{O}$ is $B$-stable, it follows that $\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u})) = B\boldsymbol{.} \varphi((\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u})) \subseteq \mathbb{A}(2,\mathfrak{u})_\mathcal{O}$. Now suppose that $\mathfrak{a} \in \mathbb{A}(2,\mathfrak{u})_\mathcal{O}$, and write $\mathfrak{a} = ky\!\oplus\!kz$, where $y \in \mathcal{O}$. Then there is $b \in B$ such that $x\!=\!b\boldsymbol{.} y$, so that $b\boldsymbol{.}\mathfrak{a} \in \varphi((\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))$. As a result, $\mathfrak{a} \in \varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))$. (2) General theory tells us that $\mathcal{O}\!=\!\mathcal{O}_{\rm reg}\cap \mathfrak{u}$ is an open $B$-orbit of $\mathfrak{u}$. Note that $\mathcal{O}_{\rm reg}$ is a conical subset of $\mathfrak{g}$, so that $\mathcal{O}_{\rm reg}\cap\mathfrak{u}$ is a conical subset of $\mathfrak{u}$. It now follows from (1) and \cite[(3.2)]{CF} that $\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))$ is an open subset of $\mathbb{A}(2,\mathfrak{u})$. In view of Lemma \ref{Var1}, the irreducible set $\{x\}\!\times\!C_\mathfrak{u}(x)$ meets $\mathcal{O}_2(\mathfrak{u})$, so that $B\boldsymbol{.}((\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))=B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u})$ is irreducible. Hence $\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))$ is a non-empty irreducible, open subset of $\mathbb{A}(2,\mathfrak{u})$. Let $C\supseteq \varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))$ be an irreducible component of $\mathbb{A}(2,\mathfrak{u})$. Then $\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap \mathcal{O}_2(\mathfrak{u}))$ lies dense in $C$, so that $C=\overline{\varphi(B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap\mathcal{O}_2(\mathfrak{u}))}$. Observing Lemma \ref{Df1}, we thus obtain \[ \dim C = \dim B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\cap\mathcal{O}_2(\mathfrak{u})\!-\!4 = \dim B\boldsymbol{.}(\{x\}\!\times\!C_\mathfrak{u}(x))\!-\!4=\dim B \!-\!\dim Z(G)\!-\!4,\] as desired. \end{proof} \bigskip \noindent Given $x \in \mathfrak{u}$, we put \[ \mathbb{A}(2,\mathfrak{u},x):=\{ \mathfrak{a} \in \mathbb{A}(2,\mathfrak{u}) \ ; \ x \in \mathfrak{a}\}.\] \bigskip \begin{Proposition} \label{Var3} Suppose that $\Char(k)$ is good for $G$ and that $\ssrk(G)\!\ge\!2$. \begin{enumerate} \item $\dim \mathbb{A}(2,\mathfrak{u}) \!=\! \dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u})\!-\!4$. 
\item The variety $\mathbb{A}(2,\mathfrak{u})$ is equidimensional if and only if every almost simple component of $(G,G)$ is of type $(A_n)_{n\le 4}$ or $B_2$. In that case, every irreducible component $C \in \Irr(\mathbb{A}(2,\mathfrak{u}))$ is of the form $C=\overline{B\boldsymbol{.}\mathbb{A}(2,\mathfrak{u},x)}$ for some $B$-distinguished element $x \in \mathfrak{u}$. \item The variety $\mathbb{A}(2,\mathfrak{u})$ is irreducible if and only if every almost simple component of $(G,G)$ is of type $A_1$ or $A_2$. \end{enumerate} \end{Proposition} \begin{proof} (1) We write \[ \mathcal{C}_2(\mathfrak{u}) = \bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))}C\] as the union of its irreducible components. Since $\ssrk(G)\!\ge\!2$, Lemma \ref{Var1} shows that $\mathcal{O}_2(\mathfrak{u})$ is a dense open subset of $\mathcal{C}_2(\mathfrak{u})$. As a result, every irreducible component $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$ meets $\mathcal{O}_2(\mathfrak{u})$. In view of Theorem \ref{Df2}, we obtain \[ \dim \mathcal{O}_2(\mathfrak{u}) = \dim \mathcal{C}_2(\mathfrak{u}) = \dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u}).\] Let $C\in \Irr(\mathcal{C}_2(\mathfrak{u}))$. Then $C\cap\mathcal{O}_2(\mathfrak{u})$ is a $\GL_2(k)$-stable, irreducible variety of dimension $\dim C$, so that \[\dim \overline{\varphi(C\cap\mathcal{O}_2(\mathfrak{u}))} = \dim C\cap\mathcal{O}_2(\mathfrak{u})\!-\!4 = \dim C\!-\!4.\] Consequently, \[ \dim \mathbb{A}(2,\mathfrak{u}) = \max_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \overline{\varphi(C\cap\mathcal{O}_2(\mathfrak{u}))}= \dim \mathcal{C}_2(\mathfrak{u})\!-\!4 = \dim B\!-\!\dim Z(G)\!+\!\modd(B;\mathfrak{u})\!-\!4.\] (2) Suppose that $\mathbb{A}(2,\mathfrak{u})$ is equidimensional. As Lemma \ref{Var2} provides $C \in \Irr(\mathbb{A}(2,\mathfrak{u}))$ such that $\dim C\!=\!\dim B\!-\!\dim Z(G)\!-\!4$, it follows from (1) that $\modd(B;\mathfrak{u})\!=\!0$. The Theorem of Hille-R\"ohrle (see Proposition \ref{fmod1}) ensures that every almost simple component of $(G,G)$ is of the asserted type. Assuming this to be the case, Corollary \ref{Irr1} implies that $\mathcal{C}_2(\mathfrak{u})$ is equidimensional. In view of \cite[(2.5.1)]{CF}, $\mathcal{O}_2(\mathfrak{u})$ is equidimensional as well. We may thus apply \cite[(2.5.2)]{CF} to the canonical surjection $\mathcal{O}_2(\mathfrak{u}) \twoheadrightarrow \mathbb{A}(2,\mathfrak{u})$ and the $\GL_2(k)$-action on $\mathcal{O}_2(\mathfrak{u})$ to conclude that $\mathbb{A}(2,\mathfrak{u})$ is equidimensional. Given $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$, Lemma \ref{fmod2} provides $x_C \in \mathfrak{u}$ such that $C\! =\! \mathfrak{C}(x_C)$. In view of Lemma \ref{Df1}, our current assumption shows that $x_C$ is distinguished for $B$. According to Lemma \ref{Var1}, we have $(\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u}) \ne \emptyset$, while Lemma \ref{Var2} yields $\varphi(B\boldsymbol{.}(\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u}))= B\boldsymbol{.}\mathbb{A}(2,\mathfrak{u},x_C)$. Let $a \in \mathfrak{C}(x_C)\cap\mathcal{O}_2(\mathfrak{u})$. If $\mathcal{U} \subseteq \mathcal{C}_2(\mathfrak{u})$ is an open subset containing $a$, then $\mathcal{U}\cap (B\boldsymbol{.}(\{x_C\}\!\times\!C_\mathfrak{u}(x_C))$ is a non-empty open subset of the irreducible set $B\boldsymbol{.}(\{x_C\}\!\times\!C_\mathfrak{u}(x_C))$. 
Since this also holds for $B\boldsymbol{.} (\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u})$, we conclude that $\mathcal{U}\cap B\boldsymbol{.} (\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap \mathcal{O}_2(\mathfrak{u}) \ne \emptyset$. This shows that $a \in \overline{B\boldsymbol{.} (\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u})}$. Consequently, \begin{eqnarray*} \mathbb{A}(2,\mathfrak{u}) & = &\bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \varphi(\mathfrak{C}(x_C)\cap\mathcal{O}_2(\mathfrak{u})) \subseteq \bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \varphi(\overline{B\boldsymbol{.} (\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u})}) \\ & \subseteq & \bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \overline{\varphi(B\boldsymbol{.} (\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u}))} \subseteq \bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \overline{\varphi(B\boldsymbol{.} [(\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u})])} \\ & = & \bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \overline{B\boldsymbol{.} \varphi((\{x_C\}\!\times\!C_\mathfrak{u}(x_C))\cap\mathcal{O}_2(\mathfrak{u}))} = \bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \overline{B\boldsymbol{.} \mathbb{A}(2,\mathfrak{u},x_C)} \subseteq \mathbb{A}(2,\mathfrak{u}), \end{eqnarray*} so that $\mathbb{A}(2,\mathfrak{u})=\bigcup_{C \in \Irr(\mathcal{C}_2(\mathfrak{u}))} \overline{B\boldsymbol{.} \mathbb{A}(2,\mathfrak{u},x_C)}$ is a finite union of closed irreducible subsets. It follows that every irreducible component of $\mathbb{A}(2,\mathfrak{u})$ is of the form $\overline{B\boldsymbol{.} \mathbb{A}(2,\mathfrak{u},x_C)}$ for some $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$. (3) Suppose that $\mathbb{A}(2,\mathfrak{u})$ is irreducible. Then (2), Proposition \ref{fmod1}, and Corollary \ref{Irr1} show that the variety $\mathcal{C}_2(\mathfrak{u})$ is equidimensional. Using \cite[(2.5.2)]{CF}, we conclude that $\mathcal{C}_2(\mathfrak{u})$ is irreducible and Corollary \ref{Irr1} implies that $G$ has the asserted type. The reverse implication is a direct consequence of Corollary \ref{Irr1}. \end{proof} \bigskip \begin{Remark} The arguments of (2) can actually be used to show that $\mathcal{C}_2(\mathfrak{u})$ and $\mathbb{A}(2,\mathfrak{u})$ have the same number of components in case one (and hence both) of these spaces is (are) equidimensional: Let $C \in \Irr(\mathcal{C}_2(\mathfrak{u}))$. Returning to the proof of Proposition \ref{Pre3}(3), we find a subset $X_C \subseteq \mathfrak{u}$ such that \[ C= \overline{\pr_1^{-1}(X_C)}.\] Since $C$ is $\GL_2(k)$-stable, we conclude that $X_C\not \subseteq\{0\}$. Let $x \in X_C\!\smallsetminus\!\{0\}$. Then $\{x\}\!\times\!C_\mathfrak{u}(x) \subseteq C$. The assumption $C_\mathfrak{u}(x)\!=\!kx$ implies $x \in C(\mathfrak{u})$ and hence $\dim_k\mathfrak{u}\!=\!1$, a contradiction. As a result, $C\cap\mathcal{O}_2(\mathfrak{u})\!\ne\!\emptyset$. In view of \cite[(2.5.1)]{CF}, the variety $\mathcal{O}_2(\mathfrak{u})$ is therefore equidimensional with $|\Irr(\mathcal{O}_2(\mathfrak{u}))|\!=\!|\Irr(\mathcal{C}_2(\mathfrak{u}))|$. By virtue of \cite[(2.5.2)]{CF}, we obtain $|\Irr(\mathcal{O}_2(\mathfrak{u}))|\!=\!|\Irr(\mathbb{A}(2,\mathfrak{u}))|$. \end{Remark} \bigskip \bigskip
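\noindent Let us finally mention that the explicit centralizer computations entering the proofs of Section~\ref{S:Asg} (the ``direct computation'' steps) are finite-dimensional linear algebra problems and can easily be double-checked by computer. The following short script is only an informal aid of this kind and not part of the proofs; it assumes the Python library SymPy and works over $\mathbb{Q}$, which suffices as a plausibility check in good characteristic. It computes a basis of $C_\mathfrak{u}(x)$ for a strictly upper triangular matrix $x$ by determining the null space of $\ad(x)|_\mathfrak{u}$:
\begin{verbatim}
import sympy as sp

def E(i, j, size):
    """Elementary matrix E_{i,j} (1-based indices, as in the text)."""
    m = sp.zeros(size, size)
    m[i - 1, j - 1] = 1
    return m

def centralizer_in_u(x, size):
    """Basis of C_u(x) = {y in u : [x,y] = 0}, where u is the space of
    strictly upper triangular matrices of the given size."""
    positions = [(i, j) for i in range(1, size + 1)
                 for j in range(1, size + 1) if i < j]
    # columns of ad(x)|_u with respect to the basis {E_{i,j} : i < j}
    columns = []
    for (i, j) in positions:
        b = x * E(i, j, size) - E(i, j, size) * x
        columns.append([b[r - 1, c - 1] for (r, c) in positions])
    ad_x = sp.Matrix(columns).T
    return [sum((vec[k] * E(*positions[k], size)
                 for k in range(len(positions))), sp.zeros(size, size))
            for vec in ad_x.nullspace()]

# e_13 = E_{1,2} + E_{2,4} + E_{4,5} from the proof of Lemma SL2
e13 = E(1, 2, 5) + E(2, 4, 5) + E(4, 5, 5)
print(len(centralizer_in_u(e13, 5)))   # prints 5 = dim_k C_u(e_13)
\end{verbatim}
For the element $e_{13}$ above, the script returns a $5$-dimensional space, in accordance with the basis of $C_\mathfrak{u}(e_{13})$ displayed in the proof of Lemma~\ref{SL2}.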
\section{Introduction} \label{sec:00} The so-called Hartree and Gross-Pitaevskii hierarchies are infinite systems of coupled PDEs, usually taking the following form, \begin{equation} \label{GPH-cubic} \left\{ \begin{aligned} &i \partial_t \gamma^{(k)} = (-\Delta_{X_k}+\Delta_{X'_k}) \gamma^{(k)} + \, B_{k}\gamma^{(k+1)}\,, & \\ &\gamma_{|t=0}^{(k)}=\gamma_0^{(k)}\,,& \\ \end{aligned} \qquad \forall k\in\mathbb{N}\,, \right. \end{equation} where $\Delta_{X_k}$ (or $\Delta_{X'_k}$) denotes the standard Laplacian on $\mathbb{R}^{dk}$, $X_k=(x_1,\cdots,x_k)\in\mathbb{R}^{dk}$ and $X'_k=(x'_1,\cdots,x'_k)\in\mathbb{R}^{dk}$ are the configuration space coordinates, and $x_j,x'_j\in \mathbb{R}^d$ for each $j\in\{1,\cdots, k\}$. The solutions of these hierarchies are sequences of complex-valued functions $(\gamma^{(k)})_{k\in\mathbb{N}}$ such that $\gamma^{(k)}: (t,X_k,X'_k)\longmapsto \gamma^{(k)}(t,X_k,X'_k)$ is defined over a domain $I\times \mathbb{R}^{dk}\times \mathbb{R}^{dk}$, where $I$ is a real interval containing the origin, and $(\gamma^{(k)})_{k\in\mathbb{N}}$ fulfills the equation \eqref{GPH-cubic} with a prescribed initial condition $\gamma_0^{(k)}:\mathbb{R}^{dk}\times \mathbb{R}^{dk}\longrightarrow \mathbb{C}$. Furthermore, the interaction term on the right-hand side of \eqref{GPH-cubic} is given by the following expression: \begin{equation} \label{eq.bk} \begin{aligned} B_{k}\gamma^{(k+1)}&:=&B_{k}^{+} \gamma^{(k+1)}-B_{k}^{-} \gamma^{(k+1)}\\ &:=&\sum_{j=1}^{k}B^{+}_{j,k}\gamma^{(k+1)}-\sum_{j=1}^{k}B^{-}_{j,k}\gamma^{(k+1)}\,, \end{aligned} \end{equation} where $B^\pm_{j,k}$ are defined for $1\leq j\leq k$ by \begin{equation} \label{Bjk-1} (B^+_{j,k}\gamma^{(k+1)})(t, X_k, X'_k) :=\int_{\mathbb{R}^d} \gamma^{(k+1)}(t,X_k, y, X_k', y) \,V (x_j - y)\,dy\,, \end{equation} and \begin{equation} \label{Bjk-2} (B^-_{j,k} \gamma^{(k+1)})(t, X_k, X_k') := \int_{\mathbb{R}^d} \gamma^{(k+1)}(t,X_k, y, X_k', y) \, V (x'_j - y) \,dy\,. \end{equation} Here $V$ is either an even real-valued measurable function $V:\mathbb{R}^d\to \mathbb{R}$ in the case of the Hartree hierarchy or a multiple of a Dirac delta function $V=\lambda\delta_0$ in the case of the Gross-Pitaevskii hierarchy. The parameter $\lambda$ refers to a coupling constant taking positive or negative values and accounting respectively for a repulsive (defocusing) or an attractive (focusing) interaction in the Gross-Pitaevskii case. Additionally, one usually requires two further physical constraints on the solutions $(\gamma^{(k)})_{k\in\mathbb{N}}\,$, namely: \begin{itemize} \item \underline{Symmetry}: For any permutations $\sigma,\pi$ in the symmetric group $\mathfrak{S}_k$, \begin{equation} \label{cont1} \gamma^{(k)}(t, x_{\sigma(1)},\cdots,x_{\sigma(k)}; x'_{\pi(1)},\cdots,x'_{\pi(k)})=\gamma^{(k)}(t, X_k; X'_k). \end{equation} \item \underline{Finite density}: Each $\gamma^{(k)}$ is the kernel of a trace class operator on $L^2(\mathbb{R}^{dk})$ satisfying the following inequality in the operator sense, \begin{equation} \label{cont2} 0\leq \gamma^{(k)} \leq 1 \,. \end{equation} \end{itemize} The main questions that can be raised about the equations \eqref{GPH-cubic} as systems of PDEs are of course uniqueness, existence and stability of solutions. Although these hierarchy equations \eqref{GPH-cubic} are physically relevant and have been known in the physics literature for a long time (see e.g.~ \cite{MR0223148, MR578142} and references therein), their mathematical investigation started rather recently.
Indeed, the study of the Hartree and Gross-Pitaevskii hierarchies started attracting a wide and increasing interest only after remarkable progress was made in the mean field theory of Bose gases (see e.g.~\cite{MR2257859,MR2276262,MR2525781,MR2680421,MR2504864,MR2657816}). In particular, the uniqueness property of solutions for the Gross-Pitaevskii hierarchy equation was pointed out as a crucial step for the derivation of the dynamics of Bose-Einstein condensates from many-body quantum mechanics, see \cite{MR2209131,MR2276262,MR2377632}. Since then, the issues of well-posedness and uniqueness for hierarchy equations similar to \eqref{GPH-cubic} have been regarded as interesting problems, combining specific combinatorial and nonlinear analytic difficulties \cite{MR3210237,MR3385343,MR3165917,MR3246038,MR3551830,MR3500833,MR3419755}. Consequently, this stimulated a current trend among the PDE and mathematical physics communities, focused on the study of hierarchy equations in their own right, see e.g.~\cite{MR3360742,MR3293448,MR3013052,MR3395127,MR3466843,MR3170216}. The mathematical mean field theory of Bose gases was initiated in the pioneering works of Hepp \cite{MR0332046} and Ginibre-Velo \cite{MR530915,MR539736} in the 1970s. Afterwards, the subject was revived under the impetus of several contributions, e.g.~ \cite{MR2104130,MR1869286, MR2291792,MR2821235, MR1926667}. One of the purposes of this topic was the rigorous justification of the mean field approximation and the derivation of the Hartree, NLS and Gross-Pitaevskii equations from first principles of quantum mechanics, thus ultimately explaining how microscopic effects come into play in the macroscopic physical phenomena of Bose-Einstein condensates. For the sake of clarity, we briefly recall the relationship between the hierarchy equations \eqref{GPH-cubic} and the mean-field approximation of many-body quantum dynamics. For that, consider the many-body Schr\"odinger operator \begin{equation} \label{eq.hn} H_{n}=\sum_{i=1}^{n}-\Delta_{x_i}+\frac{1}{n}\sum_{1\leq i<j \leq n}W_{n}(x_i-x_j),\, \end{equation} where $\,W_{n}(x)=n^{d\beta}W(n^{\beta}x)$ for any $x \in \mathbb{R}^{d}$ and some $\beta \in [0,1]$, with $W$ a real-valued even potential. Under some reasonable assumptions on $W$, the operator $H_{n}$ is self-adjoint over the space $L_s^{2}(\mathbb{R}^{dn})$ and the quantum dynamics related to \eqref{eq.hn} are well-defined. Here $L_s^{2}(\mathbb{R}^{dn})$ is the space of symmetric square integrable functions $\Psi\in L^{2}(\mathbb{R}^{dn})$ such that $\Psi(x_1,\cdots,x_n)=\Psi(x_{\sigma(1)},\dots,x_{\sigma(n)}) $ for any permutation $\sigma$ in $\mathfrak{S}_n$. In particular, if $\varrho_n$ is a normal state or a density matrix (i.e., a normalized positive trace class operator on $L_s^{2}(\mathbb{R}^{dn})$) describing the quantum system at the initial time $t=0$, then according to the Heisenberg equation the system evolves at time $t$ towards the state \begin{equation} \label{qstate} \varrho_n(t)=e^{it H_n} \varrho_n e^{-it H_n}\,. \end{equation} One of the popular methods that explains the mean-field approximation is inspired by classical statistical mechanics and is known as the BBGKY hierarchy approach.
In fact, consider all the marginals $\varrho^{(k)}_{n}(t)$, $1\leq k\leq n$, given by their kernels \begin{equation} \label{corr-funct} \varrho^{(k)}_{n}(t,X_k;X'_k):=\displaystyle\int_{\mathbb{R}^{d(n-k)}} \varrho_n(t,X_k,y;X'_k,y)\,\,dy, \quad \text{ for all } \;(X_k,X'_k)\in \mathbb{R}^{2dk}, \end{equation} then $\varrho^{(k)}_{n}(t)$ are density matrices over $L_s^{2}(\mathbb{R}^{dk})$, usually called the reduced density matrices of the state $\varrho_n(t)$. Thus, the Heisenberg equation on $\varrho_n(t)$ yields the so-called BBGKY hierarchy on the marginals, \begin{equation} \label{BBGKY} \begin{aligned} i \partial_{t}\varrho^{(k)}_{n}(t)&= \displaystyle\sum_{j=1}^{k}\big[-\Delta_{x_{j}},\varrho_{n}^{(k)}(t)\big]+\frac{1}{n}\displaystyle \sum_{1\leq i<j\leq k} \big[W_n(x_i-x_j),\varrho_n^{(k)}(t)\big]\\ &+\frac{n-k}{n} \, \sum_{j=1}^{k}\mathrm{Tr}_{k+1}[W_n(x_j-x_{k+1}),\varrho_{n}^{(k+1)}(t)]\,, \end{aligned} \end{equation} where the bracket $[\cdot,\cdot]$ denotes the commutator defined as $[A,B]=AB-BA$ and $\mathrm{Tr}_{k+1}$ denotes the partial trace over the last variable $x_{k+1}$. This means that $\mathrm{Tr}_{k+1}[W_n(x_j-x_{k+1}),\varrho_{n}^{(k+1)}(t)] $ is an operator whose kernel $K$ is given by the following formula, \begin{eqnarray*} K(X_k,X'_k)&=&\int_{\mathbb{R}^d} \varrho_{n}^{(k+1)}(t,X_k,x_{k+1}, X_k', x_{k+1}) \,W_n (x_j - x_{k+1})\,dx_{k+1}\\ && -\,\int_{\mathbb{R}^d} \varrho_{n}^{(k+1)} (t,X_k, x_{k+1}', X_k', x_{k+1}') \, W_n(x'_j - x_{k+1}') \,dx_{k+1}'\,. \end{eqnarray*} The mean field approximation of quantum dynamics is usually understood within the BBGKY hierarchy approach as the following two statements: \begin{enumerate}[label=\textnormal{(\alph*)}] \item \label{chaos1} If $\varrho_n$ is an uncorrelated initial state $\varrho_n=|\varphi^{\otimes n}\rangle\langle \varphi^{\otimes n}|$, with $||\varphi||_{L^2(\mathbb{R}^d)}=1$, then the marginal $\varrho_n^{(k)}(t)$ satisfying the equation \eqref{BBGKY} converges as $n\to \infty$ towards the state $|\varphi_t^{\otimes k}\rangle\langle \varphi_t^{\otimes k}|$ for any $k\in\mathbb{N}$ and any time $t\in\mathbb{R}$, in the sense that \begin{eqnarray*} \lim_{n\to\infty}\mathrm{Tr}[\varrho_n^{(k)}(t) A]=\langle\varphi_t^{\otimes k}, A \varphi_t^{\otimes k}\rangle_{L^2(\mathbb{R}^{dk})}, \; \end{eqnarray*} for any bounded (or compact) operator $A$ on $L^{2}(\mathbb{R}^{dk})$ ($k$ is kept fixed while $n\to \infty$). \item \label{chaos2} Furthermore, $\varphi_t$ is the solution of the nonlinear NLS or Hartree equation \begin{eqnarray} \label{hartree} \left\{ \begin{array}[c]{l} i\partial_t \varphi_t=-\Delta \varphi_t+(V*\left|\varphi_t\right|^{2})\varphi_t\,,\\ \varphi_{0}=\varphi\,, \end{array} \right. \end{eqnarray} \end{enumerate} where $V$ is a potential that depends on $W$ and the parameter $\beta$ according to the following dichotomy: \begin{equation} \label{eq.lambda} V= \begin{cases} W\, &\text{ if } \;\beta=0,\\ \lambda\;\delta_0 &\text{ if } \;0<\beta\leq 1. \end{cases} \end{equation} Here $\lambda$ is a coupling constant which depends on the value of $\beta$. The above statements usually go under the name of propagation of chaos \cite{MR2327286}. For a general overview and more details on the subject we refer to the recent book \cite{MR3382225}. The strategy of the BBGKY approach goes through the following three steps: \begin{description} \item (i) Compactness. \item (ii) Convergence. \item (iii) Uniqueness.
\end{description} Steps (ii) and (iii) are the most delicate parts, especially when $W$ is an unbounded potential or $\beta>0$ ($\beta=1$ seems the most difficult case even if $W$ is smooth and compactly supported). Under favorable assumptions, by using a compactness argument one proves that, up to extracting a subsequence, $\varrho^{(k)}_{n}(t)$ converges when $n\to\infty$, with respect to the weak-$*$ topology, to a positive trace class operator denoted $\gamma^{(k)}(t)$, for each $k\in\mathbb{N}$ and any time $t$, i.e., \begin{eqnarray*} \lim_{n\to\infty}\mathrm{Tr}[\varrho_n^{(k)}(t) \;A]=\mathrm{Tr}[\gamma^{(k)}(t) \;A], \end{eqnarray*} for any compact operator $A$ on $L_s^{2}(\mathbb{R}^{dk})$. This is step (i), and in particular one remarks that the kernels of the operators $(\gamma^{(k)}(t))_{k\in\mathbb{N}}$ satisfy the constraints \eqref{cont1}-\eqref{cont2}. Step (ii) consists in letting $n$ tend to infinity in the BBGKY hierarchy \eqref{BBGKY} and proving the convergence towards the Hartree or Gross-Pitaevskii hierarchy \eqref{GPH-cubic}, written in its equivalent form in terms of trace class operators, i.e., \begin{equation} \label{eq.infbbgky} i \partial_{t}\gamma^{(k)}= \displaystyle\sum_{j=1}^{k}\big[-\Delta_{x_{j}},\gamma^{(k)}\big]+B_{k}\gamma^{(k+1)}. \end{equation} Here, the term $B_{k}\gamma^{(k+1)}$ is understood as an operator, written in short as $$ B_{k}\gamma^{(k+1)}= \displaystyle\sum_{j=1}^{k}\mathrm{Tr}_{k+1}[V(x_j-x_{k+1}),\gamma^{(k+1)}]\,, $$ whose kernel coincides with the one given in \eqref{eq.bk}-\eqref{Bjk-2} according to the following identification: $$ (B^+_{j,k}\gamma^{(k+1)})(t, X_k, X'_k) \;\text{ is the kernel of } \; \mathrm{Tr}_{k+1}[V(x_j-x_{k+1})\gamma^{(k+1)}] $$ and $$ (B^-_{j,k}\gamma^{(k+1)})(t, X_k, X'_k) \;\text{ is the kernel of } \; \mathrm{Tr}_{k+1}[\gamma^{(k+1)}V(x_j-x_{k+1})]. $$ Notice that the hierarchy equation \eqref{GPH-cubic} or \eqref{eq.infbbgky} may not make sense as it stands. In fact, some additional regularity on $(\gamma^{(k)})_{k\in\mathbb{N}}$ is required in order to make everything consistent. Nevertheless, at a formal level the limit of the BBGKY hierarchy \eqref{BBGKY} appears consistent with the equation \eqref{eq.infbbgky}, although the question of convergence is more subtle than a straightforward limit. The last step (iii) consists in proving uniqueness of solutions for the hierarchy equations \eqref{GPH-cubic} or equivalently \eqref{eq.infbbgky}. In fact, the argument goes as follows. Consider an uncorrelated state $\varrho_n=|\varphi^{\otimes n}\rangle\langle \varphi^{\otimes n}|$ as in \ref{chaos1}; then the steps (i)-(ii) yield a solution $(\gamma^{(k)})_{k\in\mathbb{N}}$ of the hierarchy equation \eqref{eq.infbbgky} satisfying the initial condition $\gamma^{(k)}_0=|\varphi^{\otimes k}\rangle\langle \varphi^{\otimes k}|$ for all $k\in\mathbb{N}$. Moreover, one easily checks that if $\varphi_t$ solves the NLS equation \eqref{hartree} then $|\varphi_t^{\otimes k}\rangle\langle \varphi_t^{\otimes k}|$ is also a solution of the hierarchy equation \eqref{eq.infbbgky} satisfying the same initial condition as $\gamma^{(k)}(t)$. Therefore, if one proves that the Gross-Pitaevskii or the Hartree hierarchy \eqref{eq.infbbgky} admits a unique solution for each initial datum, then $\gamma^{(k)}(t)=|\varphi_t^{\otimes k}\rangle\langle \varphi_t^{\otimes k}|$ for all times. Since the assertions \ref{chaos1}-\ref{chaos2} are proved independently of any extraction of subsequences, the propagation of chaos is established in this way.
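For the reader's convenience, let us check, in terms of kernels, that $|\varphi_t^{\otimes k}\rangle\langle \varphi_t^{\otimes k}|$ indeed solves the hierarchy when $\varphi_t$ solves \eqref{hartree}, in the simplest case $k=1$ (the computation for general $k$ is identical). Writing $\gamma^{(1)}(t,x_1,x_1')=\varphi_t(x_1)\overline{\varphi_t(x_1')}$ and $\gamma^{(2)}(t,x_1,x_2,x_1',x_2')=\varphi_t(x_1)\varphi_t(x_2)\overline{\varphi_t(x_1')\varphi_t(x_2')}$, the definitions \eqref{Bjk-1}-\eqref{Bjk-2} give
\[
(B_1\gamma^{(2)})(t,x_1,x_1')=\Big[(V*|\varphi_t|^2)(x_1)-(V*|\varphi_t|^2)(x_1')\Big]\,\gamma^{(1)}(t,x_1,x_1')\,,
\]
while the equation \eqref{hartree} and the Leibniz rule yield
\[
i\partial_t\gamma^{(1)}=(-\Delta_{x_1}+\Delta_{x_1'})\,\gamma^{(1)}+\Big[(V*|\varphi_t|^2)(x_1)-(V*|\varphi_t|^2)(x_1')\Big]\,\gamma^{(1)}=(-\Delta_{x_1}+\Delta_{x_1'})\,\gamma^{(1)}+B_1\gamma^{(2)}\,,
\]
which is precisely the first equation of the hierarchy \eqref{GPH-cubic}.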
The above strategy was designed after several contributions that started with the work of Spohn in \cite{MR578142} for bounded $W$ potentials then improved by Bardos-Golse-Mauser in \cite{MR1869286} (proof of (i)-(ii) for the Coulomb potential) and in \cite{MR1926667} (proof of (iii) for the Coulomb potential) and later on enhanced into a powerful method in a series of papers by Erd\H{o}s-Schlein-Yau \cite{MR2680421,MR2525781,MR2276262,MR2257859} in order to tackle the dynamics of Bose-Einstein condensates. There are of course other approaches that justify the mean field approximation of quantum dynamics (see e.g. \cite{MR2953701,MR3379490,MR2657816,MR2504864,MR2313859,MR3317556,MR2821235}) and there are also other trends that focus for instance on rate of convergence \cite{MR2839064,MR3117522,MR3681700,MR3506807,MR3391830,MR2836427,MR2530155} or on the stationary mean field approximation of ground states for Bose gases (see e.g.~\cite{MR3161107,MR3310520,MR2143817}). Notice that it is not necessary to start with exceptional (uncorrelated) states as in \ref{chaos1}-\ref{chaos2}; one can formulate a stronger form for the mean field approximation which is state independent as suggested in the work of Ammari and Nier (see \cite{MR2465733,MR2513969}). Our main motivation in this article is the uniqueness or well posedness problem (iii) for general hierarchy equations of type \eqref{GPH-cubic} as they may emerge from a mean field theory. So, we are not concerned with (i) and (ii), although if these steps are proved then we expect that our results can be easily used to complete the BBGKY strategy and deduce the validity of the mean field approximation. Before explaining our contribution, it is useful to highlight some remarkable results concerning the problem (iii). The first uniqueness results in the defocusing case ($\lambda>0$) for the Gross-Pitaeivskii hierarchy were obtained in \cite{MR2680421,MR2525781,MR2276262,MR2257859} by using some sophisticated Feynman graph expansions. Later on, Klainerman and Machedon \cite{MR2377632} proved a {\it conditional} uniqueness theorem using a board game argument inspired by the Feynman graph expansion with space-time multilinear estimates based on the following a priori condition, \begin{equation} \label{klma} \forall T>0, \exists R>0 \;\text{ s.t. } \quad \int_{0}^{T}\|S^{(k)}\;B_{j,k}^{\pm}\,\gamma^{(k)}(t)\|_{L^{2}(\mathbb{R}^{dk}\times \mathbb{R}^{dk})}\,dt < R^{k},\,\quad \forall k\in \mathbb{N}. \end{equation} Here $S^{(k)}$ denotes the following operator, \begin{align} \label{eq.tch1} \mathcal{S}^{(k)}:=\prod_{j=1}^{k}(1-\Delta_{x_{j}})^{1/2}(1-\Delta_{x'_{j}})^{1/2}\,, \end{align} which acts on the kernel of $B_{j,k}^{\pm}\gamma^{(k)}(t)$. A large number of remarkable results followed soon after these breakthroughs, see e.g.~ \cite{MR2988730,MR3013052,MR2747009,MR2683760,MR2662450, MR3500833,MR3116008}. Moreover, other interesting aspects started to be explored like the mean field problem on tori \cite{MR3449225,MR3466843,MR3425265,MR3360742,MR3170216} or the focusing case ($\lambda<0$) \cite{MR3641881,MR3674169,MR3488534,MR3425265,MR2683760,MR2600687,MR2662450}. Furthermore, in \cite{MR3385343} the authors gave a new proof, based on a quantum de Finetti theorem (see \cite{MR2802894,MR3506807,MR3335056,MR3161107} and the discussion in Subsect.~\ref{sec:02}), leading to \emph{unconditional} uniqueness results for the Gross-Pitaevskii hierarchy \eqref{GPH-cubic}. 
Subsequently, this proof was extended to show low regularity well-posedness results and also to study the quintic Gross-Pitaevskii hierarchy equation, see \cite{MR3419755,MR3395127}. \bigskip Beyond the importance and the profound implications of these aforementioned uniqueness and well-posedness results, they all rely on the same guiding idea, which suggests extending nonlinear techniques (Strichartz, Morawetz, space-time inequalities, randomization\dots) to the hierarchy equations \eqref{GPH-cubic}. This may seem somewhat surprising since the latter are linear equations. On the other hand, the hierarchy equations \eqref{GPH-cubic} inherit important properties from the BBGKY hierarchy \eqref{BBGKY} revealing their statistical nature, and these are usually neglected. In particular, a quantum de Finetti theorem, which will be explained in Subsect.~\ref{sec:02}, says that all the physically relevant solutions of \eqref{GPH-cubic} have the following form for any $k\in \mathbb{N} $, \begin{equation} \label{int-dfinet} \gamma^{(k)}(t)=\int_{L^2(\mathbb{R}^d)} \,|\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu_t, \end{equation} where $\mu_t$ is a Borel probability measure on $L^2(\mathbb{R}^d)$ and $|\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | $ is a rank one projector over the space $L_s^2(\mathbb{R}^{dk})$. Our first observation is that any solution of the hierarchy equation \eqref{GPH-cubic} yields in a natural way a solution $(\mu_t)_{t\in I}$ of a Liouville (or transport) equation \begin{equation} \label{int-liouville} \partial_{t} \mu_{t} + \nabla^{T} (v \, \mu_{t}) = 0\,, \end{equation} where $\mu_t$ is the probability measure given by \eqref{int-dfinet}, and vice versa. Here, $v$ is the vector field that defines the NLS equation \eqref{hartree} (i.e., $v(u)= -\Delta u+ (V*|u|^2)u$) and $\nabla^{T}$ denotes the transpose operation of the real gradient. The expression \eqref{int-liouville} is formal and is inspired by the finite dimensional form of transport or continuity equations; it should be understood in the weak sense as \begin{equation} \label{int.eq.transport} \displaystyle \int_{\mathbb{R}}\int_{L^2(\mathbb{R}^d)}\partial_{t}\varphi(t,u)+{\rm Re}\langle v(u),\nabla\varphi(t,u)\rangle \; d\mu_{t}(u)\,dt=0, \end{equation} for all functions $ \varphi:\mathbb{R}\times L^2(\mathbb{R}^d)\to \mathbb{R} $ in a certain class of cylindrical smooth test functions $\mathscr C_{0,cyl}^{\infty}$ that will be detailed in Subsect.~\ref{fram}. Thus, by establishing the above duality or equivalence between hierarchies and transport equations we enter the realm of kinetic theory, where powerful ideas have been flourishing for a while. Our second observation is that the uniqueness problem for \eqref{int.eq.transport} can be solved by a general and powerful argument that is model independent with respect to the initial value problem \eqref{hartree}. The idea of this argument is in some sense related to the well-known method of characteristics in kinetic theory and to the recent advances in the case of non-smooth vector fields \cite{MR2759545,MR1022305,MR2400257,MR2439520,MR2668627,MR2129498,MR2839299,MR2335089}. Indeed, inspired by these methods, the authors in \cite{MR3721874,MR3379490} developed a uniqueness theory for continuity equations defined over arbitrary rigged Hilbert spaces. A further improvement of these results is needed in this article.
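To illustrate the direction going from the initial value problem to the transport equation (a purely formal sketch, assuming enough regularity on the curve and using cylindrical test functions), let $t\mapsto u(t)$ solve $\dot u(t)=v(u(t))$ for the vector field $v$ appearing in \eqref{int.eq.transport}, and set $\mu_t:=\delta_{u(t)}$. Then, for any test function $\varphi$ which is compactly supported in time, the chain rule for the real differential gives
$$
0=\int_{\mathbb{R}}\frac{d}{dt}\,\varphi(t,u(t))\,dt
=\int_{\mathbb{R}}\Big(\partial_t\varphi(t,u(t))+{\rm Re}\langle v(u(t)),\nabla\varphi(t,u(t))\rangle\Big)\,dt\,,
$$
which is exactly \eqref{int.eq.transport} for $\mu_t=\delta_{u(t)}$; this elementary mechanism lies behind the duality alluded to above.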
So, thanks to the above duality and transport techniques one solves the problem of uniqueness for hierarchies without appealing to Feynman graph expansions, board game arguments, or multilinear estimates of any kind. The only thing that counts is that the initial value problem of type \eqref{hartree} satisfies the uniqueness property for its weak solutions on some natural functional spaces. This reduces the problem (iii) to the investigation of an initial value problem like NLS or Hartree, and usually there is an abundant literature on these questions for various nonlinear PDEs. We believe that the approach we suggest here is quite natural. In fact, one can argue that the mean field limits of quantum states \eqref{qstate} are probability distributions $\mu_t$ satisfying a continuity (Liouville) equation because of conservation laws. On the other hand, the $k$-point correlation functions \eqref{corr-funct} of the quantum system should converge, at least formally, towards the classical correlation functions of the same probability distribution $\mu_t$. Hence, the Liouville and hierarchy equations provide dynamical statistical descriptions of the same classical system and so they are equivalent. Furthermore, working with the Liouville equation is more advantageous since we do not need to control all the moments of the probability distribution $\mu_t$; this explains in some sense why we get rid of the combinatorial and nonlinear problems that surround the hierarchy equations. We will describe our main results in more detail in the following subsections. \bigskip \noindent In a broader perspective, our main purpose in this article is the study of the relationships between an initial value problem like the NLS and Hartree equations and two distinguished descriptions of its statistical dynamics. Indeed, to adopt a statistical point of view starting from a Cauchy problem, there are at least two distinguished visions. The first consists in writing a Liouville equation, as in finite dimension, while the second consists in writing a hierarchy equation. The latter point of view requires the initial value problem to be invariant with respect to the gauge group $U(1)$, while the former is more natural and follows the original spirit of statistical mechanics. Our main contribution here is essentially the clarification of how the uniqueness and well-posedness properties of each of the above formulations are interconnected, in full generality. More precisely, the main results of this article can be summarized as follows: \begin{itemize} \item {\it Duality}: We prove in full generality a natural duality between hierarchy equations of type \eqref{GPH-cubic} and Liouville equations of the form \eqref{int-liouville}. \item {\it Uniqueness and existence principle}: We establish a general principle saying: \begin{itemize} \item [(i)] Uniqueness for a hierarchy equation holds whenever the related initial value problem satisfies a uniqueness property for its weak solutions. \item [(ii)] Existence of solutions to a $U(1)$-invariant initial value problem implies the existence of solutions for the related hierarchy equation of type \eqref{GPH-cubic}. \end{itemize} \item {\it Applications}: We provide in Subsection \ref{sub.sec.highresult} several examples focused on the NLS and Hartree equations.
In particular, our work lifts straightforwardly to hierarchy equations \eqref{GPH-cubic} or \eqref{int-hier} the landmark results of \cite{MR1383498,MR2474179,MR2361505,MR3056755,MR1992354} on \emph{unconditional} uniqueness for NLS equations. Furthermore, we formulate \emph{conditional} uniqueness results for solutions of hierarchy equations in the critical and subcritical cases. We also provide a counter-example showing that uniqueness fails for a critical hierarchy equation if the conditional assumption is dropped. \item {\it Transport techniques}: We show that transport techniques are very powerful tools in solving the questions of uniqueness and well-posedness for general hierarchy equations. In particular, we steadily push forward in this article the attempt to build a unified statistical theory of nonlinear PDEs which sprang up in \cite{MR3379490} and continued in \cite{MR3721874}. We believe that such transport techniques are very useful and would have fruitful applications in various fields such as hydrodynamics, nonlinear dispersive PDEs, integrable systems and quantum field theory (see e.g. \cite{MR3721874,MR3737034,MR3255099}). \end{itemize} \subsection{Preliminaries} \label{fram} We introduce below some notations and the general framework that we shall follow throughout the article. This is particularly useful to state clearly our main results in the next subsection. \bigskip \noindent \emph{Notations}: Let $\mathfrak{H}$ be a separable Hilbert space. We use $\mathscr{L}^k(\mathfrak{H})$, $1\leq k\leq \infty$, to denote the Schatten classes. In particular, $\mathscr{L}^1(\mathfrak{H})$ and $\mathscr{L}^\infty(\mathfrak{H})$ are the spaces of trace class and compact operators respectively. Two natural topologies over $\mathscr{L}^1(\mathfrak{H})$ will often be used, namely the norm topology $||\cdot||_{ \mathscr{L}^1(\mathfrak{H})}$ and the weak-$*$ topology. The latter is inherited from the duality $ \mathscr{L}^1(\mathfrak{H})= \mathscr{L}^\infty(\mathfrak{H})^*$ and leads to the following sequential convergence: $$ B_n\overset{ *}{ \rightharpoonup} B \quad\hbox{ if and only if } \quad \lim_{n\to\infty}\mathrm{Tr}[B_n \,K]=\mathrm{Tr}[B \, K], \quad \hbox{ for any } \; K\in \mathscr{L}^\infty(\mathfrak{H}). $$ \medskip Let $X_r=B_{\mathfrak{H}}(0,r)$ denote the closed ball of radius $r>0$ in $\mathfrak{H}$. The spaces of Borel probability measures on $\mathfrak{H}$ or on $X_r$ will be denoted by $\mathfrak{P}(\mathfrak{H})$ and $\mathfrak{P}(X_r)$ respectively. Obviously, a measure $\mu\in\mathfrak{P}(X_r)$ is also a measure in $\mathfrak{P}(\mathfrak{H})$ which concentrates on $X_r$, i.e.~$\mu(X_r)=1$. Consider $$ U(1):=\{e^{i\theta}, \theta\in\mathbb{R}\} $$ to be the circle group. We say that a measure $\mu\in \mathfrak{P}(\mathfrak{H})$ is $U(1)$-invariant if for any bounded Borel function $\varphi:\mathfrak{H}\to \mathbb{R}$ and for any $\theta\in\mathbb{R}$, $$ \int_{\mathfrak{H}} \varphi(e^{i\theta} x) \;d\mu(x) = \int_{\mathfrak{H}} \varphi( x) \;d\mu(x)\,. $$ The following set of probability measures on $\mathfrak{H}$ will be important later on, \begin{equation} \label{invmea} \mathscr{M}(\mathfrak{H}):= \{\mu\in \mathfrak{P}(X_1): \mu \textit{ is } U(1)\textit{-invariant}\}\,. \end{equation} There are two natural topologies on $\mathfrak{P}(\mathfrak{H})$ that we shall systematically use, namely the strong and weak narrow convergence topologies.
We say that a sequence $(\mu_i)_{i\in\mathbb{N}}$ in $\mathfrak{P}(\mathfrak{H})$ converges weakly narrowly to $\mu\in \mathfrak{P}(\mathfrak{H})$ if: \begin{eqnarray} \label{defnarroww} \mu_i\rightharpoonup \mu & \Longleftrightarrow &\big(\int_{\mathfrak{H}} f \,d\mu_i \to \int_{\mathfrak{H}} f \,d\mu, \quad \forall f\in \mathscr{C}_b(\mathfrak{H}_w)\,\big)\,, \end{eqnarray} and strongly narrowly if: \begin{eqnarray} \label{defnarrows} \mu_i\to\mu &\Longleftrightarrow &\big( \int_{\mathfrak{H}} f \,d\mu_i \to \int_{\mathfrak{H}} f \,d\mu, \quad\forall f\in \mathscr{C}_b(\mathfrak{H}_s) \,\big)\,, \end{eqnarray} where $\mathscr{C}_b(\mathfrak{H}_w)$ and $\mathscr{C}_b(\mathfrak{H}_s)$ are the spaces of bounded continuous functions with respect to the weak and norm topology of the Hilbert space $\mathfrak{H}$ respectively. Recall that the weak topology in $\mathfrak{H}$ is metrizable on bounded sets. This can be seen using for instance the following distance, \begin{equation} \label{dweak} d_w(x,y):=\sqrt{\sum_{n\in\mathbb{N}} \frac{1}{2^n} \,|\langle x-y, e_n\rangle|^2}\,, \end{equation} where $(e_n)_{n\in\mathbb{N}}$ is an O.N.B of $\mathfrak{H}$. In the same way, we define the weak and strong narrow convergence topologies in $\mathfrak{P}(X_r)$ as the limits in \eqref{defnarroww}-\eqref{defnarrows} with the functional spaces in the right hand side replaced by $\mathscr{C}_b(X_r,d_w)$ and $\mathscr{C}_b(X_r,||\cdot||_{\mathfrak{H}})$ respectively. \bigskip \noindent \emph{Rigged Hilbert spaces}: Consider a {separable} Hilbert space $\mathscr{Z}_{0}$ and a self-adjoint operator $A$ with a domain $D(A) \subset \mathscr{Z}_{0}$ verifying: \begin{equation} \label{condopA} \exists c > 0 ~,~ A \geq c \, 1\,. \end{equation} Using the operator $A$ one can build a natural scale of Hilbert spaces indexed by a parameter $\tau \in \mathbb{R}$. Indeed, consider for every $\tau\in\mathbb{R}$ the inner products: \[ \forall x,y \in D(A^{\frac{\tau}{2}}) ~,\qquad \langle x,y \rangle_{\mathscr{Z}_{\tau}} := \langle A^{\tau/2} x,A^{\tau/2}y \rangle_{\mathscr{Z}_{0}}\,, \] and let $\mathscr{Z}_{\tau}$ denote the completion of the pre-Hilbert space $(D(A^{\frac{\tau}{2}}), \langle \cdot,\cdot\rangle_{\mathscr{Z}_{\tau}})$. Then for any $s,\sigma\geq 0$, one has the canonical continuous and dense embeddings, $$ \mathscr{Z}_{s} \hookrightarrow \mathscr{Z}_{0} \hookrightarrow \mathscr{Z}_{-\sigma}\,. $$ Remark that $\mathscr{Z}_{-\sigma}$ identifies with the dual space $\mathscr{Z}_{\sigma}^{'}$ of $\mathscr{Z}_{\sigma}$ with respect to the inner product of $\mathscr{Z}_{0}$. A simple example is provided for instance by the Sobolev spaces with $\mathscr{Z}_{s} = H^{s}(\mathbb{R}^{d})$ and $\mathscr{Z}_{-s}= H^{-s}(\mathbb{R}^{d})$ for $s \geq 0$. \bigskip \noindent \emph{Initial value problem}: Consider a (possibly) non-autonomous vector field ${v} : \mathbb{R} \times \mathscr{Z}_{s} \to \mathscr{Z}_{-\sigma}$, with $0 \leq s \leq \sigma$. Here the spaces $ \mathscr{Z}_{s}$ and $\mathscr{Z}_{-\sigma}$ are defined according to the above paragraph. Then the initial value problem defined by the vector field $v$ is the following differential equation, valued in $\mathscr{Z}_{-\sigma}$ and posed on an open bounded time interval $I$: \begin{equation} \label{IVP}\tag{{\it ivp}} \left\{ \begin{aligned} &\dot{u}(t) \ = {v}(t,u(t))\,,& \\ &u(t_{0}) \ = x \in \mathscr{Z}_{s}\,.& \\ \end{aligned} \right. \end{equation} Here $t_0\in I$ is a fixed initial time.
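A simple illustration, only meant to fix ideas (the NLS and Hartree examples are treated in detail in Subsect.~\ref{subsec.app}): take $\mathscr{Z}_{0}=L^2(\mathbb{R}^d)$ and $A=-\Delta+1$, so that \eqref{condopA} holds with $c=1$ and $\mathscr{Z}_{\tau}=H^{\tau}(\mathbb{R}^d)$. For a real-valued potential $V\in L^\infty(\mathbb{R}^d)$, the Hartree vector field $v(u)=-\Delta u+(V*|u|^2)\,u$ appearing in \eqref{int-liouville} maps $H^{1}(\mathbb{R}^d)$ into $H^{-1}(\mathbb{R}^d)$, is $U(1)$-invariant and is bounded on bounded sets, since
$$
\|\Delta u\|_{H^{-1}}\leq \|u\|_{H^{1}}\,,\qquad
\|(V*|u|^2)\,u\|_{H^{-1}}\leq \|(V*|u|^2)\,u\|_{L^{2}}\leq \|V\|_{L^\infty}\,\|u\|_{L^2}^{3}\,,
$$
so that \eqref{IVP} with $s=\sigma=1$ makes sense for this choice of $v$.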
We shall require the following assumption on the vector field, \begin{equation} \label{A0}\tag{A0} \left\{ \begin{aligned} &v \text{ is } \text{ Borel and bounded on bounded sets },\\ &v \text{ is } U(1)-\text{invariant }\,. \end{aligned} \right. \end{equation} The $U(1)$-invariance means that $v(t,e^{i\theta} x)=e^{i\theta} v(t, x)$ for all $\theta\in \mathbb{R}$ and $(t,x)\in \mathbb{R}\times \mathscr{Z}_{s}$. There are at least two distinct notions of solutions for the above initial value problem. \begin{defn} \label{w-ssol} Consider the initial value problem \eqref{IVP} with a vector field satisfying \eqref{A0}. Then: \begin{itemize} \item[(i)] A weak solution of \eqref{IVP} over $I$ is a function $u: t \in I \longrightarrow u(t)$ belonging to the space $L^{\infty}(I,\mathscr{Z}_{s}) \cap W^{1,\infty}(I,\mathscr{Z}_{-\sigma})$ satisfying \eqref{IVP} for a.e.~$t \in I$ and for some $t_{0} \in I$. \item[(ii)] A strong solution of \eqref{IVP} over $I$ is a function $u: t \in I \longrightarrow u(t)$ belonging to the space $\mathscr C(I,\mathscr{Z}_{s}) \cap \mathscr C^{1}(I,\mathscr{Z}_{-\sigma})$ satisfying \eqref{IVP} for all $t \in I$ and for some $t_{0} \in I$. \end{itemize} \end{defn} Here $\mathscr C(I,\mathfrak{H})$ and $\mathscr C^1(I,\mathfrak{H})$ are respectively the spaces of continuous and $\mathscr{C}^1$-functions valued in a given Hilbert space $\mathfrak{H}$, while $W^{1,p}(I,\mathscr{Z}_{-\sigma})$, $1\leq p\leq\infty$, are the Sobolev spaces of classes of functions in $L^p(I,\mathscr{Z}_{-\sigma})$ with distributional first derivatives in $L^p(I,\mathscr{Z}_{-\sigma})$. Recall that any $u\in W^{1,p}(I,\mathscr{Z}_{-\sigma})$ is an absolutely continuous curve in $\mathscr{Z}_{-\sigma}$ with almost everywhere defined derivatives in $\mathscr{Z}_{-\sigma}$ satisfying $\dot u\in L^p(I,\mathscr{Z}_{-\sigma})$. The initial value problem \eqref{IVP} makes sense in the space $L^\infty(I,\mathscr{Z}_{s})\cap W^{1,\infty}(I,\mathscr{Z}_{-\sigma})$ since weak solutions of \eqref{IVP} are weakly continuous maps $u:\bar I\to\mathscr Z_s$ which are differentiable almost everywhere on $I$. Furthermore, it is easy to check, using the assumption \eqref{A0}, that any function $u\in L^\infty(I,\mathscr{Z}_{s})$ satisfying the Duhamel formula, \begin{equation} \label{int-form} u(t)=x+\int_{t_0}^t v(\tau,u(\tau)) \,d\tau\,, \text{ for a.e. } t\in I\,, \end{equation} is a weak solution of \eqref{IVP}. Conversely, any weak solution $u$ of \eqref{IVP} satisfies \eqref{int-form}. Similarly, if we assume that the vector field $v$ is continuous then strong solutions of \eqref{IVP} over $I$ are exactly continuous curves in $\mathscr{C}(I,\mathscr{Z}_{s})$ satisfying the Duhamel formula \eqref{int-form} for all $t\in I$ (see e.g.~\cite{MR2002047} for more details). \bigskip \noindent \emph{Hierarchy equations}: Consider a normal state or a density matrix $\varrho_n\in \mathscr{L}^1(\vee^n\mathscr{Z}_{0})$ (i.e., $\varrho_n\geq 0$ and $ \mathrm{Tr}[\varrho_n]=1$). Then the $k$-particle {reduced density matrix} of $\varrho_n$, for $0\leq k\leq n$, is by definition the unique non-negative trace-class operator $\varrho_n^{(k)}\in \mathscr{L}^1(\vee^k\mathscr{Z}_{0}) $ satisfying: \begin{equation} \label{redmat} \mathrm{Tr}_{\otimes^n \mathscr{Z}_{0}} \big[\varrho_n \, A\otimes 1^{(n-k)}\big]=\mathrm{Tr}_{\vee^k \mathscr{Z}_{0}} \big[\varrho_n^{(k)} \, A\big]\,, \quad \forall A \in \mathscr{L}(\vee^k\mathscr{Z}_{0})\,.
\end{equation} We call a \emph{symmetric hierarchy} (or simply a hierarchy) any subsequential limit, as $n\to\infty$ and with respect to the weak-$*$ topology in $ \mathscr{L}^1(\vee^k\mathscr{Z}_{0})$, of the reduced density matrices $(\varrho_n^{(k)})_{k\in \mathbb{N}}$. Moreover, we denote the set of all these subsequential limits by $\mathscr{H}(\mathscr{Z}_{0})$. In Section \ref{sec:01}, we prove that $\mathscr{H}(\mathscr{Z}_{0})$ is a nontrivial convex set admitting the following characterization: \begin{equation} \label{HKchar} \mathscr{H}(\mathscr{Z}_{0})=\left\{ \int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu(\varphi)\,, \;\mu\in \mathscr{M}(\mathscr{Z}_{0})\right\}\,, \end{equation} where $\mathscr{M}(\mathscr{Z}_{0})$ is the set given by \eqref{invmea} with $\mathfrak{H}=\mathscr{Z}_{0}$, i.e., the set of probability measures which are $U(1)$-invariant and concentrated on the closed unit ball $ X_1:=B_{\mathscr{Z}_{0}}(0, 1)$ of $\mathscr{Z}_{0}$. In fact, for any $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathscr{Z}_{0})$ there exists a unique Borel probability measure $\mu$ on $ X_1$ which is $U(1)$-invariant and satisfies, for any $k\in\mathbb{N}$, \begin{equation} \label{pre-dfinet} \gamma^{(k)}=\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu(\varphi)\,. \end{equation} Conversely, for any $\mu\in \mathscr{M}(\mathscr{Z}_{0})$ the above expression defines a symmetric hierarchy (see Prop.~\ref{str.BEh} and \ref{bij}). In the following, we want to write a hierarchy equation that generalizes the ones in \eqref{GPH-cubic} to any nonlinearity, or equivalently to any initial value problem \eqref{IVP} which is $U(1)$-invariant. For that purpose, we introduce two operations extending the actions $B^\pm_{j,k}$ given in \eqref{Bjk-1}-\eqref{Bjk-2} to any initial value problem \eqref{IVP} satisfying \eqref{A0}. Indeed, we define for any $k\in\mathbb{N}$ and $j=1,\cdots,k$, the following operations on $\gamma \in \mathscr{H}(\mathscr{Z}_{0})$: \begin{equation} \begin{aligned} \bullet\; C_{j,k}^{+} \gamma &:= \displaystyle\int_{\mathscr{Z}_{s}\cap X_1} \big| x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j}\big| \;d\mu(x) \,,\\ \medskip \bullet \;C_{j,k}^{-} \gamma &:= \displaystyle\int_{\mathscr{Z}_{s}\cap X_1} \big| x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j} \rangle \langle x^{\otimes k} \big| \;d\mu(x) \,, \end{aligned} \end{equation} where $\gamma$ and $\mu$ are related according to the integral representation \eqref{pre-dfinet}. We call a \emph{(symmetric) hierarchy equation} related to the initial value problem \eqref{IVP} the following integral equation defined on an open bounded interval $I$, \begin{equation} \label{int-hier} \forall t \in I ~,\qquad \gamma_{t}^{(k)} = \gamma_{t_{0}}^{(k)} + \int_{t_{0}}^{t} \sum_{j=1}^{k} (C_{j,k}^{+} \gamma_{\tau} + C_{j,k}^{-} \gamma_{\tau}) \;d\tau \,, \end{equation} with an initial datum $\gamma_{t_{0}}\in\mathscr{H}(\mathscr{Z}_0)$ for some $t_0\in I$. Of course, the latter equation may not make sense as it stands and, as usual, some regularity is required on the solutions.
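Let us record an elementary consequence of these definitions (a formal observation, using only that the adjoint of a rank one operator $|a\rangle\langle b|$ is $|b\rangle\langle a|$ and that taking adjoints commutes with the Bochner integral): for every $k\in\mathbb{N}$ and $j=1,\dots,k$,
$$
\big(C_{j,k}^{+}\gamma\big)^{*}=C_{j,k}^{-}\gamma\,,
$$
so that the integrand in \eqref{int-hier} is formally self-adjoint and the equation is compatible with the self-adjointness of the operators $\gamma^{(k)}_{t}$.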
In view of the regularity issue raised above, we will only be interested in solutions which are curves of symmetric hierarchies $t\in I\to \gamma_t\in \mathscr{H}(\mathscr{Z}_0)$ satisfying the following regularity assumption: \begin{equation} \left\{ \label{A2}\tag{A1} \begin{array}{rl} \bullet & \gamma_t\in\mathscr{H}(\mathscr{Z}_0) \text{ for all } t\in I\,,\\[8pt] \bullet & \exists R>0, \; \displaystyle ||(A^{s/2})^{\otimes k}\gamma_t^{(k)} (A^{s/2})^{\otimes k} ||_{ \mathscr{L}^1(\vee^k\mathscr{Z}_0)}\leq R^{2k}, \; \forall k\in\mathbb{N}, \forall t\in I\,,\\[8pt] \bullet & \forall k\in\mathbb{N}, \; t\in I\to (A^{-\sigma/2})^{\otimes k}\gamma_t^{(k)} (A^{-\sigma/2})^{\otimes k} \;\text{ is weak-$*$ continuous in } \mathscr{L}_{}^1(\vee^k\mathscr{Z}_0)\,. \end{array} \right. \end{equation} Such an assumption is actually quite natural: the first condition emerges from well-justified physical constraints; the second puts the regularity of $\gamma_t$ at the same level as that of the related initial value problem \eqref{IVP}, thus justifying the operations $C^\pm_{j,k}$; and the last is a very mild condition that gives a rigorous meaning to the right hand side of \eqref{int-hier} as a Bochner integral in suitable Banach spaces (see Section \ref{sec.equiv} for more details). \bigskip \noindent \emph{Liouville equations}: The Liouville equation, related to the initial value problem \eqref{IVP}, is given as in finite dimension by the formal expression, \[ \partial_{t} \mu_{t} + \nabla^{T} ({v}\mu_{t}) = 0\,. \] However, since we are in infinite dimensional spaces, we shall understand the above equation in a weak sense using a convenient space of cylindrical test functions defined below. In particular, we shall use the real structure of the Hilbert space $\mathscr Z_{-\sigma}$ and interpret $\nabla$ as a real gradient and the superscript $^T$ as a transpose operation with respect to the real scalar product of $ \mathscr Z_{-\sigma}$. We refer the reader to \cite{MR3721874} for more details. \bigskip Consider $\mathscr{Z}_{-\sigma}$ as a real Hilbert space $ \mathscr{Z}_{-\sigma,\mathbb{R}}$ endowed with the scalar product ${\rm Re}\langle \cdot, \cdot \rangle_{\mathscr{Z}_{-\sigma}}$ simply denoted by $\langle \cdot, \cdot \rangle_{\mathscr Z_{-\sigma,\mathbb{R}}}$. Let $\mathbb{P}_n$ be the set of all projections $\pi:\mathscr Z_{_{-\sigma,\mathbb{R}}} \to \mathbb{R}^n$ given by \begin{equation} \label{eq.pi} \pi(x)=(\langle x, e_1\rangle_{\mathscr Z_{-\sigma,\mathbb{R}}}, \cdots, \langle x, e_n\rangle_{\mathscr Z_{-\sigma,\mathbb{R}}})\,, \end{equation} where $\{e_1,\cdots,e_n\}$ is any orthonormal family of $\mathscr Z_{-\sigma, \mathbb{R}}$. We denote by $\mathscr{C}_{0,cyl}^{\infty}(\mathscr Z_{-\sigma})$ the space of functions $\varphi=\psi\circ \pi$ with $\pi\in \mathbb{P}_n$ for some $n\in\mathbb{N}$ and $\psi\in \mathscr C_0^\infty(\mathbb{R}^n)$. In particular, one checks that the gradient (or the $\mathbb{R}$-differential) of $\varphi$ is equal to $$ \nabla_{\mathscr Z_{-\sigma,\mathbb{R}}}\varphi=\pi^T\circ\nabla\psi\circ\pi, $$ where $\pi^T:\mathbb{R}^n\to \mathscr{Z}_{-\sigma,\mathbb{R}} $ denotes the transpose map of $\pi$ given by $$ \pi^T(x_1,\cdots,x_n)=\sum_{i=1}^n x_i \,e_i\,.
$$ Let $I$ be a bounded open interval; we say that a function $\varphi : I \times \mathscr{Z}_{-\sigma} \to \mathbb{R}$ belongs to $\mathscr C_{0,cyl}^{\infty}(I \times \mathscr{Z}_{-\sigma})$ if there exist, for some $n\in\mathbb{N}$, a projection $\pi \in \mathbb{P}_n$ and $\phi \in \mathscr C_{0}^{\infty}(I \times \mathbb{R}^n)$ such that: \[ \forall (t,z) \in I \times\mathscr{Z}_{-\sigma} ~,\quad \varphi(t,z) = \phi(t,\pi(z)) \,. \] In particular, the functions of the form $\chi(\cdot)\phi(\cdot)$ where $\chi \in \mathscr C_{0}^{\infty}(I)$ and $\phi \in \mathscr C_{0,cyl}^{\infty}(\mathscr{Z}_{-\sigma})$ are in the space of cylindrical test functions $\mathscr C_{0,cyl}^{\infty}(I \times \mathscr{Z}_{-\sigma})$. \bigskip We consider the Liouville equation, related to the initial value problem \eqref{IVP} and defined on a bounded open interval $I$, as the following integral equation, \begin{equation} \label{eq.transport} \displaystyle\int_{I}\int_{\mathscr Z_{-\sigma}}\partial_{t}\varphi(t,x)+\langle v(t,x),{\nabla_{\mathscr Z_{-\sigma,\mathbb{R}}}}\varphi(t,x)\rangle_{{\mathscr Z_{-\sigma,\mathbb{R}}}} \; d\mu_{t}(x)\,dt=0, \quad\forall \varphi \in \mathscr{C}_{0,cyl}^{\infty}(I \times \mathscr Z_{-\sigma})\,, \end{equation} where $\langle \cdot,\cdot\rangle_{\mathscr Z_{-\sigma,\mathbb{R}}}={\rm Re} \langle \cdot,\cdot\rangle_{\mathscr Z_{-\sigma}}$, $\nabla_{\mathscr Z_{-\sigma,\mathbb{R}}}$ is the $\mathbb{R}$-gradient (or differential) in $\mathscr Z_{-\sigma,\mathbb{R}}$ and $t\in I\to \mu_t$ is a curve in the space of Borel probability measures $\mathfrak{P}(\mathscr{Z}_{-\sigma})$. The equation \eqref{eq.transport} is always supplemented by a prescribed initial condition $\mu_{t_0}\in \mathfrak{P}(\mathscr{Z}_{-\sigma})$ at a given time $t_0$. We are interested in solutions of the Liouville equation \eqref{eq.transport} which satisfy the following assumption: \begin{equation} \left\{ \label{A1}\tag{A2} \begin{array}{rl} \bullet & \mu_t\in\mathscr{M}(\mathscr{Z}_0), \text{ for all } t\in I.\\[8pt] \bullet & \mu_t(B_{\mathscr{Z}_s}(0,R)) =1, \text{ for some } R>0 \text{ and for all } t\in I.\\[8pt] \bullet & t\in I\to\mu_t \text{ is weakly narrowly continuous in } \mathfrak{P}(\mathscr{Z}_{-\sigma}). \end{array} \right. \end{equation} Remark that Borel sets of $\mathscr Z_s$ are also Borel sets of $\mathscr Z_{-\sigma}$, see for instance \cite[Appendix]{MR3721874}. So, the assumption \eqref{A1} implies that the integral with respect to $\mu_t$ in \eqref{eq.transport} is actually over the closed ball $B_{\mathscr{Z}_s}(0,R)$. Furthermore, the integrand in \eqref{eq.transport} is bounded and the integration with respect to $\mu_t$ and $dt$ is well defined. The first and last requirements in \eqref{A1} are quite natural as they are justified by physical constraints and a mild time regularity. However, in order for the Liouville equation to make rigorous sense it is not necessary to assume the concentration condition $\mu_t(B_{\mathscr{Z}_s}(0,R)) =1$, which can be relaxed. This already confers a serious advantage to the Liouville equation \eqref{eq.transport} over the symmetric hierarchy equation \eqref{int-hier}. \begin{remark} \label{balls} In the assumptions \eqref{A2} and \eqref{A1}, we explicitly required, respectively, the following bounds to hold for all $t\in I$, $$ (\forall k\in\mathbb{N} ,\quad \mathrm{Tr}_{\vee^k \mathscr{Z}_{0}}[\gamma_t^{(k)}]\leq 1) \quad \text{ and } \quad \mu_t(B_{\mathscr{Z}_{0}}(0, 1))=1\,.
$$ Such requirements are only made to meet the important physical constraint \eqref{cont2} related to the mean-field theory of Bose gases. From a purely mathematical point of view, one can relax these conditions by simply assuming, for some $R'>0$, \begin{equation} \label{bdrk} (\forall k\in\mathbb{N} ,\quad \mathrm{Tr}_{\vee^k \mathscr{Z}_{0}}[\gamma_t^{(k)}]\leq R'^k) \quad \text{ and } \quad \mu_t(B_{\mathscr{Z}_{0}}(0, R'))=1\,. \end{equation} In this case, the second conditions in \eqref{A2} and in \eqref{A1} respectively imply the bounds \eqref{bdrk} with $R'=\frac{R}{c}$ (where $c$ is the constant in the inequality \eqref{condopA}). Our main results in Subsect.~\ref{sub.sec.highresult} still hold true under this small modification. This also explains why we do not require the norm $||\cdot||_{\mathscr{Z}_{0}}$ to be a conserved quantity for the initial value problem \eqref{IVP}: our results are also valid for non-conservative dynamical systems. \end{remark} \subsection{Highlighted results} \label{sub.sec.highresult} In this Subsection, we present our main contributions previously discussed in the introduction, namely the duality between Liouville and hierarchy equations (Theorem \ref{sec.0.thm1}) and the uniqueness and existence principles (Theorems \ref{sec.0.thm2} and \ref{ext-2}). These results will be stated in full generality. Moreover, some appealing applications to the NLS and Hartree equations will be discussed below. \begin{thm}(Duality) \label{sec.0.thm1} Let ${v}: \mathbb{R} \times \mathscr{Z}_{s} \mapsto \mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and $I$ a bounded open interval. Then $t\in I\to\gamma_t=(\gamma_t^{(k)})_{k\in\mathbb{N}}$ is a solution of the symmetric hierarchy equation \eqref{int-hier} satisfying \eqref{A2} if and only if $t\in I\to\mu_t$ is a solution of the Liouville equation \eqref{eq.transport} satisfying \eqref{A1}, where $\mu_t$ is related to $\gamma_t$ according to \begin{equation*} \gamma_t^{(k)}=\int_{\mathscr{Z}_{s}} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu_t(\varphi)\,, \quad \forall k\in\mathbb{N}. \end{equation*} \end{thm} Recall that the notion of weak solutions of \eqref{IVP} is given in Definition \ref{w-ssol}. \begin{thm}(Uniqueness principle) \label{sec.0.thm2} Let ${v}: \mathbb{R} \times \mathscr{Z}_{s} \mapsto \mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0}. Then uniqueness of weak solutions over a bounded open interval $I$ for the initial value problem \eqref{IVP} implies the uniqueness of solutions over $I$ of the symmetric hierarchy equation \eqref{int-hier} satisfying the assumption \eqref{A2}. \end{thm} \begin{thm}(Existence principle) \label{ext-2} Let $v:\mathbb{R}\times \mathscr Z_s\to \mathscr Z_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $I$ be a bounded open interval with $t_0\in I$ a fixed initial time. Assume that there exist a Borel subset $\mathcal{A}$ of $\mathscr{Z}_{s}$ and a Borel map $\phi:\bar I\times \mathcal{A}\to \mathscr{Z}_{s}$ which is bounded on bounded sets and such that for any $x\in \mathcal{A}$ the curve $t\in I \to \phi(t,x)$ is a weak solution of the initial value problem \eqref{IVP} satisfying $ \phi(t_0,x)=x$. Furthermore, suppose that $||\phi(t,x)||_{\mathscr{Z}_{0}}= ||x||_{\mathscr{Z}_{0}}$ and $\phi(t,e^{i \theta} x)= e^{i \theta} \phi(t, x)$ for any $x\in \mathcal{A}$, $t\in \bar I$ and $\theta\in\mathbb{R}$.
Then for any symmetric hierarchy $\gamma=(\gamma^{(k)})_{k\in\mathbb{N}}\in\mathscr{H}(\mathscr{Z}_{0})$ satisfying: \begin{eqnarray*} &&\forall k\in\mathbb{N}, \quad \gamma^{(k)}=\int_{\mathscr{Z}_{s}} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\nu(\varphi)\,, \quad \text{ with } \quad \nu(\mathcal{A})=1 , \\ &&\exists R>0, \;\; \displaystyle ||(A^{s/2})^{\otimes k}\gamma^{(k)} (A^{s/2})^{\otimes k} ||_{ \mathscr{L}^1(\vee^k\mathscr{Z}_0)}\leq R^{2k}, \; \forall k\in\mathbb{N}, \end{eqnarray*} there exists a solution $t\in I\to \gamma_t=(\gamma_t^{(k)})_{k\in\mathbb{N}}\in\mathscr{H}(\mathscr{Z}_{0})$ of the hierarchy equation \eqref{int-hier} verifying the initial condition $\gamma_{t_0}=\gamma$ and the assumption \eqref{A2}. \end{thm} It is worth noticing that Theorems \ref{sec.0.thm2} and \ref{ext-2} admit converses. Actually, the converse to Thm.~\ref{sec.0.thm2} is easy to establish and says that the uniqueness of hierarchy solutions satisfying \eqref{A2} implies uniqueness for the weak solutions of the initial value problem \eqref{IVP} modulo $U(1)$-gauge invariance. The converse of Thm.~\ref{ext-2} is more involved and will be considered elsewhere, in connection with other applications. \subsection{Examples} \label{subsec.app} Consider the following nonlinear Schr\"odinger (NLS) equation on $\mathbb{R}^d$, \begin{equation} \label{eq.ivp} \left\{ \begin{aligned} &i \partial_{t}u=-\Delta u+g(u)\\ &u_{|t=0}=u_{0}\,, \end{aligned} \right. \end{equation} where $g:\mathbb{C}\to \mathbb{C}$ is a function satisfying the assumptions \begin{equation} \left\{ \label{cond-nls} \begin{aligned} & \quad g\in\mathscr{C}^1(\mathbb{R}^2,\mathbb{R}^2), \;g(0)=0, \\ & \quad \exists \alpha\geq 0, \; \exists M>0, \;|g'(\xi)|\leq M |\xi|^\alpha, \quad \forall \,|\xi|\geq 1,\\ & \quad g(e^{i\theta} x)= e^{i\theta} g( x), \quad \forall\, \theta\in\mathbb{R}\,, \forall\, x\in\mathbb{C}\,. \end{aligned} \right. \end{equation} Here $g'$ denotes the real derivative of $g$, considered as a function from $\mathbb{R}^2$ into itself, so that $g'(\xi)$ can be identified with $(\partial_{z}g(\xi), \partial_{\bar z} g(\xi))$ and $$ |g'(\xi)|=|\partial_{z}g(\xi)|+|\partial_{\bar z}g(\xi)|\,. $$ One can easily see that the first two assertions in \eqref{cond-nls} are equivalent to assuming that $g$ splits into a sum $g=g_1+g_2$ of two functions such that $ g_i\in\mathscr{C}^1(\mathbb{R}^2,\mathbb{R}^2)$ and $g_i(0)=0$ for $i=1,2$, satisfying additionally, \begin{eqnarray*} |g_1(\xi)-g_1(\eta)| \leq M \;|\xi-\eta| \,, \qquad \text{ and } \qquad |g_2(\xi)-g_2(\eta)| \leq M \;|\xi-\eta| \max(|\xi|^\alpha,|\eta|^\alpha)\,, \end{eqnarray*} for any $\xi,\eta\in\mathbb{C}$ (see e.g.~\cite{MR877998}). Notice also that, using Sobolev's inequalities, if for $0\leq s< \frac d 2$ the power $\alpha$ in \eqref{cond-nls} verifies \begin{equation} \label{alpha} 0\leq \alpha\leq \frac{d+2s}{d-2s}\,, \end{equation} then there exists $\sigma\geq 0$ such that the nonlinearity in the NLS equation \eqref{eq.ivp}, \begin{equation} \label{nonlin} \begin{aligned} G: H^s(\mathbb{R}^d)&\longrightarrow & H^{-\sigma}(\mathbb{R}^d)\\ u&\longmapsto & g(u) \end{aligned} \end{equation} is continuous and bounded on bounded sets. Moreover, for any $s\geq \frac{d}{2}$ and $\alpha\geq 0$ the map $G:H^s(\mathbb{R}^d)\to L^2(\mathbb{R}^d)$ is also continuous and bounded on bounded sets. This in particular implies that the initial value problem \eqref{eq.ivp} makes sense in the spaces $H^s(\mathbb{R}^d)$.
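As a concrete illustration (an elementary check recorded here for convenience), the pure-power nonlinearity $g(z)=\lambda |z|^{\alpha}z$ with $\lambda\in\mathbb{C}$ and $\alpha>0$ satisfies \eqref{cond-nls}: writing $g(z)=\lambda (z\bar z)^{\alpha/2}z$ one computes
$$
\partial_{z}g(z)=\lambda\,\tfrac{\alpha+2}{2}\,|z|^{\alpha}\,,\qquad
\partial_{\bar z}g(z)=\lambda\,\tfrac{\alpha}{2}\,|z|^{\alpha-2}z^{2}\,,\qquad\text{so that}\qquad
|g'(z)|=|\lambda|(\alpha+1)\,|z|^{\alpha}\,,
$$
while $g(e^{i\theta}z)=\lambda |z|^{\alpha}e^{i\theta}z=e^{i\theta}g(z)$. The Hartree nonlinearity $g(u)=(W*|u|^2)\,u$ is not of this local form, but it is covered by the conditional results discussed at the end of this subsection.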
A mild solution $u$ of the NLS equation \eqref{eq.ivp}, defined over a bounded open interval $I$ containing $0$, is a function $u\in L^\infty(I, H^s(\mathbb{R}^d))$ satisfying for any $t\in I$, \begin{equation} \label{nls-int} u(t)=\mathcal{U}(t)u(0)-i\int_0^t \mathcal{U}(t-\tau) g(u(\tau)) \; d\tau\,, \end{equation} where $\mathcal{U}(t)=e^{it \Delta}$. Since the nonlinearity $G$ is bounded on bounded sets, one concludes that $u\in W^{1,\infty}(I,H^{-\sigma}(\mathbb{R}^d))$. Consequently, if one considers $\tilde u:=\mathcal{U}(-t)u$ then $u\in L^\infty(I, H^s(\mathbb{R}^d))$ is a mild solution of \eqref{eq.ivp} satisfying \eqref{nls-int} if and only if $\tilde u\in L^\infty(I, H^s(\mathbb{R}^d)) \cap W^{1,\infty}(I, H^{-\sigma}(\mathbb{R}^d))$ is a weak solution, in the sense of Def.~\ref{w-ssol}, of the initial value problem \eqref{IVP} with the vector field $v:\mathbb{R}\times H^s(\mathbb{R}^d)\to H^{-\sigma}(\mathbb{R}^d)$ defined by \begin{equation} \label{nls-v} v(t,x):=-i\, \mathcal{U}(-t) \;g(\mathcal{U}(t) x)\,. \end{equation} So, one notices that the vector field $v$ satisfies the assumption \eqref{A0} whenever $g$ verifies \eqref{cond-nls} and $\alpha$ is such that \eqref{alpha} holds true if $0\leq s<\frac{d}{2}$, or $\alpha\geq 0$ if $s\geq \frac{d}{2}$. Hence, one can consider the related hierarchy equation \eqref{int-hier} with this vector field $v$, given in \eqref{nls-v}, and apply Theorem \ref{sec.0.thm2} with \begin{equation} \label{spec-fram} A=-\Delta+1, \qquad \mathscr{Z}_{s}=H^s(\mathbb{R}^d), \qquad \mathscr{Z}_{-\sigma}=H^{-\sigma}(\mathbb{R}^d) . \end{equation} Since there is a multitude of results on unconditional uniqueness for NLS and Hartree equations, one can lift them straightforwardly to the corresponding hierarchy equations \eqref{int-hier} using Thm.~\ref{sec.0.thm2} (see e.g.~\cite{MR3447005,MR1383498,MR2361505,MR3056755,MR2474179,MR2002047,MR1992354,MR1055532,MR952091}). We will not try to detail all the uniqueness results that can be lifted through the uniqueness principle of Thm.~\ref{sec.0.thm2}, but instead we will illustrate it by a few remarkable examples given below. Although Thm.~\ref{sec.0.thm2} seems only suitable for using unconditional uniqueness results in spaces of type $L^\infty(I, H^s(\mathbb{R}^d))$ for the initial value problem \eqref{IVP}, it is not difficult to see that its proof also allows one to lift conditional uniqueness results for \eqref{IVP} to the related hierarchy equations \eqref{int-hier}. However, to avoid intricate statements in this case, we refrain from giving an abstract conditional uniqueness result for hierarchy equations and prefer to treat some significant examples below. It seems that in the literature the only studied hierarchy equations are related to nonlinearities given by $g(u)=|u|^\alpha u$ with $\alpha=2,4$ or $g(u)=(V*|u|^2)\, u$. So, here in particular we show how to define a general symmetric hierarchy equation for any consistent\footnote{We mean that $g$ defines a Borel map $G$ as in \eqref{nonlin} for some $s,\sigma\geq 0$ and it is bounded on bounded sets.} nonlinearity which is $U(1)$-invariant. Our formulation of the hierarchy equations in \eqref{int-hier} is equivalent to the usual one in the special cases mentioned before, but there are two slight differences worth highlighting: firstly, we prefer to work in the interaction representation, while in the literature integral equations are used; secondly, we use trace-class operators, while in the literature the kernel representation is preferred.
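For the reader's convenience, here is the elementary manipulation behind the passage to the interaction representation (stated with $t_0=0$): applying $\mathcal{U}(-t)$ to \eqref{nls-int} and writing $u(\tau)=\mathcal{U}(\tau)\tilde u(\tau)$ gives
$$
\tilde u(t)=\mathcal{U}(-t)u(t)=u(0)-i\int_0^t \mathcal{U}(-\tau)\,g\big(\mathcal{U}(\tau)\tilde u(\tau)\big)\,d\tau
=\tilde u(0)+\int_0^t v\big(\tau,\tilde u(\tau)\big)\,d\tau\,,
$$
which is the Duhamel formula \eqref{int-form} for the vector field \eqref{nls-v}; this is the mechanism behind the equivalence between mild solutions of \eqref{eq.ivp} and weak solutions of \eqref{IVP} used above.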
In Subsection \ref{reg-issue}, the equivalence between these two ways of writing the hierarchy equations is explained. \bigskip \noindent \emph{ Unconditional results}: One of the remarkable results on unconditional uniqueness for NLS equations \eqref{eq.ivp} is due to Kato in \cite{MR1383498}. It says that any two mild solutions in $L^\infty(I, H^s(\mathbb{R}^d))$ of \eqref{eq.ivp} with the same initial condition coincide under some assumptions on the dimension $d\geq 1$, the power $\alpha\geq 0$ and the Sobolev scale $s\geq 0$. Such a result has the following implication for the related hierarchies. \begin{prop}[Kato] \label{uniq-kato} Let $g:\mathbb{C}\to\mathbb{C}$ be a function satisfying \eqref{cond-nls} and take $s\geq 0$. Let $v:\mathbb{R}\times H^s(\mathbb{R}^d) \to H^{-\sigma}(\mathbb{R}^d)$ be the vector field given by \eqref{nls-v}. Then any two solutions of the hierarchy equation \eqref{int-hier} satisfying \eqref{A2}, with the same initial condition, coincide in the following cases: \begin{itemize} \item [(i)] $s\geq \frac{d}{2}$. \item [(ii)] $d\geq 2$, $0\leq s <\frac{d}{2}$ and $\alpha< \min\{\frac{4}{d-2s},\frac{2s+2}{d-2s}\}$. \item [(iii)] $d=1$, $0\leq s<\frac{1}{2}$ and $ \alpha\leq \frac{1+2s}{1-2s}$. \end{itemize} \end{prop} There are several improvements (see e.g. \cite{MR2361505,MR3056755,MR2474179,MR2002047,MR1992354,MR1055532,MR952091}) of the unconditional uniqueness result of Kato when the nonlinearity is specified as \begin{equation} \label{nonlin-p} g(u)=\pm |u|^\alpha u\,. \end{equation} We summarize a straightforward consequence of some of these uniqueness results for the hierarchy equations \eqref{int-hier}. We respectively refer to \cite[Theorem 1.5]{MR1992354}, \cite[Theorem 1.1]{MR2361505} and \cite[Theorem 1.5]{MR3056755}. \begin{prop}[] \label{uniq-power} Let $v:\mathbb{R}\times H^s(\mathbb{R}^d) \to H^{-\sigma}(\mathbb{R}^d)$ be the vector field defined in \eqref{nls-v} with the nonlinearity given by \eqref{nonlin-p}. Then any two solutions of the corresponding hierarchy equation \eqref{int-hier} satisfying \eqref{A2}, with the same initial condition, coincide in the following cases: \begin{itemize} \item (Furioli $\&$ Terraneo): \begin{eqnarray*} && 3\leq d\leq 5, \quad 0<s<1, \quad \alpha \leq \frac{d+2-2s}{d-2s}, \\\text{ and } &&\max\{1, \frac{2s}{d-2s}\}<\alpha<\min\{\frac{4}{d-2s},\frac{ d+2s}{d-2s},\frac{ 4s+2}{d-2s}\}. \end{eqnarray*} \item (Rogers): \begin{equation*} d\geq3,\quad 0\leq s\leq 1, \quad \frac{2+2s}{d-2s}\leq\alpha<\min\{\frac{2+4s-\frac{4s}{d}}{d-2s},\ \frac{4}{d-2s}\}, \end{equation*} \item (Han $\&$ Fang): \begin{eqnarray*} &&d=2, \quad 0<s<1, \quad \alpha=\frac{2+2s}{2-2s} ,\\ \text{ or } && d=3, \quad \frac{1}{4}<s<\frac{1}{2}, \quad \alpha=\frac{3+2s}{3-2s}. \end{eqnarray*} \end{itemize} \end{prop} In particular, for the cubic nonlinearity $\alpha=2$ we recover the uniqueness for the hierarchy equation \eqref{GPH-cubic} proved in \cite{MR3395127}, in the cases $d=1,2$ for any $s\geq\frac{d}{6}$ and $d\geq 3$ for any $s>\frac{d-2}{2}$; while for the quartic nonlinearity $\alpha=3$ we obtain unconditional uniqueness for hierarchy equations in the cases $d=1,2$ for any $s\geq\frac{d}{4}$ and $d\geq 3$ for any $s>\frac{3d-4}{6}$. \bigskip \noindent \emph{ Conditional results}: In this paragraph we provide two further uniqueness results of conditional type for hierarchy equations \eqref{int-hier}. Conditional here means that we require additional implicit conditions on hierarchy solutions, in the spirit of nonlinear dispersive equations.
Indeed, consider a mapping \begin{equation} \label{unc-g} \begin{aligned} &g:L^2(\mathbb{R}^d)\cap L^r(\mathbb{R}^d)\longrightarrow L^{r'}(\mathbb{R}^d)\,,&\\ \text{ with } & 2\leq r <\frac{2d}{d-2} \;\text{ if } \; d\geq 2 \;\text{ or }\; 2\leq r \leq \infty \text{ if } d=1 \quad(\text{ Here } \; \frac{1}{r'}+\frac{1}{r}=1); & \end{aligned} \end{equation} and assume the existence of a constant $\alpha>0$ such that for any $M>0$ there exists $C(M)<\infty$ satisfying: \begin{equation} \label{unc.eq.1} ||g(u_1)-g(u_2)||_{L^{r'}(\mathbb{R}^d)}\leq C(M) \; \bigg( ||u_1||_{L^r(\mathbb{R}^d)}^\alpha+ ||u_2||_{L^r(\mathbb{R}^d)}^\alpha\bigg)\; ||u_1-u_2||_{L^r(\mathbb{R}^d)}\,, \end{equation} for any $u_1,u_2\in L^2(\mathbb{R}^d)\cap L^r(\mathbb{R}^d) $ such that $||u_1||_{L^2(\mathbb{R}^d)}\leq M$ and $||u_2||_{L^2(\mathbb{R}^d)}\leq M$. \medskip We are interested in the NLS equation \eqref{eq.ivp}, with $g(u)$ given as before, and its corresponding hierarchy equation \eqref{int-hier}. To fit our general setting for hierarchy equations \eqref{int-hier} we need the following observations: firstly, one can easily extend the mapping $g:L^2(\mathbb{R}^d)\cap L^r(\mathbb{R}^d)\to L^{r'}(\mathbb{R}^d)$ to a Borel map $G:L^2(\mathbb{R}^d)\to H^{-1}(\mathbb{R}^d)$ (see Lemma \ref{ext-borel-field} in the Appendix); secondly, the results which we state below are independent of the choice of the extension. \begin{prop} \label{uniq-unc1} Let $g$ be a $U(1)$-invariant mapping satisfying \eqref{unc-g}-\eqref{unc.eq.1} and such that: $$ \frac{2}{q}:=d\,(\frac{1}{2}-\frac{1}{r})<\frac{2}{\alpha+2}\,. $$ Consider the Borel vector field $v$ given by \begin{equation} \label{vect-field} v(t,x):=-i \,\mathcal{U}(-t) \,G(\mathcal{U}(t) x) \end{equation} with $\mathcal{U}(t)=e^{it \Delta}$ and $G$ any Borel extension of the mapping $g$. Let $I$ be a bounded open interval and let $t\in I\to \gamma_t,\tilde\gamma_t \in\mathscr{H}(L^2(\mathbb{R}^d))$ be two solutions of the hierarchy equation \eqref{int-hier} associated to $v$ and satisfying the assumption \eqref{A2} (with $s=0$, $\sigma=1$ and $A=-\Delta+1$). Furthermore, suppose that \begin{equation} \label{cond-Stri} \int_{I} \int_{L^2} ||\,\mathcal{U}(t) \,x||_{L^r}^q\; d\mu_t(x)\,dt<\infty \quad \text{ and } \quad \int_{I} \int_{L^2} ||\,\mathcal{U}(t)\, x||_{L^r}^q\; d\tilde\mu_t(x)\,dt<\infty\,, \end{equation} where $\mu_t$ and $\tilde\mu_t$ are the two Borel probability measures verifying for all $k\in\mathbb{N}$, \begin{equation} \label{cond-eq2} \gamma_t^{(k)}=\int_{L^2} |x^{\otimes k} \rangle \langle x^{\otimes k} | \;d\mu_t(x)\, \quad \text{ and } \quad \tilde\gamma_t^{(k)}=\int_{L^2} |x^{\otimes k} \rangle \langle x^{\otimes k} | \;d\tilde\mu_t(x)\,. \end{equation} Then $\gamma_{t_0}=\tilde\gamma_{t_0}$ for some $t_0\in I$ implies that $\gamma_{t}=\tilde\gamma_{t}$ for all $t\in I$. \end{prop} In contrast with the previous examples of unconditional uniqueness, in this case we do not assume that the vector field $v$ is bounded on bounded sets. Nevertheless, the hierarchy equation \eqref{int-hier} still makes sense for solutions $t\in I\to \gamma_t$ satisfying \eqref{A2} and \eqref{cond-Stri}. In fact, one easily checks that $$ \int_I \int_{L^2} ||v(t,x)||_{H^{-1}(\mathbb{R}^d)} \;d\nu_t(x) \,dt<\infty \,, $$ where $\nu_t$ is either $\mu_t$ or $\tilde\mu_t$ given in the right hand side of \eqref{cond-eq2}. So, the related Liouville equation \eqref{int-liouville} makes sense even though the vector field $v$ may not be bounded on bounded sets.
Consequently, using \eqref{est-vect1} one notices that the hierarchy equation \eqref{int-hier} is meaningful since $$ \int_I ||(A^{-1/2})^{\otimes k} \,C^{\pm}_{j,k} \gamma_t \,(A^{-1/2})^{\otimes k}||_{\mathscr{L}^1(L^2(\mathbb{R}^{dk}))} \,dt<\infty \,, $$ for any $k\in\mathbb{N}$ and $1\leq j\leq k$. Notice also that the conditional uniqueness result given in Prop.~\ref{uniq-unc1} holds for nonlinearities of the type $$ g(u)=V u+f(\cdot,u(\cdot))+(W*|u|^2) \,u\,, $$ where $V\in L^\delta(\mathbb{R}^d)+L^\infty(\mathbb{R}^d)$ for some $\delta\geq 1, \delta>\frac{d}{2}$, $W\in L^\beta(\mathbb{R}^d)+L^\infty(\mathbb{R}^d)$ for some $\beta\geq 1, \beta>\frac{d}{2}$ and $f$ satisfying $f(x,0)=0$ for all $x\in\mathbb{R}^d$ and $$ |f(x,z)-f(x,\tilde z)|\leq C (1+|z|+|\tilde z|)^\alpha \,|z-\tilde z|\,, $$ for some $\alpha<\frac{4}{d}$ (see \cite[Corollary 4.6.5]{MR2002047}). \medskip We can also prove the following uniqueness result in the $L^2$-critical case. The proofs of Props.~\ref{uniq-unc1} and \ref{uniq-unc2} are sketched in Subsection \ref{sub.sec.uniq-ex}. \begin{prop} \label{uniq-unc2} Take $g(u)=\lambda |u|^\alpha u$ with $\alpha=\frac{4}{d}$ and $\lambda\in\mathbb{C}$. Then the conclusion of Prop.~\ref{uniq-unc1} holds true also in this case if we assume \eqref{cond-Stri} for $q=\alpha+2$. \end{prop} \bigskip \noindent \emph{Counter-example:} Finally, we borrow a nice counter-example from the work of Win-Tsutsumi \cite{MR2474179} for the $L^2$-critical NLS equation (i.e., $g(u)=- |u|^\alpha u$ with $\alpha=\frac{4}{d}$) in order to show that Prop.~\ref{uniq-unc2} breaks down if we remove the assumption \eqref{cond-Stri}. Indeed, consider the function $$ u(t,x)=\frac{1}{(2t)^{d/2}} \, e^{i |x|^2} \, e^{i/t} \, \phi(\frac{x}{2t})\,, $$ where $\phi$ is a solution of the nonlinear elliptic equation $$ -\Delta\phi+\phi-\phi^{1+4/d}=0, \quad \phi>0, \quad \phi\in H^1(\mathbb{R}^d)\,. $$ Then one checks that $t\in\mathbb{R}\setminus\{0\}\to u(t):=u(t,\cdot)\in L^2(\mathbb{R}^d)$ is continuous and $u(t)$ converges weakly to $0$ when $t\to 0$. Hence, $t\to u(t)\in L^2(\mathbb{R}^d)$ is a weakly continuous solution of the NLS equation \eqref{eq.ivp} satisfying $u(0)=0$. Consequently, the curve $t\to \gamma_t\in\mathscr{H}(L^2(\mathbb{R}^d))$ given for any $k\in\mathbb{N}$ by $$ \gamma_t^{(k)}=\int_{L^2} |x^{\otimes k} \rangle \langle x^{\otimes k} | \;d\mu_t(x)\,, $$ with $\mu_t= \delta_{\mathcal{U}(-t) u(t)}$, satisfies \eqref{A2}, $\gamma_0=0$ and the hierarchy equation \eqref{int-hier} with the corresponding vector field \eqref{vect-field}. So, we conclude that the hierarchy equation \eqref{int-hier} lacks uniqueness, since $\gamma_t\neq 0$ if $t\neq 0$ while the null hierarchy is also a solution. \bigskip \noindent \section{Structure of symmetric hierarchies} \label{sec:01} According to the fundamental postulates of quantum mechanics, a {normal state} of a system of $n$ quantum particles is described by a \emph{density operator}, which is a non-negative trace class operator $\varrho_n$ of trace one acting on a tensor product of $n$ Hilbert spaces $\mathfrak{H}_1\otimes\cdots\otimes \mathfrak{H}_n$. Moreover, when the particles are identical bosons, the system obeys the Bose-Einstein statistics.
This means that the Hilbert spaces are identical, $\mathfrak{H}=\mathfrak{H}_1=\cdots= \mathfrak{H}_n$, and the multi-particle state $\varrho_n$ satisfies the following symmetry, \begin{equation} \label{eq.1} U_\pi \,\varrho_n =\varrho_n \,, \end{equation} where $U_\pi$ is the operator defined for any permutation $\pi$ as $$ U_\pi f_1\otimes \cdots \otimes f_n:= f_{\pi(1)}\otimes\cdots \otimes f_{\pi(n)}\,. $$ In particular, the following operator $$ S_n :=\frac{1}{n!} \sum_{\pi} U_\pi $$ defines an orthogonal projection on $\mathfrak{H}^{\otimes n}$ and the relation \eqref{eq.1} yields $$ S_n \varrho_n=\varrho_n S_n = \varrho_n. $$ This means that a normal state of a system of $n$ identical bosons is in fact a density operator on the $n$-fold symmetric tensor product $$ \vee^n\mathfrak{H}:=S_n\,\mathfrak{H}^{\otimes n}\, . $$ \subsection{Symmetric hierarchies} Consider a sequence of normal states $(\varrho_n)_{n\in \mathbb{N}}\in \Pi_{n\in \mathbb{N}} \mathscr{L}^1(\vee^n\mathfrak{H})$. Then for each fixed $k$ the $k$-particle reduced density matrices $(\varrho_n^{(k)})_{n\geq k}$, defined according to \eqref{redmat}, form a sequence of trace-class operators in the same space $\mathscr{L}^1(\vee^k\mathfrak{H})$. Hence, one can study the weak-$*$ convergence of $(\varrho_n^{(k)})_{n\geq k}$. The following result asserts the existence of subsequential limits along the same subsequence for all $k\in\mathbb{N}$. \begin{lem} \label{lem.1} For any sequence of normal states $(\varrho_n)_{n\in \mathbb{N}}\in \Pi_{n\in \mathbb{N}} \mathscr{L}^1(\vee^n\mathfrak{H})$ there exists a subsequence $(\varrho_{n_i})_{i\in \mathbb{N}}$ and a sequence $(\gamma^{(k)})_{k\in \mathbb{N}} \in \Pi_{k\in \mathbb{N}} \mathscr{L}^1(\vee^k\mathfrak{H})$ such that for any $k\in \mathbb{N}$, \begin{equation} \label{eq.sec1.1} \lim_{i\to \infty} \mathrm{Tr}_{\vee^k \mathfrak{H}} \big[\varrho_{n_i}^{(k)} \, A\big]=\mathrm{Tr}_{\vee^k \mathfrak{H}} \big[\gamma^{(k)} \, A\big]\,, \quad \forall A \in \mathscr{L}^\infty(\vee^k\mathfrak{H})\,. \end{equation} \end{lem} \begin{proof} This follows from the Banach-Alaoglu theorem on $ \mathscr{L}^1(\vee^n\mathfrak{H})= \mathscr{L}^\infty(\vee^n\mathfrak{H})^*$ together with a diagonal extraction argument. \end{proof} The subsequential limits provided by Lemma \ref{lem.1} are in fact the main quantities analyzed in the study of the mean field theory of Bose gases. As explained in the introduction, the many-body Schr\"odinger dynamics \eqref{eq.hn} yields the Gross-Pitaevskii and Hartree hierarchies \eqref{GPH-cubic} in the mean-field limit. Furthermore, the solutions $(\gamma^{(k)}(t))_{k\in\mathbb{N}}$ are actually the kernels of the subsequential limits of reduced density matrices originating from the quantum states $(\varrho_n(t))_{n\in\mathbb{N}}$ given by \eqref{qstate}. So, the physically relevant solutions of the Gross-Pitaevskii and Hartree hierarchies \eqref{GPH-cubic} are not arbitrary sequences $(\gamma^{(k)}(t))_{k\in\mathbb{N}} \in \Pi_{k\in\mathbb{N}}\mathscr{L}^1(\vee^k \mathfrak{H})$ of trace-class operators, but have a more specific structure inherited from the relation \eqref{eq.sec1.1}. This motivates the following definition.
\begin{defn} \label{sec1.def1} We call a { symmetric hierarchy} any sequence $(\gamma^{(k)})_{k\in \mathbb{N}}$ \, $ \in \Pi_{k\in \mathbb{N}} \mathscr{L}^1(\vee^k\mathfrak{H})$ such that there exists a sequence of normal states $(\varrho_n)_{n\in \mathbb{N}}\in \Pi_{n\in \mathbb{N}} \mathscr{L}^1(\vee^n\mathfrak{H})$ and a subsequence $ (\varrho_{n_i})_{i\in \mathbb{N}}$ satisfying \eqref{eq.sec1.1} for all $k\in \mathbb{N}$. The set of all symmetric hierarchies will be denoted by $\mathscr{H}(\mathfrak{H})$. \end{defn} The terminology of \emph{ symmetric hierarchy} is probably uncommon in the literature; nevertheless, it seems justified on the one hand by the Bose-Einstein symmetry and on the other hand by the fact that these subsequential limits $(\gamma^{(k)})_{k\in \mathbb{N}}$ are countable collections of linked trace-class operators (i.e., $\gamma^{(k+1)}$ and $\gamma^{(k)}$ are related in some sense), hence deserving the name of hierarchies. \medskip Some simple consequences may be derived from Definition \ref{sec1.def1}. In particular, the weak-$*$ convergence yields that any element $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H})$ satisfies the constraint \eqref{cont2}, i.e.: \begin{eqnarray*} 0 \leq \gamma^{(k)} \leq 1, \quad \forall k\in \mathbb{N}\,. \end{eqnarray*} One can also consider the following simple example: \begin{itemize} \item Let $\varrho_n=|\varphi_n^{\otimes n} \rangle \langle \varphi_n^{\otimes n} | $ such that $ || \varphi_n||=1$ and $\varphi_n \rightharpoonup \varphi$ (with $|| \varphi||\leq 1$ ). Then $\varrho_n^{(k)} =|\varphi_n^{\otimes k} \rangle \langle \varphi_n^{\otimes k} | \overset{*}{\rightharpoonup}|\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | $ (this follows since $\varphi_n^{\otimes k}\rightharpoonup \varphi^{\otimes k}$ and compact operators map weakly convergent sequences into norm convergent ones). Hence, for any $\varphi$ in the closed unit ball of $\mathfrak{H}$, \begin{equation} \label{pure.points} (|\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} |)_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H}). \end{equation} \end{itemize} This shows for instance that $\mathscr{H}(\mathfrak{H})$ is a nontrivial (uncountable) set. More fundamental properties of $\mathscr{H}(\mathfrak{H})$ will be given below. \subsection{Integral representation} \label{sec:02} The set of symmetric hierarchies $\mathscr{H}(\mathfrak{H})$ can be described as convex combinations of the simple elements in \eqref{pure.points}. Such a superposition principle is usually interpreted as a non-commutative de Finetti theorem. Long ago, St{\o}rmer \cite{MR0241992} proved, for a different purpose and using Choquet theory, a non-commutative de Finetti theorem for invariant states over some $C^*$-algebras. Thereafter, Hudson and Moody in \cite{MR2108315,MR660367,MR0397421} specified St{\o}rmer's result to the framework of normal states and gave an integral representation for elements $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H})$ which satisfy $\mathrm{Tr}[\gamma^{(k)}]=1$ for all $k\in \mathbb{N}$. From a different point of view, Ammari and Nier proved in \cite{MR2465733} an integral representation for all elements of $\mathscr{H}(\mathfrak{H})$ (see Prop.~\ref{str.BEh}) by appealing to Wigner measures. Subsequently, Lewin, Nam and Rougerie gave in \cite{MR3161107} an alternative proof. Thereafter, several other applications of this superposition principle appeared (see e.g.~\cite{MR3335056,MR3385343,MR3210237}).
In fact, each symmetric hierarchy in $\mathscr{H}(\mathfrak{H})$ can be written as an integral, over a probability measure, of elements in \eqref{pure.points}. Here we recall such a result and refer the reader for instance to \cite[Proposition 1.1]{MR3506807} for more details. \begin{prop} \label{str.BEh} For each $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H})$ there exists a unique Borel probability measure $\mu$ on the closed unit ball $ X_1:=B_\mathfrak{H}(0, 1)$ of $\mathfrak{H}$ which is $U(1)$-invariant and satisfies for any $k\in\mathbb{N}$, \begin{equation} \label{decomp} \gamma^{(k)}=\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu(\varphi)\,. \end{equation} \end{prop} \noindent The following comments are useful: \begin{itemize} \item The Borel $\sigma$-algebras on $X_1$ equipped respectively with the norm or the weak topology coincide. Moreover, $\mu$ can be considered as a Borel probability measure on $\mathfrak{H}$ concentrated on the unit ball $X_1$ (i.e., $\mu(X_1)=1$). \item Actually, $\mu$ can be interpreted as the Wigner measure of the subsequence $(\varrho_{n_i})_{i\in \mathbb{N}} $ in Lemma \ref{lem.1} that leads to the symmetric hierarchy $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H})$. For a short review on the relationship between symmetric hierarchies and Wigner measures, we refer to \cite[Section 3]{MR3506807}. \item The integral \eqref{decomp} is well defined in a weak sense and also as a Bochner integral in $ \mathscr{L}^1(\vee^k\mathfrak{H})$ for each $k\in \mathbb{N}$. \end{itemize} \begin{prop} \label{prop.BEh} Let $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H})$ and $\mu\in\mathscr{M}(\mathfrak{H})$ be given by Prop.~\ref{str.BEh} and satisfy \eqref{decomp}. Then the following assertions are equivalent: \begin{description} \item (i) The measure $\mu$ is concentrated on the unit sphere of $\mathfrak{H}$ (i.e., $\mu(S_{\mathfrak{H}}(0,1))=1$). \item (ii) $ \mathrm{Tr}[ \gamma^{(k)}] =1$ for all $k\in\mathbb{N}$. \item (iii) If $(\varrho_n)_{n\in \mathbb{N}}$ is a sequence of normal states in $ \Pi_{n\in \mathbb{N}} \mathscr{L}^1(\vee^n\mathfrak{H})$ satisfying \eqref{eq.sec1.1} (i.e., there exists a subsequence $(n_i)_{i\in\mathbb{N}}$ such that $\varrho_{n_i}^{(k)}\overset{ *}{ \rightharpoonup} \gamma^{(k)}$ ) then $(\varrho_{n_i}^{(k)})_{i\in \mathbb{N}}$ converges to $ \gamma^{(k)}$ in the norm topology of $ \mathscr{L}^1(\vee^k\mathfrak{H})$ for all $k\in\mathbb{N}$. \item (iv) $ \mathrm{Tr}[ \gamma^{(k)}] =1$ for some $k\in\mathbb{N}$. \end{description} \end{prop} \begin{proof} (i)$\Rightarrow$(ii): The integral representation \eqref{decomp} yields for all $k\in\mathbb{N}$, $$ \mathrm{Tr}[\gamma^{(k)}]=\int_{X_1} ||\varphi||^{2k} \; d\mu(\varphi)\,. $$ Hence, if the measure $\mu$ concentrates on the unit sphere then $\mathrm{Tr}[\gamma^{(k)}]=1$ for all $k\in\mathbb{N}$. \\ (ii)$\Rightarrow$(iii): This follows from a general property of spaces of trace-class operators, the Kadec-Klee property (KK*), which ensures that weak-$*$ and norm convergence coincide on the unit sphere of $\mathscr{L}^1(\vee^k\mathfrak{H})$ (see Appendix \ref{KKstar}). Hence, (ii) upgrades the weak-$*$ convergence of $(\varrho_{n_i}^{(k)})_{i\in\mathbb{N}}$ towards $\gamma^{(k)}$ to a norm convergence.
\\ (iii)$\Rightarrow$(iv): is trivial since $\mathrm{Tr}[\varrho_{n_i}^{(k)}]=1$ for all $i\in\mathbb{N}$.\\ (iv)$\Rightarrow$(i): For some $k\in\mathbb{N}$, \begin{equation*} \mathrm{Tr}[\gamma^{(k)}]=\int_{||\varphi||=1} 1 \; d\mu(\varphi)+ \int_{||\varphi||<1} ||\varphi||^{2k} \; d\mu(\varphi)=1\,. \end{equation*} So, this implies \begin{equation*} \int_{||\varphi||<1} \big(1- ||\varphi||^{2k}\big) \; d\mu(\varphi)=0\,. \end{equation*} Taking any $0< R<1$, we see that $0\leq (1-R^{2k}) \mu( B_\mathfrak{H}(0,R))\leq \int_{||\varphi||<1} \big(1- ||\varphi||^{2k}\big) \; d\mu(\varphi)$. Hence, $\mu(B_\mathfrak{H}(0,R))=0$ for every $0<R<1$ and consequently the measure $\mu$ concentrates on the unit sphere of $\mathfrak{H}$. \end{proof} \begin{lem} Let $(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathfrak{H})$ and let $\mu\in\mathscr{M}(\mathfrak{H})$ be given by Prop.~\ref{str.BEh} and satisfy \eqref{decomp}. Then $ \mathrm{Tr}_{k+1} [ \gamma^{(k+1)}] = \gamma^{(k)}$ for all $k\in\mathbb{N}$ if and only if $\mu(\{0\}\cup S_{\mathfrak{H}}(0,1))=1$. \end{lem} \begin{proof} Using the integral representation \eqref{decomp}, one deduces \begin{equation} \label{eq.trac} \int_{X_1} |\varphi^{\otimes k}\rangle\langle \varphi^{\otimes k}| \;d\mu(\varphi)\,=\gamma^{(k)}=\mathrm{Tr}_{k+1} [ \gamma^{(k+1)}] =\int_{X_1} ||\varphi||^2\, |\varphi^{\otimes k}\rangle\langle \varphi^{\otimes k}| \;d\mu(\varphi)\, . \end{equation} Taking the trace in the latter identity implies $$ \int_{X_1} ||\varphi||^{2k}\, (1- ||\varphi||^{2}) \;d\mu(\varphi)\,=0\,. $$ Since the integrand is nonnegative and vanishes only at $\varphi=0$ or on the unit sphere, this shows that $\mu(\{0\}\cup S_{\mathfrak{H}}(0,1))=1$. Conversely, if the measure $\mu$ concentrates on the origin and the unit sphere then \eqref{eq.trac} holds true. \end{proof} It is useful to identify the set of symmetric hierarchies $\mathscr{H}(\mathfrak{H})$ in a simpler way, without appealing to convergence of subsequences as in Definition \ref{sec1.def1}. In fact, we can show that the two sets $\mathscr{H}(\mathfrak{H})$ and $\mathscr{M}(\mathfrak{H})$ are in one-to-one correspondence. \begin{prop} \label{bij} The following mapping \begin{eqnarray} \Phi:\mathscr{M}(\mathfrak{H})&\rightarrow & \mathscr{H}(\mathfrak{H})\\ \mu &\rightarrow & (\gamma^{(k)})_{k\in \mathbb{N}} =\left(\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu\right)_{k\in \mathbb{N}} \,, \end{eqnarray} is a one-to-one correspondence (a bijection). \end{prop} \begin{proof} It is enough to prove that for any $\mu\in \mathscr{M}(\mathfrak{H})$ the sequence $\Phi(\mu)$ is, according to Definition \ref{sec1.def1}, a symmetric hierarchy. \\ Let $\{e_n\}_{n\in \mathbb{N}}$ be an O.N.B of $\mathfrak{H}$. For any $\varphi\in X_1$, we have the decomposition $ \varphi=\sum_{k=1}^\infty \varphi_k e_k$. Consider the mapping \begin{eqnarray*} \Psi_n: B_\mathfrak{H}(0,1)& \to & S_\mathfrak{H}(0,1)\\ \varphi &\to & \Psi_n(\varphi):= \bigg(\sqrt{ 1-\sum_{k\neq n} |\varphi_k|^2 } - \varphi_n\bigg) e_n+ \varphi\,. \end{eqnarray*} Remark that for every $\varphi\in B_\mathfrak{H}(0,1)$, $$ ||\Psi_n(\varphi)||^2= 1- \sum_{k\neq n} |\varphi_k|^2 +\sum_{k\neq n} |\varphi_k|^2=1\,. $$ We claim that $\Psi_n$ is continuous for each $n\in\mathbb{N}$.
In fact, the following inequality holds \begin{equation} \label{contpsi} \|\Psi_n(\varphi)-\Psi_n(\tilde\varphi)\|\leq \| \varphi-\tilde\varphi\| + \Big| \sqrt{1-\sum_{k\neq n} |\varphi_k|^2 }- \sqrt{ 1-\sum_{k\neq n} |\tilde\varphi_k|^2 }\Big|\,, \end{equation} and if $\varphi\to \tilde\varphi$ in $X_1$ then $ P_n (\varphi)\to P_n (\tilde\varphi)$, where $P_n$ is the orthogonal projection onto the subspace ${\rm Vect}\{e_k,k\neq n\} $. This shows that the right hand side of \eqref{contpsi} converges to $0$ when $\varphi\to \tilde\varphi$ at fixed $n$. \\ Let $\mu\in \mathscr{M}(\mathfrak{H})$ and consider the sequence, \begin{equation} \label{eq.seq1} \varrho_n=\int_{X_1} |\Psi_n(\varphi)^{\otimes n} \rangle \langle \Psi_n(\varphi)^{\otimes n} | \;d\mu\,. \end{equation} Then one checks that $\varrho_n\in \mathscr{L}^1(\vee^n\mathfrak{H})$ with $\varrho_n\geq 0$ and $ \mathrm{Tr}[\varrho_n]=1$, since $\Psi_n(\varphi)\in S_\mathfrak{H}(0,1)$ for every $\varphi\in B_{\mathfrak{H}}(0,1)$. Notice that the function $\varphi\to |\Psi_n(\varphi)^{\otimes n} \rangle \langle \Psi_n(\varphi)^{\otimes n} |\in \mathscr{L}^1(\vee^n\mathfrak{H})$ is continuous and Bochner integrable. Moreover, we have the reduced density matrices $$ \varrho_n^{(k)}= \int_{X_1} |\Psi_n(\varphi)^{\otimes k} \rangle \langle \Psi_n(\varphi)^{\otimes k} | \;d\mu\,. $$ The point is that $\Psi_n(\varphi)\underset{n\to\infty}{\rightharpoonup} \varphi$ for any $\varphi\in B_\mathfrak{H}(0,1)$. Hence, one shows that $$ |\Psi_n(\varphi)^{\otimes k}\rangle \langle \Psi_n( \varphi)^{\otimes k} |\overset{*}{\rightharpoonup} |\varphi^{\otimes k}\rangle \langle \varphi^{\otimes k} | $$ when $n\to \infty$. Dominated convergence yields that for any $k\in \mathbb{N}$, $$ \varrho_n^{(k)} \overset{*}{\rightharpoonup} \int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu \,. $$ \end{proof} \begin{defn} One can thus naturally propose a second, equivalent definition of symmetric hierarchies, \begin{equation} \mathscr{H}(\mathfrak{H})=\left\{\left(\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu\right)_{k\in \mathbb{N}} ,\; \mu\in \mathscr{M}(\mathfrak{H})\right\}\,. \end{equation} \end{defn} The previous identification yields the following simple consequence. \begin{cor} $\mathscr{H}(\mathfrak{H})$ is a convex subset of $\Pi_{k\in \mathbb{N}} \mathscr{L}^1(\vee^k\mathfrak{H})$. \end{cor} \begin{proof} This follows from the fact that $\mathscr{M}(\mathfrak{H})$ is a convex set. \end{proof} \subsection{Topological isomorphism} It is useful to endow the set of symmetric hierarchies $\mathscr{H}(\mathfrak{H})$ with two natural topologies. Remember that the Hilbert space $\mathfrak{H}$ is separable. Consequently, for all $k\in\mathbb{N}$ the Banach spaces $\mathscr{L}^\infty(\vee^k \mathfrak{H})$ are also separable. Moreover, the spaces $\mathscr{L}^1(\vee^k \mathfrak{H})=\mathscr{L}^\infty(\vee^k \mathfrak{H})^*$ are endowed with two distinguished topologies, namely the weak-$*$ and the norm topology. Recall that the weak-$*$ topology is metrisable on bounded sets of $\mathscr{L}^1(\vee^k \mathfrak{H})$.
For instance, the bounded set $$ \mathscr{L}_0^1(\vee^k\mathfrak{H}):=\{\gamma\in \mathscr{L}^1(\vee^k\mathfrak{H}), 0\leq \gamma\leq 1\}\,, $$ can be equipped with the following metric of weak-$*$ convergence, \begin{eqnarray*} \mathbf{d}^{(k)}(\gamma,\tilde\gamma):= \sum_{i\in\mathbb{N}} \frac{1}{2^i} \;\frac{\big|\mathrm{Tr}[ K_i \,(\gamma-\tilde \gamma)]\big|}{ 1+\big|\mathrm{Tr}[ K_i \,(\gamma-\tilde \gamma)]\big|}\,, \end{eqnarray*} where $\{K_i\}_{i\in \mathbb{N}}$ is a countable dense set in $ \mathscr{L}^\infty(\vee^k\mathfrak{H})$. So, the distance $\mathbf{d}^{(k)}$ induces the weak-$*$ topology on $\mathscr{L}_0^1(\vee^k\mathfrak{H})$. Notice that \begin{equation} \label{eq.2} \mathscr{H}(\mathfrak{H})\subset\Pi_{k\in\mathbb{N}} \mathscr{L}_0^1(\vee^k\mathfrak{H}) \subset\Pi_{k\in\mathbb{N}} \mathscr{L}^1(\vee^k\mathfrak{H})\,, \end{equation} and that the Cartesian product on the right hand side of \eqref{eq.2} can be equipped with a product topology (with the weak-$*$ or norm topology on each component $ \mathscr{L}^1(\vee^k\mathfrak{H})$). Thus, one can consider the set of symmetric hierarchies $\mathscr{H}(\mathfrak{H})$ as a metric space endowed with one of the two product distances $\mathbf{d}_w$ or $\mathbf{d}_s$ given below, \begin{eqnarray} \label{distw} \mathbf{d}_w(\gamma,\tilde\gamma)&:=& \sum_{k\in\mathbb{N}} \frac{1}{2^k} \;\mathbf{d}^{(k)}\big( \gamma^{(k)},\tilde \gamma^{(k)}\big)\,, \\ \label{dists} \mathbf{d}_s(\gamma,\tilde\gamma)&:=& \sum_{k\in\mathbb{N}} \frac{1}{2^k} \,|| \gamma^{(k)}-\tilde \gamma^{(k)}||_{ \mathscr{L}^1(\vee^k\mathfrak{H})}\,, \end{eqnarray} for all $\gamma=(\gamma^{(k)})_{k\in \mathbb{N}}$ and $ \tilde\gamma=(\tilde\gamma^{(k)})_{k\in \mathbb{N}}$ in $\mathscr{H}(\mathfrak{H})$. So, $\mathbf{d}_w$ (resp.~$\mathbf{d}_s$) induces the product topology on $\mathscr{H}(\mathfrak{H})$ with the weak-$*$ (resp.~norm) topology in each component $\mathscr{L}^1(\vee^k\mathfrak{H})$. \bigskip Remember that the set $\mathscr{M}(\mathfrak{H})$ is endowed with two distinguished topologies, namely the weak and strong narrow convergence topologies (see Sect. \ref{fram}). It is well known that the weak and strong narrow topologies on the set of Borel probability measures $\mathfrak{P}(X_1)$, with $X_1=B_{\mathfrak{H}}(0,1)$, are metrisable and in particular $\mathfrak{P}(X_1)$ is a compact metric space when endowed with the weak narrow topology. \begin{prop} \label{prop.weakcv} Let $\gamma$, $(\gamma_j)_{j\in\mathbb{N}}$ be elements of $\mathscr{H}(\mathfrak{H})$ and $\mu$, $(\mu_j)_{j\in\mathbb{N}}$ elements of $\mathscr{M}(\mathfrak{H})$ with $\Phi(\mu_j)=\gamma_j$ and $\Phi(\mu)=\gamma$. Then $\mu_j\rightharpoonup \mu$ weakly narrowly in $\mathscr{M}(\mathfrak{H})$ if and only if for all $k\in\mathbb{N}$, \begin{equation} \label{convstar} \gamma_j^{(k)}=\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu_j \overset{*}{\rightharpoonup} \int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu= \gamma^{(k)}\,. \end{equation} \end{prop} \begin{proof} For any $A\in \mathscr{L}^\infty ( \vee^k\mathfrak{H})$, $$ \mathrm{Tr}[\gamma_j^{(k)}\, A]=\int_{X_1} \langle\varphi^{\otimes k}, A \varphi^{\otimes k} \rangle \;d\mu_j\,. $$ Remark that the function $\chi: X_1\to \mathbb{C}$, $ \chi(\varphi)= \langle\varphi^{\otimes k}, A \varphi^{\otimes k} \rangle $ is bounded and continuous with respect to the distance $d_w$ (of the weak topology in $X_1$).
Hence, the weak narrow convergence $\mu_j\rightharpoonup \mu$ shows that for any $A\in \mathscr{L}^\infty ( \vee^k\mathfrak{H})$, $$ \lim_{j} \mathrm{Tr}[\gamma_j^{(k)}\, A]=\int_{X_1} \langle\varphi^{\otimes k}, A \varphi^{\otimes k} \rangle \;d\mu= \mathrm{Tr}[\gamma^{(k)}\, A]\,, $$ with $\gamma^{(k)}=\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu$. This proves that $\gamma_j^{(k)} \overset{*}{\rightharpoonup} \gamma^{(k)}$. \bigskip The proof of the converse statement is a bit more involved. Suppose that a sequence $(\gamma_j)_{j\in\mathbb{N}}$ converges to $\gamma\in \mathscr{H}(\mathfrak{H})$ with respect to the distance $\mathbf{d}_w$. This means that \eqref{convstar} holds for any $k\in\mathbb{N}$. In particular, taking $A= |\xi^{\otimes k} \rangle \langle \xi^{\otimes k} |\in\mathscr{L}^\infty(\vee^k\mathfrak{H})$, \begin{equation} \label{eq.3} \lim_j\mathrm{Tr}[\gamma_j^{(k)}\, A]=\lim_j\int_{X_1} \langle\varphi^{\otimes k}, \xi^{\otimes k} \rangle \langle \xi^{\otimes k}, \varphi^{\otimes k} \rangle \;d\mu_j= \int_{X_1} | \langle\varphi, \xi \rangle|^{2k} \;d\mu\,, \end{equation} with $\mu\in \mathscr{M}(\mathfrak{H})$ such that $\Phi(\mu)=\gamma$. So, the characteristic function of $\mu_j$ has the following absolutely convergent expansion, \begin{eqnarray*} \int_{X_1} e^{i{\rm Re} \langle\varphi, \xi \rangle_\mathfrak{H}} \, d\mu_j&=& \sum_{k=0}^\infty \frac{i^k}{k!} \,\int_{X_1} \big({\rm Re} \langle\varphi, \xi \rangle\big)^k \; d\mu_j \\ &=& \sum_{k=0}^\infty \frac{(-1)^k}{4^k (k!)^2} \,\int_{X_1} | \langle\varphi, \xi \rangle|^{2k} \; d\mu_j, \end{eqnarray*} where the last equality is a direct consequence of the $U(1)$-invariance of $\mu_j$, which cancels the odd powers and keeps only the diagonal term of the binomial expansion $\big({\rm Re} \langle\varphi, \xi \rangle\big)^{2k}=4^{-k}\sum_{m=0}^{2k}\binom{2k}{m}\langle\varphi, \xi \rangle^{m}\langle \xi,\varphi \rangle^{2k-m}$. So, dominated convergence and \eqref{eq.3} yield the following convergence of the characteristic functions, $$ \lim_j \int_{X_1} e^{i{\rm Re} \langle\varphi, \xi \rangle_\mathfrak{H}} \, d\mu_j= \int_{X_1} e^{i{\rm Re} \langle\varphi, \xi \rangle_\mathfrak{H}} \, d\mu\,. $$ According to Theorem \ref{thmA} in Appendix \ref{measure}, the sequence $(\mu_j)_{j\in\mathbb{N}}$ converges towards $\mu$ weakly narrowly. \end{proof} \begin{prop} \label{prop.strcv} Let $\gamma$, $(\gamma_i)_{i\in\mathbb{N}}$ be elements of $\mathscr{H}(\mathfrak{H})$ and $\mu$, $(\mu_i)_{i\in\mathbb{N}}$ elements of $\mathscr{M}(\mathfrak{H})$ with $\Phi(\mu_i)=\gamma_i$ and $\Phi(\mu)=\gamma$. Then $\mu_i\to \mu$ strongly narrowly in $\mathscr{M}(\mathfrak{H})$ if and only if for all $k\in\mathbb{N}$, $$ \gamma_i^{(k)}=\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu_i\, {\rightarrow} \int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu= \gamma^{(k)}\,, $$ in the norm topology of $\mathscr{L}^1(\vee^k\mathfrak{H})$. \end{prop} \begin{proof} For any $A\in \mathscr{L} ( \vee^k\mathfrak{H})$, $$ \mathrm{Tr}[\gamma_i^{(k)}\, A]=\int_{X_1} \langle\varphi^{\otimes k}, A \varphi^{\otimes k} \rangle \;d\mu_i\,. $$ Since the function $\varphi\to \langle\varphi^{\otimes k}, A \varphi^{\otimes k} \rangle$ is bounded and norm continuous on $X_1$, the strong narrow convergence $\mu_i\to\mu$ gives $\mathrm{Tr}[\gamma_i^{(k)}\, A]\to \mathrm{Tr}[\gamma^{(k)}\, A]$ for every bounded $A$. In particular, this implies that $\gamma_i^{(k)} \overset{*}{\rightharpoonup} \gamma^{(k)}$ and, by taking $A=1$, that $ \mathrm{Tr}[\gamma_i^{(k)}]\to \mathrm{Tr}[\gamma^{(k)}]$. Hence, appealing to the Kadec-Klee property (KK*) of the space $\mathscr{L}^1(\vee^k\mathfrak{H})$ (see Thm.~\ref{th.kadec} in Appendix \ref{KKstar}), one shows that $\gamma_i^{(k)} {\to} \gamma^{(k)}$ in the norm topology. \bigskip The converse statement is proved with the help of Thm.~\ref{thmB}.
Suppose that for any $k\in \mathbb{N}$, $\gamma_i^{(k)} \to \gamma^{(k)}$ in the norm topology and consider the sequence of operators $$ A_N =\sum_{j=N}^\infty |e_j\rangle\langle e_j| \in \mathscr{L}(\mathfrak{H})\,, $$ where $(e_j)_{j\in\mathbb{N}}$ is an O.N.B of the Hilbert space $\mathfrak{H}$. Then, we have \begin{equation} \label{eq.4} \mathrm{Tr}[ \gamma^{(1)}_i \, A_N] =\int_{X_1}\, \sum_{j=N}^\infty | \langle\varphi, e_j\rangle |^2 \, d\mu_i \underset{i\to \infty}{\longrightarrow } \int_{X_1}\, \sum_{j=N}^\infty | \langle\varphi, e_j\rangle |^2 \, d\mu\,, \end{equation} uniformly in $N\in \mathbb{N}$ since for all $N\in\mathbb{N}$, $$ \left| \mathrm{Tr}[ (\gamma_i^{(1)}- \gamma^{(1)}) A_N] \right| \leq || \gamma_i^{(1)}- \gamma^{(1)}||_{\mathscr{ L}^1(\mathfrak{H})} \underset{i\to \infty}{\rightarrow} 0\,. $$ Chebyshev's inequality gives, for any $\varepsilon>0$, $$ \int_{X_1} \displaystyle 1_{\{\sum_{j=N}^\infty | \langle\varphi, e_j\rangle |^2 \geq \varepsilon\}} \, d\mu_i \leq \frac{1}{\varepsilon} \int_{X_1} \sum_{j=N}^\infty | \langle\varphi, e_j\rangle |^2 \, d\mu_i \,. $$ Therefore, one deduces from the uniform convergence \eqref{eq.4}, the fact that $\lim_{N\to\infty}\mathrm{Tr}[\gamma^{(1)}A_N]=0$ and the above inequality the following statement, $$ \lim_{N\to \infty } \sup_{i\in\mathbb{N}} \int_{X_1} \displaystyle 1_{\{\sum_{j=N}^\infty | \langle\varphi, e_j\rangle |^2 \geq \varepsilon\}} \, d\mu_i \leq \frac{1}{\varepsilon} \lim_{N\to \infty } \sup_{i\in\mathbb{N}} \int_{X_1} \sum_{j=N}^\infty | \langle\varphi, e_j\rangle |^2 \, d\mu_i =0\,. $$ Moreover, by Proposition \ref{prop.weakcv}, one already knows that $\mu_i\rightharpoonup\mu$ weakly narrowly in $\mathscr{M}(\mathfrak{H})$. Thus, applying Theorem \ref{thmB} in Appendix \ref{measure}, one proves the convergence $\mu_i\to \mu$ with respect to the strong narrow topology in $\mathscr{M}(\mathfrak{H})$. \end{proof} The following corollary provides a useful characterisation of the relevant topologies on the set of symmetric hierarchies. \begin{cor} \label{homeo} The mapping \begin{eqnarray*} \Phi:(\mathscr{M}(\mathfrak{H}), \tau) &\rightarrow & (\mathscr{H}(\mathfrak{H}), \mathbf{d})\\ \mu &\rightarrow & (\gamma^{(k)})_{k\in \mathbb{N}} =\left(\int_{X_1} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu\right)_{k\in \mathbb{N}} \,, \end{eqnarray*} defines a homeomorphism in the following two cases: \begin{description} \item (i) $\tau$ is the strong narrow convergence topology and $\mathbf{d}$ is the product distance $\mathbf{d}_s$ defined in \eqref{dists}. \item(ii) $\tau$ is the weak narrow convergence topology and $\mathbf{d}$ is the product distance $\mathbf{d}_w$ defined in \eqref{distw}. \end{description} \end{cor} \begin{proof} This follows from Propositions \ref{prop.weakcv} and \ref{prop.strcv}. \end{proof} Remark that the above homeomorphism allows one to use the Krein-Milman and the Choquet-Bishop-de Leeuw theorems on the convex set of symmetric hierarchies. \section{The Liouville-hierarchy duality} \label{sec.equiv} We discuss, in this section, the rigorous formulation of the Liouville and the hierarchy equations and establish their equivalence in full generality as stated in Theorem \ref{sec.0.thm1}. More precisely, we will prove that any curve $t\to \mu_t$ in $\mathscr{M}(\mathscr{Z}_0)$ satisfying \eqref{A1} and solving the Liouville equation \eqref{int-liouville} gives a curve $t\to\Phi( \mu_t)=\gamma_t$ in $\mathscr{H}(\mathscr{Z}_0)$ satisfying \eqref{A2} and solving the hierarchy equation \eqref{int-hier}, and vice versa.
Remember that $(\mathscr{Z}_0,\mathscr{Z}_s, \mathscr{Z}_{-\sigma})$ is the triple of spaces introduced in Subsection \ref{fram} with $0\leq s\leq\sigma$, and $\Phi$ is the homeomorphism of Corollary \ref{homeo} relating symmetric hierarchies in $\mathscr{H}(\mathscr{Z}_0)$ to probability measures in $\mathscr{M}(\mathscr{Z}_0)$. \subsection{Regularity issues} \label{reg-issue} We first emphasize a general property showing a correspondence between the concentration property of measures $\mu\in \mathscr{M}(\mathscr{Z}_0)$ and the regularity of the hierarchies $\gamma=\Phi(\mu)\in \mathscr{H}(\mathscr{Z}_0)$. \begin{lem} \label{concent} Let $\mu\in\mathscr{M}(\mathscr{Z}_0)$ and $\gamma=(\gamma^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathscr{Z}_0)$ be such that $\Phi(\mu)=\gamma$. Then for any $s\geq 0$ and $R>0$: \begin{equation} \label{eq.5} \mu(B_{\mathscr{Z}_s}(0,R)) =1 \Leftrightarrow \left( \mathrm{Tr}[ (A^{s/2})^{\otimes k} \, \gamma^{(k)} \, (A^{s/2})^{\otimes k} ] \leq R^{2k}, \forall k\in\mathbb{N} \right)\,. \end{equation} \end{lem} \begin{proof} The equivalence is a consequence of the identity, $$ \mathrm{Tr}[ (A^{s/2})^{\otimes k} \, \gamma^{(k)} \, (A^{s/2})^{\otimes k} ] =\int_{X_1} ||\varphi||_{\mathscr{Z}_s}^{2k} \, d\mu\,. $$ If the measure $\mu$ is concentrated on the closed ball $B_{\mathscr{Z}_s}(0,R)$ then the inequalities on the right hand side of \eqref{eq.5} hold true. Conversely, the right hand side of \eqref{eq.5} yields for all $k\in\mathbb{N}$, $$ \int_{X_1} ||\varphi||_{\mathscr{Z}_s}^{2k} \, d\mu\,\leq R^{2k}\,, $$ which in turn gives, by Chebyshev's inequality, the concentration of the measure $\mu$ on $ B_{\mathscr{Z}_s}(0,R)$ (indeed, $\mu(||\varphi||_{\mathscr{Z}_s}>R')\leq (R/R')^{2k}\to 0$ as $k\to\infty$, for any $R'>R$). \end{proof} An important ingredient in the problems of well-posedness and uniqueness of the Liouville and hierarchy equations is the regularity of the solutions with respect to time. The following proposition identifies the relevant notions of regularity that we shall use. \begin{prop} \label{regcurv} Let $I$ be an interval and consider two curves $t\in I\to\mu_t\in\mathscr{M}(\mathscr{Z}_0)$ and $t\in I\to\gamma_t=(\gamma_t^{(k)})_{k\in \mathbb{N}}\in \mathscr{H}(\mathscr{Z}_0)$ such that $\Phi(\mu_t)=\gamma_t$ for all $t\in I$. Assume that for some $s\geq 0$ and $R>0$, $\mu_t(B_{\mathscr{Z}_s}(0,R)) =1$ for all $t\in I$. Then for all $\tau\in (-\infty,s]$, \begin{enumerate} \item $t\in I\to\mu_t\in\mathfrak{P}(\mathscr{Z}_\tau)$ is weakly narrowly continuous if and only if $ t\to (A^{\tau/2})^{\otimes k} \, \gamma_t^{(k)} \, (A^{\tau/2})^{\otimes k} $ is continuous with respect to the weak-$*$ topology in $\mathscr{L}^1(\vee^k\mathscr{Z}_0)$ for all $k\in\mathbb{N}$. \item $t\in I\to\mu_t\in\mathfrak{P}(\mathscr{Z}_\tau)$ is strongly narrowly continuous if and only if $ t\to (A^{\tau/2})^{\otimes k} \, \gamma_t^{(k)} \, (A^{\tau/2})^{\otimes k} $ is continuous with respect to the norm topology in $\mathscr{L}^1(\vee^k\mathscr{Z}_0)$ for all $k\in\mathbb{N}$. \end{enumerate} \end{prop} \begin{proof} The proof follows by the same arguments as in Propositions \ref{prop.strcv}-\ref{prop.weakcv}. \end{proof} As a consequence of the above observations, we have the following equivalence between the two main assumptions \eqref{A2} and \eqref{A1}. \begin{lem} \label{a1a2} A curve $t\in I\to \gamma_t\in \mathscr{H}(\mathscr{Z}_0)$, defined over an interval $I$, satisfies the assumption \eqref{A2} if and only if the curve $t\in I\to \mu_t=\Phi^{-1}(\gamma_t)\in \mathscr{M}(\mathscr{Z}_0)$ satisfies \eqref{A1}.
\end{lem} \begin{proof} The homeomorphism $\Phi$ of Corollary \ref{homeo}, combined with Lemma \ref{concent} and Proposition \ref{regcurv}, gives the equivalence between the two assumptions \eqref{A1} and \eqref{A2}. \end{proof} \bigskip In the next two paragraphs we rigorously justify that the Liouville and the symmetric hierarchy equations are meaningful under the assumptions \eqref{A0}, \eqref{A2} and \eqref{A1}. \medskip \emph{Liouville equations}: In the Liouville equation \eqref{eq.transport}, one presumes that the integral with respect to time is well defined. The following Lemma guarantees this property using \eqref{A0} and \eqref{A1}. \begin{lem} Let $v:\mathbb{R}\times \mathscr{Z}_{s}\to\mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $t\in I\to \mu_t\in \mathscr{M}(\mathscr{Z}_0)$ be a curve satisfying \eqref{A1}. Then for any $\varphi\in\mathscr C_{0,cyl}^{\infty}(I \times \mathscr{Z}_{-\sigma})$ the map, \begin{eqnarray} \label{mes-time} t\in I &\longrightarrow& \int_{\mathscr{Z}_{s}} {\mathrm Re} \langle v(t,x), \nabla \varphi(t,x)\rangle_{\mathscr{Z}_{-\sigma}} \;d\mu_t \,, \end{eqnarray} belongs to $L^\infty(I,dt)$ when $I$ is a bounded open interval. \end{lem} \begin{proof} Since $t\in I\to \mu_t\in \mathfrak{P}(\mathscr{Z}_{-\sigma})$ is weakly narrowly continuous, it is a Borel family of probability measures (i.e., for any Borel set $B$ of $ \mathscr{Z}_{-\sigma}$ the map $t\to \mu_t(B)$ is measurable). Thanks to \eqref{A0} and \eqref{A1}, a simple bound gives $$ \int_{\mathscr{Z}_{s}} \bigg| \langle v(t,x), \nabla \varphi(t,x)\rangle_{\mathscr{Z}_{-\sigma}} \bigg|\;d\mu_t \leq c \int_{\mathscr{Z}_{s}} \|v(t,x)\|_{\mathscr{Z}_{-\sigma}}\;d\mu_t<\infty\,. $$ \end{proof} \bigskip \emph{Symmetric hierarchy equations}: In order to give a rigorous meaning to the symmetric hierarchy equation given formally in \eqref{int-hier}, we introduce the Hilbert rigging $\mathcal{D}_\alpha^{(k)} \subset \vee^k\mathscr{Z}_0\subset \mathcal{D}_{-\alpha}^{(k)}$, for $\alpha>0$, as in the preliminary Subsect.~\ref{fram}, with $\mathcal{D}_\alpha^{(k)}=D((A^{\alpha/2})^{\otimes k})$ equipped with its graph norm and $\mathcal{D}_{-\alpha}^{(k)}$ identified with the dual of the latter space with respect to the inner product of $ \vee^k\mathscr{Z}_0$. Moreover, consider the following two Banach spaces: \begin{eqnarray} \label{Ls} \mathscr{L}_s^1(\vee^k\mathscr{Z}_0)&:=& \big\{ T\in \mathscr{L}( \mathcal{D}_{-s}^{(k)}, \mathcal{D}_{s}^{(k)}), \, (A^{s/2})^{\otimes k} \, T \, (A^{s/2})^{\otimes k} \in \mathscr{L}^1(\vee^k\mathscr{Z}_0)\big\}\,,\\ \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)&:=& \big\{ T\in \mathscr{L}(\mathcal{D}_{\sigma}^{(k)},\mathcal{D}_{-\sigma}^{(k)}), \, (A^{-\sigma/2})^{\otimes k} \, T \, (A^{-\sigma/2})^{\otimes k} \in \mathscr{L}^1(\vee^k\mathscr{Z}_0)\big\}\,, \end{eqnarray} endowed respectively with the norms, \begin{eqnarray*} ||T||_{ \mathscr{L}_s^1(\vee^k\mathscr{Z}_0)} &:=&|| (A^{s/2})^{\otimes k} \, T \, (A^{s/2})^{\otimes k}||_{ \mathscr{L}^1(\vee^k\mathscr{Z}_0)}\,, \\ ||T||_{ \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)} &:=&|| (A^{-\sigma/2})^{\otimes k} \, T \, (A^{-\sigma/2})^{\otimes k}||_{ \mathscr{L}^1(\vee^k\mathscr{Z}_0)}\,.
\end{eqnarray*} For a curve of symmetric hierarchies, $t\in I\to \gamma_t\in \mathscr{H}(\mathscr{Z}_0)$, satisfying the assumption \eqref{A2} and such that $\mu_t=\Phi^{-1}(\gamma_t)\in \mathscr{M}(\mathscr{Z}_0)$, we have defined in Section \ref{fram} the following operations on $\gamma_t$, \begin{equation} \label{defBjk} \begin{aligned} \bullet\;\; C_{j,k}^{+} \gamma_t^{} &:= \displaystyle\int_{\mathscr{Z}_{s}} \big| x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j}\big| \;d\mu_{t}(x) \,,\\ \medskip \bullet \;\;C_{j,k}^{-} \gamma_{t}^{} &:= \displaystyle\int_{\mathscr{Z}_{s}} \big| x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j} \rangle \langle x^{\otimes k} \big| \;d\mu_{t}(x) \,, \end{aligned} \end{equation} for any $t\in I$, $k\in\mathbb{N}$ and $j=1,\cdots,k$. Here the rank-one operators in the above integrals, $$ P=\big| x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j}\big| \quad \text{ and } \quad Q=\big| x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j} \rangle \langle x^{\otimes k} \big|\,, $$ are well defined operators in $\mathscr{L}(\mathcal{D}_{\sigma}^{(k)},\mathcal{D}_{-\sigma}^{(k)})$, respectively acting as follows for each fixed $x\in \mathscr{Z}_s$ and for any $\phi\in\mathcal{D}_{\sigma}^{(k)}$, $$ P( \phi)= \langle x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j},\phi\rangle_{\vee^k \mathscr{Z}_0}\; x^{\otimes k} \; \text{ and } \; Q( \phi)= \langle x^{\otimes k}, \phi\rangle_{\vee^k \mathscr{Z}_0}\; \; x^{\otimes j-1} \otimes v(t,x)\otimes x ^{\otimes k-j} \,. $$ \begin{lem} \label{opBjk} Let $v:\mathbb{R}\times \mathscr{Z}_{s}\to\mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $t\in I\to \gamma_t\in \mathscr{H}(\mathscr{Z}_0)$ be a curve satisfying \eqref{A2}. Then for any $t\in I$, $k\in\mathbb{N}$ and $j\in\{1,\cdots,k\}$, the operations \eqref{defBjk}, \begin{eqnarray*} C_{j,k}^\pm: \mathscr{H}(\mathscr{Z}_0) &\longrightarrow & \Pi_{k\in \mathbb{N}} \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)\\ (\gamma_t^{(k)})_{k\in\mathbb{N}} &\longrightarrow& \bigg(C_{j,k}^\pm \gamma_t^{} \bigg)_{k\in\mathbb{N}}\,, \end{eqnarray*} are well defined as Bochner integrals in $\mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)$ with respect to $\mu_t$. \end{lem} \begin{proof} One can show that the following maps, \begin{eqnarray*} x\in \mathscr{Z}_s \to | x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(t,x) \otimes x^{\otimes k-j} |\in \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0),\\ \noalign{\medskip} x\in \mathscr{Z}_s \to | x^{\otimes j-1} \otimes v(t,x) \otimes x^{\otimes k-j}\rangle \langle x^{\otimes k} |\in \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0), \end{eqnarray*} are weakly measurable. Since $\mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)$ is a separable Banach space, the Pettis theorem implies that the above maps are $\mu_t$-Bochner measurable. Moreover, using the assumptions \eqref{A0}, \eqref{A2} and Lemma \ref{a1a2}, one shows \begin{eqnarray} \label{est-vect1} \int_{\mathscr{Z}_s} \bigg\| | x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(t,x) \otimes x^{\otimes k-j} |\bigg\|_{ \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)} \, d\mu_t &\leq& \int_{\mathscr{Z}_s} \| x\|_{\mathscr{Z}_{0}}^{2 k-1} \, \|v(t,x)\|_{\mathscr{Z}_{-\sigma}} \, d\mu_t \\ \label{est-vect2} &\leq& M \,, \end{eqnarray} for some finite constant $M>0$. So, we see that the operations \eqref{defBjk} are well defined as Bochner integrals in $\mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)$ with respect to $\mu_t$.
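For completeness, here is a short justification of the pointwise bound inside \eqref{est-vect1}; it is only a sketch, assuming (as is customary for such riggings) that $A\geq 1$, so that $\|\cdot\|_{\mathscr{Z}_{-\sigma}}\leq\|\cdot\|_{\mathscr{Z}_{0}}$. Conjugating the rank-one operator by $(A^{-\sigma/2})^{\otimes k}$ and using the estimate $\| |a\rangle\langle b| \|_{\mathscr{L}^1}\leq \|a\|\,\|b\|$, one gets
\[
\Big\| \, | x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(t,x) \otimes x^{\otimes k-j} | \, \Big\|_{ \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)}
\leq \| x\|_{\mathscr{Z}_{-\sigma}}^{2 k-1}\, \|v(t,x)\|_{\mathscr{Z}_{-\sigma}}
\leq \| x\|_{\mathscr{Z}_{0}}^{2 k-1} \, \|v(t,x)\|_{\mathscr{Z}_{-\sigma}}\,.
\]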
\end{proof} \begin{lem} \label{intBjk} Let $v:\mathbb{R}\times \mathscr{Z}_{s}\to\mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $t\in I\to \gamma_t\in \mathscr{H}(\mathscr{Z}_0)$ be a curve satisfying \eqref{A2}. Then, for any $k\in\mathbb{N}$ and $j\in\{1,\cdots,k\}$, the map $t\in I\to C_{j,k}^\pm \gamma_t \in \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)$ is Bochner integrable with respect to the Lebesgue measure over the bounded interval $I$. \end{lem} \begin{proof} The map $t\to (A^{-\sigma/2})^{\otimes k} C_{j,k}^\pm \gamma_t (A^{-\sigma/2})^{\otimes k} \in \mathscr{L}^1(\vee^k\mathscr{Z}_0)$ is weakly measurable. So, by the Pettis theorem, $t\to C_{j,k}^\pm \gamma_t \in \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)$ is Bochner measurable. Moreover, one easily checks, using \eqref{est-vect1}-\eqref{est-vect2} (for $t>t_0$), \begin{eqnarray*} \int_{t_0}^t || C_{j,k}^\pm \gamma_\tau||_{ \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)} d\tau &\leq & c_1 \int_{t_0}^t \int_{\mathscr{Z}_s} \| x\|_{\mathscr{Z}_{0}}^{2 k-1} \, \|v(\tau,x)\|_{\mathscr{Z}_{-\sigma}} \, d\mu_\tau d\tau \\ &\leq & c_2 \, |t-t_0|\,, \end{eqnarray*} for some constants $c_1,c_2$. \end{proof} Hence, under the assumption \eqref{A0} and the a priori condition \eqref{A2} on solutions of the symmetric hierarchy equation \eqref{int-hier}, the following integral makes sense, \begin{equation} \label{hier} t\in I \to \int_{t_{0}}^{t} \sum_{j=1}^{k} (C_{j,k}^{+} \gamma_{\tau}^{} + C_{j,k}^{-} \gamma_{\tau}^{}) \;d\tau \,\in \mathscr{L}_{-\sigma}^1(\vee^k\mathscr{Z}_0)\,, \end{equation} and it is continuous with respect to time. So, the hierarchy equation is meaningful and it is consistent with the requirement \eqref{A2}. \bigskip \emph{Kernel representation of hierarchy equations}: In this paragraph we show that both the Gross-Pitaevskii and the Hartree hierarchies in \eqref{GPH-cubic} are particular cases of the abstract symmetric hierarchy equation given in \eqref{int-hier}, corresponding respectively to the nonlinearities $g(u)=|u|^2 u$ and $g(u)=(V*|u|^2)\, u$, with the vector field $v$ given according to \eqref{nls-v} with the choice \eqref{spec-fram}, and such that the mapping \eqref{nonlin} is continuous and bounded on bounded sets. The hierarchy equations \eqref{GPH-cubic} are usually understood via the corresponding integral equation, \begin{equation} \label{hier-kernel-int} \gamma^{(k)}(t)= \mathcal{U}^{(k)}(t)\gamma^{(k)}(0)-i \int_{0}^t \mathcal{U}^{(k)}(t-\tau) \, B_k\gamma^{(k+1)}(\tau) \,d\tau\,, \end{equation} where the $\gamma^{(k)}(t)\in L^2(\mathbb{R}^{dk}\times \mathbb{R}^{dk})$, satisfying \eqref{cont1}-\eqref{cont2}, are kernels of non-negative trace-class operators and $\mathcal{U}^{(k)}(t)$ is the one-parameter group, $$ \mathcal{U}^{(k)}(t)=\displaystyle \prod_{j=1}^{k} \;e^{i t (\Delta_{x_j}-\Delta_{x'_j})}\,. $$ For the equation \eqref{hier-kernel-int} to make sense, one usually looks for solutions satisfying, for some $R>0$ and all $k\in\mathbb{N}$, the estimate, $$ \sup_{t\in I} \bigg\|\prod_{j=1}^k (-\Delta_{x_j}+1)^{s/2} (-\Delta_{x'_j}+1)^{s/2}\,\gamma^{(k)}(t) \bigg\|_{ \mathscr{L}^1( L^2(\mathbb{R}^{dk}))}\leq R^{2k}\,. $$ This is exactly the second assumption in \eqref{A2}, with the choice \eqref{spec-fram}, expressed in terms of trace-class operators.
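Although it is not needed in the computation below, it may be useful to note how this estimate fits the abstract framework. Assuming, as the displayed bound suggests, that the choice \eqref{spec-fram} corresponds to $A=-\Delta+1$ acting on $\mathscr{Z}_0=L^2(\mathbb{R}^d)$, so that $\mathscr{Z}_s$ identifies with the Sobolev space $H^s(\mathbb{R}^d)$ (and recalling that for non-negative operators the trace norm coincides with the trace), Lemma \ref{concent} translates the above a priori bound into a concentration property of the measure $\mu_t=\Phi^{-1}\big((\gamma^{(k)}(t))_{k\in\mathbb{N}}\big)$, whenever this family is a symmetric hierarchy:
\[
\mathrm{Tr}\big[(A^{s/2})^{\otimes k}\,\gamma^{(k)}(t)\,(A^{s/2})^{\otimes k}\big]\leq R^{2k}\ \text{ for all } k\in\mathbb{N}
\quad\Longleftrightarrow\quad
\mu_t\big(B_{H^s(\mathbb{R}^d)}(0,R)\big)=1\,,
\]
for each $t\in I$.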
Moreover, if we consider $\tilde \gamma^{(k)}(t):= \mathcal{U}^{(k)}(-t)\gamma^{(k)}(t)$ then, multiplying both sides of \eqref{hier-kernel-int} by $\mathcal{U}^{(k)}(-t)$, the integral equation \eqref{hier-kernel-int} gives, \begin{equation} \label{equiv-hier} \tilde\gamma^{(k)}(t)= \tilde\gamma^{(k)}(0)+ \int_{0}^t \, \sum_{j=1}^{k} (C_{j,k}^{+} \tilde\gamma_{\tau}^{} + C_{j,k}^{-} \tilde\gamma_{\tau}^{}) \;d\tau\,, \end{equation} where $C_{j,k}^{\pm}$ are the operations defined in \eqref{defBjk}. In fact, a simple computation yields \begin{eqnarray*} \langle \psi, C^+_{j,k}\tilde \gamma(\tau) \,\phi\rangle_{L^2(\mathbb{R}^{dk})}&= & \int_{H^1(\mathbb{R}^d)} \langle\psi | x^{\otimes k} \rangle \langle x^{\otimes j-1} \otimes v(\tau,x)\otimes x ^{\otimes k-j}| \phi\rangle \;d\tilde\mu_{\tau}(x) \\ &=& -i \int_{H^1(\mathbb{R}^d)} \langle \mathcal{U}(\tau)^{\otimes k}\psi |(\mathcal{U}(\tau)x)^{\otimes k} \rangle \times \\ &&\hspace{.6in}\langle (\mathcal{U}(\tau)x)^{\otimes j-1} \otimes g( \mathcal{U}(\tau)x)\otimes (\mathcal{U}(\tau)x) ^{\otimes k-j}| \mathcal{U}(\tau)^{\otimes k}\phi\rangle \;d\tilde\mu_{\tau}(x) \\ &=& -i \int_{H^1(\mathbb{R}^d)} \langle \mathcal{U}(\tau)^{\otimes k}\psi |y^{\otimes k} \rangle \langle y^{\otimes j-1} \otimes g(y)\otimes y ^{\otimes k-j}| \mathcal{U}(\tau)^{\otimes k}\phi\rangle \;d\mu_{\tau}(y)\,. \end{eqnarray*} Here, $\tilde\mu_t=\Phi^{-1}((\tilde \gamma^{(k)})_{k\in\mathbb{N}}) $ and $\mu_t=\Phi^{-1}(( \gamma^{(k)})_{k\in\mathbb{N}})$, where $\Phi$ is the homeomorphism of Corollary \ref{homeo}. On the other hand, using \eqref{Bjk-1}-\eqref{Bjk-2} one obtains \begin{eqnarray*} \langle \psi, \mathcal{U}^{(k)}(-\tau)B^+_{j,k} \gamma^{(k+1)}(\tau) \,\phi\rangle_{L^2(\mathbb{R}^{dk})}&= & \langle \mathcal{U}(\tau)^{\otimes k}\psi, B^+_{j,k} \gamma^{(k+1)}(\tau) \,\mathcal{U}(\tau)^{\otimes k}\phi\rangle_{L^2(\mathbb{R}^{dk})}\,, \end{eqnarray*} and a simple computation gives the following kernel-operator identification $$ B^+_{j,k} \gamma^{(k+1)}(\tau) \equiv \int_{H^1(\mathbb{R}^d)} \big|y^{\otimes k} \rangle \langle y^{\otimes j-1} \otimes g(y)\otimes y ^{\otimes k-j}\big| \;d\mu_{\tau}(y) \in\mathscr{L}_{-\sigma}^1(L_s^2(\mathbb{R}^{dk}))\,. $$ So one proves $$ -i\;\mathcal{U}^{(k)}(-\tau)B^+_{j,k} \gamma^{(k+1)}(\tau) \equiv C^+_{j,k}\tilde \gamma(\tau) \,. $$ This shows the equivalence between the two formulations \eqref{equiv-hier} and \eqref{hier-kernel-int}. \subsection{Characteristic equation} In the sequel, we prove that the Liouville equation \eqref{eq.transport} is equivalent to another, simpler formulation in terms of characteristic functions. This is based on the elementary fact that probability measures can be characterized by their characteristic functions. \begin{prop} \label{lm.cha} Let $v:\mathbb{R}\times \mathscr{Z}_{s}\to\mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $(\mu_{t})_{t \in I}$ be a curve in $\mathscr{M}(\mathscr{Z}_0)$ satisfying the assumption \eqref{A1}. Then the following assertions are equivalent: \begin{itemize} \item[(i)] $(\mu_{t})_{t \in I}$ is a solution of the Liouville equation \eqref{eq.transport}. \item[(ii)] $(\mu_{t})_{t \in I}$ solves the following characteristic equation, i.e.: $\forall t \in I ~,~ \forall y \in \mathscr{Z}_{\sigma} $, \begin{equation} \label{K} \mu_{t}(e^{2i \pi \mathrm{Re} \langle y,. \rangle_{\mathscr{Z}_{0}}}) = \mu_{t_{0}}(e^{2i \pi \mathrm{Re} \langle y,. \rangle_{\mathscr{Z}_{0}}})
+ 2i\pi \int_{t_{0}}^{t} \mu_{\tau}\big( e^{2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}}} \;\mathrm{Re} \langle v(\tau,x);y \rangle_{\mathscr{Z}_{0}}\big)\,d\tau\,, \end{equation} \end{itemize} where we have used the notation $\mu_{t}(e^{2i \pi \mathrm{Re} \langle y,. \rangle_{\mathscr{Z}_{0}}}) = \displaystyle\int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}}} \,d\mu_{t}(x)$. \end{prop} \begin{proof} We suppose that $(\mu_{t})_{t \in I}$ is a solution of the Liouville equation \eqref{eq.transport} satisfying the assumption \eqref{A1}. Consider a test function $\varphi(t,x) = \chi(t) \varphi_{m}(x)$, with $\chi \in \mathscr C_{0}^{\infty}(I)$ and $\varphi_{m}$ given by: \[ \varphi_{m}(x) = \cos(2\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) \;\psi(\frac{\mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}}{m}) ~, \] for some fixed $z \in \mathscr{Z}_{-\sigma}$ and $ \psi \in \mathscr C_{0}^{\infty}(\mathbb{R}) $ such that $0\leq \psi \leq 1$ and $\psi$ is equal to $1$ in a neighbourhood of $0$. So the functions $\varphi_{m}$ converge pointwise to $\cos(2\pi \mathrm{Re} \langle z,. \rangle_{\mathscr{Z}_{-\sigma}})$ when $m$ tends to $+ \infty$. As $(\mu_{t})_{t \in I}$ satisfies the Liouville equation \eqref{eq.transport} and $\varphi\in \mathscr C_{0,cyl}^{\infty}(I\times \mathscr{Z}_{-\sigma})$, one can write: \[ \int_{I}\int_{\mathscr{Z}_{s}} \chi^{'}(t)\varphi_{m}(x) + \mathrm{Re} \langle v(t,x); \nabla \varphi_{m}(x) \rangle_{\mathscr{Z}_{-\sigma}} \;\chi(t)\, d\mu_{t}(x)dt=0. \] A simple computation yields the gradient of $\varphi_{m}$, \begin{eqnarray*} \nabla \varphi_{m}(x) &=& -2\pi \sin(2\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}})\psi(\frac{\mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}}{m})\cdot z+ \\&&\cos(2\pi \mathrm{Re}\langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) \frac{1}{m} \,\psi^{'}( \frac{\mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}}{m}) \cdot z\in \mathscr{Z}_{-\sigma}. \end{eqnarray*} Then, inserting this expression of $\nabla \varphi_{m}$ into the previous integral and using the dominated convergence theorem and Fubini's theorem, we obtain \[ \int_{I} \chi^{'}(t)\int_{\mathscr{Z}_{s}} \cos(2\pi \mathrm{Re}\langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) d\mu_{t}(x)dt = \int_{I} \chi(t)\int_{\mathscr{Z}_{s}} 2\pi \sin(2\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) \mathrm{Re} \langle v(t,x); z \rangle_{\mathscr{Z}_{-\sigma}} d\mu_t(x)dt . \] In the same way (with $\varphi_{m}(x) = \sin(2\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) \psi( \frac{\mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}}{m})$), we have a similar identity: \[ \int_{I} \chi^{'}(t)\int_{\mathscr{Z}_{s}} \sin(2\pi \mathrm{Re}\langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) d\mu_{t}(x)dt =- \int_{I} \chi(t)\int_{\mathscr{Z}_{s}} 2\pi \cos(2\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}) \mathrm{Re} \langle {v}(t,x); z \rangle_{\mathscr{Z}_{-\sigma}} d\mu_t(x)dt . \] Summing the first identity with $i$ times the second one, one obtains: \[ \int_{I} \chi^{'}(t)\int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x)dt = -2i\pi \int_{I} \chi(t)\int_{\mathscr{Z}_{s}} \mathrm{Re} \langle {v}(t,x); z \rangle_{\mathscr{Z}_{-\sigma}} e^{2i \pi \mathrm{Re} \langle x,z \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_t(x)dt.
\] Set \[ h(t) := \int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x). \] Then the previous equation becomes, in a distributional sense, \[ \frac{d}{dt} h(t) = 2i \pi \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle{v}(t,x) ; z \rangle_{\mathscr{Z}_{-\sigma}} e^{2i\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x). \] Since $h \in L^{1}(I,dt)$ and $h^{'} \in L^{1}(I,dt)$, we have $h \in W^{1,1}(I,\mathbb{C})$. This proves that $h$ is absolutely continuous over $I$ and so the fundamental theorem of calculus holds true for $h$, i.e., \[ \forall (t,t_{0}) \in I^{2} ~,~ h(t) = h(t_{0}) + \int_{t_{0}}^{t}h^{'}(s) ds. \] Rewriting the latter equality, one essentially obtains the characteristic equation for all $ z \in \mathscr{Z}_{-\sigma}$: \begin{eqnarray*} \int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x) &=& \int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t_{0}}(x) +\\&& 2i\pi \int_{t_{0}}^{t} \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle {v}(\tau,x) ; z \rangle_{\mathscr{Z}_{-\sigma}} e^{2i\pi \mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{\tau}(x)d\tau. \end{eqnarray*} The scalar products satisfy $\langle z,x \rangle_{\mathscr{Z}_{-\sigma}} = \langle A^{-\sigma}z,x \rangle_{\mathscr{Z}_{0}}$. So, setting $y = A^{-\sigma}z \in \mathscr{Z}_{\sigma}$, we have: \begin{equation*} \int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}}} d\mu_{t}(x) = \int_{\mathscr{Z}_{s}} e^{2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}}} d\mu_{t_{0}}(x) + 2i\pi \int_{t_{0}}^{t} \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle {v}(\tau,x) ; y \rangle_{\mathscr{Z}_{0}} e^{2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}}} d\mu_{\tau}(x)d\tau. \end{equation*} \bigskip Conversely, suppose that the measures $(\mu_{t})_{t \in I}$ satisfy the characteristic equation \eqref{K}. Let $\psi \in \mathscr C_{0,cyl}^{\infty}(\mathscr{Z}_{-\sigma})$; then we can write $\psi(x) \equiv \phi(p(x))$ where $p$ is an orthogonal projection on a finite dimensional subspace of $\mathscr{Z}_{-\sigma}$ and $\phi \in \mathscr C_{0}^{\infty}(p(\mathscr{Z}_{-\sigma}))$. As $\psi$ is real-valued, one can write, using the inverse Fourier transform, \begin{equation} \label{fourier-inv} \psi(x) = \int_{p(\mathscr{Z}_{-\sigma})} \cos(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) \mathscr{F}_{R}(\psi)(z) + \sin(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}})\mathscr{F}_{I}(\psi)(z) dL(z)\,, \end{equation} where $dL$ denotes the Lebesgue measure on $p(\mathscr{Z}_{-\sigma})$ and \begin{eqnarray*} \mathscr{F}_{R}(\psi)(z) &= & \int_{p(\mathscr{Z}_{-\sigma})} \cos(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) \;\psi(x) \;dL(x), \\ \mathscr{F}_{I}(\psi)(z) &= &\int_{p(\mathscr{Z}_{-\sigma})} \sin(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) \;\psi(x) \;dL(x)\,.
\end{eqnarray*} Splitting the characteristic equation \eqref{K} into real and imaginary parts yields: \begin{eqnarray*} A = \int_{\mathscr{Z}_{-\sigma}} \cos (2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) d\mu_{t}(x) &=& \int_{\mathscr{Z}_{-\sigma}} \cos (2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) d\mu_{t_{0}}(x) + \\ && \hspace{-.8in}\int_{t_{0}}^{t} \int_{\mathscr{Z}_{-\sigma}} \mathrm{Re} \langle v(\tau,x);z \rangle_{\mathscr{Z}_{-\sigma}} \big(-2\pi \sin(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}})\big)d\mu_{\tau}(x)d\tau, \end{eqnarray*} \begin{eqnarray*} B = \int_{\mathscr{Z}_{-\sigma}} \sin (2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) d\mu_{t}(x) &= & \int_{\mathscr{Z}_{-\sigma}} \sin(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) d\mu_{t_{0}}(x) + \\ && \hspace{-.8in}\int_{t_{0}}^{t} \int_{\mathscr{Z}_{-\sigma}} \mathrm{Re} \langle v(\tau,x);z \rangle_{\mathscr{Z}_{-\sigma}} \big(2\pi \cos(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}})\big)d\mu_{\tau}(x)d\tau. \end{eqnarray*} Now, a computation of $\displaystyle\int_{p(\mathscr{Z}_{-\sigma})} (\mathscr{F}_{R}(\psi) \times A + \mathscr{F}_{I}(\psi) \times B) \,dL(z)$ gives: \[ \int_{\mathscr{Z}_{-\sigma}} \psi(x) d\mu_{t}(x) = \int_{\mathscr{Z}_{-\sigma}} \psi(x) d\mu_{t_{0}}(x) + \int_{t_{0}}^{t} \int_{\mathscr{Z}_{-\sigma}} \mathrm{Re} \langle v(\tau,x); \nabla \psi(x) \rangle_{\mathscr{Z}_{-\sigma}} d\mu_{\tau}(x)d\tau, \] where the last term on the above right hand side is obtained using the Fourier inversion formula, \begin{eqnarray*} \nabla\psi(x)[u]&=& \int_{p(\mathscr{Z}_{-\sigma})} \bigg(2\pi \mathrm{Re}\langle u,z\rangle_{\mathscr{Z}_{-\sigma}} \; \cos(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}})\mathscr{F}_{I}(\psi)(z) -\\ &&\hspace{1in}2\pi \mathrm{Re}\langle u,z\rangle_{\mathscr{Z}_{-\sigma}} \; \sin(2\pi \mathrm{Re} \langle z;x \rangle_{\mathscr{Z}_{-\sigma}}) \mathscr{F}_{R}(\psi)(z) \bigg)\,dL(z)\,, \end{eqnarray*} for any $x,u\in p(\mathscr{Z}_{-\sigma})$. This equality shows that, in the distributional sense, we have \[ \frac{d}{dt} \int_{\mathscr{Z}_{-\sigma}} \psi(x) d\mu_{t}(x) = \int_{\mathscr{Z}_{-\sigma}} \mathrm{Re} \langle v(t,x); \nabla \psi(x) \rangle_{\mathscr{Z}_{-\sigma}} \;d\mu_{t}(x). \] Multiplying the latter equality by $\chi(t)$, with $\chi\in \mathscr C_{0}^{\infty}(I)$, and integrating by parts in time, one concludes that $(\mu_{t})_{t \in I}$ satisfies the Liouville equation \eqref{eq.transport} for any test function of the form $\varphi(t,x)\equiv\chi(t) \psi(x)$ with $\psi \in \mathscr C_{0,cyl}^{\infty}(\mathscr{Z}_{-\sigma})$. Then using a standard density argument one obtains the Liouville equation \eqref{eq.transport} for any $\varphi\in \mathscr C_{0,cyl}^{\infty}(I\times \mathscr{Z}_{-\sigma})$. \end{proof} \subsection{Duality} Once we have defined the characteristic equation \eqref{K} and proved its equivalence with the Liouville equation, we proceed to the proof of the duality between the hierarchy equations \eqref{hier} and the Liouville equations \eqref{eq.transport} stated in Theorem \ref{sec.0.thm1}. \begin{prop} \label{thm.main} Let ${v}: \mathbb{R} \times \mathscr{Z}_{s} \mapsto \mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $t \in I \to \mu_{t}$ be a curve in $\mathscr{M}(\mathscr{Z}_{0})$ verifying \eqref{A1} and solving the Liouville equation \eqref{eq.transport}.
Then $t\in I\to\gamma_t=\Phi(\mu_{t})$ is a curve in $\mathscr{H}(\mathscr{Z}_{0})$ satisfying \eqref{A2} and solving the symmetric hierarchy equation \eqref{int-hier}. \end{prop} \begin{proof} Remember that Lemma \ref{a1a2} says that the assumptions \eqref{A1} and \eqref{A2} are equivalent. So, it is enough to prove that $t\to \gamma_t=\Phi(\mu_{t})$ solves the hierarchy equation \eqref{int-hier}, i.e., \[ \forall t \in I ~,~ \gamma_{t}^{(k)} = \gamma_{t_{0}}^{(k)} + \int_{t_{0}}^{t} \sum_{j=1}^{k} (C_{j,k}^{+} \gamma_{\tau}^{} + C_{j,k}^{-} \gamma_{\tau}^{}) d\tau \;\in \mathscr{L}^1_{-\sigma}(\vee^k\mathscr{Z}_{0}) ~.~ \] According to Proposition \ref{lm.cha}, $(\mu_{t})_{t \in I}$ satisfies the characteristic equation \eqref{K}. So, for any $y \in \mathscr{Z}_{\sigma}$, we have: \[ \frac{d}{dt} \mu_{t}(e^{2i\pi \mathrm{Re} \langle y;x \rangle_{\mathscr{Z}_{0}}}) = 2i \pi \displaystyle\int_{\mathscr{Z}_{s}} \mathrm{Re} \langle v(t,x); y \rangle_{\mathscr{Z}_{0}}\, e^{2i\pi \mathrm{Re} \langle y;x \rangle_{\mathscr{Z}_{0}}} d\mu_{t}(x) ~,~ ~a.e.~ t \in I. \] We use the analyticity of the function $ \lambda \to \mu_{t}(e^{2i\pi \lambda\,\mathrm{Re} \langle y;x \rangle_{\mathscr{Z}_{0}}})$ in order to identify the two sides of \eqref{K} order by order in $\lambda$. Indeed, since $\mu_{t}(B_{\mathscr{Z}_{0}}(0,1))=1$, there exists a constant $C >0$ such that \[ \displaystyle\int_{\mathscr{Z}_{s}} | \mathrm{Re} \langle y;x \rangle_{\mathscr{Z}_{0}} |^{n} d\mu_{t}(x) \leq || y ||_{\mathscr{Z}_{0}}^{n} \displaystyle\int_{\mathscr{Z}_{s}} ||x||_{\mathscr{Z}_{0}}^{n} d\mu_{t}(x) \leq C^{n}. \] One also obtains a similar estimate, \begin{eqnarray*} \int_{I} \int_{\mathscr{Z}_{s}} | \mathrm{Re} \langle v(t,x) ; y \rangle_{\mathscr{Z}_{0}} |\cdot| \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}} |^{n} d\mu_{t}(x)dt &\leq& \int_{I}\int_{\mathscr{Z}_{s}} ||v(t,x)||_{\mathscr{Z}_{-\sigma}}\,||x||_{\mathscr{Z}_{0}}^{n} d\mu_{t}(x)dt \times ||y||_{\mathscr{Z}_{\sigma}}^{n+1}\,, \\ &\leq& ||y||_{\mathscr{Z}_{\sigma}}^{n+1} \int_{I} \int_{\mathscr{Z}_{s}} ||v(t,x)||_{\mathscr{Z}_{-\sigma}}d\mu_{t}(x)dt \leq C^{n+2}\,, \end{eqnarray*} since $I$ is bounded, $v$ satisfies \eqref{A0} and $\mu_t$ concentrates on $B_{\mathscr{Z}_{s}}(0,R)$ for some constant $R>0$ independent of time. Hence, with these inequalities, we can write, since $(\mu_{t})_{t \in I}$ satisfies \eqref{K}: \begin{eqnarray*} \int_{\mathscr{Z}_{s}} \sum_{n=0}^{\infty} \frac{(2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}})^{n}\lambda^{n}}{n!}d\mu_{t}(x) &=& \int_{\mathscr{Z}_{s}} \sum_{n=0}^{\infty} \frac{(2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}})^{n}\lambda^{n}}{n!}d\mu_{t_{0}}(x) + \\ && 2i\pi\int_{t_{0}}^{t} \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle v(\tau,x) ; \lambda y \rangle_{\mathscr{Z}_{0}} \sum_{n = 0}^{\infty} \frac{(2i\pi \mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}})^{n}\lambda^{n}}{n!} d\mu_{\tau}(x)d\tau\,.
\end{eqnarray*} Identifying the coefficients of each power of $\lambda$, this gives the equality: \begin{eqnarray*} \int_{\mathscr{Z}_{s}} \frac{(2i\pi)^{n}}{n!}(\mathrm{Re} \langle y,x \rangle _{\mathscr{Z}_{0}})^{n} d\mu_{t}(x) &=& \int_{\mathscr{Z}_{s}} \frac{(2i\pi)^{n}}{n!}(\mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}} )^{n} d\mu_{t_{0}}(x) + \\ &&2i\pi \int_{t_{0}}^{t} \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle v(\tau,x);y \rangle_{\mathscr{Z}_{0}} \frac{(2i\pi)^{n-1}}{(n-1)!} (\mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}} )^{n-1} d\mu_{\tau}(x)d\tau\,. \end{eqnarray*} Writing $$ (\mathrm{Re} \langle y,x \rangle_{\mathscr{Z}_{0}} )^{n} = \frac{1}{2^{n}} \sum_{k=0}^{n} C_{n}^{k} \langle y,x \rangle_{\mathscr{Z}_{0}}^{k} \langle x,y \rangle_{\mathscr{Z}_{0}}^{n-k}\,, $$ and noticing that the $U(1)$-invariance of the measures $\mu_t$ yields for any $0\leq k\leq n$ except $k=n/2$, $$ \int_{\mathscr{Z}_{s}} \langle y,x \rangle^{k} \langle x,y \rangle^{n-k} \;d\mu_{t}(x) = 0\,; $$ then one concludes that in the case $n=2k$, \[ \int_{\mathscr{Z}_{s}} (\mathrm{Re} \langle y,x \rangle)^{n} \;d\mu_{t}(x) = \frac{1}{2^{2k}} \int_{\mathscr{Z}_{s}} C_{2k}^{k} | \langle y,x \rangle |^{2k} d\mu_{t}(x)\,. \] This gives \begin{eqnarray*} \frac{1}{2^{2k}} \int_{\mathscr{Z}_{s}} C_{2k}^{k} | \langle y,x \rangle |^{2k} d\mu_{t}(x) &=& \frac{1}{2^{2k}} \int_{\mathscr{Z}_{s}} C_{2k}^{k} | \langle y,x \rangle |^{2k} d\mu_{t_{0}}(x) + \\ &&\hspace{-.6in}\int_{t_{0}}^{t} \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle v(\tau,x) ; y \rangle_{\mathscr{Z}_{0}} \bigg( \frac{2k}{2^{2k-1}} \sum_{j=0}^{2k-1} C_{2k-1}^{j} \langle y,x \rangle^{j} \langle x,y \rangle^{2k-1-j}\bigg) d\mu_{\tau}(x)d\tau \,. \end{eqnarray*} The last term in the above equality can also be simplified thanks to the $U(1)$-invariance of the vector field $v$ and of the measures $\mu_t$. Indeed, if we develop \[ \mathrm{Re} \langle v(\tau,x) ; y \rangle_{\mathscr{Z}_{0}} = \frac{1}{2} \langle v(\tau,x) , y \rangle_{\mathscr{Z}_{0}} +\frac{1}{2} \langle y, v(\tau,x) \rangle_{\mathscr{Z}_{0}}, \] and write, \begin{eqnarray} \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle v(\tau,x) ; y \rangle_{\mathscr{Z}_{0}} \bigg( \sum_{j=0}^{2k-1} C_{2k-1}^{j} \langle y,x \rangle_{\mathscr{Z}_{0}}^{j} \langle x,y \rangle_{\mathscr{Z}_{0}}^{2k-1-j}\bigg) d\mu_{\tau}(x) &=& \nonumber\\ \label{equiv.eq1} &&\hspace{-2.4in} \frac{1}{2} \sum_{j=0}^{2k-1} C_{2k-1}^{j} \int_{\mathscr{Z}_{s}} \langle v(\tau,x),y \rangle_{\mathscr{Z}_{0}} \langle x,y \rangle_{\mathscr{Z}_{0}}^{2k-1-j}\langle y,x \rangle_{\mathscr{Z}_{0}}^{j}d\mu_{\tau}(x) \\ \label{equiv.eq2} &&\hspace{-2.4in}+ \frac{1}{2} \sum_{j=0}^{2k-1} C_{2k-1}^{j} \int_{\mathscr{Z}_{s}} \langle y,v(\tau,x) \rangle_{\mathscr{Z}_{0}} \langle x,y \rangle_{\mathscr{Z}_{0}}^{2k-1-j}\langle y,x \rangle_{\mathscr{Z}_{0}}^{j}d\mu_{\tau}(x)\,, \end{eqnarray} then using the gauge invariance, one notices that the sum in \eqref{equiv.eq1} reduces to $j=k$ while the sum in \eqref{equiv.eq2} reduces to $j=k-1$.
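For the reader's convenience, here is a minimal check of this selection rule, written under the assumption that the $U(1)$-invariance of $v$ means the covariance $v(\tau,e^{i\theta}x)=e^{i\theta}v(\tau,x)$. Replacing $x$ by $e^{i\theta}x$ multiplies the integrand of \eqref{equiv.eq1} by the phase
\[
e^{\pm i\theta\big(j-(2k-1-j)-1\big)}=e^{\pm 2i\theta (j-k)}
\]
(the overall sign depending on the linearity convention for the inner product), so that the $U(1)$-invariance of $\mu_\tau$ kills every term except $j=k$; the analogous computation for \eqref{equiv.eq2} produces the phase $e^{\pm 2i\theta(j-k+1)}$ and selects $j=k-1$.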
So we have: \[ \int_{\mathscr{Z}_{s}} \mathrm{Re} \langle {v}(\tau,x) ; y \rangle_{\mathscr{Z}_{0}} ( \sum_{j=0}^{2k-1} C_{2k-1}^{j} \langle y,x \rangle_{\mathscr{Z}_{0}}^{j} \langle x,y \rangle_{\mathscr{Z}_{0}}^{2k-1-j}) d\mu_{\tau}(x) = \] \[ \frac{1}{2} C_{2k-1}^{k} \int_{\mathscr{Z}_{s}} \langle {v}(\tau,x) , y \rangle_{\mathscr{Z}_{0}} \langle x,y \rangle_{\mathscr{Z}_{0}}^{k-1} \langle y,x \rangle_{\mathscr{Z}_{0}}^{k} d\mu_{\tau}(x) + \frac{1}{2} C_{2k-1}^{k-1} \int_{\mathscr{Z}_{s}} \langle y, {v}(\tau,x) \rangle_{\mathscr{Z}_{0}} \langle x,y \rangle_{\mathscr{Z}_{0}}^{k} \langle y,x \rangle_{\mathscr{Z}_{0}}^{k-1} d\mu_{\tau}(x)\,. \] And then, we can finally write: \begin{eqnarray*} \int_{\mathscr{Z}_{s}} | \langle y,x \rangle |^{2k} d\mu_{t}(x) &= & \int_{\mathscr{Z}_{s}} | \langle y,x \rangle |^{2k} d\mu_{t_{0}}(x) + \\ && \int_{t_{0}}^{t} \int_{\mathscr{Z}_{s}}\bigg( \frac{2k C_{2k-1}^{k}}{C_{2k}^{k}} \langle y^{\otimes k};x^{\otimes k} \rangle_{\mathscr{Z}_{0}} \langle {v}(\tau,x) \otimes x^{\otimes (k-1)}; y^{\otimes k} \rangle_{\mathscr{Z}_{0}} +\\&&\hspace{.7in} \frac{2k C_{2k-1}^{k-1}}{C_{2k}^{k}} \langle y^{\otimes k}; {v}(\tau,x) \otimes x^{\otimes (k-1)} \rangle_{\mathscr{Z}_{0}} \langle x^{\otimes k}; y^{\otimes k} \rangle_{\mathscr{Z}_{0}}\bigg) \;d\mu_{\tau}(x)d\tau \end{eqnarray*} Checking that \[ \frac{2k C_{2k-1}^{k}}{C_{2k}^{k}} = k \qquad \text{ and } \qquad \frac{2k C_{2k-1}^{k-1}}{C_{2k}^{k}} = k\,, \] and using the integral representation of $\gamma_t$ given by $\gamma_t=\Phi(\mu_{t})$, the last equality can be written as, \begin{equation} \label{equiv.eq4} \begin{aligned} \langle y^{\otimes k}, \gamma_{t}^{(k)} y^{\otimes k} \rangle_{\mathscr{Z}_{0}} &=\langle y^{\otimes k}, \gamma_{t_{0}}^{(k)} y^{\otimes k} \rangle _{\mathscr{Z}_{0}}+ \\ &\int_{t_{0}}^{t} \sum_{j=1}^{k} \int_{\mathscr{Z}_{s}} \bigg( \langle y^{\otimes k} | x^{\otimes (j-1)} \otimes {v}(\tau,x) \otimes x^{\otimes (k-j)}\rangle_{\mathscr{Z}_{0}} \langle x^{\otimes k} | y^{\otimes k}\rangle_{\mathscr{Z}_{0}}+\\& \hspace{.9in} \langle y^{\otimes k} | x^{\otimes k} \rangle_{\mathscr{Z}_{0}} \langle x^{\otimes (j-1)} \otimes {v}(\tau,x)\otimes x^{\otimes (k-j)} | y^{\otimes k} \rangle_{\mathscr{Z}_{0}} \bigg) \;d\mu_{\tau}(x) d\tau \end{aligned} \end{equation} So, one recognizes the operations $C_{j,k}^\pm$ given by \eqref{defBjk}, \begin{equation} \label{equiv.eq3} \langle y^{\otimes k}, \gamma_{t}^{(k)} \,y^{\otimes k} \rangle_{\mathscr{Z}_{0}} = \Big\langle y^{\otimes k}, \Big( \gamma_{t_{0}}^{(k)} +\int_{t_{0}}^{t} \sum_{j=1}^{k} \big( C_{j,k}^{+} \gamma_{\tau}^{} + C_{j,k}^{-} \gamma_{\tau}^{} \big)\, d\tau \Big) \,y^{\otimes k} \Big\rangle_{\mathscr{Z}_{0}} \,. \end{equation} Notice that the assumptions \eqref{A2}-\eqref{A1} are satisfied and $y\in\mathscr{Z}_\sigma$, so all the above calculations are well justified. To conclude, one just has to use a polarization formula plus a standard density argument. Indeed, the identity \eqref{equiv.eq3} extends to $\langle \eta^{\otimes k} , \gamma_{t}^{(k)} y^{\otimes k} \rangle$ for all $\eta, y\in\mathscr{Z}_{\sigma}$ using \[ \langle \eta^{\otimes k} , \gamma_{t}^{(k)} y^{\otimes k} \rangle = \int_{0}^{1} \int_{0}^{1} \langle (e^{2i\pi \theta} \eta + e^{2i\pi \varphi}y)^{\otimes k} , \gamma_{t}^{(k)} (e^{2i\pi \theta} \eta + e^{2i\pi \varphi}y)^{\otimes k} \rangle e^{2i\pi (k\theta-k\varphi)} d\theta d\varphi \,.
\] Now, since ${\rm Vect}\{\eta^{\otimes k}, \eta\in \mathscr{Z}_\sigma\} $ is a dense subspace of $\mathcal{D}_{\sigma}^{(k)}= D((A^{\sigma/2})^{\otimes k})$ endowed with the graph norm, one obtains the hierarchy equation \eqref{int-hier} as an integral equation valued in $\mathscr{L}^1_{-\sigma}(\vee^k\mathscr{Z}_{0})$. \end{proof} \begin{prop} \label{thm.main2} Let ${v}: \mathbb{R} \times \mathscr{Z}_{s} \mapsto \mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0} and let $t\in I\to\gamma_t$ be a curve in $\mathscr{H}(\mathscr{Z}_{0})$ satisfying \eqref{A2} and solving the symmetric hierarchy equation \eqref{hier}. Then $t \in I \to \mu_{t}=\Phi^{-1}(\gamma_{t})$ is a curve in $\mathscr{M}(\mathscr{Z}_{0})$ verifying \eqref{A1} and solving the Liouville equation \eqref{eq.transport}. \end{prop} \begin{proof} Thanks to the concentration of $\mu_{t}$ over a bounded set of $\mathscr{Z}_{s}$ (i.e., $\mu_t(B_{\mathscr{Z}_{s}}(0,R))=1$), the following series expansion of $ \displaystyle\int_{\mathscr{Z}_{-\sigma}} e^{2i\pi \mathrm{Re} \langle x,z\rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x) $ holds true, \[ \displaystyle\int_{\mathscr{Z}_{-\sigma}} e^{2i\pi \mathrm{Re} \langle x,z \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x) = \sum_{k \geq 0} \frac{(2i\pi)^{k}}{k!} \sum_{j=0}^{k} \frac{C_{k}^{j}}{2^{k}} \displaystyle\int_{\mathscr{Z}_{-\sigma}} \langle x,z \rangle_{\mathscr{Z}_{-\sigma}}^{j} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}^{k-j} d\mu_{t}(x) ~.~ \] Indeed, we have the estimate, \[ \displaystyle\int_{\mathscr{Z}_{-\sigma}} |\langle x,z \rangle_{\mathscr{Z}_{-\sigma}}|^{2k} d\mu_{t}(x) \leq ||z||_{\mathscr{Z}_{-\sigma}}^{2k} \displaystyle\int_{\mathscr{Z}_{-\sigma}} C^{2k} ||x||_{\mathscr{Z}_{s}}^{2k} d\mu_{t}(x) \leq ||z||_{\mathscr{Z}_{-\sigma}}^{2k} C^{2k} R^{2k}\,. \] Taking into account the $U(1)$-invariance of the measures $\mu_{t}$, one shows \[ \displaystyle\int_{\mathscr{Z}_{-\sigma}} e^{2i \pi\mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x) = \sum_{k \geq 0} \frac{(-1)^{k}\pi^{2k}}{(k!)^{2}} \displaystyle\int_{\mathscr{Z}_{-\sigma}} |\langle z,x \rangle_{\mathscr{Z}_{-\sigma}}|^{2k} d\mu_{t}(x) \,. \] Taking $y=A^{-\sigma} z\in \mathscr{Z}_{\sigma}$, hence $\langle z, x\rangle_{\mathscr{Z}_{-\sigma}}=\langle y, x\rangle_{\mathscr{Z}_{0}}$, we consequently obtain \[ \displaystyle\int_{\mathscr{Z}_{-\sigma}} e^{2i \pi\mathrm{Re} \langle z,x \rangle_{\mathscr{Z}_{-\sigma}}} d\mu_{t}(x) = \sum_{k \geq 0} \frac{(-1)^{k}\pi^{2k}}{(k!)^{2}} \,\langle y^{\otimes k}, \gamma_{t}^{(k)} \,y^{\otimes k} \rangle_{\mathscr{Z}_{0}} \,, \] with $\gamma_{t}$ satisfying the symmetric hierarchy equation \eqref{int-hier} and having the integral representation given by $\gamma_{t}=\Phi(\mu_{t})$. This leads us to the equality \eqref{equiv.eq4}. As the computation of the last proof can be carried out in reverse, we conclude that the set of measures $(\mu_{t})_{t \in I}$ satisfies the characteristic equation \eqref{K} and therefore the Liouville equation \eqref{eq.transport} according to Prop.~\ref{lm.cha}. \end{proof} \section{Uniqueness and existence principles} As explained in the introduction, the duality between the hierarchy and the Liouville equations allows us to benefit from the recent advances in measure transportation theory, see e.g. \cite{MR2400257,MR2439520, MR2129498,MR2335089}.
In particular, the questions of uniqueness for continuity equations in finite dimension are by now quite well understood through either the method of characteristics or a superposition principle \cite{MR2668627}. The latter approach is very powerful and it is based on a sort of probabilistic representation of solutions of a continuity equation. In particular, the first two authors consider in \cite{MR3721874} the question of well-posedness of general Liouville equations related to nonlinear PDEs. In this section, we will improve the results of \cite{MR3721874}. \subsection{Probabilistic representation} \label{subsec.probrep} We recall a powerful probabilistic representation for solutions of the Liouville equation \eqref{eq.transport}, given in Prop.~\ref{prob-rep} below and proved in a previous work of the first two authors \cite{MR3721874}. This result was inspired and stimulated by the ones proved in finite dimension in \cite[Theorem 4.1]{MR2335089} and \cite[Theorem 8.2.1 and 8.3.2]{MR2129498} and the one in infinite dimension proved in \cite[Proposition C.2]{MR3379490}. \bigskip Recall that $(\mathscr{Z}_{s},\mathscr{Z}_{0},\mathscr{Z}_{-\sigma})$ is the triple of spaces introduced in Section \ref{fram} with $0\leq s\leq \sigma$, and that $I$ is always considered as a bounded open interval. We denote by \begin{equation} \label{eq.X} \mathfrak X=\mathscr{Z}_{-\sigma}\times \mathscr{C}(\bar I, \mathscr{Z}_{-\sigma}) \,, \end{equation} and endow such a product space with the following norm \begin{equation} \label{normX} ||(x,\varphi)||_{\mathfrak X}= ||x||_{\mathscr{Z}_{-\sigma}}+\sup_{t\in \bar I}||\varphi(t)||_{\mathscr{Z}_{-\sigma}}\, . \end{equation} Here $\mathscr{Z}_{-\sigma}$ is considered as a real Hilbert space. For each $t \in I$, we define the evaluation map over the space $\mathfrak X$ as, $$ e_{t}:(x,\varphi) \in \mathfrak X \longmapsto \varphi(t) \in \mathscr{Z}_{-\sigma}\,. $$ \begin{prop} \label{prob-rep} Let $v:\mathbb{R}\times \mathscr Z_s\to \mathscr Z_{-\sigma}$ be a Borel vector field such that $v$ is bounded on bounded sets. Let $t\in I\to\mu_{t}\in \mathfrak{P}(\mathscr Z_s)$ be a weakly narrowly continuous solution in $\mathfrak{P}(\mathscr Z_{-\sigma})$ of the Liouville equation \eqref{eq.transport} defined on an open bounded interval $I$. Then there exists a Borel probability measure $\eta$, on the space $(\mathfrak X, ||\cdot||_{\mathfrak X})$ given in \eqref{eq.X}, satisfying: \begin{enumerate}[label=\textnormal{(\roman*)}] \item \label{concen-item1}$\eta$ is concentrated on the set of points $(x,\gamma)\in \mathfrak X$ such that the curve $\gamma$ belongs to $W^{1,1}(I, \mathscr Z_{-\sigma})$, solves the initial value problem $\dot\gamma(t)= v(t,\gamma(t))$ for a.e. $t\in I$, satisfies $\gamma(t)\in \mathscr Z_s$ for a.e. $t\in I$, and $\gamma(t_0)=x\in \mathscr Z_s$ for some fixed $t_0\in I$. \item \label{concen-item2} $\mu_t=(e_{t})_{\sharp}\eta$ for any $t\in I$. \end{enumerate} \end{prop} \begin{remark} Proposition \ref{prob-rep} is proved in \cite[Prop.~4.1]{MR3721874}. However, some slight differences between the two statements may catch the reader's attention. So, we explain how Prop.~\ref{prob-rep} is a straightforward reformulation of \cite[Prop.~4.1]{MR3721874}. In fact, there are two points: \begin{itemize} \item In \cite[Prop.~4.1]{MR3721874}, an abstract rigged Hilbert space $(\mathscr{Z}_{1},\mathscr{Z}_{0},\mathscr{Z}'_{1})$ is used.
With the framework here we are allowed to take $\mathscr{Z}_{1}\equiv \mathscr{Z}_{s}$, $\mathscr{Z}_{0}\equiv \mathscr{Z}_{\frac{s-\sigma}{2}}$ so that $\mathscr{Z}'_{1}$ identifies with $\mathscr{Z}_{-\sigma}$. \item The space $\mathfrak X$ in \cite[Prop.~4.1]{MR3721874} is equipped with a different norm from the one used here. However, it is easy to see that the Borel sets of $\mathfrak X$ are the same for both norms (see Lemma \ref{tribu} in the Appendix C). \end{itemize} \end{remark}
As one can see below, the existence of such a measure $\eta$ has important implications, in particular the existence of a well-defined flow for the initial value problem \eqref{IVP}. For this we first have to establish some properties of the measure $\eta$. Consider the set
\begin{equation} \label{spa.infty} \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})=\{u\in \mathscr{C}(\bar I, \mathscr{Z}_{-\sigma}) : \sup_{t\in \bar I} ||u(t)||_{\mathscr{Z}_{s}}<\infty\}\,. \end{equation}
\begin{lem} \label{mes-lem1} Assume the same assumptions as in Prop.~\ref{prob-rep} and suppose that the curve $t\in I\to\mu_{t}\in \mathfrak{P}(\mathscr Z_s)$ satisfies \eqref{A1}. Then $$ \mathcal{F}_{t_0}:=\left\{ (x,\gamma)\in \mathscr{Z}_s\times \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s}); \,\gamma(t)=x+ \int_{t_0}^t v(s,\gamma(s)) \, ds, \,\forall t\in \bar I\right\}, $$ is a Borel subset of $\mathfrak{X}$ satisfying $\eta( \mathcal{F}_{t_0})=1$, where $\eta$ is the probability measure given in Prop.~\ref{prob-rep}. \end{lem}
\begin{proof} We first prove that $ \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})$ is a Borel subset of the space $\mathscr{C}(\bar I, \mathscr{Z}_{-\sigma})$ endowed with the norm of uniform convergence,
$$ ||u||_{\mathscr{C}(\bar I, \mathscr{Z}_{-\sigma})}=\sup_{t\in \bar I}||u(t)||_{\mathscr{Z}_{-\sigma}}\,. $$
Indeed, the map \begin{eqnarray*} \phi_n: \mathscr{C}(\bar I, \mathscr{Z}_{-\sigma}) &\longrightarrow & \mathbb{R}\\ u&\longrightarrow & \sup_{t\in\bar I} || A^{\frac{s+\sigma}{2}} (1+\frac{A}{n})^{-\frac{s+\sigma}{2}} u(t)||_{\mathscr{Z}_{-\sigma}}\, \end{eqnarray*} is clearly continuous (since $\phi_n$ defines an equivalent norm on $\mathscr{C}(\bar I, \mathscr{Z}_{-\sigma})$) and converges, as $n\to \infty$, to
\begin{equation} \phi(u)= \left\{ \begin{aligned} &\infty & \text{ if } u\notin \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})\\ &\sup_{t\in\bar I} || u(t)||_{\mathscr{Z}_{s}} & \text{ if } u\in \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})\,. \end{aligned} \right. \end{equation}
Since $\phi$ is measurable, the subsets $$ \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})=\phi^{-1}(\mathbb{R}) \qquad \text{ and } \qquad \mathfrak{L}_m^\infty(\bar I,\mathscr{Z}_{s}):=\phi^{-1}([0,m]), $$ are Borel. Furthermore, with a similar argument, one also proves that $\mathscr{Z}_{s}$ is a Borel subset of $\mathscr{Z}_{-\sigma}$ (see e.g.~\cite[Appendix]{MR3721874}). Hence, $\mathscr{Z}_{s}\times \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s})$ is a Borel subset of the space $\mathfrak X$ endowed with the norm \eqref{normX}. Consequently, the Borel $\sigma$-algebra of $(\mathscr{Z}_{s}\times\mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}), ||\cdot||_{\mathfrak X})$ coincides with the $\sigma$-algebra of all Borel sets of $(\mathfrak X,||\cdot||_{\mathfrak X})$ contained in $\mathscr{Z}_{s}\times\mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s})$.
\bigskip \noindent Now, we claim that the map $\psi_m: \mathscr{Z}_{s}\times \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) \longrightarrow \mathbb{R}$ defined by \begin{eqnarray*} \psi_m(x,u)= \sup_{t\in\bar I} ||u(t)-x-\int_{t_0}^t v(\tau,u(\tau)) d\tau||_{\mathscr{Z}_{-\sigma}}\, \end{eqnarray*} is measurable. In fact, we have the following composition of measurable maps
\[ \begin{array}{ccccccc} [t_0,t]\times \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) & \overset{(1)}{\longrightarrow} & \bar I\times \mathscr{Z}_{s} & \overset{(2)}{\longrightarrow} & \mathscr{Z}_{-\sigma} & \overset{(3)}{\longrightarrow} & \mathbb{R} \\ (\tau,u) & \longrightarrow & (\tau,u(\tau)) & \longrightarrow & v(\tau,u(\tau)) & \longrightarrow & \mathrm{Re}\langle v(\tau,u(\tau)), y\rangle_{\mathscr{Z}_{-\sigma}} \, \end{array} \]
where $(2)$ is measurable by \eqref{A0}, $(3)$ is continuous for any $y\in\mathscr{Z}_{-\sigma}$, and $(1)$ is also measurable, since $\mathscr{Z}_{s}$ is a Borel subset of $\mathscr{Z}_{-\sigma}$ and $(1)$ is continuous when considered as a mapping into $I\times\mathscr{Z}_{-\sigma}$. Applying Lemma \ref{classmo} in the Appendix C, one concludes that the following mappings are measurable for any $t\in \bar I$,
\begin{eqnarray*} \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) &\longrightarrow & \mathbb{R}\\ u&\longrightarrow & \int_{t_0}^t \mathrm{Re} \langle v(\tau,u(\tau)),y\rangle_{\mathscr{Z}_{-\sigma}}\, d\tau \,. \end{eqnarray*}
Since $\mathscr{Z}_{-\sigma}$ is a separable Hilbert space, weak and strong measurability coincide by Pettis' theorem; this implies that the mappings
\begin{eqnarray*} \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) &\longrightarrow & \mathscr{Z}_{-\sigma}\\ u&\longrightarrow & \int_{t_0}^t v(\tau,u(\tau))\, d\tau \, \end{eqnarray*}
are actually measurable for any $t\in \bar I$. Notice that the latter integrand is Bochner integrable thanks to the assumption \eqref{A0} and the fact that $u(\cdot)$ is a bounded function valued in $\mathscr{Z}_{s}$. Combining this with the continuity of the mappings
\[ \begin{array}{ccc} \mathscr{Z}_{s}\times \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) & {\longrightarrow} & \mathscr{Z}_{-\sigma} \\ (x,u) & \longrightarrow & u(t)-x \,, \end{array} \]
one concludes that
\[ \begin{array}{ccc} \mathscr{Z}_{s}\times \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) & \longrightarrow & \mathbb{R} \\ (x,u) & \longrightarrow & \displaystyle\sup_{t\in\mathbb{Q}\cap I}||u(t)-x-\int_{t_0}^t v(\tau,u(\tau)) \, d\tau||_{\mathscr{Z}_{-\sigma}}\, \end{array} \]
is measurable. Using the assumption \eqref{A0}, one shows that the curve $t\to u(t)-x-\int_{t_0}^t v(\tau,u(\tau)) \, d\tau\in\mathscr{Z}_{-\sigma}$ is continuous for any fixed $x\in\mathscr{Z}_{s}$ and $u\in \mathfrak{L}_m^\infty(\bar I, \mathscr{Z}_{s}) \subset \mathscr{C}(\bar I, \mathscr{Z}_{-\sigma})$. Hence,
$$ \sup_{t\in\mathbb{Q}\cap I}||u(t)-x-\int_{t_0}^t v(\tau,u(\tau)) \, d\tau||_{\mathscr{Z}_{-\sigma}}=\sup_{t\in\bar I}||u(t)-x-\int_{t_0}^t v(\tau,u(\tau)) \, d\tau||_{\mathscr{Z}_{-\sigma}}\,, $$
and therefore $$ \mathcal{F}_{t_0}=\bigcup_{m\in\mathbb{N}}\psi_m^{-1}(\{0\}) $$ is a Borel subset of $\mathfrak X$.
Furthermore, using Prop.~\ref{prob-rep}-\ref{concen-item2}, one proves for all $t\in I$, $k\in\mathbb{N}$ and $M\geq R$,
$$ \int_{\mathscr{Z}_{-\sigma}} 1_{B_{\mathscr{Z}_{s}}(0,M)}(x) \, ||x||^{2k}_{\mathscr{Z}_s} \,d\mu_t(x)=\int_{\mathfrak{X}} 1_{B_{\mathscr{Z}_{s}}(0,M)}(u(t)) \, ||u(t)||^{2k}_{\mathscr{Z}_s} \,d\eta(x,u)\leq R^{2k}\,, $$
since the function $\varphi:\mathscr{Z}_{-\sigma}\to \mathbb{R}$, $\varphi(x)=1_{B_{\mathscr{Z}_{s}}(0,M)}(x) \, ||x||^{2k}_{\mathscr{Z}_s} $ is Borel and bounded. So, by Fatou's lemma and assumption \eqref{A1}, letting $M\to\infty$ yields
$$ \int_{\mathfrak{X}} ||u(t)||^{2k}_{\mathscr{Z}_s} \,d\eta(x,u)\leq R^{2k}\,. $$
On the other hand, by H\"older's inequality
$$ \int_{\mathfrak{X}} ||u(\cdot)||_{L^{2k}(I, \mathscr{Z}_s)} \,d\eta(x,u)\leq \left(\int_I \int_{\mathfrak{X}} ||u(t)||^{2k}_{\mathscr{Z}_s} \,d\eta(x,u) \,dt\right)^{1/2k} \leq |I|^{1/2k} R\,. $$
Again by Fatou's lemma, letting $k\to \infty$ gives
$$ \int_{\mathfrak{X}} ||u(\cdot)||_{L^{\infty}(I, \mathscr{Z}_s)} \,d\eta(x,u)\leq R\,. $$
So the norm $ ||u(\cdot)||_{L^{\infty}(I, \mathscr{Z}_s)}$ is finite for $\eta$-a.e.~$(x,u) \in \mathfrak{X}$. Combining this fact with Prop.~\ref{prob-rep}-\ref{concen-item1}, one concludes that there exists an $\eta$-negligible set $\mathcal{N}$ such that
$$ \mathcal{N}^c\subset \mathcal{F}_{t_0} \quad \text{ and } \quad \eta(\mathcal N)=0\,. $$
Notice that if $u(\cdot)$ is a solution of \eqref{IVP} with $u(\cdot) \in L^{\infty}(I, \mathscr{Z}_s) \cap W^{1,1}(I,\mathscr{Z}_{-\sigma})$ then $u(\cdot)$ satisfies the integral equation \eqref{int-form} and $u(\cdot)\in L^{\infty}(I, \mathscr{Z}_s)\cap W^{1,\infty}(I,\mathscr{Z}_{-\sigma})$ (i.e. $u(\cdot)$ is a weak solution according to Definition \ref{w-ssol}). In particular, $u(\cdot)$ belongs to $\mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})$ and it is weakly continuous in $\mathscr{Z}_{s}$. \\ Finally, since $\mathcal{F}_{t_0}$ is measurable, $\eta( \mathcal{F}_{t_0})=1$. \end{proof}
\begin{lem} \label{mes-lem2} Assume the same assumptions as in Prop.~\ref{prob-rep} and suppose that uniqueness of weak solutions for the initial value problem \eqref{IVP} holds true. Then $$ \mathcal{G}_{t_0}:=\left\{ x\in \mathscr{Z}_s : \exists \gamma\in \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s}) \text{ s.t. } (x,\gamma)\in \mathcal{F}_{t_0}\right\}, $$ is a Borel subset of $ \mathscr{Z}_s$. \end{lem}
\begin{proof} Recall the following known result in measure theory \cite[Thm.~3.9]{MR0226684}. Let $X_1,X_2$ be two complete metric spaces and let $E_1\subset X_1$, $E_2\subset X_2$ be two subsets such that $E_1$ is Borel. If $\varphi: E_1\to X_2$ is a measurable one-to-one map such that $\varphi(E_1)=E_2$, then $E_2$ is a Borel subset of $X_2$. Using this argument, one shows the claimed result. Indeed, consider the two complete normed spaces $X_1=(\mathfrak X,||\cdot||_{\mathfrak X})$ and $X_2= (\mathscr{Z}_{-\sigma}, ||\cdot||_{\mathscr{Z}_{-\sigma}})$, and $\varphi$ given by
\[ \begin{array}{cccc} \varphi : & \mathfrak X & \longrightarrow & \mathscr{Z}_{-\sigma}\\ & (x,u) & \longrightarrow & x\,. \end{array} \]
Then, clearly $\varphi$ is a continuous map. Hence by Lemma \ref{mes-lem1}, its restriction $\varphi_{| \mathcal{F}_{t_0}}:\mathcal{F}_{t_0}\to \mathscr{Z}_{-\sigma}$ is a measurable map. Now, the uniqueness hypothesis for weak solutions of the initial value problem \eqref{IVP} shows that $\varphi_{| \mathcal{F}_{t_0}}$ is one-to-one.
Hence, the set $$ \mathcal{G}_{t_0}= \varphi_{|\mathcal{F}_{t_0}}(\mathcal{F}_{t_0}), $$ is Borel in $\mathscr{Z}_{-\sigma}$. Since $\mathcal{G}_{t_0}\subset \mathscr{Z}_{s}$, it is also a Borel subset of $\mathscr{Z}_{s}$. \end{proof}
\begin{prop} \label{flotpp} Assume the same assumptions as in Prop.~\ref{prob-rep} and suppose that uniqueness of weak solutions of the initial value problem \eqref{IVP} holds true. Then the map \begin{eqnarray*} \phi(t,t_0): \mathcal{G}_{t_0} &\to & \mathscr{Z}_s \\ x &\to & u(t) \end{eqnarray*} where $u(\cdot)$ is the unique curve in $ \mathfrak{L}^\infty(\bar I,\mathscr{Z}_{s})\cap W^{1,\infty} (I, \mathscr{Z}_{-\sigma}) $ satisfying $u(t)=x+ \int_{t_0}^t v(\tau,u(\tau)) \, d\tau$ for all $ t\in \bar I$, is a well-defined Borel map. \end{prop}
\begin{proof} Examining the proof of Lemma \ref{mes-lem2}, one notices that the inverse map
\[ \begin{array}{cccc} \varphi^{-1} : & \mathcal{G}_{t_0} & \longrightarrow & \mathcal{F}_{t_0}\\ & x & \longrightarrow & (x,u) \end{array} \]
is well defined and measurable. Moreover, the function $u$ such that $\varphi^{-1}(x)=(x,u)$ is the unique weak solution of the initial value problem \eqref{IVP} satisfying $u(t_0)=x$. The following composition
\[ \begin{array}{cccccc} \phi(t,t_0): & \mathcal{G}_{t_0} & \overset{\varphi^{-1}}{\longrightarrow} & \mathcal{F}_{t_0} &\overset{e_t}{\longrightarrow} & \mathscr{Z}_{-\sigma}\\ & x & \longrightarrow & (x,u) & \longrightarrow & u(t) \end{array} \]
yields a well-defined measurable map. Since $\phi(t,t_0)(\mathcal{G}_{t_0} )\subset \mathscr{Z}_{s}$ and $\mathscr{Z}_{s}$ is a Borel subset of $\mathscr{Z}_{-\sigma}$, the claimed statement is proved. \end{proof}
\subsection{Existence and uniqueness of solutions} \label{sub.sec.uniq-ex}
\begin{thm} \label{sec.0.thm3} Let ${v}: \mathbb{R} \times \mathscr{Z}_{s} \mapsto \mathscr{Z}_{-\sigma}$ be a vector field satisfying \eqref{A0}. Then uniqueness of weak solutions over a bounded open interval $I$ for the initial value problem \eqref{IVP} implies the uniqueness of solutions over $I$ of the Liouville equation \eqref{eq.transport} satisfying the assumption \eqref{A1}. \end{thm}
{\bf Proof of Thm.~\ref{sec.0.thm2} and \ref{sec.0.thm3}}: Thanks to the duality between hierarchy equations and Liouville equations, one only needs to prove Thm.~\ref{sec.0.thm3}. Indeed, assume the hypotheses of Thm.~\ref{sec.0.thm3} and suppose that we have two curves $t\in I\to \mu_t\in\mathfrak P(\mathscr{Z}_{s})$ and $t\in I\to \nu_t\in\mathfrak P(\mathscr{Z}_{s})$ both satisfying \eqref{A2} and such that $\mu_{t_0}=\nu_{t_0}$ for some $t_0\in I$. Then applying Prop.~\ref{prob-rep}, one gets the existence of two probability measures $\eta_1$ and $\eta_2$ on the space $\mathfrak X$, both satisfying \ref{concen-item1}-\ref{concen-item2}. Thus, for any bounded Borel function $f:\mathscr{Z}_{s} \to \mathbb{R}$, we have
$$ \int_{\mathscr{Z}_{s}} f(x) \, d\mu_t(x)= \int_{\mathfrak X} f(u(t)) \, d\eta_1(x,u) = \int_{\mathcal F_{t_0}} f(\phi(t,t_0)(x)) \, d\eta_{1}(x,u)= \int_{\mathcal G_{t_0}} f\circ\phi(t,t_0)(x) \, d\mu_{t_0}(x)\,. $$
Recall that $\mathcal F_{t_0}$, $\mathcal G_{t_0}$ and $\phi(\cdot,\cdot)$ are given respectively in Lemma \ref{mes-lem1}, Lemma \ref{mes-lem2} and Proposition \ref{flotpp}. Moreover, in the last identities we have used the fact that $(e_t)_{\sharp}\eta_1=\mu_t$, the concentration property $\eta_1(\mathcal F_{t_0})=1$ in Lemma \ref{mes-lem1} and the measurability of the map $\phi(t,t_0)$ in Proposition \ref{flotpp}.
So, one concludes that for any $t\in I$,
$$ \mu_t= \phi(t,t_0)_{\sharp}\mu_{t_0}\,. $$
Repeating the same argument for $\nu_t$ yields the same result, so that for any $t\in I$,
$$ \mu_t= \phi(t,t_0)_{\sharp}\mu_{t_0}=\phi(t,t_0)_{\sharp}\nu_{t_0}=\nu_t\,. $$
$\hfill\square$ \bigskip \noindent We now state an existence result for solutions of the Liouville equation \eqref{eq.transport}.
\begin{prop} \label{ext-1} Let $v:\mathbb{R}\times \mathscr Z_s\to \mathscr Z_{-\sigma}$ be a Borel vector field which is bounded on bounded sets and let $I$ be a bounded open interval with $t_0\in I$ a fixed initial time. Assume that there exists a Borel set $\mathcal{A}$ of $\mathscr{Z}_{s}$ and a Borel map $\phi:\bar I\times \mathcal{A}\to \mathscr{Z}_{s}$ which is bounded on bounded sets and such that for any $x\in \mathcal{A}$ the curve $t\in \bar I \to \phi(t,x)$ is a weak solution of the initial value problem \eqref{IVP} satisfying $ \phi(t_0,x)=x$. Then for any Borel probability measure $\nu\in\mathfrak{P}(\mathscr{Z}_{s})$, such that $\nu$ is concentrated on $ \mathcal{A}$ and on a bounded subset of $\mathscr{Z}_{s}$, there exists a solution $t\in I\to \mu_t$ to the Liouville equation \eqref{int-liouville} given by
\begin{equation} \label{mes-propag} \mu_t=\phi(t,\cdot)_{\sharp}\nu\,, \end{equation}
satisfying $\mu_{t_0}=\nu$. Furthermore, $t\in I\to \mu_t$ is strongly narrowly continuous in $\mathfrak{P}(\mathscr{Z}_{-\sigma})$. \end{prop}
\begin{proof} Since the map $\phi$ is Borel, $\phi(t, \cdot): \mathcal{A}\to \mathscr{Z}_{s}$ is also Borel. So, one can define $\mu_t$ according to \eqref{mes-propag} as a Borel probability measure on $\mathscr{Z}_{s}$ or $\mathscr{Z}_{-\sigma}$. Moreover, for any bounded continuous function $f:\mathscr{Z}_{-\sigma}\to \mathbb{R}$, one easily checks that
$$ t\in I\longrightarrow \int_{\mathscr{Z}_{-\sigma}} f(x) \,d\mu_t= \int_{\mathcal{A}} f(\phi(t,x)) \,d\nu\,, $$
is continuous. So, the curve $t\in I\to \mu_t$ is strongly (weakly) narrowly continuous in $\mathfrak{P}(\mathscr{Z}_{-\sigma})$ and satisfies $\mu_{t_0}=\nu$. It remains to prove that $t\in I\to \mu_t$ is a solution of the Liouville equation \eqref{int-liouville}. Let $\psi\in \mathscr{C}_0^\infty(\mathbb{R}^n)$ and let $(e_1,\cdots,e_n)$ be an orthonormal family in $\mathscr{Z}_{-\sigma}$. Here $\mathscr{Z}_{-\sigma} \equiv \mathscr{Z}_{-\sigma,\mathbb{R}}$ is considered as a real Hilbert space. We use the notation in \eqref{eq.pi},
\begin{equation*} \pi(x)=(\langle x, e_1\rangle_{\mathscr Z_{-\sigma},\mathbb{R}}, \cdots, \langle x, e_n\rangle_{\mathscr Z_{-\sigma},\mathbb{R}})\,, \end{equation*}
so that $\varphi(x)=\psi(\pi(x))\in\mathscr{C}_{0,cyl}^\infty(\mathscr{Z}_{-\sigma})$. Then a simple computation yields
\begin{eqnarray*} \frac{d}{dt} \int_{\mathscr{Z}_{s}} \varphi(x) \, d\mu_t(x) &=& \int_{\mathcal{A}} \sum_{j=1}^n \frac{d}{dt} \langle \phi(t,x), e_j\rangle_{\mathscr{Z}_{-\sigma},\mathbb{R}} \; \partial^j \psi\big(\langle \phi(t,x), e_1\rangle_{\mathscr{Z}_{-\sigma},\mathbb{R}},\cdots, \langle \phi(t,x), e_n\rangle_{\mathscr{Z}_{-\sigma},\mathbb{R}} \big) \, d\nu \\ &=& \int_{\mathscr{Z}_{s}} \langle v(t,y), \nabla\varphi(y)\rangle_{\mathscr{Z}_{-\sigma},\mathbb{R}} \, d\mu_t(y)\,. \end{eqnarray*}
The last equalities follow using two arguments: First, $t\to \phi(t,x)$ is a weak solution of the initial value problem \eqref{IVP} for any $x\in\mathcal{A}$ and it is an absolutely continuous curve in $\mathscr{Z}_{-\sigma}$.
Second, $\nu$ is concentrated on a bounded set of $\mathscr{Z}_{s}$ and $\phi$ and $v$ are bounded on bounded sets. So, one can use dominated convergence in order to interchange the time derivative and the integration. A standard density argument gives the Liouville equation \eqref{int-liouville} with the measures $(\mu_t)_{t\in I}$. \end{proof}
{\bf Proof of Thm.~\ref{ext-2}}: Again we use the duality between hierarchy equations and Liouville equations proved in Prop.~\ref{thm.main} and \ref{thm.main2}. Recall that we have also proved a duality between the assumptions \eqref{A2} and \eqref{A1} in Lemma \ref{a1a2}. So, for any $\gamma\in \mathscr{H}(\mathscr{Z}_{0})$ satisfying the hypothesis of Thm.~\ref{ext-2} there exists, by Prop.~\ref{str.BEh}, a probability measure $\nu\in\mathscr{M}(\mathscr{Z}_{0})$ such that
\begin{equation*} \gamma^{(k)}=\int_{\mathscr{Z}_{s}} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\nu(\varphi)\,, \quad\forall k\in\mathbb{N}\,. \end{equation*}
Moreover, according to Lemma \ref{concent} the measure $\nu$ is concentrated on a bounded subset of $\mathscr{Z}_{s}$. Applying Prop.~\ref{ext-1}, there exists a solution $t \in I \to \mu_t\in\mathfrak{P}(\mathscr{Z}_{s})$ of the Liouville equation \eqref{eq.transport} satisfying the initial condition $\mu_{t_0}=\nu$. Notice also that, since the map $\phi$ maps bounded sets of $\mathscr{Z}_{s}$ into bounded sets, $\mu_t$ concentrates on a ball $B_{\mathscr{Z}_{s}}(0,R)$ for all $t\in I$. Setting, for any $k\in\mathbb{N}$,
$$ \gamma_t^{(k)}=\int_{\mathscr{Z}_{s}} |\varphi^{\otimes k} \rangle \langle \varphi^{\otimes k} | \;d\mu_t(\varphi)\,, $$
one easily checks that $t \in I \to \gamma_t=(\gamma_t^{(k)})_{k\in\mathbb{N}}$ satisfies \eqref{A2} and solves the hierarchy equation \eqref{int-hier} according to Prop.~\ref{thm.main}, since $t \in I \to \mu_t$ solves the Liouville equation. $\hfill\square$ \medskip
{\bf Proof of Prop.~\ref{uniq-unc1} and \ref{uniq-unc2}}: We cannot directly use Thm.~\ref{sec.0.thm2} because the results in Prop.~\ref{uniq-unc1} and \ref{uniq-unc2} are of conditional type. However, the proof is quite similar: it also uses Proposition \ref{prob-rep} and follows the same scheme as in Thm.~\ref{sec.0.thm2}. So, we just indicate the main point of the proof. Notice that the vector field $v$ given in \eqref{vect-field} may not be bounded on bounded sets of $L^2(\mathbb{R}^d)$. Nevertheless, Proposition \ref{prob-rep} (or \cite[Prop.~4.1]{MR3721874}) still holds true under the following weaker assumption,
$$ \int_{I} \int_{L^2} || v(t,x) ||_{H^{-1}(\mathbb{R}^d)} \;d\mu_t(x) \,dt <\infty\,. $$
The above inequality can be checked by simply using \eqref{unc.eq.1} and \eqref{cond-Stri}. Hence, there exists a probability measure $\eta$ over the space $\mathfrak{X}$, defined as in \eqref{eq.X}, which concentrates on the solutions $u\in L^\infty(I, L^2(\mathbb{R}^d))\cap W^{1,1}(I, H^{-1}(\mathbb{R}^d))$ of the NLS equation \eqref{eq.ivp} written in the interaction representation, i.e.,
$$ u(t)=x+\int_{0}^t v(\tau,u(\tau)) \, d\tau\,, $$
where $v$ is the Borel vector field given in \eqref{vect-field}. The additional assertion $u\in L^\infty(I, L^2(\mathbb{R}^d))$ is deduced from the assumption \eqref{A1} with $s=0$ and $A=-\Delta+1$ satisfied by the solution $(\mu_t)_{t\in I}$ of the corresponding Liouville equation.
Moreover, the requirement \eqref{cond-Stri} together with Prop.~\ref{prob-rep}-\ref{concen-item2} yields
$$ \int_{I} \int_{L^2} ||\,\mathcal{U}(t) \,x||_{L^r}^q\; d\mu_t(x)\,dt =\int_{\mathfrak{X}} \int_I ||\,\mathcal{U}(t) \,u(t)||_{L^r}^q \,dt\,d\eta(x,u)<\infty\,. $$
Hence, one notices that $\mathcal{U}(\cdot)u(\cdot) \in L^q(I, L^r(\mathbb{R}^d))$ for $\eta$-a.e.~$(x,u)\in \mathfrak{X}$. Applying Strichartz's estimate to the Duhamel formula
\begin{equation} \label{pro.duha} \mathcal{U}(t)u(t)=\mathcal{U}(t)x-i\int_{0}^t \mathcal{U}(t-\tau) \,G(\mathcal{U}(\tau) u(\tau)) \, d\tau\,, \end{equation}
one concludes that $\mathcal{U}(\cdot)u(\cdot)\in \mathscr{C}(\bar I, L^2(\mathbb{R}^d))\cap L^q(I, L^r(\mathbb{R}^d))$ for $\eta$-a.e.~$(x,u)\in \mathfrak{X}$. So, using the result of Tsutsumi \cite{MR915266}, or more precisely \cite[Theorem 4.6.1]{MR2002047}, one can complete the proof as in Thm.~\ref{sec.0.thm3} and obtain the uniqueness for the Liouville solutions satisfying \eqref{A1} and \eqref{cond-Stri}. Consequently, the duality result of Thm.~\ref{sec.0.thm2} gives the uniqueness for the corresponding hierarchy equation. $\hfill\square$
\section{Introduction} Lepton flavour is an accidental symmetry of the Standard Model (SM), and there are many extensions of the SM, like the seesaw models, supersymmetric SM, flavour-changing $Z'$ or scalars, leptoquarks, or left-right symmetric models, that can naturally break this symmetry. Even within the ambit of the SM, neutrino mixing provides a source of leptonic flavour violation (LFV), but the rates are too small to be observed in the near future \cite{LFVrates}. Thus, observation of any LFV decay is a smoking gun signal of New Physics (NP). In general, four types of LFV processes have been searched for: (i) leptonic decays ($\tau\to 3e$, $\tau\to 3\mu$, $\mu\to 3e$, $\tau\to 1e+2\mu$, $\tau\to 1\mu+2e$), (ii) radiative decays ($\tau\to e\gamma$, $\tau\to\mu\gamma$, $\mu\to e\gamma$), (iii) semileptonic decays ($\ell_1\to \ell_2M$, where $M$ is some meson), and (iv) conversion (like $\mu \to e$). They are not all independent, {\em e.g.}, a flavour-changing electromagnetic penguin can also give rise to leptonic LFV decays. Most of the decays, however, have very stringent limits \cite{LFVrates}, the branching ratios (BR) being typically of the order of $10^{-8}$ or even smaller. The interest in such LFV decays has recently been rekindled by the observation that some of the semileptonic $B$ decay modes show anomalous deviations from the SM expectations, which may possibly be explained by lepton flavour non-universality (LFNU) as well as LFV. As an example, let us refer the reader to a recent attempt in Refs.\ \cite{CKMS,CKMS2}, where the authors have shown that both the $R_K, R_{K^*}$ and $R(D), R(D^*)$ anomalies can be explained satisfactorily with only two new operators, if the weak and mass bases of the charged leptons $\{\mu,\tau\}$ are related by a field rotation \cite{GGL}. The apparent excess in the LFV decay channel of the Higgs boson, $h\to\mu\tau$, as was once reported by the CMS Collaboration \cite{1502.07400}, coupled with such $B$ decay anomalies, could lead to some well-motivated and fairly constrained models of LFV \cite{Choudhury:2016ulr}. The LFV operators with only leptonic fields can also be induced by Renormalisation Group (RG) running of semileptonic operators \cite{feruglio}. Thus, there is enough motivation to seriously look into such LFV channels; they might be observed at the Large Hadron Collider (LHC) \cite{lhc-mutau}, or at dedicated super-B factories like Belle-II. Implications of such LFV decays may also be found in Ref.\ \cite{request}. In this paper, we will focus solely on the leptonic channel $\tau\to 3\mu$. The channel has already been studied in detail in the literature; there are both model-independent \cite{0711.0792,1506.07786,1511.07434} as well as partially model-dependent \cite{0802.0049,1701.00870} studies. The Belle Collaboration has set an upper bound on the BR \cite{1001.3221}, \begin{equation} {\rm BR}(\tau\to 3\mu) < 2.1\times 10^{-8} \label{e:bound} \end{equation} at 90\% confidence level (CL). One reason for looking at this particular channel is the possibility of leptonic rotation in the $\{\mu,\tau\}$ sector as mentioned above, which invariably leads to such LFV channels out of lepton flavour conserving operators in the weak basis. Another reason, of course, is the relative ease with which the final state muons can be detected in both hadronic and $e^+e^-$ colliders. Here we would like to push the studies on LFV in $\tau\to 3\mu$ further by asking and answering a few questions.
Observation of even a single $\tau\to 3\mu$ event is a definite signal for NP. Assuming that one observes, possibly at a super-B factory like Belle-II, a few events for the LFV decay in question, will one be able to unearth the nature of the possible operators that can lead to such a decay? It has been shown \cite{0711.0792,0802.0049} that there can be six independent LFV operators in the chiral basis that lead to $\tau\to 3\mu$. If the final state muon polarisations are not measured, all the operators are {\em a priori} equally probable, and obviously the number of events alone will not tell us anything about the presence or absence of any of these operators; it can only yield some estimate of the respective Wilson coefficients (WC). So, are there observables which will help us to differentiate between these operators? We will show that this is indeed possible, without using higher-order differential cross-sections as in Ref.\ \cite{0711.0792}, which may have very few events in each bin. At this point, we also note that some more operators can be generated through Fierz reordering, but obviously they are not independent of the first six, and therefore we will not consider them any further. The second question that we would like to ask is whether the existence of more than one such operator can be disentangled from the data. Here the answer will be partially positive, unless, again, the muon polarisations are measured. If one can have a sizeable number of events, and measure the muon polarisations too, one may in principle have further observables, but we would like to be conservative. Anyway, as we will show, one does not expect more than 70 events or so with an integrated luminosity ${\cal L}_{\rm int} = 50$ ab$^{-1}$ at Belle-II. We will use the method of Optimal Observables (OO), which has already been used in different areas of particle physics \cite{gunion,atwood}, and in particular, for flavour physics \cite{S3,zc}. This method quantifies the significance level (``how many sigma'' in standard parlance) with which one point in the allowed parameter space can be separated from another point. This is the only way to approach the question of model differentiation before the arrival of the data. Once one has the data, other methods, like the unbinned multivariate maximum likelihood, may be employed. A related question is the number of events with which one can have a successful differentiation among models, where a model is specified by its operator structure and WCs. As expected, if the number of events is too small, it will be harder to differentiate among various models, or in other words, the significance level will be lower. We will quantify this statement subsequently. The paper is arranged as follows. In Section 2, we list all the possible NP operators that can give rise to the $\tau\to 3\mu$ decay, and the observables that we deal with are discussed in Section 3. In Section 4, we show the differentiation among models with only one NP operator. Section 5 deals with models with two such NP operators, and we discuss how well the presence of the second operator can be inferred from various observables. Section 6 summarizes and concludes the paper. \section{The New Physics Operators} For this section, we will follow the notation and convention of Ref.\ \cite{0802.0049}.
The most general LFV Lagrangian can be written as \begin{eqnarray} {\cal L} &=& \frac{1}{\Lambda^2} \bigg[g^S_{LL} (\overline\mu_L\mu_R)(\overline\mu_R\tau_L) + g^S_{LR} (\overline\mu_L\mu_R)(\overline\mu_L\tau_R) + g^S_{RL} (\overline\mu_R\mu_L)(\overline\mu_R\tau_L) + g^S_{RR} (\overline\mu_R\mu_L)(\overline\mu_L\tau_R) \nonumber\\ && \left. + g^V_{LL} (\overline\mu_R\gamma^\alpha\mu_R)(\overline\mu_L\gamma_\alpha\tau_L) + g^V_{LR} (\overline\mu_R\gamma^\alpha\mu_R)(\overline\mu_R\gamma_\alpha\tau_R) \right.\nonumber\\ && \left. + g^V_{RL} (\overline\mu_L\gamma^\alpha\mu_L)(\overline\mu_L\gamma_\alpha\tau_L) + g^V_{RR} (\overline\mu_L\gamma^\alpha\mu_L)(\overline\mu_R\gamma_\alpha\tau_R) \right.\nonumber\\ && + \frac12 g^T_{LR} (\overline\mu_L\sigma^{\alpha\beta}\mu_R)(\overline\mu_L\sigma_{\alpha\beta}\tau_R) + \frac12 g^T_{RL} (\overline\mu_R\sigma^{\alpha\beta}\mu_L)(\overline\mu_R\sigma_{\alpha\beta}\tau_L) \bigg]\,, \label{lfvlag} \end{eqnarray} and we will denote the operator accompanying $g^X_{IJ}$ ($X=S,V,T$, and $I,J=L,R$) as $O^X_{IJ}$. $\Lambda$ is the cutoff scale, which we will set at 5 TeV for our analysis. We separate the operators into three major classes: ${\sf S}$ (operators of the form $O^S_{IJ}$), ${\sf V}$ (the $O^V_{IJ}$ operators) and ${\sf T}$ (the tensor operators $O^T_{IJ}$). Thus, the effective Lagrangian is of the form \begin{equation} {\cal L} = \frac{1}{\Lambda^2} \left[ \sum_{I,J=L,R} \left(g^S_{IJ} O^S_{IJ} + g^V_{IJ} O^V_{IJ}\right) + \sum_{I\not=J} g^T_{IJ} O^T_{IJ}\right]\,. \label{e:lag-conc} \end{equation} In the above mentioned basis, not all the ten operators are independent; Fierz transformation relates the two tensor operators with the rest, and the pairs $O^S_{LL}$-$O^V_{LL}$ and $O^S_{RR}$-$O^V_{RR}$ are also related \cite{0711.0792}. Thus, only four scalar and two vector operators are enough to span the operator basis. However, we keep all of them for the time being, as the mediator that has been integrated out may give rise to operators that are linear combinations of the six independent ones, like the tensor operators that can be generated from some hypothetical spin-2 mediators. Writing the BR in terms of the new WCs \cite{0802.0049}, one may easily show that the decay $\tau \to 3\mu$ has maximal sensitivity to {\sf V} or {\sf T} operators. As an example, the present bound on ${\rm BR}(\tau\to 3\mu) < 2.1\times 10^{-8}$ translates to $|g_{RL}^S| \approx 1$, while $|g_{RL}^T|, |g_{RL}^V| \sim {\cal O}(0.1)$. Thus, for a given number of events, the reach for {\sf V} or {\sf T} WCs is better than that for the {\sf S} ones. The left-chiral fields being $SU(2)$ doublets, one can also get a neutrino-antineutrino pair out of the operators $O^S_{RR}$ and $O^V_{RR}$, which technically gives an extra contribution to the SM $\tau$ decay channel $\tau\to \mu\overline\nu_\mu \nu_\tau$. However, the couplings will turn out to be so constrained as not to affect this channel in any significant amount. Similar LFV operators for $\mu\to 3e$ may affect the extraction of the Fermi coupling $G_F$ from the muon lifetime in a measurable way. One may try to look for $\tau\to\mu\gamma$ by contracting a pair of muons and taking the photon with momentum $q^\mu$ out from the loop. For the scalar operators, the contribution vanishes in the $q^2=0$ limit, and for the vector operators, this amounts to charge and not the transition magnetic moment renormalisation. To begin with, we will consider the presence of one operator at a time. 
This generates six independent models spanning the ${\sf S}$ and ${\sf V}$ classes. Next, we will consider the presence of two operators at a time, which includes well-motivated combinations like \begin{eqnarray} O^S_{9L} = O^S_{LL} + O^S_{RL}\,, && O^S_{10L} = O^S_{LL} - O^S_{RL}\,, \nonumber\\ O^S_{9R} = O^S_{LR} + O^S_{RR}\,, && O^S_{10R} = O^S_{LR} - O^S_{RR}\,, \end{eqnarray} and similarly for the ${\sf V}$ class. Our goal will be to pinpoint whether or not these two-coupling scenarios can be differentiated from those involving only one coupling at a time. \section{Observables} In this section, we will define the observables which we have used in our analysis to differentiate the effects of different NP operators. As an example, we consider the ${\sf S}$ class of operators, $O^S_{IJ}$, taken from Eq.\ (\ref{e:lag-conc}), and consider the decay $\tau^-\to \mu^-\mu^+\mu^-$. The double differential decay distribution for the antimuon is given, after integrating over the phase space of the two muons, by \begin{align} \frac{d B_\tau}{dx\, d(\cos\theta)} &= \frac{T_\tau\, m_\tau^5}{ 128 \times 48\pi^3 \Lambda^4}\left[ 3x^2 (|g_{RL}^S|^2 +|g_{LL}^S|^2+|g_{RR}^S|^2+|g_{LR}^S|^2)\right.\nonumber\\ &\left.-x^3 (3|g_{RL}^S|^2+2|g_{LL}^S|^2+2|g_{RR}^S|^2 + 3 |g_{LR}^S|^2)\right. \nonumber \\ &\left.+\,x^2 \cos\theta\,(3 |g_{RL}^S|^2 + |g_{LL}^S|^2 - |g_{RR}^S|^2 - 3|g_{LR}^S|^2 )\right.\nonumber\\ &\left.- x^3 \cos\theta\,(3|g_{RL}^S|^2+ 2|g_{LL}^S|^2- 2|g_{RR}^S|^2- 3 |g_{LR}^S|^2)\right]\,, \label{e:dgdx} \end{align} where we take the muons to be massless, and use the notation \begin{equation} B_\tau \equiv {\rm BR} (\tau\to 3\mu)\,. \end{equation} Here, $T_\tau = 1/\Gamma$ is the lifetime of the $\tau$ lepton, $x = 2E_{\overline\mu}/m_\tau$ is the reduced energy of the antimuon, and $\theta$ is the angle between the polarization of the $\tau$ and the momentum of the antimuon, following the convention of Ref.\ \cite{1506.07786}. For further discussion, let us define \begin{eqnarray} g_1 &\equiv& |g_{RL}^S|^2 +|g_{LL}^S|^2+|g_{RR}^S|^2+|g_{LR}^S|^2\,,\nonumber\\ g_2 &\equiv& 3|g_{RL}^S|^2+2|g_{LL}^S|^2+2|g_{RR}^S|^2 + 3 |g_{LR}^S|^2\,,\nonumber\\ g_3 &\equiv& 3 |g_{RL}^S|^2 + |g_{LL}^S|^2 - |g_{RR}^S|^2 - 3|g_{LR}^S|^2\,, \nonumber\\ g_4 &\equiv& 3|g_{RL}^S|^2+ 2|g_{LL}^S|^2- 2|g_{RR}^S|^2- 3 |g_{LR}^S|^2\,. \end{eqnarray} Thus \begin{equation} B_\tau = \frac{T_\tau\, m_\tau^5}{ 128 \times 24\pi^3 \Lambda^4}\left[g_1 - \frac14 g_2\right]\,. \label{brscalar} \end{equation} The number of events gives information only on the combination $g_1-\frac14 g_2$. Another observable is the integrated forward-backward asymmetry $A_{FB}$, defined as \begin{equation} A_{FB} = \frac{N_F - N_B}{N_F + N_B}\,, \label{e:afb} \end{equation} with \begin{eqnarray} N_F &=& \sigma_{\rm Prod} {\cal L}_{\rm int}\epsilon\, \int_{0}^{1}\, dx\, \int_{0}^{1}\, d(\cos\theta) \frac{dB_\tau}{dx\, d(\cos\theta)}\,,\nonumber\\ N_B &=& \sigma_{\rm Prod} {\cal L}_{\rm int} \epsilon\, \int_{0}^{1}\, dx\, \int_{-1}^{0}\, d(\cos\theta) \frac{dB_\tau}{dx\, d(\cos\theta)}\,, \end{eqnarray} where $\sigma_{\rm Prod}$ is the $\tau$ production cross-section, ${\cal L}_{\rm int}$ is the integrated luminosity, and $\epsilon$ is the combined detection efficiency in the $\tau\to 3\mu$ channel.
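As a quick orientation, consider the worked single-coupling case where only $g^S_{RL}$ is nonzero, so that $g_1 = |g^S_{RL}|^2$ and $g_2 = g_3 = g_4 = 3|g^S_{RL}|^2$. Integrating Eq.\ (\ref{e:dgdx}) over $x$ and $\cos\theta$ in Eqs.\ (\ref{brscalar}) and (\ref{e:afb}) then gives
\begin{equation}
B_\tau = \frac{T_\tau\, m_\tau^5}{128\times 96 \pi^3 \Lambda^4}\, |g^S_{RL}|^2 \approx 3.3\times 10^{-8}\, |g^S_{RL}|^2\,, \qquad
A_{FB} = \frac{\frac13\, g_3 - \frac14\, g_4}{2\left(g_1 - \frac14\, g_2\right)} = \frac12\,,
\end{equation}
where the approximate numerical value assumes $\Lambda = 5$ TeV and the $m_\tau$ and $T_\tau$ inputs listed later in the text.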
We will also define the $x$-dependent asymmetry, normalised to the total number of events, as \begin{equation} \frac{dA_{FB}}{dx} \equiv A'_{FB}(x) = \sigma_{\rm Prod} {\cal L}_{\rm int} \epsilon\, \frac{\int_{0}^{1}\, d(\cos\theta)\, \frac{dB_\tau}{dx\, d(\cos\theta)} - \int_{-1}^{0}\, d(\cos\theta)\, \frac{dB_\tau}{dx\, d(\cos\theta)}}{N_B + N_F} \equiv \frac{N_F(x) - N_B(x)}{N}\,, \label{afbx1} \end{equation} where $N=N_F + N_B$ gives the total number of signal events. Instead of the antimuon, one can play an identical game with one of the same-sign muons ({\em i.e.}, the one with the same sign as the decaying $\tau$), say the more energetic of the two. Let us define \begin{eqnarray} g'_1 &\equiv& |g_{RL}^S|^2 + 3 |g_{LL}^S|^2+ 3 |g_{RR}^S|^2+ |g_{LR}^S|^2\,,\nonumber\\ g'_2 &\equiv& |g_{RL}^S|^2+ 4 |g_{LL}^S|^2+ 4 |g_{RR}^S|^2 + |g_{LR}^S|^2\,,\nonumber\\ g'_3 &\equiv& |g_{RL}^S|^2 + 7 |g_{LL}^S|^2 - 7 |g_{RR}^S|^2 - |g_{LR}^S|^2\,, \nonumber\\ g'_4 &\equiv& |g_{RL}^S|^2+ 4 |g_{LL}^S|^2- 4 |g_{RR}^S|^2- |g_{LR}^S|^2\,, \end{eqnarray} so that \begin{equation} \frac{dB_\tau}{dy\, d(\cos\alpha)} = \frac{T_\tau\, m_\tau^5}{128 \times 96\pi^3 \Lambda^4}\left[ 3y^2 g'_1 - 2y^3 g'_2 + y^2 \cos\alpha\, g'_3 -2y^3\cos\alpha\, g'_4\right]\,, \label{e:dgdy} \end{equation} where $y = 2E_{\mu}/m_\tau$ is the reduced energy of the more energetic same-sign muon, and $\alpha$ is the angle between its direction and the polarisation of the $\tau$ lepton. In an analogous way to Eq.\ (\ref{afbx1}), one can define \begin{equation} {\cal A}'_{FB}(y) = \frac{N_F(y)-N_B(y)}{N}= \sigma_{\rm Prod} {\cal L}_{\rm int} \epsilon\, \frac{\int_{0}^{1}\, d(\cos\alpha)\, \frac{dB_\tau}{dy\, d(\cos\alpha)} - \int_{-1}^{0}\, d(\cos\alpha)\, \frac{dB_\tau}{dy\, d(\cos\alpha)}}{N}. \label{afby1} \end{equation} As will be shown later, the observables $A_{FB}^{\prime}$ and ${\cal A}_{FB}^{\prime}$ are useful for differentiating among the subtypes of operators within a particular type, say {\sf S} or {\sf V}. Similarly, for the {\sf V} class of models, one obtains, in an analogous way to Eq.\ (\ref{e:dgdx}), \begin{align} \frac{d B_\tau}{dx\, d(\cos\theta)} &= \frac{T_\tau\, m_\tau^5}{ 128 \times 12\pi^3 \Lambda^4}\left[ 3x^2 (4|g_{RL}^V|^2 +|g_{LL}^V|^2+|g_{RR}^V|^2+4|g_{LR}^V|^2)\right.\nonumber\\ &\left.-2x^3 (6|g_{RL}^V|^2+|g_{LL}^V|^2+|g_{RR}^V|^2 + 6 |g_{LR}^V|^2)\right. \nonumber \\ &\left.+ x^2 \cos\theta(12 |g_{RL}^V|^2 + |g_{LL}^V|^2 - |g_{RR}^V|^2 - 12|g_{LR}^V|^2 )\right.\nonumber\\ &\left.- 2x^3 \cos\theta\,(6|g_{RL}^V|^2+ |g_{LL}^V|^2- |g_{RR}^V|^2- 6 |g_{LR}^V|^2)\right]\,. \label{e:dgdxv} \end{align} The corresponding BR is \begin{equation} B_\tau = \frac{T_\tau\, m_\tau^5}{ 128 \times 12\pi^3 \Lambda^4}\left[2 g_1 - g_2\right]\,, \label{brvector} \end{equation} where, \begin{eqnarray} g_1 &\equiv& 4|g_{RL}^V|^2 +|g_{LL}^V|^2+|g_{RR}^V|^2+4|g_{LR}^V|^2\,,\nonumber\\ g_2 &\equiv& 6|g_{RL}^V|^2+|g_{LL}^V|^2+|g_{RR}^V|^2 + 6 |g_{LR}^V|^2\,,\nonumber\\ g_3 &\equiv& 12 |g_{RL}^V|^2 + |g_{LL}^V|^2 - |g_{RR}^V|^2 - 12|g_{LR}^V|^2\,, \nonumber\\ g_4 &\equiv& 6|g_{RL}^V|^2+ |g_{LL}^V|^2- |g_{RR}^V|^2- 6 |g_{LR}^V|^2\,. \end{eqnarray} From Eqs.\ (\ref{e:dgdx}), (\ref{brscalar}), (\ref{e:dgdxv}), and (\ref{brvector}), one finds that {\sf V}-type operators generate more events than {\sf S}-type operators, if the orders of magnitude of their WCs are similar. The number of events as well as the angular distribution depend on the model subtype. We refer the reader to Fig.\ \ref{f:brvswc}, which shows this explicitly.
\begin{figure}[htbp] \centering \subfloat[\label{brvsgrlzoomed}] {\includegraphics[height=5.5cm]{Fig1a.png}} \subfloat[\label{brvsgrrzoomed}] {\includegraphics[height=5.5cm]{Fig1b.png}} \caption{ (a) Variation of ${\rm BR}(\tau \to 3\mu)$ with the WCs $g_{RL}^I$ ($I = S,V,T$). (b) The same for $g_{RR}^I$ ($I = S,V$). The horizontal line shows the present limit. The results for $g^I_{LR}$ are identical to those for $g^I_{RL}$, and the results for $g^I_{LL}$ are identical to those for $g^I_{RR}$.} \label{f:brvswc} \end{figure} For the same-sign muon, one gets \begin{equation} \frac{dB_\tau}{dy\, d(\cos\alpha)} = \frac{T_\tau\, m_\tau^5}{128 \times 24\pi^3 \Lambda^4}\left[ 3y^2 g'_1 - 8y^3 g'_2 + y^2 \cos\alpha\, g'_3 - 8y^3\cos\alpha\, g'_4\right]\,, \end{equation} with \begin{eqnarray} g'_1 &\equiv& 4|g_{RL}^V|^2 + 3|g_{LL}^V|^2+ 3|g_{RR}^V|^2+ 4|g_{LR}^V|^2\,,\nonumber\\ g'_2 &\equiv& |g_{RL}^V|^2+ |g_{LL}^V|^2 + |g_{RR}^V|^2 + |g_{LR}^V|^2\,,\nonumber\\ g'_3 &\equiv& 4 |g_{RL}^V|^2 + 7|g_{LL}^V|^2 -7 |g_{RR}^V|^2 - 4|g_{LR}^V|^2\,, \nonumber\\ g'_4 &\equiv& |g_{RL}^V|^2+ |g_{LL}^V|^2- |g_{RR}^V|^2- |g_{LR}^V|^2\,. \end{eqnarray} In an analogous way to Eqs.\ (\ref{afbx1}) and (\ref{afby1}), one can define $A'_{FB}(x)$ and ${\cal A}'_{FB}(y)$. While we will not discuss the {\sf T}-type operators separately, their double differential decay distribution is given by \begin{equation} \frac{d B_\tau}{dx\, d(\cos\theta)} = \frac{T_\tau\, m_\tau^5}{ 128 \times 4\pi^3 \Lambda^4} \, 9 x^2(1-x)\, \left[g_1 + g_2 \cos\theta\right]\,, \end{equation} where \begin{equation} g_1 \equiv |g_{RL}^T|^2+ |g_{LR}^T|^2\,,\ \ g_2 \equiv |g_{RL}^T|^2- |g_{LR}^T|^2\,. \end{equation} Thus \begin{equation} B_\tau = \frac{3 T_\tau\, m_\tau^5}{ 128 \times 8\pi^3 \Lambda^4}\, g_1\,. \end{equation} \section{Analysis} In this section, we discuss the current and future sensitivities of the observables $B_{\tau}$, $A'_{FB}$, and ${\cal A}'_{FB}$ to the {\sf S}, {\sf V}, and {\sf T} operators. The next subsection deals with the simplified cases where only one operator is considered to be present at a time. While this is instructive and sheds a lot of light on the differentiating power of the observables, a more realistic scenario might involve more than one operator. Thus, in the next section, we discuss the cases where two operators are simultaneously present, and try to see whether those two-operator models can be separated from the single-operator ones. \subsection{One Operator Models} Let us assume, to start with, that only one out of the ten possible operators shown in Eq.\ (\ref{lfvlag}) is present, notwithstanding the fact that not all of them are mutually independent, some being related to the others through Fierz rearrangement. In Fig.\ \ref{f:brvswc}, we show how the branching ratio $B_{\tau}$ depends on the WCs $g_{RL}^{S,V,T}$ and $g_{RR}^{S,V}$. Identical plots are obtained if one replaces $LR$ with $RL$, and $RR$ with $LL$. This $R\leftrightarrow L$ symmetry is true for all subsequent observables and their distributions, which reduces the number of independent cases worth discussing by a factor of 2. Given the combination $IJ$, if theory tells us the approximate magnitude of the WC $g^X_{IJ}$, even with the number of events as the sole observable, one can almost immediately differentiate the $X=S$ case from $X=V$ or $T$. With higher statistics, even a differentiation between {\sf V} and {\sf T} may be possible.
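The translation of the bound in Eq.\ (\ref{e:bound}) into limits on the individual WCs is simple arithmetic. The following minimal Python sketch (purely illustrative; it uses the $m_\tau$, $T_\tau$, and $\Lambda$ inputs quoted in the text and the single-coupling reductions of the $B_\tau$ expressions given above) reproduces, to within rounding, the limits quoted below:
\begin{verbatim}
import numpy as np

# Numerical inputs quoted in the text (natural units, GeV).
m_tau = 1.78                      # tau mass
T_tau = 290.3e-15 / 6.582e-25     # tau lifetime in GeV^{-1} (hbar = 6.582e-25 GeV s)
Lam   = 5000.0                    # cutoff Lambda = 5 TeV
BR_UL = 2.1e-8                    # current 90% CL limit on BR(tau -> 3 mu)

pref = T_tau * m_tau**5 / Lam**4  # common dimensionful prefactor

# B_tau = pref * c * |g|^2 for a single nonzero coupling; the coefficients c
# follow from the B_tau expressions of the S, V and T classes given above.
# By the L <-> R symmetry, the LR (LL) cases coincide with RL (RR).
coeff = {
    "S_RL": 0.25 / (128 * 24 * np.pi**3),
    "S_RR": 0.50 / (128 * 24 * np.pi**3),
    "V_RL": 2.0  / (128 * 12 * np.pi**3),
    "V_RR": 1.0  / (128 * 12 * np.pi**3),
    "T_RL": 3.0  / (128 *  8 * np.pi**3),
}

for name, c in coeff.items():
    print(f"|g_{name}| <~ {np.sqrt(BR_UL / (pref * c)):.2f}")
\end{verbatim}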
The present limit on $B_\tau$ is translated to $\vert g^T_{RL}\vert \leq 0.13$, $\vert g_{RL}^{V}\vert \leq 0.20$, $\vert g^S_{RL}\vert \leq 0.80$, $\vert g_{RR}^{V}\vert \leq 0.28$, and $\vert g^S_{RR}\vert \leq 0.57$. However, as we do not have any {\em a priori} knowledge of the magnitudes of the WCs, we have to look for some other observables and use the number of events as a normalisation. In other words, we will assume that the total number of events has been given to the community by the experimentalists and see how much extra information we can extract. In Fig.\ \ref{f:dbdx}, we show how the differential rate $dB_\tau/dx$ varies with the muon energy variable $x$ for a fixed value of the WC, set at $0.1$. With the normalisation included, the area under the curve gives the total number of events in different $x$-bins. Note that, due to the possible paucity of events, one may have only a few bins, 2 or 3, before the data starts thinning out too much to have any statistical significance. Thus, the continuous distribution shown in Fig.\ \ref{f:dbdx} is an idealised scenario. Even then, we find that the number of events will be markedly different for different classes of operators if the WCs are of the same order, which is very much along the expected lines. On the other hand, the asymmetry variables $A'_{FB}(x)$ or ${\cal A}'_{FB}(y)$ must show an identical pattern for all operators, {\sf S}, {\sf V}, or {\sf T}, with a fixed chirality structure, because the overall normalisation cancels in the ratio. \begin{figure}[t] \begin{center} \includegraphics[height=4.5cm]{Fig2a.png}~~~ \includegraphics[height=4.5cm]{Fig2b.png} \end{center} \caption{The decay rate distribution $dB_\tau/dx$ for different operators, with the relevant $g^{S,V,T}_{IJ} \approx 0.1$. } \label{f:dbdx} \end{figure} The next task would be to differentiate among the various chiral subclasses of a particular class of model. For illustration, we will take the {\sf S} class of models, and consider the presence of one {\sf S}-type operator at a time. As the sensitivities are higher for the {\sf V} and {\sf T} classes, whatever results one has for {\sf S} will only be more enhanced and pronounced for the other classes. At the same time, if the underlying theory predicts $g_{IJ}$ values of the order of unity, a very small number of events will be harder to accommodate within the {\sf V} or {\sf T} classes. \begin{figure}[t] \centering \subfloat[\label{Ax_{fb}(binned) for 50 events}] {\includegraphics[height=4.5cm]{Fig3a.png}} \subfloat[\label{Ax_{fb}(binned) for 20 events}] {\includegraphics[height=4.5cm]{Fig3b.png}} \subfloat[\label{Ay_{fb}(binned) for 50 events}] {\includegraphics[height=4.5cm]{Fig3c.png}} \subfloat[\label{Ay_{fb}(binned) for 20 events}] {\includegraphics[height=4.5cm]{Fig3d.png}} \caption{ $A'_{FB}(x)$ for the antimuon with (a) 50 and (b) 20 events for the four single coupling {\sf S} class of models. The same for the more energetic of the two muons, ${\cal A}'_{FB}(y)$, with (c) 50 and (d) 20 events. } \label{f:single} \end{figure} In the single-coupling scheme, we consider four different models, depending upon which operator contributes, and denote them as \begin{equation} {\rm Model~1}: O^S_{RL}\,,\ \ {\rm Model~2}: O^S_{LL}\,,\ \ {\rm Model~3}: O^S_{RR}\,,\ \ {\rm Model~4}: O^S_{LR}\,.
\end{equation} If only one operator contributes, $A'_{FB}(x)$ becomes a function of $x$ only and does not depend on the magnitude of the WCs: \begin{equation} A'_{FB}(x)_{1} = -A'_{FB}(x)_{4} = 6(x^2-x^3)\,,\ \ A'_{FB}(x)_{2} = -A'_{FB}(x)_{3} = x^2-2x^3\,. \end{equation} The integrated asymmetry $A_{FB}(i)$ for the $i$-th model can be obtained by integrating over $x\in [0,1]$, and the values are \begin{equation} A_{FB}(1) = -A_{FB}(4) = \frac12\,,\ \ A_{FB}(2) = -A_{FB}(3) = -\frac16\,. \end{equation} There is a zero crossing only for models 2 and 3, at $x=\frac12$. Similarly, for the forward-backward asymmetry ${\cal A}'_{FB}(y)$, we find \begin{equation} {\cal A}'_{FB}(y)_{1} = -{\cal A}'_{FB}(y)_{4} = y^2 - 2y^3\,,\ \ {\cal A}'_{FB}(y)_{2} = -{\cal A}'_{FB}(y)_{3} = \frac12 (7y^2-8y^3)\,. \end{equation} While all of them show a zero crossing, for the last two models the crossing occurs almost at the end of the kinematic range, at $y=7/8$. The integrated asymmetries are (for $y\in [0,1]$) \begin{equation} {\cal A}_{FB}(1) = {\cal A}_{FB}(3) = - {\cal A}_{FB}(2) = -{\cal A}_{FB}(4) = -\frac16\,. \end{equation} The $A'_{FB}(x)$s for different models are shown in Figs.\ \ref{Ax_{fb}(binned) for 50 events} and \ref{Ax_{fb}(binned) for 20 events}. Similarly, the ${\cal A}'_{FB}(y)$s are shown in Figs.\ \ref{Ay_{fb}(binned) for 50 events} and \ref{Ay_{fb}(binned) for 20 events}. In these figures, every theoretical line broadens out into a thick band, and the bands often overlap with one another. This happens because the number of events is limited. For every $x$ ($y$), the error margin in $A'_{FB}(x)$ is approximately given by \begin{equation} \delta A'_{FB} = \sqrt{\left(\frac{\partial A'_{FB}(x)}{\partial N_F(x) }\right)^2 (\delta N_F(x))^2 + \left(\frac{\partial A'_{FB}(x)}{\partial N_B(x) }\right)^2 (\delta N_B(x))^2 + \left(\frac{\partial A'_{FB}(x)}{\partial N}\right)^2 (\delta N)^2} \end{equation} where \begin{equation} \delta N_F(x) = \sqrt{N_F(x)},\ \ \delta N_B(x) = \sqrt{N_B(x)}\,, \end{equation} are the statistical errors in the number of events in the forward and backward directions, respectively, and $\delta N = \sqrt{N}$. The expression for ${\cal A}'_{FB}(y)$ is analogous. We have not considered the correlation between $N_F(x)$ and $N_B(x)$; depending on the sign of the correlation, the expression can be an overestimation or underestimation, but as we do not have any {\em a priori} knowledge of the distribution, it is better to stick to zero correlation. The bands in Fig.\ \ref{f:single} indicate the $1\sigma$ error margins. Clearly, the resolving power is much less for 20 events than for 50 events. Because of the probable paucity of events, the asymmetries may be measured only with a limited number of bins. But even with two bins, low-$x$ ($0<x<0.5$) and high-$x$ ($0.5<x<1$), one should be able to differentiate between competing models. The existing bound on $\tau\to 3\mu$ comes from the analysis of 782 fb$^{-1}$ of data from the Belle collaboration \cite{1001.3221}, and 468 fb$^{-1}$ of data from the BaBar collaboration \cite{1002.4550}. With a production cross-section of $0.919$ nb for $\tau^+\tau^-$ pairs, one gets 720 million such pairs at Belle and 420 million at BaBar. For 50 ab$^{-1}$ of integrated luminosity at Belle-II, one expects $N_P=4.6\times 10^{10}$ $\tau^+\tau^-$ pairs. With a detection efficiency of $7.6\%$ \cite{1001.3221}, and using the present bound given in Eq.\ (\ref{e:bound}), the maximum number of such events is about 73.
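The event-count arithmetic behind these numbers, and behind the two benchmark event counts used below, can be summarised in a similar illustrative sketch (the cross-section, luminosity, and efficiency are the values quoted above, and we follow the same per-pair counting as in the text):
\begin{verbatim}
# Illustrative estimate of the tau -> 3 mu yield at Belle-II.
sigma_prod = 0.919e-9     # tau+ tau- production cross-section in barn
lumi       = 50.0e18      # 50 ab^{-1} expressed in barn^{-1}
eff        = 0.076        # combined tau -> 3 mu detection efficiency
BR_UL      = 2.1e-8       # current upper limit on BR(tau -> 3 mu)

n_pairs = sigma_prod * lumi          # ~ 4.6e10 tau+ tau- pairs
n_max   = n_pairs * eff * BR_UL      # ~ 73 events at the current limit
print(f"N_P = {n_pairs:.2e},  N_max = {n_max:.0f}")

# Branching ratio corresponding to a given number of signal events:
for n_events in (50, 20):
    print(n_events, f"events  <->  B_tau = {n_events / (n_pairs * eff):.2e}")
\end{verbatim}
In particular, 50 events correspond to $B_\tau \simeq 1.4\times 10^{-8}$.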
For our discussion, we will use two scenarios: one with $N=20$ and the other with $N=50$. Note that the errors are only statistical in nature. There may be other uncertainties, like fixing the direction of the $\tau$ polarisation, which will widen the bands, but that effect is expected to be small with the $\tau$ detection ability of Belle-II. \begin{figure}[t] \centering \subfloat[\label{dbrdx for 50 events}] {\includegraphics[height=4.5cm]{Fig4a.png}} \subfloat[\label{dbrdy for 50 events}] {\includegraphics[height=4.5cm]{Fig4b.png}} \caption{ (a) $dB_\tau/dx$ and (b) $dB_\tau/dy$ for the four single coupling {\sf S} class of models. } \label{f:single2} \end{figure} If $A_{FB}$ turns out to be positive (negative), the viable models are 1 or 3 (2 or 4). Similarly, the positive (negative)-${\cal A}_{FB}$ models are 2 and 4 (1 and 3). Thus, measurement of only the sign of these asymmetries leaves us with a twofold ambiguity. However, $x(y)$-dependent asymmetry measurements have the ability to resolve this ambiguity; {\sf S}- and {\sf V}-type models behave identically. It is enough to measure the asymmetry in two bins: a low $x(y)$ bin for $0\leq x(y) < \frac12$ and a high $x(y)$ bin for $\frac12\leq x(y) \leq 1$. If these measurements were precise enough, it would have been sufficient not only to pinpoint the model but also to explore whether more than one operator is contributing \footnote{We, however, show a continuous $x(y)$-distribution of the asymmetries; one just needs to integrate over the respective bin to have an idea of the relative magnitudes.}. Unfortunately, with a limited number of events, the measurements cannot be that precise. Our results for $dB_{\tau}/dx$ and $dB_{\tau}/dy$ in the single-coupling schemes are shown in Fig.\ \ref{f:single2}. The lines broaden out into bands if we take the errors and uncertainties into account. Such broadening, in all probability, will make the lines indistinguishable from one another. However, all these models can be separated from each other through the asymmetry measurements, particularly in the high-$x(y)$ bin. \section{Two Operator Models} Once we establish that, given enough events ($\sim 50$), it will be straightforward to differentiate between several one-operator models, the next question is: what if the data is not compatible with any of them? Note that for the single-operator scheme to hold, all the observables, and not only a few of them, have to be in the right ballpark specified by that model. However, even the principle of Occam's razor may not be reason enough to avoid invoking the double-operator scheme. We will, as before, confine ourselves to the {\sf S} class of models, and consider the cases where two WCs are nonzero at a time. As we have shown, this will be the case if the underlying theory forces the muon current to be pure $S$ or $P$ \footnote{And pure $V$ or $A$ for the {\sf V} class of models.}. Thus, the question we ask is: if the new physics is described by two $\tau\to 3\mu$ operators, with what confidence level can we differentiate it from the cases where only one of them is present? Note that the number of events will act as the tightest constraint on the parameter space. We will try to differentiate these models, hereafter called $O2$ for two effective operators, from that with only one operator, which we call the `seed' model.
For example, we consider the following $O2$ models: \begin{eqnarray}\label{modlst1} {\rm Model~A} &:& O_{RL}^S~{\rm and}~O_{LL}^S\,,\nonumber\\ {\rm Model~B} &:& O_{RL}^S~{\rm and}~O_{RR}^S\,,\nonumber\\ {\rm Model~C} &:& O_{RL}^S~{\rm and}~O_{LR}^S\, . \end{eqnarray} We will study how well these models (A, B and C) can be differentiated from the seed model with the single operator $O^S_{RL}$. To achieve this goal, we need to find the parameter space spanned by $g^S_{RL}$ and another $g^S_{IJ}$, which depends on which $O2$ model we consider. We further check with what confidence level the allowed regions for models A, B and C can be separated from the single-operator model given by $O^S_{RL}$. To complete the study, we include three further $O2$ models, namely, \begin{eqnarray}\label{modlst2} {\rm Model~D} &:& O_{RR}^S~{\rm and}~O_{RL}^S\,,\nonumber\\ {\rm Model~E} &:& O_{RR}^S~{\rm and}~O_{LL}^S\,,\nonumber\\ {\rm Model~F} &:& O_{RR}^S~{\rm and}~O_{LR}^S\,, \end{eqnarray} where the first operator is treated as the seed. Models B and D are different, because of different seeds. The seed models are chosen in such a way that they have a positive $A_{FB}$ for the opposite-sign muon; the negative-$A_{FB}$ models will have a corresponding relationship, which can be obtained by flipping $L$ and $R$, $L \leftrightarrow R$ \footnote{If we consider the asymmetry based on the more energetic like-sign muon, models A-F have negative asymmetry, while the corresponding $L\leftrightarrow R$ models have positive asymmetry.}. Let us mention here that the confidence interval contours will depend on the type of seed operators being considered. For this part of the analysis, we will use the Optimal Observable (OO) technique. For a detailed discussion on this technique, we refer the reader to Refs.\ \cite{gunion,atwood}. In the context of $B$ decays, this method has been applied in Refs.\ \cite{S3,zc}. The essential point of the OO technique is that it gives the optimal set of observables (which are in general functions of experimental observables) with which two points in the parameter space of different models can be differentiated with maximum efficiency. In other words, it gives the maximum possible separation, in terms of confidence level, between two points in the parameter space as a function of experimental observables. In practice, the systematic errors reduce the confidence intervals. As has been shown in Refs.\ \cite{S3,zc}, this method is all the more useful when one does not have any experimental data; in the presence of data, one can do a maximum likelihood analysis. This also means that not all the systematic uncertainties are taken into account. Thus, OO acts more as a motivational tool for the experimentalists than as an instrument for detailed quantitative theoretical studies. Even with only the {\sf S} class of operators, the parameter space of WCs is four-dimensional. A complete analysis is not only cumbersome but also of very little help in the real-life scenario, where the number of events will definitely be below 100; a fine scan of the parameter space, with a two-dimensional binning in $x$ ($y$) and $\cos\theta$ ($\cos\alpha$), would therefore have so few events per bin as to make the analysis meaningless. The only constraint on the WCs comes from the non-observability of the decay.
In the OO technique, one writes any observable ${\cal O}$, depending on a variable $\phi$, as \begin{equation} {\cal O}(\phi) = \sum_i C_i f_i(\phi)\,, \label{e:def-oo} \end{equation} which can be generalised to a set of variables denoted by $\bm{\phi}$. Here, all the $f_i$s are independent, and the $C_i$s are some constants. The major goal of this technique is to extract the $C_i$s. In our case, the $C_i$s will be functions of the $g_i$s and $g'_i$s defined earlier. Our analysis can be done by defining a quantity analogous to a $\chi^2$, namely \begin{align}\label{chidef} \chi^2 &= \sum_{i,j} (C_i - C_i^0) V_{ij}^{-1}(C_j - C_j^0) . \end{align} The $C_i^0$s are called the seed values, which can be considered as model inputs. The covariance matrix $V_{ij}$ is defined as \begin{equation} V_{ij} = \langle \Delta C_i \Delta C_j\rangle = \frac{ (M^{-1})_{ij} \sigma_T}{N}\,, \text{with}\ \ M_{ij} = \int \frac{f_i(\phi)f_j(\phi) }{O(\phi)} \, d\phi. \label{vijdef} \end{equation} In the above expression of $V_{ij}$, $\sigma_T = \int O(\phi)\, d\phi$ and $N = \sigma_{\rm Prod}{\cal L}_{\rm int} \epsilon B_\tau$, as defined earlier. For a specific model, $\chi^2$ gives the confidence level separation between the seed value $C_i^0$ (seed model) and the model under consideration, parametrized by $C_i$. Looking at Eq.\ (\ref{chidef}), it is clear that the shape of the fixed $\chi^2$ hypersurface depends on $V^{-1}_{i j}$, and its centroid (where $\chi^2 =\chi^2_{min} = 0$) changes with the seed values. These fixed-$\chi^2$ surfaces are essentially what determine the separation between models. Thus the separation between any two models 1 and 2, with the seed at 1, will in general not be equal to the separation when the seed is at 2. This is the reason for treating Models B and D separately in Eqs.\ (\ref{modlst1}) and (\ref{modlst2}). In the case of the single-operator models, the seed values of the WCs corresponding to 50 events are obtained as \begin{equation} |g^S_{RL}|^2 = 0.44~{\rm (A,B,C)}\,,\ \ \ |g^S_{RR}|^2 = 0.22~{\rm (D,E,F)}\,. \end{equation} For the negative-$A_{FB}$ models, one may take $|g^S_{LR}|^2=0.44$ and $|g^S_{LL}|^2 = 0.22$. Our additional inputs are \begin{equation} m_\tau = 1.78~{\rm GeV}\,,\ \ T_\tau= 290.3~{\rm fs}\,, \ \ \Lambda = 5~{\rm TeV}\,. \end{equation} We show our results for the ${\sf S}$ class of models; the ${\sf V}$ class of models will show identical results. The observables that we use are $A'_{FB}(x)$ and ${\cal A}'_{FB}(y)$ (both defined in the previous section), as well as $dB_\tau/dx$ and $dB_\tau/dy$, the expressions for which can be obtained from Eqs.\ (\ref{e:dgdx}) and (\ref{e:dgdy}): \begin{eqnarray} \frac{dB_\tau}{dx} &=& \frac{m_\tau^5\, T_\tau}{ 128 \times 24\pi^3 \Lambda^4}(3x^2 g_1 - x^3 g_2)\,,\nonumber\\ \frac{dB_\tau}{dy} &=& \frac{m_\tau^5 T_\tau}{ 128 \times 48\pi^3 \Lambda^4}(3y^2 g'_1 - 2y^3 g'_2)\,. \end{eqnarray} From Eq.\ (\ref{afbx1}), one can write \begin{equation} A'_{FB}(x) = \frac{1}{B_\tau}\times T_\tau \frac{m_\tau^5}{ 128 \times 48\pi^3 \Lambda^4}(x^2 g_3 - x^3 g_4)\,. \end{equation} For 50 events, $B_\tau = 1.43\times 10^{-8}$. Similarly, \begin{equation} {\cal A}'_{FB}(y) = \frac{1}{B_\tau}\times T_\tau \frac{m_\tau^5}{ 128 \times 96\pi^3 \Lambda^4} (y^2 g'_3 - 2y^3 g'_4)\,. \label{afby} \end{equation} Before we show our results for all six models, let us mention a few important points here. \begin{enumerate} \item The determination of $\chi^2$ involves an integration over the variable $\phi$ of Eq.\ (\ref{e:def-oo}).
If over the region of integration the observable for the seed model becomes zero for any value of $\phi$, the integration diverges. Thus, one has to cut off such badly behaving regions. For example, if the observable for the seed model becomes zero at the end points, say $a$ and $b$, one has to perform the integral between $a+\epsilon$ and $b-\epsilon$, where $\epsilon$ is taken to be so small as not to affect the observable (like, say, the number of events). More concrete examples are given below. \item One may ask why we do not use a two-variable analysis and use the double differential cross-section as the observable. This would have certainly been useful, and more powerful as an analytical tool, if we could manage a large number of events so that even the two-dimensional bins have enough number of events. With a small number of events, such an analysis would not give much useful information. \end{enumerate} All our observables depend on only two functions, $f_1$ and $f_2$, with the argument being $x$ for the opposite-sign muon and $y$ for the like-sign more energetic muon. Depending on the observables, the combinations $C_1$ and $C_2$ are as follows. \subsection{Observable: $A'_{FB}(x)$} \begin{figure}[t] \centering \subfloat[\label{RL-LL}] {\includegraphics[height=5cm]{Fig5a.png}} \subfloat[\label{RL-RR}] {\includegraphics[height=5cm]{Fig5b.png}} \subfloat[\label{RL-LR}] {\includegraphics[height=5cm]{Fig5c.png}} \\ \subfloat[\label{RR-RL}] {\includegraphics[height=5cm]{Fig5d.png}} \subfloat[\label{RR-LL}] {\includegraphics[height=5cm]{Fig5e.png}} \subfloat[\label{RR-LR}] {\includegraphics[height=5cm]{Fig5f.png}} \caption{The differentiability of the models A-F, shown in (a)-(f) respectively, from the `seed' model, with $A'_{FB}(x)$ as the observable.} \label{f:afbx} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & Seed & Second operator & $C_1$ & $C_2$\\ \hline A & $RL$ & $LL$ & $3 |g_{RL}^S|^2 + |g_{LL}^S|^2$ & $3 |g_{RL}^S|^2 + 2 |g_{LL}^S|^2$\\ B & $RL$ & $RR$ & $3 |g_{RL}^S|^2 - |g_{RR}^S|^2$ & $3 |g_{RL}^S|^2 - 2 |g_{RR}^S|^2$\\ C & $RL$ & $LR$ & $3 |g_{RL}^S|^2 - 3 |g_{LR}^S|^2$ & $3 |g_{RL}^S|^2 - 3 |g_{LR}^S|^2$\\ D & $RR$ & $RL$ & $ - |g_{RR}^S|^2 + 3 |g_{RL}^S|^2$ & $-2 |g_{RR}^S|^2 + 3 |g_{RL}^S|^2$\\ E & $RR$ & $LL$ & $ - |g_{RR}^S|^2 + |g_{LL}^S|^2$ & $-2 |g_{RR}^S|^2 + 2 |g_{LL}^S|^2$\\ F & $RR$ & $LR$ & $ - |g_{RR}^S|^2 - 3 |g_{LR}^S|^2$ & $-2 |g_{RR}^S|^2 - 3 |g_{LR}^S|^2$\\ \hline \end{tabular} \caption{$C_1$ and $C_2$ for different models. The observable is $A'_{FB}(x)$.} \label{tab-afbx} \end{center} \end{table} We show the coefficients $C_1$ and $C_2$ in Table \ref{tab-afbx}, and Fig.\ \ref{f:afbx} displays our results for Models A-F. As the plots are not self-explanatory, let us clearly specify what they mean. The diagonal band with negative slope, in each of the plots, represents the allowed region in the parameter space of the various $O2$ models. Only the two relevant WCs are taken to be nonzero, keeping the others fixed at zero. Once the experimentalists obtain a certain number of events, this will specify a line in the two-dimensional parameter space over which the allowed models, each of them specified by some WCs, may lie. The exact position of the line will depend on what model one chooses, but the analysis must take into account the constraint imposed by this line \footnote{That is why even for a two-parameter model the degree of freedom is only 1.}. 
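To make the construction of Eqs.\ (\ref{chidef}) and (\ref{vijdef}) concrete, a minimal numerical sketch is given below: it evaluates $M_{ij}$, $V^{-1}_{ij}$ and $\chi^2$ for a two-coefficient observable, cutting away the regions where the seed observable vanishes, as prescribed in point 1 above. The basis functions, integration range and coupling values are illustrative placeholders, not the exact expressions of the previous section.
\begin{verbatim}
import numpy as np

def f_basis(x):
    # Illustrative basis: an x^2 and an x^3 term, mimicking the structure
    # of the A'_FB(x)-type observables (placeholder, not the exact f_i).
    return np.array([x**2, -x**3])

def chi2_separation(C_seed, C_test, n_events, x_lo=0.0, x_hi=1.0,
                    n_grid=2000, eps=1e-3):
    # chi^2 = dC^T V^{-1} dC with V = M^{-1} sigma_T / N and
    # M_ij = int f_i f_j / O_seed dx; points where |O_seed| < eps are
    # excised (the "a+eps, b-eps" prescription of point 1 above).
    x = np.linspace(x_lo, x_hi, n_grid)
    F = np.array([f_basis(xi) for xi in x])          # shape (n_grid, 2)
    O_seed = F @ np.asarray(C_seed, float)           # seed observable on grid
    keep = np.abs(O_seed) > eps                      # cut badly behaving regions
    dx = x[1] - x[0]
    M = np.einsum('ki,kj->ij',
                  F[keep], F[keep] / np.abs(O_seed[keep])[:, None]) * dx
    sigma_T = np.sum(np.abs(O_seed)) * dx            # total-rate normalisation
    V_inv = n_events * M / sigma_T                   # inverse covariance
    dC = np.asarray(C_test, float) - np.asarray(C_seed, float)
    return float(dC @ V_inv @ dC)

# Toy numbers: a single-coupling seed against a two-coupling test point.
C_seed = [3 * 0.44, 3 * 0.44]
C_test = [3 * 0.20 + 0.10, 3 * 0.20 + 2 * 0.10]
print(chi2_separation(C_seed, C_test, n_events=50))
\end{verbatim}
Since, as noted in the footnote above, the event-count constraint leaves effectively one degree of freedom, $\sqrt{\chi^2}$ can be read as the approximate separation in units of $\sigma$.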
The uncertainties in the data will broaden the line to a band, whose width will ultimately depend on the number of events as well as the detector parameters. As a very rough guess, we take $\sqrt{N}$ to be the width of the band for $N$ events. The plots are drawn for $N=50$; thus, the band includes all the points for which the number of events lies approximately between 43 and 57. The separation contours are drawn on these bands only. We expect the bands to be narrower in actual experiments. Let us consider Fig.\ \ref{RL-LL}. This takes $|g^S_{RL}|^2 = 0.435$ as the seed value. The plot tells us that this one-coupling model can be differentiated from the one with $|g^S_{RL}|^2 = 0.2$ and $|g^S_{LL}|^2 = 0.1$ at more than $3\sigma$ if we have approximately 50 events and use $A'_{FB}(x)$ as our observable. Similarly, the model with $|g^S_{RL}|^2 = 0.35$ and $|g^S_{LL}|^2 = 0.05$ can be separated from the above-mentioned seed model by less than $2\sigma$. The actual numbers should be even worse as the systematic uncertainties will also creep in. Similar conclusions hold for Models B, C, D, E and F, for which the results are shown in Figs.\ \ref{RL-RR} to \ref{RR-LR}, respectively. As we mentioned before, the contours for Models B and D are not the same, although they involve the same set of operators. This is because the seed is different, which ultimately controls the correlation matrix. For models D-F, the seed model has a zero crossing for $A'_{FB}(x)$ at $x=\frac12$. Unlike in the case of the differential decay distribution and observables proportional to it, the observable in this case may become negative in different parts of the parameter space. This makes the covariance matrix $V_{i j}$ not positive definite. We note here that for our purpose, {\em i.e.} to construct $\chi^2$, the integrated observables serve only as the normalization of $V^{-1}_{i j} (= M_{i j})$. We have taken the modulus of the integrand for each value of $x (y)$ for this reason. This, while keeping the nature of the error ellipsoids intact, will always keep the covariance matrix positive definite. On the flip side, this makes the integral diverge at the zero-crossing point. Thus, to evaluate the correlation matrix $V_{ij}^{-1}$ and to avoid this divergence, one has to remove the tiny patch $0.495 < x < 0.505$ from the integration. This has only a negligible effect on the number of events, but keeps the necessary integrations convergent. \begin{figure}[t] \centering \subfloat[\label{2coupdist1}] {\includegraphics[height=5cm]{Fig6a.png}} \subfloat[\label{2coupdist2}] {\includegraphics[height=5cm]{Fig6b.png}} \caption{Comparison of $A'_{FB}(x)$ for the $O2$ Models A-F with the respective `seed' model. The relevant WCs are taken from Fig.\ \ref{f:afbx}.} \label{2cdafbx} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $|g_{RL}^S|^2$ & $|g_{LL}^S|^2$ & $|g_{RR}^S|^2$ & $|g_{LR}^S|^2$ & $A_{FB}$ (int.) \\ \hline 1(Seed) & 0.435 & - & - & - & 0.5 \\ \hline A & 0.265 & 0.085 & - & - & 0.239 \\ \hline B & 0.186 & - & 0.125 & - & 0.309 \\ \hline C & 0.305 & - & - & 0.131 & 0.2 \\ \hline 3(Seed) & - & - & 0.218 & - & 0.167 \\ \hline D & 0.063 & - & 0.186 & - & 0.215 \\ \hline E & - & 0.110 & 0.108 & - & -0.001 \\ \hline F & - & - & 0.188 & 0.061 & 0.073 \\ \hline \end{tabular} \end{center} \caption{The WCs, as obtained from Fig.\ \ref{f:afbx}, for which Models A-F are separable from the respective seed models at $3\sigma$ level.
The integrated asymmetries are also shown.} \label{t:2cdafbx} \end{table} In Fig.\ \ref{2cdafbx}, we show, as an illustration, the behaviour of $A'_{FB}(x)$ for Models A-C vis-a-vis the seed Model 1 and for Models D-F with seed Model 3, for which the differentiability is at the $3\sigma$ level. The corresponding WCs, extracted from Fig.\ \ref{f:afbx}, are displayed in Table \ref{t:2cdafbx}. We note that Models A-C can be differentiated from the seed Model 1, with only $|g^S_{RL}|^2$, for all values of $x$. On the other hand, Models D and F can be differentiated from the seed model with only $|g^S_{RR}|^2$ (Model 3) only for intermediate values of $x$, where the zero-crossing of $A'_{FB}(x)$ plays a crucial role. \subsection{Observable: ${\cal A}'_{FB}(y)$ } In an analogous way, one can use the more energetic of the like-sign muons, and the corresponding asymmetry ${\cal A}'_{FB}(y)$. The coefficients $C_1$ and $C_2$, from Eqs.\ (\ref{e:dgdy}) and (\ref{afby1}), are shown in Table \ref{tab-afby}. \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & Seed & Second operator & $C_1$ & $C_2$\\ \hline A & $RL$ & $LL$ & $ |g_{RL}^S|^2 + 7 |g_{LL}^S|^2$ & $ |g_{RL}^S|^2 + 4 |g_{LL}^S|^2$\\ B & $RL$ & $RR$ & $|g_{RL}^S|^2 - 7 |g_{RR}^S|^2$ & $|g_{RL}^S|^2 - 4 |g_{RR}^S|^2$\\ C & $RL$ & $LR$ & $|g_{RL}^S|^2 - |g_{LR}^S|^2$ & $|g_{RL}^S|^2 - |g_{LR}^S|^2$\\ D & $RR$ & $RL$ & $ -7 |g_{RR}^S|^2 + |g_{RL}^S|^2$ & $ -4 |g_{RR}^S|^2 + |g_{RL}^S|^2$\\ E & $RR$ & $LL$ & $ -7 |g_{RR}^S|^2 + 7 |g_{LL}^S|^2$ & $ -4 |g_{RR}^S|^2 + 4 |g_{LL}^S|^2$\\ F & $RR$ & $LR$ & $ - 7 |g_{RR}^S|^2 - |g_{LR}^S|^2$ & $ - 4 |g_{RR}^S|^2 - |g_{LR}^S|^2$\\ \hline \end{tabular} \caption{$C_1$ and $C_2$ for different models, with ${\cal A}'_{FB}(y)$ as the observable.} \label{tab-afby} \end{center} \end{table} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline Model & $|g_{RL}^S|^2$ & $|g_{LL}^S|^2$ & $|g_{RR}^S|^2$ & $|g_{LR}^S|^2$ & $A_{FB} (int.)$ \\ \hline 1(Seed) & 0.435 & - & - & - & -0.167 \\ \hline A & 0.314 & 0.061 & - & - & -0.074 \\ \hline B & 0.313 & - & 0.061 & - & -0.167 \\ \hline C & 0.216 & - & - & 0.219 & 0.001 \\ \hline 3(Seed) & - & - & 0.218 & - & -0.167 \\ \hline D & 0.187 & - & 0.124 & - & -0.167 \\ \hline E & - & 0.099 & 0.119 & - & -0.016 \\ \hline F & - & - & 0.121 & 0.194 & -0.018 \\ \hline \end{tabular} \end{center} \caption{The WCs, as obtained from Fig.\ \ref{f:afby}, for which Models A-F are separable from the respective seed models at $3\sigma$ level. The integrated asymmetries are also shown.} \label{t:2cdafbxy} \end{table} \begin{figure}[htbp] \centering \subfloat[\label{RL-LL3}] {\includegraphics[height=5cm]{Fig7a.png}} \subfloat[\label{RL-RR3}] {\includegraphics[height=5cm]{Fig7b.png}} \subfloat[\label{RL-LR3}] {\includegraphics[height=5cm]{Fig7c.png}}\\ \subfloat[\label{RR-RL3}] {\includegraphics[height=5cm]{Fig7d.png}} \subfloat[\label{RR-LL3}] {\includegraphics[height=5cm]{Fig7e.png}} \subfloat[\label{RR-LR3}] {\includegraphics[height=5cm]{Fig7f.png}} \caption{The differentiability of the models A-F, shown in (a)-(f) respectively, from the `seed' model, with ${\cal A}'_{FB}(y)$ as the observable.} \label{f:afby} \end{figure} \begin{figure}[t] \centering \subfloat[\label{2coupdist3}] {\includegraphics[height=5cm]{Fig8a.png}} \subfloat[\label{2coupdist4}] {\includegraphics[height=5cm]{Fig8b.png}} \caption{Comparison of the distributions of the asymmetry ${\cal A}'_{FB}(y)$ for the Models A-F with the respective `seed' model.
Here the relevant WCs are taken from the plots in Fig.\ \ref{f:afby}.} \label{f:2cdafby} \end{figure} In Fig.\ \ref{f:2cdafby}, we show the distribution of ${\cal A}'_{FB}(y)$ for Models A-F, comparing A-C with Model 1 as seed and D-F with Model 3 as seed, respectively. The corresponding WCs are given in Table \ref{t:2cdafbxy}. While a $3\sigma$ separation between the models is possible, one notes that the differentiation works best in the middle-$y$ region, rather than at the endpoints. \subsection{Observable: $dB_\tau/dx$ and $dB_\tau/dy$} Study of the differential BRs is instructive. First, let us refer the reader to Tables \ref{t:2wcdbrdx} and \ref{t:2wcdbrdy} for the coefficients $C_1$ and $C_2$ in all the models considered. For both $dB_\tau/dx$ and $dB_\tau/dy$, this shows immediately that Models A and B must yield identical distributions; the same is true for the pair D and F. This is because the BR does not depend on the interchange $R \leftrightarrow L$. Models C and E are very poorly differentiable from their respective seeds (at less than $1\sigma$), so we do not discuss them any further, nor do we show the corresponding separation plots. Even though the two observables show a similar pattern, there is an important difference. With $dB_\tau/dx$ as the observable, we can separate Models A(B) or D(F) from the corresponding seed models at $3\sigma$ or more, depending on the respective WCs. This can be seen from Fig.\ \ref{f:dbrdx}, as well as Table \ref{t:2cddbx}. With $dB_\tau/dy$ as the observable, there is no available parameter space with 50 events where any model can be separated at more than $2\sigma$ from the seed models. This is why we do not show the corresponding plots for $dB_\tau/dy$. Thus, as far as the measurement of the number of events in different energy bins goes, it is preferable to detect the unlike-sign muon rather than one of the like-sign muons. As the {\sf V} class of models shows an identical behaviour, we conclude that, based on the data alone and without any {\em a priori} knowledge of the WCs, it is impossible to differentiate between the classes, but within a particular class it is possible to differentiate among the various Lorentz structures of the effective operators. With enough events, one should be able to differentiate single-operator models from double-operator models, like those with a pure $S$ or $V$ ($O_9$), or a pure $P$ or $A$ ($O_{10}$) muon current. If we have approximately 50 events, $A'_{FB}(x)$ may help us differentiate $O_{9L}$ or $O_{10L}$ models from $RL$ by about $5\sigma$, and $O_{9R}$ and $O_{10R}$ models from $RR$ by about $7\sigma$. With $dB_\tau/dx$, the former set is differentiable at about $3\sigma$, while the latter is at less than $2\sigma$. (The $9(10)L(R)$ models are specified by equal magnitudes of the two WCs.) As $C_1$ and $C_2$ involve only $\vert g_{IJ}\vert^2$, they are insensitive to the sign or phase of the WCs.
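As a simple cross-check of the overall normalisation used in this section, integrating the expression for $dB_\tau/dx$ for a single-operator seed reproduces the quoted $B_\tau$ at the per cent level. In the short sketch below, the range $x\in[0,1]$ and the identification of the $(C_1, C_2)$ entries of Table~\ref{t:2wcdbrdx} (with the second coupling set to zero) with $(g_1, g_2)$ are assumptions made for illustration only.
\begin{verbatim}
import numpy as np

# Cross-check of the dB_tau/dx normalisation for the single-operator RL seed.
# Assumptions: x runs over [0, 1], and the (C_1, C_2) entries of the dB/dx
# table (second coupling set to zero) play the role of (g_1, g_2).

m_tau = 1.78                      # GeV
T_tau = 290.3e-15 / 6.582e-25     # lifetime converted to GeV^-1 (hbar in GeV s)
Lam   = 5000.0                    # GeV
K = m_tau**5 * T_tau / (128 * 24 * np.pi**3 * Lam**4)

g_RL2 = 0.44                      # seed |g^S_RL|^2
g1, g2 = g_RL2, 3 * g_RL2         # C_1, C_2 for the pure-RL seed

# integral of (3 x^2 g1 - x^3 g2) over [0, 1] is g1 - g2/4
B_tau = K * (g1 - g2 / 4.0)
print(B_tau)                      # ~1.5e-8, close to the quoted 1.43e-8
\end{verbatim}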
\begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & Seed & Second operator & $C_1$ & $C_2$\\ \hline A & $RL$ & $LL$ & $|g_{RL}^S|^2 + |g_{LL}^S|^2$ & $3 |g_{RL}^S|^2 + 2 |g_{LL}^S|^2$\\ B & $RL$ & $RR$ & $|g_{RL}^S|^2 + |g_{RR}^S|^2$ & $3 |g_{RL}^S|^2 + 2 |g_{RR}^S|^2$\\ C & $RL$ & $LR$ & $|g_{RL}^S|^2 + |g_{LR}^S|^2$ & $3 |g_{RL}^S|^2 + 3 |g_{LR}^S|^2$\\ D & $RR$ & $RL$ & $ |g_{RR}^S|^2 + |g_{RL}^S|^2$ & $2 |g_{RR}^S|^2 + 3 |g_{RL}^S|^2$\\ E & $RR$ & $LL$ & $ |g_{RR}^S|^2 + |g_{LL}^S|^2$ & $ 2 |g_{RR}^S|^2 + 2 |g_{LL}^S|^2$\\ F & $RR$ & $LR$ & $ |g_{RR}^S|^2 + |g_{LR}^S|^2$ & $2 |g_{RR}^S|^2 + 3 |g_{LR}^S|^2$\\ \hline \end{tabular} \caption{$C_1$ and $C_2$ for different models. The observable is $dB_\tau/dx$.} \label{t:2wcdbrdx} \end{center} \end{table} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & Seed & Second operator & $C_1$ & $C_2$\\ \hline A & $RL$ & $LL$ & $ |g_{RL}^S|^2 +3 |g_{LL}^S|^2$ & $ |g_{RL}^S|^2 +4 |g_{LL}^S|^2$\\ B & $RL$ & $RR$ & $|g_{RL}^S|^2 +3 |g_{RR}^S|^2$ & $ |g_{RL}^S|^2 +4 |g_{RR}^S|^2$\\ C & $RL$ & $LR$ & $ |g_{RL}^S|^2 + |g_{LR}^S|^2$ & $|g_{RL}^S|^2 + |g_{LR}^S|^2$\\ D & $RR$ & $RL$ & $ 3 |g_{RR}^S|^2 + |g_{RL}^S|^2$ & $ 4 |g_{RR}^S|^2 + |g_{RL}^S|^2$\\ E & $RR$ & $LL$ & $ 3 |g_{RR}^S|^2 + 3 |g_{LL}^S|^2$ & $ 4 |g_{RR}^S|^2 + 4 |g_{LL}^S|^2$\\ F & $RR$ & $LR$ & $ 3 |g_{RR}^S|^2 + |g_{LR}^S|^2$ & $ 4 |g_{RR}^S|^2 + |g_{LR}^S|^2$\\ \hline \end{tabular} \caption{$C_1$ and $C_2$ for different models. The observable is $dB_\tau/dy$.} \label{t:2wcdbrdy} \end{center} \end{table} \begin{figure}[t] \centering \subfloat[\label{RL-LL2}] {\includegraphics[height=3.9cm]{Fig9a.png}} \subfloat[\label{RR-RL2}] {\includegraphics[height=3.9cm]{Fig9b.png}} \caption{The differentiability of the Models A and D, shown in (a) and (b) respectively, from the `seed' model, with $dB_\tau/dx$ as the observable. Model B is identical with Model A, and Model F is identical with Model D } \label{f:dbrdx} \end{figure} \begin{table}[htbp] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Model & $|g_{RL}^S|^2$ & $|g_{LL}^S|^2$ & $|g_{RR}^S|^2$ & $|g_{LR}^S|^2$ \\ \hline 1(Seed) & 0.435 & - & - & - \\ \hline A & 0.202 & 0.117 & - & - \\ \hline 3(Seed) & - & - & 0.218 & - \\ \hline D & 0.372 & - & 0.032 & - \\ \hline \end{tabular} \end{center} \caption{The representative values of the WCs, obtained from Fig.\ref{f:dbrdx}, for which Models A and D can be differentiated from the respective seed models at $3\sigma$ level.} \label{t:2cddbx} \end{table} In general, LFV models can also involve electrons in the final state, from operators leading to $\tau^-\to e^- e^+ e^-$ or $\tau^- \to e^- e^+ \mu^-$. The BRs of these channels have bounds comparable to that of $\tau\to 3\mu$, so we may expect a similar number of events at Belle-II, and a similar analysis will work. However, detection of final state electrons in an $e^-e^+$ machine will have lesser efficiency than that of final state muons. \section{Conclusion} In this paper, we focus on the LFV decay $\tau \to 3\mu$. This is of crucial importance in the light of semileptonic $B$-decay anomalies, which hint at some new physics involving second and third generation leptons, probably a mixing among the charged leptons. The present limit on this mode translates to $\sim 70$ events at the most at Belle-II with 50 ab$^{-1}$ integrated luminosity. 
While even a single event will unequivocally indicate new physics, we try to answer a more ambitious question: is it possible to say anything about the underlying operators from the observables? Needless to say, the answer will be vital for model builders. The question would be much easier to answer if the final state muon polarisations could be measured; as far as present technologies go, this is not easily attainable. However, as we show, one can form other observables, which are relatively clean and at the same time can yield significant information. One such observable is the asymmetry of either the unlike-sign muon or the more energetic like-sign muon, measured with respect to the initial $\tau$-polarisation direction. If one can measure the asymmetries, even with the associated error margins, in different energy bins, this can differentiate between the different types of operators in a particular class (scalar, vector, or tensor). Another important observable, as expected, is the number of events in different energy bins of either the unlike-sign or the like-sign muons. Just like the asymmetries, it can potentially differentiate among the different chiral structures of the operators, although to a lesser extent. Given the total number of events, one can also have an idea of the magnitude of the relevant WCs. We expect more events for {\sf V} or {\sf T} type operators, so their WCs, $g^V_{IJ}$ or $g^T_{IJ}$, can be probed better. It may so happen that more than one NP operator is present. A typical case is when the muon current is purely vector or axial-vector in nature. If we have a sufficient number of events ($\sim 50$), we should be able to say whether there is only one underlying operator or two. Asymmetries are the better observables, but the distribution of the number of events can also help and acts as a complementary probe. One must, however, remember that such an analysis involves the risk of underestimating the errors by neglecting the systematic uncertainties. Thus, this is to be seen more as a motivation for the experimentalists. Once the data are available, other powerful analysis methods, such as the maximum likelihood, can be applied. {\bf Acknowledgements}: AK thanks the Science and Engineering Research Board (SERB), Government of India, for a research grant. SKP is supported by the grants IFA12-PH-34 and SERB/PHY/2016348.
\section{Introduction} Extended stellar haloes containing a significant amount of streams and substructures are observed around the Milky Way (MW), M31, and there are indications that they may be a ubiquitous component of galaxies down to the scale of Large Magellanic Cloud (LMC)-like objects \citep[e.g., for recent works and reviews, see][]{Helmi2008, Mcconnachie2009, Martinez-Delgado2009, Mouhcine2011, Rich2012}. Within the $\Lambda$ Cold Dark Matter ($\Lambda$CDM) framework, stellar haloes are a natural outcome of the hierarchical build-up of structures \citep[e.g.,][]{Bullock2005, Cooper2010}. High resolution N-body and hydro-dynamical simulations \citep[e.g.,][]{Zolotov2009, Tissera2012, Tissera2013, Tissera2014, Pillepich2014} show that stellar haloes are expected to consist both of stars formed within the virial radius of the main progenitor, for instance in a disk structure from the dissipative collapse of smoothly accreted cold gas at high-redshift and later on put on halo orbits by some violent event, and by the shredded stellar component of smaller galactic systems accreted onto the main progenitor\footnote{Some of these systems will still be gas-rich when accreted and will continue forming stars, formally within the virial halo of the main progenitor; even though \citet{Tissera2013} treat them as a separate component, here we consider them as ``accreted stars'', since their properties should reflect the chemical enrichment and star formation history of the accreted sub-galactic system.}. The relative dominance of these mechanisms depends upon the specific build-up history of a given galaxy, but there is general consensus that the outer parts of stellar haloes almost exclusively host accreted stars, whilst the inner regions might also contain an important component of stars originated within the main progenitor. Another expectation from simulations is that stars from early accretion events are to be preferentially found in the inner regions of haloes, while the outer parts are in general dominated by late accretion events. Observational evidence for spatial variations in the properties of the MW stellar halo, likely related to different dominant formation mechanisms at play, was first put forward by \citet{Searle1978} on the basis of the metallicity and horizontal branch colors of MW globular clusters. The wealth of information brought about in the last years by very large area photometric and spectroscopic surveys, such as those within SDSS, has painted a complex observational picture, which confirms the existence of spatial variations in the kinematic, metallicity, abundance and age properties of the stellar halo, as well as the existence of a variety of substructures. The stellar halo of our Galaxy can be broadly described as consisting of at least two partially overlapping components: a flatter inner-halo population, with a small net prograde rotation and a metallicity distribution function peaking at [Fe/H]$\sim -1.6$, and a more extended and approximately spherical outer-halo population, showing no or little retrograde rotation \citep[but see the recent work by][]{Deason2017} and a metallicity distribution function peaking at [Fe/H]$\sim -2.3$ \citep[e.g.,][]{Carollo2007, Carollo2010, deJong2010, Beers2012, AllendePrieto2014, Fernandez-Alvar2015, Das2016}. The shift in dominance between the inner- and outer- halo component occurs at Galactocentric distances of $\sim$15kpc, which is also the distance range in which such transition is seen in the simulations. 
The age distribution of field blue horizontal branch (BHB) stars (e.g., Santucci et al. \citeyear{Santucci2015}: 4700 stars with spectroscopy from SDSS; Carollo et al. \citeyear{Carollo2016}: 130,000 color-selected BHB stars from SDSS photometry) shows a decrease of 1-1.5 Gyr in its mean value and a larger age spread from the inner 15kpc to the outer regions probed by these studies (45-50kpc); an age difference between stars likely associated with the inner- and outer-halo population, with the former being older than the latter, was also pointed out by \citet{schuster12} with a much smaller sample of solar neighborhood stars with halo kinematics for which exquisite high resolution spectroscopic data were obtained. From the analysis of 100,000 main-sequence turn-off stars with spectroscopy from SDSS out to 15kpc from the Sun, \citet{Lee2017} show that the median [C/Fe] increases as a function of height from the Galactic disk mid-plane, as does the fraction of CEMP-no stars (those with no over-abundance of heavy neutron-capture elements) versus CEMP-s stars (those with over-abundance of heavy neutron-capture elements associated with the s-process); the authors argue this effect might be related to the mass function of the sub-haloes in which the stars in the inner- and outer-halo region formed. From an analysis of $\sim$4500 K-giants likely belonging to the halo with spectroscopy from SEGUE and SEGUE-2, \citet{Janesh2016} conclude that a larger amount of substructure is seen at [Fe/H] $>-1.2$ with respect to the stars in the sample with lower metallicities, and for those located beyond 30kpc from the MW center with respect to those found at smaller Galactocentric distances \citep[see also e.g.,][]{Xue2011}; the Sagittarius (Sag) stream appears responsible for the increase in substructure seen in metallicity and distance, in particular at [Fe/H] $> -1.9$. In general, the above observational picture appears consistent with the inner regions of the halo having been assembled earlier and in an environment experiencing a faster initial chemical enrichment than the outer halo, with the outer halo containing the remnants of later accretion events, some of which have been identified as substructures even when lacking the full 6D phase space information, due to the larger dynamical mixing times at those distances. It is clear then that even though stellar haloes typically account for only a few percent of a galaxy's stellar mass, studying their spatially varying properties allows us to retrieve crucial information on the galaxy build-up history, going back to its earliest phases. Satellite galaxies are arguably the sub-haloes that escaped tidal disruption during the halo assembly and survived until the present day. The comparison of the chemical abundance patterns of their stars with those of halo stars then offers a particularly illuminating and direct way of identifying in which type of environment halo stars formed, as well as of constraining the timescales of accretion events. Until recently, this type of analysis, which requires high resolution spectroscopy, was by necessity restricted to samples of solar neighborhood stars, which nevertheless already provided a wealth of information. The ratio of $\alpha$-elements over Fe in solar neighborhood samples was found to be super-solar for the overwhelming majority of stars with halo kinematics \citep[e.g.,][]{venn04}.
A dichotomy is present at intermediate metallicities (especially visible in the range $-1.5 \lesssim$ [Fe/H] $\lesssim -0.8$), with a sequence of ``high-$\alpha$'' stars, showing an almost constant value at all metallicities (e.g., [Mg/Fe], [Si/Fe] $\sim$+0.3,+0.4; NLTE [O/Fe] $\sim$+0.5), and a sequence of ``low-$\alpha$'' stars, at 0.1-0.2 dex lower values and with a slightly declining trend for increasing [Fe/H] \citep[e.g.,][]{Nissen1997, nissenschuster10, Ramirez2012, Hawkins2015}; the ``high-$\alpha$'' and ``low-$\alpha$'' sequences can also be traced in a number of other elements, such as [Cu/Fe], [Zn/Fe], [Ba/Y], [Na/Fe], [Al/Fe], [Ni/Fe], [(C+N)/Fe] \citep[e.g.,][]{nissenschuster11, Hawkins2015}. Such chemical patterns have been interpreted as ``high-$\alpha$'' stars likely forming in regions with a star formation rate high enough that only massive stars and type~II supernovae contributed to the chemical enrichment; on the other hand, ``low-$\alpha$'' stars most likely originated in environments with a slower chemical evolution, experiencing enrichment also from type Ia supernovae and low-mass asymptotic giant branch (AGB) stars. Typically, the ``low-$\alpha$'' sequence is attributed to accreted systems. Interestingly, the analysis of space velocities of a few dozen halo stars of the solar neighborhood shows that ``low-$\alpha$'' stars have on average more eccentric orbits than ``high-$\alpha$'' stars, allowing them to reach larger apocenter distances and larger heights above the Galactic plane. This essentially indicates that ``low-$\alpha$'' stars, possibly born in accreted systems, are likely to belong to the outer-halo population \citep[e.g.,][]{Nissen1997, Fulbright2002, Roederer2009, nissenschuster10}. On the other hand, APOGEE spectra of 3200 giants have shown that the ``high-$\alpha$'' sequence appears chemically indistinguishable from the canonical thick disk, with both components exhibiting a high degree of chemical homogeneity \citep{Hawkins2015}; an interpretation offered by Hawkins et al. for this finding is that the gas from which the inner regions of the MW halo formed was also the precursor of the thick disk. However, the aforementioned work does not carry out an analysis of how the chemical abundance properties of the various Galactic components might vary spatially, hence it is not possible to establish where the transition from ``canonical thick disk \& halo'' to accreted halo occurs. \citet{Fernandez-Alvar2015} analyze the trends in Ca, Mg, and Fe abundances as a function of Galactocentric distance out to $\sim$80 kpc for almost 4000 stars with low resolution spectroscopy from surveys within SDSS and, in the range $-1.6 <$[Fe/H]~$<-0.4$, find a decreasing trend for [Ca/Fe] but an increasing trend for [Mg/Fe]. \citet{Fernandez-Alvar2017} extend their previous analysis to several other chemical elements, but over a smaller range in Galactocentric distances ($5 < r_g \mathrm{[kpc]} < 30$), using infrared spectroscopy of giants from APOGEE; the median [X/Fe] of $\alpha$ elements is lower by 0.1dex (or more for O, Mg, S) for stars at $r_g >$15kpc and [M/H]$> -1.1$ with respect to stars at smaller distances, and differences are also detected in other elements (Ni, K, Na and Al). This confirms that ``low-$\alpha$'' stars are found at large Galactocentric distances.
It should be pointed out that even the so-called ``low-$\alpha$'' stars have chemical abundance patterns that do not match those of stars in the early-type dwarf galaxy satellites of the MW when compared at the same metallicity; in other words, in MW early-type satellites the decline (``knee'') in [$\alpha$/Fe] (or individual $\alpha$-elements over Fe) is detected at much lower metallicities than in the halo (e.g., in [Mg/Fe]: at [Fe/H]$\sim -2$ in the Fornax dwarf spheroidal galaxy [hereafter, dSph], Hendricks et al. \citeyear{hendricks14}, Lemasle et al. \citeyear{lemasle14}; [Fe/H]$\sim -1.6$ in the Sculptor dSph, Tolstoy, Hill \& Tosi \citeyear{Tolstoy2009}; [Fe/H]$\sim -2.8$ in the Draco dSph, Cohen \& Huang \citeyear{Cohen2009}). The surviving MW satellites are on orbits that make them unlikely to contribute halo stars passing through the Solar Neighborhood \citep{Roederer2009}. The low metallicity part of the halo might then be compatible with having been assembled from the shredded ancient stellar component (formed prior to the time when SNIa started contributing significantly to the Fe production) of systems resembling the progenitors of the MW early-type satellites; however, the super-solar values seen in the halo out to much larger metallicity most likely require the supposedly accreted component to have formed in environments experiencing a higher initial SFR anyway \citep[see e.g.,][]{Fulbright2002, venn04}. Recently, low resolution spectroscopic data from SDSS/SEGUE allowed the $\alpha$-knee of likely Sag stream member stars to be placed at [Fe/H]$\sim -1.3$, only slightly more metal-poor than in the MW halo \citep{deBoer2014}: this might indicate that a small number of large systems experiencing an initial chemical enrichment similar to that of Sag stream stars could have contributed substantially to the build-up of the MW stellar halo at early times. Detailed chemical abundance properties of distant halo stars, for a variety of elements with different nucleosynthetic origins, allow a more refined comparison to the properties of MW satellites and of the inner versus outer halo. This requires time-consuming high resolution spectroscopy of typically faint stars. The only study in which detailed chemical abundance properties of the stellar halo have been analyzed as a function of distance from the Galactic center for a large number of elements derived from high resolution spectroscopy is the work by \citet{Fernandez-Alvar2017} based on APOGEE infrared spectra. However, the authors do not carry out a comparison between the halo chemical abundance properties and those of MW satellites, and do not explicitly take into account the possible presence of Sag stream stars and how this might affect the interpretation. In this work, we derive chemical abundances from high resolution optical spectroscopy for a sample of 28 halo stars which can be considered as highly likely outer-halo objects, due to their large present-day Galactocentric distances ($>15$ kpc) and heights from the MW mid-plane (|z|$>$9kpc); in this way we by-pass the typically large uncertainties in the measured space kinematics and related orbital properties that affect the assignment of stars presently found in the solar neighborhood to the inner- or outer-halo population. We then interpret the chemical abundance trends for this sample of outer halo stars in the light of what has been observed for solar neighborhood samples and MW satellites, including the LMC, factoring in the possible presence of Sag stream stars in our sample.
Our sample is genuinely new, with only one star overlapping with APOGEE \citep{Ahn2014} measurements; it is competitive in size (e.g., there are $\sim$50 stars beyond a Galactocentric distance of 15kpc in Fernandez-Alvar et al. 2017) and provides abundances for several elements which are not measured from APOGEE spectra (Sc, Cr, Co, Cu, Zn, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu), as well as for a set of elements in common (C, Mg, Ti, Si, Ca, O, Al, Na, Ni, Mn, V), handy for comparisons of trends. The article is organized as follows: in Sect.~\ref{sec:data} we describe the sample selection, the observing facilities used and the data reduction procedure; Sect.~\ref{sec:ews} summarizes the method for equivalent width and radial velocity determination, and in Sect.~\ref{sec:analysis} we explain the analysis performed for deriving the elemental abundances. We proceed to comparing our radial velocities, metallicities and distances with results in the literature for the same stars in Sect.~\ref{sec:comp}. In Sect.~\ref{sec:trends} we comment on the chemical abundance trends found for our sample of halo stars, and compare them with those of solar neighborhood samples and MW satellites, including Sag and the LMC. Sect.~\ref{sec:substructures} is devoted to exploring whether the stars in our sample might belong to known substructures. We conclude with a discussion and summary in Sect.~\ref{sec:summary}. \section{Sample, observations and data reduction} \label{sec:data} Our sample consists of 28 individual red giant branch (RGB) stars observed at high spectral resolution with HET/HRS, Magellan/MIKE and VLT/UVES. Their location on the sky is shown in Fig.~\ref{fig:location}. Three stars were observed both with HET/HRS and VLT/UVES to test the consistency of the results from the different facilities. The details of the target selection, instrument set-ups and data reduction will be given below. Table~\ref{table:atmo} lists the Julian dates of the observations, the total exposure times per target and the combined signal to noise per pixel, together with the adopted stellar parameters and radial velocities. Table~\ref{table:Gaia} lists the equatorial and galactic coordinates provided by the Gaia satellite in the first data release, as well as the Gaia DR1 ID and the m$_\mathrm{G}$ magnitude \citep{BVP16,PBB16,LLB16,vLED17,ALB17,ERD17,CEM16}. \begin{figure*} \centering \includegraphics[width=\hsize]{pan-STARRS1_rgb_lb_hetmikeuves_2.png} \caption{Location of our targets overlaid on an RGB rendering of the distribution of Milky Way halo stars in an Equatorial Carr\'{e} view. The latter has been produced by combining the 8 (blue channel), 15 (green channel), 25 (red channel) kpc slices of the Pan-STARRS1 3$\pi$ Survey data-set from https://zenodo.org/record/60518\#.WHTxMHqvelM, obtained by applying a matched-filter technique to the Pan-STARRS1 3$\pi$ Survey data-set, using an [Fe/H]=-1.5, 12 Gyr old model in the g,r bands (for details, we refer the reader to Bernard et al. \citeyear{Bernard2016}). The grid is in Galactic coordinates.} \label{fig:location} \end{figure*} \subsection{The sample} Most of our target RGB stars were selected from the Spaghetti survey sample \citep[e.g.,][]{Morrison2000, Morrison2001, Dohm-Palmer2001, Morrison2003} as published in Starkenburg et al. \citeyear{Starkenburg2010, Starkenburg2011} (21 stars); of the remaining 7 stars, six targets were drawn from the catalog of distance determinations for the SEGUE K Giants by \citet{Xue2014} and one from the APOGEE sample.
The main criterion for selection was for the giants to be placed at Galactocentric distances\footnote{We assumed that the Sun is found at 8 kpc from the Galactic center.} $r_g>$15\,kpc. Additionally, we preferred targets bright enough to be within the reach of high resolution spectroscopic observations on the chosen facilities in a reasonable exposure time ($V \lesssim$17.5) and made sure that the selected targets would cover a large metallicity range, in particular the [Fe/H] region where the abundances of $\alpha$-elements in solar neighborhood halo samples differs the most from those of stars in classical dSphs, that is beyond [Fe/H]$=-1.6$. \subsection{Observations} The observations were carried out with three high resolution spectrographs: \begin{itemize} \item MIKE (Magellan Inamori Kyocera Echelle) attached to the $6.5$m Magellan telescope at Las Campanas Observatory, Chile \citep{BSG03}. Its blue and red arms cover simultaneously the wavelength ranges $\sim3350-5060$\AA\ and $4860-9400$\AA\,; however, we only use the portion of the spectra redder than 3700\AA\, due to the low signal to noise ratio (S/N) of the spectra of our cool target stars in the bluest regions of the wavelength range. With the chosen $1$\arcsec\ slit, the resolving powers are $\sim 28\,000$ and $\sim 22\,000$ in the blue and red ranges respectively. A $2\times 2$ pixels binning was adopted on the detectors. The total exposure times varied between $4400$ and $5400$ seconds, distributed into two or three successive exposures. Observations were carried out in visitor mode in 2 nights (Mar 8 and June 19, 2014) under program CN2014A-20 (PI: Minniti). \item UVES (UV-Visual Echelle Spectrograph) attached to the $8.2$m Kueyen UT2 unit of the VLT telescope at Paranal Observatory operated by ESO, Chile \citep{DDK00}. It was used in the Dichroic \#1 mode, providing a spectral coverage $\sim3500-4525$\AA\ in the blue, and $\sim 4784-5759$\AA\ and $\sim 5838-6805$\AA\ in the red, with a resolving power $R\sim 45\,000$ for a $1$\arcsec\ slit. A $2\times 2$ pixels binning was used, as for MIKE. The exposure time was either $5\times 3000$ seconds ($5$ exposures) or $3\times 2400$ seconds. Observations were carried out in service mode under program 093.B-0615 (PI: Battaglia, 45h), and the different exposures on a given star are often separated in time by as much as a whole year. \item HRS (High Resolution Spectrograph) attached to the $10$m Hobby-Eberly Telescope (HET) at McDonald Observatory, Texas, USA \citep{T98,RAB98}. The instrument was configured to HRS\_15k\_central\_600g5822\_2as\_2sky\_IS0\_GC0\_2x5 to achieve R=18,000 spectra covering 4825\AA\ to 6750\AA\,. The data were acquired as part of normal queue scheduled observing \citep{Shetrone2007} under program UT13-2-007 (PI: Shetrone). The targets were observed between February and June of 2013 for a total of 33.7 hours of shutter open time. Directly after each target a Th-Ar exposure was taken and on nearly every night (weather permitting) a radial velocity standard was observed during twilight. The total exposure times varied between $6000$ and $12000$ seconds, distributed into two to five exposures of $1800$, $2400$, or $3000$ seconds spread over a few weeks. \end{itemize} \subsection{Data reduction} \subsubsection{MIKE} The spectra were reduced with the pipeline written by Dan Kelson\footnote{http://code.obs.carnegiescience.edu/mike} and based on the Carnegie Python Distribution (CarPy)\footnote{http://code.obs.carnegiescience.edu/carnegie-python-distribution}. 
The pipeline provides flat-fielded, optimally extracted and wavelength calibrated 1-D spectra for the successive spectral orders. We normalized them to the continuum in a preliminary way and merged them following the method proposed by \citet{A08}, using the IRAF package\footnote{IRAF is distributed by the National Optical Astronomy Observatory (NOAO), which is operated by the Association of Universities for Research in Astronomy (AURA), Inc., under cooperative agreement with the U.S. National Science Foundation}. The 1-dimensional spectra resulting from order merging were then visually examined and the remaining obvious cosmics removed by hand, using the IRAF \verb+splot+ subroutine. \subsubsection{UVES} The reduction was done with the ESO UVES pipeline (release 5.09) with optimal extraction. Cosmics and other anomalies (e.g., poorly subtracted telluric emission lines) were suppressed by hand, as for the MIKE spectra. \subsubsection{HRS} The spectra were reduced with IRAF ECHELLE scripts. The standard IRAF scripts for overscan removal, bias subtraction, flat fielding and scattered light removal were employed. For the HRS flat field we masked out the Li I, H I and Na D regions because the HET HRS flat field lamp suffered from faint emission lines. The sky fibers were extracted in the same manner as the star fibers. We also extracted the spectra of a sky flat and used that to determine the throughput differences between the sky fibers and the object fibers. The sky fibers were then scaled by this value and subtracted from the star flux. The spectra were combined into a single long spectrum for the blue and red chips. \section{Equivalent widths and radial velocity measurements} \label{sec:ews} \subsection{MIKE}\label{section:MIKE} To normalize the spectra to the continuum and determine line equivalent widths (EW), we have applied the 4DAO avatar \citep{M13} of the DAOSPEC code\footnote{DAOSPEC has been written by P. B. Stetson for the Dominion Astrophysical Observatory of the Herzberg Institute of Astrophysics, National Research Council, Canada.} \citep{SP08,SP10} to each of the blue and red spectral ranges, after masking the numerous telluric lines that plague the red range. In some cases we have divided the blue range in two portions, typically below and above $4200$~\AA, because of the very poor S/N of the bluer side, whose contribution is penalized with a low weight. The code also computes the radial velocity (RV) for each identified line and gives the mean RV and its rms scatter for each spectral range. The RVs given in Table~\ref{table:atmo} are weighted means of the RVs determined for each spectral range treated by the 4DAO code. The weights are the reciprocal of the squared respective errors, the latter being defined as the rms scatter given by DAOSPEC divided by the square root of the number of identified lines. The errors on the mean RV values correspond to their rms scatter divided by the square root of the number of spectral ranges (only 2 or 3 in practice). The DAOSPEC code fits saturated Gaussians to the line profiles and succeeds well for faint lines, but its EW estimate is biased for strong lines \citep{KC12}. The limiting EW above which the DAOSPEC determination becomes significantly biased depends on spectral resolution: the higher the resolution, the sooner a bias appears with increasing EW. To evaluate the bias, we have determined manually the EWs of a set of well isolated lines in the star \#13, using both direct integration and Gaussian fit. 
Both methods give similar results, and due to the modest spectral resolution used here, the bias appears only for $EW > 160$\,m\AA, as shown in Fig.~\ref{fig:mike_ew_bias}. Therefore, we decided to keep only lines with $EW\leq 160$\,m\AA\ for our analysis. The DAOSPEC code provides formal errors on the EWs that do not include systematic effects such as an uncertain continuum level. To make them more realistic, we arbitrarily add a $5$\% error in quadrature to the error computed by DAOSPEC. This avoids assigning overly contrasting weights to some lines when computing mean abundances. \begin{figure} \centering \includegraphics[width=\hsize]{xue4_ew_gauss.pdf} \caption{Comparison of the EWs determined by DAOSPEC with those obtained by ``manual'' Gaussian fits for clean lines, for star \#13 of the MIKE sample. The red curve shows the bias found by Kirby \& Cohen (2012) for their spectra with a resolution $R\gtrsim 60\,000$.} \label{fig:mike_ew_bias} \end{figure} \subsection{UVES} We applied the same method as for MIKE to determine the continuum and EWs. However, because of the higher resolving power of UVES, we had to correct all EWs for the bias, which we determined in the same way as above, but here for all stars of our UVES sample. The difference between the EWs determined with DAOSPEC and manually is shown in Fig.~\ref{fig:uves_ew_bias}. \begin{figure} \centering \includegraphics[width=\hsize]{diff_ew_all.pdf} \caption{Comparison of the EWs determined by DAOSPEC with those obtained by manual direct integration (top panel) and Gaussian fits (bottom panel) for clean lines, for all ten stars of the UVES sample. The red curve shows the bias found by Kirby \& Cohen (2012) for their own spectra ($R\gtrsim 60\,000$). The green lines show the linear regressions below and above 100\,m\AA.} \label{fig:uves_ew_bias} \end{figure} Here too we exclude very strong lines ($EW > 210$\,m\AA) and adopt the following correction to the EWs given by 4DAO: \begin{equation} EW = 1.0044\cdot EW(4DAO)+0.5 ~~\mathrm{for}~~ EW(4DAO) < 100\,\mathrm{m\AA,} \end{equation} \begin{equation} EW = 1.11\cdot EW(4DAO)-10.06 ~~\mathrm{for}~~ 100 < EW(4DAO) < 210\,\mathrm{m\AA.} \end{equation} This correction function is continuous at $EW(4DAO)=100$\,m\AA. Since the correction is small, we keep the error on EW provided by DAOSPEC, after adding a $5$\% error in quadrature, as for the MIKE sample. The RVs were determined in the same way as for MIKE, and the values given in Table~\ref{table:atmo} are averaged over three determinations corresponding to the three UVES spectral ranges. The table also lists the velocities from the individual exposures, since some were observed as much as a whole year apart. However, we did not find any sign of evident RV variability in any of the ten UVES stars. \subsection{HRS} We have used the same method as for MIKE and UVES to determine the continuum and EWs. The resolution is lower than that of the MIKE spectra, and we verified that the EWs given by the 4DAO code do not need any correction. The S/N is lower on average for this sample than for the MIKE and UVES ones, so we used the UVES data rather than the HRS ones for the three stars that were observed with both instruments. For the sake of completeness and for comparison purposes, we have nevertheless determined independently the stellar parameters and abundances of these three objects (designated \#21 UVES, \#26 UVES, and \#28 UVES in the tables) on the basis of HRS spectra alone.
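For reference, the UVES recalibration and the error inflation just described can be written compactly as in the sketch below; the coefficients and thresholds are those quoted above, while the vectorised implementation itself is only illustrative.
\begin{verbatim}
import numpy as np

def correct_uves_ew(ew_4dao_mA):
    # Piecewise-linear correction of the 4DAO EWs (in mA); the two branches
    # meet at EW(4DAO) = 100 mA.  Lines above 210 mA are discarded (NaN).
    ew = np.asarray(ew_4dao_mA, dtype=float)
    corrected = np.where(ew < 100.0,
                         1.0044 * ew + 0.5,
                         1.11 * ew - 10.06)
    corrected[ew > 210.0] = np.nan      # very strong lines are excluded
    return corrected

def inflate_ew_error(ew_mA, err_daospec_mA):
    # Add a 5% relative error in quadrature to the formal DAOSPEC error.
    return np.hypot(np.asarray(err_daospec_mA, float),
                    0.05 * np.asarray(ew_mA, float))

ew_raw = np.array([45.0, 120.0, 180.0, 230.0])
print(correct_uves_ew(ew_raw))
print(inflate_ew_error(ew_raw, np.array([2.0, 3.5, 5.0, 7.0])))
\end{verbatim}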
We note that the spectrum of \#28 UVES has a very poor S/N, so the corresponding results must be taken with caution. The EWs are provided in Tables \ref{table:lines_a}, \ref{table:lines_b}, \ref{table:lines_c}, \ref{table:lines_d}, \ref{table:lines_e}, \ref{table:lines_f}, and \ref{table:lines_g}. Radial velocities were determined by cross-correlation with the IRAF task fxcor, using the Arcturus spectrum \citep{Hinkle2000} as template. The heliocentric correction was made using the IRAF task rvcorrect. A correction for zero points was made based on the radial velocity standard taken in twilight. The final heliocentric velocities are given in Table~\ref{table:atmo}; also in this case we list the radial velocities from the individual exposures, due to the different dates of observation in some cases. In this sample, star \#03 is found to be an RV variable. The spectra were then shifted and combined using the IRAF tasks dopcor and scombine. \section{Analysis} \label{sec:analysis} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{spectra3.pdf} \caption{Examples of observed spectra and best-fitting models over a wavelength region encompassing absorption lines from different elements (see labels). The first row refers to star \#7, observed with HET/HRS. The second and last rows show star~\#26, observed with HET/HRS and VLT/UVES, respectively.} \label{fig:spectra} \end{figure} \subsection{Models} We used the MARCS 1-D spherical atmosphere models with standard abundances, downloaded from the MARCS web site\footnote{marcs.astro.uu.se} \citep{GEE08}, and interpolated using Thomas Masseron's \verb+interpol_modeles+ code available on the same site. ``Standard composition'' means that the abundances are scaled solar, except for the $\alpha$ elements that are overabundant relative to solar by $0.4$~dex for [Fe/H]$\leq -1.0$, by $0.3$~dex for [Fe/H]$=-0.75$, by $0.2$~dex for [Fe/H]$=-0.5$ and by $0.1$~dex for [Fe/H]$=-0.25$. We adopted those computed for a microturbulence velocity $v_\mathrm{turb}= 2$~\kms\ and for $1$ solar mass. \subsection{Codes} All abundances were computed with the \verb+turbospectrum+ code \citep{AP98,P12}. It assumes local thermodynamic equilibrium (LTE) but is able to compute the line transfer in spherical geometry, and includes continuum scattering in the source function. We used this code in the ABFIND mode to get abundances from the EWs. In a few cases we also used \verb+turbospectrum+ to compute synthetic spectra and determine abundances through a fit to the observed spectrum, especially for blended zirconium lines. For this we developed a python routine \verb+fitspec.py+ which generates synthetic spectra with \verb+turbospectrum+ for various abundances of the element of interest, convolves them with Gaussians of various FWHMs, and finds the optimal abundance, FWHM, and $\lambda$ shift in the relevant short spectral range. The convolution is done with the \verb+faltbo3+ utility code provided with \verb+turbospectrum+. This code was also used to determine the carbon abundance, based on part of the CH molecular G band, in the $4322-4325$\,\AA\ spectral range. As in \citet{JNM15}, we assumed a solar [N/Fe] ratio and a carbon isotopic ratio $^{12}$C/$^{13}$C$=6$ typical of the tip of the RGB, and we adopted the [Mg/Fe] ratio as a proxy for [O/Fe] when the latter was not measured. To compute hyperfine structure (HFS) corrections of some lines, we used Chris Sneden's MOOG code\footnote{http://www.as.utexas.edu/$\sim$chris/moog.html} (2010 version) with the \verb+blend+ driver.
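The grid-search logic behind \verb+fitspec.py+ can be sketched as follows; the synthetic spectra are assumed to have been computed beforehand (e.g.\ with \verb+turbospectrum+) on the same wavelength grid as the observation, and a plain sum of squared residuals stands in for whatever weighting the actual routine applies.
\begin{verbatim}
import numpy as np

def gaussian_kernel(fwhm, dlam, nsig=5.0):
    # Normalised Gaussian smoothing kernel sampled on the wavelength step dlam.
    sigma = fwhm / 2.3548
    x = np.arange(-nsig * sigma, nsig * sigma + dlam, dlam)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    return k / k.sum()

def fit_abundance(lam, flux_obs, synth_grid, fwhm_grid, shift_grid):
    # Brute-force search for the (abundance, FWHM, shift) that minimises
    # the summed squared residuals over a short spectral window.
    # synth_grid: dict {abundance: synthetic flux sampled on lam}.
    dlam = lam[1] - lam[0]
    best_chi2, best_params = np.inf, None
    for ab, flux_syn in synth_grid.items():
        for fwhm in fwhm_grid:
            smoothed = np.convolve(flux_syn, gaussian_kernel(fwhm, dlam),
                                   mode='same')
            for shift in shift_grid:
                # evaluate the shifted, smoothed model on the observed grid
                model = np.interp(lam, lam + shift, smoothed)
                chi2 = np.sum((flux_obs - model) ** 2)
                if chi2 < best_chi2:
                    best_chi2, best_params = chi2, (ab, fwhm, shift)
    return best_params
\end{verbatim}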
More details on the HFS corrections are given below. Examples of observed spectra, with the best-fitting models overlaid, are shown in Fig.~\ref{fig:spectra}. \subsection{Line list and solar abundances} We adopted the line list of \citet{RFS08,RST10}, and complemented it with data from the VALD database \citep{PKR95,RPK97,KPR99,KRP00,RPK15}. The line wavelengths and oscillator strengths are given in Tables~\ref{table:lines_a}, \ref{table:lines_b}, \ref{table:lines_c}, \ref{table:lines_d}, \ref{table:lines_e}, \ref{table:lines_f}, and \ref{table:lines_g}. The adopted solar abundances are taken from \citet{AG89} and \citet{GS98} and displayed in Table~\ref{tab:abundances}. \subsection{Preliminary stellar parameters}\label{sec:prel} In order to derive the stars' atmospheric parameters, we feed the spectroscopic analysis with first guesses for the effective temperature \teff, gravity \ensuremath{\log g}, microturbulent velocity $v_\mathrm{turb}$ and [Fe/H]. As initial [Fe/H] values we adopt those given in the original works from which the targets were selected (all at low spectral resolution, except for the star drawn from APOGEE); \teff\ and \ensuremath{\log g}\, are instead derived photometrically, and $v_\mathrm{turb}$ follows from the empirical relation $v_\mathrm{turb} = 2.0 - 0.2 \times \log g$ by \citet{Anthony-Twarog2013}. The majority of the stars have SDSS ugriz photometry and JHK photometry from the UKIDSS Large Area Survey\footnote{The only exception is star \#21 that lacks a measurement of the J-magnitude in the UKIDSS survey and for which we used the 2MASS value.}, which allows us to constrain the stars' photometric effective temperatures $T_\mathrm{eff, p}$ by using several of the \citet{Ramirez2005} effective temperature-color-[Fe/H] calibrations $T_\mathrm{eff, p, col}$ (specifically T$_\mathrm{VI}$, T$_\mathrm{VJ}$, T$_\mathrm{VH}$ and T$_\mathrm{VK}$); on the other hand, for stars \#20, 22, 23, 24, 25, 27, we have used only $T_\mathrm{eff, p, VI}$, due to the availability of only Washington photometry, transformed into \mv \& \vi\ using the relation in Morrison et al. \citeyear{Morrison2003}. We transformed the SDSS photometry into Johnson-Cousins \mb, \mv\ and \mi\ using the transformations involving $gri$ in Lupton (2005)\footnote{\tiny{http://www.sdss3.org/dr8/algorithms/sdssUBVRITransform.php\#Lupton2005}}. The \bv, \vi, \vj, \vh, \vk\ colors are dereddened using the E(B-V) reddening from the \citet{Schlegel1998} maps, with the \citet{Schlafly2011} recalibration. $T_\mathrm{eff, p}$ is determined as the weighted average of the individual $T_\mathrm{eff, p, col}$ estimates, with the error given by the scatter between the $T_\mathrm{eff, p, col}$, to which we add in quadrature the average error in T$_\mathrm{eff, p, col}$.\footnote{The error in each of the T$_\mathrm{eff, p, col}$ is derived by propagating the error in [Fe/H] (assumed to be a conservative 0.2dex) and in color, where the latter includes the scatter in the transformations between SDSS and Johnson-Cousins bands. To this we add in quadrature the scatter found by Ramirez \& Melendez (2005) around their \teff-[Fe/H]-color relations (40, 38, 32, 28K for T$_\mathrm{VI}$, T$_\mathrm{VJ}$, T$_\mathrm{VH}$, T$_\mathrm{VK}$ respectively). Finally, a further 50K is added in quadrature to account for the scatter between direct temperatures and those from the infrared flux method.} The error in $T_\mathrm{eff, p}$ typically ranges between 60 and 80K. We exclude T$_\mathrm{BV}$ as it is the most sensitive to uncertainties in [Fe/H].
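A minimal sketch of the combination of the colour-based temperatures is given below; the inverse-variance weighting is our assumption about the exact averaging scheme, while the 50~K term and the $v_\mathrm{turb}$ relation follow the prescriptions quoted above, and the numerical values are purely illustrative.
\begin{verbatim}
import numpy as np

def combine_teff(teff_col, err_col):
    # teff_col, err_col: per-colour temperatures (K) and their errors.
    teff_col = np.asarray(teff_col, float)
    err_col = np.asarray(err_col, float)
    w = 1.0 / err_col**2                      # assumed inverse-variance weights
    teff_p = np.sum(w * teff_col) / np.sum(w)
    scatter = np.std(teff_col, ddof=1)        # scatter between calibrations
    err_p = np.sqrt(scatter**2 + np.mean(err_col)**2 + 50.0**2)  # + IRFM term
    return teff_p, err_p

def vturb_guess(logg):
    # Empirical first guess: v_turb = 2.0 - 0.2 log g (km/s).
    return 2.0 - 0.2 * logg

# illustrative numbers only
teff_p, err_p = combine_teff([4605., 4570., 4630., 4590.], [55., 48., 45., 42.])
print(round(teff_p), round(err_p), vturb_guess(1.5))
\end{verbatim}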
The first guess logg is obtained by finding the point along the RGB locus (logg $\le$ 3.5) with the closest matching \teff\ and [Fe/H] in a set of Dartmouth isochrones \citep{Dotter2008} of different ages, [Fe/H] and [$\alpha$/Fe] (age= 4, 8, 12 Gyr; -2.4 $<$ [Fe/H] $<$ -0.60, with a spacing of 0.2dex; [$\alpha$/Fe]=0, 0.2, 0.4); we explore different values of the age and [$\alpha$/Fe] because these quantities are in practice unknown (at this stage) and we do not wish to fix them a priori. We derive the error in logg$_{\rm p}$ by repeating N=100 times the search for the best-matching isochrone, where each time the effective temperature and [Fe/H] are randomly drawn from a Gaussian distribution centered on the input $T_\mathrm{eff, p}$ and [Fe/H] and with $\sigma$ given by the corresponding errors. \subsection{Adopted spectroscopic stellar parameters} We started from the photometrically estimated effective temperatures, surface gravities and microturbulent velocities as described in the previous section, and applied the usual spectroscopic diagnostics by plotting the \ion{Fe}{i} abundance as a function of excitation potential to constrain \teff\ and as a function of EW to constrain v$_\mathrm{turb}$. Following \cite{tafel2010} and \cite{JNM15}, we discarded the Fe lines with $\chi_\mathrm{exc}< 1.4$~eV in order to minimize NLTE effects as much as possible, as well as lines fainter than $\sim20$\,m\AA\ and stronger than $\sim 200$\,m\AA\ ($160$\,m\AA\ for the MIKE sample). In the \ion{Fe}{i} abundance vs. $EW$ diagram, we used the {\sl predicted} rather than the observed EWs to avoid any bias on the v$_\mathrm{turb}$\ determination, following \cite{M84}. The surface gravity was determined as usual from the ionization balance, by requiring the equality of the \ion{Fe}{i} and \ion{Fe}{ii} abundances. When a change in \teff\ and \ensuremath{\log g}\ was required by the relevant spectroscopic diagnostic, we tried to remain within $2\sigma$ of the photometric estimate, while allowing the slopes of the diagnostic plots to differ from zero by no more than $2\sigma$. In only three cases did we have to lower \teff\ by as much as $\sim2.5\,\sigma$ in order to fulfill the spectroscopic diagnostic (\#7, 9, 19). Regarding \ensuremath{\log g}, it was possible to maintain it within $2\sigma$ of the photometric estimate in all cases but one (\#6). This trade-off between photometric and spectroscopic criteria often implies a spectroscopic \teff\ slightly lower than the photometric one (and/or a slightly negative slope in the \ion{Fe}{i} versus excitation potential plot) and a larger \ion{Fe}{ii} abundance relative to the \ion{Fe}{i} one. The latter difference $\Delta_{II-I}$ does not exceed $2\sigma$ in most cases ($\sigma$ being defined as the rms dispersion of the individual \ion{Fe}{ii} abundances divided by the square root of the number of lines), especially in the MIKE sample (where the only exception is star \#9: $3\sigma$) and in the HRS one (except for star \#7: $2.3\sigma$). In the UVES sample, which benefits from the best resolution and S/N, half the stars show differences larger than $2\sigma$ (between $2.2$ and $4.3\sigma$). This indicates a higher visibility of systematics, such as NLTE effects, in these high quality data.
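The spectroscopic diagnostics described above lend themselves to a compact numerical check: the sketch below computes the significance of the two slopes and the difference $\Delta_{II-I}$ in units of $\sigma$ from per-line abundances. The input arrays and the use of a simple least-squares slope are illustrative assumptions.
\begin{verbatim}
import numpy as np

def slope_and_sigma(x, y):
    # Least-squares slope of y vs x and its standard error.
    x, y = np.asarray(x, float), np.asarray(y, float)
    p, cov = np.polyfit(x, y, 1, cov=True)
    return p[0], np.sqrt(cov[0, 0])

def diagnostics(chi_exc, ew_pred, abund_fe1, abund_fe2,
                chi_min=1.4, ew_min=20.0, ew_max=200.0):
    # Keep only the Fe I lines retained for the diagnostics.
    chi_exc, ew_pred, abund_fe1 = map(np.asarray, (chi_exc, ew_pred, abund_fe1))
    use = (chi_exc >= chi_min) & (ew_pred >= ew_min) & (ew_pred <= ew_max)
    s_teff, e_teff = slope_and_sigma(chi_exc[use], abund_fe1[use])  # Teff check
    s_vt, e_vt = slope_and_sigma(ew_pred[use], abund_fe1[use])      # v_turb check
    # Ionization balance: Fe II minus Fe I, in units of the Fe II line scatter.
    d21 = np.mean(abund_fe2) - np.mean(abund_fe1[use])
    sig = np.std(abund_fe2, ddof=1) / np.sqrt(len(abund_fe2))
    return {'slope_chi_sigma': s_teff / e_teff,
            'slope_ew_sigma': s_vt / e_vt,
            'Delta_II_I_sigma': d21 / sig}
\end{verbatim}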
We chose to be more tolerant regarding ionization equilibrium in the case of very metal-poor stars ([Fe/H]$\lesssim -2$), because there are indications in the literature that NLTE effects are more pronounced for them than at solar metallicity. For instance, \citet{MFT12} compute the NLTE corrections to both the \ion{Fe}{i} and \ion{Fe}{ii} abundances for two models with \teff$=4500$\,K, \ensuremath{\log g}$=1.0$, and [Fe/H]$=-1.50$ or [Fe/H]$=-2.00$; they find $\Delta_{II-I}=0.13$ for [Fe/H]$=-1.50$ and $\Delta_{II-I}=0.15$ for [Fe/H]$=-2.00$ when the stars are analyzed assuming LTE. This is consistent with our empirical $-0.02 < \Delta_{II-I} < 0.20$ range for UVES. $\Delta_{II-I}$ reaches $0.24$ in two HRS stars, but the S/N of their spectra is poor and this represents no more than $2\sigma$. Disregarding the photometric \ensuremath{\log g}\ estimate and strictly forcing ionization equilibrium would imply decreasing not only \ensuremath{\log g}\ but also \teff, because these quantities are correlated. We preferred to remain not too far from the photometric estimates, which are physically sound. The final stellar parameters are given in Table~\ref{table:atmo}. The typical error on \teff\ is about $100$~K, that on \ensuremath{\log g}\ about $0.2$~dex and that on v$_\mathrm{turb}$\ about $0.2$~\kms. \subsection{Hyperfine structure} The hyperfine structure (HFS) mainly affects elements with an odd atomic mass, namely Sc, Mn, Co, Cu, and Eu, but also the odd isotopes of Ba. It broadens the line, thereby alleviating its saturation, so that estimating the abundance directly from the EW results in an overestimate if the HFS is not taken into account. For a given EW, the HFS correction increases in absolute value as v$_\mathrm{turb}$\ decreases, because both the HFS and the microturbulence tend to desaturate the line. Faint lines remain unaffected, being far from saturation, but for strong lines on the plateau of the curve of growth the HFS correction may exceed $1.5$~dex in absolute value (the HFS correction is always negative). As mentioned before, we estimated the HFS corrections to the raw EW abundances using the MOOG code, running it with both the \verb+blend+ driver and the \verb+abfind+ driver and computing the abundance difference, as described in \citet{NCJ12} and \cite{JNM15}. We used the HFS components with their oscillator strengths given on the Kurucz web site\footnote{http://kurucz.harvard.edu/linelists.html} for Co, Cu, most Ba lines and Eu, and by \citet{PMW00} for Sc and Mn. For \ion{Ba}{ii}, which has both odd and even isotopes (the $\lambda 4934$ line in particular has a wide HFS), we assumed a solar mix with an $18$\% fraction of odd isotopes \citep{LPG09}. For Eu, we assumed equal abundances of the $A=151$ and $153$ isotopes \citep{ZCZ10}. \subsection{Abundances} The final abundances given in Table~\ref{tab:abundances} are the weighted means of the individual line abundances, the weights being the inverse variances of the single line abundances. These variances were propagated by \verb+turbospectrum+ from the errors on the corresponding EWs. We also give in the same table some upper limits evaluated from the small EWs of marginally identified lines, for stars \#10 (O), \#12 (Pr), and \#23 (Mn). Such upper limits concern only abundances given by one or two lines and are indicated in the relevant tables by a \verb+<+ sign before their value. The carbon abundances are given in Table~\ref{tab:carbon_ab}.
They are given only for the stars measured with MIKE and with UVES, because the spectral range of HRS does not include the CH G band. The spectrum of star \#12 is too noisy in the wavelength range of interest to provide a reliable estimate of the C abundance. For species represented by only one line, the errors indicated have to be considered as lower limits, because they include only the formal error given by the 4DAO code (plus 5\%), but not the uncertainty on the continuum level, the oscillator strength, a possible blend with a small telluric line (especially in the case of oxygen), or a faint unrecognized cosmic hit. For the stars measured with HET/HRS, we did not discard very strong lines, and the abundances of some elements like Na, Mg, and Ba sometimes rely on lines with $EW > 200$\,m\AA\ and must be taken with caution: the Gaussian fit may not be quite appropriate for such lines, and the abundances depend more critically on broadening parameters that may be uncertain. \subsubsection{Errors} As mentioned in Sect.~\ref{section:MIKE}, we added a 5\% error in quadrature to the EW error estimated by the DAOSPEC code to make the errors on the EWs more realistic; thus, no EW has an error smaller than 5\%. These errors are generally larger than those obtained from the \citet{C88} formula revised by \citet{BIT08}. They are listed in Tables \ref{table:lines_a} to \ref{table:lines_g}. The errors listed in Table~\ref{tab:abundances} are defined in the same way as in \citet{JNM15}. \section{Comparison to previous works} \label{sec:comp} Both the Spaghetti survey \citep[e.g.,][see also Starkenburg et al. 2010 for additional constraints on the luminosity classification]{Morrison2000, Dohm-Palmer2001} and the SDSS-SEGUE survey were carried out at relatively low spectral resolution ($\sim$2.5-3.5\,\AA). Here we compare the heliocentric velocity and [Fe/H] estimates from these original sources with our determinations at $\sim$10 times higher spectral resolving power. Similarly, we revise the stars' distance determinations on the basis of the stellar gravity, temperature, [Fe/H] and [$\alpha$/Fe] from our analysis. \subsection{Heliocentric velocity and [Fe/H]} Figure~\ref{fig:comparisonvel} compares the heliocentric radial velocities (left) and metallicities [Fe/H] (right) obtained in this work to those obtained in the original sources. As mentioned before, the sample contains two stars detected as radial velocity binaries (star \#3, as determined from our HET observations, and \#11, also known as 2M12490495-0743456, whose binarity was detected through the APOGEE multiple visits); however, their velocities happen to agree well with the measurements in the literature. There are only a few cases for which the velocities disagree beyond the 3$\sigma$ level, and we cannot exclude that these velocity differences are due to the stars being unidentified radial velocity binaries. As for the metallicity, the comparison can be deemed satisfactory. As expected, the high resolution observations yield more precise [Fe/H] determinations than the original measurements at low spectral resolution. There is only one star with a clearly discrepant [Fe/H] (\#28): its Spaghetti survey metallicity was derived from the Mg triplet index, but was found to be very (and unusually) discrepant from the metallicity derived from the Ca~K line index, which returned [Fe/H] $= -1.30$, much closer to the high-resolution spectrum value (Starkenburg, priv. communication).
Star 2M12490495-0743456 was also observed by APOGEE and its SDSS DR13 abundances are in excellent agreement with our determinations, except for [Co/Fe] and [Ni/Fe], which nonetheless agree within 2$\sigma$; the abundances of [Al/Fe] and [V/Fe] are undetermined in our data (SDSS DR13: [Fe/H]$=-0.97 \pm 0.04$; [O/Fe]$= 0.21\pm 0.06$; [Mg/Fe]$= 0.18\pm 0.05$; [Al/Fe]$= -0.05\pm 0.14$; [Si/Fe]$= 0.17\pm 0.05$; [Ca/Fe]$= 0.27\pm 0.07$; [V/Fe]$= 0.57 \pm 0.24$; [Mn/Fe]$= -0.39 \pm 0.05$; [Co/Fe]$= 0.32 \pm 0.23$; [Ni/Fe]$= -0.01 \pm 0.04$. This work: [Fe/H]$=-0.96 \pm 0.11$; [O/Fe]$= 0.25 \pm 0.04$; [Mg/Fe]$= 0.24 \pm 0.11$; [Al/Fe]$=$ ND; [Si/Fe]$= 0.12 \pm 0.07$; [Ca/Fe]$= 0.30 \pm 0.13$; [V/Fe]$=$ ND; [Mn/Fe]$= -0.49 \pm 0.12$; [Co/Fe]$= -0.23 \pm 0.19$; [Ni/Fe]$= -0.17 \pm 0.09$). \subsection{Distances} In both the Spaghetti survey and \citet{Xue2014} the distance modulus (and hence the heliocentric distance $d_h$) of a star is found by comparing the star's apparent magnitude with the absolute magnitude of globular cluster giant-branch color-luminosity fiducials, interpolated at the observed star color and spectroscopic [Fe/H]. In the Spaghetti survey the distance errors are obtained from Monte Carlo simulations in which the effects of the color and metallicity errors are factored in; to the latter, 0.25~dex is added in quadrature to account for possible systematic calibration errors. On the other hand, \citet{Xue2014} adopt a probabilistic approach to propagate the errors in metallicities, magnitudes, and colors into distance uncertainties, and to also fold in prior information about the giant-branch luminosity function and the different metallicity distributions of the SEGUE K-giant targeting sub-categories. This type of determination implicitly assumes that the age and [$\alpha$/Fe] of halo stars compare well to those of the globular cluster fiducials, which might not hold for the whole sample: for instance, a few halo stars are known to have sub-solar [$\alpha$/Fe], and we also want to account for the possibility of the late accretion of stars that originated in satellites with prolonged star formation histories (e.g., Sag). Hence we relax the above assumption and revise the stars' distance determinations by applying the same method as in Sect.~\ref{sec:prel}, but using the stars' \ensuremath{\log g}, \teff, [Fe/H] and [$\alpha$/Fe] (= ([Ca/Fe] + [Mg/Fe] + [Ti/Fe])/3) derived in our spectroscopic analysis and repeating the fit for different ages (4, 8 and 12 Gyr); the [$\alpha$/Fe] grid of the isochrones now ranges from -0.2 to +0.6, with steps of 0.2~dex. Figure~\ref{fig:comparisondh} shows that our revised distances are typically larger than those in the literature, placing the great majority of the objects in the sample beyond the nominal separation between inner and outer halo. Assuming an age of 4 Gyr rather than 12 Gyr results in a 20-30\% distance increase for most of the stars. The determination of the distance errors is non-trivial: at face value, the 68\% confidence level obtained by repeating the fit with hundreds of Monte Carlo realizations of the stars' \ensuremath{\log g}, \teff, [Fe/H] and [$\alpha$/Fe], drawn from Gaussian distributions centered on the spectroscopically determined values, would yield relative errors of 30-40\%.
However, this is likely to be an overestimate, because in practice several combinations of the randomly drawn \ensuremath{\log g}, \teff, [Fe/H] and [$\alpha$/Fe] would be rejected by the spectroscopic analysis as incompatible with the spectroscopic diagnostics. Since in the following the distance estimates are only used as additional, possible indicators of the stars' membership in known substructures, we will take as the error the range of values spanned by the different assumed ages. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{comparison_velfeh_hetmikeuves_newHEThfs.pdf} \caption{Comparison between the heliocentric radial velocities (left) and metallicities [Fe/H] (right) obtained in this work and those obtained in the original sources.} \label{fig:comparisonvel} \end{figure*} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{revised_dh_040812gyr_newHEThfs.pdf} \caption{Comparison between the heliocentric distances derived in the literature and in this work; the different symbols indicate the different ages assumed in the isochrone fitting (see legend).} \label{fig:comparisondh} \end{figure} \begin{figure} \centering \includegraphics[width=\hsize]{abundances_cfe_hetmikeuves_newHEThfs.pdf} \caption{[C/Fe] as a function of luminosity for our sample (squares), split into stars that have spatial and kinematic properties compatible with membership to the Sag stream (green) and those not compatible (purple), as described in Sect.~\ref{sec:substructures} (see also Fig.~\ref{fig:sag}). The dashed line indicates the \citet{Aoki2007} criterion for identifying carbon-enhanced metal-poor stars when taking into account evolutionary effects after the first dredge-up; the solid line shows the trend of declining [C/Fe] with increasing stellar luminosity in three MW globular clusters as in \citet{Kirby2015}, which also agrees with the trends observed in that same work for the MW classical dSphs Ursa Minor, Draco, Sculptor and Fornax. In the corner we give a representative error bar.} \label{fig:cfe} \end{figure} \begin{figure*} \centering \includegraphics[width=\hsize]{abundances_alphas_hetmikeuves_MW_newHEThfs.pdf} \caption{Abundance of the $\alpha$ elements Ca, Mg, Si, \ion{Ti}{ii} and the combined $\alpha$ relative to iron (\ion{Fe}{I}) as a function of [\ion{Fe}{i}/H]. The sample is split into stars that have spatial and kinematic properties compatible with membership to the Sag stream (green) and those not compatible (purple). The small squares indicate the chemical abundances for literature samples of MW halo stars (black: Venn et al. (2004), Barklem et al. (2005), Ishigaki et al. (2012, 2013); red and blue: ``low-$\alpha$'' and ``high-$\alpha$'' populations as identified in Nissen \& Schuster 2010, 2011). We warn the reader that the global [$\alpha$/Fe] shown here and in Figs.~\ref{fig:alpha_dwarfs}, \ref{fig:baeu} has been calculated as the average of the abundance ratios to Fe of the individual $\alpha$-elements available in each of the catalogs.} \label{fig:alpha_MW} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{abundances_alphas_hetmikeuves_comp_newHEThfs.pdf} \caption{Abundance of the $\alpha$ elements Ca, Mg, Si, \ion{Ti}{ii} and the combined $\alpha$ relative to iron (\ion{Fe}{I}) as a function of [\ion{Fe}{i}/H]. The sample is split into stars that have spatial and kinematic properties compatible with membership to the Sag stream (green) and those unlikely to be compatible (purple).
The other symbols indicate the chemical abundances for samples of stars in MW satellites, as described in the legend (Fnx = Fornax, Scl = Sculptor).} \label{fig:alpha_dwarfs} \end{figure*} \begin{figure*}[!h] \centering \includegraphics[width=\hsize]{abundances_ironpeak_scmnni_hetmikeuves_newHEThfs.pdf} \caption{As the previous figure, but for the iron-peak elements Sc, Mn, Ni relative to iron (\ion{Fe}{I}) as a function of [\ion{Fe}{i}/H] (left: compared to MW halo samples; right: compared to samples of RGB stars in MW dwarf galaxies). The [Mn/Fe] abundances for MW dwarf galaxies are from \citet{NCJ12}.} \label{fig:ironp} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{abundances_nani_hetmikeuves_newHEThfs.pdf} \caption{Top: Abundance of Na relative to iron (\ion{Fe}{I}) as a function of [\ion{Fe}{i}/H]. Bottom: [Ni/Fe] vs [Na/Fe] for stars more metal-rich than [Fe/H]=-1.5. Left: compared to MW halo samples; right: compared to samples of RGB stars in MW dwarf galaxies.} \label{fig:nani} \end{figure*} \section{Chemical trends} \label{sec:trends} In this section we compare the chemical abundances of our sample of distant MW halo stars to those of MW halo stars from solar neighborhood samples \citep{venn04, barklem05, nissenschuster10, nissenschuster11, Ishigaki2012, ishigaki13} and to a set of MW satellite galaxies, namely Sculptor \citep{Tolstoy2009}, Fornax \citep{letarte2010, hendricks14, lemasle14}, the LMC (inner disk and bar: Van der Swaelmen et al. 2013), the core of Sag \citep{mcwilliam05,carretta10} and a sample of Sag stream stars by \cite{chou10}. For the MW comparison samples, given that our purpose is to compare chemical trends, we specifically focus on studies dealing with halo samples (or studies that allow halo stars to be selected by providing a membership probability to the various Galactic components) covering a large range in [Fe/H] and containing elemental abundances for a large number of elements and stars. It should be kept in mind that solar neighborhood samples of (kinematically selected) halo stars typically contain a mix of stars belonging to the inner- and outer-halo populations, in different proportions, whose orbits are such as to place them at the present day in the vicinity of the Sun (for the outer-halo population this implies highly elongated orbits), with the inner-halo population dominating at [Fe/H]$> -2$ \citep{Carollo2007, An2013, An2015}. For example, \cite{schuster12} computed the orbital properties of the ``high''- and ``low''-$\alpha$ halo stars detected in \cite{nissenschuster10, nissenschuster11} and showed the former to be essentially confined within the inner-halo region (the orbits reach at most a maximum Galactocentric distance of $\sim$15 kpc and a maximum height from the plane of 6-8 kpc), while the latter reach out to the outer-halo region (spanning a range of maximum Galactocentric distances from $\sim$5 kpc to 30-40 kpc and a maximum height from the plane of $\lesssim$18 kpc). In the case of \citet{ishigaki13} the halo sample is estimated to have maximum apocentric distances within 30-40 kpc, with the majority belonging to the inner-halo region. Although the large errors on the proper motion measurements of our sample of distant halo stars, which in UCAC5 are typically as large as the proper motions themselves, prevent us from reconstructing their orbital properties, the stars in our sample have large present-day Galactocentric distances (from 12 kpc to 73 kpc, with a median value of 32 kpc).
Since the present-day distance is smaller than, or can at most be equal to, the apocentric distance, this implies that our sample probes on average much farther out into the MW outer-halo region than our comparison MW halo samples. In this section, we display only the figures most relevant to highlight the main results, while the abundance ratios for the full set of elements derived in this work are shown in the Appendix. We will dedicate a separate subsection to discussing star~\#7, since its chemical properties appear distinct from the bulk of the sample in several elements. $\bullet$ {\bf Carbon} Figure~\ref{fig:cfe} shows that our measurements compare well with the decreasing trend of [C/Fe] as a function of increasing luminosity observed by \citet{Kirby2015} in four MW dwarf spheroidals and in three Galactic globular clusters. Clearly none of the stars in our sample is enhanced in [C/Fe] when compared to the criterion of \citet{Aoki2007}, which takes into account evolutionary effects after the first dredge-up. If we were to adopt the corrections by \citet{Placco2014} as a function of stellar gravity and metallicity (see their Fig.~9), the [C/Fe] of our sample would increase to approximately solar values, overlapping the distribution of MW halo stars by \citet{Lee2013} at the same metallicity but landing on its lower envelope. Probing out to R$\sim$20 kpc and $|z| \sim$15 kpc, \citet{Lee2017} used SDSS spectra to map the carbonicity of MW stars and found it to increase when moving away from the plane of the Galaxy, with the median [C/Fe] being $\gtrsim$+0.3-0.4 beyond $r_g \sim $20 kpc. The increase in carbonicity is accompanied by a decrease in the median [Fe/H], with the Lee et al. (2017) sample being dominated by stars more metal-poor than [Fe/H]$=-1.7$ at those distances. In this work we selected most of our stars from the Spaghetti survey, which targeted stars with relatively featureless spectra in the Washington C and M photometric bands. Since these bands include the carbon G-band feature, we do not claim to have an unbiased sample in [C/Fe]. Additionally, since more than half of our sample has [Fe/H]$> -1.7$ and the fraction of carbon-enhanced stars is known to increase with decreasing metallicity, these stars have a small chance of being carbon-rich anyway. When not distinguishing between CEMP-no and CEMP-s or -r/s stars and adopting a criterion of [C/Fe]$> +0.7$, the fraction of CEMP stars at [Fe/H]$<-2$ was found to be 13\% by \citet{Lee2013} and 30\% by \citet{Placco2014}. In that regime our sample contains 7 stars; hence we cannot exclude that our non-detection of carbon-enhanced metal-poor stars is due to small-number statistics. $\bullet$ {\bf $\alpha$-elements} The ratio of $\alpha$-elements (O, Mg, Si, S, Ca, Ti) over Fe is typically used to trace the relative importance of the ISM chemical enrichment by SN~II and SN~Ia ejecta. At early times massive stars, which conclude their lives as SNe~II, are the main players in the chemical enrichment of the ISM. SNe~II contribute mainly $\alpha$-elements and few Fe-peak elements, on time-scales closely tracking star formation, due to the short lifetimes of SN~II progenitors. On the other hand, SNe~Ia are the main producers of Fe and their progenitors can have long lifetimes \citep[e.g.,][]{Tinsley1979}.
Under the assumption of a homogeneously chemically enriched ISM, a decline (``knee'') in [$\alpha$/Fe] is then typically interpreted as a consequence of this time delay \citep[e.g.,][]{Gilmore1989}, with stars more metal-poor than the metallicity of the ``knee'' being born in an environment whose chemical enrichment was largely driven by SN~II ejecta. As visible in Fig.~\ref{fig:alpha_MW}, our distant halo stars display enhancements in the ratio of $\alpha$-elements over Fe with respect to the solar values and, given the errors on our measurements, do not show significant differences from solar neighborhood halo (SoNH) stars over the full range of metallicities explored. Although we cannot state robustly whether, for example, the [$\alpha$/Fe] are more compatible with the ``high''-$\alpha$ population (possibly formed in situ) or the ``low''-$\alpha$ population (possibly accreted) detected by Nissen \& Schuster (2010), in general we can say that these outer halo stars do not appear to have formed from an ISM in which chemical enrichment from SN~Ia ejecta was dominant. As for SoNH stars, at [Fe/H]$\lesssim -1.5$ the abundance ratios of our targets overlap those of red giant branch stars in MW satellite galaxies (Fig.~\ref{fig:alpha_dwarfs}). At higher metallicities their chemical trends depart from those of systems like Sculptor and Fornax, whose stars exhibit solar or even sub-solar ratios of $\alpha$-elements to iron; however, if we extend the comparison to massive dwarf galaxies such as Sag (from samples in the core) or even the LMC, then we find a good agreement with the values measured for our distant halo stars. Recent work based on APOGEE data by \citet{Hasselquist2017} on larger samples of Sag stars suggests that for this dwarf galaxy the change from halo-like ``high'' $\alpha$-abundances to distinctly lower abundances happens in the metallicity regime of our most metal-rich targets. Indeed, we tentatively find in this metallicity regime a ``low''-$\alpha$ population, in particular in Mg. As discussed in Sect.~\ref{sec:substructures}, a few of these might be genuine Sag stars judging from their position and velocity, while others might originate from a Sagittarius-like system. We point out that we limit the comparison to systems as luminous as, or more luminous (massive) than, the Sculptor dSph, because a) fainter systems have metallicity distribution functions (MDFs) that barely, or do not, reach the largest [Fe/H] of our targets \citep[see e.g.,][]{Kirby2013}; b) at a given [Fe/H], the [$\alpha$/Fe] of fainter systems is even lower than in Sculptor, which already exhibits clearly solar or sub-solar values (depending on the $\alpha$-element) at [Fe/H]$\sim -1$. $\bullet$ {\bf Fe-peak elements} The iron-peak elements (see Fig.~\ref{fig:ironp} for Sc, Mn, Ni and Fig.~\ref{fig:ironp_app} in the Appendix for V, Cr, Co, Cu, Zn) are mainly formed in explosive nucleosynthesis \citep[but see e.g.,][for the production of small amounts of Cu and Zn in massive AGB stars]{Karakas2008, Karakas2010}. Among them, scandium is not synthesized in SNe~Ia \citep{Woosley2002}, and indeed in MW dwarf galaxies [Sc/Fe] appears to show a behavior similar to that of the ratio of $\alpha$-elements to Fe, with a ``knee''. In our sample, [Sc/Fe] appears to be fairly constant, around the solar value, at all metallicities, exhibiting a range of values again compatible with those of massive dwarf galaxies at [Fe/H]$\gtrsim -1.5$, as well as with SoNH samples.
We find [Mn/Fe] to be sub-solar, around $-0.5$~dex, for the bulk of our sample, even at the largest metallicities we probe, while in SoNH samples [Mn/Fe] starts increasing at [Fe/H]$\gtrsim -1$ (this is clearly seen in Nissen \& Schuster 2011, and to a lesser extent in Sobeck et al. \citeyear{Sobeck2006}, using stars from Fulbright \citeyear{Fulbright2000} and Simmerer et al. \citeyear{Simmerer2004}). This increasing trend of [Mn/Fe] continues to higher metallicities for thick and thin disk stars, see for example \citet{Battistini2015}. As these authors comment, below [Fe/H]$\lesssim -1$ the low [Mn/Fe] ratios are mainly determined by the SN~II yields from massive stars \citep{Tsujimoto1998}, while the increase seen with increasing metallicity is interpreted as a contribution from SNe~Ia \citep{Kobayashi2006}, with suggestions in the literature that a metallicity dependence of the Mn yields from SNe~Ia may contribute to the increase in the [Mn/Fe] ratios \citep{Cescutti2008}. Within this context, the constant sub-solar values shown by our sample, which are lower than in Sculptor and Fornax but at a similar level as in the Sag core (measurements are not available for the LMC in the van der Swaelmen et al. \citeyear{VanderSwaelmen2013} work), could again point to a lack of a strong contribution of SN~Ia ejecta to the enrichment of the ISM from which these stars were born. However, \citet{Battistini2015} also show that, when applying NLTE corrections to their sample of thin and thick disk stars, the corresponding Mn trend changes quite drastically, becoming essentially flat and pointing toward Mn sharing the same production site as Fe. Since we are not applying NLTE corrections, we therefore prefer not to over-interpret the results. Even though star \#4 exhibits super-solar values, it appears compatible with the large scatter in [Mn/Fe] found at low metallicities for SoNH stars. $\bullet$ {\bf Nickel and sodium} As for the other elements analyzed, in [Ni/Fe] our outer halo stars cannot be distinguished from SoNH stars of similar metallicities at [Fe/H]$\lesssim -1.5$; at larger metallicities, while remaining compatible with the range of values exhibited by SoNH samples, our stars preferentially occupy the sub-solar end of that range. The same behavior is seen in [Na/Fe]. When considering [Ni/Fe] vs [Na/Fe] (Fig.~\ref{fig:nani}), the ``low-$\alpha$'' and ``high-$\alpha$'' populations of Nissen \& Schuster (2010) occupy distinct regions of the diagram, with our targets mostly sharing a similar location to the ``low-$\alpha$'' population. However, this does not appear to be a particularly compelling diagnostic of whether a star was born in a dwarf galaxy: the RGB stars of similar metallicities in massive MW dwarf galaxies show a very broad range of values on this plane, smoothly transitioning from negative [Ni/Fe] and [Na/Fe] to solar (or almost solar) values when moving from Fornax to the LMC inner disk and then bar, hence almost overlapping with the region occupied by the ``high-$\alpha$'' stars. $\bullet$ {\bf n-capture elements} We now move on to consider the light neutron-capture element Y and the heavy neutron-capture elements Ba and Eu (Figure~\ref{fig:ncapt}; see Figure~\ref{fig:ncapt_app} in the Appendix for Sr, Zr, La and the values in the Tables for Ce, Pr, Nd, Sm). Neutron-capture elements are produced by adding neutrons to iron or iron-peak elements; depending on the rate of the captures relative to the $\beta$-decay timescale, the process is called {\it rapid, r} or {\it slow, s}.
While the contribution of core-collapse supernovae from massive stars and of compact objects to the {\it r-}process is still debated \citep[e.g.,][]{Arnould2007}, the main {\it s-}process is constrained to occur in thermally pulsing AGB stars (1-4M$_\odot$~) \citep[see e.g.,][]{Busso1999, Travaglio2004}. As discussed in Venn et al. (2004), helium burning in massive stars (the {\it weak} s-process) only contributes elements lighter than or up to Zr. Hence the {\it r-}process contributes to the chemical enrichment of the ISM with very little delay with respect to the formation time of the sites of production, while the {\it s-}process contributes with a delay of a few hundred Myr with respect to when the stars were born, therefore tracing chemical enrichment on slightly longer timescales. Importantly, while Y and Ba can be produced both via the {\it r-} and the {\it s-}process, Eu is considered an almost purely {\it r-}process element \citep[e.g.,][]{Truran1981, Travaglio1999}. At [Fe/H]$\lesssim -1.5$ the [Y, Ba, Eu / Fe] ratios of our sample overlap completely with the range of values exhibited by SoNH samples, with [Y/Fe] however occupying the lower end of the SoNH stars' [Y/Fe] distribution. On the other hand, significant differences are seen at higher metallicities: most of the stars with [Fe/H]$\sim -1.3$ group at similar values of [Y/Fe], [Ba/Fe] and [Eu/Fe], with the [Y/Fe] and [Eu/Fe] concentrating on the low and high end, respectively, of the range of values exhibited by SoNH samples at similar metallicity, while the [Ba/Fe] is clearly above the approximately solar values of SoNH samples ($+0.4 \lesssim$ [Ba/Fe] $\lesssim +1.0$). The departure of the distant halo stars from the chemical abundances of SoNH samples becomes even more evident at [Fe/H]$\gtrsim -1$ in these three abundance ratios. When we compare to RGB stars in dwarf galaxies, [Y/Fe] does not provide much information, due to the large scatter shown by measurements in dwarf galaxies. On the other hand, an increase of [Ba/Fe] to super-solar values similar to those observed in these distant halo stars ($+0.4 \lesssim$ [Ba/Fe] $\lesssim +1.0$) is observed for massive systems such as Fornax and the LMC, but not in a relatively small galaxy such as the Sculptor dSph ($L_V \sim 2.3 \times 10^6$L$_\odot$~, as compared to $L_V \sim 2 \times 10^7$L$_\odot$~ of Fornax and $\sim 1.5 \times 10^9$L$_\odot$~ of the LMC, see the compilation by McConnachie \citeyear{McConnachie2012}). In Fornax and the LMC the increase is seen at about 0.3~dex higher metallicity than in our sample stars, and at these metallicities RGB stars in these galaxies also show similar enhancements of [Eu/Fe]. This difference in the metallicity at which the high [Ba/Fe] and [Eu/Fe] values kick in is likely a consequence of a difference in the initial star formation rate of the various systems. We remind the reader that, while one might be tempted to interpret the trends observed for the outer halo stars in terms of chemical enrichment within one galactic environment, there is no evidence that these stars all belong to one disrupted massive system (see next section); hence what we are seeing here are likely stars formed in environments that each followed a separate evolutionary path, and this should caution against providing an interpretation within one single chemical enrichment history. In Fig.~\ref{fig:baeu} we consider the ratios of [Y/Eu] and [Ba/Eu] in order to gain some insight into the relative contributions of the {\it s-} and {\it r-}process.
[Ba/Eu] for a pure $r-$process is predicted to be [Ba/Eu]$_r = -0.69$ \citep{Arlandini1999} or $\sim -0.8$~dex \citep{Sneden2008, Bisterzo2014}, while the pure {\it s-}process value is [Ba/Eu]$_s = +1$ \citep{Arlandini1999} or $+1.15$ \citep{Bisterzo2014}; our stars fall between these values, therefore it is likely that the ISM from which they were born was enriched through both the $r-$ and the $s-$process and that we are detecting a component of enrichment from AGB stars. No particular differences are seen with respect to the behavior of SoNH stars or MW dwarf galaxy stars at similar metallicities, which are also thought to have been polluted by AGB material at [Fe/H]$\gtrsim -2.0$, as witnessed by the rise in [Ba/Eu]. As discussed in Venn et al. (2004), a possible interpretation for a low [Y/Eu] is a contribution from metal-poor AGB stars: Y belongs to the first peak that builds up through neutron captures around neutron number N=50, while Ba (and La) belong to the second peak that builds around N=82, and at low metallicity first-peak elements would be bypassed in favor of second-peak elements, because there are fewer nuclei to absorb the available neutrons. They also argue that a high [Ba/Y] would be compatible with the yields of low-metallicity AGB stars. Interestingly, \citet{fenner2006} show the [Ba/Y] yields of AGB stars of different metallicities as a function of the stellar mass and lifetime (see references therein): at metallicity [M/H] $\sim -1.5$, [Ba/Y] $\sim$ +0.5 would be contributed by AGB stars with lifetimes between 150-300 Myr, while values at the upper end ([Ba/Y]$\sim$+0.8) would be contributed by AGB stars with lifetimes from 800 Myr to several Gyr. It is beyond the scope of this paper to trace the nucleosynthetic site of the various elements: what we point out, though, is that, once again, at [Fe/H]$\gtrsim -1.5$ the chemical signature of our outer halo stars departs from that of SoNH stars, showing [Ba/Y]$\sim +0.5$ while the latter are scattered around solar values; also in this case, our sample resembles the pattern exhibited by massive MW dwarf galaxies, thought to show enrichment by low-metallicity AGB stars. It is then possible that our distant targets formed in an environment where pollution of the ISM by AGB stars played a more dominant role than in the environment where inner halo stars formed. Due to the delayed contribution of AGB stars, this might imply that our distant halo stars formed in an environment with a slower chemical enrichment than those in the SoNH samples. We note that even the \cite{nissenschuster11} ``low-$\alpha$'' stars at [Fe/H]$>-1.5$, which the authors argue to have originated in accreted systems, have a much lower [Ba/Y] than we measure at similar metallicity \citep[see also][]{Fishlock2017}: this would suggest a different chemical enrichment path for the systems that deposited their tidal debris in the outskirts of the MW halo. Finally, the bottom right panel of Fig.~\ref{fig:baeu} shows the location of our targets on the [Ba/Fe] vs [$\alpha$/Fe] plane to summarize some of the main similarities and differences with respect to MW satellites and SoNH stars at [Fe/H]$> -1.5$. As previously mentioned, at these metallicities only systems more massive than Sculptor show enhancements in [Ba/Fe] as large as those we detect, which can tentatively point to a lower limit on the mass of the accreted satellite galaxies in which our [Fe/H]$> -1.5$ outer halo stars formed.
Since the high [Ba/Fe] (and high [Ba/Y]) can be explained by enrichment from low-metallicity AGB stars with lifetimes from 150 Myr to several billion years, these massive accreted systems must have experienced a slower chemical enrichment with respect to the MW halo stars found in the more central regions, which have solar [Ba/Fe] and [Ba/Y] at the same metallicity. This might translate into a slightly later accretion time of these massive systems, whose shredded stellar component deposited debris at the Galactocentric distances we are probing (at [Fe/H]$>-1.5$: $12 \le r_g \: \mathrm{[kpc]} \le 60$, with a median of 25.5 kpc), with respect to the putative accreted population of Nissen \& Schuster (2010), whose maximum apocenter reaches 30-40 kpc. Looking at [$\alpha$/Fe], the bulk of our outer halo stars with [Fe/H]$\sim -1.5$ does not appear to have formed from an ISM dominated by pollution from SNe~Ia, hence not long after the ``knee'', with the possible exception of \#21 and \#28, which show the lowest [$\alpha$/Fe] and among the highest [Ba/Fe]; even though this does not provide a stringent upper limit on the accretion time, it would exclude accretion after several Gyr from the start of star formation, because in that case we would have expected to detect lower values of [$\alpha$/Fe]. Nonetheless it is interesting to notice that the range of combined [$\alpha$/Fe] and [Ba/Fe] of RGB stars in massive MW satellites does overlap that of the stars in our sample, while this is not the case for the majority of SoNH stars; not only does this highlight again a significant difference in the chemical abundances of halo stars when moving to the outskirts of the halo, but it also tells us that chemical abundances such as those we are detecting are not ``unheard of'' among the MW satellites when comparing to systems such as the LMC. Further comparison to the chemical evolution of the massive MW satellites would need to take into account the age distribution of the stars in the various spectroscopic samples, which not only varies according to the specific star formation history of the dwarf galaxy, but also depends on the spatial location where the spectroscopic samples were gathered, due to the age gradients generally present in dwarf galaxies. However, our findings appear to point to the fact that the MW outer halo could be at least partially made up by the accretion of systems that experienced a chemical enrichment similar to that of the massive MW satellites. This is in agreement with the conclusions reached by \citet{Zinn2014, Fiorentino2015} on the basis of the period distribution of RR Lyrae stars in the MW halo and MW satellite galaxies. \subsection{Star 07} The elemental abundance ratios of star \#07 stand out with respect to the rest of the sample, exhibiting much lower [$\alpha$/Fe] ([Mg/Fe]$\sim -0.6$, [Ca/Fe]$\sim -0.15$) and [Ba/Fe] ($\sim -1.6$), a higher [Mn/Fe] ($\sim +0.1$), and a marginally higher [Ni/Fe]. The differences in the abundances of star \#07 are large enough to be seen visually in the spectrum. Figure~\ref{fig:spectra} shows two regions of the HET spectrum of star \#07 and of a star with similar stellar parameters, star \#26. We note that the Fe and Ni features have roughly the same strength in the two stars, while the Ba and Mg features are much weaker in star \#07. A rise in [Mn/Fe] is seen in dSph galaxies at lower metallicities than in the MW \citep{NCJ12}, which can be explained by a metallicity-dependent yield from SNe~Ia.
This would point to the low [$\alpha$/Fe] and low [Ba/Fe] being due to a high Fe abundance, perhaps from having originated in, or close to, a SN~Ia pocket. \begin{figure*} \centering \includegraphics[width=\hsize]{abundances_neutronc_ybaeu_hetmikeuves_newHEThfs.pdf} \caption{As the previous figure, but for the n-capture elements Y, Ba, Eu (left: compared to MW halo samples; right: compared to samples of RGB stars in MW dwarf galaxies).} \label{fig:ncapt} \end{figure*} \begin{figure*} \centering \includegraphics[width=\hsize]{abundances_baeu_hetmikeuves_newHEThfs.pdf} \caption{[Y/Eu], [Ba/Eu], [Ba/Y] vs [Fe/H] at all metallicities, and [Ba/Fe] vs [$\alpha$/Fe] of stars more metal-rich than [Fe/H]$=-1.5$ in our sample compared to literature samples of MW halo stars and of stars in MW dwarf galaxies (for the symbols see the legends in Figs.~\ref{fig:alpha_MW} and \ref{fig:alpha_dwarfs}).} \label{fig:baeu} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.7\hsize]{sag_dh_hetmikeuves_newHEThfs.pdf} \caption{Location and kinematics of our sample stars (green and purple squares) compared to expectations for the Sag stream from the \citet{LM10} model (gray points; only particles either still bound or stripped in the most recent five pericentric passages are plotted) and from various sources of observations (see legend and main text). The top panel shows the distribution as a function of longitude $\Lambda_{\odot}$ and latitude $B_{\odot}$ in a reference system whose equator is aligned with the Sag stream trailing tail \citep[see][]{Majewski2003}, but with the modifications proposed by \citet{Belokurov2014}, that is, $\Lambda_{\odot}$ increasing in the direction of the Sag motion and the latitude axis pointing to the North Galactic Pole. Middle: Heliocentric distance versus $\Lambda_{\odot}$. Bottom: Galactic standard of rest velocities versus $\Lambda_{\odot}$. Our targets are shown with large filled squares (green and purple squares have latitude $|B_{\odot}|$ smaller and larger than 30 deg, respectively; green squares with a purple border show the sample stars with v$_{\rm GSR}$ incompatible with the expectations/observations of the Sag stream), while the trends derived in the literature using the tracers listed in the legend are shown with small (red, blue, and orange) squares with error bars (see main text for the references to the original articles). We note that our v$_{\rm GSR}$ and those of the Sag particles in the LM10 model have been calculated assuming $R_{\odot}$=8 kpc, a Local Standard of Rest (LSR) velocity of 220 \kms and the Solar motion from \citet{Dehnen1998}; in this figure, a correction has been applied to both our velocities and those of the particles in the LM10 model to account for the different solar motion and v$_{\rm LSR}$ used in \citet{Belokurov2014}.} \label{fig:sag} \end{figure*} \section{Possible relation to known substructures} \label{sec:substructures} Although our targets are not associated with any known MW satellite galaxy or globular cluster, Fig.~\ref{fig:location} shows that several of them project onto known MW halo substructures, such as the bright and faint Sag streams, the Orphan stream, the Virgo Overdensity and, approximately, the Tri/And region. This raises the question of whether the chemical abundance trends that we have traced, which are distinct from those of halo stars in solar neighborhood samples at [Fe/H]$> -1.5$, are due to stars belonging to known MW halo substructures, or are to be ascribed to other features.
\subsection{Sagittarius stream} The tidal shredding of the Sag dwarf galaxy \citep{Ibata1994} has produced the most impressive substructure visible in the halo of our Galaxy. While the inner regions (``core'') of the Sag dwarf galaxy are still gravitationally bound and form a spatially and kinematically confined structure, its stellar tidal debris is spread across a very large area on the sky and strongly overlaps with halo stars in distance, radial velocity and metallicity. To date the Sag stream has been traced both in the northern and southern hemisphere, with the various wraps encompassing heliocentric distances from $\sim$25 to 100 kpc and Galactic Standard of Rest (GSR) velocities\footnote{These are line-of-sight heliocentric velocities corrected for the Sun's peculiar motion and for the Local Standard of Rest motion, where we use the values from Dehnen \& Binney (1998) and v$_{\rm LSR}= 220$\kms, respectively.} of about $\pm$150 \kms (e.g., Koposov et al. \citeyear{Koposov2012}, Belokurov et al. \citeyear{Belokurov2014} and references therein; see Fig.~\ref{fig:sag}). Recently, \citet{Hasselquist2017} have compared Sag core and MW stars in the metallicity range $-1.2 < $ [Fe/H] $ < 0$, showing that at [Fe/H] $\lesssim -0.8$ Sag core stars exhibit chemical patterns similar to those of MW stars at high latitude (mostly consisting of halo stars); this makes a distinction based on chemistry alone difficult in the metallicity regime of our targets. Given the complexity of the Sag system, N-body simulations modeling its orbital evolution in a MW-like gravitational potential offer a useful aid for a first-order identification of which stars are most unlikely to be part of the stream. It should be kept in mind, though, that associating stars with substructures via comparison with models in various phase-space parameters inherits the models' limitations, for example with respect to modeling older wraps of streams and parts of the stream that are far from the main body. The assumed shape of the dark matter halo will influence the modeling of the Sag stream, and in reality such a shape might be complex, time-variable and show variation at different radii \citep[e.g.,][]{Vera-Ciro2011}. Additionally, the influence of large perturbers, like the LMC, cannot always be ignored \citep{Vera-Ciro2013}. Here we compare the location on the sky and the radial velocities of our targets with the predictions of the \citet[hereafter LM10]{LM10} model (Fig.~\ref{fig:sag}), following their recommendation of using only particles stripped in the most recent five pericentric passages. We transform the equatorial coordinates of the stars in our sample to a heliocentric coordinate system aligned with the Sag stream; in particular, in this reference frame the equator is defined by the mid-plane of the Sag trailing tail debris, as proposed by \citet{Majewski2003}. We follow the convention of \citet{Belokurov2014}, in which the Sag longitude $\Lambda_{\odot}$ increases in the direction of the Sag motion and the Sag latitude axis $B_{\odot}$ points to the North Galactic Pole. Due to the known mismatches between the predictions of Sag stream N-body models and some sets of observables (see Fig.~\ref{fig:sag}), we also let our comparison be guided by the observed signatures of portions of the stream, as traced in distance by BHB and red clump (RC) stars and in kinematics by SDSS spectroscopy of giants (using the estimates given in Koposov et al. 2012 and Belokurov et al. 2014).
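As an aside on the kinematic quantity used in this comparison, the GSR velocities follow from a standard projection of the solar motion onto the line of sight; a minimal sketch is given below, where the Dehnen \& Binney (1998) solar-motion components are quoted from memory and should be checked against the original paper.
\begin{verbatim}
import numpy as np

# Solar peculiar motion (U, V, W) in km/s: the commonly quoted
# Dehnen & Binney (1998) values (to be verified), plus the v_LSR adopted here.
U_SUN, V_SUN, W_SUN = 10.0, 5.25, 7.17
V_LSR = 220.0  # km/s

def v_gsr(v_helio, l_deg, b_deg):
    """Heliocentric line-of-sight velocity -> Galactic standard of rest."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return (v_helio
            + U_SUN * np.cos(l) * np.cos(b)
            + (V_SUN + V_LSR) * np.sin(l) * np.cos(b)
            + W_SUN * np.sin(b))

# Example: a star at (l, b) = (300, 60) deg with v_helio = -50 km/s.
print(v_gsr(-50.0, 300.0, 60.0))
\end{verbatim}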
According to the LM10 model, Sag debris can be found at different heights above the mid-plane of the trailing tail, with some dependence on the longitude $\Lambda_{\odot}$ range under consideration, and with 99.4\% of the particles having approximately $-23\degr < B_{\odot} < +21\degr$. We then adopt a very conservative cut of $|B_{\odot}| > 23$~deg to tag which stars in our sample have a location on the sky that makes them unlikely to be associated with the Sag stream (see Fig.~\ref{fig:sag}). Eight of the stars with $|B_{\odot}| \le 23$~deg have GSR velocities well beyond the range of values expected for the stream at the corresponding $\Lambda_{\odot}$, both in terms of the predictions of the LM10 model and of the observations, and therefore we also consider them as unlikely to belong to the Sag stream (see Fig.~\ref{fig:sag}): this leaves us with 13 stars possibly associated with the stream and 15 unlikely to belong to it. None of the stars in our sample was classified as belonging to the Sag stream (or to other groups) by Janesh et al. (2016); on the other hand, Starkenburg et al. (2010) tag \#28 as part of the Sag stream and \#20 as possibly belonging to early-stripped Sag tidal debris (if the MW dark matter halo is prolate), while \#10 and \#26 are most likely associated with the Virgo overdensity, although it cannot be excluded that they belong to the Sag northern leading arm, in particular if the MW DM halo is oblate. Both the sample of unlikely Sag stream members (purple) and that of possible Sag stream members (green) contain stars in the metallicity regime where we detect differences with respect to SoNH stars (see e.g., Figs.~\ref{fig:alpha_MW}-\ref{fig:baeu}). Nonetheless, it is evident that the group of stars which show the most distinct chemical patterns with respect to SoNH stars, in particular those with [Fe/H]$\sim$-1.3 that ``clump'' in [Mg, Si, Ti, Mn, Ni, Na, Ba, Eu/Fe] and mildly in [Co/Fe], is not made up of possible Sag stream members. We emphasize that with our selection we are likely to have overestimated the number of stars possibly belonging to the Sag stream, due to our very conservative Sag latitude selection. For example, the 5th and 95th percentiles of $B_{\odot}$ for the particles lost in the most recent five pericentric passages in the LM10 model are $-11$~deg and 9~deg; if we were to adopt this selection, only stars \#7 and \#28 would have positions and kinematics compatible with membership to the stream. Hence, assuming that the LM10 model is not missing any important Sag feature in the regime of sky locations, distances and velocities we are exploring, it seems unlikely that the distinct chemical trends we are detecting at [Fe/H] $> -1.5$ between our distant outer halo stars and halo samples in the solar neighborhood are due to ``contamination'' by Sag stream stars. Therefore, it appears we might be probing the signature of other massive systems accreted by the MW in the outer parts of the stellar halo. In the next section, we will explore possible membership to other known MW halo substructures. It is noteworthy that the only clearly chemically peculiar star in our sample (\#7) would survive as a Sag stream member even under the most stringent selection cut in $B_{\odot}$; however, its chemical properties do not resemble those of any Sag core stars studied at high resolution to date.
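The two-step tagging described above can be summarized by the following sketch; the function returning the expected $v_{\rm GSR}$ range at a given $\Lambda_{\odot}$ is a hypothetical placeholder standing in for the LM10 particles and the observed stream tracks, and the 23~deg latitude cut is the conservative one adopted in the text.
\begin{verbatim}
def unlikely_sag_member(b_sun_deg, v_gsr, lambda_sun_deg, stream_track,
                        b_cut=23.0):
    """Tag a star as unlikely to belong to the Sag stream.

    Step 1: conservative latitude cut |B_sun| > b_cut (deg).
    Step 2: for stars inside the latitude cut, compare v_GSR with the range
    (v_min, v_max) expected at the star's Lambda_sun; `stream_track` is a
    hypothetical callable encapsulating the model and observed trends.
    """
    if abs(b_sun_deg) > b_cut:
        return True
    v_min, v_max = stream_track(lambda_sun_deg)
    return not (v_min <= v_gsr <= v_max)
\end{verbatim}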
\subsection{Other known substructures} Substructures in the MW halo have been detected in a wealth of works in the literature, surveying different portions of the visible sky and using a variety of stellar tracers (M-giants, horizontal branch stars, RR Lyrae, main-sequence turn-off stars, etc.). This has sometimes led to detections of supposedly different structures, which only later have been linked back to the same features. Reviewing all these studies is clearly beyond the scope of the present work, hence we have taken the review article by \citet{Grillmair2016} as a reference for the location and general properties of the streams and clouds known to date. For those stars in our sample whose location on the sky broadly coincides with any of the substructures listed in that article, we have further explored their possible physical association by considering the velocity information and, to a lesser extent (due to the large uncertainties), the distance information, going back to the original sources to check in more detail the expected trends in distance and line-of-sight (l.o.s.) velocity as a function of, for instance, Galactic coordinates. The region that encompasses the range 150 $<$ RA [deg] $<$ 220, $-20 <$ DEC [deg] $< +20$, where the majority of our targets fall, is a complex one, being home to the features generally known as the ``Virgo Overdensity'' (VOD), including the Virgo Stellar Stream (VSS), and partially projecting also onto the Sag stream. The VOD is a poorly understood overdensity with many different components tentatively associated with it \cite[e.g.,][and references therein]{Duffau2014}. The heliocentric velocities typically associated with substructures in this area span the range (200, 360) \kms for the VOD and $\sim$130 \kms for the VSS, while the distances of stars associated with these substructures are expected to be less than approximately 20\,kpc. Another part of the sky rich in substructures is the Triangulum-Andromeda region, where several features have been detected: the Segue~2 ultra-faint object \citep{Belokurov2009}, the Triangulum Stream \citep{Bonaca2012}, and a number of other features detected as overdensities of main sequence (MS) stars, main sequence turn-off (MSTO) stars, or K- and M-giants, dubbed Tri-And~1 and Tri-And~2 \citep{Majewski2004, Rocha-Pinto2004, Martin2007}, some of which may be related to each other and have their origin in the disruption of the same small galactic system \citep[see e.g.,][]{Deason2014, Sheffield2014}. Only a handful of our targets appear to possibly belong to any of the 25 streams and clouds listed in Grillmair \& Carlin (2016). Star~\#18 has a position, distance, line-of-sight (l.o.s.) velocity and metallicity that make it compatible with membership to the Hercules-Aquila Cloud. The positions, distances, l.o.s. velocities and metallicities of stars~\#8, 11, and 13 fall well within the range of values listed for the VOD, hence they might be members of this structure; on the other hand, stars \#1, 24, and 26 have positions, l.o.s. velocities and literature distance estimates compatible with those of VOD stars, but our revised distance values place them beyond 35 kpc, making membership to the VOD unlikely.
The metallicity does not provide a very tight constraint, as stars in the VOD have been measured to have a broad range of values, within $-2 \lesssim$[Fe/H]$\lesssim -1$; the metallicities of \#1 and \#24 could be considered too low ([Fe/H]$= -2.4$ and $-2.7$, respectively), but one cannot exclude that the MDF of VOD stars extends to lower values, in particular if the VOD is entirely or in part the result of the accretion of a dwarf galaxy. In any case, of the stars with [Fe/H]$\sim -1.3$ clumping in [Mg, Si, Ti, Y, Ba, Eu, Na and Ni/Fe] and [Co/Fe], only \#13 and \#18 would appear to belong to known MW halo substructures other than Sag (the VOD and Hercules-Aquila, respectively). Hence, overall, the aforementioned chemical trend does not appear to be due to stars belonging to known MW substructures. \section{Summary and conclusions} \label{sec:summary} We have obtained VLT/UVES, Magellan/MIKE and HET/HRS high resolution optical spectroscopy for a sample of 28 halo stars with heliocentric distances 12 $\le d_h \: \mathrm{[kpc]} \le $73 (median $d_h=$ 32 kpc), for which we have derived chemical abundances for 27 elements. The large present-day distances of the stars in our sample place them in the outer-halo region of the MW and allow us to explore the chemical properties of MW halo stars over a considerably larger volume than literature studies based on high resolution spectroscopy of larger samples of halo stars currently found in the solar neighborhood. The sample size is even competitive with what is currently provided at similar distances by high resolution spectroscopic surveys such as APOGEE, and our analysis provides abundances for 11 elements in common with APOGEE and 16 not available from it. At [Fe/H] $\lesssim -1.5$ the chemical properties of our sample stars mostly overlap with those exhibited by SoNH stars. However, at [Fe/H] $\gtrsim -1.5$ differences are seen in particular for [Mn, Ni, Na, Ba, Eu/Fe], [Ba/Y] and [Ba/Eu], with the chemical properties of our sample either departing from those of SoNH stars or ``clumping'' in a restricted region of values with respect to the broader distribution shown by SoNH stars of similar metallicities (this can also be appreciated in [Mg, Si, Ti/Fe]). For the majority of the stars at high metallicity the detected trends do not appear to be due to stars belonging to known MW substructures, including the Sag stream. We analyze this behavior in the light of the chemical abundances measured for RGB stars in MW dwarf galaxies, also including in the comparison massive systems such as Sag and the LMC. Super-solar values of [Ba/Fe] such as those measured at [Fe/H] $\gtrsim -1.5$ in our sample appear only in the most luminous (massive) satellites (Fornax, Sag and the LMC), but not in the Sculptor dSph ($L_V \sim 2.3 \times 10^6$ L$_\odot$~), for which [Ba/Fe] remains constant at a solar value in the high metallicity regime. On the other hand, the ratios of $\alpha$-elements to iron do not resemble those observed for RGB stars of similar metallicity in the Fornax dSph, while they compare well to those of Sag and LMC stars. We find that the elemental abundances of stars in the most luminous dwarf galaxies show trends similar to those seen in our data for the elements for which we see differences with respect to SoNH stars at [Fe/H] $\gtrsim -1.5$. We conclude that, if MW outer halo stars originated in the shredded stellar component of accreted dwarf galaxies, then the MW outer halo was partially formed by massive accreted systems.
The large super-solar values we measure for [Ba/Fe] and [Ba/Y], which can be interpreted as the result of pollution by metal-poor AGB stars over time-scales of a few hundred Myr to almost 1 Gyr, compared to the solar values of SoNH samples at similar metallicities, might indicate that star formation in the accreted systems was able to proceed on longer time-scales than in the accreted systems that possibly contributed to the build-up of the more internal halo regions, possibly because of later accretion times. Due to our target selection (see Sect.~\ref{sec:data}), half of our targets have [Fe/H]$ > -1.6$, which is the regime where we detect the most interesting differences with respect to SoNH samples. Data from SDSS low resolution optical spectroscopy \citep{Fernandez-Alvar2015} suggest that at large Galactocentric distances, in particular at $r_g \gtrsim 30$ kpc, stars with $-1.5\lesssim$ [Fe/H]$\lesssim -0.8$ are not very common and essentially occupy the high metallicity tail of the metallicity distribution of MW stars. If stars in the MW outer halo originated in accreted dwarf galaxies, this would imply that this metallicity regime samples the end point of their chemical evolution. We note that the metallicity distribution functions of MW satellite stars are very wide, encompassing a few dex in [Fe/H] even for the systems that stopped forming stars at $z>2$; hence, if we are probing the high metallicity tail of accreted systems, these will have carried with them a significant, or even larger, portion of more metal-poor stars that are now spread out in the outer halo. It will be interesting to see whether just a few massive systems might have formed a considerable fraction of the MW outer halo. \begin{acknowledgements} The authors are indebted to the International Space Science Institute (ISSI), Bern, Switzerland, for supporting and funding the international team ``First stars in dwarf galaxies'', where the idea for this project was born. The authors acknowledge the referee, T. Beers, for useful comments, and H.~Morrison for providing clarification about possible selection effects in the Spaghetti survey with respect to the stars' carbon abundances. GB gratefully acknowledges financial support by the Spanish Ministry of Economy and Competitiveness (MINECO) under the Ramon y Cajal Programme (RYC-2012-11537) and the grant AYA2014-56795-P. ES gratefully acknowledges funding by the Emmy Noether program from the Deutsche Forschungsgemeinschaft (DFG). The Hobby-Eberly Telescope (HET) is a joint project of the University of Texas at Austin, the Pennsylvania State University, Stanford University, Ludwig-Maximilians-Universit\"at M\"unchen, and Georg-August-Universit\"at G\"ottingen. The HET is named in honor of its principal benefactors, William P. Hobby and Robert E. Eberly. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Nowadays, deep neural networks are popular and used in many different domains, including image processing, natural language processing, and time-series processing. Though these deep networks achieve high performance, they are still black boxes by nature, which makes it hard to understand the reasons behind their decisions. In particular, this black box nature hinders the use of these models in critical domains such as medicine, autonomous driving, industry, and finance, and raises the need for interpretability methods that provide intuitive and understandable explanations. Only explainable models are usable in critical domains that require transparency~\cite{samek2017explainable}. The existing methods for interpreting decisions of deep learning models are mostly applicable to image modalities. In particular, image concepts are intuitive by default~\cite{zhang2018visual}. Beyond the image domain, there is only a limited amount of work in the field of time-series, as these modalities are more complex and usually not directly interpretable for a human. Nevertheless, time-series analysis networks and their explanations are pivotal for industrial and financial use. Therefore, we propose P2ExNet as an approach to deal with time-series data. Also, existing approaches are mostly post-hoc methods that are applied on top of a trained classifier to explain its decisions~\cite{choo2018visual}. Intuitively, these approaches keep the network as it is without any change to the structure, enabling their use on almost every architecture. Usually, this results in an instance-based local explanation that does not explain any global behavior. In contrast to post-hoc methods, intrinsic methods focus on the model design and its inference process to provide an understandable global explanation. Ultimately, neither of the two approaches is superior, as both have to deal with several limitations regarding quality, subjectivity~\cite{lipton2016mythos}, the audience, and the application domain. To overcome these limitations, we propose a network architecture for time-series analysis based on standard deep neural network components that provides a global explanation using representative class-specific prototypes and an instance-based local explanation using patch-based similarities and class similarities. The inference process of our architecture follows the human reasoning process~\cite{guidoni1985natural} and uses concepts and prototypes~\cite{li2018deep}. Intuitive class-specific patches explain the network decision. Our approach is superior to existing template matching approaches~\cite{brunelli2009template} in terms of generalization and applicability. Our experiments on eight publicly available datasets emphasize the use of our network structure by showing performance comparable to a non-interpretable network of the same size while preserving an intuitive and traceable explanation. \section{Related Work} The field of network interpretability covers post-hoc and intrinsic methods. Based on the use case, it is not always possible to use both kinds of methods, as they come with restrictions concerning the data and the network. In the following paragraphs, we address both perspectives, their advantages, and their drawbacks.
\subsection{Post-hoc} Using post-hoc methods to explain the decisions of deep neural networks is a very prominent approach, as these methods do not modify the network architecture and can provide an instance-based explanation. Furthermore, these methods offer instance-based as well as global explanations, resulting in broad applicability. \subsubsection{Instance-based:} A widespread class of instance-based post-hoc approaches in the image domain are the so-called back-propagation methods~\cite{bojarski2016visualbackprop}. These approaches produce heat-maps highlighting the parts that are most relevant and sensitive with respect to the network decision. Several enhancements have evolved~\cite{zintgraf2017visualizing} that take various aspects into account to improve expressiveness and consistency. Another class of post-hoc instance-based methods are the layer-wise relevance propagation methods~\cite{gu2018understanding, arras2017explaining}, which produce results close to the heat-maps but more stable. In particular, the image domain explored different approaches to visualize the activations~\cite{yosinski2015understanding} or make use of the gradients~\cite{selvaraju2017grad} or saliency~\cite{simonyan2013deep} to produce heat-maps for instances. However, for time-series modalities, there exists only a limited amount of work~\cite{siddiqui2019tsviz}. \subsubsection{Global:} In contrast to instance-based methods, there exist attempts to compute a global behavior based on the influence of the samples~\cite{koh2017understanding, yeh2018representer}. These methods identify helpful and harmful dataset samples to detect outliers and debug the dataset using the sample influence. Another approach is to attach an interpretable architecture to the trained network. As presented in~\cite{Palacio_2018_CVPR}, attaching an autoencoder before the neural network, together with a customized loss function for the autoencoder, can enhance interpretability. Siddiqui et al.~\cite{siddiqui2020tsinsight} presented an adaptation of this approach for the time-series domain with an adjusted loss function. \subsection{Intrinsic} Intrinsic methods approach the problem from a different perspective by incorporating interpretability directly. Therefore, they modify the model architecture by introducing interpretable layers~\cite{zhang2018interpretable}. A drawback of these approaches is the restricted learning process, which can harm the performance. An intuitive interpretable-layer solution are prototype layers that explain the model decision~\cite{angelov2019towards}. Mainly, two types of prototypes have been shown to provide reasonable explanations: first, class prototypes that cover the complete input~\cite{li2018deep, gee2019explaining}, and second, patch prototypes~\cite{chen2018looks}. \subsection{Limitations of Existing Methods} Even though there exists work to explain network decisions, most approaches are limited to image modalities~\cite{schlegel2019towards}. Furthermore, there is ongoing research investigating the consistency, expressiveness, and subjectivity of these explanations. Some findings demonstrate the inconsistency of saliency-based methods~\cite{tomsett2019sanity} and their limited expressiveness~\cite{alvarez2018robustness}. Also, methods that use sparsity constraints suffer from the same problems concerning their consistency. \section{P2ExNet: The Proposed Approach.} This section provides insights into the proposed approach.
It starts with a motivation, followed by the general architecture, the mathematical background, and the training procedure. \subsection{Motivation: An Understandable Reasoning Behavior.} Inspired by human reasoning behavior, we aligned our framework to rely on implicit knowledge about objects and examples already seen before. This approach is similar to the human inference process. Precisely, we compare new instances to abstract concepts that include class-specific features. The term prototypical knowledge describes the knowledge about these concepts and covers the analogical process of mapping new knowledge to existing knowledge~\cite{gentner2010analogical}. Following this process, the proposed method uses shallow representations. These prototypes encode class-specific patterns and provide the decision based on similarity. \subsection{Architecture} Inspired by the work of Gee et al.~\cite{gee2019explaining}, we combined an autoencoder with a prototype network. The autoencoder consists of several convolutional and max-pooling layers serving as a feature encoding network to provide a latent representation that encodes the relevant features of an input sequence. This representation is fed forward to a custom prototype layer to generate prototypes. Motivated by the work of Chen et al.~\cite{chen2018looks}, we use multiple prototypes to represent a sample rather than a single one for the complete input. Precisely, the prototype layer has randomly initialized variables representing patch prototypes of user-defined size. Larger sizes result in composed concepts, and smaller sizes result in more basic concepts. On top of the prototype layer, we attached a prototype-weight layer to encourage class-specific prototypes and weight their position within the sample to cover the local importance. Finally, a soft-max classification evaluates the similarity scores produced by the prototype layer multiplied by the prototype weights, as shown in Figure~\ref{fig:network_structure}. \begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{images_edited/network_structure_v2.png} \caption{\textbf{Inference and testing workflow.} Artificially computed prototypes are evaluated in a similarity-based manner to suggest class-specific patches.} \label{fig:network_structure} \end{figure} \subsection{Mathematical Background} Our method uses a novel combined loss, based on the losses proposed by~\cite{chen2018looks, gee2019explaining}, that captures several aspects and enables the network to produce a meaningful set of patch prototypes. For the following equations, let $S_{x}$ be the set of patches corresponding to a sample \textit{x} and let \textit{P} be the set of prototypes. \subsubsection{Distances:} We use the $L^{2}$ norm to compute the distance between any two vectors. Furthermore, we compute the minimum distance between a sample patch and any prototype ($D_{s2p}$) and vice versa ($D_{p2s}$). We denote by $D_{p2p}$ the minimal distance between a prototype and all others, and we calculate the minimum distance to a prototype of the same class, $D_{clst}$, and to the other classes, $D_{sep}$, w.r.t.\ the label \textit{y}. Here, $P_y$ denotes the subset of \textit{P} assigned to the class label of \textit{y}. The distances are shown in Equations~\ref{eq:d_s2p} to~\ref{eq:sep}.
\noindent\begin{minipage}{.5\linewidth} \begin{equation} D_{s2p}(s) = \min_{p \in P} L^{2}(s, p) \label{eq:d_s2p} \end{equation} \end{minipage}% \begin{minipage}{.5\linewidth} \begin{equation} D_{p2p}(p) = \min_{{p}' \in P \setminus \{p\}} L^{2}(p, {p}') \label{eq:d_p2p} \end{equation} \end{minipage} \noindent\begin{minipage}{.5\linewidth} \begin{equation} D_{clst}(s,y) = \min_{p \in P_{y}} L^{2}(s,p) \label{eq:clst} \end{equation} \end{minipage}% \begin{minipage}{.5\linewidth} \begin{equation} D_{sep}(s,y) = \min_{p \in P \setminus P_{y}} L^{2}(s,p) \label{eq:sep} \end{equation} \end{minipage} \subsubsection{Losses:} To ensure high-quality prototypes, we introduce our novel patch loss. This loss is a combination of different objectives to achieve good accuracy and an explanation that contains neither duplicates nor prototypes that are not class-specific. Our loss combines the following terms: \begin{itemize} \item \textbf{Autoencoder loss:} The MSE between the input $x$ and its reconstruction $\hat{x}$ encourages a faithful reconstruction, later used for prototype reconstruction. \item\textbf{Classification loss:} To produce logits for the softmax cross-entropy, we multiply the reciprocal of $D_{s2p}$ with the weights of the prototype-weight layer. \item\textbf{$L_{p2s}$ and $L_{s2p}$:} These losses preserve the relation between the input and the prototypes and vice versa, as shown in Equations~\ref{eq:p2s_l} and~\ref{eq:s2p_l}. \item\textbf{$L_{div}$:} The diversity among the patch prototypes is computed as shown in Equation~\ref{eq:d_l}. \item\textbf{$L_{clst}$ and $L_{sep}$:} To encourage the network to learn class-specific prototypes, we compute $L_{clst}$ and, analogously, $L_{sep}$ with a negative sign. The latter penalizes prototypes that are close to samples of the wrong class w.r.t.\ their assigned class. \end{itemize} \noindent\begin{minipage}{.5\linewidth} \begin{equation} \small L_{p2s}(x) = \frac{1}{\left | P \right |} \sum_{p \in P} D_{p2s}(p) \label{eq:p2s_l} \end{equation} \end{minipage}% \begin{minipage}{.5\linewidth} \begin{equation} L_{s2p}(x) = \frac{1}{\left | S_{x} \right |} \sum_{s \in S_{x}} D_{s2p}(s) \label{eq:s2p_l} \end{equation} \end{minipage} \noindent\begin{minipage}{.5\linewidth} \begin{equation} \small L_{div} = \log (1 + \frac{1}{ \left | P \right |}\sum_{p \in P} D_{p2p}(p))^{-1} \label{eq:d_l} \end{equation} \end{minipage}% \begin{minipage}{.5\linewidth} \begin{equation} L_{clst}(x,y) = \frac{1}{\left | S_{x} \right |} \sum_{s \in S_{x}} D_{clst}(s,y) \label{eq:clst_l} \end{equation} \end{minipage} Our proposed final loss is a linear combination that takes the previously mentioned aspects into account and ensures meaningful, diverse, and class-specific patch prototypes, as shown in Equation~\ref{eq:l}. By default, we set all lambda values except $\lambda_{c}$ to one to find the best compromise between the objectives while preserving high accuracy. \begin{multline} \small Patch\_Loss(x,y) = \lambda_{c} H(x,y) + \lambda_{mse} MSE(x,\hat{x}) + \lambda_{p2s} L_{p2s}(x) \\ + \lambda_{s2p} L_{s2p}(x) + \lambda_{div} L_{div} + \lambda_{clst} L_{clst}(x,y) + \lambda_{sep} L_{sep}(x,y) \label{eq:l} \end{multline} \subsection{Training Process} The training process of our approach consists of two stages. In the first stage, we fix the weights of the pre-initialized prototype-weight layer to ensure class-specific prototypes. We then train the network until it converges. In the second learning phase, all layers except the prototype-weight layer are frozen, and the network learns to adjust the prototype weights.
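To make the loss used during training more concrete, the following is a minimal PyTorch-style sketch of the distance terms (Equations~\ref{eq:d_s2p} to~\ref{eq:sep}) and the combined patch loss of Equation~\ref{eq:l} for a single sample; the tensor shapes, the handling of the classification logits, and the reading of the exponent in Equation~\ref{eq:d_l} are assumptions made for illustration and do not reflect the original implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def patch_loss(patches, prototypes, proto_labels, logits, target, x, x_rec,
               lambdas=(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0)):
    """Sketch of the combined patch loss for one sample x with label y.
    patches:      |S_x| x d latent patches of the sample (S_x)
    prototypes:   |P| x d latent patch prototypes (P)
    proto_labels: |P| class assignments of the prototypes
    target:       0-dim long tensor holding the class label y"""
    l_c, l_mse, l_p2s, l_s2p, l_div, l_clst, l_sep = lambdas
    d = torch.cdist(patches, prototypes, p=2)           # L2 distances, |S_x| x |P|

    loss_s2p = d.min(dim=1).values.mean()               # L_s2p
    loss_p2s = d.min(dim=0).values.mean()               # L_p2s

    dpp = torch.cdist(prototypes, prototypes, p=2)
    dpp.fill_diagonal_(float("inf"))                    # exclude self-distances
    # One reading of L_div; the placement of the exponent is ambiguous in the text.
    loss_div = 1.0 / torch.log(1.0 + dpp.min(dim=1).values.mean())

    same = proto_labels.eq(target)                      # prototypes of the class of x
    loss_clst = d[:, same].min(dim=1).values.mean()     # L_clst
    loss_sep = -d[:, ~same].min(dim=1).values.mean()    # negative sign penalizes closeness

    loss_ce = F.cross_entropy(logits.unsqueeze(0), target.view(1))
    loss_rec = F.mse_loss(x_rec, x)                     # autoencoder reconstruction

    return (l_c * loss_ce + l_mse * loss_rec + l_p2s * loss_p2s + l_s2p * loss_s2p
            + l_div * loss_div + l_clst * loss_clst + l_sep * loss_sep)
\end{verbatim}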
This second-stage adjustment corrects the prototype class affiliation using the previously trained latent representation. \section{Datasets} We used eight publicly available time-series datasets to emphasize the broad applicability of our approach and examine possible limitations. As a representative set, we used seven different datasets from the UCR Time Series Classification Repository\footnote{http://www.timeseriesclassification.com/} and a point anomaly dataset proposed in~\cite{siddiqui2019tsviz}. These datasets and their parameters are listed in Table~\ref{tab:accuracy_tradeoff}. Note that the Devices dataset corresponds to the 'Electrical devices' dataset taken from the UCR. To have better coverage of different types, we selected the datasets based on their characteristics concerning the number of classes, channels, and time-steps to cover several conditions. However, we focus on classification datasets. \section{Experiments} In this section, we present our results concerning the performance, applicability, and resource consumption of our proposed approach, highlighting comparable performance while producing interpretable results. \subsection{P2ExNet: Instance-based Evaluation} The proposed method makes it possible to identify and highlight the parts of the input that were most relevant for the classification. In addition, it provides prototypes along with a modified sample containing the prototypes so that it can be compared to the original input. Figure~\ref{fig:adiac_13} shows highlighted regions that were important for the inference on an Adiac dataset sample. This explanation includes the original sample of the Adiac dataset, a modified version, and two prototypes. In the modified version shown in Figure~\ref{fig:adiac_13_m}, we replaced the part between the two red lines with the most important patch prototype to show how close it is to the original part. Figure~\ref{fig:adiac_13_p} shows two prototypes. The value of each prototype, denoted as 'Val', highlights its contribution towards the classification result. Similarly, Figure~\ref{fig:character_trajectories_m} shows a sample from the character trajectories dataset and the mapping of the time-series back to the character. The black value highlights the pressure of the pen, and the yellow part shows the mapping of the prototype back to the input space. In the case of an incorrect classification, the prototypes have a red caption. Furthermore, in Figure~\ref{fig:character_trajectories_distribution} the class-wise overall and patch-wise distributions provide additional information about similar classes and important patch positions. Especially in Figure~\ref{fig:character_trajectories_dist_detail}, we show that not all patches have the same importance when it comes to the classification. There are sensitive datasets for which the re-classification can change if the original data gets replaced with a prototype. However, this is not a problem for the classification or the explanation, as a proper re-scaling and adjustment can remove the offset between the prototype and the time-series. In Figure~\ref{fig:anomaly_new_1_m} such a jump in the orange signal is shown and leads to an anomaly. However, the classification of the original signal with the network was correct. Furthermore, some datasets are invariant to small offsets, as shown in Figure~\ref{fig:fordA_0_m}. That is why re-scaling should be done based on the problem task. In the case of a point anomaly task, the patches have to align.
In a classification task, it is unlikely that the offset of a single point changes the prediction. \begin{figure}[!t] \centering \subfloat[Original]{ \includegraphics[width=0.235\linewidth]{images_edited/Adiac_13_1x1.png} \label{fig:adiac_13_o} } \hfil \subfloat[Modified]{ \includegraphics[width=0.235\linewidth]{images_edited/Adiac_13_1x1_2.png} \label{fig:adiac_13_m} } \hfil \subfloat[Prototypes]{ \includegraphics[width=0.47\linewidth]{images_edited/Adiac_13_1x2_2.png} \label{fig:adiac_13_p} } \caption{\textbf{Adiac dataset prototype explanation.} a) shows the original series. b) shows the series with the prototype between the red bars. c) shows two prototypes.} \label{fig:adiac_13} \end{figure} \begin{figure}[!t] \centering \subfloat[Time-series]{ \includegraphics[width=0.48\linewidth]{images_edited/character_trajectories_m_1x2.png} \label{fig:character_trajectories_m_o} } \hfil \subfloat[Character of the class 'm']{ \includegraphics[width=0.48\linewidth]{images_edited/character_trajectories_m_1x2_2.png} \label{fig:character_trajectories_m_p} } \caption{\textbf{Character dataset prototype explanation.} a) shows the original series and the series with the prototypes. b) shows the character output and the modified character.} \label{fig:character_trajectories_m} \end{figure} \begin{figure}[!t] \centering \subfloat[Overall distribution]{ \includegraphics[width=0.39\linewidth]{images_edited/character_trajectories_0_dist.png} \label{fig:character_trajectories_dist} } \hfil \subfloat[Patch distribution]{ \includegraphics[width=0.48\linewidth]{images_edited/character_trajectories_0_dist_detail.png} \label{fig:character_trajectories_dist_detail} } \caption{\textbf{Class and prototype distribution.} a) shows the class similarities. b) shows some patches and the corresponding class similarities.} \label{fig:character_trajectories_distribution} \end{figure} \begin{figure}[!t] \centering \subfloat[Original]{ \includegraphics[width=0.23\linewidth]{images_edited/anomaly_new_1_1x1.png} \label{fig:anomaly_new_1_o} } \hfil \subfloat[Modified]{ \includegraphics[width=0.23\linewidth]{images_edited/anomaly_new_1_1x1_2.png} \label{fig:anomaly_new_1_m} } \hfil \subfloat[Original]{ \includegraphics[width=0.23\linewidth]{images_edited/FordA_0_1x1.png} \label{fig:fordA_0_o} } \hfil \subfloat[Modified]{ \includegraphics[width=0.23\linewidth]{images_edited/FordA_0_1x1_2.png} \label{fig:fordA_0_m} } \caption{\textbf{Prototype substitution.} a) and c) show original time-series. b) and d) show the corresponding modified samples and their re-classification.} \label{fig:substitution} \end{figure} \subsection{P2ExNet: Evaluation as a Classifier} Usually, intrinsic interpretability approaches come with an accuracy drop. In Table~\ref{tab:accuracy_tradeoff} we present the accuracy trade-off, highlighting that our structure is on the same level as the non-interpretable counterpart. To create a network similar to ours without the interpretable part, we replaced the prototype layer with a dense layer and a cross-entropy loss, as suggested by Chen et al.~\cite{chen2018looks}. Furthermore, we removed the decoder, as there is no need to restrict the latent representation when no reconstruction is required. We conducted this comparison for all eight datasets, showing that P2ExNet achieves comparable or better performance than the non-interpretable variant. Overall, the interpretable network has an insignificant performance increase of 0.03\%. Each architecture was superior in four out of the eight datasets.
The results show that the accuracy of the interpretable model dropped by about 6\% on the anomaly dataset but increased by 7\% on the Electrical Devices dataset. \begin{table}[!t] \caption{\textbf{Accuracy comparison}. A comparison of the interpretable network and the corresponding non-interpretable counterpart.} \label{tab:accuracy_tradeoff} \centering \begin{tabular}{|c|c|c|c|c|c|} \hline \textbf{Dataset} & \textbf{Classes} & \textbf{Length} & \textbf{Channel} & \textbf{CNN} & \textbf{P2ExNet} \\ \hline Anomaly~\cite{siddiqui2019tsviz} & 2 & 50 & 3 & \textbf{99.79} & 93.79 \\ FordA & 2 & 500 & 1 & 85.44 & \textbf{89.32} \\ Devices & 7 & 96 & 1 & 55.42 & \textbf{62.53} \\ \hline \hline Adiac & 37 & 176 & 1 & \textbf{63.54} & 60.15 \\ Crop & 24 & 46 & 1 & 68.27 & \textbf{68.54} \\ \hline \hline 50words & 13 & 270 & 1 & 76.84 & \textbf{81.98} \\ PenDigits & 10 & 8 & 2 & \textbf{94.29} & 93.95 \\ Character & 20 & 206 & 3 & \textbf{96.53} & 91.78 \\ \hline \end{tabular} \end{table} \subsection{P2ExNet: Sanity Check} \begin{table}[!t] \caption{\textbf{Replacement of original patch.} The second column shows how much data was replaced with the prototypes suggested by P2ExNet. The third column shows whether the prediction was the same as with the original time-series or not. The fourth column shows the P2ExNet accuracy for the original sample, and the last column shows the accuracy for the sample in which the original patch was replaced with the suggested patch. The first row of each dataset corresponds to replacements with the most similar prototype, whereas the second row corresponds to replacements with the most different prototype.} \label{tab:sanity} \centering \begin{tabular}{|c|c|c|c|c|} \hline \textbf{Dataset} & \textbf{Data replaced} & \textbf{Equal Pred.} & \textbf{P2ExNet Acc.} & \textbf{P2ExNet mod. Acc.} \\ \hline \multirow{2}{*}{Anomaly} & 71.99 & 87.43 & \multirow{2}{*}{93.79} & \textbf{91.78} \\ & 67.32 & 19.45 & & 22.72 \\ \hline \multirow{2}{*}{FordA} & 51.17 & 99.92 & \multirow{2}{*}{89.32} & \textbf{89.40} \\ & 44.95 & 23.09 & & 32.69 \\ \hline \multirow{2}{*}{Devices} & 52.36 & 81.65 & \multirow{2}{*}{62.53} & \textbf{60.52} \\ & 65.81 & 49.81 & & 39.11 \\ \hline \hline \multirow{2}{*}{Adiac} & 35.22 & 85.97 & \multirow{2}{*}{60.15} & \textbf{55.98} \\ & 69.90 & 9.11 & & 14.84 \\ \hline \multirow{2}{*}{Crop} & 50.50 & 94.08 & \multirow{2}{*}{68.54} & \textbf{66.94} \\ & 81.12 & 22.01 & & 23.28 \\ \hline \hline \multirow{2}{*}{50words} & 36.43 & 93.01 & \multirow{2}{*}{81.98} & \textbf{77.20} \\ & 52.88 & 62.50 & & 56.98 \\ \hline \multirow{2}{*}{PenDigits} & 69.47 & 99.31 & \multirow{2}{*}{93.95} & \textbf{93.54} \\ & 68.65 & 8.83 & & 11.0 \\ \hline \multirow{2}{*}{Character} & 18.15 & 92.93 & \multirow{2}{*}{91.78} & \textbf{85.30} \\ & 52.90 & 31.71 & & 32.87 \\ \hline \end{tabular} \end{table} \begin{table}[!t] \caption{\textbf{Closeness of prototypes.} The difference between representative and generated latent patch prototypes for P2ExNet with and without the use of the decoder is shown.} \label{tab:prototype_stats} \centering \begin{tabular}{|c|c|c|c|} \hline \textbf{Dataset} & \textbf{P2ExNet with decoder} & \textbf{P2ExNet without decoder} & \textbf{Improvement} \\ \hline Anomaly & 0.6393 & \textbf{0.4929} & -22.9\% \\ \hline FordA & \textbf{0.7018} & 1.0315 & 47.0\% \\ \hline Devices & 0.4135 & \textbf{0.3399} & -17.8\% \\ \hline \hline Adiac & 0.538 & \textbf{0.4993} & -6.2\% \\ \hline Crop & \textbf{0.442} & 0.4815 & 8.9\% \\ \hline \hline 50words & \textbf{0.0413} & 0.2086 & 505.1\% \\ \hline PenDigits & \textbf{0.5123} & 0.5622 & 9.7\% \\
\hline Character & \textbf{0.0099} & 0.5887 & 5946.5\% \\ \hline \end{tabular} \end{table} To prove the class-specific and meaningful behavior of the prototypes, we replaced the original time-series once with the most positively and once with the most negatively influencing prototypes. In Table~\ref{tab:sanity} we show that the replacement with the most confident prototypes corresponding to the predicted class achieved results close to the default accuracy, whereas the best-fit prototype of a different class dramatically decreased the performance as the prediction switched. These results show that our prototypes are class-specific. Furthermore, we conducted a second sanity check to investigate the need for the decoder to produce latent representations that are close to the representative prototypes. In Table~\ref{tab:prototype_stats} we show that for the character trajectories, 50words, and FordA datasets there is a significant difference if the decoder is excluded. Also, we compared the representative and decoded prototypes and visualized two prototypes in Figure~\ref{fig:prototype_comparison}, highlighting the small difference between the selected representative sample (left) and the decoded one (right). We further provide the latent representation of the character trajectory prototype in Figure~\ref{fig:character_p2_latent}. Each plot represents one of the three channels, and the blue color encodes the part of the selected sample whereas the orange color encodes the latent representation of the prototype. It is clearly visible that both latent representations share the same pattern and therefore result in a similar decoded representation, as shown in Figure~\ref{fig:character_p2}. \begin{figure}[!t] \centering \subfloat[Crop dataset]{ \includegraphics[width=0.44\linewidth]{images_edited/crop_p0.png} \label{fig:crop_p0} } \hfil \subfloat[Character trajectories dataset]{ \includegraphics[width=0.44\linewidth]{images_edited/character_trajectories_p2.png} \label{fig:character_p2} } \caption{\textbf{Prototype comparison.} This figure shows the representative patch based on the distance to the latent prototype and the reconstruction of the latent representation.} \label{fig:prototype_comparison} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.88\linewidth]{images_edited/character_trajectories_p2_latent.png} \caption{\textbf{Latent space difference.} The difference between the prototype (orange) and the real sample (blue) in the latent space for each channel is shown.} \label{fig:character_p2_latent} \end{figure} \subsection{Comparison with Existing Prototype-based Approaches} \begin{figure}[!t] \centering \subfloat[Original]{ \includegraphics[width=0.2\linewidth]{images_edited/character_trajectories_class_0.png} \label{fig:compare_methods_org} } \hfil \subfloat[Gee et al.~\cite{gee2019explaining}]{ \includegraphics[width=0.2\linewidth]{images_edited/character_trajectories_gee.png} \label{fig:compare_methods_g} } \hfil \subfloat[Chen et al.~\cite{chen2018looks}]{ \includegraphics[width=0.2\linewidth]{images_edited/character_trajectories_chen.png} \label{fig:compare_methods_c} } \hfil \subfloat[P2ExNet]{ \includegraphics[width=0.2\linewidth]{images_edited/character_trajectories_our.png} \label{fig:compare_methods_o} } \caption{\textbf{Comparison of approaches.} Different explanations of the character 'a'.} \label{fig:compare_methods} \end{figure} We compared the proposed method against existing work~\cite{chen2018looks} and~\cite{gee2019explaining}.
Precisely, we highlight the explanations and additional outputs. In Figure~\ref{fig:compare_methods} we show the explanation of each approach for a sample of the character 'a'. While~\cite{gee2019explaining} explains the class with a single prototype capturing the complete sample,~\cite{chen2018looks} is based on parts of the input, leading to a more detailed explanation. This method searches for a matching patch for a region in the input image, which means additional position information is available. Lastly, our proposed method provides the same information about the location but offers re-scaling as well as an implicit comparison to other prototypes and a class distribution for the complete sample and the patches, as shown in Figure~\ref{fig:character_trajectories_dist_detail}. Furthermore, our prototypes are class-specific and invertible: it is possible to decode them for a comparison with the representatives. \section{Conclusion} Summarizing our results, we came up with a novel network architecture, along with a loss and training procedure, aligned to produce interpretable results and an inference process similar to human reasoning without a significant drop in performance. Further, we showed that the proposed method works for several time-series classification tasks and that, when excluding the class-specific prototype assignment, our approach is suitable to produce prototypes for regression and forecasting tasks. Besides, we compared the proposed method with existing prototype-based methods concerning their interpretable output and time consumption, finding ours superior in both aspects. \section*{Acknowledgements} This work was supported by the BMBF projects DeFuseNN (Grant 01IW17002) and ExplAINN (BMBF Grant 01IS19074). We thank all members of the Deep Learning Competence Center at the DFKI for their comments and support.
\section{Introduction} Graphs are a powerful tool to model complex relationships between entities. For instance, in healthcare analytics, protein-protein interactions can be modeled as a graph (called a \emph{chemical network}); and a social network can be modeled as a graph, where nodes are users and edges indicate certain social relationships among them. A graph may be treated as a data owner's intellectual property because the data owner may spend a lot of resources collecting the graph, e.g., collecting a chemical network often involves expensive and resource-consuming chemical experiments. Moreover, a graph may also contain sensitive user information, e.g., private social relationships among users. Recently, a family of machine learning techniques known as \emph{graph neural networks (GNNs)} was proposed to analyze graphs. We consider GNNs for \emph{node classification}. Specifically, given a graph, attributes of each node in the graph, and a small number of node labels, a GNN model is trained and can predict the label of each remaining unlabeled node. Due to their superior performance, we have seen growing applications of GNNs in various domains, such as healthcare analytics~\cite{GSRVD17,EPBM20}, recommender systems~\cite{FMLHZTY19}, and fraud detection~\cite{WJG19}. However, the security and privacy implications of training GNNs on graphs are largely unexplored. \mypara{Our Contributions} In this work, we take the first step to study the security and privacy implications of training GNNs on graphs. In particular, we propose the first attacks to steal a graph from the outputs of a GNN model trained on the graph. We call our attacks \emph{link stealing attacks}. Specifically, given a black-box access to a target GNN model, our attacks aim to predict whether there exists a link between any pair of nodes in the graph used to train the target GNN model. Our attacks raise serious concerns about the intellectual property, confidentiality, and/or privacy of graphs when training GNNs on them. For instance, our attacks violate the intellectual property of the data owner when it spends lots of resources collecting the graph; and our attacks violate user privacy when the graph contains sensitive social relationships among users~\cite{GL162,BHPZ17}. \noindent{\it Adversary's Background Knowledge:} We refer to the graph and nodes' attributes used to train the target GNN model as the \emph{target dataset}. We characterize an adversary's background knowledge along three dimensions, including the target dataset's \emph{nodes' attributes}, the target dataset's \emph{partial graph}, and an auxiliary dataset (called \emph{shadow dataset}) which also contains its own graph and nodes' attributes. An adversary may or may not have access to each of the three dimensions. Therefore, we obtain a comprehensive taxonomy of a threat model, in which adversaries can have 8 different types of background knowledge. \noindent{\it Attack Methodology:} We design an attack for each of the 8 different types of background knowledge, i.e., we propose 8 link stealing attacks in total. The key intuition of our attacks is that two nodes are more likely to be linked if they share more similar attributes and/or predictions from the target GNN model. For instance, when the adversary only has the target dataset's nodes' attributes, we design an unsupervised attack by calculating the distance between two nodes' attributes.
When the target dataset's partial graph is available, we use supervised learning to train a binary classifier as our attack model with features summarized from two nodes' attributes and predictions obtained from the black-box access to the target GNN model. When the adversary has a shadow dataset, we propose a \emph{transferring attack} which transfers the knowledge from the shadow dataset to the target dataset to mount our attack. \noindent{\it Evaluation:} We evaluate our 8 attacks using 8 real-world datasets. First, extensive experiments show that our attacks can effectively steal links. In particular, our attacks achieve high AUCs (area under the ROC curve). This demonstrates that the predictions of a target GNN model encode rich information about the structure of a graph that is used to train the model, and our attacks can exploit them to steal the graph structure. Second, we observe that more background knowledge leads to better attack performance in general. For instance, on the Citeseer dataset~\cite{KW17}, when an adversary has all the three dimensions of the background knowledge, our attack achieves 0.977 AUC. On the same dataset, when the adversary only has nodes' attributes, the AUC is 0.878. Third, we find that the three dimensions of background knowledge have different impacts on our attacks. Specifically, the target dataset's partial graph has the strongest impact, followed by nodes' attributes; the shadow dataset, on the other hand, has the weakest impact. Fourth, our transferring attack can achieve high AUCs. Specifically, our transferring attack achieves better performance if the shadow dataset comes from the same domain as the target dataset, e.g., both of them are chemical networks. We believe this is due to the fact that graphs from the same domain have similar structures, which leads to less information loss during transferring. Fifth, our attacks outperform conventional link prediction methods~\cite{LK07,GTMHSSSS14}, which aim to predict links between nodes based on a partial graph. \smallskip In summary, we make the following contributions. \begin{itemize} \item We propose the first link stealing attacks against graph neural networks. \item We propose a threat model to comprehensively characterize an adversary's background knowledge along three dimensions. Moreover, we propose 8 link stealing attacks for adversaries with different background knowledge. \item We extensively evaluate our 8 attacks on 8 real-world datasets. Our results show that our attacks can steal links from a GNN model effectively. \end{itemize} \section{Graph Neural Networks} \label{section:Gnn} Many important real-world datasets come in the form of graphs or networks, e.g., social networks, knowledge graphs, and chemical networks. Therefore, it is urgent to develop machine learning algorithms to fully utilize graph data. To this end, a new family of machine learning algorithms, i.e., graph neural networks (GNNs), has been proposed and has shown superior performance in various tasks~\cite{AT16,DBV16,KW17,VCCRLB18}. \mypara{Training a GNN Model} Given a graph, attributes for each node in the graph, and a small number of labeled nodes, a GNN trains a neural network to predict labels of the remaining unlabeled nodes by analyzing the graph structure and node attributes. Formally, we define the \emph{target dataset} as $\mathcal{D} = (\mathcal{A}, \mathcal{F})$, where $\mathcal{A}$ is the adjacency matrix of the graph and $\mathcal{F}$ contains all nodes' attributes.
Specifically, $\mathcal{A}_{uv}$ is an element in $\mathcal{A}$: If there exists an edge between node $u$ and node $v$, then $\mathcal{A}_{uv}=1$, otherwise $\mathcal{A}_{uv}=0$. Moreover, $\mathcal{F}_u$ represents the attributes of $u$. $\mathcal{V}$ is a set containing all nodes in the graph. Note that we consider undirected graphs in this paper, i.e., $\forall u, v \in \mathcal{V}, \mathcal{A}_{uv}=\mathcal{A}_{vu}$. A GNN method iteratively updates a node's features via aggregating its neighbors' features using a neural network, whose last layer predicts labels for nodes. Different GNN methods use slightly different aggregation rules. For instance, \emph{graph convolutional network (GCN)}, the most representative and well-established GNN method~\cite{KW17}, uses a multi-layer neural network whose architecture is determined by the graph structure. Specifically, each layer obeys the following propagation rule to aggregate the neighboring features: \begin{equation} \label{equation:GCNlayer} H^{(k+1)} = \sigma(\tilde{\mathcal{Q}}^{-\frac{1}{2}} \tilde{\AdjMatrix} \tilde{\mathcal{Q}}^{-\frac{1}{2}} H^{(k)}W^{(k)} ), \end{equation} where $\tilde{\AdjMatrix}=\mathcal{A} + I$ is the adjacency matrix of the graph with self-connections added, i.e., $I$ is the identity matrix. $\tilde{\mathcal{Q}}^{-\frac{1}{2}} \tilde{\AdjMatrix} \tilde{\mathcal{Q}}^{-\frac{1}{2}}$ is the symmetric normalized adjacency matrix and $\tilde{\mathcal{Q}}_{uu} = \sum_{v}{\tilde{\AdjMatrix}_{uv}}$. Moreover, $W^{(k)}$ is the trainable weight matrix of the $k$th layer and $\sigma(\cdot)$ is the activation function to introduce non-linearity, such as ReLU. As the input layer, we have $H^{(0)} = \mathcal{F}$. When the GCN uses a two-layer neural network, the GCN model can be described as follows: \begin{equation} \label{equation:GCNModel} \textit{softmax}(\tilde{\mathcal{Q}}^{-\frac{1}{2}} \tilde{\AdjMatrix} \tilde{\mathcal{Q}}^{-\frac{1}{2}}\sigma(\tilde{\mathcal{Q}}^{-\frac{1}{2}} \tilde{\AdjMatrix} \tilde{\mathcal{Q}}^{-\frac{1}{2}}\mathcal{F} W^{(0)}) W^{(1)}). \end{equation} Note that in most of the paper, we focus on the two-layer GCN. Later, we show that our attack can also be performed on other types of GNNs, including GraphSAGE~\cite{HYL17} and GAT~\cite{VCCRLB18} (see \autoref{section:Evaluation}). \mypara{Prediction in a GNN Model} Since all nodes' attributes and the whole graph have already been fed into the GNN model in the training phase, to predict the label of a node we only need to provide the node's ID to the trained model and obtain the prediction result. We assume the prediction result is a posterior distribution (called \emph{posteriors}) over the possible labels for the node. Our work shows that such posteriors reveal rich information about the graph structure: As mentioned before, a GNN essentially learns a node's features via aggregating its neighbors' features; if two nodes are connected, then their posteriors should be similar. We leverage this to build our attack models. We further use $f$ to denote the target GNN model and $f(u)$ to represent the posteriors of node $u$. For presentation purposes, we summarize the notations introduced here and in the following sections in \autoref{table:notation}.
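To illustrate how the posteriors that our attacks exploit are produced, the following is a minimal NumPy sketch of the two-layer GCN forward pass of Eq.~(\ref{equation:GCNModel}); it is not the implementation used in this paper, and the random weight matrices are placeholders rather than trained parameters.
\begin{verbatim}
import numpy as np

def normalize_adjacency(A):
    # Symmetrically normalized adjacency with self-loops: D^{-1/2} (A + I) D^{-1/2}.
    A_tilde = A + np.eye(A.shape[0])
    d = A_tilde.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def gcn_forward(A, F_attr, W0, W1):
    # Two-layer GCN: posteriors = softmax(A_hat ReLU(A_hat F W0) W1).
    A_hat = normalize_adjacency(A)
    H1 = np.maximum(A_hat @ F_attr @ W0, 0.0)   # ReLU activation
    return softmax(A_hat @ H1 @ W1)

# Toy usage: a 3-node path graph with random (untrained) weights.
rng = np.random.default_rng(0)
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
F_attr = rng.normal(size=(3, 4))                           # node attributes
W0, W1 = rng.normal(size=(4, 8)), rng.normal(size=(8, 2))  # hidden width 8, 2 classes
posteriors = gcn_forward(A, F_attr, W0, W1)                # each row sums to 1
\end{verbatim}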
\begin{table}[!t] \centering \caption{List of notations.} \label{table:notation} \footnotesize \begin{tabular}{r|l} \toprule Notation & Description \\ \midrule $\mathcal{D}$ & Target dataset\\ $\mathcal{A}$ & Graph of $\mathcal{D}$ represented as adjacency matrix\\ $\AdjMatrix^{*}$ & Partial graph of $\mathcal{D}$\\ $\mathcal{F}$ & Nodes' attributes of $\mathcal{D}$\\ $\mathcal{V}$ & Set of nodes of $\mathcal{D}$\\ $f$ & Target model\\ $g$ & Reference model\\ $f(u)$ & $u$'s posteriors from the target model\\ $g(u)$ & $u$'s posteriors from the reference model\\ $\mathcal{D}^{\prime}$ & Shadow dataset\\ $f^{\prime}$ & Shadow target model\\ $g^{\prime}$ & Shadow reference model\\ $\mathcal{K}$ & Adversary's knowledge\\ $d(\cdot, \cdot)$ & Distance metric\\ $\Psi(\cdot,\cdot)$ & Pairwise vector operations\\ $e(f(u))$ & Entropy of $f(u)$\\ \bottomrule \end{tabular} \end{table} \section{Problem Formulation} \label{section:Problem} In this section, we first propose a threat model to characterize an adversary's background knowledge. Then, we formally define our link stealing attack. \subsection{Threat Model} \mypara{Adversary's Goal} An adversary's goal is to infer whether a given pair of nodes $u$ and $v$ are connected in the target dataset. Inferring links between nodes leads to a severe privacy threat when the links represent sensitive relationship between users in the context of social networks. Moreover, links may be confidential and viewed as a model owner's intellectual property because the model owner may spend lots of resources collecting the links, e.g., it requires expensive medical/chemical experiments to determine the interaction/link between two molecules in a chemical network. Therefore, inferring links may also compromise a model owner's intellectual property. \mypara{Adversary's Background Knowledge} First, we assume an adversary has a black-box access to the target GNN model. In other words, the adversary can only obtain nodes' posteriors by querying the target model $f$. This is the most difficult setting for the adversary~\cite{SSSS17,SZHBFB19,SBBFZ20}. An adversary can have a black-box access to a GNN model when an organization uses GNN tools from another organization (viewed as an adversary) or the GNN model prediction results are shared among different departments within the same organization. For instance, suppose a social network service provider leverages another company’s tool to train a GNN model for fake-account detection, the provider often needs to send the prediction results of (some) nodes to the company for debugging or refining purposes. In such a scenario, the security company essentially has a black-box access to the GNN model. Note that the graph structure is already revealed to the adversary if she has a white-box access to the target GNN model as the GNN model architecture is often based on the graph structure. Then, we characterize an adversary's background knowledge along three dimensions: \begin{itemize} \item \textbf{Target Dataset's Nodes' Attributes, denoted by $\mathcal{F}$.} This background knowledge characterizes whether the adversary knows nodes' attributes $\mathcal{F}$ in $\mathcal{D}$. We also assume that the adversary knows labels of a small subset of nodes. \item \textbf{Target Dataset's Partial Graph, denoted by $\AdjMatrix^{*}$.} This dimension characterizes whether the adversary knows a subset of links in the target dataset $\mathcal{D}$. 
Since the goal of link stealing attack is to infer whether there exists an edge/link between a pair of nodes, the partial graph can be used as ground truth edges to train the adversary's attack model. \item \textbf{A Shadow Dataset, denoted by $\mathcal{D}^{\prime}$.} This is a dataset which contains its own nodes' attributes and graph. The adversary can use this to build a GNN model, referred to as \emph{shadow target model} (denoted by $f^{\prime}$) in order to perform a transferring attack. It is worth noting that the shadow dataset does not need to come from the same domain of the target dataset. For instance, the shadow dataset can be a chemical network, while the target dataset can be a citation network. However, results in \autoref{section:Evaluation} show that same-domain shadow dataset indeed leads to better transferring attack performance. \end{itemize} We denote the adversary's background knowledge as a triplet: \[ \mathcal{K}=(\mathcal{F},\AdjMatrix^{*}, \mathcal{D}^{\prime}). \] Whether the adversary has each of the three items is a binary choice, i.e., yes or no. Therefore, we have a comprehensive taxonomy with 8 different types of background knowledge, which leads to 8 different link stealing attacks. \autoref{table:attack_scenario} summarizes our attack taxonomy. \begin{table}[!t] \centering \caption{ Attack taxonomy. $\checkmark$ ($\times$) means the adversary has (does not have) the knowledge. } \label{table:attack_scenario} \footnotesize \begin{tabular}{l|ccc|l|ccc} \toprule Attack & $\mathcal{F}$ & $\AdjMatrix^{*}$ & $\mathcal{D}^{\prime}$ &Attack & $\mathcal{F}$ & $\AdjMatrix^{*}$ & $\mathcal{D}^{\prime}$\\ \midrule Attack-0 & $\times$ & $\times$ & $\times$ & Attack-4 & $\times$ & $\checkmark$& $\checkmark$\\ Attack-1 & $\times$& $\times$& $\checkmark$ & Attack-5 & $\checkmark$ & $\times$& $\checkmark$\\ Attack-2 & $\checkmark$& $\times$ & $\times$ & Attack-6 & $\checkmark$ & $\checkmark$ & $\times$\\ Attack-3 & $\times$& $\checkmark$ & $\times$ & Attack-7 & $\checkmark$ & $\checkmark$ & $\checkmark$\\ \bottomrule \end{tabular} \end{table} \subsection{Link Stealing Attack} After describing our threat model, we can formally define our link stealing attack as follows: \begin{definition}[Link Stealing Attack] Given a black-box access to a GNN model that is trained on a target dataset, a pair of nodes $u$ and $v$ in the target dataset, and an adversary's background knowledge $\mathcal{K}$, link stealing attack aims to infer whether there is a link between $u$ and $v$ in the target dataset. \end{definition} \section{Attack Taxonomy} \label{section:AttackModels} \begin{table*}[!t] \centering \caption{ Features adopted by our supervised attacks (Attack-3 and Attack 6) and transferring attacks (Attack-1, Attack-4, Attack-5, and Attack-7). Here, $(\ast)$ means the features are extracted from the shadow dataset in the training phase, and $(\star)$ means the features are extracted from both the shadow dataset and the target dataset (its partial graph) in the training phase. $d(\cdot, \cdot)$ represents distance metrics defined in \autoref{table:distance}, $\Psi(\cdot, \cdot)$ represents the pairwise vector operations defined in \autoref{table:operator}. Note that the features used in these attack models include all the distance metrics and pairwise vector operations. 
} \label{table:attack_feature} \footnotesize \setlength{\tabcolsep}{0.6em} \begin{tabular}{l|ccc|ccc|cc} \toprule Attack & $d(f(u),f(v))$ & $\Psi(f(u),f(v)))$ & $\Psi(e(f(u)),e(f(v)))$ & $d(g(u),g(v))$ & $\Psi(g(u),g(v))$ & $\Psi(e(g(u)),e(g(v)))$ & $d(\mathcal{F}_u,\mathcal{F}_v)$ & $\Psi(\mathcal{F}_u,\mathcal{F}_v)$\\ \midrule Attack-1 $\ast$ & $\checkmark$ & $\times$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ Attack-3 & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ Attack-4 $\star$ & $\checkmark$ & $\times$ & $\checkmark$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ \\ Attack-5 $\ast$ & $\checkmark$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ \\ Attack-6 & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ Attack-7 $\star$ & $\checkmark$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ & $\checkmark$ & $\checkmark$ & $\times$ \\ \bottomrule \end{tabular} \end{table*} In this section, we present the detailed constructions of all the 8 attacks in \autoref{table:attack_scenario}. Given different knowledge $\mathcal{K}$, the adversary can conduct their attacks in different ways. However, there are two problems that exist across different attacks. The first problem is \emph{node pair order}. As we consider undirected graph, when the adversary wants to predict whether there is a link between two given nodes $u$ and $v$, the output should be the same regardless of the input node pair order. The second problem is \emph{dimension mismatch}. The shadow dataset and the target dataset normally have different dimensions with respect to attributes and posteriors (as they are collected for different classification tasks). For transferring attacks that require the adversary to transfer information from the shadow dataset to the target dataset, it is crucial to keep the attack model's input features' dimension consistent no matter which shadow dataset she has. We will discuss how to solve these two problems during the description of different attacks. For presentation purposes, features used in our supervised attacks and transferring attacks are summarised in~\autoref{table:attack_feature}. \subsection{Attack Methodologies} \label{subsection:AttackTaxonomy} \mypara{Attack-0: $\mathcal{K}=(\times, \times, \times)$} We start with the most difficult setting for the adversary, that is she has no knowledge of the target dataset's nodes' attributes, partial graph, and a shadow dataset. All she has is the posteriors of nodes obtained from the target model $f$ (see \autoref{section:Gnn}). As introduced in \autoref{section:Gnn}, GNN essentially aggregates information for each node from its neighbors. This means if there is a link between two nodes, then their posteriors obtained from the target model should be closer. Following this intuition, we propose an unsupervised attack. More specifically, to predict whether there is a link between $u$ and $v$ , we calculate the distance between their posteriors, i.e., $d(f(u), f(v))$, as the predictor. We have in total experimented with 8 common distance metrics: Cosine distance, Euclidean distance, Correlation distance, Chebyshev distance, Braycurtis distance, Canberra distance, Manhattan distance, and Square-euclidean distance. Their formal definitions are in~\autoref{table:distance} in Appendix. 
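As an illustration (not the implementation used in this paper), a minimal sketch of the Attack-0 scoring for one candidate node pair could look as follows; it assumes the two posteriors have already been obtained by querying the target model and relies on the SciPy implementations of the eight metrics.
\begin{verbatim}
import numpy as np
from scipy.spatial import distance

# The eight distance metrics used by Attack-0 (cityblock is the Manhattan distance).
METRICS = [distance.cosine, distance.euclidean, distance.correlation,
           distance.chebyshev, distance.braycurtis, distance.canberra,
           distance.cityblock, distance.sqeuclidean]

def attack0_scores(post_u, post_v):
    """Distances between the posteriors f(u) and f(v); a small distance
    indicates that u and v are likely to be linked."""
    return {m.__name__: m(post_u, post_v) for m in METRICS}

# Toy posteriors of two nodes over a 4-class task.
f_u = np.array([0.70, 0.20, 0.05, 0.05])
f_v = np.array([0.65, 0.25, 0.05, 0.05])
print(attack0_scores(f_u, f_v))
\end{verbatim}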
It is worth noting that all distance metrics we adopt are symmetric, i.e., $d(f(u), f(v)) = d(f(v), f(u))$, which naturally solves the problem of \emph{node pair order}. Since the attack is unsupervised, to make a concrete prediction, the adversary needs to manually select a threshold depending on application scenarios. To evaluate our attack, we mainly use AUC, which considers a set of thresholds, as in previous works~\cite{FLJLPR14,BHPZ17,HZHBTWB19,SZHBFB19,JSBZG19,ZHSMVB20}. In addition, we propose a threshold estimation method based on clustering (see \autoref{section:Evaluation} for more details). \mypara{Attack-1: $\mathcal{K}=(\times, \times, \mathcal{D}^{\prime})$} In this attack, we broaden the adversary's knowledge with a shadow dataset, i.e., $\mathcal{D}^{\prime}$. This means the adversary can train a classifier for a supervised attack, more specifically, a \emph{transferring attack}. She first constructs a shadow target model $f^{\prime}$ with $\mathcal{D}^{\prime}$. Then, she derives the training data from $f^{\prime}$ to train her attack model. The adversary cannot directly use the posteriors obtained from the shadow target model as features to train her attack model, as the shadow dataset and the target dataset very likely have different numbers of labels, i.e., the corresponding posteriors are in different dimensions. This is the dimension mismatch problem mentioned before. To tackle this, we need to design features over posteriors. As discussed in Attack-0, for any dataset, if two nodes are linked, then their posteriors obtained from the target model should be similar. This means if the attack model can capture the similarity of two nodes' posteriors from the shadow target model, it can transfer the information to the target model. We take two approaches together to design features. The first approach is measuring distances between two nodes' posteriors. To this end, for each pair of nodes $u^{\prime}$ and $v^{\prime}$ from the shadow dataset $\mathcal{D}^{\prime}$, we adopt the same set of 8 metrics used in Attack-0 (formal definitions are listed in~\autoref{table:distance}) to measure the distances between their posteriors $f^{\prime}(u^{\prime})$ and $f^{\prime}(v^{\prime})$, and concatenate these different distances together. This leads to an 8-dimensional vector. The second approach is to use entropy to describe each posterior, inspired by previous works~\cite{NSH18,JSBZG19}. Formally, for the posterior of node $u^{\prime}$ obtained from the shadow target model $f^{\prime}$, its entropy is defined as follows. \begin{equation} \label{equation:Entropy} e(f^{\prime}(u^{\prime})) = -\sum_i f^{\prime}_i(u^{\prime}) \log(f^{\prime}_i(u^{\prime})) \end{equation} where $f^{\prime}_i(u^{\prime})$ denotes the $i$-th element of $f^{\prime}(u^{\prime})$. Then, for each pair of nodes $u^{\prime}$ and $v^{\prime}$ from the shadow dataset, we obtain two entropies $e(f^{\prime}(u^{\prime}))$ and $e(f^{\prime}(v^{\prime}))$. To eliminate the node pair order problem for these entropies, we further take the approach of Grover and Leskovec~\cite{GL16} by applying pairwise vector operations, denoted by $\Psi(\cdot, \cdot)$. In total, we have used all the 4 operations defined in \autoref{table:operator} (in Appendix) for our attack. Note that these operations in \autoref{table:operator} are applied to two single numbers, i.e., scalars, in this attack. However, they can also be applied to vectors, and we will adopt them again on posteriors and nodes' attributes in other attacks.
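A minimal sketch of the resulting feature construction for one node pair from the shadow dataset is given below; the choice of the four pairwise operations (average, Hadamard, weighted-L1, and weighted-L2, in the style of Grover and Leskovec~\cite{GL16}) is our assumption, since \autoref{table:operator} is not reproduced here, and the clipping inside the entropy is added only for numerical safety.
\begin{verbatim}
import numpy as np
from scipy.spatial import distance

DISTANCES = [distance.cosine, distance.euclidean, distance.correlation,
             distance.chebyshev, distance.braycurtis, distance.canberra,
             distance.cityblock, distance.sqeuclidean]

def entropy(posterior):
    """Entropy of a posterior; small values are clipped for numerical safety."""
    p = np.clip(posterior, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def pairwise_ops(a, b):
    """Assumed node2vec-style pairwise operations on two scalars (or vectors)."""
    return [(a + b) / 2.0, a * b, np.abs(a - b), (a - b) ** 2]

def attack1_features(post_u, post_v):
    """12-dimensional feature vector: 8 posterior distances plus
    4 pairwise operations applied to the two posterior entropies."""
    dists = [m(post_u, post_v) for m in DISTANCES]
    ent_feats = pairwise_ops(entropy(post_u), entropy(post_v))
    return np.array(dists + ent_feats)
\end{verbatim}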
In total, the feature vector used for training the attack model is assembled from the 8 different distances between two nodes' posteriors from the shadow target model and the 4 features obtained from pairwise vector operations on the two nodes' posteriors' entropies. Regarding labels for the training set, the adversary uses all the links in $\mathcal{D}^{\prime}$ and samples the same number of node pairs that are not linked (see \autoref{section:Evaluation} for more details). We adopt an MLP as our attack model. \mypara{Attack-2: $\mathcal{K}=(\mathcal{F}, \times, \times)$} In this attack, we assume that the adversary has the knowledge of the target dataset's nodes' attributes $\mathcal{F}$. Since the adversary has no knowledge of the partial graph or a shadow dataset, her attack here is also unsupervised (similar to Attack-0). We again rely on the distance metrics to perform our attack. For each pair of nodes $u$ and $v$ from the target dataset, we consider four types of information to measure distance with all the metrics listed in \autoref{table:distance}. Similar to Attack-0, we experimentally decide which is the most suitable distance metric for Attack-2. \begin{itemize} \item $d(f(u), f(v))$. The first type is the same as the method for Attack-0, i.e., the distance between the posteriors of $u$ and $v$ from the target model $f$, namely $f(u)$ and $f(v)$. \item $d(\mathcal{F}_u, \mathcal{F}_v)$. The second type is calculating the pairwise distance over $u$ and $v$'s attributes $\mathcal{F}_u$ and $\mathcal{F}_v$. \item $d(f(u),f(v)) - d(g(u),g(v))$. For the third type, since we have the target dataset's nodes' attributes (as well as a subset of their corresponding labels), we train a separate MLP model, namely a \emph{reference model} (denoted by $g$). Our intuition is that if two nodes are connected, the distance between their posteriors from the target model should be smaller than the corresponding distance from the reference model. Therefore, we calculate $d(f(u),f(v)) - d(g(u),g(v))$ to make the prediction. \item $d(g(u), g(v))$. For the fourth type, we measure the distance over $u$ and $v$'s posteriors from the reference model. \end{itemize} \mypara{Attack-3: $\mathcal{K}=(\times, \AdjMatrix^{*}, \times)$} In this scenario, the adversary has access to the partial graph $\AdjMatrix^{*}$ of the target dataset. For the attack model, we rely on links from the known partial graph as ground truth labels to train the attack model (we again adopt an MLP). Features used for Attack-3 are summarized in \autoref{table:attack_feature}. For each pair of nodes $u$ and $v$ from the target dataset, we calculate the same set of features proposed for Attack-1 on their posteriors and posteriors' entropies. Besides, since we can directly train the attack model on the partial target graph (i.e., we do not face the dimension mismatch problem), we further define new features by applying the pairwise vector operations listed in \autoref{table:operator} to $f(u)$ and $f(v)$. \mypara{Attack-4: $\mathcal{K}=(\times, \AdjMatrix^{*}, \mathcal{D}^{\prime})$} In this attack, the adversary has the knowledge of the partial graph $\AdjMatrix^{*}$ of the target dataset and a shadow dataset $\mathcal{D}^{\prime}$. To take both types of knowledge into consideration, for each pair of nodes either from the shadow dataset or the partial graph of the target dataset, we calculate the same set of features over posteriors as proposed in Attack-1.
This means the only difference between Attack-4 and Attack-1 is that the training set for Attack-4 also includes information from the target dataset's partial graph (see \autoref{table:attack_feature}). Different from Attack-3, Attack-4 cannot apply the pairwise vector operations to $f(u)$ and $f(v)$. This is due to the dimension mismatch problem, as the adversary needs to take both $\AdjMatrix^{*}$ and $\mathcal{D}^{\prime}$ into account for her attack.

\mypara{Attack-5: $\mathcal{K}=(\mathcal{F}, \times, \mathcal{D}^{\prime})$} In this attack, the adversary has the knowledge of the target dataset's nodes' attributes $\mathcal{F}$ and a shadow dataset $\mathcal{D}^{\prime}$. As we do not have $\AdjMatrix^{*}$ to train the attack model, we need to rely on the graph of the shadow dataset. To this end, we first calculate the same set of features used for Attack-1. Moreover, as we have the target dataset's nodes' attributes, we further build a reference model (as in Attack-2), and also a shadow reference model in order to transfer more knowledge from the shadow dataset for the attack. For this, we build the same set of features as in Attack-1 over the posteriors obtained from the shadow reference model, i.e., the distances of posteriors (\autoref{table:distance}) and the pairwise vector operations performed on posteriors' entropies (\autoref{table:operator}). In addition, we also calculate the 8 different distances over the shadow dataset's nodes' attributes.

\mypara{Attack-6: $\mathcal{K}=(\mathcal{F}, \AdjMatrix^{*}, \times)$} In this scenario, the adversary has access to the target dataset's nodes' attributes $\mathcal{F}$ and the partial target graph $\AdjMatrix^{*}$. As this is a supervised learning setting, we build an MLP using links from the partial graph as ground truth labels. The adversary first adopts the same set of features defined over posteriors obtained from the target model as proposed in Attack-3. Then, the adversary builds a reference model over the target dataset's nodes' attributes and calculates the same set of features over posteriors obtained from the reference model. In the end, we further calculate the distances of the target dataset's nodes' attributes as another set of features.

\mypara{Attack-7: $\mathcal{K}=(\mathcal{F}, \AdjMatrix^{*}, \mathcal{D}^{\prime})$} This is the last attack, with the adversary having all three types of knowledge. The set of features for this attack is the same as the one used in Attack-5 (\autoref{table:attack_feature}). The only difference lies in the training phase: we can use the partial graph from the target dataset together with the graph from the shadow dataset as the ground truth. We expect this to lead to better performance than Attack-5. However, since this attack also relies on the information of the shadow dataset, the features used here are a subset of the ones for Attack-6; this is similar to the difference between Attack-4 and Attack-3. Note that if the adversary does not take the shadow dataset into consideration, this scenario is equivalent to the one for Attack-6.

\subsection{Summary}
We propose 8 attack scenarios based on combinations of the knowledge that the adversary could have. They can be divided into three categories. The first category is unsupervised attacks, i.e., Attack-0 and Attack-2, where the adversary has the knowledge of neither the partial graph from the target dataset nor a shadow dataset.
In these scenarios, the adversary can use distance metrics over posteriors or nodes' attributes to infer links. The second category is the supervised attacks, including Attack-3 and Attack-6, where the adversary has the knowledge of the partial graph from the target dataset but does not have a shadow dataset. In these scenarios, the adversary can use different distances and pairwise vector operations over nodes' posteriors (and the corresponding entropies) from the target model and over their attributes to build features. The third category is the transferring attacks (supervised), including Attack-1, Attack-4, Attack-5, and Attack-7, where the adversary has the knowledge of a shadow dataset. In these scenarios, the adversary can use distance metrics over posteriors/nodes' attributes and pairwise operations over posteriors' entropies as the bridge to transfer the knowledge from the shadow dataset and perform link stealing attacks. It is worth noting that for Attack-4 and Attack-7, if the adversary leaves the shadow dataset out of consideration, she will not face the dimension mismatch problem and can apply the same attack methods as Attack-3 and Attack-6, respectively.

\section{Evaluation}
\label{section:Evaluation}

This section presents the evaluation results of our 8 attacks. We first introduce our experimental setup. Then, we present detailed results for the different attacks. Finally, we summarize our experimental findings.

\subsection{Experimental Setup}

\mypara{Datasets} We utilize 8 public datasets, including Citeseer~\cite{KW17}, Cora~\cite{KW17}, Pubmed~\cite{KW17}, AIDS~\cite{RB08}, COX2~\cite{SOW03}, DHFR~\cite{SOW03}, ENZYMES~\cite{DD03}, and PROTEINS\_full~\cite{BOSVSK05}, to conduct our experiments. These datasets are widely used as benchmark datasets for evaluating GNNs~\cite{KW17,VCCRLB18,DJLBB20,EPBM20}. Among them, Citeseer, Cora, and Pubmed are citation datasets with nodes representing publications and links indicating citations among these publications. The other five datasets are chemical datasets, in which each node is a molecule and each link represents an interaction between two molecules. All these datasets have nodes' attributes and labels.

\mypara{Datasets Configuration} For each dataset, we train a target model and a reference model. In particular, we randomly sample 10\% of the nodes and use their ground truth labels to train the target model and the reference model.\footnote{We do not train the reference model for attacks when $\mathcal{F}$ is unavailable.} Recall that several attacks require the knowledge of the target dataset's partial graph. To simulate and fairly evaluate different attacks, we construct an \emph{attack dataset} which contains node pairs and labels representing whether they are linked or not. Specifically, we first select all node pairs that are linked. Then, we randomly sample the same number of node pairs that are not linked. We note that such a negative sampling approach follows the common practice in the literature of link prediction~\cite{GL16,BHPZ17,Z19}. Furthermore, the main metric we use, i.e., AUC (introduced below), is insensitive to the class imbalance issue~\cite{FLJLPR14,BHPZ17,PZ17}, in contrast to accuracy.
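The attack dataset construction just described can be sketched as follows. This is only an illustration of the sampling described above, not our exact implementation; the routine assumes the graph is sparse enough that sufficiently many unlinked pairs exist.
\begin{verbatim}
import random

def build_attack_dataset(edges, nodes, seed=0):
    """All linked pairs as positives plus an equal number of
    randomly sampled unlinked pairs as negatives."""
    rng = random.Random(seed)
    nodes = list(nodes)
    positives = {tuple(sorted(e)) for e in edges}
    negatives = set()
    while len(negatives) < len(positives):
        u, v = rng.sample(nodes, 2)
        pair = tuple(sorted((u, v)))
        if pair not in positives:
            negatives.add(pair)
    data = [(pair, 1) for pair in positives] + [(pair, 0) for pair in negatives]
    rng.shuffle(data)
    return data
\end{verbatim}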
Next, we split the attack dataset randomly by half into an \emph{attack training dataset} and an \emph{attack testing dataset}.\footnote{We perform additional experiments and observe that the training set size does not have a strong impact on the attack performance; results are presented in \autoref{figure:attack_different_ratio} in Appendix.} We use the attack training dataset to train our attack models when the target dataset's partial graph is part of the adversary's knowledge. We use the attack testing dataset to evaluate all our attacks. For the attacks that have a shadow dataset, we also construct an attack dataset on the shadow dataset to train the attack model. Note that we do not split this attack dataset because we do not use it for evaluation.

\mypara{Metric} We use AUC (area under the ROC curve) as our main evaluation metric. AUC is frequently used in binary classification tasks~\cite{FLJLPR14,BHPZ17,PZ17,PZ172,HZHBTWB19,Z19,JSBZG19}; it is threshold independent. For convenience, we refer to node pairs that are linked as \emph{positive node pairs} and those that are not linked as \emph{negative node pairs}. If we rank node pairs according to the probability that there is a link between them, then AUC is the probability that a randomly selected positive node pair ranks higher than a randomly selected negative node pair. When performing random guessing, i.e., ranking all node pairs uniformly at random, the AUC value is 0.5. Note that we also calculate Precision and Recall for all supervised attacks (see \autoref{table:attack1_pr}, \autoref{table:attack3_pr}, \autoref{table:attack4_pr}, \autoref{table:attack5_pr}, \autoref{table:attack6_pr}, and \autoref{table:attack7_pr} in Appendix).

\mypara{Models} We use a graph convolutional network with 2 hidden layers for both the target model and the shadow target model, and assume they share the same architecture (see \autoref{section:Problem}). Note that we also evaluate the scenario where the target model and the shadow target model have different architectures later in this section and find that the performance of our attacks is similar. The number of neurons in the hidden layers is set to 16. We adopt the frequently used ReLU and softmax as activation functions for the first hidden layer and the second hidden layer, respectively. Note that we append Dropout (the rate is 0.5) to the output of the hidden layer to prevent overfitting. We train for 100 epochs with a learning rate of 0.01. Cross-entropy is adopted as the loss function, and we use the Adam optimizer to update the model parameters. Our GNNs are implemented based on publicly available code.\footnote{\url{https://github.com/tkipf/gcn}} Experimental results show that our GNNs achieve similar performance as reported in other papers. We omit the details to save space.

We use an MLP with 2 hidden layers as the reference model and the shadow reference model. Hyperparameters, including the number of neurons in the hidden layers, activation functions, loss function, optimizer, number of epochs, and learning rate, are the same as those of the target model.

\begin{table*}[!t]
\centering
\caption{
Average AUC with standard deviation for Attack-1 on all the 8 datasets. Best results are highlighted in bold.
} \label{table:attack1} \footnotesize \setlength{\tabcolsep}{0.4em} \begin{tabular}{l|cccccccc} \toprule & \multicolumn{8}{c}{Shadow Dataset}\\ Target Dataset & AIDS & COX2 & DHFR & ENZYMES & PROTEINS\_full & Citeseer & Cora & Pubmed\\ \midrule AIDS& - & 0.720 $\pm$ 0.009 & 0.690 $\pm$ 0.005 & \textbf{0.730 $\pm$ 0.010} & 0.720 $\pm$ 0.005 & 0.689 $\pm$ 0.019 & 0.650 $\pm$ 0.025 & 0.667 $\pm$ 0.014\\ COX2 & 0.755 $\pm$ 0.032& - & 0.831 $\pm$ 0.005 & 0.739 $\pm$ 0.116 & \textbf{0.832 $\pm$ 0.009} & 0.762 $\pm$ 0.009 & 0.773 $\pm$ 0.008 & 0.722 $\pm$ 0.024\\ DHFR & 0.689 $\pm$ 0.004 & \textbf{0.771 $\pm$ 0.004}& - & 0.577 $\pm$ 0.044 & 0.701 $\pm$ 0.010 & 0.736 $\pm$ 0.005 & 0.740 $\pm$ 0.003 & 0.663 $\pm$ 0.010\\ ENZYMES & \textbf{0.747 $\pm$ 0.014} & 0.695 $\pm$ 0.023 & 0.514 $\pm$ 0.041& - & 0.691 $\pm$ 0.030 & 0.680 $\pm$ 0.012 & 0.663 $\pm$ 0.009 & 0.637 $\pm$ 0.018\\ PROTEINS\_full & 0.775 $\pm$ 0.020 & 0.821 $\pm$ 0.016 & 0.528 $\pm$ 0.038 & 0.822 $\pm$ 0.020& - & \textbf{0.823 $\pm$ 0.004} & 0.809 $\pm$ 0.015 & 0.809 $\pm$ 0.013\\ Citeseer & 0.801 $\pm$ 0.040 & 0.920 $\pm$ 0.006 & 0.842 $\pm$ 0.036 & 0.846 $\pm$ 0.042 & 0.848 $\pm$ 0.015& - & \textbf{0.965 $\pm$ 0.001} & 0.942 $\pm$ 0.003\\ Cora & 0.791 $\pm$ 0.019 & 0.884 $\pm$ 0.005 & 0.811 $\pm$ 0.024 & 0.804 $\pm$ 0.048 & 0.869 $\pm$ 0.012 & \textbf{0.942 $\pm$ 0.001}& - & 0.917 $\pm$ 0.002\\ Pubmed & 0.705 $\pm$ 0.039 & 0.796 $\pm$ 0.007 & 0.704 $\pm$ 0.042 & 0.708 $\pm$ 0.067 & 0.752 $\pm$ 0.014 & 0.883 $\pm$ 0.006 & \textbf{0.885 $\pm$ 0.005}& -\\ \bottomrule \end{tabular} \end{table*} We use an MLP with 3 hidden layers as our attack model. The number of neurons for all hidden layers is 32. ReLU is adopted as the activation function for hidden layers and softmax is used as the output activation function. We append Dropout (the rate is 0.5) to each hidden layer to prevent overfitting. We train 50 epochs with a learning rate of 0.001. The loss function is cross-entropy and the optimizer is Adam. We run all experiments with this setting for 5 times and report the average value and the standard deviation of AUC scores. Note that for Attack-0 and Attack-2, the AUC scores keep the same since these two attacks are unsupervised. \subsection{Attack Performance} \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{fig/attack_result_nnn.pdf} \caption{ AUC for Attack-0 on all the 8 datasets with all the 8 distance metrics. The x-axis represents the dataset and the y-axis represents the AUC score. } \label{figure:attack0} \end{figure} \begin{figure*}[!t] \centering \includegraphics[width=2.0\columnwidth]{fig/target_posterior_correlation_ylog.pdf} \caption{ The Correlation distance distribution between nodes' posteriors for positive node pairs and negative node pairs on all the 8 datasets. The x-axis represents Correlation distance and the y-axis represents the number of node pairs. 
}
\label{figure:TargetCorrelation}
\end{figure*}

\begin{figure*}[!t]
\centering
\begin{subfigure}{1.0\columnwidth}
\centering
\includegraphics[width=1.0\columnwidth]{fig/shadow_cora_target_citeseer.pdf}
\caption{}
\label{figure:transfer_cora_citeseer}
\end{subfigure}
\begin{subfigure}{1.0\columnwidth}
\centering
\includegraphics[width=1.0\columnwidth]{fig/shadow_cora_target_ENZYMES.pdf}
\caption{}
\label{figure:transfer_cora_ensymes}
\end{subfigure}
\caption{
The last hidden layer's output from the attack model of Attack-1 for 200 randomly sampled positive node pairs and 200 randomly sampled negative node pairs, projected into a 2-dimensional space using t-SNE. (a) Cora as the shadow dataset and Citeseer as the target dataset; (b) Cora as the shadow dataset and ENZYMES as the target dataset.
}
\label{figure:transfer_tsne}
\end{figure*}

\mypara{Attack-0: $\mathcal{K}=(\times, \times, \times)$} In this attack, the adversary only relies on measuring the distance between two nodes' posteriors obtained from the target model. We compare 8 different distance metrics, and \autoref{figure:attack0} shows the results. First, we observe that Correlation distance achieves the best performance, followed by Cosine distance, across all datasets. In contrast, Canberra distance performs the worst. For instance, on the Citeseer dataset, the AUC scores for Correlation distance and Cosine distance are 0.959 and 0.946, respectively, while the AUC score for Canberra distance is 0.801. Note that both Correlation distance and Cosine distance measure the inner product of two vectors, or the ``angle'' between two vectors, while the other distance metrics do not. Second, we find that the performance of the same metric on different datasets is different. For instance, the AUC of Correlation distance on Citeseer is 0.959 compared to 0.635 on ENZYMES.

As mentioned in \autoref{section:AttackModels}, unsupervised attacks cannot provide a concrete prediction. To tackle this, we propose to use clustering, such as K-means. Concretely, we obtain a set of node pairs' distances and perform K-means on these distances with K set to 2. The cluster with the lower (higher) average distance value is considered the set of positive (negative) node pairs. Our experiments show that this method is effective. For instance, on the Citeseer dataset, we obtain 0.788 Precision, 0.991 Recall, and 0.878 F1-Score. The complete results are summarized in \autoref{table:attack_0_prf} in Appendix. Another method we could use is to assume that the adversary has a certain number of labeled edges, either from the target dataset or the shadow dataset. The former follows the same setting as our Attack-3, Attack-4, Attack-6, and Attack-7, and the latter is equivalent to Attack-1 and Attack-5. The corresponding results will be shown later.

\autoref{figure:TargetCorrelation} shows the frequency of the Correlation distance computed on posteriors obtained from the target model for both positive node pairs and negative node pairs in the attack testing datasets. The x-axis is the value of the Correlation distance and the y-axis is the number of pairs. A clear trend is that for all datasets, the Correlation distance for positive node pairs is much smaller than that for negative node pairs. We select the top 50\% of node pairs with the lowest Correlation distance, group them, and calculate the AUC for each group. Due to the space limit, we only show the result on Pubmed (\autoref{table:attack0_corr}).
We can see that the AUC drops when the Correlation distance increases, which indicates that Attack-0 works better on node pairs with lower Correlation distance. In general, the posteriors for positive node pairs are ``closer'' than those for negative node pairs. This verifies our intuition in \autoref{section:AttackModels}: a GNN can be considered an aggregation function over the neighborhoods; if two nodes are linked, they aggregate each other's features and therefore become closer.

\begin{table}[!t]
\centering
\caption{
AUC at different Correlation distance levels for Attack-0 on Pubmed.
}
\label{table:attack0_corr}
\footnotesize
\begin{tabular}{c|c|c|c}
\toprule
Correlation Distance & AUC & Correlation Distance & AUC \\
\midrule
0.00-0.01 & 0.608 & 0.02-0.03 & 0.407 \\
0.01-0.02 & 0.535 & 0.03-0.04 & 0.399 \\
\bottomrule
\end{tabular}
\end{table}

\mypara{Attack-1: $\mathcal{K}=(\times, \times, \mathcal{D}^{\prime})$} In this attack, the adversary can leverage a shadow dataset. In particular, for each dataset, we use one of the remaining datasets as the shadow dataset to perform the attack. \autoref{table:attack1} summarizes the results. We leave the diagonal blank because we do not use the target dataset itself as its shadow dataset. As we can see from \autoref{table:attack1}, the AUC scores from the best-performing shadow dataset show a consistent improvement on almost all datasets compared to Attack-0. One exception is the COX2 dataset, on which the AUC score decreases by 0.02. The results indicate that the adversary can indeed transfer the knowledge from the shadow dataset to enhance her attack.

An interesting finding is that for a chemical dataset, the best shadow dataset is normally a chemical dataset as well. Similar results can be observed for citation datasets. This shows that it is more effective to transfer knowledge across datasets from the same domain. To better understand this, we extract the attack model's last hidden layer's output (32-dimensional) for positive node pairs and negative node pairs and project them into a 2-dimensional space using t-Distributed Stochastic Neighbor Embedding (t-SNE)~\cite{MH08}. \autoref{figure:transfer_cora_citeseer} shows the results for Citeseer when using Cora as the shadow dataset, both of which are citation datasets. We can see that the positive (negative) node pairs from both the target dataset and the shadow dataset cluster into similar positions, which indicates that the positive (negative) node pairs from both datasets have similar distributions. This means that if the attack model learns a decision boundary to separate positive node pairs from negative node pairs on the shadow dataset, this decision boundary can easily be carried over to the target dataset. In contrast, \autoref{figure:transfer_cora_ensymes} shows the results for ENZYMES (a chemical dataset) when using Cora (a citation dataset) as the shadow dataset. We see that the positive (negative) node pairs from the shadow dataset and the target dataset are distributed differently in the 2-dimensional space. For example, the positive node pairs for Cora are clustered in the outer region of the circular area, whereas the positive node pairs for ENZYMES are clustered in the inner region. Therefore, it is hard for the adversary to perform an effective transferring attack. The underlying reason is that graphs from the same domain have analogous graph structures and similar features.
This leads to less information loss for our transferring attack. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{fig/attack_result_ynn.pdf} \caption{ Average AUC for Attack-2 on all the 8 datasets with all the 4 types of information considered. The x-axis represents the dataset and the y-axis represents the AUC score. } \label{figure:attack2} \end{figure} \mypara{Attack-2: $\mathcal{K}=(\mathcal{F}, \times, \times)$} In Attack-2, the adversary has the knowledge of the target dataset's nodes' attributes. As discussed in \autoref{section:AttackModels}, she trains a reference model $g$ by herself from $\mathcal{F}$. We compare four types of information mentioned in \autoref{section:AttackModels}, and the results are shown in \autoref{figure:attack2}. Note that we only show the results calculated with Correlation distance out of the 8 distance metrics (\autoref{table:distance}) since Correlation distance achieves the best performance in almost all settings. We can see that in all chemical datasets and one citation dataset, using the distance of target dataset's nodes' attributes leads to the best performance. For the other two citation datasets, using the distance between posteriors of the target model can get better performance. Nodes' attributes' dimensions are higher in citation datasets than in chemical datasets. In other words, the node attributes for citation datasets are sparser. For instance, we observe that most attributes are 0 in citation datasets. Therefore, we conclude that the attack can get better performance using the Correlation distance between posteriors of the target model when the target dataset's nodes' attributes are in high dimension. \begin{table}[!t] \centering \caption{ Average AUC with standard deviation for Attack-3 on all the 8 datasets. } \label{table:attack3} \footnotesize \begin{tabular}{l|c|l|c} \toprule Dataset & AUC & Dataset & AUC \\ \midrule AIDS & 0.961 $\pm$ 0.001 & PROTEINS\_full & 0.958 $\pm$ 0.000\\ COX2 & 0.939 $\pm$ 0.002 & Citeseer & 0.973 $\pm$ 0.000\\ DHFR & 0.934 $\pm$ 0.001 & Cora & 0.954 $\pm$ 0.001\\ ENZYMES & 0.882 $\pm$ 0.001 & Pubmed & 0.947 $\pm$ 0.001\\ \bottomrule \end{tabular} \end{table} \begin{table*}[!t] \centering \caption{ Average AUC with standard deviation for Attack-4 on all the 8 datasets. Best results are highlighted in bold. 
} \label{table:attack4} \footnotesize \setlength{\tabcolsep}{0.4em} \begin{tabular}{l|cccccccc} \toprule & \multicolumn{8}{c}{Shadow Dataset}\\ Target Dataset & AIDS & COX2 & DHFR & ENZYMES & PROTEINS\_full & Citeseer & Cora & Pubmed\\ \midrule AIDS& - & 0.750 $\pm$ 0.009 & \textbf{0.763 $\pm$ 0.010} & 0.733 $\pm$ 0.007 & 0.557 $\pm$ 0.009 & 0.729 $\pm$ 0.015 & 0.702 $\pm$ 0.010 & 0.673 $\pm$ 0.009\\ COX2 & 0.802 $\pm$ 0.031& - & \textbf{0.866 $\pm$ 0.004} & 0.782 $\pm$ 0.012 & 0.561 $\pm$ 0.030 & 0.860 $\pm$ 0.002 & 0.853 $\pm$ 0.004 & 0.767 $\pm$ 0.023\\ DHFR & 0.758 $\pm$ 0.022 & \textbf{0.812 $\pm$ 0.005} & - & 0.662 $\pm$ 0.030 & 0.578 $\pm$ 0.067 & 0.799 $\pm$ 0.002 & 0.798 $\pm$ 0.009 & 0.736 $\pm$ 0.005\\ ENZYMES & \textbf{0.741 $\pm$ 0.010} & 0.684 $\pm$ 0.024 & 0.670 $\pm$ 0.008& - & 0.733 $\pm$ 0.019 & 0.624 $\pm$ 0.002 & 0.627 $\pm$ 0.014 & 0.691 $\pm$ 0.012\\ PROTEINS\_full & 0.715 $\pm$ 0.009 & 0.802 $\pm$ 0.025 & 0.725 $\pm$ 0.041 & 0.863 $\pm$ 0.010& - & 0.784 $\pm$ 0.031 & 0.815 $\pm$ 0.012 & \textbf{0.867 $\pm$ 0.003}\\ Citeseer & 0.832 $\pm$ 0.078 & 0.940 $\pm$ 0.005 & 0.914 $\pm$ 0.007 & 0.879 $\pm$ 0.062 & 0.833 $\pm$ 0.088& - & \textbf{0.967 $\pm$ 0.001} & 0.955 $\pm$ 0.003\\ Cora & 0.572 $\pm$ 0.188 & 0.899 $\pm$ 0.003 & 0.887 $\pm$ 0.014 & 0.878 $\pm$ 0.045 & 0.738 $\pm$ 0.168 & \textbf{0.945 $\pm$ 0.001} & - & 0.924 $\pm$ 0.005\\ Pubmed & 0.777 $\pm$ 0.056 & 0.893 $\pm$ 0.001 & 0.90 $\pm$ 0.006 & 0.866 $\pm$ 0.002 & 0.806 $\pm$ 0.042 & \textbf{0.907 $\pm$ 0.004} & 0.902 $\pm$ 0.001& -\\ \bottomrule \end{tabular} \end{table*} \mypara{Attack-3: $\mathcal{K}=(\times, \AdjMatrix^{*}, \times)$} \autoref{table:attack3} shows the results for this attack. With the knowledge of the target dataset's partial graph, the average AUC score for all cases is over 0.9. Compared to Attack-2, the AUC scores on chemical datasets have an improvement over 10\% and the AUC scores on citation datasets have an improvement over 2\%.\footnote{Attack-2 achieves relatively high AUC scores on citation datasets.} Compared to Attack-1 and Attack-2, Attack-3 achieves the best performance, this indicates the target dataset's partial graph is the most important component for an adversary for performing a link stealing attack. The reason is that the partial graph contains the ground truth links in the target dataset, which can be directly exploited by the attack model. We further investigate the contribution of each feature set to the final prediction following the methodology of Dong et al.~\cite{DJC15}. Concretely, when studying one feature set, we set other features' value to 0. As shown in \autoref{figure:component}, the features extracted by applying pairwise operation over posteriors are most useful for the final prediction, followed by the features based on posteriors with different distance metrics. We note that our attack also achieves over 0.70 AUC on average when only using pairwise operation over entropy of posteriors as features. Moreover, our attack achieves the best performance when taking all the three feature sets together, which implies the combination of different features indeed improves the overall performance. \begin{figure}[!t] \centering \includegraphics[width=\columnwidth]{fig/component_analysis_result_nyn.pdf} \caption{ Average AUC for Attack-3 on all the 8 datasets with different set of features. The x-axis represents the dataset and the y-axis represents the AUC score. 
}
\label{figure:component}
\end{figure}

\mypara{Attack-4: $\mathcal{K}=(\times, \AdjMatrix^{*}, \mathcal{D}^{\prime})$} \autoref{table:attack4} shows the results for Attack-4. First, compared to Attack-1 ($\mathcal{K}=(\times, \times, \mathcal{D}^{\prime})$), the overall performance of Attack-4 improves with the help of the target dataset's partial graph $\AdjMatrix^{*}$. This is reasonable since the target dataset's partial graph contains some ground truth links from the target dataset. Second, we note that the performance of Attack-4 is worse than that of Attack-3 ($\mathcal{K}=(\times, \AdjMatrix^{*}, \times)$). Intuitively, the performance should be better since Attack-4 has more background knowledge. The reason for the performance degradation is that we do not take the pairwise vector operations (\autoref{table:operator}) over posteriors as input for Attack-4, since we want to learn information from both the target dataset and the shadow dataset and need to eliminate the dimension mismatch issue (as discussed in \autoref{section:AttackModels}). Moreover, the results also indicate that, compared to the shadow dataset, the target dataset's partial graph is more informative.

\mypara{Attack-5: $\mathcal{K}=(\mathcal{F}, \times, \mathcal{D}^{\prime})$} In Attack-5, the adversary has the knowledge of the target dataset's nodes' attributes as well as a shadow dataset; evaluation results are shown in \autoref{table:attack5}. We observe that Attack-5 performs better than both Attack-1 (only with $\mathcal{D}^{\prime}$) and Attack-2 (only with $\mathcal{F}$). This shows that the combination of $\mathcal{F}$ and $\mathcal{D}^{\prime}$ leads to better link stealing performance. Furthermore, we observe similar trends as for Attack-1, that is, the attack performs better if the shadow dataset comes from the same domain as the target dataset.

\begin{table*}[!t]
\centering
\caption{
Average AUC with standard deviation for Attack-5 on all the 8 datasets. Best results are highlighted in bold.
} \label{table:attack5} \footnotesize \setlength{\tabcolsep}{0.4em} \begin{tabular}{l|cccccccc} \toprule & \multicolumn{8}{c}{Shadow Dataset}\\ Target Dataset & AIDS & COX2 & DHFR & ENZYMES & PROTEINS\_full & Citeseer & Cora & Pubmed\\ \midrule AIDS& - & 0.841 $\pm$ 0.003 & 0.846 $\pm$ 0.009 & 0.795 $\pm$ 0.016 & \textbf{0.875 $\pm$ 0.002} & 0.839 $\pm$ 0.006 & 0.793 $\pm$ 0.015 & 0.787 $\pm$ 0.008\\ COX2 & 0.832 $\pm$ 0.036& - & \textbf{0.977 $\pm$ 0.002} & 0.874 $\pm$ 0.020 & 0.946 $\pm$ 0.003 & 0.911 $\pm$ 0.004 & 0.908 $\pm$ 0.004 & 0.887 $\pm$ 0.004\\ DHFR & 0.840 $\pm$ 0.018 & \textbf{0.988 $\pm$ 0.001}& - & 0.757 $\pm$ 0.032 & 0.970 $\pm$ 0.004 & 0.909 $\pm$ 0.010 & 0.911 $\pm$ 0.009 & 0.860 $\pm$ 0.004\\ ENZYMES & 0.639 $\pm$ 0.005 & 0.581 $\pm$ 0.010 & 0.587 $\pm$ 0.005& - & 0.608 $\pm$ 0.001 & \textbf{0.685 $\pm$ 0.005} & 0.674 $\pm$ 0.007 & 0.663 $\pm$ 0.002\\ PROTEINS\_full & 0.948 $\pm$ 0.007 & \textbf{0.981 $\pm$ 0.004} & 0.968 $\pm$ 0.014 & 0.818 $\pm$ 0.017& - & 0.970 $\pm$ 0.002 & 0.876 $\pm$ 0.010 & 0.885 $\pm$ 0.003\\ Citeseer & 0.773 $\pm$ 0.048 & 0.666 $\pm$ 0.018 & 0.652 $\pm$ 0.020 & 0.860 $\pm$ 0.049 & 0.794 $\pm$ 0.009& - & \textbf{0.969 $\pm$ 0.002} & 0.967 $\pm$ 0.001\\ Cora & 0.743 $\pm$ 0.017 & 0.587 $\pm$ 0.012 & 0.568 $\pm$ 0.009 & 0.778 $\pm$ 0.052 & 0.686 $\pm$ 0.018 & \textbf{0.956 $\pm$ 0.001}& - & 0.936 $\pm$ 0.002\\ Pubmed & 0.777 $\pm$ 0.030 & 0.661 $\pm$ 0.018 & 0.645 $\pm$ 0.008 & 0.786 $\pm$ 0.041 & 0.741 $\pm$ 0.008 & 0.938 $\pm$ 0.007 & \textbf{0.941 $\pm$ 0.007}& -\\ \bottomrule \end{tabular} \end{table*} \begin{table*}[!ht] \centering \caption{ Average AUC with standard deviation for Attack-7 on all the 8 datasets. Best results are highlighted in bold. } \label{table:attack7} \footnotesize \setlength{\tabcolsep}{0.4em} \begin{tabular}{l|cccccccc} \toprule & \multicolumn{8}{c}{Shadow Dataset}\\ Target Dataset & AIDS & COX2 & DHFR & ENZYMES & PROTEINS\_full & Citeseer & Cora & Pubmed\\ \midrule AIDS& - & \textbf{0.925 $\pm$ 0.001} & 0.913 $\pm$ 0.005 & 0.784 $\pm$ 0.010 & 0.848 $\pm$ 0.010 & 0.538 $\pm$ 0.022 & 0.520 $\pm$ 0.011 & 0.849 $\pm$ 0.004\\ COX2 & 0.954 $\pm$ 0.007& - & \textbf{0.982 $\pm$ 0.001} & 0.874 $\pm$ 0.010 & 0.898 $\pm$ 0.030 & 0.947 $\pm$ 0.003 & 0.940 $\pm$ 0.007 & 0.875 $\pm$ 0.034\\ DHFR & 0.982 $\pm$ 0.002 & \textbf{0.992 $\pm$ 0.00}& - & 0.871 $\pm$ 0.017 & 0.966 $\pm$ 0.008 & 0.933 $\pm$ 0.008 & 0.947 $\pm$ 0.012 & 0.937 $\pm$ 0.003\\ ENZYMES & \textbf{0.698 $\pm$ 0.007} & 0.691 $\pm$ 0.008 & 0.671 $\pm$ 0.003& - & 0.610 $\pm$ 0.001 & 0.657 $\pm$ 0.009 & 0.662 $\pm$ 0.006 & 0.677 $\pm$ 0.001\\ PROTEINS\_full & 0.984 $\pm$ 0.002 & 0.962 $\pm$ 0.010 & 0.986 $\pm$ 0.002 & \textbf{0.993 $\pm$ 0.001}& - & 0.840 $\pm$ 0.013 & 0.823 $\pm$ 0.006 & 0.987 $\pm$ 0.005\\ Citeseer & 0.816 $\pm$ 0.048 & 0.791 $\pm$ 0.033 & 0.702 $\pm$ 0.025 & 0.880 $\pm$ 0.057 & 0.902 $\pm$ 0.026& - & \textbf{0.977 $\pm$ 0.000} & 0.964 $\pm$ 0.000\\ Cora & 0.746 $\pm$ 0.068 & 0.680 $\pm$ 0.038 & 0.574 $\pm$ 0.038 & 0.888 $\pm$ 0.014 & 0.695 $\pm$ 0.10 & \textbf{0.960 $\pm$ 0.001}& - & 0.935 $\pm$ 0.001\\ Pubmed & 0.807 $\pm$ 0.016 & 0.712 $\pm$ 0.025 & 0.710 $\pm$ 0.006 & 0.881 $\pm$ 0.009 & 0.739 $\pm$ 0.012 & \textbf{0.956 $\pm$ 0.001} & 0.949 $\pm$ 0.001& -\\ \bottomrule \end{tabular} \end{table*} \begin{table}[!ht] \centering \caption{ Average AUC with standard deviation for Attack-6 on all the 8 datasets. 
}
\label{table:attack6}
\footnotesize
\begin{tabular}{l|c|l|c}
\toprule
Dataset & AUC & Dataset & AUC \\
\midrule
AIDS & 0.979 $\pm$ 0.001 & PROTEINS\_full & 0.999 $\pm$ 0.000\\
COX2 & 0.987 $\pm$ 0.001 & Citeseer & 0.981 $\pm$ 0.000\\
DHFR & 0.992 $\pm$ 0.001 & Cora & 0.964 $\pm$ 0.000\\
ENZYMES & 0.891 $\pm$ 0.001 & Pubmed & 0.970 $\pm$ 0.000\\
\bottomrule
\end{tabular}
\end{table}

\mypara{Attack-6: $\mathcal{K}=(\mathcal{F}, \AdjMatrix^{*}, \times)$} The results of Attack-6 on all datasets are shown in \autoref{table:attack6}. We can see that for almost all datasets (except ENZYMES), the AUC scores are over 0.95, which means this attack achieves excellent performance. In particular, the AUC score is nearly 1 on PROTEINS\_full. Moreover, Attack-6 consistently outperforms Attack-2 ($\mathcal{K}=(\mathcal{F}, \times, \times)$). This further validates the effectiveness of $\AdjMatrix^{*}$ in helping the adversary to infer links. Another finding is that for chemical datasets, the information of the target dataset's partial graph brings a larger improvement than for the citation datasets. One possible explanation is that the nodes' attributes in chemical datasets contain less information (they are lower-dimensional); thus, the target dataset's partial graph contributes more to the final prediction performance.

\mypara{Attack-7: $\mathcal{K}=(\mathcal{F}, \AdjMatrix^{*}, \mathcal{D}^{\prime})$} The results of Attack-7 are summarized in \autoref{table:attack7}. Compared to Attack-5 ($\mathcal{K}=(\mathcal{F}, \times, \mathcal{D}^{\prime})$), the overall performance improves with the help of $\AdjMatrix^{*}$. We would expect the adversary's accuracy to be better than that of Attack-6 ($\mathcal{K}=(\mathcal{F}, \AdjMatrix^{*}, \times)$) since she has more background knowledge. However, we observe that the performance drops from Attack-6 to Attack-7. We suspect this is because, in order to learn information from both the target dataset and the shadow dataset while avoiding the dimension mismatch problem, Attack-7 uses fewer features than Attack-6 (similar to the reason that Attack-4 performs worse than Attack-3).

\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{fig/attack_result_all.pdf}
\caption{
Average AUC with standard deviation for all the attacks on all the 8 datasets. For each attack, we list its best result. The x-axis represents the dataset and the y-axis represents the AUC score.
}
\label{figure:attack_all}
\end{figure}

\mypara{Comparison with Link Prediction} We further compare all our attacks with a traditional link prediction method~\cite{LK07}. More specifically, we build an MLP with features summarized from the target dataset's partial graph, including Common neighbor, Jaccard index, and Preferential attachment~\cite{LK07}. As we can see from \autoref{figure:attack_all}, most of our attacks outperform the link prediction method. For instance, on the COX2 dataset, all our 8 attacks outperform the link prediction model; the best attack (Attack-6) achieves a performance gain of more than 20\%. This demonstrates that GNNs lead to more severe privacy risks than traditional link prediction.

\mypara{Effect of Different GNN Structures} In our experiments, we adopt the same architecture for both the target model and the shadow target model by default for transferring attack scenarios. We further evaluate the impact of the shadow target model using a different architecture. Note that for space reasons, we only report the results of Attack-1. Results for other attacks are similar.
We set the number of hidden layers to 3 for the shadow target model (the target model has 2 hidden layers). The results are summarized in \autoref{table:attack1_different_layer} in Appendix. We find that the average AUC scores of our attack are maintained at the same level, or are even higher for certain datasets, compared with the scenario where the shadow target model and the target model have the same architecture. For instance, on the Citeseer dataset, we obtain 0.924 AUC, while the original attack achieves 0.965. In other words, our attacks are still effective when the shadow target model and the target model have different architectures.

\mypara{Attacks on Other GNNs} We further investigate whether our attacks are applicable to other GNN models besides GCN. Concretely, we focus on GraphSAGE~\cite{HYL17} and GAT~\cite{VCCRLB18}. We implement GraphSAGE\footnote{\url{https://github.com/williamleif/GraphSAGE}} and GAT\footnote{\url{https://github.com/PetarV-/GAT}} based on publicly available code and only report the results of Attack-6. \autoref{table:attack6_GraphSAGE} shows that our attack has similar AUC scores on GraphSAGE and GAT compared to GCN. For instance, on the COX2 dataset, our attack against GraphSAGE and GAT achieves AUCs of 0.982 and 0.984, respectively (the corresponding AUC for GCN is 0.987). This further demonstrates that our attacks are generally applicable.

\begin{table}[!t]
\centering
\caption{
Average AUC with standard deviation for Attack-6 when using GraphSAGE or GAT as the target model on all the 8 datasets.
}
\label{table:attack6_GraphSAGE}
\footnotesize
\begin{tabular}{l|c|c}
\toprule
Dataset & AUC (GraphSAGE) & AUC (GAT)\\
\midrule
AIDS & 0.977 $\pm$ 0.002 & 0.968 $\pm$ 0.001 \\
COX2 & 0.982 $\pm$ 0.001 & 0.984 $\pm$ 0.001 \\
DHFR & 0.990 $\pm$ 0.001 & 0.995 $\pm$ 0.000 \\
ENZYMES & 0.747 $\pm$ 0.001 & 0.766 $\pm$ 0.004 \\
PROTEINS\_full & 0.999 $\pm$ 0.000 & 0.999 $\pm$ 0.000 \\
Citeseer & 0.938 $\pm$ 0.000 & 0.972 $\pm$ 0.000 \\
Cora & 0.883 $\pm$ 0.001 & 0.958 $\pm$ 0.000 \\
Pubmed & 0.923 $\pm$ 0.000 & 0.965 $\pm$ 0.000 \\
\bottomrule
\end{tabular}
\end{table}

\mypara{Possible Defense} We restrict the GNN model to output only the $k$ largest posteriors as a defense mechanism to mitigate our attacks. The intuition is that the smaller $k$ is, the less information the model reveals. Here, we fix $k=2$ and report the results for Attack-3. Note that we have similar observations for other attacks. Experimental results in \autoref{table:attack3_top2} show that this defense indeed reduces the performance of our attack. However, the performance drop is not very big, i.e., our attack still achieves relatively high AUC scores. For instance, on the Citeseer dataset, this defense reduces Attack-3's performance by less than 2\%. On the AIDS dataset, the attack's performance drop is larger, but an AUC of 0.855 still indicates that our attack is effective. We also note that the defense will impact the utility of the model. In other words, it is a trade-off between utility and privacy. In conclusion, the top-$k$ defense is not effective enough to defend against our attacks. We can also leverage differential privacy (DP) and adversarial examples to mitigate our attacks. In detail, we can adopt edge-DP developed for social networks~\cite{HLMJ09,ZCPSX15} to defend against our attacks. Borrowing the idea from previous work~\cite{JG18,JSBZG19}, we can also add carefully crafted noise to the predictions of the GNN to fool the adversary. We plan to explore both of them in the future.
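For reference, the top-$k$ defense evaluated above amounts to post-processing each posterior as in the following sketch. This is only an illustration; whether the retained entries are renormalized is an implementation choice that we assume here.
\begin{verbatim}
import numpy as np

def top_k_posteriors(posterior, k=2, renormalize=True):
    """Zero out all but the k largest entries of a posterior vector."""
    p = np.asarray(posterior, dtype=float)
    truncated = np.zeros_like(p)
    top_idx = np.argsort(p)[-k:]          # indices of the k largest entries
    truncated[top_idx] = p[top_idx]
    if renormalize and truncated.sum() > 0:
        truncated = truncated / truncated.sum()
    return truncated
\end{verbatim}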
\begin{table}[!t] \centering \caption{ Average AUC with standard deviation for Attack-3 when only reporting top-2 posteriors on all the 8 datasets. } \label{table:attack3_top2} \footnotesize \begin{tabular}{l|c|l|c} \toprule Dataset & AUC & Dataset & AUC \\ \midrule AIDS & 0.855 $\pm$ 0.004 & PROTEINS\_full & 0.954 $\pm$ 0.001\\ COX2 & 0.839 $\pm$ 0.005 & Citeseer & 0.958 $\pm$ 0.000\\ DHFR & 0.851 $\pm$ 0.003 & Cora & 0.945 $\pm$ 0.001\\ ENZYMES & 0.876 $\pm$ 0.002 & Pubmed & 0.946 $\pm$ 0.001\\ \bottomrule \end{tabular} \end{table} \mypara{Summary of Results} In summary, we have made the following observations from our experimental results. \begin{itemize} \item Our attacks can effectively steal the links from GNNs. For instance, our Attack-6 can achieve average AUC scores over 0.95 on 7 out of 8 datasets, which demonstrate that the GNNs are vulnerable to our attacks. \item Generally speaking, the performances of the attack are better if there is more background knowledge as shown in \autoref{figure:attack_all}. However, we find the impact of different knowledge is different. In particular, the target dataset's partial graph is the most informative. For instance, Attack-3 ($\mathcal{K}=(\times, \AdjMatrix^{*}, \times)$) significantly outperforms Attack-1 ($\mathcal{K}=(\times, \times, \mathcal{D}^{\prime})$) and Attack-2 ($\mathcal{K}=(\mathcal{F}, \times, \times)$). \item Our transferring attack can achieve good performance. Furthermore, we find that our transferring attack achieves better performance when the shadow dataset and the target dataset are from the same domain as validated by experimental results for Attack-1 and Attack-5. \end{itemize} \section{Related Work} \label{section:RelatedWork} Various research has shown that machine learning models are vulnerable to security and privacy attacks~\cite{TZJRR16,PMGJCS17,SHNSSDG18,LF20,QMR19,LHZG19,JCBKP20,CWSJ20,SWBMZ20,LMXZ20,CZWBHZ20}. In this section, we mainly survey four of these attacks that are most relevant to ours. \mypara{Membership Inference} In membership inference attacks~\cite{SSSS17,SZHBFB19,NSH18,YGFJ18,HMDC19,NSH19,CXXLBKZ18,SS202,CLEKS19,LZ20}, the adversary aims to infer whether a data sample is in the target model's training dataset or not. Shokri et al.~\cite{SSSS17} propose the first membership inference attacks against machine learning models and demonstrate its relationship with model overfitting. Salem et al.~\cite{SZHBFB19} further show membership inference attacks are broadly applicable at low cost via relaxing assumptions on the adversary. To mitigate attacks, many empirical defenses~\cite{SSSS17,SZHBFB19,NSH18,JSBZG19} have been proposed. For instance, Nasr et al.~\cite{NSH18} propose to mitigate attacks via formulating the defense as a min-max optimization problem which tries to decrease the accuracy loss and increase the membership privacy. Salem et al.~\cite{SZHBFB19} explore dropout and model stacking to mitigate membership inference attacks. More recently, Jia et al.~\cite{JSBZG19} leverage adversarial examples to fool the adversary and show their defense has a formal utility guarantee. Other attacks in this space study membership inference in natural language processing models~\cite{SS19}, generative models~\cite{HMDC19,CYZF20}, federated learning~\cite{MSCS19}, and biomedical data~\cite{HZHBTWB19}. \mypara{Model Inversion} In model inversion attacks~\cite{FLJLPR14,FJR15,PMSW18,MSCS19,JCBKP20}, the adversary aims to learn sensitive attributes of training data from target models. 
For example, Fredrikson et al.~\cite{FLJLPR14} propose the model inversion attack, in which the adversary can infer a patient's genetic markers given the model and some demographic information about the patient. Fredrikson et al.~\cite{FJR15} further explore model inversion attacks on decision trees and neural networks via exploiting the confidence score values revealed along with predictions. Melis et al.~\cite{MSCS19} reveal that in collaborative learning scenarios, when the target model is updated with new training data, the adversary can infer sensitive attributes about the new training data.

\mypara{Model Extraction} In model extraction attacks~\cite{TZJRR16,WG18,JCBKP20,CCGJY20}, the adversary aims to steal the parameters of a certain target model or mimic its behaviors. Tram\'{e}r et al.~\cite{TZJRR16} show that an adversary can exactly recover the target model's parameters via solving the equations for certain models, e.g., linear models. Wang and Gong~\cite{WG18} propose attacks to steal the hyperparameters and show that their attacks are broadly applicable to a variety of machine learning algorithms, e.g., ridge regression and SVM. Orekondy et al.~\cite{OSF19} propose a functionality stealing attack aiming at mimicking the behaviors of the target model. Concretely, they query the target model and use the query-prediction pairs to train a ``knockoff'' model. Jagielski et al.~\cite{JCBKP20} improve the query efficiency of learning-based model extraction attacks and develop practical functionally-equivalent extraction, producing a model whose predictions are identical to those of the target model on all inputs. Some defenses~\cite{JSMA19,OSF20} have been proposed to defend against model extraction attacks. For instance, Juuti et al.~\cite{JSMA19} propose to detect malicious queries via analyzing the distribution of consecutive API queries, raising an alarm when the distribution differs from that of benign queries. Orekondy et al.~\cite{OSF20} propose a utility-constrained defense against neural network model stealing attacks via adding perturbations to the output of the target model.

\mypara{Adversarial Attacks on Graph Neural Networks} Some recent studies~\cite{ZAG18,BG192,DLTHWZS18,ZG19,WWTDLZ19,WG19,ZJWG20} show that GNNs are vulnerable to adversarial attacks. In particular, the adversary can fool GNNs via manipulating the graph structure and/or node features. For instance, Z{\"{u}}gner et al.~\cite{ZAG18} introduce adversarial attacks on attributed graphs and focus on both the training and testing phases. In particular, their attacks target both nodes' features and the graph structure, and they show that the node classification accuracy drops with a few perturbations. Bojchevski et al.~\cite{BG192} analyze the vulnerability of node embeddings to graph structure perturbation via solving a bi-level optimization problem based on eigenvalue perturbation theory. Z{\"{u}}gner and G{\"{u}}nnemann~\cite{ZG19} investigate training time attacks on GNNs for node classification via treating the graph as a hyperparameter to optimize. Wang and Gong~\cite{WG19} propose an attack to evade collective classification based classifiers via perturbing the graph structure, which can also transfer to GNNs. Dai et al.~\cite{DLTHWZS18} propose to fool GNNs via manipulating the combinatorial structure of data and try to learn a generalizable attack policy via reinforcement learning. Zhang et al.~\cite{ZJWG20} propose a subgraph-based backdoor attack against GNN-based graph classification.
In particular, a GNN classifier outputs a target label specified by an adversary when a predefined subgraph is injected into the testing graph. These studies are different from our work since we aim to steal links from GNNs. To mitigate attacks, many defenses~\cite{BG19,ZZCZ19,WWTDLZ19,ZG192} have been proposed. For instance, Zhu et al.~\cite{ZZCZ19} propose to enhance the robustness of GCNs via using Gaussian distributions in graph convolutional layers to mitigate the effects of adversarial attacks, and leverage an attention mechanism to impede the propagation of attacks. Z{\"{u}}gner and G{\"{u}}nnemann~\cite{ZG192} propose a learning principle that improves the robustness of GNNs and show provable robustness guarantees against perturbations of nodes' attributes. Bojchevski et al.~\cite{BG192} propose to certify robustness against graph structure perturbation for a general class of models, e.g., GNNs, via exploiting connections to PageRank and Markov decision processes. These defenses are designed to improve the robustness of GNNs rather than to prevent their privacy leakage. Note that there are also some attacks and defenses on graphs that focus on non-GNN models~\cite{CNKMPAV17,JWCG20}. For instance, Chen et al.~\cite{CNKMPAV17} propose attacks that mislead the behavior of graph clustering algorithms and show some practical defenses. Jia et al.~\cite{JWCG20} propose a certified defense based on randomized smoothing against adversarial structural attacks on community detection.

\section{Conclusion and Future Work}
\label{section:Conclusion}

In this paper, we propose the first link stealing attacks against GNNs. Specifically, we show that, given black-box access to a target GNN model, an adversary can accurately infer whether there exists a link between any pair of nodes in a graph that is used to train the GNN model. We propose a threat model to systematically characterize an adversary's background knowledge along three dimensions. By jointly considering the three dimensions, we define 8 link stealing attacks and propose novel methods to realize them. Extensive evaluation over 8 real-world datasets shows that our attacks can accurately steal links. Interesting future work includes generalizing our attacks to GNNs for graph classification and defending against our attacks.

\section*{Acknowledgments}
We thank the anonymous reviewers and our shepherd Minhui Xue for constructive feedback. This work is partially funded by the Helmholtz Association within the project ``Trustworthy Federated Data Analytics'' (TFDA) (funding number ZT-I-OO1 4) and National Science Foundation grant No.\ 1937787.

\bibliographystyle{plain}
\section{Introduction}
\label{sec:int}

\subsection{Background and motivation}
Given a sequence $(a_n)_{n \ge 1}$ in the ring of rational integers $\Z$, a prime divisor of a term $a_n$ is called \textit{primitive} if it divides no earlier term. The sequence is a \textit{divisibility sequence} if $a_m \mid a_n$ whenever $m \mid n$, and it is a \textit{strong divisibility sequence} if $\gcd(a_m, a_n) = a_d$ with $d = \gcd(m,n)$ for any positive integers $m, n$. These notions apply to any sequence in a unique factorization domain.

It is a classical and still very active topic in number theory to study primitive prime divisors of an integer sequence. The classical Zsigmondy theorem \cite{Zsig} of 1892, extending earlier work of Bang \cite{Bang} in the case $b=1$, says that every term beyond the sixth in the sequence $(a^n - b^n)_{n \ge 1}$ has a primitive prime divisor, where $a,b$ are positive coprime integers. This theorem was independently rediscovered by Birkhoff and Vandiver \cite{BV}. Results of this form are often useful in group theory and in the theory of recurrence sequences (see \cite[Section 6.3]{EPSW} for a discussion and references). In 1913, Carmichael \cite{Car} showed that each term of the Lucas sequence $((a^n - b^n)/(a-b))_{n \ge 1}$ beyond the twelfth has a primitive prime divisor, where $a, b$ are real algebraic integers such that $a/b$ is not a root of unity, and $a+b$ and $ab$ are coprime integers in $\Z$. In 1955, Ward \cite{Ward} obtained a similar result for the Lehmer sequence $(s_n)_{n \ge 1}$ with $s_n = (a^n - b^n)/(a-b)$ for odd $n$ and $s_n = (a^n - b^n)/(a^2-b^2)$ for even $n$, where $a,b$ are real, and $(a+b)^2$ and $ab$ are coprime integers in $\Z$. All these results, including Zsigmondy's theorem, were extended to any number field (that is, $a, b$ do not need to be real) by Schinzel \cite{Sch} in an effective but not explicit manner (see \cite{PS} for an earlier work); the result was first made explicit by Stewart \cite{Stewart}. Furthermore, in 2001, Bilu, Hanrot and Voutier \cite{BHV} listed all the Lucas and Lehmer numbers without a primitive prime divisor.

So far, the above classical results have been extended in various settings: for example, to elliptic divisibility sequences \cite{EMW,Sil}, to dynamical sequences \cite{IS,Rice}, to function fields defined over number fields \cite{IMSSS}, and to Drinfeld modules \cite{Bam,Quan,ZJ}. Recently, Flatters and Ward \cite{FW} found an analogue of Zsigmondy's theorem for a polynomial sequence $(f^n - g^n)_{n \ge 1}$, where $f,g$ are two coprime polynomials in a polynomial ring $K[X]$ ($K$ is a field).

In this paper, we want to establish analogues of Zsigmondy's theorem and the primitive divisor results for the Lucas and Lehmer sequences in polynomial rings of several variables. The approach is essentially the same as in \cite{FW}. It in fact follows the classical one with some modifications needed to avoid terms in the sequence where the Frobenius automorphism precludes primitive divisors. However, the analogues for polynomial Lucas and Lehmer sequences indeed require some additional considerations.

Throughout the paper, let $K$ be a field, and $R = K[X_1, \ldots, X_r]$ the ring of polynomials in the variables $X_1, \ldots, X_r$. Let $p$ be the characteristic of $K$. Note that $R$ is a unique factorization domain. In addition, a \textit{prime divisor} of a polynomial $h$ in $R$ means a monic irreducible polynomial in $R$ dividing $h$.
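As a simple illustration of the role of the characteristic (this example is only meant to motivate the exclusion of the terms with $p \mid n$ in the results below), take $r = 1$, $f = X_1$ and $g = 1$. If $p > 0$, then
$$
f^{np} - g^{np} = X_1^{np} - 1 = (X_1^{n} - 1)^{p} = (f^{n} - g^{n})^{p},
$$
so the term of index $np$ has exactly the same prime divisors as the term of index $n$ and therefore cannot have a primitive prime divisor. This is the obstruction caused by the Frobenius automorphism mentioned above.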
We state the main results in the rest of Section~\ref{sec:int}, and then prove them later on.

\subsection{Main results}
Let $\lambda, \eta$ be non-zero algebraic elements over $R$ such that $\lambda / \eta$ is not a root of unity. Assume that $(\lambda + \eta)^2$ and $\lambda\eta$ are non-zero coprime polynomials in $R$ which are not both in $K$. Define the \textit{Lehmer sequence} of $R$:
\begin{equation*}
U_n = \left\{\begin{array}{ll}
\frac{\lambda^n - \eta^n}{\lambda - \eta} & \textrm{if $n$ is odd,}\\
\\
\frac{\lambda^n - \eta^n}{\lambda^2 - \eta^2} & \textrm{if $n$ is even.}\end{array}\right.
\end{equation*}
We remark that the Lehmer sequence $(U_n)_{n \ge 1}$ satisfies the following recurrence relation over $R$:
$$
U_{n+4} = (\lambda^2 + \eta^2) U_{n+2} - \lambda^2\eta^2 U_n, \quad n = 1, 2, \ldots.
$$
The following two theorems concern the strong divisibility property and the primitive prime divisors of the sequence $(U_n)_{n \ge 1}$, respectively.

\begin{theorem} \label{thm:strong3}
The sequence $(U_n)_{n \ge 1}$ is a strong divisibility sequence.
\end{theorem}

\begin{theorem} \label{thm:primitive3}
Suppose that the characteristic $p > 0$, and let $U^\prime$ be the sequence obtained from $(U_n)_{n \ge 1}$ by deleting the terms $U_n$ with $p \mid n$. Then each term of $U^\prime$ beyond the second has a primitive prime divisor. If $p = 0$, then each term of $(U_n)_{n \ge 1}$ beyond the second has a primitive prime divisor.
\end{theorem}

Applying Theorems~\ref{thm:strong3} and \ref{thm:primitive3}, we can obtain the strong divisibility property and the primitive divisor result for polynomial Lucas sequences. Let $\alpha, \beta$ be non-zero algebraic elements over $R$ such that the quotient $\alpha/\beta$ is not a root of unity. Assume that $\alpha + \beta$ and $\alpha \beta$ are coprime polynomials in $R$ which are not both in $K$. Define the \textit{Lucas sequence} of $R$:
$$
L_n = \frac{\alpha^n - \beta^n}{\alpha - \beta}, \quad n = 1, 2, \ldots.
$$
We remark that the Lucas sequence $(L_n)_{n \ge 1}$ satisfies the following recurrence relation over $R$:
$$
L_{n+2} = (\alpha + \beta) L_{n+1} - \alpha\beta L_n, \quad n = 1, 2, \ldots.
$$

\begin{theorem} \label{thm:strong2}
The sequence $(L_n)_{n \ge 1}$ is a strong divisibility sequence.
\end{theorem}

\begin{theorem} \label{thm:primitive2}
Suppose that the characteristic $p > 0$, and let $L^\prime$ be the sequence obtained from $(L_n)_{n \ge 1}$ by deleting the terms $L_n$ with $p \mid n$. Then each term of $L^\prime$ beyond the second has a primitive prime divisor. If $p = 0$, then each term of $(L_n)_{n \ge 1}$ beyond the second has a primitive prime divisor.
\end{theorem}

Theorem~\ref{thm:primitive2} in fact implies an analogue of Zsigmondy's theorem in $R$. Let $f, g$ be non-zero coprime polynomials in $R$ such that $f$ and $g$ are not both in $K$ and the quotient $f/g$ is not a root of unity. Define the sequence of $R$:
$$
F_n = f^n - g^n, \quad n = 1, 2, \ldots.
$$

\begin{theorem} \label{thm:strong1}
The sequence $(F_n)_{n \ge 1}$ is a strong divisibility sequence.
\end{theorem}

\begin{theorem} \label{thm:primitive1}
Suppose that the characteristic $p > 0$, and let $F^\prime$ be the sequence obtained from $(F_n)_{n \ge 1}$ by deleting the terms $F_n$ with $p \mid n$. Then each term of $F^\prime$ beyond the second has a primitive prime divisor. If $p = 0$, then each term of $(F_n)_{n \ge 1}$ beyond the second has a primitive prime divisor.
\end{theorem}

For $f, g$ as above, define the sequence:
$$
S_n = f^n + g^n, \quad n = 1, 2, \ldots.
$$ If $p \ne 2$, then $(S_n)_{n \ge 1} \ne (F_n)_{n \ge 1}$. Note that $F_{2n} = F_n S_n$, which implies that each primitive prime divisor of $F_{2n}$ comes from $S_n$. Then, the following corollary is a direct consequence of Theorem~\ref{thm:primitive1}. \begin{corollary} \label{cor:primitive1} Suppose $p > 0$, and let $S^\prime$ be the sequence obtained from $(S_n)_{n \ge 1}$ by deleting the terms $S_n$ with $p \mid n$, then each term of $S^\prime$ beyond the second has a primitive prime divisor. If $p = 0$, then each term of $(S_n)_{n \ge 1}$ beyond the second has a primitive prime divisor. \end{corollary} \section{Preliminaries} Although the sequences we consider are defined over $R$, we prefer to establish some results in a more general setting. Throughout this section, let $D$ be a unique factorization domain. Recall that the resultant of two homogeneous polynomials in variables $X$ and $Y$ is defined to the determinant of their Sylvester matrix. Some basic properties of this resultant are listed in the following lemma; see \cite[Proposition 2.3]{Silverman}. \begin{lemma} \label{lem:Res} For two non-constant homogeneous polynomials defined over a field $$ A(X,Y) = a_0 X^m + a_1 X^{m-1}Y + \ldots + a_m Y^m = a_0 \prod_{i=1}^{m} (X - \alpha_i Y) $$ and $$ B(X,Y) = b_0 X^n + b_1 X^{n-1}Y + \ldots + b_n Y^n = b_0 \prod_{j=1}^{n} (X - \beta_j Y), $$ their resultant is $$ \Res(A,B) = a_0^n b_0^m \prod_{i=1}^{m} \prod_{j=1}^{n} (\alpha_i - \beta_j) \in \Z[a_0, \ldots, a_m, b_0, \ldots, b_n]. $$ Moreover, there exist $G_1, H_1, G_2, H_2 \in \Z[a_0, \ldots, a_m, b_0, \ldots, b_n][X,Y]$ homogeneous in $X$ and $Y$ such that \begin{align*} & G_1A + H_1 B = \Res(A,B)X^{m+n-1}, \\ & G_2A + H_2 B = \Res(A,B)Y^{m+n-1}. \end{align*} \end{lemma} For any integer $n \ge 1$, the $n$-th homogeneous cyclotomic polynomial is defined by $$ \Phi_n (X,Y) = \prod_{k=1, \, \gcd(k,n)=1}^{n} (X - \zeta_n^k Y) \in \Z[X,Y], $$ where $\zeta_n$ is a primitive $n$-th root of unity, and we also define the polynomial $$ P_n(X,Y) = \frac{X^n - Y^n}{X-Y} = \sum_{k=0}^{n-1} X^{n-1-k}Y^k = \prod_{k=1}^{n-1} (X - \zeta_n^k Y). $$ Then, it is easy to see that $$ X^n - Y^n = \prod_{d \mid n} \Phi_d (X,Y), \qquad P_n (X,Y) = \prod_{d \mid n, \, d \ge 2} \Phi_d (X,Y). $$ The following result is \cite[Lemma 2.4]{FW} about the resultant of $P_m(X,Y)$ and $P_n(X,Y)$. \begin{lemma} \label{lem:Res2} For any positive coprime integers $m$ and $n$, we have $\Res(P_m, P_n) = \pm 1$. \end{lemma} We now want to establish some results about coprime elements in $D$. First, we prove a general result. \begin{lemma} \label{lem:coprime} Let $a, b$ be algebraic elements over $D$. Assume that $(a+b)^2$ and $ab$ are coprime elements in $D$. Let $A(X,Y), B(X,Y) \in \Z[X,Y]$ be non-constant homogeneous polynomials with resultant $\Res(A,B) = \pm 1$. Assume that both $A(a,b)$ and $B(a,b)$ are in $D$. Then, $A(a,b)$ and $B(a,b)$ are coprime in $D$. \end{lemma} \begin{proof} Let $m = \deg A$ and $n = \deg B$. By assumption, $a^2 + b^2$ and $ab$ are also coprime in $D$. Note that for any integer $k \ge 1$, $a^{2k} + b^{2k}$ is in $D$. Using Lemma~\ref{lem:Res} and noticing $\Res(A,B) = \pm 1$, we obtain that there exist $u_1, w_1, u_2, w_2 \in \Z[a, b]$ such that $$ u_1 A(a,b) + w_1B(a,b) = a^{2(m+n-1)} + b^{2(m+n-1)} \in D $$ and $$ u_2 A(a,b) + w_2 B(a,b) = a^{2(m+n-1)}b^{2(m+n-1)} \in D. $$ Note that $u_1, w_1, u_2, w_2$ might be not in $D$. By contradiction, suppose that $A(a,b)$ and $B(a,b)$ are not coprime in $D$. 
Then, there is a prime element $\pi \in D$ such that $\pi \mid A(a,b)$ and $\pi \mid B(a,b)$ in $D$. By the above discussion, we obtain that both $$ \frac{u_1 A(a,b) + w_1 B(a,b)}{\pi}, \quad \frac{u_2 A(a,b) + w_2 B(a,b)}{\pi} $$ are in the fraction field of $D$ and integral over $D$ (because $a,b$ are integral over $D$ satisfying the equation $X^4 - (a^2 + b^2)X^2 + a^2b^2 = 0$). Note that $D$ is an integrally closed domain, so these two quotients are both in $D$. Hence, we have $\pi \mid a^{2(m+n-1)} + b^{2(m+n-1)}$ and $\pi \mid a^{2(m+n-1)}b^{2(m+n-1)}$ in $D$. So, $\pi \mid ab$ in $D$. Let $k = m+n-1$. We have known that $\pi \mid a^{2k} + b^{2k}$ and $\pi \mid ab$ in $D$. Consider \begin{align*} (a^2 + b^2)^k & = a^{2k} + b^{2k} + \sum_{i=1}^{k-1}\binom{k}{i}(a^2)^i (b^2)^{k-i} \\ & = a^{2k} + b^{2k} + a^2b^2 \sum_{i=1}^{k-1}\binom{k}{i}(a^2)^{i-1} (b^2)^{k-1-i}. \end{align*} Note that $\sum_{i=1}^{k-1}\binom{k}{i}(a^2)^{i-1} (b^2)^{k-1-i}$ is also in $D$, because it is symmetric in $a^2$ and $b^2$. Hence, we have $\pi \mid (a^2 + b^2)^{k}$, and then $\pi \mid a^2 + b^2$, so $\pi \mid (a + b)^2$ in $D$ (because $\pi \mid ab$). This leads to a contradiction with the assumption that $(a + b)^2$ and $ab$ are coprime in $D$. Therefore, $A(a,b)$ and $B(a,b)$ are coprime in $D$. \end{proof} Based on Lemma~\ref{lem:coprime}, we can derive several results about coprime elements in $R$ in the sequel. \begin{lemma} \label{lem:PmPn2} Let $a, b$ be two algebraic elements over $D$. Assume that $a+b$ and $ab$ are coprime elements in $D$. Then, for any positive coprime integers $m, n$, $P_m(a,b)$ and $P_n(a,b)$ are coprime in $D$. \end{lemma} \begin{proof} Without loss of generality, we can assume $m \ge 2, n \ge 2$. Clearly, both $P_m(a,b)$ and $P_n(a,b)$ are in $D$. Because both $P_m(X,Y)$ and $P_n(X,Y)$ are symmetric in $X$ and $Y$. By assumption, $(a+b)^2$ and $ab$ are coprime elements in $D$. Then, the desired result follows from Lemmas~\ref{lem:Res2} and \ref{lem:coprime}. \end{proof} \begin{lemma} \label{lem:PmPn-odd} Let $a, b$ be defined as in Lemma~\ref{lem:coprime}. Let $m,n$ be two positive coprime integers such that both $m$ and $n$ are odd. Then, $P_m(a,b)$ and $P_n(a,b)$ are coprime in $D$. \end{lemma} \begin{proof} Without loss of generality, we can assume $m \ge 3, n \ge 3$. Since both $(a+b)^2$ and $ab$ are in $D$, we have $a^2 + b^2 \in D$. Moreover, we have that for any integer $k \ge 1$, $a^{2k} + b^{2k} \in D$. Note that $P_m(X,Y)$ is homogeneous of even degree $m-1$ and symmetric in $X$ and $Y$. So, if $X^iY^j$ is a term in $P_m(X,Y)$, then $X^jY^i$ is also a term in $P_m(X,Y)$, and then assuming $i \le j$, we have $$ a^i b^j + a^j b^i = (ab)^i (a^{j-i} + b^{j-i}) \in D, $$ where we use the fact that $j - i$ is even (because $i + j = m-1$ is even). Hence, we have that $P_m(a,b)$ is in $D$. Similarly, $P_n(a,b)$ is also in $D$. Now, the desired result follows directly from Lemmas~\ref{lem:Res2} and \ref{lem:coprime}. \end{proof} \begin{lemma} \label{lem:PmPn-mix} Let $a, b$ be defined as in Lemma~\ref{lem:coprime}. Let $m,n$ be two positive coprime integers such that $m$ is odd and $n$ is even. Then, $P_m(a,b)$ and $P_n(a,b)/(a + b)$ are coprime in $D$. \end{lemma} \begin{proof} Without loss of generality, we can assume $m \ge 3, n \ge 4$ (because $P_2(a,b)/(a + b)=1$). Since $m$ is odd, as in the proof of Lemma~\ref{lem:PmPn-odd} $P_m(a,b)$ is in $D$. 
For any odd integer $k \ge 1$, note that $$ \frac{a^k + b^k}{a+b} = a^{k-1} - a^{k-2}b + \ldots - ab^{k-2} + b^{k-1} $$ is homogeneous of even degree $k-1$ and is symmetric in $a$ and $b$, so it is in $D$. Hence, for even $n$, since \begin{align*} \frac{P_n(a,b)}{a + b} = \frac{a^{n-1} + b^{n-1}}{a+b} + ab \cdot \frac{a^{n-3} + b^{n-3}}{a+b} + \ldots + a^{\frac{n-2}{2}}b^{\frac{n-2}{2}} \cdot \frac{a + b}{a+b}, \end{align*} we have that $P_n(a,b)/(a + b)$ is in $D$. Denote $T_n(X,Y) = P_n(X,Y)/(X+Y)$, which can be viewed as a polynomial over $\Z$. Using Lemma~\ref{lem:Res} and applying the same arguments as in the proof of \cite[Lemma 2.4]{FW}, we obtain that the resultant of $P_m(X,Y)$ and $T_n(X,Y)$ is equal to $\pm 1$. Hence, by Lemma~\ref{lem:coprime}, $P_m(a,b)$ and $T_n(a,b)$ are coprime in $D$. \end{proof} \begin{lemma} \label{lem:Pmn} Let $a, b$ be defined as in Lemma~\ref{lem:coprime}. Let $m, n$ be two positive integers such that both $m$ and $n$ are odd. Then, $P_m(a^n,b^n)$ and $(a^n + b^n)/(a + b)$ are coprime in $D$. \end{lemma} \begin{proof} Without loss of generality, we can assume $m \ge 3, n \ge 3$. As before, since $m$ and $n$ are odd, both $P_m(a^n,b^n)$ and $(a^n + b^n)/(a + b)$ are indeed in $D$. Define $$ V_m(X,Y) =P_m(X^n, Y^n), \qquad W_n(X,Y) = \frac{X^n + Y^n}{X+Y}. $$ Both $V_m$ and $W_n$ can be viewed as polynomials over $\Z$. So, we first compute their resultant over $\Z$. Note that $$ V_m(X,Y) = \prod_{i=1}^{m-1}(X^n - \zeta_m^i Y^n) = \prod_{i=1}^{m-1}\prod_{j=1}^{n} (X - \zeta_n^j \zeta_{mn}^i Y), $$ and $$ W_n(X,Y) = \frac{X^n - (-Y)^n}{X+Y} = \prod_{k=1}^{n-1}(X + \zeta_n^k Y). $$ By Lemma~\ref{lem:Res}, the resultant $$ \Res(V_m, W_n) = \prod_{i=1}^{m-1}\prod_{j=1}^{n} \prod_{k=1}^{n-1} ( \zeta_{mn}^i \zeta_n^j + \zeta_n^k) \in \Z. $$ For each factor $\zeta_{mn}^i \zeta_n^j + \zeta_n^k$ in the resultant, we have $\zeta_{mn}^i \zeta_n^j \ne \zeta_n^k$ (because otherwise we would have $\zeta_{mn}^i \zeta_{mn}^{mj} = \zeta_{mn}^{mk}$, and then $m \mid i$, but $1 \le i \le m-1$), and so $\zeta_{mn}^i \zeta_n^j + \zeta_n^k = \zeta_{m^\prime}(1 -\zeta_2\zeta_{n^\prime})$ for some odd integer $m^\prime$ and odd integer $n^\prime \ge 3$ (noticing both $m,n$ are odd), and thus it is a unit by \cite[Proposition 2.8]{Was}. Hence, $\Res(V_m, W_n)$ is a unit in $\Z$, that is, $\Res(V_m, W_n) = \pm 1$. Therefore, as polynomials over $D$, we also have $\Res(V_m, W_n) = \pm 1$. Then, by Lemma~\ref{lem:coprime}, $V_m(a,b)$ and $W_n(a,b)$ are coprime in $D$. \end{proof} \begin{lemma} \label{lem:abn} Let $a, b$ be defined as in Lemma~\ref{lem:coprime}. Then, for any odd integer $n \ge 1$, $(a^n - b^n)/(a-b) $ and $(a+b)^2$ are coprime in $D$. \end{lemma} \begin{proof} Without loss of generality, we fix an odd integer $n \ge 3$. As before, both $(a^n - b^n)/(a-b) $ and $(a+b)^2$ are in $D$. As in the proof of Lemma~\ref{lem:Pmn}, we deduce that the resultant of the homogeneous polynomials $(X^n - Y^n)/(X-Y) $ and $(X+Y)^2$ is equal to $\pm 1$. Hence, using Lemma~\ref{lem:coprime}, we obtain that $(a^n - b^n)/(a-b) $ and $(a+b)^2$ are coprime in $D$. \end{proof} \section{Proofs of Theorems~\ref{thm:strong3} and \ref{thm:primitive3}} We need to make one more preparation. Recall that $p$ is the characteristic of the field $K$. As usual, denote by $v_\pi(h)$ the maximal power to which an irreducible polynomial $\pi$ divides $h \in R$. Let $M$ be the fraction field of $R$. By assumption, $M(\lambda)$ is a field extension over $M$ having degree at most four. 
Note that $\eta \in M(\lambda)$. For any irreducible polynomial $\pi \in R$, as usual $v_\pi$ induces a valuation of $M$. It is well-known that the valuation $v_\pi$ in $M$ can be extended to the field $M(\lambda)$; see, for instance, \cite[Theorem 3.1.2]{EP}. Without confusion, we still denote by $v_\pi$ the corresponding extension of valuation in $M(\lambda)$. \begin{lemma} \label{lem:vU} Let $\pi \in R$ be an irreducible polynomial dividing $U_n$ for some $n \ge 3$. Then, for any $m \ge 1$ with $p \nmid m$ $($including the case $p=0)$, we have $v_\pi (U_{mn}) = v_\pi (U_n)$. \end{lemma} \begin{proof} First, since $\lambda, \eta$ are both integral over the ring $R$, we have that $v_\pi(\lambda) \ge 0$ and $v_\pi(\eta) \ge 0$. Suppose that $v_\pi(\eta) > 0$. Note that we have either $\lambda^n = \eta^n + (\lambda - \eta)U_n$, or $\lambda^n = \eta^n + (\lambda^2 - \eta^2)U_n$. Then, since $v_\pi(\eta) > 0$ and $v_\pi(U_n) > 0$, we have $v_\pi(\lambda^n) > 0$. So, $v_\pi (\lambda) > 0$. Thus, $$ v_\pi(\lambda + \eta) >0, \qquad v_\pi(\lambda\eta) > 0, $$ which contradicts the assumption that $(\lambda + \eta)^2$ and $\lambda\eta$ are coprime in $R$. Hence, we must have $v_\pi(\eta) = 0$. Similarly, we must have $v_\pi(\lambda) = 0$. Assume that $n$ is odd. Then, $U_n = (\lambda^n - \eta^n)/(\lambda - \eta)$. So, we have $$ \lambda^n = \eta^n + (\lambda - \eta)U_n. $$ Then, we obtain $$ \lambda^{mn} = \big( \eta^n + (\lambda - \eta)U_n \big)^m = \eta^{mn} + \sum_{i=1}^{m} \binom{m}{i} (\lambda - \eta)^i U_n^i \eta^{n(m-i)}. $$ So $$ \frac{\lambda^{mn} - \eta^{mn}}{\lambda - \eta} = m\eta^{n(m-1)} U_n + \sum_{i=2}^{m} \binom{m}{i} (\lambda - \eta)^{i-1} \eta^{n(m-i)} U_n^i. $$ Hence, we obtain that for odd $m$ $$ U_{mn} = m\eta^{n(m-1)}U_n + \sum_{i=2}^{m}\binom{m}{i} (\lambda - \eta)^{i-1}\eta^{n(m-i)} U_n^{i}, $$ and for even $m$ $$ (\lambda + \eta)U_{mn} = m\eta^{n(m-1)}U_n + \sum_{i=2}^{m}\binom{m}{i} (\lambda - \eta)^{i-1}\eta^{n(m-i)} U_n^{i}. $$ We also note that since $n$ is odd and $v_\pi (U_n) > 0$, by Lemma~\ref{lem:abn} we have $v_\pi (\lambda + \eta) = 0$. Then, the desired result follows. Finally, assume that $n$ is even. Then, as the above, for any integer $m \ge 1$ we obtain $$ U_{mn} = m\eta^{n(m-1)}U_n + \sum_{i=2}^{m}\binom{m}{i} (\lambda^2 - \eta^2)^{i-1}\eta^{n(m-i)} U_n^{i}. $$ The desired result now follows. \end{proof} Now, we are ready to prove the theorems. \begin{proof}[Proof of Theorem~\ref{thm:strong3}] Let $d = \gcd(m,n)$. First, we assume that both $m$ and $n$ are even. Then, $d$ is also even. By definition, we obtain $$ U_m = U_d P_{m/d}(\lambda^d, \eta^d), \quad U_n = U_d P_{n/d}(\lambda^d, \eta^d). $$ By assumption, it is easy to see that $\lambda^d + \eta^d$ and $\lambda^d \eta^d$ are coprime in $R$ (as in the last paragraph of the proof of Lemma~\ref{lem:coprime}). Hence, by Lemma~\ref{lem:PmPn2}, we know that $P_{m/d}(\lambda^d, \eta^d)$ and $P_{n/d}(\lambda^d, \eta^d)$ are coprime in $R$, and so we have $\gcd(U_m, U_n) = U_d$ in this case. Now, we assume that both $m$ and $n$ are odd. Then, $d$ is also odd. By definition, we have $$ U_m = U_d P_{m/d}(\lambda^d, \eta^d), \quad U_n = U_d P_{n/d}(\lambda^d, \eta^d). $$ We also note that $(\lambda^d + \eta^d)^2$ and $\lambda^d \eta^d$ are coprime in $R$. Then, by Lemma~\ref{lem:PmPn-odd} we know that $P_{m/d}(\lambda^d, \eta^d)$ and $P_{n/d}(\lambda^d, \eta^d)$ are coprime in $R$, and so we have $\gcd(U_m, U_n) = U_d$. 
Finally, when $m$ and $n$ do not have the same parity, without loss of generality, we assume that $m$ is odd and $n$ is even. Then, $d$ is odd. By definition, we have $$ U_m = U_d P_{m/d}(\lambda^d, \eta^d), $$ and $$ U_n = U_d \cdot \frac{P_{n/d}(\lambda^d, \eta^d)}{\lambda^d + \eta^d} \cdot \frac{\lambda^d + \eta^d}{\lambda + \eta}. $$ Then, by Lemma~\ref{lem:PmPn-mix} we know that $P_{m/d}(\lambda^d, \eta^d)$ and $P_{n/d}(\lambda^d, \eta^d)/(\lambda^d + \eta^d)$ are coprime in $R$. Besides, by Lemma~\ref{lem:Pmn} we obtain that $P_{m/d}(\lambda^d, \eta^d)$ and $(\lambda^d + \eta^d)/(\lambda + \eta)$ are coprime in $R$. Hence, we have $\gcd(U_m, U_n) = U_d$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:primitive3}] As in \cite{Ward}, we define the sequence $(Q_n)_{n \ge 1}$ of polynomials by $Q_1 = 1, Q_2 = 1$, and $$ Q_n(X,Y) = \Phi_n(X,Y), \quad n = 3, 4, \ldots. $$ Then, it is easy to see that for any integer $n \ge 1$ we have $$ U_n = \prod_{d \mid n} Q_d(\lambda, \eta). $$ By the M{\"o}bius inversion, we have $$ Q_n(\lambda, \eta) = \prod_{d\mid n} U_d^{\mu(n/d)}. $$ So, for any irreducible polynomial $\pi$ in $R$ we have $$ v_\pi(Q_n(\lambda, \eta)) = \sum_{d \mid n} \mu(n/d) v_\pi(U_d). $$ Now, assume the characteristic $p > 0$, and suppose that $\pi$ is a prime divisor of $U_n$ which is not primitive, where $p \nmid n$. Let $m$ be the minimal positive integer such that $\pi \mid U_m$. Automatically, $p \nmid m$. Then, by Theorem~\ref{thm:strong3} we have $m \mid n$, and by Lemma~\ref{lem:vU}, for any positive integer $k$ with $p \nmid k$ $$ v_\pi(U_{mk}) = v_\pi(U_m). $$ Hence, if $m < n$, noticing that $p \nmid n$ and that $v_\pi(U_d) = 0$ unless $m \mid d$ (again by Theorem~\ref{thm:strong3} and the minimality of $m$), we obtain \begin{align*} v_\pi(Q_n(\lambda, \eta)) & = \sum_{d \mid n/m} \mu(n/(dm)) v_\pi(U_{dm}) \\ & = \sum_{d \mid n/m} \mu(n/(dm)) v_\pi(U_m) \\ & = v_\pi(U_m) \sum_{d \mid n/m} \mu(n/(dm)) = 0. \end{align*} So, any non-primitive prime divisor of $U_n$ (in the sequence $U^\prime$) does not divide $Q_n(\lambda, \eta)$. It is easy to see that when $n > 2$, $Q_n(\lambda, \eta) = \Phi_n(\lambda, \eta)$ is non-constant (because at least one of $\lambda$ and $\eta$ is transcendental over $K$), and so $Q_n(\lambda, \eta)$ has a prime divisor in $R$. Thus, when $n > 2$, any prime divisor of $Q_n(\lambda, \eta)$ is primitive, and so each term in the sequence $U^\prime$ beyond the second has a primitive prime divisor. The proof for the case $p = 0$ proceeds in exactly the same way. \end{proof} \begin{remark} \label{rem:Lehmer} In the proof of Theorem~\ref{thm:primitive3}, we obtain more: the \textit{primitive part} (that is, the product of all the primitive prime divisors to their respective powers) of $U_n$ is $Q_n(\lambda, \eta) = \Phi_n(\lambda,\eta)$, where $n \ge 3$, and $p \nmid n$ if $p > 0$. \end{remark} \section{Proofs of Theorems~\ref{thm:strong2} and \ref{thm:primitive2}} The proofs follow easily from Theorems~\ref{thm:strong3} and \ref{thm:primitive3}. \begin{proof}[Proof of Theorem~\ref{thm:strong2}] Fix positive integers $m, n$ with $d= \gcd(m,n)$. If either both $m,n$ are odd, or both $m,n$ are even, it follows directly from Theorem~\ref{thm:strong3} that $\gcd(L_m, L_n) = L_d$ (setting $\lambda = \alpha, \eta = \beta$). Now, without loss of generality, assume that $m$ is even and $n$ is odd. By Theorem~\ref{thm:strong3}, we have $$ \gcd\Big(\frac{\alpha^m - \beta^m}{\alpha^2 - \beta^2}, \frac{\alpha^n - \beta^n}{\alpha - \beta} \Big) = \frac{\alpha^d - \beta^d}{\alpha - \beta}.
$$ Using Lemma~\ref{lem:abn} we know that $(\alpha^n - \beta^n)/(\alpha - \beta)$ and $\alpha + \beta$ are coprime in $R$. Hence, we obtain $\gcd(L_m, L_n) = L_d$. This completes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:primitive2}] Assume that the characteristic $p = 0$. First, by Lemma~\ref{lem:abn}, we have that for any odd $n \ge 3$, $\Phi_n(\alpha,\beta)$ and $\alpha + \beta$ are coprime in $R$. Now, fix an even integer $n \ge 4$. Suppose that there exists an irreducible polynomial, say $\pi$, in $R$ dividing both $\Phi_n(\alpha, \beta)$ and $\Phi_2(\alpha,\beta) = \alpha + \beta$. This means that the polynomial $X^n - Y^n$, defined over the fraction field of the ring $R$ (mod $\pi$), has a multiple root (that is, $(\alpha, \beta)$). However, this fraction field has characteristic zero (because it contains the field $K$), which implies that $X^n - Y^n$ is in fact a simple polynomial. Hence, this leads to a contradiction, and so $\Phi_n(\alpha, \beta)$ and $\alpha + \beta$ are coprime in $R$. Therefore, by constructions we directly obtain from Remark~\ref{rem:Lehmer} that the primitive part of $L_n$ is $\Phi_n(\alpha,\beta)$, where $n \ge 3$. Finally, if the characteristic $p > 0$, then by contruction the above arguments still work (because in the sequence $L^\prime$ we have deleted those terms $L_n$ with $p \mid n$). \end{proof} \begin{remark} \label{rem:Lucas} In the proof of Theorem~\ref{thm:primitive2}, we obtain more: the primitive part of $L_n$ is $\Phi_n(\alpha,\beta)$, where $n \ge 3$, and $p \nmid n$ if $p > 0$. \end{remark} \section{Proofs of Theorems~\ref{thm:strong1} and \ref{thm:primitive1}} Clearly, Theorem~\ref{thm:strong1} follows directly from Theorem~\ref{thm:strong2}. \begin{proof}[Proof of Theorem~\ref{thm:primitive1}] Assume that the characteristic $p = 0$. Fix an integer $n \ge 3$. Taking $\alpha = f$ and $\beta = g$ in Theorem~\ref{thm:primitive2} and noticing Remark~\ref{rem:Lucas}, we know that the primitive part of the term $ (f^n - g^n) / (f-g)$ is $\Phi_n(f,g)$. As the above, we obtain that $\Phi_n(f,g)$ and $f-g$ are coprime in $R$. Hence, the primitive part of the term $F_n = f^n - g^n$ is $\Phi_n(f,g)$. Finally, if the characteristic $p > 0$, then by contruction the above arguments still work (because in the sequence $F^\prime$ we have deleted those terms $F_n$ with $p \mid n$). \end{proof} \begin{remark} In the proof of Theorem~\ref{thm:primitive1}, we obtain more: the primitive part of $F_n$ is $\Phi_n(f,g)$, where $n \ge 3$, and $p \nmid n$ if $p > 0$. \end{remark} \section{Comments} In this section, we make some remarks about extending our results to unique factorization domains. Note that all the lemmas used in proving the strong divisibility property are valid for any unique factorization domain. So, we have the following result. \begin{theorem} The strong divisibility properties in Theorems~\ref{thm:strong3}, \ref{thm:strong2} and \ref{thm:strong1} still hold when we replace the ring $R$ by a unique factorization domain $D$. \end{theorem} In order to extend fully all our results on primitive divisors to a unique factorization domain $D$, we need to assure two properties. One is about the valuation similar as in Lemma~\ref{lem:vU}. The other is to assure that $\Phi_n(f,g), \Phi_n(\alpha,\beta)$ and $\Phi_n(\lambda, \eta)$ are all non-zero and non-unit whenever $n \ge 3$. 
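As a quick computational illustration of the strong divisibility statement of the preceding theorem, one may take the unique factorization domain $D = \Z[t]$ with $\alpha + \beta = t$ and $\alpha\beta = -1$, so that $(L_n)_{n \ge 1}$ becomes the sequence of Fibonacci polynomials, and verify $\gcd(L_m, L_n) = L_{\gcd(m,n)}$ (up to units) for small indices. The following sketch is our own illustration and not part of the argument; it assumes Python with SymPy.
\begin{verbatim}
from math import gcd as igcd
from sympy import symbols, gcd, rem, expand

t = symbols('t')

# Lucas sequence over Z[t] with alpha + beta = t, alpha*beta = -1,
# i.e. L_{n+2} = t*L_{n+1} + L_n (the Fibonacci polynomials).
L = {1: 1, 2: t}
for n in range(3, 25):
    L[n] = expand(t*L[n - 1] + L[n - 2])

# Strong divisibility: gcd(L_m, L_n) and L_{gcd(m,n)} are associates.
for m in range(1, 25):
    for n in range(1, 25):
        g, d = gcd(L[m], L[n]), igcd(m, n)
        assert rem(g, L[d], t) == 0 and rem(L[d], g, t) == 0
\end{verbatim}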
If $D$ contains a field, then any integer as an element in $D$ is either zero or a unit, and so the valuation result holds in this case by the same arguments as in Lemma~\ref{lem:vU}. Hence, in this case, if one can show that $\Phi_n(f,g)$ is a non-unit whenever $n > n_0$ for some integer $n_0$, then one in fact proves the result in Theorem~\ref{thm:primitive1} with ``beyond the second'' replaced by ``beyond the $n_0$-th''. Similar considerations apply to Theorems~\ref{thm:primitive3} and \ref{thm:primitive2}. We present an example here. Let $D=K[[X]]$ be the ring of formal power series over a field $K$ in one variable $X$. Then, an element $\sum_{n=0}^{\infty}a_n X^n$ in $D$ is a unit if and only if $a_0 \ne 0$. Let $f$ and $g$ be non-zero, non-unit and coprime in $D$ such that $f/g$ is not a root of unity. Then, $\Phi_n(f,g)$ is non-zero and non-unit for any $n \ge 1$, and so Theorem~\ref{thm:primitive1} holds in this case. In addition, if we let $f$ be a non-unit and $g$ a unit in $D$, then $\Phi_n(f,g)$ is a unit for any $n \ge 1$, and so the above results on primitive divisors do not extend to this case. \section*{Acknowledgement} The author was partly supported by the Australian Research Council Grant DE190100888.
\section{Introduction} In transportation research, it is quite unlikely to observe or even perform real-world experiments in terms of travel behavior or traffic flow. There are few notable exceptions: subway strikes suddenly make one important alternative mode not available anymore \cite{Anderson2014,Adler2016}, a global pandemic changes travelers' preferences for traveling at all or traveling collectively with others \cite{Molloy2021}, or a bridge collapse forces travelers to alter their daily activities \cite{Zhu2010}. However, in 2022 the German federal government announced in response to a sharp increase in energy and consumer prices a set of measures that partially offset the cost increases for households. Among these are a public transport ticket at 9\ EUR per month\footnote{\url{https://www.bundesregierung.de/breg-de/aktuelles/9-euro-ticket-2028756}} for traveling all across Germany in public transport, except for long-distance train services (e.g., ICE, TGV, night trains), as well as a tax cut on gasoline and diesel, resulting in a cost reduction of about 15\ \% for car drivers\footnote{\url{https://www.bundesfinanzministerium.de/Content/DE/Standardartikel/Themen/Schlaglichter/Entlastungen/schnelle-spuerbare-entlastungen.html}}. Both measures were limited to three months, namely June, July and August 2022. As of end of August, more than 52\ million tickets have been sold\footnote{ \url{https://www.vdv.de/bilanz-9-euro-ticket.aspx} }, while it seems that the fuel tax cut did not reach consumers due to generally increased fuel prices and oil companies are accused of not forwarding the tax cuts to consumers \footnote{\url{https://www.spiegel.de/wirtschaft/tankrabatt-hat-zunehmend-an-wirkung-verloren-rwi-studie-a-cb7a4e84-c943-44a3-b0d3-fcfff9ba3061?dicbo=v2-bf58f4c0d939c05bc696544c175d1063}}. For the Munich metropolitan region in Germany, we designed a study under the label "Mobilität.Leben" \footnote{\url{https://www.hfp.tum.de/hfp/tum-think-tank/mobilitaet-leben/}} comprising three elements: (i) a three-wave survey before, during and after the introduction of cost-saving measures; (ii) a smartphone app based measurement of travel behavior and activities during the same period; (iii) an analysis of aggregated traffic counts and mobility indicators. We will use data from 2017 as well as 2019 (pre-COVID-19) and data from shortly before the cost reduction measures for the comparison. In addition, the three-wave survey is presented to a nation-wide control group, which however does not participate in the app. The main goal of the study is to investigate the effectiveness of the cost-saving measures with focus on the behavioral impact of the 9\ EUR-ticket on mode choice \cite{ben1985discrete}, rebound effects \cite{Greening2000,Hymel2010}, and induced demand \cite{Weis2009}. Further details on the study design can be found in our first and second report \cite{reportone,cantner_nation-wide_2022}. In this third report, we first provide an update on the study participation in Section \ref{sec:participation}; second, we present how participants see and use the 9~EUR-ticket in Section \ref{sec:ticket}; third, Section \ref{sec:mobility} provides a summary of the travel behavior reported in the second wave during the experiment in comparison to the time before; in Section \ref{sec:wtp} we show preliminary results regarding the willingness-to-pay for a successor of the 9~EUR-ticket and finally present the reported impacts on the households' finances in Section \ref{sec:finance}. 
\section{Study participation update} \label{sec:participation} For the entire study, 2~268 participants had been successfully recruited. The entire sample comprises 1~349 participants for the \textit{Munich study} (MS) and 919 participants from the nation-wide \textit{control group} (CG). At the end of the experiment in August 2022, four participants from the Munich study explicitly opted out. If not stated otherwise, numbers in parentheses refer to the findings from the control group. The first survey wave has been fully completed by 1~225 participants or 91.1~\% of the recruited participants (CG: 100~\%). The first wave has been distributed to participants at the end of May, right before the beginning of the cost reduction measures in June. The second survey has been fully completed by 1~010 participants or 75.1~\% of the recruited participants (CG: 75.2~\%). The second wave has been distributed at the end of July. In the Munich study, 18 participants completed the second survey but not the first survey, resulting in only 992 joint responses for the first and second survey, or 73.8~\% of the recruited participants. Considering the app participation, at the end of August 2022, 775 participants are still using the app, of which 717 have completed both surveys. \section{9~EUR-ticket} \label{sec:ticket} In the Munich study, 90~\% (CG: 49~\%) and 89~\% (CG: 48~\%) of the second-wave respondents bought the 9~EUR-ticket for June and July, respectively. For August, 49~\% (CG: 20~\%) bought the 9~EUR-ticket and 40~\% (CG: 29~\%) had the intention to buy it when asked at the end of July. Consequently, we can conclude that the interest in holding this travel pass remained essentially unchanged over time; in particular, the data gives no indication that it increased. Officially, around 52 million 9~EUR-tickets have been sold in total during the three months, and ten million people received it indirectly through their existing travel card subscriptions. This corresponds to around one third of Germany's population. Consequently, the ownership shares in the Munich study and the control group are remarkably higher. In the second wave, the 9~EUR-ticket receives support from respondents across the board. For example, 84~\% (CG: 61~\%) of respondents agree with the statement that the 9~EUR-ticket leads to a more comprehensible pricing structure, while 85~\% (CG: 72~\%) agree with the statement that the new travel pass makes traveling in Germany more flexible. 80~\% of respondents (CG: 60~\%) agree that the 9~EUR-ticket makes them less worried about buying the wrong public transport ticket. Regarding the savings caused by the 9~EUR-ticket, 64~\% (CG: 65~\%) of respondents agree with the statement that the savings can be spent on meaningful products or services. \section{Travel behavior} \label{sec:mobility} When asked about their behavioral changes during the first weeks of the 9~EUR-ticket and fuel rebate compared to the time before, 48~\% (CG: 29~\%) of respondents state, as seen in Figure \ref{fig:travel}, that they increased public transport use, 3~\% (CG: 6~\%) state that they decreased public transport use, and 47~\% (CG: 65~\%) report no change. Regarding car use, 5~\% (CG: 7~\%) of respondents state that they increased car use, 31~\% (CG: 25~\%) that they decreased car use and 59~\% (CG: 67~\%) state no change in their car use.
Overall, 89~\% (CG: 87~\%) of respondents who increased public transport use state that this was in response to the introduction of the 9~EUR-ticket, while only 74~\% (CG: 66~\%) of respondents who decreased car use state that this was in response to the introduction of the 9~EUR-ticket. Interestingly, regarding the increase in car use, 2~\% (CG: 40~\%) of respondents argue that their increase is in response to the fuel tax cut. \begin{figure} \centering \includegraphics[width=14cm]{hist.pdf} \caption{Stated changes in travel behavior during the period of the 9~EUR-ticket and the fuel tax cut compared to the time before.} \label{fig:travel} \end{figure} Considering both modes jointly, we find that 26~\% (CG: 17~\%) of respondents report an increase in public transport use and an decrease in car use. Out of these, more than 82~\% (CG: 84~\%) of respondents argue that the reason for their behavioral change is the introduction of the 9~EUR-ticket. We corroborate these first findings by considering the stated weekly car and public transport usage patterns reported in the first and second wave. Here, we classify usage as follows: never, less than once per week, once per week, 2-3 days per week, 4-5 days per week and 6-7 days per week. Thus, observing a change here, is likely to be more robust. We find that 55~\% (CG: 63~\%) of respondents did not change their stated car use, while 46~\% (CG: 58~\%) of respondents did not change their stated public transport use; 24~\% (CG: 18~\%) of respondents show an increase and 22~\% (CG: 18~\%) of respondents a decrease in car use accordingly; 37~\% (CG: 23~\%) of respondents show an increase and 17~\% (CG: 18~\%) show a decrease in public transport use respectively. Considering both modes jointly, we find that 8.6~\% (CG: 5.9~\%) of respondents increased public transport and decreased car use. When focusing only on those respondents who used the car at least four days per week, i.e. those who can change their behavior, we find that 18~\% (CG: 7.2~\%) of respondents increased public transport and decreased car use. The substantial difference between this figure and the 26~\% (CG: 17~\%) reported above can be explained by the fact that the lower figure refers to a more coarse weekly pattern classified into bins of two days width. This means that, for example, replacing one out of two car trips per day by public transport is not reflected in this classification, only if it is completely abandoned for one day or more. However, the weekly pattern can be expected to provide more robust estimates regarding the impact of the behavioral change, e.g., in terms of kilometers traveled. Regarding activities, 35~\% (CG: 23~\%) of respondents state that they participated in more activities as a consequence of the 9~EUR-ticket, while 44~\% (CG: 59~\%) state that the introduction of the new travel pass did not increase their number of activities. Respondents report to use public transport on average for 1.2 more activities per week and to use the car for 0.7 less activities per week. This finding again provides evidence that the introduction of the 9~EUR-ticket generates to some extent new travel demand. \section{Willingness-to-pay for a successor of the 9~EUR-ticket} \label{sec:wtp} In the second survey, respondents were asked to state their maximum willingness-to-pay for a nation-wide travel pass for all local public transport services and for all public transport services including long-distance services. Figure \ref{fig:wtp} shows the distributions of the responses. 
The average willingness-to-pay for a nation-wide travel pass for all local public transport services as a successor of the 9~EUR-ticket was 52.39~EUR (CG: 47.74~EUR), while the average willingness-to-pay for a nation-wide travel pass for all public transport services including long-distance services was 101.47~EUR (CG: 77.62~EUR). For the nation-wide travel pass for all local services, we find that higher incomes increase the willingness-to-pay by 10~EUR to 15~EUR compared to the lowest income group. We find no statistically significant differences between males and females, but a small age effect of about minus 2~EUR per ten years of age in the willingness-to-pay, i.e., older people are willing to spend less on such a ticket. Students' willingness-to-pay is about 7~EUR lower compared to working people. Using the car frequently does not impact the willingness-to-pay, but being a public transport user before the introduction of the 9~EUR-ticket increases the willingness-to-pay by around 18~EUR. The tendency of effects is similar for the nation-wide travel pass for all services including long-distance services, but at a higher level, as can be seen in Figure \ref{fig:wtp}. However, for this type of travel pass an effect of gender exists, as women's willingness-to-pay is around 8~EUR less. \begin{figure}[t!] \centering \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{wtp_local.pdf} \caption{} \label{fig:local} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{wtp_full.pdf} \caption{} \label{fig:nation} \end{subfigure} \caption{Willingness-to-pay for nation-wide travel passes. (a) shows the distribution for a travel pass for local public transport services only and (b) shows the distribution for a travel pass for all public transport services, incl. long-distance services like ICE, IC and EC.} \label{fig:wtp} \end{figure} Comparing the findings from Figure \ref{fig:wtp} to the public debate, we find that our sample's average willingness-to-pay is very close to the discussed 49~EUR, which has been proposed by the Social Democratic and the Green Party, currently part of the federal government. In contrast, the willingness-to-pay is substantially below the 69~EUR as proposed by the Association of German Transport Companies (VDV). \section{Financial aspects} \label{sec:finance} The primary intention of the fuel tax cut and the 9~EUR-ticket has been to partially offset the recent price increases. Therefore, it is not surprising that the respondents' agreement with the statement that the ticket is a relief stays high at 82~\% (CG: 75~\%). Overall, 76~\% (CG: 39~\%) of respondents state that they benefited financially from the 9~EUR-ticket, while only 24~\% (CG: 32~\%) mentioned that they benefited financially from the fuel tax cut. Table \ref{tab:1} shows the stated savings of the Munich study respondents (and the control group in parentheses). It can be seen that the savings for most respondents are less than 50~EUR per month. Yet, a substantial portion stated savings between 50~EUR and 100~EUR per month. Overall, 48.9~\% (CG: 55.2~\%) of respondents state that these savings have already been consumed by inflation, while 14.5~\% (CG: 11.7~\%) state that they used these savings for spending on other goods and services. \begin{table}[h!]
\centering \begin{tabularx}{\textwidth}{l | cccccc} \toprule & \multicolumn{6}{c}{Savings per month in EUR}\\ \cmidrule{2-7} & < 50 & 50-100 & 100-150 & 150-200 & 200-250 & > 250 \\ \midrule Benefit from ticket, N = 781 (269) & 45.1 (40.9) & 38.7 (36.4) & 10.8 (13.4) & 4.1 (6.7) & 0.8 (1.9) & 0.6 (0.7) \\ Benefit from fuel tax, N = 238 (221) & 47.0 (56.11) & 35.3 (26.7) & 11.3 (11.3) & 5.0 (4.5) & 0.4 (1.4) & 0.8 (0.0) \\ Benefit from both, N = 193 (101) & 41.5 (34.7) & 38.9 (37.6) & 13.0 (16.8) & 5.2 (8.9) & 0.5 (2.0) & 1.0 (0.0) \\ \bottomrule \end{tabularx} \caption{Distribution of respondents across savings per month from the fuel tax cut and the 9~EUR-ticket in percent. The values from the control group are given in parentheses.} \label{tab:1} \end{table} \section{Discussion and outlook} In this third report, we have provided some first insights into the second wave of our study ``Mobilität.Leben''. Based on the responses, it can be seen that the introduction of the 9~EUR-ticket impacted mobility and everyday life. There is evidence that some respondents changed from the car to public transport, but in order to estimate the precise extent of this effect we need more data and analyses from our app-based travel diary. There is also some evidence that the 9~EUR-ticket increased travel demand. Nevertheless, the first findings indicate effect sizes similar to those already found elsewhere \cite{keblowski_why_2020}. In closing, it should be noted that this report does not present the final results of our study; the findings reported here are therefore preliminary. The presented results are not yet weighted to correspond to a representative sample. Thus, the findings at this moment in time only describe our sample from the Munich metropolitan area and the control group. The next steps include the analysis of travel behavior based on the smartphone app before, during and after the fuel tax cut and the 9~EUR-ticket, as well as completing the study with the third survey wave in September and October, which includes a stated preference experiment on the pricing of a successor ticket of the 9~EUR-ticket. \section*{Acknowledgements} The authors would like to thank the TUM Think Tank at the Munich School of Politics and Public Policy led by Urs Gasser for their financial and organizational support and the TUM Board of Management for personally supporting the genesis of the project. The authors thank the company MOTIONTAG for their efforts in producing the app at unprecedented speed. Further, the authors would like to thank everyone who supported us in recruiting participants, especially Oliver May-Beckmann (M Cube) and Ulrich Meyer (TUM). \bibliographystyle{unsrt}
\section{\label{sec:Intro} Introduction} \par The shandite Co$_3$Sn$_2$S$_2$, hosting the kagome lattice of Co ions, is a fascinating magnetic material in several respects. It is a ferromagnet with small spontaneous magnetization (about $0.3$ $\mu_{\rm B}$ per Co site), but a relatively high Curie temperature $T_{\rm C} \sim 177$ K. It has attracted a lot of attention as a magnetic Weyl semimetal whose non-trivial topology of the electronic states gives rise to a large anomalous Hall effect (AHE)~\cite{Liu,Xu,Minami,Yanagi}. The quantum AHE was also realized by fabricating two-dimensional devices based on Co$_3$Sn$_2$S$_2$~\cite{Tanaka,Muechler}. These intriguing magnetic properties are further amplified by the fact that Co$_3$Sn$_2$S$_2$ is half-metallic~\cite{Jiao}, where the conduction takes place only in one spin channel while the other spin channel is gapped. This half-metallicity remains robust upon doping in Co$_3$In$_x$Sn$_{2-x}$S$_2$, where the ground-state magnetization depends linearly on $x$ and persists up to $x \simeq 0.85$, as was demonstrated in theoretical~\cite{Yanagi} and experimental~\cite{Zhou,Kassem} studies. The coexistence of the easy-axis ferromagnetic (FM) and planar $120^{\circ}$ antiferromagnetic (AFM) orders was proposed from the $\mu$SR measurements in the temperature range $T_{\rm A} < T < T_{\rm C}$ (where $T_{\rm A} \sim 90$ K), and the intensity of the AHE was proportional to the fraction of the FM phase~\cite{Guguchia.NatComm}. The anomaly of the magnetic susceptibility, which may be relevant to the two-phase state, was also observed but at somewhat higher $T_{\rm A} \sim 130$ K~\cite{ZhangPRL}. On the other hand, no evidence of the AFM component up to $T_{\rm C} \sim 177$ K was observed in recent unpolarized neutron diffraction and spherical neutron polarimetry measurements~\cite{SohArxiv}. \par Very generally, the Weyl semimetals require either spatial inversion or time reversal symmetry to be broken. While the early realizations were all from the former category, the latter direction has recently attracted more and more attention. For instance, the intrinsic AHE in Co$_3$Sn$_2$S$_2$ is associated with the spontaneous time-reversal symmetry breaking caused by the FM order. In this sense, the origin of this FM order is one of the key questions in the physics of Co$_3$Sn$_2$S$_2$. Nevertheless, it is not fully understood and remains largely controversial~\cite{SavrasovPRB2021}. \par The small magnetization is believed to be related to the cluster effects, which also reduce the effective Coulomb interactions~\cite{SavrasovPRB2021}, as expected for molecular type compounds~\cite{lacunar}. \par The half-metallic state implies the absence of Stoner excitations, so that the important role of spin fluctuations is generally expected in Co$_3$Sn$_2$S$_2$ (e.g., in the temperature dependence of the magnetic moment). Indeed, experimental magnetization curves~\cite{Kassem} for Co$_3$In$_x$Sn$_{2-x}$S$_2$ demonstrate strong fluctuations and are reminiscent of those for weak itinerant ferromagnets, especially at large $x$, where the ground state moment is strongly reduced. The electronic structure is also expected to be unusual and to feature the appearance of non-quasiparticle states in the gap owing to the electron-magnon scattering~\cite{RMP}. On the other hand, with increasing $T$ beyond the spin-wave region, the spin fluctuations inherent to the itinerant magnets should play a role.
This is confirmed by the parameters of Takahashi's theory~\cite{Takahashi} obtained from the Arrott plot and fitted to a generalized Rhodes–Wohlfarth plot, $p_{\rm eff}/p_{\rm s}$ versus $T_{\rm C}/T_{\rm 0}$ ($p_{\rm s}$ being the spontaneous moment, $p_{\rm eff}$ being the effective moment, and $T_{\rm 0}$ being a measure of the spin-fluctuation spectral distribution in the frequency space): $T_{\rm 0} = 1230$ K and $p_{\rm eff}/p_{\rm s}=2.14$ at $x=0$ (and considerably increase with the increase of $x$)~\cite{Kassem}. Such a situation is radically different from half-metallic localized-moment Heusler compounds, where $T_C/T_0 \simeq 1$ and $p_{\rm eff} \simeq p_{\rm s}$~\cite{Sakon}. Besides that, in the quasi-two-dimensional situation, specific fluctuation behavior occurs even in the localized-spin model~\cite{kat07}. \par The problem of exchange interactions and stability of the FM state was addressed recently on the basis of combined experimental inelastic neutron scattering studies and theoretical calculations in the framework of density functional theory (DFT)~\cite{Liu2021,ZhangPRL}. On the experimental side, it was concluded that the FM order is primarily stabilized by the long-range ``across-hexagon'' interaction in the kagome plane~\cite{ZhangPRL}. One should note, however, that the available experimental spin-wave dispersion data is limited only to the acoustic branch close to the $\Gamma$ point~\cite{Liu2021,ZhangPRL}. Moreover, the experimental picture of interatomic exchange interaction was in sharp contrast with the results of theoretical calculations, predicting the strongest nearest-neighbor interaction to be in the kagome plane~\cite{Liu2021,ZhangPRL}. On the other hand, the theoretical analysis was based on the magnetic force theorem (MFT)~\cite{LKAG1987}, the validity of which is known to be questionable for the itinerant electron systems as it relies on additional approximations. Therefore, more rigorous theoretical methods may be necessary~\cite{BrunoPRL2003,Antropov2006,PRB2021}. \par In this work, we systematically study the problem of stability of the FM state in Co$_3$Sn$_2$S$_2$ using different kinds of response theories and argue that the magnetism of Co$_3$Sn$_2$S$_2$ has a dual nature. First, we consider the criteria of emergence of the FM state caused by longitudinal fluctuations of the magnetic moments and show that the behavior of Co$_3$Sn$_2$S$_2$ bears certain similarities to the Stoner picture of itinerant magnetism in the sense that the local moments are pretty soft and can easily evolve with temperature (on a reasonable temperature scale) and depending on the angle between them. Nevertheless, the transversal spin fluctuations, relevant to the rotational spin degrees of freedom, are also important and should be rigorously considered in the analysis of stability of the FM state. \par The article is organized as follows. In Sec.~\ref{sec:DFT} we briefly discuss the details of DFT calculations and summarize the key results, which are important for understanding the origin of the ferromagnetism in Co$_3$Sn$_2$S$_2$. Then, in Sec.~\ref{sec:model}, we deal with the realistic electronic model extracted from DFT in the basis of Wannier functions and capturing the essential ingredients of the electronic structure of Co$_3$Sn$_2$S$_2$ relevant to the magnetism. Particularly, in Sec.~\ref{sec:existence}, we consider the criteria of emergence of the magnetic state from the nonmagnetic one, which explains the main tendencies of DFT calculations. 
The analysis is similar to the Stoner theory of magnetism~\cite{Stoner}, but generalized to the case of several different atoms in the primitive cell, including the ligand states. Namely, we explicitly show that the magnetic solution exists up to certain critical angles formed by three Co spins in the kagome lattice and collapses to the nonmagnetic state when the angles exceed these critical values. This is clearly different from the Heisenberg picture of magnetism, which would be expected for the localized spins. Nevertheless, the Heisenberg model can be still introduced locally for the description of local stability of the FM state with respect to the transversal spin fluctuations caused by the infinitesimal rotations of spins~\cite{LKAG1987}. We consider such model in Sec.~\ref{sec:Jij}. For these purposes we employ a formally exact theory of interatomic exchange interactions~\cite{PRB2021} and show how it revises the MFT based results. Particularly, the exact theory predicts Co$_3$Sn$_2$S$_2$ to be the three-dimensional ferromagnet with the strongest interaction $J_{5}$ operating between the kagome planes in the fifth coordination sphere. Moreover, the ligand states play a very important role in strengthening the FM interactions. Then, in Sec.~\ref{sec:Jm}, we investigate the dependence of the exchange interactions on the value of total magnetization $M$ in the FM state. By these means we simulate the temperature effects, which according to the Stoner picture should decrease the magnetization. We show that the inter-plane interactions drastically decrease with the decrease of $M$, making the FM state unstable with respect to the spin-spiral state propagating perpendicular to the kagome planes, which may be relevant to the AFM phase emerging below $T_{\rm C}$~\cite{Guguchia.NatComm}. Finally, in Sec.~\ref{sec:summary}, we summarize our work. \section{\label{sec:DFT} GGA calculations} \subsection{\label{sec:DFT_details} Details} \par First-principles electronic structure calculations for Co$_{3}$Sn$_{2}$S$_{2}$ were performed in the generalized gradient approximation (GGA)~\cite{gga-pbe} for the experimental crystal structure~\cite{structure} using Vienna ab-initio simulation package (VASP)~\cite{vasp} within the framework of projected augmented waves~\cite{paw}. The rhombohedral Brillouin zone was sampled on a mesh of 10$\times$10$\times$10 Monkhorst-Pack $\boldsymbol{k}$-points~\cite{mpack}. The partial occupancies were determined using the Methfessel-Paxton scheme with the smearing of 0.1 eV~\cite{MethfesselPaxton}. The convergence criteria for the total energy calculations was set to 10$^{-8}$ eV. We have considered two types of magnetic structures: (i) a collinear FM state with a fixed value of the total magnetization defined as a difference between the spin-up and spin-down states, (ii) a non-coplanar umbrella-type spin texture (which can be viewed as a continuous transformation of the FM state to the $120^{\circ}$ spin state in the $xy$ plane), by constraining directions of the magnetic moments at three Co sites (while allowing the size of the moments to relax in the course of self-consistency). \par The electronic structure of Co$_{3}$Sn$_{2}$S$_{2}$ was interpolated in the basis of Wannier functions constructed for the Co 3$d$, Sn 5$p$, and S 4$p$ orbitals using the maximal localization technique~\cite{wannier90}. 
The calculated band structures were disentangled in the range from $\sim-8$ eV to $\sim5$ eV with respect to the Fermi level, and the states up to $\sim2$ eV above the Fermi level were kept frozen during the wannierization. \par Nodal lines and positions of the Weyl points were identified based on the Wannier interpolation by using the WannierTools package~\cite{WannierTools}. \subsection{\label{sec:DFT_summary} Summary of main results} \par The band structures of Co$_{3}$Sn$_{2}$S$_{2}$ in a collinear FM state calculated without and with spin–orbit coupling are shown in Fig.~\ref{fig.band_gga}. \noindent \begin{figure}[b] \begin{center} \includegraphics[width=0.48\textwidth]{figure1.eps} \end{center} \caption{(a) Band structure for the collinear ferromagnetic state of Co$_{3}$Sn$_{2}$S$_{2}$ with and without spin-orbit coupling (GGA+SOC and GGA, respectively). (b) Band crossing (left) and nodal lines (middle and right) calculated for the pair of spin-up states at the Fermi level without spin-orbit coupling. Blue and pink points correspond to the Weyl nodes with opposite chiralities, as calculated in the presence of spin-orbit coupling. Grey planes denote the mirror planes in the reciprocal space. Band crossings derived from the gap function are only shown in the first Brillouin zone.} \label{fig.band_gga} \end{figure} \noindent Co$_{3}$Sn$_{2}$S$_{2}$ is half-metallic where the spin-down channel has a gap of $\sim0.33$~eV at the Fermi level, and the total magnetic moment $M$ is 1 $\mu_{\mathrm{B}}$ per formula unit. Without spin-orbit coupling, the spin-up states in the vicinity of the Fermi level develop linear band crossings along the $\mathrm{P}-\mathrm{L}$ and $\mathrm{L}-\Gamma$ paths due to the band inversion. In fact, the proximity of the spin-up states at the Fermi level and the corresponding gap function $(E_{n+1}-E_{n})^{2}$ has a complicated structure, where the band crossings form closed intersecting lines, as shown in Fig.~\ref{fig.band_gga}(b). Six closed lines lie in the mirror planes of the $D_{3d}$ point symmetry and turn out to be topologically protected in the absence of spin-orbit coupling, forming the nodal lines. In the presence of spin-orbit coupling, the FM state looses its mirror symmetry. This causes the crossings to split and open small gaps with band anti-crossings along the former nodal lines, except for a pair of points for each nodal line where the linear crossing persists. These points known as the Weyl nodes act as a monopole sink and source of the Berry curvature with the opposite topological charges (or chiralities, $\chi=\pm1$). \par Deviation of the spin magnetization from the ground state value $M=1$ $\mu_{\mathrm{B}}$ destroys the half-metallic character of the electronic structure, so that the Fermi level crosses the majority spin (spin up) as well as minority spin (spin down) states (Fig.~\ref{fig.band}). \noindent \begin{figure}[b] \begin{center} \includegraphics[width=0.48\textwidth]{figure2.eps} \end{center} \caption{Electronic band structures of Co$_{3}$Sn$_{2}$S$_{2}$ in the collinear ferromagnetic state as obtained in constraint calculations with the fixed value of total magnetization: (a) $M=1$ $\mu_{\mathrm{B}}$, (b) $M=0.99$ $\mu_{\mathrm{B}}$, (c) $M=0.78$ $\mu_{\mathrm{B}}$, and (d) $M=0.55$ $\mu_{\mathrm{B}}$. 
The dashed lines denote the Fermi level.} \label{fig.band} \end{figure} \par Then, let us consider the results of constrained GGA calculations, where we fix the absolute values of magnetic moments at the Co sites in the FM structure. The dependence of total energy ${\cal E}$ on the magnetic moment is shown in Fig.~\ref{fig.EM}. \noindent \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figure3.eps} \end{center} \caption{ Total energy versus magnetic moment of three Co sites in the unit cell, as obtained in constrained GGA calculations. The magnetic moment was evaluated within atomic spheres of radii $1.3$~\AA. The total energy is calculated relative to the nonmagnetic state.} \label{fig.EM} \end{figure} \noindent The minimum is obtained at $1.035$ $\mu_{\rm B}$ (evaluated with Co atomic spheres of radii $1.3$~\AA), which corresponds to the total moment $M=1$ $\mu_{\rm B}$ (also including the contributions of the S and Sn sites as well as of the interstitial region). Then, ${\cal E}$ increases with the decrease of $M$. However, the change is relatively small (corresponding to only $143$ K on the temperature scale, which is comparable with the experimental $T_{\rm C}$). This is the first indication of the itinerant character of magnetism in Co$_3$Sn$_2$S$_2$, where a modest elevation of $T$ results in a change of the absolute value of $M$. For instance, a simple thermal averaging with $e^{-{\frac{{\cal E}(M)}{k_{\rm B} T}}}$ will decrease $M$ by about 25\% for temperatures close to $T_{\rm C}$. The derivative discontinuity of the total energy ${\cal E}$ at $M=1$ $\mu_{\rm B}$ is related to the half-metallic character of the electronic structure, where the constraining field, $h= -\frac{\partial {\cal E}}{\partial M}$, required to produce the magnetization in the vicinity of $M=1$ $\mu_{\rm B}$ undergoes a jump of the order of the energy gap in the minority spin channel. \par This is quite contrary to the expectations based on the band splitting between the majority- and minority-spin states near the Fermi level, which is about $0.5$ eV in the ground state~\cite{SavrasovPRB2021} (see Fig.~\ref{fig.band}). On the temperature scale, this splitting would correspond to $5800$ K and would definitely rule out the Stoner picture of magnetism for Co$_3$Sn$_2$S$_2$. However, for the half-metallic state in DFT such splitting is not well defined (the shift of the minority-spin states does not change the energy and magnetization, provided that the Fermi level continues to fall in the gap)~\cite{EschrigPickett}. More generally (and according to the philosophy of DFT), the Kohn-Sham (KS) single particle energies are an auxiliary construction without a clear physical meaning. Therefore, the thermodynamic properties in DFT should be evaluated using the total energies (instead of the KS single particle ones), which leads to a very different temperature scale in the case of Co$_3$Sn$_2$S$_2$. \par Fig.~\ref{fig.GGAumbrella} shows the results of another constrained GGA calculation, where the magnetic moments at the Co sites were forced to form the ``umbrella structure'', which is characterized by the rotation of spins away from the FM axis $z$ by the angle $\theta$, such that the projections of spins onto the $xy$ plane would form the $120^{\circ}$ structure. Meanwhile, the size of the magnetic moment was allowed to relax during the self-consistency.
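For reproducibility, the umbrella geometry used in these constrained calculations can be written down explicitly. The short sketch below is our own illustration (Python and the function name are not part of the original work): it generates the three Co moment directions for a given tilt angle $\theta$, with the in-plane projections forming the $120^{\circ}$ pattern, i.e., the same parametrization that is spelled out analytically in Sec.~\ref{sec:existence}.
\begin{verbatim}
import numpy as np

def umbrella_directions(theta_deg):
    """Unit vectors of the three Co moments in the umbrella structure:
    tilted by theta from the z axis, in-plane parts 120 degrees apart."""
    theta = np.radians(theta_deg)
    phi = np.radians([90.0, 210.0, 330.0])   # azimuths, 120 degrees apart
    return np.array([[np.sin(theta)*np.cos(p),
                      np.sin(theta)*np.sin(p),
                      np.cos(theta)] for p in phi])

# theta = 0 recovers the ferromagnetic alignment along z;
# theta = 90 gives the planar 120-degree structure.
print(umbrella_directions(0.0))
print(umbrella_directions(90.0))
\end{verbatim}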
\noindent \begin{figure}[b] \begin{center} \includegraphics[width=0.48\textwidth]{figure4.eps} \end{center} \caption{ (a) Total energy (left axis $y$) and size of magnetic moment at the Co site (right axis $y$) as obtained in the constrained GGA calculations for the umbrella spin structure, depending on the angle $\theta$ formed by the Co spin moments and the axis perpendicular to the plane: $\theta=0$ corresponds to the ferromagnetic order, $\theta=90^{\circ}$ corresponds to the in-plane $120^{\circ}$ spin order. (b) Top and (c) side-top view of the umbrella structure with the notations of the Co sites. } \label{fig.GGAumbrella} \end{figure} \noindent The total energy minimum is realized at $\theta =0$, thus confirming that the ground state is ferromagnetic. Then, the total energy gradually increases for $\theta \lesssim 60^{\circ}$ and becomes practically constant afterwards. For $\theta \lesssim 30^{\circ}$, the size of the local magnetic moments at each of the three Co sites, $m_{\nu} = \sqrt{(m_{\nu}^{x})^{2} + (m_{\nu}^{y})^{2} + (m_{\nu}^{z})^{2}}$ ($\nu=$ $1$, $2$, and $3$), is of the order of $0.35$ $\mu_{\rm B}$ and only weakly depends on $\theta$. However, further increase of $\theta$ leads to the collapse of magnetisation: $m_{\nu}$ decreases and becomes equal to zero around $\theta \lesssim 60^{\circ}$ where the total energy reaches the saturation and does not depend on $\theta$, i.e. contrary to what could be expected for localized spins. This is another signature of itineracy of Co$_3$Sn$_2$S$_2$: although the Heisenberg model, which is typically used for the description of localized spins, can still be defined locally, for small rotations of magnetic moments near the FM ground state (as will become evident in Sec.~\ref{sec:Jij}), it breaks down globally, for arbitrary rotations of the magnetic moments by arbitrary angles. Such behavior is not new for the itinerant electron systems: for instance, it is well known that finite rotations of magnetic moments in fcc Ni away from the FM ground state also lead to the collapse of magnetism~\cite{Turzhevskii,Singer}. A similar behaviour is observed in SrRu$_2$O$_6$ and AgRuO$_3$ within GGA, where the sublattice magnetization vanishes upon gradual rotation of spins from the N\'eel AFM ground state to the FM state~\cite{Streltsov,Schnelle2021}. \par Furthermore, these calculations rule out the existence of $120^{\circ}$ planar structure in Co$_3$Sn$_2$S$_2$, which was proposed to explain the magnetic behavior of Co$_3$Sn$_2$S$_2$ in the temperature range $90$ K $< T <$ $177$ K~\cite{Guguchia.NatComm}, because this $120^{\circ}$ structure does not seem to be compatible with the itinerant character of Co$_3$Sn$_2$S$_2$ as it evolves to the nonmagnetic state. \par Finally, we would like to emphasize that these calculations were performed without spin-orbit coupling, where the $\theta$-dependence of ${\cal E}$ stems solely from isotropic interactions in the system. It should not be confused with the easy-axis FM anisotropy considered, for instance, in Ref.~\cite{OzawaNomura}. \section{\label{sec:model} Realistic modelling} \par In order to estimate the exchange parameters and investigate the stability of the FM state, the electronic states close to the Fermi level were reexpanded in the basis of Wannier functions constructed for the Co $3d$, Sn $5p$, and S $4p$ orbitals using the maximally localized Wannier functions technique~\cite{wannier90}, as described above. 
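For readers who wish to reproduce this step, the sketch below is a minimal illustration under our own assumptions (the file name is hypothetical, and this is not the production code used in this work): it parses a standard wannier90 \texttt{seedname\_hr.dat} file and Fourier-interpolates the Wannier Hamiltonian to an arbitrary $\boldsymbol{k}$-point, which is all that is needed to generate the eigenvalues and eigenvectors entering the response calculations of the next subsection.
\begin{verbatim}
import numpy as np

def read_hr(filename):
    """Parse a wannier90 seedname_hr.dat file; return the R-vectors,
    their degeneracy weights and the real-space blocks H(R)."""
    with open(filename) as f:
        f.readline()                       # header/date line
        num_wann = int(f.readline())
        nrpts = int(f.readline())
        deg = []
        while len(deg) < nrpts:            # degeneracies, 15 per line
            deg += [int(x) for x in f.readline().split()]
        rvecs = np.zeros((nrpts, 3), dtype=int)
        hr = np.zeros((nrpts, num_wann, num_wann), dtype=complex)
        for ir in range(nrpts):
            for _ in range(num_wann**2):
                v = f.readline().split()
                i, j = int(v[3]) - 1, int(v[4]) - 1
                hr[ir, i, j] = float(v[5]) + 1j*float(v[6])
            rvecs[ir] = [int(x) for x in v[:3]]
    return rvecs, np.array(deg), hr

def hamiltonian_k(kpt, rvecs, deg, hr):
    """Fourier-interpolate H(k) for a k-point in reduced coordinates."""
    phase = np.exp(2j*np.pi*rvecs.dot(kpt)) / deg
    return np.tensordot(phase, hr, axes=(0, 0))

# Example (hypothetical file name):
# rvecs, deg, hr = read_hr('Co3Sn2S2_hr.dat')
# eps, C = np.linalg.eigh(hamiltonian_k(np.zeros(3), rvecs, deg, hr))
\end{verbatim}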
\subsection{\label{sec:existence} Emergence and stability of the magnetic order} \par First, let us discuss the emergence of the FM state. For these purposes we start with the nonmagnetic solution and evaluate analytically the second derivative of the total energy with respect to the small induced magnetization $\vec{\boldsymbol{m}}$. Very generally, the nonmagnetic state is expected to be unstable because of the kagome flat bands located near the Fermi level~\cite{Yanagi}. \par In our notations, $\boldsymbol{m}_{\nu} = (m_{\nu}^{x},m_{\nu}^{y},m_{\nu}^{z})$ is the spin magnetic moment at the unit cell site $\nu$, $\vec{\boldsymbol{m}}$ is the column vector assembled from all such $\boldsymbol{m}_{\nu}$ within the unit cell, and $\vec{\boldsymbol{m}}^{T}$ is the row vector corresponding to $\vec{\boldsymbol{m}}$. Then, we formulate the problem in the spirit of constrained spin-density functional theory, where the size and direction of $\vec{\boldsymbol{m}}$ is controlled by the external field $\vec{\boldsymbol{h}}$. The corresponding total energy (per one unit cell) is given by~\cite{PRB2021} \noindent \begin{equation} {\cal E} = {\cal E}_{\rm sp}(\vec{\boldsymbol{h}} + \vec{\boldsymbol{b}}) - \frac{1}{2}\vec{\boldsymbol{m}}^{T} \cdot\left( \vec{\boldsymbol{h}} + \vec{\boldsymbol{b}} \right) + {\cal E}_{\rm xc}(\vec{\boldsymbol{m}}), \label{eq:tenergy} \end{equation} \noindent where the first term is the sum of occupied KS single particle energies (${\cal E}_{\rm sp}$), the second term is minus interaction energy of $\vec{\boldsymbol{m}}$ with $\vec{\boldsymbol{h}}$ and the exchange-correlation (xc) field $\vec{\boldsymbol{b}} = 2 \frac{\delta {\cal E}_{\rm xc}}{\delta \vec{\boldsymbol{m}}}$, and the third terms is the xc energy (${\cal E}_{\rm xc}$), which is taken in the form~\cite{Gunnarsson} \noindent \begin{equation} {\cal E}_{\rm xc} = -\frac{1}{4} \vec{\boldsymbol{m}}^{T} \cdot \hat{\cal I} \vec{\boldsymbol{m}}, \label{eq:exc} \end{equation} \noindent so that $\vec{\boldsymbol{b}} = - \hat{\cal I} \vec{\boldsymbol{m}}$ for the site-diagonal matrix $\hat{\cal I} = [\, \dots \, , { \cal I}_{\nu}, \, \dots \, ] $, where $ { \cal I}_{\nu}$ is the Stoner parameter for an ion of the sort $\nu$. Then, it is straightforward to show that \begin{equation} {\cal E} = -\frac{1}{4} \vec{\boldsymbol{m}}^{T} \cdot \vec{\boldsymbol{h}}. \label{eq:emh} \end{equation} \noindent Furthermore, $\vec{\boldsymbol{m}}$ can be related to $\vec{\boldsymbol{h}}$ via the response tensor \noindent \begin{widetext} \begin{equation} {\cal R}_{\boldsymbol{q}}^{\sigma \sigma'} (ab,cd) = \sum_{ml \boldsymbol{k}} \frac{f_{m \boldsymbol{k}}^{\sigma} - f_{l \boldsymbol{k}+\boldsymbol{q}}^{\sigma'}}{\varepsilon_{m \boldsymbol{k}}^{\sigma} - \varepsilon_{l \boldsymbol{k}+\boldsymbol{q}}^{\sigma'}} (C_{m \boldsymbol{k}}^{a\sigma})^{*}C_{l \boldsymbol{k}+\boldsymbol{q}}^{b\sigma'} (C_{l \boldsymbol{k}+\boldsymbol{q}}^{c\sigma'})^{*}C_{m \boldsymbol{k}}^{d\sigma}, \label{eq:gresponse} \end{equation} \end{widetext} \noindent where $\varepsilon_{m \boldsymbol{k}}^{\sigma}$ are the KS eigenvalues and $C_{l \boldsymbol{k}}^{a\sigma}$ are the eigenvectors in the Wannier basis, the pairs of the orbital indices $ab$ and $cd$ belong to the atomic sites $\mu$ and $\nu$, respectively, and $f_{m \boldsymbol{k}}^{\sigma}$ is the Fermi distribution function. In the nonmagnetic states, the elements of the response tensor ${\cal R}_{\mu \nu}^{\sigma \sigma'}$ do not depend on the spin indices. 
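As an illustration of how this response tensor can be evaluated in practice, the following sketch (our own schematic implementation, not the code used in this work) accumulates the site-summed elements of Eq.~(\ref{eq:aresponse}) below at $\boldsymbol{q}=0$ from the Kohn-Sham eigenvalues and eigenvectors on a $\boldsymbol{k}$-mesh; the Fermi factors supply the temperature dependence discussed later in this subsection, and degenerate pairs are treated with the limit $-\partial f/\partial \varepsilon$.
\begin{verbatim}
import numpy as np

def fermi(eps, mu, kT):
    return 1.0 / (np.exp((eps - mu) / kT) + 1.0)

def response_q0(eigvals, eigvecs, site_orbitals, mu, kT, tol=1e-8):
    """Static site-summed response R_{mu nu} at q = 0.
    eigvals[k, m]: band energies; eigvecs[k, a, m]: eigenvector components
    (orbital a, band m); site_orbitals: list of orbital-index arrays."""
    nsites, nk = len(site_orbitals), eigvals.shape[0]
    R = np.zeros((nsites, nsites), dtype=complex)
    for k in range(nk):
        e, C = eigvals[k], eigvecs[k]
        f = fermi(e, mu, kT)
        de = e[:, None] - e[None, :]                 # eps_m - eps_l
        df = f[:, None] - f[None, :]
        nondeg = np.abs(de) > tol
        w = np.where(nondeg, df / np.where(nondeg, de, 1.0),
                     -f[:, None] * (1.0 - f[:, None]) / kT)
        for s1, o1 in enumerate(site_orbitals):
            P1 = C[o1].conj().T @ C[o1]              # sum_a C*_{am} C_{al}
            for s2, o2 in enumerate(site_orbitals):
                P2 = C[o2].conj().T @ C[o2]
                R[s1, s2] += np.sum(w * P1 * P2.T)   # sum over m and l
    return R / nk                                    # per-k-point normalization
\end{verbatim}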
Then, it holds that $\vec{\boldsymbol{h}} = \left( \hat{\mathbb R}_{0}^{-1} + \hat{\cal I} \right) \vec{\boldsymbol{m}}$, where $\hat{\mathbb R}_{\boldsymbol{q}} \equiv [{\mathbb R}_{\boldsymbol{q}, \mu \nu}] $ and \noindent \begin{equation} {\mathbb R}_{\boldsymbol{q}, \mu \nu}^{\sigma \sigma'} = \sum_{a \in \mu} \sum_{c \in \nu} {\cal R}_{\boldsymbol{q}}^{\sigma \sigma'} (aa,cc). \label{eq:aresponse} \end{equation} \noindent By substituting it in Eq. (\ref{eq:emh}), one obtains \noindent \begin{equation} {\cal E} = \frac{1}{2} \vec{\boldsymbol{m}}^{T} \cdot \hat{\cal D} \vec{\boldsymbol{m}}, \label{eq:emfm} \end{equation} \noindent where \noindent \begin{equation} \hat{\cal D} = -\frac{1}{2} \left( \hat{\mathbb R}_{0}^{-1} + \hat{\cal I} \right). \label{eq:Dmtrx} \end{equation} \par The atomic indices $\mu$ and $\nu$ run over the transition-metal (${\rm T}=$ Co) and ligand (${\rm L}=$ Sn or S) sites. Then, the contributions of the ${\rm L}$ variables can be eliminated by assuming that for each instantaneous configuration of the ${\rm T}$ spin moments, the ones at the ligand sites have sufficient time to reach the equilibrium and, therefore, can be found from the adiabaticity condition $\frac{\partial {\cal E}}{\partial \vec{\boldsymbol{m}}_{\rm L}} = 0$. In this case, ${\cal E}$ can be written as~\cite{PRB2021} \noindent \begin{equation} {\cal E} = \frac{1}{2} \vec{\boldsymbol{m}}^{T}_{\rm T} \cdot \hat{\widetilde{\cal D}}^{\phantom{T}}_{\rm TT} \vec{\boldsymbol{m}}^{\phantom{T}}_{\rm T}, \label{eq:emtfmt} \end{equation} \noindent where \noindent \begin{equation} \hat{\widetilde{\cal D}}_{\rm TT} = \hat{\cal D}_{\rm TT} - \hat{\cal D}_{\rm TL}^{\phantom{-1}}\hat{\cal D}_{\rm LL}^{-1}\hat{\cal D}_{\rm LT}^{\phantom{-1}}. \label{eq:dTT} \end{equation} \par Taking into account the symmetry properties for the matrix elements of $\hat{\widetilde{\cal D}}_{\rm TT}$ connecting three Co sites in the primitive cell, $\widetilde{\cal D}_{11} = \widetilde{\cal D}_{22} = \widetilde{\cal D}_{33}$ and $\widetilde{\cal D}_{12} = \widetilde{\cal D}_{23} = \widetilde{\cal D}_{31}$, one obtains: \noindent \begin{eqnarray} {\cal E} & = & \frac{1}{2} \widetilde{\cal D}_{11} \left( \boldsymbol{m}_{1}^{2} + \boldsymbol{m}_{2}^{2} + \boldsymbol{m}_{3}^{2} \right) + \nonumber \\ & & \phantom{\frac{1}{2}} \widetilde{\cal D}_{12} \left( \boldsymbol{m}_{1} \cdot \boldsymbol{m}_{2} + \boldsymbol{m}_{2} \cdot \boldsymbol{m}_{3} + \boldsymbol{m}_{3} \cdot \boldsymbol{m}_{1} \right). \nonumber \end{eqnarray} \noindent Considering the directions of the magnetic moments in the umbrella structure (see Fig.~\ref{fig.GGAumbrella} for the geometry and notations of the Co sites), \noindent \begin{eqnarray} \boldsymbol{m}_{1} &=& (0, \, \sin \theta, \, \cos \theta) \, m, \nonumber \\ \boldsymbol{m}_{2} &=& (- \frac{\sqrt{3}}{2} \sin \theta, \, -\frac{1}{2} \sin \theta, \, \cos \theta) \, m, \nonumber \\ \boldsymbol{m}_{3} &=& (\phantom{-}\frac{\sqrt{3}}{2} \sin \theta, \, -\frac{1}{2} \sin \theta, \, \cos \theta) \, m, \nonumber \end{eqnarray} \noindent one can finally obtain the following expression: \noindent \begin{equation} {\cal E} = \frac{3}{2} \left\{ \widetilde{\cal D}_{11} + (3 \cos^2 \theta -1) \widetilde{\cal D}_{12} \right\} m^2. \label{eq:umbrella} \end{equation} \par Therefore, if $\frac{\partial^2 {\cal E}}{\partial m^2} = 3 \{ \widetilde{\cal D}_{11} + (3 \cos^2 \theta -1) \widetilde{\cal D}_{12} \} >0$, the nonmagnetic state is stable. 
Otherwise, the system will converge to a magnetic solution with finite $m$. In the simplest case of one site in the unit cell, $\hat{\mathbb R}_{0} = -{\cal N}(\varepsilon_{\rm F})$ (the density of states at the Fermi level per spin) and we recover the conventional criterion of Stoner ferromagnetism: ${\cal I}{\cal N}(\varepsilon_{\rm F})>1$, which can be readily obtained from the condition ${\cal D} <0$ in Eq.~(\ref{eq:emfm}) and using Eq.~(\ref{eq:Dmtrx}) for ${\cal D}$. As expected, the result depends on temperature $T$, which enters this Stoner-type model via the Fermi distribution functions $f_{m \boldsymbol{k}}^{\sigma}$ in Eq.~(\ref{eq:gresponse}). The magnetic structure is stable when $\theta$ is smaller than a certain critical value \noindent \begin{equation} \theta_{m} = \cos^{-1} \sqrt{ \frac{1}{3} \left( 1 - \frac{\widetilde{\cal D}_{11}}{\widetilde{\cal D}_{12}} \right) } \label{eq:thetam} \end{equation} \noindent for which $\frac{\partial^2 {\cal E}}{\partial m^2} = 0$. \par We evaluate these dependencies using the model parameters derived within GGA. In order to obtain the Stoner parameters, ${\cal I}_{\nu} = -\frac{m^{z}_{\nu}}{b^{z}_{\nu}}$, one should know the xc-field $\vec{b}^{z}$ for the given magnetization $\vec{m}^{z}$. It can be obtained from the sum rule $\vec{m}^{z} = \hat{\mathbb R}_{0}^{\uparrow \downarrow} \vec{b}^{z}$~\cite{PRB2021}. Since Eq.~(\ref{eq:exc}) is an approximation, these Stoner parameters depend on the magnetization. Then, for the perturbation theory near the nonmagnetic state, which we consider here, it is logical to derive $\hat{\cal I}$ from constrained FM calculations with small $M$. More specifically, we use $M = 0.55$ $\mu_{\rm B}$, which yields the following parameters: ${\cal I}_{\rm Co} = 0.97$, ${\cal I}_{\rm Sn_{1}} = -3.52$, ${\cal I}_{\rm Sn_{2}} = -4.65$, and ${\cal I}_{\rm S} = 1.40$ eV. The value of ${\cal I}_{\rm Co}$ is quite consistent with previous estimates for transition metals~\cite{Gunnarsson}. ${\cal I}_{\rm S}$ is expected to be even larger, as is also known for oxygen atoms~\cite{MazinSingh1997}. It may look unphysical that ${\cal I}_{\rm Sn_{1}}$ and ${\cal I}_{\rm Sn_{2}}$ are strongly negative. However, the small magnetic moments at the Sn sites are solely induced by the hybridization with other sites and do not play a primary role in the magnetism of Co$_3$Sn$_2$S$_2$. The response tensor in the nonmagnetic state, $\hat{\mathbb R}_{0}$, was evaluated on a mesh of $56 \times 56 \times 56$ $\boldsymbol{k}$-points in the rhombohedral Brillouin zone, which provides sufficient accuracy at least for $T \gtrsim 150$ K. The results are summarized in Fig.~\ref{fig.Stoner}. \noindent \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figure5.eps} \end{center} \caption{ Parameters of the Stoner-type model and temperature dependence of the critical angle of the umbrella structure.} \label{fig.Stoner} \end{figure} \par In the FM state for $\theta = 0$, $\frac{\partial^2 {\cal E}}{\partial m^2} = 3 \{ \widetilde{\cal D}_{11} + 2 \widetilde{\cal D}_{12} \}$ changes sign around $T_{\rm C}=410$ K, which can be regarded as the Curie temperature of the Stoner model, provided that the transition is not metamagnetic~\cite{Shimizu1}, in which case $T_{\rm C}$ would have to be evaluated differently.
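\par As a quick algebraic cross-check of Eqs.~(\ref{eq:umbrella}) and (\ref{eq:thetam}), the short \texttt{sympy} sketch below substitutes the umbrella spin directions into the quadratic form built from $\widetilde{\cal D}_{11}$ and $\widetilde{\cal D}_{12}$ and verifies both expressions symbolically. It is a consistency check of the algebra only and makes no use of the numerical parameters quoted above.
\begin{verbatim}
import sympy as sp

theta, m, D11, D12 = sp.symbols('theta m D11 D12', real=True)

# Umbrella spin directions of the three Co sublattices
m1 = m * sp.Matrix([0, sp.sin(theta), sp.cos(theta)])
m2 = m * sp.Matrix([-sp.sqrt(3)/2*sp.sin(theta),
                    -sp.Rational(1, 2)*sp.sin(theta), sp.cos(theta)])
m3 = m * sp.Matrix([ sp.sqrt(3)/2*sp.sin(theta),
                    -sp.Rational(1, 2)*sp.sin(theta), sp.cos(theta)])

# Quadratic form with the symmetry-reduced matrix elements D11 and D12
E = sp.Rational(1, 2)*D11*(m1.dot(m1) + m2.dot(m2) + m3.dot(m3)) \
    + D12*(m1.dot(m2) + m2.dot(m3) + m3.dot(m1))

# Closed form of Eq. (umbrella)
E_umbrella = sp.Rational(3, 2)*(D11 + (3*sp.cos(theta)**2 - 1)*D12)*m**2
print(sp.simplify(E - E_umbrella))          # -> 0

# Eq. (thetam): the curvature d^2E/dm^2 vanishes at the critical angle
theta_m = sp.acos(sp.sqrt(sp.Rational(1, 3)*(1 - D11/D12)))
print(sp.simplify(sp.diff(E_umbrella, m, 2).subs(theta, theta_m)))  # -> 0
\end{verbatim}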
At the phenomenological level, the conventional practice for the thermodynamic properties of itinerant magnets is to use the Landau-type theory, by expressing the free energy in terms of even powers of $M$: ${\cal E}(M) = \sum_{n=1}^{n_{max}} \frac{1}{2n} a_{2n-1}M^{2n}$ and incorporating the temperature dependence into $a_{1}$ as $a_{1} \to a_{1}\frac{T_{\rm C}-T}{T_{\rm C}}$~\cite{Shimizu2,Mohn}. For instance, for $n_{max}=3$, metamagnetism occurs if $a_{5}>0$ but $a_{3}<0$. At $T=0$, ${\cal E}(M)$ can be derived from constrained spin-density functional calculations. However, for half-metallic systems, the dependence ${\cal E}(M)$ is not smooth (see Fig.~\ref{fig.EM}) and such an expansion does not apply. Thus, at the moment it is not clear how to proceed in this direction. In any case, $T_{\rm C}=410$ K can probably be regarded as a rough (order-of-magnitude) estimate for the Curie temperature, which exceeds the experimental value by a factor of $3$, as expected for the Stoner-type picture~\cite{Gunnarsson,MoriyaKawabata}. \par The umbrella structure can be realized for not too large $\theta$ near the FM state. We confirm that there is a critical $\theta_{m}$, which decreases with $T$, and that the rotation of the magnetic moments beyond this angle makes the umbrella structure unstable relative to the nonmagnetic state, in semi-quantitative agreement with the results of the constrained GGA calculations considered in Sec.~\ref{sec:DFT_summary}. Particularly, the critical angle $\theta_{m} \sim 26^{\circ}$ obtained in this model analysis at $T \sim 150$ K is quite consistent with $\theta_{m} \sim 30^{\circ}$ derived from GGA at $T = 0$. \par Another important point is that $T_{\rm C}$ is expected to decrease with the increase of $\theta$ in the umbrella structure, which immediately follows from Eq.~(\ref{eq:umbrella}) for $\widetilde{\cal D}_{12} < 0$ (see Fig.~\ref{fig.Stoner}). The corresponding dependence $T_{\rm C}(\theta)$ is obtained by inverting the graph of $\theta_{m}(T)$, which is also displayed in Fig.~\ref{fig.Stoner}. Thus, the realization of such an umbrella structure instead of the collinear FM state could probably rationalize the discrepancy with the experimental data regarding the value of $T_{\rm C}$. Even within the simple Stoner-type picture considered above, a canting of the magnetic moments by $\theta \sim 26^{\circ}$ would be sufficient to reproduce the experimental $T_{\rm C} \sim 170$ K. Since the nearest Co sites in the Co$_3$Sn$_2$S$_2$ structure are not connected by inversion symmetry, such canting could be caused by Dzyaloshinskii-Moriya interactions~\cite{Dzyaloshinskii_weakF,Moriya_weakF}. This is clearly seen in GGA calculations with spin-orbit coupling at $T=0$. However, the obtained $\theta$ is too small (only about $2^{\circ}$). It is an interesting question whether $\theta$ will increase with increasing $T$. \subsection{\label{sec:Jij} Interatomic exchange interactions in the ferromagnetic state} \par As we have seen above, finite rotations of the spins in Co$_3$Sn$_2$S$_2$ result in the collapse of the magnetic state and in the breakdown of the Heisenberg model of magnetism. Nevertheless, one can define the model for infinitesimal rotations of the spin magnetic moments near the FM ground state.
In this section, we construct such model, \noindent \begin{equation} {\cal E} = -\frac{1}{2N} \sum_{ij} J^{ij} \boldsymbol{e}_{i} \cdot \boldsymbol{e}_{j}, \label{eq:Heis} \end{equation} \noindent where $\boldsymbol{e}_{i}$ is the direction of spin at the $i$th Co site and $N$ is the number of such sites. \par For these purposes we consider two techniques. The first one is the standard MFT, which assumes that infinitesimal rotations of spin magnetic moments induce the rotations of the xc fields by the same angles, and this change of the xc fields is treated as a perturbation~\cite{LKAG1987}. The corresponding parameters of exchange interactions between the sublattices $\mu$ and $\nu$ can be found in the reciprocal ($\boldsymbol{q}$) space as \noindent \begin{equation} J^{\mu \nu}_{\boldsymbol{q}} = - \frac{1}{2} \left( b^{z}_{\mu} \left[ \boldsymbol{\mathcal{R}}_{\boldsymbol{q}}^{\uparrow \downarrow} \right]_{\mu \nu} \, b^{z}_{\nu} - b^{z}_{\mu} m^{z}_{\nu} \delta_{\mu \nu} \right). \label{eq:jmft} \end{equation} \noindent In the conventional implementation of MFT, $b^{z}$ and $m^{z}$ are the matrices in the subspace of orbital indices and Eq.~(\ref{eq:jmft}) implies the summation over orbital indices of $b^{z}$, $m^{z}$, and $\boldsymbol{\mathcal{R}}_{\boldsymbol{q}}^{\uparrow \downarrow} \equiv [ {\cal R}_{\boldsymbol{q}}^{\uparrow \downarrow} (ab,cd) ]$. The details can be found in Ref.~\cite{PRB2021}. \par Nevertheless, MFT is an approximation, which becomes exact only in the long wavelength and strong-coupling limits. However, for the analysis of the exchange interactions, it is essential to go beyond the long wavelength limit and consider the contributions of all $\boldsymbol{q}$ points in the first Brillouin zone. Furthermore, the strong-coupling limit is far from being realized in Co$_3$Sn$_2$S$_2$, as is clearly seen from small values of magnetic moments at the Co sites. Therefore, we consider another technique, which is formally exact as it goes beyond the long wavelength and strong-coupling limits~\cite{BrunoPRL2003}. The corresponding exchange parameters can be found as~\cite{PRB2021} \noindent \begin{equation} J^{\mu \nu}_{\boldsymbol{q}} = \frac{1}{2} \left( m^{z}_{\mu} \left[ {\mathbb R}_{\boldsymbol{q}}^{\uparrow \downarrow} \right]^{-1}_{\mu \nu} m^{z}_{\nu} - b^{z}_{\mu} m^{z}_{\nu} \delta_{\mu \nu} \right), \label{eq:jexactM} \end{equation} \noindent where $m^{z}_{\mu}$ ($b^{z}_{\mu}$) is the regular (scalar) magnetization (exchange field) at site $\mu$. Similar to MFT, one can also introduce the matrix analog of this expression with $m^{z}_{\mu}$ and $b^{z}_{\mu}$ being the matrices in the subspace of orbital indices. However, the microscopic processes underlying such extension (and describing the rigid rotations of the full magnetization matrix by the same angle) would correspond to much larger energy change, and do not properly capture the low-energy excitations in the system of spins~\cite{PRB2021}. \par Then, one can start with the bare interactions between the Co sites, which are given by $J^{\mu \nu}_{\boldsymbol{q}}$, and take into account the contributions of the ligand states~\cite{PRB2021}, similar to what we did in Sec.~\ref{sec:existence} in order to understand the emergence of the FM state. The corresponding exchange parameters are given by \noindent \begin{equation} \tilde{J}_{\boldsymbol{q}}^{\rm TT} = J_{\boldsymbol{q}}^{\rm TT} - J_{\boldsymbol{q}}^{\rm TL} \left[ J_{\boldsymbol{q}}^{\rm LL} \right]^{-1} J_{\boldsymbol{q}}^{\rm LT}. 
\label{eq:jTT} \end{equation} \par Finally, $J^{\mu \nu}_{\boldsymbol{q}}$ and $\tilde{J}^{\mu \nu}_{\boldsymbol{q}}$ can be Fourier transformed to real space. In these calculations we used meshes of $40 \times 40 \times 40$ $\boldsymbol{k}$-points and $12 \times 12 \times 12$ $\boldsymbol{k}$-points in the rhombohedral Brillouin zone. Quite expectedly for itinerant systems, the obtained exchange parameters appear to be very long-ranged, so that sizable interactions can be found even beyond the 9th coordination sphere (Figs.~\ref{fig.Jstr} and \ref{fig.Jk}). \noindent \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figure6.eps} \end{center} \caption{ (a), (b) Parameters of interatomic exchange interactions operating in the plane. (c), (d) Parameters operating between the planes (top view). The Co atoms located in adjacent planes are denoted by different colors. The coordination spheres of atoms around the origin are denoted by dotted circles. (a), (c) Parameters that have the same value in all bonds for a given coordination sphere. (b), (d) Parameters that are characterized by two distinct values for two types of inequivalent bonds in a given coordination sphere. The distributions of parameters around the two other Co sites in the primitive cell are obtained by the symmetry operations of the space group $R\overline{3}m$.} \label{fig.Jstr} \end{figure} \noindent \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figure7.eps} \end{center} \caption{ Distance dependence of interatomic exchange interactions as obtained using the magnetic force theorem (MFT) and the exact approach: bare parameters of the Co-Co interactions and the ones taking into account the contributions of the ligand states. The notations of the parameters are explained in Fig.~\ref{fig.Jstr}.} \label{fig.Jk} \end{figure} \noindent Furthermore, the exchange interactions depend on the method used for their calculation, as well as on additional approximations~\cite{PRB2021}. In MFT, the nearest-neighbor interactions in the plane are clearly the strongest (see Fig.~\ref{fig.Jk} and Table~\ref{tab:J}), in agreement with previous studies~\cite{ZhangPRL,Liu2021}. Besides Eq.~(\ref{eq:jmft}), we have also considered a more conventional real-space implementation of MFT based on the Green's function technique~\cite{LKAG1987} and confirmed that it provides essentially the same parameters of interatomic exchange interactions. \noindent \begin{table}[b] \caption{Parameters of interatomic exchange interactions (in meV) as obtained in the framework of the magnetic force theorem (MFT) and the exact formalism: bare Co-Co interactions and the ones taking into account the contributions of the ligand states.
The notations of the parameters are explained in Fig.~\ref{fig.Jstr}.} \label{tab:J} \begin{ruledtabular} \begin{tabular}{ccccc} & \multicolumn{2}{c}{MFT} & \multicolumn{2}{c}{exact} \\ \cline{2-3} \cline{4-5} & $bare$ & $+ligands$ & $bare$ & $+ligands$ \\ \hline $J_{1}$ & $\phantom{-}1.59$ & $\phantom{-}1.65$ & $-0.44$ & $\phantom{-}0.48$ \\ $J_{2}$ & $\phantom{-}0.05$ & $\phantom{-}0.08$ & $-0.23$ & $\phantom{-}0.03$ \\ $J_{3}$ & $\phantom{-}0.10$ & $\phantom{-}0.15$ & $-0.20$ & $\phantom{-}0.45$ \\ $J_{4}$ & $\phantom{-}0.19$ & $\phantom{-}0.23$ & $\phantom{-}0.20$ & $\phantom{-}0.35$ \\ $J_{4}'$ & $\phantom{-}0.53$ & $\phantom{-}0.55$ & $\phantom{-}0.56$ & $\phantom{-}0.72$ \\ $J_{5}$ & $\phantom{-}0.43$ & $\phantom{-}0.45$ & $\phantom{-}0.72$ & $\phantom{-}1.33$ \\ $J_{5}'$ & $\phantom{-}0.64$ & $\phantom{-}0.69$ & $\phantom{-}0.91$ & $\phantom{-}1.08$ \\ $J_{6}$ & $\phantom{-}0.29$ & $\phantom{-}0.33$ & $\phantom{-}0.43$ & $\phantom{-}0.53$ \\ $J_{7}$ & $\phantom{-}0.10$ & $\phantom{-}0.11$ & $\phantom{-}0.11$ & $\phantom{-}0.14$ \\ $J_{8}$ & $\phantom{-}0.04$ & $\phantom{-}0.06$ & $\phantom{-}0.09$ & $\phantom{-}0.18$ \\ $J_{9}$ & $-0.13$ & $-0.13$ & $-0.26$ & $-0.23$ \\ $J_{9}'$ & $\phantom{-}0.07$ & $\phantom{-}0.08$ & $\phantom{-}0.03$ & $\phantom{-}0.02$ \end{tabular} \end{ruledtabular} \end{table} \noindent The contributions of the ligand states are relatively unimportant in this case, and the main tendencies of $J_{ij}$ are already captured by the bare interactions between the Co sites. \par This picture changes significantly in the exact approach, where the strongest interaction is $J_{5}$ in the 5th coordination sphere (see Fig.~\ref{fig.Jstr}). Since $J_{5}$ operates between the planes, Co$_3$Sn$_2$S$_2$ in our picture is essentially a three-dimensional material. Furthermore, the ligand states appear to be very important in this case, as they strengthen the FM character of the interactions and are primarily responsible for the FM origin of these interactions in the first three coordination spheres. Nevertheless, $T_{\rm C}$ evaluated in the Heisenberg model appears to be smaller than the experimental one. Particularly, the molecular field approximation, where $T_{\rm C} = \frac{1}{3k_{\rm B}} \sum_{j}J_{0j}$, is known to overestimate $T_{\rm C}$. However, if we applied this approximation to Co$_3$Sn$_2$S$_2$, we would get only $T_{\rm C}=$ $77$ and $95$ K in the framework of MFT and the exact approach, respectively. \par The results of recent inelastic neutron scattering experiments were interpreted in terms of three parameters~\cite{ZhangPRL}: $J_{2} = -0.08$, $J_{c1} = 0.44$, and $J_{d} = 0.81$ meV (corresponding to $J_{2}$, $J_{3}$, and $J_{4}$ in our notations)~\cite{footnote1}. Thus, the strongest interaction is expected to be $J_{4}$ (the so-called ``cross-hexagon'' interaction), while the nearest-neighbor coupling $J_{1}$ is negligibly small. This interpretation is clearly inconsistent with theoretical calculations based on MFT, where $J_{1}$ is the strongest. Nevertheless, there is also a considerable difference from the results of the exact method, where the strongest interaction is $J_{5}$ ($J_{c3}$ in the notations of Ref.~\cite{ZhangPRL}), while the ``cross-hexagon'' interaction $J_{4}$ is substantially smaller. Furthermore, even within the 4th coordination sphere, $J_{4}$ is not the strongest interaction: $J_{4}'$ is considerably stronger than $J_{4}$.
It is also interesting to note that $J_{4}$ and $J_{5}$ operate practically at the same distances: $J_{4}$ is within the plane, while $J_{5}$ is between the planes (see Fig.~\ref{fig.Jk}). \par We hope that the results of our theoretical calculations of interatomic exchange interactions could be used as the guideline for the interpretation of experimental inelastic neutron scattering data. In Fig.~\ref{fig.SWhex}, we plot the theoretical spin-wave dispersion, which was defined as eigenvalues $\omega_{n \boldsymbol{q}}$ of the $3 \times 3$ matrix $\hat{\Omega}_{\boldsymbol{q}} = [\Omega_{\boldsymbol{q}}^{\mu \nu}]$ (for 3 magnetic Co sublattices in the rhombohedral unit cell or the $9 \times 9$ matrix for the hexagonal cell including 9 Co atoms), where \noindent \begin{equation} \Omega_{\boldsymbol{q}}^{\mu \nu} = \frac{2}{m}\left( J^{\mu} \delta_{\mu \nu} - J^{\mu \nu}_{\boldsymbol{q}} \right), \label{eqn.SW} \end{equation} \noindent $J^{\mu \nu}_{\boldsymbol{q}}$ is the Fourier image of $J_{ij}$ between sublattices $\mu$ and $\nu$, and $J^{\mu} = \sum_{\nu} J^{\mu \nu}_{0}$. \noindent \begin{figure}[b] \begin{center} \includegraphics[width=0.48\textwidth]{figure8.eps} \end{center} \caption{ Spin-wave dispersion corresponding to the exchange parameters derived in the framework of magnetic force theorem (MFT) and exact approach. All notations are taken from Ref.~\cite{ZhangPRL} for the hexagonal lattice.} \label{fig.SWhex} \end{figure} \noindent We consider the results based on the MFT and exact technique, taking into account the contributions of the ligand states, and use the notations of Ref.~\cite{ZhangPRL} for the hexagonal lattice. In fact, the experimental spin-wave dispersion was measured only for not too large values of $\boldsymbol{q}$ around the $\Gamma$ point, which is denoted as $(-1,1,1)$ in Fig.~\ref{fig.SWhex}, and limited only to the acoustic ($A$) branch. The key feature of this experimental data is that the magnon dispersion along the $[HH0]$ direction is considerably steeper than the one along $[-HH0]$. This anisotropy of the magnon spectrum was suggested to be the main signature of the strong ``cross-hexagon'' interaction $J_{4}$, as other theoretical models used for the fitting of the experimental data led to very similar dispersion along the $[HH0]$ and $[-HH0]$ directions~\cite{ZhangPRL}. Nevertheless, this explanation looks disputable in the light of the following arguments: the behavior of the $A$ branch near the $\Gamma$ point is described by the spin-stiffness tensor $\hat{D} = [D^{\alpha \beta}]$: \noindent \begin{equation} \omega_{L \boldsymbol{q}} = \sum_{\alpha, \beta} D^{\alpha \beta} q_{\alpha} q_{\beta}, \label{eqn.DSW} \end{equation} \noindent where $\alpha, \beta=$ $x$, $y$, or $z$. For the $R$-$3m$ symmetry, $\hat{D}$ is diagonal and $D^{xx}=D^{yy}$. Thus, the spin-wave dispersion near the $\Gamma$ point caused by isotropic exchange interactions, including the ``cross-hexagon'' $J_{4}$, must be isotropic in the $xy$ plane, and this is exactly what is seen in our calculations in Fig.~\ref{fig.SWhex}. \par Although the exact approach for the interatomic exchange interactions better captures the total energy change, caused by the infinitesimal rotations of spins, it is believed that MFT is more suitable for the analysis of the spin-wave dispersion~\cite{KL2004}. 
Nevertheless, in the long wavelength limit $\boldsymbol{q} \to 0$ these two techniques provide a very similar description~\cite{BrunoPRL2003,PRB2021}, as is clearly seen in Fig.~\ref{fig.SWhex}, while the main difference occurs in the high-energy region of the optical branches. \par The experimental anisotropy of the spin-wave dispersion in the $xy$ plane is an interesting point~\cite{ZhangPRL}. However, it is probably caused by other mechanisms and is not related to the isotropic exchange interactions. \subsection{\label{sec:Jm} Magnetic moment dependence of the exchange interactions} \par The picture of collinear FM spins, whose size changes with temperature, is at the heart of the Stoner model of magnetism~\cite{Stoner}. Nevertheless, it is reasonable to expect that besides these changes (longitudinal fluctuations), the spins can experience infinitesimal rotations near the equilibrium state (transversal fluctuations), which can be regarded as a step towards a more general spin-fluctuation theory~\cite{Takahashi,MoriyaKawabata,MoriyaSF,UhlKubler}. In this section, we explore the effect of the size of the magnetic moments on the stability of the FM state with respect to the spin rotational degrees of freedom, employing a somewhat phenomenological strategy for this purpose. Namely, we perform constrained GGA calculations, where we additionally fix the value of the total magnetic moment, and then, using the so-obtained constrained electronic structure, we evaluate the parameters of interatomic exchange interactions. A similar strategy was used for the analysis of photoemission~\cite{HolderPRPphoto} and optical~\cite{YangPRLoptics} data. As for the exchange interactions, we consider here only the exact approach, Eq.~(\ref{eq:jexactM}), and take into account the contributions of the ligand states using Eq.~(\ref{eq:jTT}). In our constrained calculations, we fix the total moment of the three Co sites in the unit cell (evaluated within atomic spheres of radius $1.3$~\AA) to $0.32$, $0.53$, and $0.83$ $\mu_{B}$. This corresponds to the following values of the total magnetic moment in the unit cell (including the contributions of the Sn and S sites): $M=$ $0.55$, $0.78$, and $0.99$ $\mu_{B}$, which are considered together with the results of unconstrained calculations with $M=1$ $\mu_{B}$ ($1.035$ $\mu_{B}$ within the atomic Co spheres). Particularly, we will show that with the decrease of $M$, the FM state becomes unstable, and this instability may be related to the emergence of an AFM phase at elevated $T$, which was observed experimentally in Ref.~\cite{Guguchia.NatComm}. Using the experimental dependence $M(T)$ reported in Ref.~\cite{Kassem}, the values of $M=$ $0.55$, $0.78$, and $0.99$ $\mu_{B}$ can be roughly related to the temperatures $T/T_{\rm C} \sim$ $0.95$, $0.75$, and $0.2$, respectively. \par The distance dependence of the exchange interactions for different values of $M$ is shown in Fig.~\ref{fig.Jm}. \noindent \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figure9.eps} \end{center} \caption{ (Top) Distance dependence of interatomic exchange interactions in the exact approach, including the contributions of the ligand states, as obtained for the constrained electronic structure with fixed values of the total magnetic moment $M$. (Bottom) Dependence of the interplane interactions $J_{5}$ and $J'_{5}$ on $M$.
The notations of parameters are explained in Fig.~\ref{fig.Jstr}.} \label{fig.Jm} \end{figure} \noindent Particularly, we note that the decrease of $M$ strengthens the nearest-neighbor interaction $J_{1}$, which gradually starts to dominate over other exchange interactions. On the other hand, the interplane interactions $J_{5}$ and $J'_{5}$ decrease with the decrease of $M$. Moreover, some long-range interplane interactions beyond the 9th coordination sphere become more antiferromagnetic. Thus, one can expect the weakening of the FM coupling between the planes with the decrease of $M$. \par Using the obtained exchange parameters, we evaluate the stability of the FM state. For this purpose we calculate the magnon energies, which are given by the eigenvalues of Eq.~(\ref{eqn.SW}) for the rhombohedral lattice. If some of the $\omega_{n \boldsymbol{q}}$'s are negative, the state is unstable for those $\boldsymbol{q}$'s. The results are shown in Fig.~\ref{fig.SW}. \noindent \begin{figure}[t] \begin{center} \includegraphics[width=0.48\textwidth]{figure10.eps} \end{center} \caption{ Spin-wave dispersion in the ferromagnetic state for different values of total magnetic moment. The coordinates of the high symmetry points of the rhombohedral Brillouin zone are ${\rm L}: (\frac{\pi}{\sqrt{3}a},\frac{\pi}{3a},\frac{\pi}{3c})$, $\Gamma:(0,0,0)$, and ${\rm Z}:(0,0,\frac{\pi}{c})$.} \label{fig.SW} \end{figure} \noindent Furthermore, we evaluate the spin-stiffness tensor $\hat{D} = [D^{\alpha \beta}]$ for the $A$ branch. The non-vanishing matrix elements $D^{xx}=D^{yy}$ and $D^{zz}$ of $\hat{D}$ are listed in Table~\ref{tab:D}. \noindent \begin{table}[b] \caption{Matrix elements of the spin-stiffness tensor (in meV/\AA$^2$) for different values of total magnetic moment $M$ (in $\mu_{\rm B}$). The values obtained after excluding the Fermi surface contributions are given in parentheses.} \label{tab:D} \begin{ruledtabular} \begin{tabular}{rrrrr} $M$ & \multicolumn{2}{c}{$D^{xx}$} & \multicolumn{2}{c}{$D^{zz}$} \\ \hline $1.00$ & $1019$ & ($1019$) & $1107$ & ($1107$) \\ $0.99$ & $957$ & ($961$) & $527$ & ($545$) \\ $0.78$ & $639$ & ($644$) & $-421$ & ($-387$) \\ $0.55$ & $469$ & ($467$) & $-565$ & ($-498$) \end{tabular} \end{ruledtabular} \end{table} \noindent In the ground state ($M=1$ $\mu_{\rm B}$) the tensor $\hat{D}$ is nearly isotropic ($D^{xx} \approx D^{zz}$). However, even small deviation from the ground state for $M=0.99$ $\mu_{\rm B}$ leads to the sharp drop of $D^{zz}$ and moderate decrease of $D^{xx}$. Such drop is caused by the discontinuity of the electronic structure related to the deviation from the half-metallic state, which also leads to the derivative discontinuity of ${\cal E}(M)$ as shown in Fig.~\ref{fig.EM}. The obtained values are still larger than the experimental $D^{xx} = 803 \pm 46$ and $D^{zz} = 237 \pm 13$ meV/\AA$^2$ measured at $T=8$ K~\cite{Liu2021}. Nevertheless, these parameters are very sensitive to the value of $M$ (and the ordered moment at the Co site, reported in Ref.~\cite{Liu2021}, was smaller than $0.3$ $\mu_{\rm B}$, meaning that the measured sample was probably not in the half-metallic state). Indeed, further decrease of $M$ makes $D^{zz}<0$ and the FM state becomes unstable. $D^{xx}$ also decreases with the decrease of $M$, but remains positive for all considered values of $M$. Such instability is resolved in the formation of an incommensurate spin-spiral state with $\boldsymbol{q}=(0,0,q_{\rm z})$ as confirmed by the spin-wave calculations in Fig.~\ref{fig.SW}. 
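\par To make the stability analysis above concrete, the following sketch shows how the magnon energies of Eq.~(\ref{eqn.SW}) are obtained from a list of real-space couplings between the three Co sublattices: the couplings are Fourier transformed to $J^{\mu \nu}_{\boldsymbol{q}}$, the matrix $\hat{\Omega}_{\boldsymbol{q}}$ is assembled, and negative eigenvalues at some $\boldsymbol{q}$ signal an instability of the FM state. The coupling list and the moment used below are placeholder values for illustration only, not the computed parameters of Co$_3$Sn$_2$S$_2$.
\begin{verbatim}
import numpy as np

def magnon_energies(q, couplings, moment):
    """Eigenvalues of Omega_q (Eq. (eqn.SW)) for three sublattices.

    q         : Cartesian wave vector (1/AA)
    couplings : list of (mu, nu, R, J): sublattices mu, nu, connecting
                vector R (AA), exchange J (meV); the list must contain both
                (mu, nu, R, J) and (nu, mu, -R, J) so that J_q is Hermitian
    moment    : magnetic moment m entering the prefactor 2/m
    """
    n = 3
    Jq = np.zeros((n, n), dtype=complex)
    J0 = np.zeros((n, n))
    for mu, nu, R, J in couplings:
        Jq[mu, nu] += J * np.exp(1j * np.dot(q, R))
        J0[mu, nu] += J
    Omega = (2.0 / moment) * (np.diag(J0.sum(axis=1)) - Jq)
    return np.linalg.eigvalsh(Omega)  # in meV; negative values = instability

# Toy example: a single 1 meV FM bond between sublattices 0 and 1.
toy = [(0, 1, np.array([2.7, 0.0, 0.0]), 1.0),
       (1, 0, np.array([-2.7, 0.0, 0.0]), 1.0)]
print(magnon_energies(np.zeros(3), toy, moment=0.33))
\end{verbatim}
The acoustic branch vanishes at $\boldsymbol{q} = 0$, as it should, and the spin stiffness of Eq.~(\ref{eqn.DSW}) can then be estimated from the curvature of this branch at small $\boldsymbol{q}$.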
\par In order to study the effect of the Fermi surface contributions to the exchange parameters, we eliminate these contributions by enforcing $\frac{f_{m \boldsymbol{k}}^{\sigma} - f_{l \boldsymbol{k}+\boldsymbol{q}}^{\sigma'}}{\varepsilon_{m \boldsymbol{k}}^{\sigma} - \varepsilon_{l \boldsymbol{k}+\boldsymbol{q}}^{\sigma'}} = 0$ in Eq.~(\ref{eq:gresponse}) for $\varepsilon_{m \boldsymbol{k}}^{\sigma} \rightarrow \varepsilon_{l \boldsymbol{k}+\boldsymbol{q}}^{\sigma'}$. Although the effect of the Fermi surface states on the individual exchange interactions does not look strong, there is an appreciable contribution of these states to the spin stiffness, mainly associated with the long-range interactions. The results are given in parentheses in Table~\ref{tab:D}. As expected, there is no Fermi surface contribution to $\boldsymbol{\mathcal{R}}_{\boldsymbol{q}}^{\uparrow \downarrow}$ in the half-metallic ground state with $M = 1$ $\mu_{\rm B}$. In the metallic state with $M < 1$ $\mu_{\rm B}$, the contribution of the Fermi surface states to $D^{xx}$ is negligibly small. Nevertheless, there is an appreciable AFM contribution of the Fermi surface states to $D^{zz}$, which additionally destabilizes the FM state. \par Thus, we expect that with the increase of $T$, when the magnetic moments become sufficiently small, Co$_3$Sn$_2$S$_2$ can undergo a transition to an incommensurate AFM state. At present, we cannot elaborate on the details of this transition (for instance, whether it proceeds via a region of coexistence of the FM and AFM phases). Nevertheless, we believe that such behavior may be relevant to the anomalous properties of Co$_3$Sn$_2$S$_2$ for $T > 90$ K~\cite{Guguchia.NatComm}. \section{\label{sec:summary} Summary and Conclusions} \par Using the results of density functional theory in the generalized gradient approximation, we investigated the origin and stability of the FM order in the Weyl semimetal Co$_3$Sn$_2$S$_2$. For these purposes, we constructed a realistic model in the basis of localized Wannier functions, which included the contributions of the Co $3d$ as well as the ligand Sn $5p$ and S $4p$ states, and studied this model using different types of response theories. \par One of the interesting aspects of Co$_3$Sn$_2$S$_2$ is that the local magnetic moments are rather soft and strongly depend on the angle formed by the Co spins in the kagome lattice. This is one of the key results of the magnetic GGA calculations, and it is nicely reproduced by the response theory when one considers the emergence of the magnetic solutions starting from the nonmagnetic state. This finding strongly supports the itinerant character of magnetism in Co$_3$Sn$_2$S$_2$, which should be taken into account in the analysis of the properties of this compound. For instance, the size of the local magnetic moments is expected to depend on temperature, which should be regarded as one of the genuine physical properties of Co$_3$Sn$_2$S$_2$. \par On the other hand, the Heisenberg model of localized magnetism also makes sense in the case of Co$_3$Sn$_2$S$_2$, for the analysis of the local stability of the FM state with respect to the transversal spin fluctuations inherent to the rotational spin degrees of freedom. For the construction of such a model, we employed the exact theory of interatomic exchange interactions based on the calculation of the inverse response function.
We argued that the interatomic exchange interactions in Co$_3$Sn$_2$S$_2$ are very long-ranged and that the strongest one, stabilizing the FM state, operates in the 5th coordination sphere, between the kagome planes. \par Furthermore, we expect the FM magnetization to decrease with temperature via the longitudinal fluctuations, which affect the size of the magnetic moments. This will destroy the half-metallic character of Co$_3$Sn$_2$S$_2$ and gradually make the FM state unstable with respect to the transversal fluctuations. The change of the electronic structure mainly affects the interactions between the kagome planes, partly owing to the contributions stemming from the Fermi surface states. Thus, with the increase of $T$, we expect Co$_3$Sn$_2$S$_2$ to change gradually from a three-dimensional to a quasi-two-dimensional ferromagnet, which should be followed by the emergence of a spin-spiral phase propagating perpendicular to the kagome planes. This finding could probably rationalize the experimental behavior of Co$_3$Sn$_2$S$_2$ near $T_{\rm C}$~\cite{Guguchia.NatComm}. \par Another important question is the validity of GGA, which is typically employed for the analysis of the Weyl semimetal properties of Co$_3$Sn$_2$S$_2$. From the viewpoint of interatomic exchange interactions, the experimental information available at hand is not sufficient to draw a definite conclusion. The anisotropy of the spin stiffness, which is measured experimentally~\cite{ZhangPRL,Liu2021}, can be understood in terms of a small deviation from the half-metallic state. We hope that the comprehensive analysis presented in our work can be used as a guideline for future experimental studies. Particularly, it would be interesting to check our finding that the strongest exchange interaction stabilizing the FM state in Co$_3$Sn$_2$S$_2$ operates in the 5th coordination sphere, between the kagome planes. The theory of $T_{\rm C}$ in Co$_3$Sn$_2$S$_2$ should involve aspects of both the Stoner and Heisenberg theories of magnetism~\cite{Takahashi,MoriyaKawabata,MoriyaSF,UhlKubler}. Taken separately, neither of these models provides a reasonable description of Co$_3$Sn$_2$S$_2$. \par In the present work, we had to deal with an extended model in the basis of Co $3d$, Sn $5p$, and S $4p$ states, similar to previous studies~\cite{Yanagi,Minami}. A very interesting direction is the formulation of effective toy theories for magnetic Weyl semimetals, which would capture the behavior of a small number of states near the Fermi level~\cite{OzawaNomura}. Although this can be done rigorously by employing the Wannier function technique~\cite{wannier90}, such a construction for Co$_3$Sn$_2$S$_2$ and similar materials is not always straightforward because of clustering effects and the formation of molecular groups of states~\cite{SavrasovPRB2021}. \section*{Acknowledgement} \par IVS acknowledges useful communications with V. P. Antropov, who drew our attention to Refs.~\cite{Turzhevskii,Singer} on the validity of the Heisenberg model for transition metals, and with S. Okamoto on details of Ref.~\cite{ZhangPRL}. The work was supported by program AAAA-A18-118020190095-4 (Quantum).
\section{BN-Pairs and the Weyl functor} Before introducing the general concepts of building and BN-pair, we study the standard example of projective spaces (from the building point of view).\\ \subsection{Projective space} Let $\mathbf{R}$ be a division ring (= skew field), let $n \in \mathbb{N}$, and let $V = V(n,\mathbf{R}) = \mathbf{R}^n$\index{$V(n,\mathbf{R})$} be the $n$-dimensional (left or right) vector space over $\mathbf{R}$. We define the {\em $(n - 1)$-dimensional (left or right) projective space}\index{projective!space} $\ensuremath{\mathbf{PG}}(n - 1,\mathbf{R})$\index{$\ensuremath{\mathbf{PG}}(n - 1,\mathbf{R})$} as the set \begin{equation} (\mathbf{R}^n \setminus \{0\})/\sim, \end{equation} where the equivalence relation ``$\sim$'' is defined by (left or right) proportionality, with the subspace structure being induced by that of $V$. (When $n$ is not finite, similar definitions hold.) The choice of ``left'' or ``right'' does not affect the isomorphism class. If $\mathbf{R} = \mathbb{F}_q$ is the finite field with $q$ elements ($q$ a prime power), we also write $\ensuremath{\mathbf{PG}}(n - 1,q)$\index{$\ensuremath{\mathbf{PG}}(n - 1,q)$} instead of $\ensuremath{\mathbf{PG}}(n - 1,\mathbb{F}_q)$. Sometimes the notations $\ensuremath{\mathbf{P}}^{n - 1}(\mathbf{R})$\index{$\ensuremath{\mathbf{P}}^{n - 1}(\mathbf{R})$}, $\ensuremath{\mathbf{P}}^{n - 1}(q)$\index{$\ensuremath{\mathbf{P}}^{n - 1}(q)$}, $\mathbb{P}^{n - 1}(\mathbf{R})$\index{$\mathbb{P}^{n - 1}(\mathbf{R})$} and $\mathbb{P}^{n - 1}(q)$\index{$\mathbb{P}^{n - 1}(q)$} occur as well. There is also a notion of {\em axiomatic projective space}\index{axiomatic!projective space}, which is defined to be an {\em incidence geometry} (defined later in this section) governed by certain axioms, which are (of course) satisfied by the ``classical'' projective spaces over division rings. A truly remarkable result of Veblen and Young \cite{VY} states that if the dimension $n - 1$ of such a space is at least three, then it {\em is} isomorphic to some $\ensuremath{\mathbf{PG}}(n - 1,\mathbf{R})$; this is well known {\em not} to be true when the dimension is less than three. \\ \subsection{Representing spaces as group coset geometries} Let $\ensuremath{\mathbf{P}}$ be a finite-dimensional, say $n$-dimensional, projective space over some division ring $\mathbf{R}$. Consider any $\mathbf{R}$-base $\ensuremath{\mathbf{B}}$. Define a simplicial complex (to be formally defined in the next section, and called an ``apartment''\index{apartment}) $\ensuremath{\mathscr{C}} \equiv \ensuremath{\mathscr{C}}(\ensuremath{\mathbf{B}})$ as the set of all subspaces of $\ensuremath{\mathbf{P}}$ generated by subsets of $\ensuremath{\mathbf{B}}$. (Let it also contain the empty set.) Define a (maximal) ``flag''\index{flag} or {\em chamber}\index{chamber} in $\ensuremath{\mathscr{C}}$ as a maximal chain (so of length $n + 1$) of subspaces in $\ensuremath{\mathscr{C}}$. Let $F$ be such a fixed flag. Consider the projective special linear group $K := \ensuremath{\mathbf{PSL}}_{n + 1}(\mathbf{R})$ of $\ensuremath{\mathbf{P}}$. Then note that $K$ acts transitively on the pairs $(\ensuremath{\mathscr{C}}(\ensuremath{\mathbf{B}}'),F')$, where $\ensuremath{\mathbf{B}}'$ is any $\mathbf{R}$-base and $F'$ is a maximal flag in $\ensuremath{\mathscr{C}}(\ensuremath{\mathbf{B}}')$.
Put $B = K_{F}$ and $N = K_{\ensuremath{\mathscr{C}}}$ (the stabilizers in $K$ of the flag $F$ and of the apartment $\ensuremath{\mathscr{C}}$, respectively); then note the following properties: \begin{itemize} \item[(i)] $\langle B,N \rangle = K$; \item[(ii)] put $H = B \cap N \lhd N$ and $N/H = W$; then $W$ obviously is isomorphic to the symmetric group $\mathbf{S}_{n + 1}$ on $n + 1$ elements. Note that a presentation of $\mathbf{S}_{n + 1}$ is: \begin{equation} \langle s_i \vert s_i^2 = \mathrm{id}, (s_is_{i + 1})^3 = \mathrm{id}, (s_is_j)^2 = \mathrm{id}, i, j \in \{1,\ldots,n\}, j \ne i \pm 1 \rangle. \end{equation} \item[(iii)] $Bs_iBwB \subseteq BwB \cup Bs_iwB$ whenever $w \in W$ and $i\in\{1,2,\ldots,n\}$; \item[(iv)] $s_iBs_i \ne B$ for all $i\in\{1,2,\ldots,n\}$. \end{itemize} (Here, expressions such as $BwB$ mean $B\widetilde{w}HB$, where $\widetilde{w}$ is any representative of the coset $\widetilde{w}H = w$.) Now let $K \cong \ensuremath{\mathbf{PSL}}_{n + 1}(\mathbf{R})$ be as above, and suppose that $B$ and $N$ are groups satisfying these properties. Define a geometry $\ensuremath{\mathscr{B}}_{(K,B,N)}$\index{$\ensuremath{\mathscr{B}}_{(K,B,N)}$} as follows. \begin{itemize} \item[(B1)] Its elements are left cosets in $K$ of the groups $P_i$ which properly contain $B$ and are different from $K$, $i = 1,\ldots,n + 1$; \item[(B2)] two elements $gP_i$ and $hP_j$ are incident if they intersect nontrivially.\\ \end{itemize} \begin{proposition} $\ensuremath{\mathscr{B}}_{(K,B,N)}$ is isomorphic to $\ensuremath{\mathbf{PG}}(n,\mathbf{R})$. \end{proposition} \medskip \subsubsection{Low dimensional cases} For dimension $n = 1$, our definition of axiomatic space does not make much sense. Here we rather {\em start} from a division ring $\mathbf{R}$, and define $\ensuremath{\mathbf{P}}$, the {\em projective line} over $\mathbf{R}$, as the set $(\mathbf{R}^2 \setminus \{0\})/\sim$, where $\sim$ is defined by (left) proportionality. So we can write \begin{equation} \ensuremath{\mathbf{P}} = \{(0,1)\} \cup \{(1,\ell) \vert \ell \in \mathbf{R}\}. \end{equation} Now $\ensuremath{\mathbf{PSL}}_2(\mathbf{R})$ acts naturally on $\mathbf{P}$; in fact, we have defined the projective line as a set equipped with the natural doubly transitive action of $\ensuremath{\mathbf{PSL}}_2(\mathbf{R})$. Defining a geometry as we did for higher rank projective spaces, through the ``$(B,N)$-pair structure'' of $\ensuremath{\mathbf{PSL}}_2(\mathbf{R})$, one obtains the same notion of projective line.\\ Restricting to finite fields, we obtain the following very simple \begin{proposition} A finite projective line has $q + 1$ points, for some prime power $q$. \end{proposition} The $2$-dimensional case is still different. Here, unlike in the $1$-dimensional case, one obtains a nontrivial geometry; the axioms now boil down to demanding that any two distinct points are incident with precisely one line, that, dually, any two distinct lines intersect in precisely one point, and that there exist four distinct points no three of which are on a common line. So we need not require additional algebraic structure in order to have interesting objects. Here we cannot say much about the order of the plane a priori.
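\medskip It may be instructive to make the $(B,N)$-pair of the projective line completely explicit. The following worked example is included for illustration only; to keep the matrices concrete we work with $\mathbf{SL}_2(k)$ over a (commutative) field $k$, whose image in $\ensuremath{\mathbf{PSL}}_2(k)$ yields the pair used above. Take
\begin{equation}
B = \left\{ \left(\begin{array}{cc} a & b \\ 0 & a^{-1} \end{array}\right) : a \in k^{\times},\, b \in k \right\},\qquad
N = \left\{ \left(\begin{array}{cc} a & 0 \\ 0 & a^{-1} \end{array}\right) : a \in k^{\times} \right\} \cup \left\{ \left(\begin{array}{cc} 0 & b \\ -b^{-1} & 0 \end{array}\right) : b \in k^{\times} \right\},
\end{equation}
so that $H = B \cap N$ is the diagonal torus and $W = N/H \cong \mathbf{S}_2$ is generated by the class $s$ of
\begin{equation}
w = \left(\begin{array}{cc} 0 & 1 \\ -1 & 0 \end{array}\right).
\end{equation}
Here $B$ is the stabilizer of one point of the projective line and $N$ is the stabilizer of a pair of points (an apartment). A direct computation shows that a matrix $\left(\begin{array}{cc} a & b \\ c & d \end{array}\right) \in \mathbf{SL}_2(k)$ lies in $B$ when $c = 0$ and in $BwB$ when $c \ne 0$, so that $\mathbf{SL}_2(k) = B \cup BwB$; this is the simplest instance of the Bruhat decomposition discussed further on.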
\subsubsection{Representation by diagram} We represent the presentation of $\mathbf{S}_{n + 1}$ as above in the following way (this will be explained in more detail in the next section): \bigskip $\mathbf{A}_{n}$: \begin{tikzpicture}[style=thick, scale=1.5] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0) -- (6,0); \draw (4,0) node {$\dots$} ; \end{tikzpicture} \hspace{0.5cm} ($n\geq 1$)\\ (The number of vertices is $n$; each vertex corresponds to an involution in the generating set, and there is an edge between the vertices $s_i$ and $s_j$ if and only if $\vert j-i\vert=1$.)\\ \subsection{Simplicial complexes} Recall that a (combinatorial) {\em simplicial complex}\index{simplicial complex} is a pair $(\ensuremath{\mathscr{S}},Y)$, where $Y$ is a set and $\ensuremath{\mathscr{S}} \subseteq 2^Y$, such that $Y \in \ensuremath{\mathscr{S}}$ and \begin{equation} U \subseteq V \in \ensuremath{\mathscr{S}} \Longrightarrow U \in \ensuremath{\mathscr{S}}. \end{equation} We are ready to introduce buildings. We will not provide each result with a specific reference; rather, we refer the reader to \cite{AbBr}.\\ \subsection{Combinatorial definition} A {\em chamber geometry}\index{chamber!geometry} is a geometry \begin{equation} \Gamma = (\ensuremath{\mathscr{C}}_1,\ensuremath{\mathscr{C}}_2,\ldots,\ensuremath{\mathscr{C}}_j,\mathbf{I}) \end{equation} of rank $j$ (so $\Gamma$ has $j$ different kinds of elements and $\mathbf{I}$ is an incidence relation between the elements such that no two elements belonging to the same $\ensuremath{\mathscr{C}}_i$, $1 \leq i \leq j$, can be incident) such that the simplicial complex $(\ensuremath{\mathscr{C}},X)$, where $\ensuremath{\mathscr{C}} = \cup_{i = 1}^{j}\ensuremath{\mathscr{C}}_i$ and $S \subseteq \ensuremath{\mathscr{C}}$ is contained in $X$ if and only if every two distinct elements of $S$ are incident, is a chamber complex (as in, e.g., \cite{POL}). A {\em building}\index{building} $(\ensuremath{\mathscr{C}},X)$ is a thick chamber geometry $(\ensuremath{\mathscr{C}}_1,\ensuremath{\mathscr{C}}_2,\ldots,\ensuremath{\mathscr{C}}_j,\mathbf{I})$ of rank $j$, where $\ensuremath{\mathscr{C}} = \cup_{i = 1}^j\ensuremath{\mathscr{C}}_i$, together with a set $\ensuremath{\mathscr{A}}$ of thin chamber subgeometries, so that: \begin{itemize} \item[(i)] every two chambers are contained in some element of $\ensuremath{\mathscr{A}}$; \item[(ii)] for every two elements $\Sigma$ and $\Sigma'$ of $\ensuremath{\mathscr{A}}$ and every two simplices $F$ and $F'$, respectively contained in $\Sigma$ and $\Sigma'$, there exists an isomorphism $\Sigma \longrightarrow \Sigma'$ which fixes all elements of both $F$ and $F'$. \end{itemize} If all elements of $\ensuremath{\mathscr{A}}$ are finite, then the building is called {\em spherical}\index{spherical!building}. Elements of $\ensuremath{\mathscr{A}}$ are called {\em apartments}\index{apartment}. \\ \subsection{Coxeter groups and systems} We need to introduce the notions of ``Coxeter system'' and ``Coxeter diagram''.\\ \subsubsection{Coxeter groups} A {\em Coxeter group}\index{Coxeter!group} is a group with a presentation of type \begin{equation} \langle s_1,s_2,\ldots,s_n \vert (s_is_j)^{m_{ij}} = \mathrm{id} \rangle, \end{equation} \noindent where $m_{ii} = 1$ for all $i$, $m_{ij} \geq 2$ for $i \ne j$, and $i, j$ are natural numbers bounded above by the natural number $n$.
If $m_{ij} = \infty$, no relation of the form $(s_is_j)^{m_{ij}}$ is imposed. All generators in this presentation are involutions. The natural number $n$ is the {\em rank}\index{rank} of the Coxeter group. A {\em Coxeter system}\index{Coxeter!system} is a pair $(W,S)$, where $W$ is a Coxeter group and $S$ is the set of generators defined by the presentation. Recall that a {\em dihedral group}\index{dihedral group} of {\em rank}\index{rank} $n$, denoted $\mathbf{D}_n$\index{$\mathbf{D}_n$}, is the symmetry group of a regular $n$-gon in the real plane. \\ \subsubsection{Coxeter matrices} A square $n \times n$ matrix $M = (m_{ij})$ is a {\em Coxeter matrix}\index{Coxeter!matrix} if it is symmetric and defined over $\mathbb{Z} \cup \{\infty\}$, has only $1$s on the diagonal, and has $m_{ij} \geq 2$ if $i \ne j$. Starting from a Coxeter matrix $M = (m_{ij})$, one can define a Coxeter group $\langle s_1,s_2,\ldots,s_n \vert (s_is_j)^{m_{ij}} = \mathrm{id} \rangle$, and conversely. Different Coxeter systems can give rise to the same Coxeter group, even if the rank is different. \\ \subsubsection{Coxeter diagrams} Let $(W,S)$ be a Coxeter system. Define a graph, called ``Coxeter diagram''\index{Coxeter!diagram}, as follows. Its vertices are the elements of $S$. If $m_{ij} = 3$, we draw a single edge between $s_i$ and $s_j$; if $m_{ij} = 4$, a double edge; and if $m_{ij} \geq 5$, we draw a single edge with label $m_{ij}$. If $m_{ij} = 2$, nothing is drawn. If the Coxeter diagram is connected, we call $(W,S)$ {\em irreducible}\index{irreducible!Coxeter diagram}. If the group $W$ is finite, we call $(W,S)$ {\em spherical}\index{spherical!Coxeter diagram}.\\ The irreducible, spherical Coxeter diagrams were classified by H. S. M. Coxeter \cite{HSMC}; the complete list is the following.
\bigskip $\mathbf{A}_n$\index{Coxeter!diagram!of type $\mathbf{A}_n$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0) -- (6,0); \draw (4,0) node {$\dots$} ; \end{tikzpicture} \hspace{0.5cm} ($n\geq 1$)\\ $\mathbf{B}_n = \mathbf{C}_n$\index{Coxeter!diagram!of type $\mathbf{C}_n$}\index{Coxeter!diagram!of type $\mathbf{B}_n$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0.035) -- (6,0.035); \draw (5,-0.035) -- (6,-0.035); \draw (4,0) node {$\dots$} ; \end{tikzpicture}\hspace{0.5cm} ($n\geq 2$)\\ $\mathbf{D}_n$\index{Coxeter!diagram!of type $\mathbf{D}_n$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \fill (5,1) circle (2pt); \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0) -- (6,0); \draw (5,0) -- (5,1); \draw (4,0) node {$\dots$} ; \end{tikzpicture}\hspace{0.5cm} ($n\geq 4$)\\ $\mathbf{E}_n$\index{Coxeter!diagram!of type $\mathbf{E}_n$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {0,2,3,4}{ \fill (\x,0) circle (2pt);} \fill (2,1) circle (2pt); \draw (0,0) -- (.5,0); \draw (1.5,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (4,0); \draw (2,0) -- (2,1); \draw (1,0) node {$\dots$} ; \end{tikzpicture} \hspace{0.5cm} ($n=6,7,8$)\\ $\mathbf{F}_4$\index{Coxeter!diagram!of type $\mathbf{F}_4$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {0,1,2,3}{ \fill (\x,0) circle (2pt);} \draw (0,0) -- (1,0); \draw (2,0) -- (3,0); \draw (1,0.035) -- (2,0.035); \draw (1,-0.035) -- (2,-0.035); \end{tikzpicture}\\ $\mathbf{H}_3$\index{Coxeter!diagram!of type $\mathbf{H}_3$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {0,1,2}{ \fill (\x,0) circle (2pt);} \draw (0,0) -- (1,0); \draw (1,0) -- (2,0); \draw (1.5,.25) node {$5$} ; \end{tikzpicture}\\ $\mathbf{H}_4$\index{Coxeter!diagram!of type $\mathbf{H}_4$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {-1,0,1,2}{ \fill (\x,0) circle (2pt);} \draw (-1,0) -- (0,0); \draw (0,0) -- (1,0); \draw (1,0) -- (2,0); \draw (1.5,.25) node {$5$} ; \end{tikzpicture}\\ $\mathbf{I}_2(m)$\index{Coxeter!diagram!of type $\mathbf{I}_2(m)$}: \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$m$} ; \end{tikzpicture}\hspace{0.5cm} ($m\geq 5$)\\ \medskip \subsection{Incidence geometries} Having met some standard examples of incidence geometries, we now introduce general incidence geometries\index{incidence!geometry} in a formal way. These objects will be our way to approach combinatorial geometries in the present chapter. An {\em incidence geometry}\index{incidence!geometry} or {\em Buekenhout-Tits geometry}\index{Buekenhout-Tits!geometry} consists of a set $X$ of {\em objects}\index{objects} provided with a symmetric relation $\mathbf{I}$ called {\em incidence}\index{incidence} and a surjective function \begin{equation} t:X \longrightarrow I \end{equation} that assigns a {\em type} to each object, such that two objects of the same type are never incident. The set $I$ is the set of {\em types}\index{type}. The cardinality $|I|$ is called the {\em rank}\index{rank!of incidence geometry} of the geometry. 
Denote the geometry by $\Gamma = \Gamma(X,\mathbf{I},I,t)$. If the rank is two, we also speak of a {\em point-line geometry}\index{point-line geometry} (the assignment \begin{equation} I\ \ \longrightarrow\ \ \{\mbox{point},\mbox{line} \} \end{equation} being bijective but arbitrary). \subsubsection{Geometries as incidence graphs} It will be particularly important in the $\mathbb{F}_1$-context to see incidence geometries as a kind of graph. An extra feature which comes in handy is that essentially (or better: usually) the automorphism group $A$ of the geometry is the same as the automorphism group $B$ of the associated graph. In any case, $A \leq B$, and if $[B : A] \ne 1$, then this quantity is a measure for the number of types of objects that play the same role.\\ An incidence geometry can be viewed as a multipartite graph $\Gamma$ with vertex set $X$ and partition $\{X_i \mid i \in I\}$ (with $X_i = t^{-1}(i)$), with incidence taken as adjacency.\\ The geometry $\Gamma$ is called {\em connected}\index{connected incidence geometry} when the graph $\Gamma$ is connected. The graph without vertices is not connected: a connected graph has precisely one connected component, while the graph without vertices has no connected component. A {\em flag}\index{flag} $F$ in $\Gamma$ is a clique, which is by definition a complete subgraph. No two elements of a flag have the same type. The {\em rank}\index{rank!of flag} of $F$ is $|t(F)|$ (that is, $|F|$). The {\em corank}\index{corank of a flag} of $F$ is $| I \setminus t(F)|$. The {\em residue}\index{residue} ${\rm Res}(F)$ (also written $\Gamma_F$) is the geometry with set of objects $Y = \{y \in X \setminus F \mid F \cup \{y\} ~\mbox{\rm is a flag}\}$, incidence inherited from $\Gamma$, set of types $I \setminus t(F)$, and type function inherited from $\Gamma$. The geometry $\Gamma$ is called {\em residually connected}\index{residually connected} when every residue of rank at least two is connected (and hence nonempty), and every residue of rank one is nonempty.\\ \medskip \subsection{BN-Pairs and buildings} A group $G$ is said to have a {\em BN-pair}\index{BN-pair} $(B,N)$, where $B, N$ are subgroups of $G$, if: \begin{itemize} \item[(BN1)] $\langle B,N \rangle = G$; \item[(BN2)] $H = B \cap N \lhd N$ and $N/H = W$ is a Coxeter group with a distinguished set of generating involutions $S = \{ s_j \vert j \in J \}$; \item[(BN3)] $BsBwB \subseteq BwB \cup BswB$ whenever $w \in W$ and $s\in S$; \item[(BN4)] $sBs \ne B$ for all $s\in S$. \end{itemize} The group $B$, respectively $W$, is a {\em Borel subgroup}\index{Borel subgroup}, respectively the {\em Weyl group}\index{Weyl!group}, of $G$. The quantity $\vert S\vert$ is called the \emph{rank}\index{rank!of BN-pair} of the BN-pair. If $W$ is a finite group, the BN-pair is {\em spherical}\index{spherical!BN-pair}. It is {\em irreducible}\index{irreducible!BN-pair} if the corresponding Coxeter system is. Sometimes we also call $(G,B,N)$ a {\em Tits system}\index{Tits system}.\\ \begin{remark} {\rm Asking that $W$ is a Coxeter group is in fact redundant; by the other axioms and the fact that $S$ consists of involutions, it is not hard to show that $W$ {\em must be} a Coxeter group, and that $S$ is uniquely determined as the set of elements $s$ in $W^\times$ for which \begin{equation} B \cup BsB \end{equation} is a group.
} \end{remark} \subsubsection{Buildings as group coset geometries} To each Tits system $(G,B,N)$ one can associate a building $\ensuremath{\mathscr{B}}_{(G,B,N)}$ in a natural way, through a group coset construction. For that reason we introduce the standard {\em parabolic subgroups}\index{parabolic subgroup}; these are just the proper subgroups of $G$ which properly contain $B$. Let $I \subset J$, and define \begin{equation} W_I := \langle s_i \vert i \in I \rangle \leq W. \end{equation} Then \begin{equation} P_I := BW_IB \end{equation} is a subgroup of $G$ which obviously contains $B$, and vice versa it can be shown that any standard parabolic subgroup has this form. We are ready to introduce $\ensuremath{\mathscr{B}}_{(G,B,N)}$. \begin{itemize} \item[(B1)] \textsc{Elements}: (or ``subspaces'' or ``varieties'') are elements of the left coset spaces $G/P_I$, $\emptyset \ne I \subset J \ne I$. \item[(B2)] \textsc{Incidence}: $gP_I$ is incident with $hP_L$, $I \ne L$, if these cosets intersect nontrivially. \end{itemize} The {\em rank}\index{rank!of a building} of $\ensuremath{\mathscr{B}}_{(G,B,N)}$ is the rank of the BN-pair. The building $\ensuremath{\mathscr{B}}_{G,B,N}$ is {\em spherical}\index{spherical!building} when the BN-pair $(B,N)$ is; note that this is in accordance with the aforementioned synthetic definition of ``spherical building'' (taken that there is already a BN-pair around). It is {\em irreducible}\index{irreducible!building} when $(B,N)$ is irreducible.\\ \subsubsection{$G$ as an automorphism group} The group $G$ acts as an automorphism group, by multiplication on the left, on $\ensuremath{\mathscr{B}}_{(G,B,N)}$. The kernel $K$ of this action is the biggest normal subgroup of $G$ contained in $B$, and is equal to \begin{equation} K=\bigcap_{g\in G}B^g. \end{equation} For the sake of convenience, suppose $J$ is the finite set $\{ 1,2,\ldots,n\}$, $n \in \mathbb{N} \setminus \{0\}$. The group $G/K$ acts faithfully on $\ensuremath{\mathscr{B}}_{(G,B,N)}$ and the stabilizer of the flag \begin{equation} F = \{P_{\{1\}},P_{\{1,2\}},\ldots,P_J\} \end{equation} is $B/K$. If $K = \{\mathrm{id}\}$, we say that the Tits system is {\em effective}\index{effective Tits system}.\\ Let $\Sigma$ be an apartment of $\ensuremath{\mathscr{B}}_{(G,B,N)}$, and let its elementwise stabilizer be $E$; then $NE$ is the global stabilizer of $\Sigma$. We can write \begin{equation} E = \bigcap_{w\in W}B^w. \end{equation} The next theorem sums up several properties. \begin{theorem}[\cite{AbBr,Titslect}] \label{Titssystem} Let $(G,B,N)$ be a Tits system with Weyl group $W$. Then the geometry $\ensuremath{\mathscr{B}}_{(G,B,N)}$ is a Tits building. Setting \begin{equation} K=\bigcap_{g\in G}B^g\mbox{ and } E=\bigcap_{w\in W}B^w, \end{equation} we have that $G/K$ acts naturally and faithfully by left translation on $\ensuremath{\mathscr{B}}_{(G,B,N)}$. Also, $B$ is the stabilizer of a unique flag $F$ and $NE$ is the stabilizer of a unique apartment containing $F$, and the triple $(G/K,B/K,NE/K)$ is a Tits system associated with $\ensuremath{\mathscr{B}}_{(G,B,N)}$. Moreover, $G/K$ acts transitively on the sets $(A,F')$, where $A$ is an apartment and $F'$ is a maximal flag (chamber) in $A$. \end{theorem} The Tits system $(G,B,N)$ is called \emph{saturated}\index{saturated} precisely when $N=NE$, with $E$ as above. 
Replacing $N$ by $NE$, every Tits system is ``equivalent'' to a saturated one.\\ \subsubsection{Bruhat decomposition} Let $G$ be a group with a spherical, saturated, effective BN-pair $(B,N)$. Then the ``Bruhat decomposition''\index{Bruhat decomposition} tells us that \begin{equation} G = BWB = \coprod_{w \in W}BwB, \end{equation} where $W = N/(B \cap N)$ is the Weyl group. Note that with $I \subset J$, we also have \begin{equation} P_I = BW_IB = \coprod_{w \in W_I}BwB. \end{equation} \subsubsection{Classification of BN-pairs} If the rank of an abstract spherical building is at least $3$, Tits showed in a celebrated work \cite{Titslect} that it is always associated to a BN-pair in the way explained above, and this deep observation led him eventually to classify all spherical BN-pairs of rank $\geq 3$ (cf. \cite[11.7]{Titslect}). So Tits realized a far reaching generalization of the Veblen-Young theorem for spherical buildings, which roughly could be formulated as follows. \begin{theorem}[Classification of spherical buildings | Tits \cite{Titslect}] An irreducible spherical building of rank at least $3$ arises from a simple algebraic group (of relative rank at least $3$) over an arbitrary division ring.\\ \end{theorem} \bigskip \begin{tabular}{c | c} & \\ \mbox{\textsc{Projective}}\ \ \mbox{\textsc{spaces}} &\mbox{\textsc{Buildings}}\\ & \\ \hline \\ \mbox{\textsc{Veblen-Young:}} & \mbox{\textsc{Tits:}} \\ & \\ \mbox{dim} $\geq 3$: & \mbox{rank} $\geq 3$: \\ \mbox{vector}\ \ \mbox{spaces} &\mbox{BN-pairs;}\ \ \mbox{simple}\ \ \mbox{algebraic} \ \ \mbox{groups} \\ \mbox{over} \ \ \mbox{division} \ \ \mbox{rings} & \mbox{over} \ \ \mbox{division} \ \ \mbox{rings}\\ & \\ \mbox{dim} $2$: & \mbox{rank} $2$: \\ \mbox{axiomatic}\ \ \mbox{projective}\ \ spaces &\mbox{generalized}\ \ \mbox{polygons} \\ & \\ & \\ \end{tabular} \medskip \subsection{The rank $2$ case} Combinatorially, a {\em generalized $n$-gon}\index{generalized!$n$-gon} ($n \geq 3$) is a point-line geometry $\Gamma = (\ensuremath{\mathscr{P}},\ensuremath{\mathscr{B}},\mathbf{I})$ for which the following axioms are satisfied: \begin{itemize} \item[(i)] $\Gamma$ contains no ordinary $k$-gon (as a subgeometry), for $2 \leq k < n$; \item[(ii)] any two elements $x,y \in \ensuremath{\mathscr{P}} \cup \ensuremath{\mathscr{B}}$ are contained in some ordinary $n$-gon (as a subgeometry) in $\Gamma$; \item[(iii)] there exists an ordinary $(n + 1)$-gon (as a subgeometry) in $\Gamma$. \end{itemize} A {\em generalized polygon}\index{generalized!polygon} (GP)\index{GP} is a generalized $n$-gon for some $n$. By (iii), generalized polygons have at least three points per line and three lines per point. The generalized $3$-gons are precisely the projective planes. A geometry $\Gamma$ which satisfies (i) and (ii) is a {\em weak generalized $n$-gon}\index{weak generalized $n$-gon}. If (iii) is not satisfied for $\Gamma$, then $\Gamma$ is called {\em thin}\index{thin}. Otherwise, it is called {\em thick}\index{thick}. Sometimes we will speak of ``thick (respectively thin) generalized $n$-gon'' instead of ``thick (respectively thin) weak generalized $n$-gon''. Generalized polygons were introduced by Tits in his triality paper \cite{Tits}; the basic reference is \cite{POL}.\\ It is not hard to show that once (iii) is also satisfied for a weak generalized polygon, there are constants $s$ and $t$ such that each point is incident with $t + 1$ lines, and each line is incident with $s + 1$ points. 
If the polygons were to be classical (that is, {\em Moufang}\index{Moufang} \cite{POL}), then there are division rings $\mathbb{K}$ and $\mathbb{L}$ such that $s + 1 = \vert \mathbb{K} \vert + 1$ and $t + 1 = \vert \mathbb{L} \vert + 1$. If (iii) is not satisfied, it can be shown that each line is incident with precisely $1 + 1$ points, so that thin generalized polygons are the polygons over $\mathbb{F}_1$. And thin generalized $n$-gons are nothing else than ordinary $n$-gons. We will come back to this in more detail. Note that there are many equivalent definitions for the notion of generalized polygon\footnote{Think, e.g., of the definition of generalized $3$-gon and of axiomatic projective plane given in the introduction to this chapter.}, but the present one is very natural in the characteristic one context | we only use $\mathbb{F}_1$-polygons to describe the axioms.\\ The relation between buildings and generalized polygons, as observed by Tits in \cite{Titslect} (see also \cite{POL}, \S 1.3.7, of which the notation is used), is as follows: \begin{itemize} \item[(S)] {\em Suppose $(\ensuremath{\mathscr{C}},X)$, $\ensuremath{\mathscr{C}} = \ensuremath{\mathscr{C}}_1 \cup \ensuremath{\mathscr{C}}_2$, is a spherical building of rank $2$. Then $\Gamma = (\ensuremath{\mathscr{C}}_1,\ensuremath{\mathscr{C}}_2,\mathbf{I})$ is a generalized polygon. Conversely, suppose that $\Gamma = (\ensuremath{\mathscr{P}},\ensuremath{\mathscr{B}},\mathbf{I})$ is a generalized polygon, and let $\ensuremath{\mathscr{F}}$ be the set of its flags. Then $(\ensuremath{\mathscr{P}} \cup \ensuremath{\mathscr{B}}, \emptyset \cup \{ \{v\} \vert v \in \ensuremath{\mathscr{P}} \cup \ensuremath{\mathscr{B}} \} \cup \ensuremath{\mathscr{F}})$ is a chamber geometry of rank $2$. Declaring the thin chamber geometry corresponding to any ordinary subpolygon an apartment, we obtain a spherical building of rank $2$.} \end{itemize} \medskip \subsubsection{Duality} Interchanging the role of points and lines, that is, applying the map \begin{equation} \Gamma = (\ensuremath{\mathscr{P}},\ensuremath{\mathscr{B}},\mathbf{I})\ \ \overset{D}\longrightarrow\ \ \Gamma^D = (\ensuremath{\mathscr{B}},\ensuremath{\mathscr{P}},\mathbf{I}), \end{equation} we obtain the (point-line) {\em dual}\index{dual} of $\Gamma$. It is also a GP (with the parameters switched). \medskip \subsubsection{Polygons as graphs} Let $\ensuremath{\mathscr{S}} = (\ensuremath{\mathscr{P}},\ensuremath{\mathscr{B}},\mathbf{I})$ be a generalized $n$-gon. The {\em (point-line) incidence graph}\index{incidence!graph} $(V,E)$ of $\ensuremath{\mathscr{S}}$ is defined by taking $V = \ensuremath{\mathscr{P}} \cup \ensuremath{\mathscr{B}}$, where an edge is drawn between vertices if the corresponding elements in $\ensuremath{\mathscr{S}}$ are incident; $(V,E)$ then is a bipartite graph of diameter $n$ and girth $2n$. Vice versa, such graphs define GPs. Let the graph corresponding to $\ensuremath{\mathscr{S}}$ be denoted by $\Gamma$. We call $(x_0,\ldots,x_k)$ a {\em (simple) path}\index{path} if the $x_i$ are pairwise distinct and $x_i$ is adjacent to $x_{i+1}$ for $i = 0,\ldots,k - 1$. The natural graph theoretic distance function on $\Gamma$ is denoted by ``$\mathrm{d}$'' or sometimes ``$\mathrm{d}_n$''\index{$\mathrm{d}$}\index{$\mathrm{d}_n$}. The set of elements at distance $i$ from some element $x \in \Gamma$ is denoted by $\Gamma_i(x)$\index{$\Gamma_i(x)$}. Elements at distance $n$ are called {\em opposite}\index{opposite}. 
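To fix ideas with a standard small example (added here merely as an illustration): the Fano plane $\ensuremath{\mathbf{PG}}(2,2)$ is a generalized $3$-gon with $s = t = 2$, and its incidence graph is the Heawood graph on $7 + 7$ vertices, a bipartite $3$-regular graph of diameter $3$ and girth $6 = 2\cdot 3$, exactly as the above characterization requires. The ordinary triangle | the thin generalized $3$-gon which will play the role of the Fano plane's $\mathbb{F}_1$-version below | has as incidence graph the ordinary hexagon: again of diameter $3$ and girth $6$, but now every vertex has valency $2$.\\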
\subsection{$\mathbb{F}_1$-Buildings and the Weyl functor} We now define the Weyl functor, and describe some of the examples Tits mentioned in \cite{anal}.\\ \subsubsection{The Weyl functor\index{Weyl!functor}} Note again that since $\mathbb{F}_1$ expresses the idea of an Absolute Arithmetic, it is clear that the buildings of a certain prescribed type $\mathbf{T}$ over $\mathbb{F}_1$ should be present in any thick building of the same type $\mathbf{T}$.\\ Motivated by the properties which a building over $\mathbb{F}_1$ of type $\mathbf{T}$ should have, we are ready to define such geometries (and their groups) in general. Let $\ensuremath{\mathscr{B}} = (\ensuremath{\mathscr{C}}_1,\ensuremath{\mathscr{C}}_2,\ldots,\ensuremath{\mathscr{C}}_j,\mathbf{I})$ be a thick building of rank $j$ and type $\mathbf{T}$ (given by one of the Coxeter diagrams below), and let $\ensuremath{\mathscr{A}}$ be its set of thin chamber subgeometries. Suppose $(B,N)$ is a saturated effective BN-pair associated to $\ensuremath{\mathscr{B}}$; its Weyl group $W$ is a Coxeter group defined by one of the Coxeter graphs below. \begin{proposition} A building of rank $j$ and type $\mathbf{T}$ defined over $\mathbb{F}_1$ is isomorphic to any element of $\ensuremath{\mathscr{A}}$. Its automorphism group is isomorphic to the Coxeter group $W$. \end{proposition} \bigskip $\mathbf{A}_n$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0) -- (6,0); \draw (4,0) node {$\dots$} ; \end{tikzpicture} \hspace{0.5cm} ($n\geq 1$)\\ $\mathbf{C}_n$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0.035) -- (6,0.035); \draw (5,-0.035) -- (6,-0.035); \draw (4,0) node {$\dots$} ; \end{tikzpicture}\hspace{0.5cm} ($n\geq 2$)\\ $\mathbf{D}_n$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \fill (5,1) circle (2pt); \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0) -- (6,0); \draw (5,0) -- (5,1); \draw (4,0) node {$\dots$} ; \end{tikzpicture}\hspace{0.5cm} ($n\geq 4$)\\ $\mathbf{E}_n$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {0,2,3,4}{ \fill (\x,0) circle (2pt);} \fill (2,1) circle (2pt); \draw (0,0) -- (.5,0); \draw (1.5,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (4,0); \draw (2,0) -- (2,1); \draw (1,0) node {$\dots$} ; \end{tikzpicture} \hspace{0.5cm} ($n=6,7,8$)\\ $\mathbf{F}_4$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {0,1,2,3}{ \fill (\x,0) circle (2pt);} \draw (0,0) -- (1,0); \draw (2,0) -- (3,0); \draw (1,0.035) -- (2,0.035); \draw (1,-0.035) -- (2,-0.035); \end{tikzpicture}\\ $\mathbf{H}_3$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {0,1,2}{ \fill (\x,0) circle (2pt);} \draw (0,0) -- (1,0); \draw (1,0) -- (2,0); \draw (1.5,.25) node {$5$} ; \end{tikzpicture}\\ $\mathbf{H}_4$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {-1,0,1,2}{ \fill (\x,0) circle (2pt);} \draw (-1,0) -- (0,0); \draw (0,0) -- (1,0); \draw (1,0) -- (2,0); \draw (1.5,.25) node {$5$} ; \end{tikzpicture}\\ $\mathbf{I}_2(m)$: \begin{tikzpicture}[style=thick, scale=1.2] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$m$} ; 
\end{tikzpicture}\hspace{0.5cm} ($m\geq 5$)\\ \subsubsection{Rank $2$ case} Generalized $n$-gons over $\mathbb{F}_1$ are ordinary $n$-gons, and their automorphism groups are dihedral groups $\mathbf{D}_n$ \cite{anal}. It follows that the cornerstones of the spherical buildings of rank at least $3$ over $\mathbb{F}_1$ are the ordinary $n$-gons with $n = 3, 4, 6, 8$ (since these gonalities are the only ones which do occur in the corresponding thick buildings by \cite{FeHi}). Still, it is important to note that in the rank $2$ examples, {\em all} integer values $n \geq 3$ occur. \subsubsection{Quadrics} We give one final explicit example | it concerns quadrics. \\ Let $n \in \mathbb{N}^\times$. A {\em quadric}\index{quadric over $\mathbb{F}_1$} of projective dimension $2n$ or $2n + 1$ over $\mathbb{F}_1$ is a set $\ensuremath{\mathbf{Q}}$ of $2(n + 1)$ points arranged in pairs $x_0,y_0,x_1,y_1,\ldots,x_n,y_n$, and its subspaces are the subsets not containing any couple $(x_i,y_i)$. The Witt index of the so defined quadrics is $n$. The quadrics in dimension $2n$ have the further property that the maximal singular subspaces ($n$-spaces consisting of $n + 1$ points) are partitioned into two types, namely those containing an even number of points of $\{x_0,x_1,\ldots,x_n\}$ and those containing an odd number. Automorphisms are permutations of the set $\ensuremath{\mathbf{Q}}$ which preserve the given pairing in the $2n$-dimensional case.\\ \bigskip \subsubsection{Trees as $\mathbb{F}_1$-geometries} If we allow the value $n = \infty$ in the definition of generalized $n$-gon, we obtain a point-line geometry $\Gamma$ without closed paths, such that any two points or lines are contained in a path without end points. So $\Gamma$ becomes a tree (allowing more than $2$ points per line) without end points. Its apartments are paths without end points, and the Weyl group is an infinite dihedral group (generated by the reflections about two different adjacent vertices of such an apartment). So in this setting, a generalized $\infty$-gon over $\mathbb{F}_1$ is a tree of valency $2$ without end points. \begin{figure} \begin{center} \begin{tikzpicture}[style=thick, scale=2] \foreach \x in {1,2,3,4}{ \fill (\x,0) circle (2pt);} {\fill (0,-0.5) circle (2pt);} {\fill (5,-0.5) circle (2pt);} {\fill (-0.5,-1.5) circle (2pt);} {\fill (5.5,-1.5) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (4,0); \draw (4,0) -- (5,-0.5); \draw (0,-0.5) -- (-0.5,-1.5); \draw (1,0) -- (0,-0.5); \draw (5,-0.5) -- (5.5,-1.5); \draw[dashed] (-0.5,-2.5) -- (-0.5,-1.5); \draw[dashed] (5.5,-2.5) -- (5.5,-1.5); \end{tikzpicture} \caption{Generalized $\infty$-gon over $\mathbb{F}_1$.} \end{center} \end{figure} Consider, for instance, $G = \mathbf{SL}_2(\mathbb{F}_q((t^{-1})))$. Then $G$ has a BN-pair $(B,N)$, where \begin{equation} B = \{ \left(\begin{array}{cc} a & b\\ c & d \end{array}\right) \in \mathbf{SL}_2(\mathbb{F}_q[[t^{-1}]]) \vert c \equiv 0\mod{t^{-1}}\}, \end{equation} and $N$ is the subgroup of $G$ consisting of elements with only $0$ on the diagonal or only $0$ on the antidiagonal. Its Weyl group is an infinite dihedral group generated by \begin{equation} s_1 = \left(\begin{array}{cc} 0 & -1\\ 1 & 0 \end{array}\right)\ \ \mathrm{and}\ \ s_2 = \left(\begin{array}{cc} 0 & -t\\ 1/t & 0 \end{array}\right). \end{equation} The corresponding building (defined in the same way as before) is a generalized $\infty$-gon with $q + 1$ points per line and $q + 1$ lines per point.
Its apartments are exactly the trees we introduced earlier in this section.\\ \bigskip \subsection{Generalizations} One notes that it is possible to relax the BN-pair axioms and still get a meaningful theory, and a Weyl functor. For example, let us fix a certain category of groups $\mathbf{C}$, and let us consider groups $G$ with subgroups $C, E$ such that \begin{itemize} \item[(GBN1)] $\langle C,E \rangle = G$; \item[(GBN2)] $H = C \cap E \lhd E$ and $E/H$ is isomorphic to an object of $\mathbf{C}$; \item[(GBN3)] (to be filled in appropriately). \end{itemize} Now we fix a set of groups $\ensuremath{\mathscr{C}}$ with the property that each of its elements is a proper subgroup of $G$ which properly contains $C$, and construct a geometry $\Gamma = \Gamma(C,E;\mathbf{C},\ensuremath{\mathscr{C}})$ as follows. \begin{itemize} \item[(GB1)] \textsc{Elements}: are elements of the left coset spaces $G/P$, $P \in \ensuremath{\mathscr{C}}$. \item[(GB2)] \textsc{Incidence}: $gP$ is incident with $hP'$, $P \ne P'$, if these cosets intersect nontrivially. \end{itemize} With $\mathbf{C}$ = category of Coxeter groups and $\ensuremath{\mathscr{C}}$ = all proper subgroups properly containing $C$, and (GBN3) replaced by the Bruhat decomposition axioms, we obtain a BN-pair $(C,E)$. Defining (maximal) flags as before, the reader notes the following: \begin{proposition} $G$ acts by left translation as an automorphism group of $\Gamma$. This action is transitive on the maximal flags of the geometry. \end{proposition} The Weyl functor for this category of group data would be \begin{equation} \underline{\ensuremath{\mathscr{A}}}: G \longrightarrow E/H, \end{equation} and on the geometrical side, it should send the geometry to the geometry induced by the left coset spaces $E/H$ ($= \{eH \vert e \in E\}$).\\ \quad\textsc{Question}\quad {\em Find good candidates for $\mathbf{C}$ and $\ensuremath{\mathscr{C}}$, and formulations for {\rm (GBN3)} such that the Weyl images have precisely $2$ points per line.}\\ It is precisely this kind of question which we will be considering in the next section from the synthetic side. (The game that we play there is first to imagine what the Weyl images should be | certain thin geometries which are fixed objects of the functor we want to define | and build, using certain prescribed axioms, the general geometries ``defined over $\mathbb{F}_1$'' using the Weyl geometries as bricks. One could play the same game here: imagine what the geometries induced on $E/H$ should be, and ask a similar question.)\\ \section{Introduction} We start this chapter by elaborating on ideas which were hatched from some seminal remarks made by Tits in his early paper ``Sur les analogues alg\'{e}briques des groupes semi-simples complexes'' (1957) \cite{anal}.\\ \medskip \subsection{Projective $\mathbb{F}_1$-geometry} When considering a class of incidence geometries which are defined over finite fields | take for instance the class of finite classical buildings of a fixed rank and type (we refer to later sections for a formal explanation of these notions) | it sometimes makes sense to consider the ``limit'' of these geometries when the number of field elements tends to $1$. As a star example, let the class of geometries be the classical projective planes $\ensuremath{\mathbf{PG}}(2,\mathbb{K})$ defined over finite fields $\mathbb{K}$. 
Then the number of points per line of such a plane is \begin{equation} \vert \mathbb{K} \vert + 1, \end{equation} so in the limit, the ``limit object'' should have $1 + 1$ points incident with every line. On the other hand, we want the limit object still to be an axiomatic projective plane\index{axiomatic!projective plane}, so we still want it to have the following properties: \begin{itemize} \item[(i)] any two distinct lines meet in precisely one point; \item[(ii)] any two distinct points are incident with precisely one line (the dual of (i)); \item[(iii)] not all points are on one and the same line (to avoid degeneracy). \end{itemize} It is clear that such a limit projective plane (``defined over $\mathbb{F}_1$'') should be an ordinary triangle (as a graph). So it is nothing else than a {\em chamber} in the building of any thick projective plane. Note that projective planes are precisely the generalized $3$-gons (which are also to be defined later). \\ Adopting this point of view, it is easily seen that, more generally, projective $n$-spaces over $\mathbb{F}_1$ should be just sets $X$ of cardinality $n + 1$ endowed with the geometry of $2^X$: any subset (of cardinality $0 \leq r + 1 \leq n + 1$) is a subspace (of dimension $r$). In other words, projective $n$-spaces over $\mathbb{F}_1$ are complete graphs on $n + 1$ vertices with a natural subspace structure. It is important to note that these spaces still satisfy the Veblen-Young axioms, and that they are the only such incidence geometries with thin lines. \begin{proposition}[See, e.g., Cohn \cite{Cohn} and Tits \cite{anal}] \label{propCT} Let $n \in \mathbb{N} \cup \{-1\}$. The combinatorial projective space $\ensuremath{\mathbf{PG}}(n,\mathbb{F}_1) = \ensuremath{\mathbf{PG}}(n,1)$ is the complete graph on $n + 1$ vertices endowed with the induced geometry of subsets, and $\ensuremath{\mathrm{Aut}}(\ensuremath{\mathbf{PG}}(n,\mathbb{F}_1)) \cong \ensuremath{\mathbf{PGL}}_{n + 1}(\mathbb{F}_1) \cong \mathbf{S}_{n + 1}$. \end{proposition} \begin{proof} We have already obtained the geometric part of the proposition. As for the group theoretical part, the symmetric group on $n + 1$ letters clearly is the full automorphism group of $\ensuremath{\mathbf{PG}}(n,1)$. \end{proof} It is extremely important to note that any $\ensuremath{\mathbf{PG}}(n,\mathbb{K})$ with $\mathbb{K}$ a division ring contains (many) subgeometries isomorphic to $\ensuremath{\mathbf{PG}}(n,\mathbb{F}_1)$ as defined above; so the latter object is independent of $\mathbb{K}$, and is the {\em common geometric substructure of all projective spaces of a fixed given dimension}: \begin{equation} \underline{\ensuremath{\mathscr{A}}}: \{ \ensuremath{\mathbf{PG}}(n,\mathbb{K}) \vert \mathbb{K} \ \ \mbox{division}\ \ \mbox{ring} \} \longrightarrow \ \ \{ \ensuremath{\mathbf{PG}}(n,\mathbb{F}_1)\}. \end{equation} Further in this chapter, we will formally find the automorphism groups of $\mathbb{F}_1$-vector spaces through matrices, and these groups will perfectly agree with Proposition \ref{propCT}. We will also investigate other examples of limit buildings, as first described by Tits in \cite{anal}. In fact, we will look for a more general functor $\underline{\ensuremath{\mathscr{A}}}$ (called {\em Weyl functor}\index{Weyl!functor} for reasons to be explained later) from a certain category of more general incidence geometries than buildings, to its subcategory of fixed objects under $\underline{\ensuremath{\mathscr{A}}}$.
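Before moving on, a small numerical illustration of the ``$q \longrightarrow 1$'' heuristics used above (an aside, and nothing more than that): the number of lines of $\ensuremath{\mathbf{PG}}(3,\mathbb{F}_q)$ is $(q^2 + 1)(q^2 + q + 1)$, which at $q = 1$ evaluates to $6$, precisely the number of lines | that is, of $2$-subsets | of $\ensuremath{\mathbf{PG}}(3,\mathbb{F}_1)$ viewed as the complete graph on $4$ vertices. The counting functions of the next subsection make this systematic.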
\\ Note that over $\mathbb{F}_1$, \begin{equation}\mathbf{P\Gamma L}_{n + 1}(\mathbb{F}_1) \cong \ensuremath{\mathbf{PGL}}_{n + 1}(\mathbb{F}_1) \cong \ensuremath{\mathbf{PSL}}_{n + 1}(\mathbb{F}_1).\end{equation} \medskip \subsection{Counting functions} It is easy to see the symmetric group also directly as a limit with $\vert \mathbb{K} \vert \longrightarrow 1$ of the linear groups $\ensuremath{\mathbf{PGL}}_{n + 1}(\mathbb{K})$ (with the dimension fixed). The number of elements of $\ensuremath{\mathbf{PGL}}_{n + 1}(\mathbb{K})$ (where $\mathbb{K} = \mathbb{F}_q$ is assumed to be finite and $q$ is a prime power) is \begin{equation} \frac{(q^{n + 1} - 1)(q^{n + 1} - q)\cdots(q^{n + 1} - q^n)}{(q - 1)} = (q - 1)^nN(q) \end{equation} for some polynomial $N(X) \in \mathbb{Z}[X]$, and we have \begin{equation} N(1) = (n + 1)! = \vert \mathbf{S}_{n + 1} \vert. \end{equation} \medskip Now let $n,q \in \mathbb{N}$, and define $[n]_q = 1 + q + \cdots + q^{n - 1}$. (For $q$ a prime power, $[n]_q = \vert \ensuremath{\mathbf{PG}}(n - 1,q)\vert$.) Put $[0]_q! = 1$, and define \begin{equation} [n]_q! := [1]_q[2]_q\ldots [n]_q. \end{equation} Let $\mathbf{R}$ be a ring, and let $x, y, q$ be ``variables'' for which $yx = qxy$. Then there are polynomials $\left[\begin{array}{c}n \\ k\end{array}\right]_q$ in $q$ with integer coefficients, such that \begin{equation} (x + y)^n = \sum_{k = 0}^n\left[\begin{array}{c}n \\ k\end{array}\right]_qx^ky^{n - k}. \end{equation} Then \begin{equation} \left[\begin{array}{c}n \\ k\end{array}\right]_q = \frac{[n]_q!}{[k]_q![n - k]_q!}, \end{equation} and if $q$ is a prime power, this is the number of $(k - 1)$-dimensional subspaces of $\ensuremath{\mathbf{PG}}(n - 1,q)$ ($=\wis{Grass}(k,n)(\mathbb{F}_q)$). The next proposition again gives sense to the limit situation of $q$ tending to $1$. \begin{proposition}[See e.g. Cohn \cite{Cohn}] The number of $k$-dimensional linear subspaces of $\ensuremath{\mathbf{PG}}(n,\mathbb{F}_1)$, with $k \leq n \in \mathbb{N}$, equals \begin{equation} \left[\begin{array}{c}n + 1 \\ k + 1\end{array}\right]_1 = \frac{(n + 1)!}{(n - k)!(k + 1)!} = \left[\begin{array}{c}n + 1 \\ k + 1\end{array}\right]. \end{equation} \end{proposition} Many other enumerative formulas in Linear Algebra, Projective Geometry, etc. over finite fields $\mathbb{F}_q$ seem to keep meaningful interpretations if $q$ tends to $1$, and this phenomenon (the various interpretations) suggests a deeper theory in characteristic one. \medskip \subsection{The Weyl functor} In this chapter, we will consider various categories $\mathbf{C}$ of combinatorial objects, and in a first stage these objects will come with certain field data (later we will also consider categories where no obvious field data are available). We will look for a functor $\underline{\ensuremath{\mathscr{A}}}$ which associates with the objects $o$ of $\mathbf{C}$ interpretations of $o$ over the field with one element, $\mathbb{F}_1$, keeping in mind that $\mathbb{F}_1$ does not exist, but $\underline{\ensuremath{\mathscr{A}}}(o)$ {\em does}. In all those categories, expressions of the form \begin{equation} \underline{\ensuremath{\mathscr{A}}}(o)\ \ +\ \ \mbox{field}\ \mbox{data} \end{equation} make sense, in that the knowledge of $\underline{\ensuremath{\mathscr{A}}}(o)$ together with field data will single out uniquely defined objects in the $\underline{\ensuremath{\mathscr{A}}}$-fiber of $o$.
In principle, many objects in $\mathbf{C}$ could descend to some $\underline{\ensuremath{\mathscr{A}}}(o)$, but with additional field data, we can point to a unique object. (Think, for instance, of the category $\mathbf{C}$ of projective spaces over finite fields with natural morphisms; applying $\underline{\ensuremath{\mathscr{A}}}$ to $o = \ensuremath{\mathbf{PG}}(n,\mathbb{F}_q)$ yields the aforementioned geometry $\ensuremath{\mathbf{PG}}(n,1)$ which is independent of $\mathbb{F}_q$, so the $\underline{\ensuremath{\mathscr{A}}}$-fiber consists of all finite $n$-dimensional projective spaces. But giving the additional data of a single field yields a unique projective space coordinatized by this field.) So the functor $\underline{\ensuremath{\mathscr{A}}}$ comes with a number of base extension arrows to fields, and together with these arrows, the original theories can be reconstructed from below. Since we will consider many different categories $\mathbf{C}$, we want $\underline{\ensuremath{\mathscr{A}}}$ to be defined in such a way that it commutes with various natural functors between these categories, an example of this principle being the diagram \begin{equation} \begin{array}{ccc} \{ \ensuremath{\mathbf{PG}}(n,\mathbb{F}_q) \vert n, q \} &\overset{\underline{\ensuremath{\mathscr{A}}}}\longrightarrow &\{ \ensuremath{\mathbf{PG}}(n,\mathbb{F}_1) \vert n \}\\ &&\\ \downarrow& &\downarrow\\ &&\\ \{ \ensuremath{\mathbf{PGL}}_{n + 1}(q) \vert n,q\} &\overset{\underline{\ensuremath{\mathscr{A}}}}\longrightarrow &\{\mathbf{S}_{n + 1} \vert n \}\\ &&\\ \end{array} \end{equation} which we already considered. \include{F1BN_book} \include{F1_Weyl} \include{F1LA_book} \newpage \section{Group representations} In a recent communication \cite{JLP}, Javier L\'{o}pez Pe\~{n}a observed that the classical set-theoretical approach to representation theory of groups can be seen as a degenerate case of the general theory of linear representations through elementary $\mathbb{F}_1$-theory, giving some common ground that explains the similarities between these two theories. We will start this section by describing this observation. Additional comments on (linear representations of) braid groups are made, some of which are taken from \cite{KapranovUN}.\\ \subsection{Linear representations} Let us first recall that one may think of two basic ways of representing groups $G$: \begin{itemize} \item[(a)] The first one | with a more geometrical flavor | is looking for a $\mathbb{K}$-vector space $V$, where $\mathbb{K}$ is a field, and trying to describe $G$ inside the group of automorphisms of $V$ by looking for group homomorphisms \begin{equation} \rho: G \longrightarrow \mathbf{GL}(V), \end{equation} also called {\em linear representations}\index{linear!representations}. Note that one does not ask $\rho$ to be injective (although this property is often highly desirable), so that we could have a nontrivial kernel. If the representation, however, {\em is} faithful, the group $G$ is called {\em linear}\index{linear!group}. \item[(b)] The other | set-theoretical | one consists of looking for sets $X$ endowed with a group action $G \curvearrowright X$ of $G$, also called {\em $G$-sets}\index{$G$-set}, that allow us to describe $G$ as a group of permutations. So we seek group homomorphisms \begin{equation} \gamma: G \longrightarrow \mathbf{Sym}(X).
\end{equation} (Here, again, initially one does not ask $\gamma$ to be faithful, so that it could be that we describe $G/\mathrm{ker}(\gamma)$, rather than $G$, as a group inside $\mathbf{Sym}(X)$.)\\ \end{itemize} \subsection{Representations over $\mathbb{F}_1$} There are reasons to think that both approaches should have similar properties | after all we are trying to describe the same object in different guises. When one is looking for linear representations of a group, one has to fix the field over which the vector space is defined, as well as the dimension of the vector space. In particular, we might take representations defined over {\em finite fields}, diving into what is called ``modular representation theory''. But now we might try to study linear representations over $\mathbb{F}_1$. As $\mathbb{F}_1$-vector spaces are just pointed sets $V = (\mathbf{0},\Omega)$ of which the cardinality $\omega = \vert \Omega \vert$ corresponds to the vector space dimension, and as the Tits argument on Chevalley groups tells us that the automorphism group of $V$ is the symmetric group $\mathbf{S}(\Omega)$, we conclude that: \begin{proposition}[J. L\'{o}pez Pe\~{n}a \cite{JLP}] Linear representations over $\mathbb{F}_1$ of a group are precisely permutation representations.\\ \end{proposition} \subsection{Special example} Consider a faithful linear representation \begin{equation} \rho: G \longrightarrow \mathbf{GL}(V) \end{equation} of some group $G$. In the author's second chapter we will encounter a particular kind of such a representation that will be very important for $\mathbb{F}_1$-geometry. It is defined by the property that the projection of $\rho(G)$ on $\ensuremath{\mathbf{PGL}}(V)$ (after dividing out the scalars) acts sharply transitively on the points of the corresponding projective space $\ensuremath{\mathbf{P}}(V) = (V\setminus \{\mathbf{0}\})/\sim$. In fact, we will consider ``semi-linear representations'' (allowing twists by field automorphisms) \begin{equation} \rho: G \longrightarrow \mathbf{\Gamma L}(V) \end{equation} with the same properties. These representations (called {\em Singer representations}\index{Singer representation}) will be used in a framework to understand the ad\`{e}le class space of a global field in characteristic $0$. \\ Note that the $\mathbb{F}_1$-analogs of such representations are nothing else than sharply transitive group actions $G \curvearrowright X$.\\ \subsection{Braid groups} We introduce braid groups in three different ways. \subsubsection{Braid groups via strings} Let $n \in \mathbb{N}^\times$. An {\em $n$-braid}\index{$n$-braid} consists of $n$ {\em strings}\index{string} or {\em strands}\index{strand} which connect $n$ ``top inputs'' (called $1,2,\ldots,n$) to $n$ ``bottom inputs'' (also called $1,2,\ldots,n$). Strands must move from top to bottom at all times. Note that there is a natural composition of $n$-braids which also yields $n$-braids, and which makes the set of all $n$-braids into a group $\mathbb{B}_n$\index{$\mathbb{B}_n$} (taken that we identify braids which can be naturally transformed into each other | see \S\S \ref{braidfund} for more on these identifications). \begin{proposition} The braid group $\mathbb{B}_n$ is torsion-free. \end{proposition} It is clear that an $n$-braid naturally induces an element of $\mathbf{S}_n$ (by going from top to bottom), and this association yields a surjective group homomorphism \begin{equation} \label{braideq} \gamma: \mathbb{B}_n \longrightarrow \mathbf{S}_n.
\end{equation} The kernel $\mathbb{P}_n$ of $\gamma$ is the {\em pure braid group}\index{pure braid group} (on $n$ strings), and consists of all $n$-braids in which every strand has the same input and output position: \begin{equation} \mathrm{id} \longrightarrow \mathbb{P}_n \longrightarrow \mathbb{B}_n \longrightarrow \mathbf{S}_n \longrightarrow \mathrm{id}. \end{equation} Any $n$-braid can be divided into intervals such that in each interval there is precisely one crossing of strings, so the set \begin{equation} \{ \sigma_i \vert i \in \{1,2,\ldots,n - 1\}\}, \end{equation} where $\sigma_i$ is defined as the $n$-braid in which there is an overcrossing between inputs $i$ and $i + 1$ (and nothing else), generates $\mathbb{B}_n$. (Undercrossing yields the inverses of the generators.) \begin{figure} \centering \begin{tikzpicture}[style=thick, scale=1.5] \foreach \x in {1,2,3,4,5}{ \fill (\x,0) circle (2pt);} \foreach \x in {1,2,3,4,5}{ \fill (\x,-3) circle (2pt);} \draw (1,0.3) node {1}; \draw (2,0.3) node {2}; \draw (3,0.3) node {3}; \draw (4,0.3) node {4}; \draw (5,0.3) node {5}; \draw (1,-3.3) node {1}; \draw (2,-3.3) node {2}; \draw (3,-3.3) node {3}; \draw (4,-3.3) node {4}; \draw (5,-3.3) node {5}; \draw (1,0) -- (1,-3); \draw (2,0) -- (2,-3); \draw (3,0) -- (3,-3); \draw (4,0) -- (5,-3); \draw (5,0) -- (4.6,-1.4); \draw (4,-3) -- (4.45,-1.8); \end{tikzpicture} \caption{The generator $\sigma_4$ in $\mathbb{B}_5$.} \end{figure} Now in terms of relations in the $\sigma_i$s, Artin proved that all relations can be deduced from only two, namely: \begin{equation} \left\{ \begin{array}{cccc} \sigma_i\sigma_j &= &\sigma_j\sigma_i &\mbox{if}\ \ \vert i - j\vert \geq 2\\ \sigma_i\sigma_{i + 1}\sigma_i &= &\sigma_{i + 1}\sigma_i\sigma_{i + 1} & \end{array} \right. \end{equation} So \begin{equation} \mathbb{B}_n \cong \langle \sigma_i, i = 1,2,\ldots,n - 1 \vert \sigma_i\sigma_j = \sigma_j\sigma_i\ \mbox{if}\ \vert i - j\vert \geq 2, \sigma_i\sigma_{i + 1}\sigma_i = \sigma_{i + 1}\sigma_i\sigma_{i + 1} \rangle. \end{equation} Recalling the presentation by generators and relations of $\mathbf{S}_n$ as a Coxeter group, we now have an explicit form for the homomorphism $\gamma$ (noting that if the extra relations $\{ {\sigma_i}^2 = \mathrm{id} \vert i = 1,2,\ldots,n - 1\}$ were added, we would get the symmetric group).\\ \subsubsection{Braid groups as fundamental groups} \label{braidfund} Let $X$ be a connected topological space, let $d \geq 2$ be a positive integer, and consider the Cartesian product of $d$ copies of $X$, denoted by $X^d$. Let $\widetilde{X^d}$ be the {\em symmetrized $d$-fold Cartesian product}\index{symmetrized Cartesian product}, which is defined by modding out the natural action of the symmetric group $\mathbf{S}_d$ on the indices of the Cartesian coordinates. (That is, we consider unordered $d$-tuples.) We only want to consider elements with no repeated entries, so we take out the ``hyperplanes'' with equations $x_i = x_j$ on the coordinates ($i \ne j$). The obtained space is denoted by $\widetilde{X^d}_*$\index{$\widetilde{X^d}_*$} (and its elements can be identified with the subsets of $X$ of size $d$). We define the {\em braid group}\index{braid group} $\mathbb{B}_d(X)$\index{$\mathbb{B}_d(X)$} of $X$ on $d$ {\em strings}\index{string} as the fundamental group of this space with respect to an arbitrary point $x_0$ (of which the choice does not affect the isomorphism class of the group): \begin{equation} \mathbb{B}_d(X) := \pi_1(\widetilde{X^d}_*,x_0).
\end{equation} Now put $X = \mathbb{C}$; then there is a natural homeomorphism between $\widetilde{\mathbb{C}^d}_*$ and $M\mathbb{C}^d[X]$\index{$M\mathbb{C}^d[X]$}, which is the set of polynomials in $X$ over $\mathbb{C}$ of degree $d$ and with leading coefficient $1$, without multiple roots. The map is given by \begin{equation} \{u_1,\ldots,u_d\} \in \widetilde{\mathbb{C}^d}_*\ \ \longrightarrow\ \ (X - u_1)\cdots(X - u_d) \in M\mathbb{C}^d[X]. \end{equation} In particular, $\mathbb{B}_d(\mathbb{C}) \cong \pi_1(M\mathbb{C}^d[X])$, and one can show that $\mathbb{B}_d(\mathbb{C})$ is isomorphic to the group $\mathbb{B}_d$ as above.\\ \medskip \subsubsection{Braid groups via graphs of type $\mathbf{A}_{n - 1}$} Let $\Gamma = (V,E)$ be a graph, with vertex set $V$ and edge set $E$. We define the {\em Artin group}\index{Artin group} $A(\Gamma)$ as the free group $F(V)$ generated by the elements of $V$, modulo the following relations: \begin{itemize} \item[(R1)] If $x$ and $y$ are adjacent vertices, then \begin{equation} xyx = yxy. \end{equation} \item[(R2)] If $x$ and $y$ are not adjacent, they commute. \end{itemize} We also say that $A(\Gamma)$ is an Artin group ``of type $\Gamma$''\index{Artin group!of type $\Gamma$}. If $\Gamma$ is a Coxeter graph of type $\mathbf{A}_{n - 1}$, then $A(\Gamma)$ is isomorphic to $\mathbb{B}_n$. \bigskip $\mathbf{A}_{n - 1}$: \begin{tikzpicture}[style=thick, scale=1.5] \foreach \x in {1,2,3,5,6}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \draw (3,0) -- (3.5,0); \draw (4.5,0) -- (5,0); \draw (5,0) -- (6,0); \draw (4,0) node {$\dots$} ; \end{tikzpicture} \medskip Let $\Gamma$ be a graph, and let $d \in \mathbb{N}^\times$. The {\em Shephard group}\index{Shephard group} $A(\Gamma,d)$ is the quotient of $A(\Gamma)$ by the relations $v^d = \mathrm{id}$ for all $v \in V$. \begin{proposition} Let $\Gamma$ be a graph, and $A(\Gamma)$ its Artin group. Then $A(\Gamma,2)$ is the Coxeter group related to $\Gamma$. If $\Gamma$ is a Coxeter graph of type $\mathbf{A}_{n - 1}$, then $A(\Gamma) \cong \mathbb{B}_n$ and $A(\Gamma,2) \cong \mathbf{S}_n$. \end{proposition} \medskip \subsubsection{Linear representations of $\mathbb{B}_n$} Since we know that the symmetric groups are general linear groups over $\mathbb{F}_1$, one might wonder (and this is what Kapranov and Smirnov do in their manuscript \cite{KapranovUN}) whether the expression (\ref{braideq}) fits into a diagram \begin{equation} \begin{array}{ccc} ?_1 &\longrightarrow &\mathbf{GL}_n(\mathbb{F}_q)\\ &&\\ \downarrow& &\downarrow\\ &&\\ ?_2 \cong \mathbb{B}_n &\overset{\gamma}\longrightarrow &\mathbf{GL}_n(\mathbb{F}_1) \cong \mathbf{S}_n,\\ &&\\ \end{array} \end{equation} where passing from the first row to the second means passing to the limit $q \longrightarrow 1$. The first row should be seen as a class of arrows (with $q$ taking values in the set of prime powers). Kapranov and Smirnov suggest replacing $?_2$ by $\mathbf{GL}_n(\mathbb{F}_1[X])$, and also suggest that the evaluation morphism $X = 0$ yields $\gamma$. Their motivation is a theorem of Drinfeld which states that over a finite field $\mathbb{F}_q$, the profinite completion of $\mathbf{GL}_n(\mathbb{F}_q[X])$ is embedded in the fundamental group of the space of $q$-polynomials of degree $n$ in a rather similar way as $\mathbb{B}_n$ is the fundamental group of $M\mathbb{C}^n[X]$.
Still, as we will see in the second chapter of the author in this monograph, in the direction in which we want to take expressions such as $\mathbb{F}_1[X]$, this idea does not make much sense, as $\mathbf{GL}_n(\mathbb{F}_1[X])$ will not be a group. (Another rather natural candidate would be $\mathbf{GL}_n(\mathbb{F}_1[X,X^{-1}])$, but $\mathbf{S}_n$ is a subgroup, while $\mathbb{B}_n$ is torsion-free.) In any case, an $\mathbb{F}_1^n$-linear representation of $\mathbb{B}_n$ is given by the map \begin{equation} \rho: \sigma_i \longrightarrow \left( \begin{array}{cccc} \mathbf{I}_{i - 1} &&& \\ &0 &\mu &\\ &\mu^{-1} &0 &\\ &&&\mathbf{I}_{n - 1 - i} \end{array} \right), \end{equation} where $\mathbb{F}_1^n = \mu_n \cup \{0\} = \langle \mu \rangle \cup \{0\}$. This representation is of course not faithful, and inside $\mathbf{GL}_n(\mathbb{F}_1^n)$ the elements $\rho(\sigma_i)$, $i = 1,2,\ldots,n - 1$, generate a subgroup isomorphic to $\mathbf{S}_n$.\\ The linearity of braid groups over ``real fields'' was only quite recently obtained by Bigelow \cite{Bigelow} and Krammer \cite{Krammer} (independently), and settled a major open problem. Note that any faithful linear representation \begin{equation} \rho: \mathbb{B}_n \longrightarrow \mathbf{GL}_m(R) \end{equation} with $m \in \mathbb{N}^\times$ and $R$ an ``$\mathbb{F}_1$-ring'' (see the author's second chapter) which is embeddable in a field (or division ring) $\mathbb{K}$ would give a faithful linear representation over $\mathbb{K}$. \newpage \section{From absolute mantra to absolute Algebraic Geometry} In the early nineties, Christopher Deninger published his studies (\cite{Deninger1991}, \cite{Deninger1992}, \cite{Deninger1994}) on motives and regularized determinants. In \cite{Deninger1992}, Deninger gave a description of conditions on a category of motives that would admit a translation of Weil's proof of the Riemann Hypothesis for function fields of projective curves over finite fields $\mathbb{F}_q$ to the hypothetical curve $\overline{\wis{Spec}(\mathbb{Z})}$. In particular, he showed that the following formula would hold: \begin{eqnarray} \zeta_{\overline{\wis{Spec}(\mathbb{Z})}}(s) &=& 2^{-1/2}\pi^{-s/2}\Gamma(\frac s2)\zeta(s) \ =\ \frac{\Rprod_\rho\frac{s - \rho}{2\pi}}{\frac{s}{2\pi}\frac{s - 1}{2\pi}} \nonumber \\ &\overset{?}{=}& \frac{\mbox{\textsc{Det}}\Bigl(\frac 1{2\pi}(s\cdot\mathbf{1} - \Theta)\Bigl| H^1(\overline{\wis{Spec}(\mathbb{Z})},*_{\mathrm{abs}})\Bigr.\Bigr)}{\mbox{\textsc{Det}}\Bigl(\frac 1{2\pi}(s\cdot\mathbf{1} -\Theta)\Bigl| H^0(\overline{\wis{Spec}(\mathbb{Z})},*_{\mathrm{abs}})\Bigr.\Bigr)\mbox{\textsc{Det}}\Bigl(\frac 1{2\pi}(s\cdot\mathbf{1} - \Theta)\Bigl| H^2(\overline{\wis{Spec}(\mathbb{Z})},*_{\mathrm{abs}})\Bigr.\Bigr)}, \end{eqnarray} where $\Rprod$ denotes the infinite {\em regularized product} and, similarly, $\mbox{\textsc{Det}}$ denotes the {\em regularized determinant}\index{regularized determinant} (cf. the author's second chapter), $\Theta$ is an ``absolute'' Frobenius endomorphism and the $H^i(\overline{\wis{Spec}(\mathbb{Z})},*_{\mathrm{abs}})$ are certain proposed cohomology groups. The $\rho$s run through the set of critical zeroes of the classical Riemann zeta.
This description, combined with Kurokawa's work on multiple zeta functions (\cite{Kurokawa1992}) from 1992, leads to the hope that there are motives $h^0$ (``the absolute point''), $h^1$ and $h^2$ (``the absolute Lefschetz motive'')\index{absolute!Lefschetz motive} with zeta functions \begin{equation} \label{eqzeta} \zeta_{h^w}(s) \ = \ \mbox{\textsc{Det}}\Bigl(\frac 1{2\pi}(s\cdot\mathbf{1}-\Theta)\Bigl| H^w(\overline{\wis{Spec}(\mathbb{Z})},*_{\mathrm{abs}})\Bigr.\Bigr) \end{equation} for $w=0,1,2$. Deninger computed that $\zeta_{h^0}(s)=s/2\pi$ and $\zeta_{h^2}(s)=(s-1)/2\pi$. Manin proposed in \cite{Manin} the interpretation of $h^0$ as $\wis{Spec}(\mathbb{F}_1)$ and the interpretation of $h^2$ as the affine line over $\mathbb{F}_1$. The search for a proof of the Riemann Hypothesis became a main motivation to look for a geometric theory over $\mathbb{F}_1$. \\ About ten years after Manin's lecture notes \cite{Manin}, the first papers in which scheme theories over $\mathbb{F}_1$ were developed appeared, the first one being Deitmar's important paper \cite{Deitmarschemes2} in 2005 (cf. the author's second chapter for more details). One year earlier, Soul\'{e} had already published his $\mathbb{F}_1$-approach to varieties \cite{Soule}. We have seen that once we forget about addition, a good basic theory of Linear Algebra can be developed which agrees with Tits's initial observations on the symmetric groups (and their geometries) as limit objects over $\mathbb{F}_1$. In the author's second chapter, several versions of Algebraic Geometry will be described in detail. And in yet another chapter, Lorscheid explains his side of the story with much rigor. Those will all be based on the fundamental observation that if we want to develop a scheme theory over $\mathbb{F}_1$, we need algebraic objects in which we do not have addition at hand. One of the aims is to have a construction of base extension \begin{equation} \mathbb{F}_1 \longrightarrow \mathbb{Z} \end{equation} in order to be able to pass to Grothendieck's $\mathbb{Z}$-schemes from below.\\ We will revisit the combinatorial realizations of the obtained scheme theories again and again, and show that they are in perfect harmony with what was obtained in the present chapter. \newpage \frenchspacing \section{Basic absolute Linear Algebra} In this section we describe several aspects of absolute Linear Algebra, partially and loosely following the Kapranov-Smirnov document \cite{KapranovUN}. We will usually only consider vector spaces of finite or countably infinite dimension; in the second chapter of the author, detailed considerations will be made on dimensions of any cardinality. \subsection{Structural setting and mantra} As we want to see $\mathbb{F}_1$ as a field which is different from $\mathbb{F}_2$, one often depicts $\mathbb{F}_1$ as the set $\{0,1\}$ for which we only have the following operations: \begin{equation} 0\cdot 1 = 0 = 0\cdot 0\ \ \mbox{and}\ \ 1\cdot 1 = 1. \end{equation} So in absolute Linear Algebra we are not allowed to have addition of vectors and we have to define everything in terms of scalar multiplication.
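To see at once what this mantra amounts to (a small illustration which anticipates the formal definitions below): a $2$-dimensional vector space over $\mathbb{F}_1$ is just a pointed set $\{\mathbf{0},b_1,b_2\}$, and since vectors cannot be added, an invertible linear map can do nothing but permute $b_1$ and $b_2$. Written as matrices with at most one nonzero entry in each row and column, \begin{equation} \mathbf{GL}_2(\mathbb{F}_1) = \left\{ \left(\begin{array}{cc} 1 & 0\\ 0 & 1 \end{array}\right), \left(\begin{array}{cc} 0 & 1\\ 1 & 0 \end{array}\right) \right\} \cong \mathbf{S}_2, \end{equation} which is the case $d = 2$, $n = 1$ of the description of $\mathbf{GL}_d(\mathbb{F}_1^n)$ given further on.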
\\ \subsection{Field extensions of $\mathbb{F}_1$} Formally, for each $m \in \mathbb{N}^\times$ we define the {\em field extension}\index{field extension} $\mathbb{F}_1^m$\index{$\mathbb{F}_1^m$} of $\mathbb{F}_1$ of degree $m$ as the set $\{0\} \cup \mu_m$, where $\mu_m$ is the (multiplicatively written) cyclic group of order $m$, and $0$ is an absorbing element for the extended multiplication to $\{0\} \cup \mu_m$.\\ \subsection{Vector spaces over $\mathbb{F}_1^n$} At the level of $\mathbb{F}_1$ we cannot make a distinction between affine spaces and vector spaces (as a torsor, nothing happens), so in the vein of the previous section, a {\em vector/affine space}\index{vector space over $\mathbb{F}_1^n$}\index{affine space over $\mathbb{F}_1^n$} over $\mathbb{F}_1^n$, $n \in \mathbb{N}^\times$, is a triple $V = (\mathbf{0},X,\mu_n)$, where $\mathbf{0}$ is a distinguished point and $X$ a set, and where $\mu_n$ acts freely on $X$. Each $\mu_n$-orbit corresponds to a direction. If $n = 1$, we get the notion considered in the previous section. If the dimension is countably infinite, $\mu_n$ may be replaced by $(\mathbb{Z},+)$ (the infinite cyclic group). Another definition is needed when the dimension is larger | we will come back to this issue in due course.\\ \subsection{Basis} A {\em basis}\index{basis} of the $d$-dimensional $\mathbb{F}_1^n$-vector space $V = (\mathbf{0},X,\mu_n)$ is a set of $d$ elements in $X$ which are pairwise contained in different $\mu_n$-orbits (so it is a set of representatives of the $\mu_n$-action); here, formally, $X$ consists of $dn$ elements, and $\mu_n$ is the cyclic group with $n$ elements. (If $d$ is not finite one selects exactly one element in each $\mu_n$-orbit.) If $n = 1$, we only have $d$ elements in $X$ (which expresses the fact that the $\mathbb{F}_1$-linear group indeed is the symmetric group) | as such we obtain the {\em absolute basis}\index{absolute!basis}. Once a choice of a basis $\{b_i \vert i \in I\}$ has been made, any element $v$ of $V$ can be uniquely written as $b_j^{\alpha^u}$, for unique $j \in I$ and $\alpha^u \in \mu_n = \langle \alpha \rangle$. So we can also represent $v$ by a $d$-tuple with exactly one nonzero entry, namely $\alpha^u$ (in the $j$-th position). \subsection{Dimension} In the notation above, the {\em dimension}\index{dimension of vector space} of $V$ is given by $\mathrm{card}(X)/n = d$ (the number of $\mu_n$-orbits). \subsection{Field extension} Let $V$ be a (not necessarily finite dimensional) $d$-space over $\mathbb{F}_1^n$, $n$ finite, so that $\vert X_V \vert = dn$ (where we write $X_V$ for the set $X$ of $V$). For any positive integral divisor $m$ of $n$, with $n = mr$, $V$ can also be seen as a $dr$-space over $\mathbb{F}_1^{m}$.
Note that there is a unique cyclic subgroup $\mu_m$ of $\mu_n$ of size $m$, so there is only one way to do it (since we have to preserve the structure of $V$ in the process).\\ In terms of affine spaces, interpretation over a subfield can be depicted as follows.\\ \begin{equation} \begin{array}{ccc} \ensuremath{\mathbf{AG}}(d,\mathbb{F}_1^n) &\longrightarrow &\ensuremath{\mathbf{AG}}(dr,\mathbb{F}_1^{m})\\ &&\\ \downarrow& &\downarrow\\ &&\\ (X,\mu_n)& \longrightarrow& (X,\mu_m) \end{array} \end{equation} \subsection{Projective completion} By definition, the {\em projective completion}\index{projective!completion} of a combinatorial affine space $\ensuremath{\mathbf{AG}}(n,\mathbb{K})$, $n \in \mathbb{N}$ and $\mathbb{K}$ a field, is the projective space $\ensuremath{\mathbf{PG}}(n,\mathbb{K})$ of the same dimension and defined over the same field, which one obtains by adding a ``hyperplane at infinity''. The latter is a projective space of one dimension less of which the subspaces represent the parallel classes of subspaces of $\ensuremath{\mathbf{AG}}(n,\mathbb{K})$. For example, if $n = 2$ and $\mathbb{K} = \mathbb{R}$, we add a line at infinity which consists of the parallel classes of affine lines. Following the aforementioned considerations on projective completion, we immediately have the details on field extension for projective $\mathbb{F}_1^n$-spaces: starting from a projective $\mathbb{F}_1^n$-space $\mathbf{P} = \ensuremath{\mathbf{PG}}(d,\mathbb{F}_1^n)$, we choose an arbitrary point $x$, construct the affine space $\mathbf{P} \setminus \{x\}$, blow up as above, and then projectively complete.\\ From the motivic point of view (which will be considered in the second chapter of the author in this volume), projective completion is extremely important: we refer the reader to the aforementioned chapter for the meaning of the mysterious identity \begin{equation} [\mathbf{P}^n(k)] = \mathbf{1} + \mathbb{L} + \mathbb{L}^2 + \cdots + \mathbb{L}^n. \end{equation} \subsection{Direct sums} One defines a {\em direct sum}\index{direct sum} of (not necessarily finite dimensional) vector spaces $V$ and $W$, both defined over $\mathbb{F}_1^n$, as \begin{equation} V \oplus W := V \coprod W, \end{equation} where the distinguished points $\mathbf{0}_V$ and $\mathbf{0}_W$ are identified. \begin{theorem}[Dimension Theorem] We have that \begin{equation} \mathrm{dim}(V \oplus W) = \mathrm{dim}(V) + \mathrm{dim}(W). \end{equation} \end{theorem} \medskip \subsection{Tensor products} For defining the {\em tensor product}\index{tensor product} we start with vector spaces $V$ and $W$ defined over $\mathbb{F}_1^n$ and put \begin{equation} V \otimes W := V\times W, \end{equation} the vector space corresponding to the Cartesian product of free $\mu_n$-sets. Here, we identify $\mathbf{0}_V \times W$ with $V \times \mathbf{0}_W$. If the dimensions of $V$ and $W$ are respectively $d$ and $e$, then $V \otimes W$ consists of $den^2$ elements, so is of dimension $den$ over $\mathbb{F}_1^n$. In order to have a sensible notion of tensor product we have to eliminate the $n$-factor. We do this by identifying $(x,y)$ with $(x^\nu,y^{\nu^{-1}})$ for any $\nu$ in $\mu_n$ and call the corresponding vector space $V \otimes W$ (so the latter is the quotient of $V \times W$ by the anti-diagonal action of $\mu_n$).
If we denote the image of $(x,y)$ in $V \otimes W$ by $x \otimes y$, then the identification merely says we can pull the $\mu_n$-action through the tensor sign: \begin{equation} (x \otimes y)^{\nu} = x^{\nu} \otimes y = x \otimes y^{\nu}, \end{equation} with $\nu \in \mu_n$ arbitrary. The set $V \otimes W$ is equipped with this $\mu_n$-action. \begin{theorem}[Dimension for tensor product] We have that \begin{equation} \mathrm{dim}(V \otimes W) = \mathrm{dim}(V)\mathrm{dim}(W). \end{equation} \end{theorem} \subsection{Linear automorphisms} A {\em linear automorphism}\index{linear!automorphism} $\alpha$ of an $\mathbb{F}_1^n$-vectorspace $V$ with basis $\{b_i\}$ is of the form \begin{equation} \alpha(b_i) = b_{\sigma(i)}^{\beta_i} \end{equation} for some $\beta_i \in \mu_n$ and some permutation $\sigma \in \mathbf{S}_d$ (with $d$ the dimension of $V$). Then we have that \begin{equation} \mathbf{GL}_d(\mathbb{F}_1^n) \cong \mu_n \wr \mathbf{S}_d \cong (\mu_n)^d \rtimes \mathbf{S}_d. \end{equation} Elements of $\mathbf{GL}_d(\mathbb{F}_1^n)$ can be written as $d \times d$-matrices with precisely one element of $\mu_n$ in each row and column (and conversely, any such element determines an element of $\mathbf{GL}_d(\mathbb{F}_1^n)$). (Note that the reason that rows and columns have only one nonzero element is that we do not have addition in our vector space.) (In this setting, $\mathbf{S}_d$ is represented by $d\times d$-matrices with exactly one entry $1$ in each row and column | permutation matrices.) \subsection{Determinants} Using the setting of the previous paragraph, we define the {\em determinant}\index{determinant} of a linear automorphism with matrix $A$ as \begin{equation} \mathrm{det}(A) = \prod_i \beta_i \in \mu_n. \end{equation} One verifies that the determinant is multiplicative and independent of the choice of basis.\\ \subsubsection{Examples} Scalar multiplication by $\nu \in \mu_n$ gives an automorphism on any $d$-dimensional $\mathbb{F}_1^n$-vectorspace $V$ and the corresponding determinant is clearly $\nu^d$. That is, the det-functor ``remembers'' the dimension modulo $n$. These mod $n$ features are a recurrent theme in absolute Linear Algebra. Another example, which will become relevant when we come to reciprocity laws, is the following. Take $n=2$. Then, an $\mathbb{F}_1^2$-vector space $V$ of dimension $d$ is given by a set $X_V$ of $2d$ elements equipped with a free involution. Any linear automorphism $\alpha$ is represented by a $d \times d$-matrix $A$ having exactly one nonzero entry in every row and column, each such entry being equal to $+1$ or $-1$. Hence, the determinant $\mathrm{det}(A)$ is in $\{+1,-1\}$ (writing $\mu_2 = \{+1,-1\}$). On the other hand, by definition, the linear automorphism $\alpha$ determines a permutation on the $2d$ non-zero elements of $V$ (the elements of $X_V$); it also induces a permutation on the $d$ $\mu_2$-orbits. The connection between these two interpretations is that $\mathrm{det}(A) = \mathrm{sgn}(\alpha)$, the sign of the permutation which $\alpha$ induces on the $2d$ nonzero elements. \subsection{Power residue symbol} For a prime power $q = p^k$ with $q \equiv 1 \mod{n}$, the roots of unity $\mu_n$ are contained in $\mathbb{F}_q^\times$, so that $\mathbb{F}_q$ is a vectorspace over $\mathbb{F}_1^n$. For any field unit $a \in \mathbb{F}_q^\times$ we have the power residue symbol \begin{equation} \left(\begin{array}{c} a\\ \mathbb{F}_q \end{array}\right)_n = a^{\frac{q - 1}{n}} \in \mu_n.
\end{equation} On the other hand, multiplication by $a$ is a linear automorphism $A$ on the $\mathbb{F}_1^n$-vectorspace $\mathbb{F}_q$ and hence we can look at its determinant $\mathrm{det}(A)$. The characteristic one interpretation of a classical lemma by Gauss now asserts: \begin{theorem} The power residue symbol equals $\mathrm{det}(A)$. \end{theorem} \section{Synthetic geometry over $\mathbb{F}_1$} In this section, we consider good axioms for incidence geometries to be naturally {\em defined over} $\mathbb{F}_1$. This has already been done in various ways for schemes, and in the next chapters we will be concerned by this matter. Still, apart from some remarks made by Tits in his 1957 paper, not much seems to be known prior to our paper \cite{NotesI} (on which the present section is based). We want to distinguish between geometries defined over $\mathbb{F}_1$ (or {\em $\mathbb{F}_1$-geometries}\index{$\mathbb{F}_1$-geometry}) and their {\em $\mathbb{F}_1$-versions}. Let $\ensuremath{\mathscr{C}}$ be a class of incidence geometries (say, of Buekenhout-Tits geometries with some prescribed set of axioms, cf. below). If $\ensuremath{\mathscr{C}}$ (that is, all its elements) will be defined over $\mathbb{F}_1$, we want to have a (Weyl) functor at our disposal which maps any element of $\ensuremath{\mathscr{C}}$ to its ``$\mathbb{F}_1$-version''; this will be a possibly degenerate incidence geometry which also satisfies the aforementioned axioms, and it will be independent of the chosen element in $\ensuremath{\mathscr{C}}$. A model example is the class of generalized $m$-gons with $m \in \mathbb{N} \setminus \{0,1\}$ (that is, the rank $2$ spherical buildings); they will all be defined over $\mathbb{F}_1$ (whether or not they are themselves defined over a ``real'' field), and the images under the functor we seek to define are ordinary $m$-gons. The situation we want to describe can be best (and even almost precisely) compared to the principle of base extension/descent in scheme theory. In fact, in the second chapter of the author in this volume we will show that once this theory has been established, there will be an analogy between $\mathbb{F}_1$-incidence geometry and $\mathbb{F}_1$-scheme theory which goes much further than one would suspect at first. (The interplay between both theories enables one to study, for instance, large classes of groups (including Chevalley groups) as automorphism groups of schemes over $\mathbb{F}_1$.) The details can be found in \cite{NotesI}. \\ \subsection{Incidence geometries related to diagrams} In this chapter we will consider incidence geometries {\em related to diagrams}. An axiom system is then imposed by providing a {\em Buekenhout-Tits diagram}\index{Buekenhout-Tits!diagram} as explained in the next paragraph.\\ \subsection{Buekenhout-Tits diagrams} Let $\mathscr{D}$ be a labeled graph on $I$, where for $i,j \in I$ the label $\mathscr{D}_{ij}$ is a class of rank $2$ geometries. We say that $\mathscr{D}$ is a {\em Buekenhout-Tits diagram}\index{Buekenhout-Tits!diagram} for the geometry $\Gamma = (X,\mathbf{I},I,t)$ when for every flag $F$ of $\Gamma$ of corank 2, say $t(F) = I \setminus \{ i,j\}$, the residue $\Gamma_F$ belongs to the class of geometries $\mathscr{D}_{ij}$. This is a recursive definition for the concept of diagram in terms of what the labeled edges mean for rank 2 geometries.\\ \subsection{Some traditional labels} We introduce the nomenclature for some frequently used labels. 
\bigskip \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \end{tikzpicture} \hspace{0.5cm} $\mathbf{Di}$\index{$\mathbf{Di}$-label}: Every $i$-object is incident with every $j$-object. \bigskip \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \end{tikzpicture}\hspace{0.5cm} $\mathbf{A}_2$\index{$\mathbf{A}_2$-label}: The $i$-objects and $j$-objects form the points and lines of an axiomatic projective plane. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0.035) -- (2,0.035); \draw (1,-0.035) -- (2,-0.035); \end{tikzpicture}\hspace{0.5cm} $\mathbf{B}_2$\index{$\mathbf{B}_2$-label}: The $i$-objects and $j$-objects form the points and lines of a generalized quadrangle. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$m$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{I}_2(m)$ ($m\in \{6,8\}$)\index{$\mathbf{I}_2(m)$-label}: The $i$-objects and $j$-objects form the points and lines of a generalized hexagon/octagon. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$\mathbf{Af}$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{Af}$\index{$\mathbf{Af}$-label}: The $i$-objects and $j$-objects form the points and lines of an axiomatic affine plane. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$\mathbf{C}$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{C}$\index{$\mathbf{C}$-label}: The $i$-objects and $j$-objects form the points and edges of a complete graph.\\ Many other such diagrams are used, but those will be of no concern for our purposes.\\ \subsection{An example} The geometry of points, lines and planes in a $3$-dimen\-sio\-nal combinatorial projective space satisfies the axioms given by the diagram \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2,3}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (2,0) -- (3,0); \end{tikzpicture} \bigskip By the result of Veblen-Young, combinatorial $3$-dimensional projective spaces necessarily come from (left or right) vector spaces over a skew field. It is an easy exercise to prove the Veblen-Young axiom from the $\mathbf{A}_n$-diagram, so that the following holds. \begin{theorem} A (thick) Buekenhout-Tits geometry satisfying the $\mathbf{A}_n$ diagram axioms is a projective space. \end{theorem} So the axioms which are imposed on the Buekenhout-Tits geometry by the $\mathbf{A}_n$-diagram suffice to fully determine the incidence geometry. \subsection{The general Weyl functor} For the case of buildings, we have seen that the natural way to associate to a building its $\mathbb{F}_1$-building/version, is through the functor\index{$\underline{\mathscr{A}}$} \begin{equation} \underline{\mathscr{A}}: \mathbb{B} \rightarrow \mathbb{A}, \end{equation} from the category of (spherical) buildings to the category of apartments of such buildings\index{$\mathbb{A}$}\index{$\mathbb{B}$}. Let us use the same notation for the more general hypothetical functor which associates to a geometry (satisfying strong enough axioms), its ``$\mathbb{F}_1$-component'', and let us also keep the notation $\mathbb{A}$ for the more general version of Weyl geometries we are seeking. 
We want to see the objects in $\mathbb{A}$ also as objects of $\mathbb{B}$.\\ The $\mathbb{F}_1$-functor $\underline{\mathscr{A}}$ should have several properties (with respect to the images): \begin{itemize} \item[A1|] all lines should have at most $2$ different points; \item[A2|] an image should be a ``universal object'', in the sense that it should be a subgeometry of any thick geometry of the same ``type'' (defined over {\em any} field, if at all defined over one) of at least the same rank (as we will later see, it will correspond to scheme theoretic base descent to $\mathbb{F}_1$); \item[A3|] it should, of course, still carry the same axiomatic structure (so that $o \in \mathbb{A}$ and elements of $\underline{\ensuremath{\mathscr{A}}}^{-1}(o)$ carry the same Buekenhout-Tits diagram); \item[N|] it should give a geometric meaning to (certain) arithmetic formulas which express (certain) combinatorial properties of the finite thick geometries we want to define, assumed to have $s + 1$ points incident with every line, evaluated at the value $s = 1$; \item[F|] as $\mathbb{A}$ will be a subclass of the class of $\mathbb{F}_1$-geometries, it should consist precisely of the fixed elements of $\underline{\mathscr{A}}$. \end{itemize} \begin{proposition}[Conjecturally, \cite{NotesI}] \label{propoconj} Consider $\underline{\ensuremath{\mathscr{A}}}: \mathbb{B} \longrightarrow \mathbb{A}$. Then $\mathbb{A}$ is given by the solutions of \begin{equation} \underline{\ensuremath{\mathscr{A}}}(X) = (X). \end{equation} (The functor ``retracts'' $\mathbb{B}$ to $\mathbb{A}$.) \end{proposition} \begin{remark} {\rm Contrary to the base extension theory we will later speculate on, not every incidence geometry is suited to be defined over $\mathbb{F}_1$. (In general, without imposing extra structure on such a geometry, examples are too wild.) } \end{remark} Some other remarks need to be made. \begin{itemize} \item[A1$'$|] We work up to point-line duality: that is why we are allowed to ask, without loss of generality, that lines have at most two points. We do {\em not} ask that they have {\em precisely} two points, one motivation being e.g. (combinatorial) affine spaces over $\mathbb{F}_1$, in which any line has precisely one point. (And their scheme theoretic versions have precisely one {\em closed point}\index{closed!point}, cf. later chapters for a formal definition.) \item[CL|] Referring to the preceding remark, we already note that later on, $\mathbb{F}_1$-geometries with precisely $2$ points per line will correspond to {\em closed subschemes}\index{closed!subscheme} (cf. later chapters) of the appropriate ambient projective $\mathbb{F}_1$-space (as a scheme). (If they contain lines with only one point, one will need to invoke open sets to define the natural associated $\mathbb{F}_1$-scheme.) \item[A4|] In some sense, the number of lines through a point of an element $\Gamma$ of $\mathbb{A}$ should reflect the {\em rank} of the geometries in $\underline{\mathscr{A}}^{-1}(\Gamma)$. Think for example of the combinatorial affine and projective spaces over $\mathbb{F}_1$, and the ``Weyl geometries'' of buildings as described by Tits. Note that this is not a feature of incidence geometries in general, but it appears to be a property which is encoded in the $\underline{\mathscr{A}}$-image of an incidence geometry. \end{itemize} Our natural starting point in \cite{NotesI} was the category of Buekenhout-Tits geometries. 
The reader should note that all buildings are members (and so all Chevalley group schemes are automorphism group schemes of members). We only consider connected geometries | the general theory can be reduced to the connected theory as usual. We call this assumption ``C''. The first step is to classify the elements of $\mathbb{A}$. We take A1-A2-A3-A4-C to be the main axioms. After having determined $\mathbb{A}$ \cite{NotesI}, one defines the functor $\underline{\mathscr{A}}$, and the inverse image $\underline{\mathscr{A}}^{-1}(\mathbb{A})$ in $\mathbb{BT}$\index{$\mathbb{BT}$}, the category of Buekenhout-Tits geometries with obvious morphisms. This inverse image is denoted by $\mathbb{BT}_{\vert 1}$\index{$\mathbb{BT}_{\vert 1}$}. We refer the reader to \cite{NotesI} for more details. Let us just mention the rank one and two examples in $\mathbb{A}$. \subsection{Rank $1$ | $\mathbb{A}$} \bigskip \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \end{tikzpicture} \hspace{0.5cm} $\mathbf{Di}_1$: Every $i$-object is incident with every $j$-object. Over $\mathbb{F}_1$, this example has one line and one point, and they are incident. \bigskip \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \end{tikzpicture}\hspace{0.5cm} $\mathbf{A}_1$: The $i$-objects and $j$-objects form the points and lines of a combinatorial projective line over $\mathbb{F}_1$: two distinct points incident with a line. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$\mathbf{Af}$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{Af}$: The $i$-objects and $j$-objects form the points and lines of a combinatorial affine line over $\mathbb{F}_1$: one point incident with one line (the ``absolute flag''\index{absolute!flag}). \bigskip \subsection{Rank $2$ | $\mathbb{A}$} The rank $2$ examples of Buekenhout-Tits geometries are the most important ones, since all other examples (ignoring the rank $0$ and $1$ cases) are constructed from these via the axioms governed by the diagrams. By A4, any point is incident with at most two lines. Taking this property into account, the reader easily sees that the geometries must be of one of the following types (where at the end, we introduce a new type).\\ \bigskip \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \end{tikzpicture} \hspace{0.5cm} $\mathbf{Di}_2$: Every $i$-object is incident with every $j$-object. Over $\mathbb{F}_1$, this example has two lines and two points, and every point is incident with every line. \bigskip \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \end{tikzpicture}\hspace{0.5cm} $\mathbf{A}_2$: The $i$-objects and $j$-objects form the points and lines of a combinatorial projective plane over $\mathbb{F}_1$: an ordinary triangle. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0.035) -- (2,0.035); \draw (1,-0.035) -- (2,-0.035); \end{tikzpicture}\hspace{0.5cm} $\mathbf{B}_2$: The $i$-objects and $j$-objects form the points and lines of a generalized quadrangle, which is an ordinary $4$-gon.
\begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$m$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{I}_2(m)$ ($m\in \mathbb{N} \cup \{\infty\}$, $m \geq 5$): The $i$-objects and $j$-objects form the points and lines of an ordinary $m$-gon. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$\mathbf{Af}$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{Af}$: The $i$-objects and $j$-objects form the points and lines of a combinatorial affine plane over $\mathbb{F}_1$: one point incident with two lines which are incident with only that point. There is also an odd-one-out class of examples which enters the picture. \begin{tikzpicture}[style=thick, scale=1.3] \foreach \x in {1,2}{ \fill (\x,0) circle (2pt);} \draw (1,0) -- (2,0); \draw (1.5,.25) node {$\mathbf{U}$} ; \end{tikzpicture}\hspace{0.5cm} $\mathbf{U}$\index{$\mathbf{U}$-label}: The $i$-objects and $j$-objects form the points and lines of a connected tree of valency $\leq 2$, with at least one end point. (Lines with one point are allowed, so at the ends, one can have end points or end lines.) The unique examples of $\mathbf{Di}_2$, $\mathbf{A}_2$, $\mathbf{B}_2$ and $\mathbf{I}_2(m)$ are self-dual. The class described by $\mathbf{U}$ is also self-dual, while the point-line dual of the $\mathbf{Af}$-type geometry is one of the rank $1$ examples. \\ \subsection{Cardinality} By ``ordinary $\infty$-gons'' we mean connected trees with valency $2$ without end points. {\em The number of points is countable by the connectedness condition.} The same is true for elements of type $\mathbf{U}$.\\ \medskip \subsection{$\mathbb{F}_1$-Incidence geometries and base extension} An incidence geometry which is {\em defined over} $\mathbb{F}_1$\index{incidence!geometry!over $\mathbb{F}_1$} could also be regarded as a couple $(S,\underline{S})$, where $S \in \mathbb{BT}_{\vert 1}$, $\underline{S} \in \mathbb{A}$, and $\underline{S} \cong \underline{\ensuremath{\mathscr{A}}}(S)$. It is important to keep the category $\mathbb{S}$ in mind with objects the elements of $\underline{\ensuremath{\mathscr{A}}}^{-1}(\underline{S})$ and natural morphisms. Many of the known fundamental finite incidence geometries (think in the first place of generalized polygons) come in ``classes''; for instance, the $\ensuremath{\mathbf{Q}}(4,k)$\index{$\ensuremath{\mathbf{Q}}(4,k)$-functor} quadrangles can be seen as a functor which associates with each (possibly infinite) field $k$ the classical Moufang quadrangle $\ensuremath{\mathbf{Q}}(4,k)$ \cite{POL} (in fact, it is defined as a $4$-dimensional hypersurface, so can also be regarded as a $\mathbb{Z}$-scheme). It is convenient to consider the subcategory $\mathbb{FBT}_{\vert 1}$\index{$\mathbb{FBT}_{\vert 1}$} of $\mathbb{BT}_{\vert 1}$ which consists of those elements of $\mathbb{BT}_{\vert 1}$ which are members of infinite classes which arise as a functor from the category of fields to $\mathbb{BT}_{\vert 1}$. So to each $\Gamma \in \mathbb{FBT}_{\vert 1}$ we can associate at least one such functor $F_{\Gamma}$ ($F_{\Gamma}$ need not be unique, as often over small finite fields such classes intersect in classical examples). 
In $\mathbb{FBT}_{\vert 1}$ we can define a more refined version of $\mathbb{F}_1$-incidence geometry: it is a triple of the form \begin{equation} (F_{\Gamma},\Gamma,\underline{\Gamma}), \end{equation} where $(\Gamma,\underline{\Gamma})$ is as above. In this context we call $F_{\Gamma}(k)$ a {\em $k$-extension}\index{$k$-extension} of $\Gamma$.
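To fix ideas, here is one purely illustrative instance of such a triple (our own example, using the notation $\mathbf{PG}(2,k)$ for the Desarguesian projective plane over the field $k$; we only use the fact, recorded above, that the $\mathbb{F}_1$-version of a projective plane is an ordinary triangle). Writing $\mathbf{PG}(2,\cdot)$ for the functor which associates with each field $k$ the plane $\mathbf{PG}(2,k)$, one may consider
\begin{equation*}
\big(F_{\Gamma},\,\Gamma,\,\underline{\Gamma}\big)\,=\,\big(\mathbf{PG}(2,\cdot),\,\mathbf{PG}(2,k),\,\underline{\mathscr{A}}(\mathbf{PG}(2,k))\big),
\end{equation*}
where $\underline{\mathscr{A}}(\mathbf{PG}(2,k))$ is an ordinary triangle. Every $\mathbf{PG}(2,k')$ is then a $k'$-extension in the sense just defined, and all of these extensions share the ordinary triangle as their common $\mathbb{F}_1$-version.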
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction}\label{Section.1 Introduction} A central topic in analysis is to understand the influence of geometry on the behavior of solutions to nonlinear evolution partial differential equations. Dispersive equations are well-known examples, exhibiting quite different phenomena on curved manifolds. In the study of nonlinear dispersive equations, there are two main tools: the \textit{Strichartz inequality} and the \textit{smoothing property}. The Strichartz inequality usually refers to $L^{p}L^{q}$ space-time mixed norm estimates, while smoothing properties are Sobolev $L^{2}$ space-time estimates. Both approaches rely on global analysis for solutions to the corresponding linearized equations.

Non-compact symmetric spaces are Riemannian manifolds with nonpositive sectional curvature. Because of their exponential volume growth at infinity and the validity of the Kunze-Stein phenomenon, they enjoy better dispersive properties and hence stronger Strichartz inequalities. Such phenomena were first observed in real hyperbolic spaces, which are the simplest models of non-compact symmetric spaces of rank one, see \cite{Fon97,Ion00a,Ban07,AnPi09,IoSt09,MeTa11,APV12,AnPi14}. The generalization to arbitrary rank was recently achieved in \cite{AnZh23,AMPVZ23}. On the other hand, on general \textit{compact} manifolds, one has only local-in-time Strichartz inequalities with some loss of derivatives. See, for instance, \cite{Bou93a,Bou93b,BGT04,GePi10}. Motivated by these interesting phenomena for the Strichartz inequality, we naturally ask: as another primary tool in the study of dispersive equations, {\it how does the smoothing property work on non-compact symmetric spaces?}

Smoothing properties have been extensively studied in the Euclidean setting over the past three decades. See \cite{Sjo87,CoSa88,Veg88,KaYa89,BeDe91,Wat91,BeKl92,Sim92,KPV98,Sug98,Wal02,Hos03,Sug03,Chi08,RuSu12,DAn15,RuSu16} and references therein. We will discuss some of these works in detail in the following subsections by comparing them with our results on symmetric spaces. In non-Euclidean settings, there is also much literature on \textit{local-in-time} smoothing properties. See, for instance, \cite{CKS95,Doi96,Doi00,Bur04,MMT08,Dat09,BGH10,ChWu13,ChMe14,BHS20}. The present paper focuses on \textit{global-in-time} smoothing properties, which are less well understood than the rich local-in-time theory. As highlighted in \cite{RoTa07}, the main difficulty is that, besides the semiclassical analysis at high frequency, one also requires a more detailed analysis at low and medium frequencies. In that paper, Rodnianski and Tao established the global-in-time smoothing estimate with \textit{inhomogeneous weights} on asymptotically flat manifolds obeying the non-trapping condition. See also \cite{BoHa10,VaWu10,Bou11} for relevant estimates in this setting. Similar estimates with inhomogeneous weights were previously considered on symmetric spaces \cite{Kai14} and on graded Lie groups \cite{Man17}. See also \cite{LLOS18,GeLe21} for recent progress on other related problems in hyperbolic spaces, such as smoothing estimates for the Schrödinger equation with potentials or $L^p$-estimates with $p>2$.

In this paper, we begin by establishing the Kato-type smoothing properties, namely, global-in-time smoothing estimates with \textit{homogeneous weights} for the Schrödinger equation. This is achieved by proving the resolvent estimate and the Stein-Weiss inequality, which are of independent interest as well.
We emphasize that our setting does not enjoy the dilation property, hence the common rescaling method fails. Our second main result is to generalize the comparison principles from \cite{RuSu12} to non-compact symmetric spaces. This robust method allows us to deduce different types of smoothing properties for the wave equation, the Klein-Gordon equation, and Schrödinger-type equations with general orders (even with some time-variable coefficients). In particular, we observe that some estimates which are known to fail on the Euclidean plane hold on the hyperbolic plane. Most of our arguments rely on harmonic analysis on Riemannian symmetric spaces.

For simplicity, from now on, a symmetric space always means one of non-compact type, and a smoothing estimate always refers to a global-in-time estimate. We will denote by $\Delta$ the Laplace-Beltrami operator on an $n$-dimensional symmetric space $\mathbb{X}$ and by $D$ its shifted Laplacian, see the following section for more details. To distinguish the two settings, let $\Delta_{\mathbb{R}^N}$ be the usual Laplacian in $\mathbb{R}^{N}$ ($N\ge2$) and $\widetilde{D}_{x}=(-i\partial_{x_{1}},\,\ldots\,,-i\partial_{x_{N}})$. For $x\in\mathbb{X}$ or $x\in\mathbb{R}^{N}$, we denote by $|x|$ the (geodesic) distance between $x$ and the origin, and let $\langle{x}\rangle=(1+|x|^{2})^{1/2}$. Throughout the paper, the notation $a\lesssim{b}$ between two positive expressions means that $a\le{C}b$ for some constant $C>0$, and $a\asymp{b}$ means $a\lesssim{b}\lesssim{a}$. \subsection{Smoothing estimates} Consider the free Schrödinger equation in $\mathbb{R}^{N}$: \begin{align}\label{S1 Sch} (i\partial_{t}+\Delta_{\mathbb{R}^{N}})\,u(t,x)\,=\,0, \qquad\,u(0,x)\,=u_{0}(x), \end{align} whose solution is given by $u(t,x)=e^{it\Delta_{\mathbb{R}^{N}}}u_{0}(x)$. It is known that the solution operator $e^{it\Delta_{\mathbb{R}^{N}}}$ preserves the $L^2$-norm for each fixed time $t\in\mathbb{R}$. The smoothing property is a regularity improvement in the sense that we gain extra regularity (in comparison with the initial data) after integrating the solution to \eqref{S1 Sch} in time. More precisely, the solution to \eqref{S1 Sch} satisfies the smoothing property: \begin{align}\label{S1 BSmoothing} \|B(x,\widetilde{D}_{x})u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{R}^{N})}\, \lesssim\,\|u_{0}\|_{L^{2}(\mathbb{R}^{N})}, \end{align} where $B(x,\widetilde{D})$ is one of the following operators: \begin{table}[ht] \setlength{\tabcolsep}{20pt} \renewcommand{\arraystretch}{2} \begin{tabular}{|c|c|c|c|c|} \hline \cellcolor{gray!25} Type & \cellcolor{gray!25} $B(x,\widetilde{D})$ & \cellcolor{gray!25} Regularity condition \\ \hline (I) &$|x|^{\alpha-1}|\widetilde{D}|^{\alpha}$ & $1-\frac{N}{2}<\alpha<\frac12$ \\ \hline (II) & $\langle{x}\rangle^{-s}|\widetilde{D}|^{\frac12}$ & $s>\frac12$\\ \hline (III) &$\langle{x}\rangle^{-s}\langle{\widetilde{D}}\rangle^{\frac12}$ & $s\ge1$ ($s>1$ if $N=2$) \\ \hline \end{tabular} \vspace{10pt} \caption{Regularity conditions in $\mathbb{R}^{N}$ ($N\ge2$) for the Schrödinger equation.} \label{S1 TableSchRN} \end{table} The \textit{restriction theorem} and the \textit{resolvent estimate} (or their variants) are the two main standard methods used to deduce the above smoothing estimates. In \cite{RuSu12}, the authors introduced two other tools: the \textit{canonical transform} and the \textit{comparison principle}.
The former helps to reduce smoothing estimates to certain one-dimensional estimates, and the latter allows one to transfer smoothing estimates among different equations. In this paper, we will deduce the Kato-type smoothing property by establishing the resolvent estimate, and widen its regularity range by proving the Stein-Weiss inequality. By extending the comparison principle to symmetric spaces, we deduce new smoothing estimates for wave and Klein-Gordon equations. \subsection{Statement of main results} Consider a non-compact symmetric space $\mathbb{X}=G/K$ of rank $\ell\ge1$, where $G$ and $K$ are suitable Lie groups. Let $n\ge2$ and $\nu\ge3$ be its manifold dimension and dimension at infinity (or pseudo-dimension, see Section \ref{Section.2 Prelim} for more details about these notations). Let $\Delta$ be the Laplace-Beltrami operator on $\mathbb{X}$ and denote by $D^{2}=-\Delta-|\rho|^{2}$ its shifted Laplacian. Here $|\rho|^{2}$ is the bottom of the $L^2$ spectrum of $-\Delta$ on $\mathbb{X}$. We consider the natural Schrödinger equation \begin{align}\label{S1 Schrodinger X} \begin{cases} i\partial_{t}u(t,x)\,+\,D_{x}^{2}u(t,x)\,=\,0, \qquad\,t\in\mathbb{R},\,\,\,x\in\mathbb{X},\\[5pt] u(0,x)\,=\,u_{0}(x), \end{cases} \end{align} whose solution is given by $u(t,x)=e^{itD_{x}^{2}}u_{0}(x)$. The first part of this article focuses on the following smoothing property. \begin{theorem}[Kato-type smoothing property]\label{main thm smoothing} Let $\mathbb{X}$ be a symmetric space of dimension $n\ge3$ and pseudo-dimension $\nu\ge3$. Suppose that $1-\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace<\alpha<\frac{1}{2}$. Then, the solution to the Schrödinger equation \eqref{S1 Schrodinger X} satisfies the smoothing property \begin{align}\label{main thm smoothing schrodinger} \||x|^{\alpha-1}\,D_{x}^{\alpha}\,u \|_{L^2(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\,\|u_{0}\|_{L^2(\mathbb{X})}. \end{align} Moreover, if $\mathbb{X}$ is of dimension $n=2$, then \eqref{main thm smoothing schrodinger} holds for all $-\frac{1}{2}<\alpha<\frac{1}{2}$. \end{theorem} \begin{remark} The regularity condition in Theorem \ref{main thm smoothing} is optimal in some special cases. For example, when $G/K$ is a symmetric space with $G$ complex, the manifold dimension $n$ and the pseudo-dimension $\nu$ coincide, and the estimate \eqref{main thm smoothing schrodinger} fails for any $\alpha\le1-\frac{\nu}{2}$ or $\alpha\ge\frac12$, see Remark \ref{S3 remark optimal}. \end{remark} \begin{remark}\label{S1 Kato} In Kato's theory, for a self-adjoint operator $H$ in a separable Hilbert space $\mathcal{H}$, one says that a densely-defined closed operator $A$ on $\mathcal{H}$ is $H$-smooth if \begin{align*} |\im\big((H-\zeta)^{-1}\,A^{*}f,\,A^{*}f\big)|\, \lesssim\, \|f\|_{\mathcal{H}}^{2}, \qquad\forall\,\zeta\in\mathbb{C}\smallsetminus{\mathbb{R}}. \end{align*} Moreover, it is known that $A$ is $H$-smooth if and only if the smoothing property \begin{align*} \int_{\mathbb{R}}\diff{t}\, \|Ae^{-itH}f\|_{\mathcal{H}}^{2}\, \lesssim\,\|f\|_{\mathcal{H}}^{2} \end{align*} holds, see \cite{Kat66,KaYa89}. Theorem \ref{main thm smoothing} is equivalent to saying that, for suitable $\alpha$, the operator $|x|^{\alpha-1}D_{x}^{\alpha}$ is $D^{2}$-smooth on $\mathbb{X}$. In the $2$-dimensional case, $\mathbb{X}=\mathbb{H}^{2}$ is the hyperbolic plane. It has rank $\ell=1$ and pseudo-dimension $\nu=3$.
Theorem \ref{main thm smoothing} shows that $|x|^{\alpha-1}D^{\alpha}$ is $D^{2}$-smooth on $\mathbb{H}^{2}$ for all $-\frac{1}{2}<\alpha<\frac{1}{2}$, while the operator $|x|^{\alpha-1}\widetilde{D}^{\alpha}$ is $(-\Delta_{\mathbb{R}^{2}})$-smooth on $\mathbb{R}^{2}$ if and only if $0<\alpha<\frac{1}{2}$. It follows in particular that the weight $|x|^{-1}$ is $D^{2}$-smooth on $\mathbb{H}^{2}$. \end{remark} Note that the usual rescaling argument, which is used to establish the smoothing estimate with homogeneous weights in the Euclidean space, is not valid in the current setting. We require a delicate analysis, with different arguments near and away from the origin. Our Theorem \ref{main thm smoothing} is achieved by combining the following resolvent estimate and the Stein-Weiss inequality. \begin{theorem}[Resolvent estimate]\label{main thm resolv} Let $\mathbb{X}$ be a symmetric space of rank $\ell\ge1$. Suppose that $-\frac{1}{2}<\alpha<\frac{1}{2}$ if $\ell=1$ and $1-\frac{\ell}{2}<\alpha<\frac{1}{2}$ if $\ell\ge2$. Then, for all $f\in{L^2}(\mathbb{X})$, we have \begin{align}\label{S3 resolvent} \sup_{\zeta\in\mathbb{C}\smallsetminus\mathbb{R}}\, |\im(D^{2\alpha}(D^{2}-\zeta)^{-1}\,f,\,f)_{L^{2}(\mathbb{X})}|\, \lesssim\, \||\cdot|^{1-\alpha}\,f\|_{L^{2}(\mathbb{X})}^{2}. \end{align} \end{theorem} On symmetric spaces of higher rank ($\ell\ge2$), Theorem \ref{main thm resolv} shows that the smoothing property \eqref{main thm smoothing schrodinger} holds when $1-\frac{\ell}{2}<\alpha<\frac12$. We widen this regularity range to $1-\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace<\alpha<\frac{1}{2}$ by proving the Stein-Weiss inequality on symmetric spaces. This inequality is known as the Hardy-Littlewood-Sobolev inequality with double weights, see \cite{StWe58}. As an elementary tool in harmonic analysis, it has been extended to many other non-Euclidean settings, such as Heisenberg groups \cite{HLZ12}, Carnot groups \cite{GMS10}, and homogeneous Lie groups \cite{KRS19}. Recall that these three settings enjoy the dilation property. On symmetric spaces, the $L^{p}$-$L^{q}$-boundedness of the operator $(-\Delta-|\rho|^{2}+\xi^{2})^{-\sigma/2}$, with $\xi\ge0$ and $\re\sigma\ge0$, was progressively established in the 1990s, see for instance \cite{Str83,Var88,Loh89,Ank92,CGM93}. We refer to \cite[pp. 109-111]{CGM93} for a review of these works. See also \cite{BFL08,MaSa08,Bec15,LLY20,Bec21} for studies on the best constant problem in real hyperbolic spaces. The authors in \cite{KKR23} have recently established the Stein-Weiss inequality on symmetric spaces for the operator $(-\Delta-|\rho|^{2}+\xi^{2})^{-\sigma/2}$, with $\xi>0$ large enough. In that case, the corresponding convolution kernel enjoys additional exponential decay at infinity. Hence, their approach cannot be applied to our limiting case $\xi=0$, which requires a more delicate analysis. The following theorem is an $L^2$ generalization of the Stein-Weiss inequality associated with the operator $D^{-\sigma}=(-\Delta-|\rho|^{2})^{-\sigma/2}$ on symmetric spaces. \begin{theorem}[Stein-Weiss inequality]\label{main thm SW} Let $\mathbb{X}$ be a symmetric space of dimension $n\ge2$ and pseudo-dimension $\nu\ge3$. Suppose that $\sigma>0$ and $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$ satisfy $\sigma=\gamma_{1}+\gamma_{2}$.
Then, for all $f\in{L^{2}}(\mathbb{X})$, we have \begin{align}\label{main thm SW ineq} \|\,|\cdot|^{-\gamma_{1}}\,D^{-\sigma}\,f\|_{L^{2}(\mathbb{X})}\, \lesssim\, \||\cdot|^{\gamma_{2}}\,f\|_{L^{2}(\mathbb{X})}. \end{align} \end{theorem} \begin{remark} The condition $\gamma_{1}+\gamma_{2}=\sigma>0$ excludes the case where $\gamma_{1}=\gamma_{2}=0$. Recall that the operator $D^{-\sigma}$ is not $L^{2}$-bounded for any $\sigma>0$, see for instance \cite{Loh89}. When $\sigma=0$, the inequality \eqref{main thm SW ineq} holds if and only if $\gamma_{1}=\gamma_{2}=0$. \end{remark} \begin{remark}\label{S1 rmk SWoptimal} Since the convolution kernel of $D^{-\sigma}$ behaves differently depending on whether it is close to or away from the origin, the manifold dimension and the pseudo-dimension both play essential roles, and the conditions $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$ in Theorem \ref{main thm SW} are both necessary, see Remark \ref{S3 rmk SWoptimal}. However, we observe from Theorem \ref{main thm resolv} that the smoothing property \eqref{main thm smoothing schrodinger} holds in rank one for all $1-\frac{\nu}{2}<\alpha<\frac{1}{2}$ ($\nu=3$). In other words, its regularity range depends only on the pseudo-dimension, which is related to the vanishing order of the Plancherel measure at the origin. We conjecture that it would be the same case in general ranks, but it is difficult to reach by relying only on the Stein-Weiss inequality. \end{remark} In the second part of this paper, we extend the comparison principles from \cite{RuSu12} to symmetric spaces, see Theorem \ref{S4 Comparison principle} and Corollary \ref{S4 Secondary comparison principle}. Based on the smoothing properties of the Schrödinger equation, we deduce different types of smoothing estimates for other equations. Let us start with the Schrödinger-type equations with general orders. Consider the Cauchy problem with order $m>0$: \begin{align}\label{S1 CPm} (i\partial_{t}+D_{x}^{m})\,u(t,x)=\,0, \qquad\,u(0,x)\,=\,u_{0}(x), \end{align} whose solution is given by $u(t,x)=e^{itD_{x}^{m}}u_{0}(x)$. The following result describes all three types of smoothing properties on symmetric spaces for the Schrödinger-type equation \eqref{S1 CPm}. \begin{theorem}\label{main thm SchSmoothing} Let $\mathbb{X}$ be a symmetric space of rank $\ell\ge1$, dimension $n\ge2$ and pseudo-dimension $\nu\ge3$. Suppose that $m>0$ and $A(x,D)$ is defined as in Table \ref{S1 TableSch}. Then $A(x,D)$ is $D^{m}$-smooth on $\mathbb{X}$, namely, the solution to the Cauchy problem \eqref{S1 CPm} satisfies the smoothing property: \begin{align}\label{main SmoothSchA} \|A(x,D_{x})\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}. 
\end{align} \end{theorem} \begin{table}[ht] \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{2} \begin{tabular}{|c|c|c|c|} \hline \cellcolor{gray!25} Type & \cellcolor{gray!25} $A(x,D)$ & \cellcolor{gray!25} $\ell=1$ & \cellcolor{gray!25} $\ell\ge2$ \\ \hline \textnormal{(I)} & $|x|^{\alpha-\frac{m}{2}}D^{\alpha}$ & $\frac{m-3}{2}<\alpha<\frac{m-1}{2}$ & $\frac{m-\min\lbrace{n,\nu}\rbrace}{2}<\alpha<\frac{m-1}{2}$\\ \hline \textnormal{(II)} &$\langle{x}\rangle^{-s}D^{\frac{m-1}{2}}$ & \multicolumn{2}{c|}{$s>\frac{1}{2}$ \textnormal{and} $m>0$} \\ \hline \textnormal{(III)} &$\langle{x}\rangle^{-s}\langle{D}\rangle^{\frac{m-1}{2}}$ & \multicolumn{2}{c|}{{$s\ge\frac{m}{2}$ \textnormal{and} $1<m<\nu$}} \\ \hline \end{tabular} \vspace{10pt} \caption{Regularity conditions on $\mathbb{X}$ for the Schrödinger-type equations with order $m>0$.} \label{S1 TableSch} \vspace{-15pt} \end{table} \begin{remark}\label{S1 RemarkHn} In particular, we can write the above smoothing estimates in hyperbolic spaces $\mathbb{H}^{n}$ as follows: \begin{align*} \||x|^{\alpha-\frac{m}{2}}\,D_{x}^{\alpha}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{H}^{n})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{H}^{n})}, \qquad\,\tfrac{m-3}{2}<\alpha<\tfrac{m-1}{2}\,\,\,\textnormal{and}\,\,\,m>0, \\[5pt] \|\langle{x}\rangle^{-s}\,D_{x}^{\frac{m-1}{2}}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{H}^{n})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{H}^{n})}, \qquad\,s>\tfrac{1}{2}\,\,\,\textnormal{and}\,\,\,m>0, \\[5pt] \|\langle{x}\rangle^{-s}\,\langle{D_{x}}\rangle^{\frac{m-1}{2}}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{H}^{n})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{H}^{n})}, \qquad\,s\ge\tfrac{m}{2}\,\,\,\textnormal{and}\,\,\,1<m<3, \end{align*} where the regularity conditions depend only on the pseudo-dimension $\nu=3$. \end{remark} Let us compare the regularity conditions on symmetric spaces with the ones in the Euclidean setting. Recall that $B(x,\widetilde{D})$ is $\widetilde{D}_{x}^{m}$-smooth on $\mathbb{R}^{N}$ if it belongs to one of the following: \begin{table}[ht] \setlength{\tabcolsep}{20pt} \renewcommand{\arraystretch}{2} \begin{tabular}{|c|c|c|c|c|} \hline \cellcolor{gray!25} Type & \cellcolor{gray!25} $B(x,\widetilde{D})$ & \cellcolor{gray!25} Regularity condition in $\mathbb{R}^{N}$ \\ \hline (I) &$|x|^{\alpha-\frac{m}{2}}|\widetilde{D}|^{\alpha}$ & $\frac{m-N}{2}<\alpha<\frac{m-1}{2}\,\,\,\textnormal{and}\,\,\,m>0$ \\ \hline (II) & $\langle{x}\rangle^{-s}|\widetilde{D}|^{\frac{m-1}{2}}$ & $s>\frac{1}{2}$ and $m>0$ \\ \hline (III) &$\langle{x}\rangle^{-s}\langle{\widetilde{D}}\rangle^{\frac{m-1}{2}}$ & $s\ge\frac{m}{2}$ and $1<m<N$ ($s>\frac{m}{2}$ if $N=2$)\\ \hline \end{tabular} \vspace{10pt} \caption{Regularity conditions on $\mathbb{R}^{N}$ ($N\ge2$) for the Schrödinger-type equations with order $m>0$.} \vspace{-15pt} \end{table} For the Schrödinger equation ($m=2$) in $\mathbb{R}^{N}$, the Type (I) smoothing estimate was established by Kato and Yajima for $0\le{\alpha}<\frac12$ when $N\ge3$ and $0<{\alpha}<\frac12$ when $N=2$, see \cite{KaYa89}. Sugimoto \cite{Sug98} extended their regularity range to $1-\frac{N}{2}<\alpha<\frac12$ for all $N\ge2$, which is sharp and clarifies why $\alpha=0$ must be excluded when $N=2$. As a consequence, we know that $|x|^{-1}$ is not $(-\Delta_{\mathbb{R}^{2}})$-smooth on $\mathbb{R}^{2}$ as we mentioned in Remark \ref{S1 Kato}. 
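For the reader's convenience, let us also record the elementary Euclidean scaling computation behind the homogeneity balance in the Type (I) weights; this is a classical observation (recorded here by us, not taken from the works quoted above), and it is precisely this dilation argument which is unavailable on $\mathbb{X}$. For $m=2$ and $\lambda>0$, set $u_{0}^{\lambda}(x)=u_{0}(\lambda x)$, so that $e^{it\Delta_{\mathbb{R}^{N}}}u_{0}^{\lambda}(x)=\big(e^{i\lambda^{2}t\Delta_{\mathbb{R}^{N}}}u_{0}\big)(\lambda x)$. A change of variables then gives
\begin{align*}
\big\||x|^{\alpha-1}\,|\widetilde{D}_{x}|^{\alpha}\,e^{it\Delta_{\mathbb{R}^{N}}}u_{0}^{\lambda}\big\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{R}^{N})}\,
&=\,\lambda^{-\frac{N}{2}}\,\big\||x|^{\alpha-1}\,|\widetilde{D}_{x}|^{\alpha}\,e^{it\Delta_{\mathbb{R}^{N}}}u_{0}\big\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{R}^{N})},\\[5pt]
\|u_{0}^{\lambda}\|_{L^{2}(\mathbb{R}^{N})}\,
&=\,\lambda^{-\frac{N}{2}}\,\|u_{0}\|_{L^{2}(\mathbb{R}^{N})},
\end{align*}
so both sides of the Type (I) estimate scale in the same way and one may reduce to data of unit scale. On $\mathbb{X}$ there is no such family of dilations, which is why the analysis near and away from the origin has to be carried out separately.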
In \cite[Theorem 5.2]{RuSu12}, the authors obtained the Type (I) estimate in $\mathbb{R}^{N}$ for all $N\ge2$ and $m>0$ satisfying $\frac{m-N}{2}<\alpha<\frac{m-1}{2}$, and pointed out that all the cases with different orders $m$ are equivalent to each other according to the comparison principle. This is also the case on symmetric spaces. The Type (II) smoothing estimate in $\mathbb{R}^{N}$ ($N\ge2$) was proved in \cite{BeKl92} when $m=2$ and in \cite{Chi08} when $m>1$. A simpler proof based on the canonical transform was given in \cite[Theorem 5.1]{RuSu12}, where the authors showed that the type (II) estimate holds in fact for all $m>0$. By extending the arguments carried out in \cite{Chi08}, Kaizuka proved this estimate on symmetric spaces for $m>1$, see \cite{Kai14}. Using the comparison principle, we show that it holds for all $m>0$ as in the Euclidean setting. Note that $m=1$ is an important case corresponding to wave-type equations. The Type (III) estimate has also been partially proved in \cite{Kai14}: the author showed that it holds on higher rank ($\ell\ge2$) symmetric spaces for all $1<m<\ell$. As a consequence of our improved inhomogeneous Stein-Weiss inequality (Corollary \ref{S3 SWCor}), the Type (III) estimate holds in fact on general symmetric spaces in the full range $1<m<\nu$. Note that this regularity condition is sharp and depends only on the pseudo-dimension $\nu$. In particular, it indicates that $\langle{x}\rangle^{-1}\langle{D}\rangle^{1/2}$ is $D^{2}$-smooth on $\mathbb{H}^{2}$. Recall that $\langle{x}\rangle^{-s}\langle{\widetilde{D}}\rangle^{1/2}$ is $(-\Delta_{\mathbb{R}^2})$-smooth on $\mathbb{R}^{2}$ if and only if $s>1$, see \cite{KaYa89,Wal02, Chi08, RuSu12}. Based on the above smoothing properties of Schrödinger-type equations and using comparison principles, we deduce smoothing estimates for some \textit{time-degenerate} and \textit{relativistic} Schrödinger equations as well, see Section \ref{subsection other examples}. Other noteworthy consequences of the comparison principles are the following smoothing properties of the wave and Klein-Gordon equations. \begin{theorem}\label{main smoothingKG} Let $\mathbb{X}$ be a symmetric space of rank $\ell\ge1$, dimension $n\ge2$ and pseudo-dimension $\nu\ge3$. Consider the Cauchy problem \begin{align}\label{S1 KG} \begin{cases} (\partial_{t}^{2}+D_{x}^{2}+\zeta)\,u(t,x)=\,0,\\[5pt] u(0,x)\,=\,u_{0}(x),\,\partial_{t}|_{t=0}\,u(t,x)\,=\,u_{1}(x), \end{cases} \end{align} which is the wave equation when $\zeta=0$ and the Klein-Gordon equation when $\zeta>0$. Let $s>\frac12$. Suppose that $-1<\beta<0$ when $\ell=1$ and $1-\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace<\beta<\frac12$ when $\ell\ge2$. Then, the solution to the Cauchy problem \eqref{S1 KG} satisfies the following smoothing properties. \begin{itemize} \item (Wave equation) If $\zeta=0$ , we have \begin{align} \|\langle{x}\rangle^{-s}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}\,+\,\|D_{x}^{-1}\,u_{1}\|_{L^{2}(\mathbb{X})}, \label{S1 wave1}\\[5pt] \||x|^{\beta-\frac{1}{2}}\,D_{x}^{\beta}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}\,+\,\|D_{x}^{-1}\,u_{1}\|_{L^{2}(\mathbb{X})}. \label{S1 wave2} \end{align} \item (Klein-Gordon equation) If $\zeta>0$, we have \begin{align}\label{S1 wave3} \|\langle{x}\rangle^{-1}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}\,+\,\|D_{x}^{-1}\,u_{1}\|_{L^{2}(\mathbb{X})}. 
\end{align} \end{itemize} \end{theorem} \begin{remark}\label{S1 RmkWave} Smoothing properties of wave-type equations are well-known in the Euclidean setting, see for instance \cite{Ben94,RuSu12}. Since we deduce the above estimates from the Schrödinger equation, we observe different phenomena in $2$-dimensional cases as well. On the one hand, the estimate \eqref{S1 wave2} holds on $\mathbb{H}^{2}$ for all $-1<\beta<0$, while a similar estimate holds on $\mathbb{R}^{2}$ only for $-\frac12<\beta<0$. On the other hand, an estimate such as \eqref{S1 wave3} does not hold on $\mathbb{R}^2$ unless one considers the weight $\langle{x}\rangle^{-s}$ with $s>1$ instead of $\langle{x}\rangle^{-1}$. \end{remark} \subsection{Layout} This paper is organized as follows. After a short review of harmonic analysis on symmetric spaces, we prove the Stein-Weiss inequality and establish the smoothing properties of the Schrödinger equation in Section \ref{Section.3 Smoothing}. In Section \ref{Section.4 Comparison}, we extend the comparison principles to symmetric spaces and deduce different types of smoothing properties for some other equations. Two technical lemmas are placed in Appendix. \section{Preliminaries}\label{Section.2 Prelim} In this section, we review briefly harmonic analysis on Riemannian symmetric spaces of non-compact type. We adopt the standard notation and refer to \cite{Hel78,Hel94,Hel00,GaVa88} for more details. \subsection{Non-compact symmetric spaces} Let $G$ be a semisimple Lie group, connected, non-compact, with the finite center, and let $K$ be a maximal compact subgroup of $G$. The homogeneous space $\mathbb{X}=G/K$ is a Riemannian symmetric space of non-compact type. Let $\mathfrak{g}=\mathfrak{k}\oplus\mathfrak{p}$ be the Cartan decomposition of the Lie algebra $\mathfrak{g}$ of $G$. The Killing form of $\mathfrak{g}$ induces a $K$-invariant inner product $\langle.\,,\,.\rangle$ on $\mathfrak{p}$ and therefore a $G$-invariant Riemannian metric on $\mathbb{X}$. Fix a maximal abelian subspace $\mathfrak{a}$ in $\mathfrak{p}$. The rank of $\mathbb{X}$ is the dimension $\ell$ of $\mathfrak{a}$. We identify $\mathfrak{a}$ with its dual $\mathfrak{a}^{*}$ by means of the inner product inherited from $\mathfrak{p}$. Let $\Sigma\subset\mathfrak{a}$ be the root system of $( \mathfrak{g},\mathfrak{a})$. Once a positive Weyl chamber $\mathfrak{a}^{+}\subset\mathfrak{a}$ has been selected, $\Sigma^{+}$ (resp. $\Sigma_{r}^{+}$ or $\Sigma_{s}^{+}$) denotes the corresponding set of positive roots (resp. positive reduced roots or simple roots). Let $n$ be the dimension and $\nu$ be the dimension at infinity (or pseudo-dimension) of $\mathbb{X}$: \begin{align}\label{S2 Dimensions} n\,=\, \ell\,+\,\sum_{\alpha \in \Sigma^{+}}\,m_{\alpha} \quad\textnormal{and}\quad \nu\,=\,\ell\,+\,2|\Sigma_{r}^{+}|, \end{align} where $m_{\alpha}$ is the dimension of the positive root subspace $\mathfrak{g}_{\alpha}$. Notice that these two dimensions behave differently depending on the geometric structure of $\mathbb{X}$. For example, $\nu=3$ while $n\ge2$ is arbitrary in rank one, $\nu=n$ if $G$ is complex, and $\nu=2n-\ell>n$ when $G$ is split. Let $\mathfrak{n}$ be the nilpotent Lie subalgebra of $\mathfrak{g}$ associated to $\Sigma^{+}$ and let $N=\exp\mathfrak{n}$ be the corresponding Lie subgroup of $G$. We have decompositions \begin{align*} \begin{cases} \,G\,=\,N\,(\exp\mathfrak{a})\,K \quad&\textnormal{(Iwasawa)}, \\[5pt] \,G\,=\,K\,(\exp\overline{\mathfrak{a}^{+}})\,K \quad&\textnormal{(Cartan)}. 
\end{cases} \end{align*} On the one hand, we can write the Haar measure $\diff{g}$ on $G$ in the Cartan decomposition: \begin{align*} \int_{G}\diff{g}\,f(g)\, =\,\const\,\int_{K}\,\diff{k}_{1}\, \int_{\mathfrak{a}^{+}}\,\diff{g}^{+}\, \underbrace{\prod_{\alpha\in\Sigma^{+}}\, (\sinh\langle{\alpha,g^{+}}\rangle)^{m_{\alpha}} }_{\delta(g^{+})}\, \int_{K}\,\diff{k}_{2}\,f(k_{1}(\exp g^{+})k_{2}). \end{align*} Notice that $\langle{\alpha,g^{+}}\rangle$ is nonnegative for every $\alpha\in\Sigma^{+}$ and all $g^{+}\in\overline{\mathfrak{a}^{+}}$. Let $\rho\in\mathfrak{a}^{+}$ be the half sum of all positive roots counted with their multiplicities: \begin{align*} \rho\, =\,\frac{1}{2}\,\sum_{\alpha\in\Sigma^{+}} \,m_{\alpha}\,\alpha. \end{align*} The density $\delta(g^{+})$ satisfies \begin{align}\label{S2 density} \delta(g^{+})\, \asymp\, \prod_{\alpha\in\Sigma^{+}} \Big\lbrace \frac{\langle\alpha,g^{+}\rangle} {1+\langle\alpha,g^{+}\rangle} \Big\rbrace^{m_{\alpha}}\, e^{\langle2\rho,g^{+}\rangle}\, \lesssim\, \begin{cases} |g^{+}|^{n-\ell}\, &\quad\textnormal{if}\;\;|g^{+}|\le\,1,\\[5pt] e^{\langle2\rho,g^{+}\rangle}\, &\quad\textnormal{for all}\;\;g^{+}\in\overline{\mathfrak{a}^{+}}. \end{cases} \end{align} On the other hand, we can normalize the Haar measure $\diff{g}$ so that \begin{align} \int_{G}\diff{g}\,f(g)\, =\, \int_{N}\diff{n}\,\int_{\mathfrak{a}}\diff{A}\, e^{\langle{-2\rho,A}\rangle}\, \int_{K}\diff{k}\,f(n(\exp{A})k) \end{align} where $A=A(g)$ is the unique $\mathfrak{a}$-component of $g$ in the Iwasawa decomposition. \subsection{Harmonic analysis on symmetric spaces} Harmonic analysis on symmetric spaces is by now well developed. In the present paper, we shall need different types of transforms, such as the Helgason-Fourier transform $\mathcal{F}$, the Harish-Chandra transform $\mathcal{H}$, the Radon transform $\mathcal{R}$, and the modified Radon transform $\mathcal{JR}$. We review their definitions and basic properties in what follows. Bearing in mind that the Cartan subspace $\mathfrak{a}$ is an $\ell$-dimensional flat submanifold of $\mathbb{X}$, we denote by $\mathcal{F}_{\mathfrak{a}}$ the classical Fourier transform: \begin{align*} \mathcal{F}_{\mathfrak{a}}f(\lambda)\, =\,\int_{\mathfrak{a}}\diff{H}\,e^{-i\langle{\lambda,H}\rangle}f(H)\, \qquad\textnormal{and}\qquad \mathcal{F}_{\mathfrak{a}}^{-1}g(H)\, =\,\int_{\mathfrak{a}}\diff{\lambda}\,e^{i\langle{\lambda,H}\rangle}g(\lambda)\, \end{align*} for suitable functions $f$ and $g$ on $\mathfrak{a}$. \subsubsection{Helgason-Fourier transform} Let $f$ be a Schwartz function on $\mathbb{X}$. Denote by $M$ the centralizer of $\exp\mathfrak{a}$ in $K$ and by $\diff{b}$ the $K$-invariant normalized measure on $B=K/M$. The Helgason-Fourier transform and its inverse formula are defined by \begin{align}\label{S2 Helgason Fourier} \mathcal{F}f(\lambda,kM)\, =\, \int_{G}\diff{g}\, e^{\langle{-i\lambda+\rho,A(k^{-1}g)}\,\rangle}\,f(g) \qquad\forall\,\lambda\in\mathfrak{a},\;\; \forall\,k\in{K}, \end{align} and \begin{align}\label{S2 Inverse Helgason Fourier} f(gK)\,=\,|W|^{-1}\, \int_{\mathfrak{a}}\diff{\lambda}\,|\mathbf{c}(\lambda)|^{-2}\, \int_{B}\diff{b}\,e^{\langle{i\lambda+\rho,\,A(k^{-1}g)}\rangle}\, \mathcal{F}f(\lambda,b), \qquad\forall\,g\in{G}, \end{align} where $W$ denotes the Weyl group associated to $\Sigma$, see for instance \cite[Ch.III]{Hel94}.
Here $|\mathbf{c}(\lambda)|^{-2}$ is the so-called \textit{Plancherel density} and can be expressed via the Gindikin-Karpelevič formula: \begin{align}\label{S2 Plancherel Density} |\mathbf{c}(\lambda)|^{-2}\, =\,\prod_{\alpha\in\Sigma_{r}^{+}}\, |\mathbf{c}_{\alpha} (\langle\alpha,\lambda\rangle)|^{-2}, \end{align} where each \textit{Plancherel factor} $|\mathbf{c}_{\alpha}(\cdot)|^{-2}$ is explicitly defined by some Gamma functions and extends to an analytic function in a neighbourhood of the real axis, see \cite[Theorem 6.14, p.447]{Hel00}. Moreover, for every $\alpha\in\Sigma_{r}^{+}$, the factor $|\mathbf{c}_{\alpha}(\cdot)|^{-2}$ is a homogeneous differential symbol of order $m_{\alpha}+m_{2\alpha}$ and satisfies \begin{align} |\mathbf{c}_{\alpha}(r)|^{-2}\,\asymp\, |r|^{2}(1+|r|)^{m_{\alpha}+m_{2\alpha}-2}\, \qquad\forall\,r\in\mathbb{R}. \end{align} As a product of one-dimensional symbols, the Plancherel density $|\mathbf{c}(\lambda)|^{-2}$ is not a symbol on $\mathfrak{a}$ in general: \begin{align}\label{S2 Plancherel Estimates} \begin{cases} |\mathbf{c}(\lambda)|^{-2}\, \lesssim\, |\lambda|^{\nu-\ell} & \textnormal{if \,} |\lambda|\le1,\\[5pt] |\nabla_{\mathfrak{a}}^{k}|\mathbf{c}(\lambda)|^{-2}|\, \lesssim\, |\lambda|^{n-\ell} & \textnormal{if \,} |\lambda|\ge1 \,\,\,\textnormal{and}\,\,\,k\in\mathbb{N}. \end{cases} \end{align} The Plancherel theorem states that the Helgason-Fourier transform $\mathcal{F}$ extends to an isometry of $L^{2}(\mathbb{X})$ into $L^{2}(\mathfrak{a}\times{B},|W|^{-1}|\mathbf{c}(\lambda)|^{-2}\diff{\lambda}\diff{b})$, see for instance \cite[Theorem 1.5, p.227]{Hel94}. \subsubsection{Harish-Chandra transform} A function $f$ is called bi-$K$-invariant on $\mathbb{X}$ if $f(k_{1}gk_{2})=f(g)$ for all $k_{1},k_{2}\in{K}$ and $g\in{G}$. With such functions, the Helgason-Fourier transform \eqref{S2 Helgason Fourier} reduces to the Harish-Chandra transform \begin{align} \mathcal{H}f(\lambda)\, =\,\int_{G}\diff{g}\,\varphi_{-\lambda}(g)\,f(g) \quad\forall\,\lambda\in\mathfrak{a},\;\; \forall\,f\in\mathcal{S}(K\backslash{G/K}), \label{S22 HarishChandra} \end{align} where \begin{align*} \varphi_{\lambda}(g) = \int_{K}\diff{k}\, e^{\langle{i\lambda+\rho,\,A(kg)}\rangle} \end{align*} is the \textit{elementary spherical function}, see \cite[Theorem 4.3, p.418]{Hel00}. For every $\lambda$ in $\mathfrak{a}$, the spherical function $\varphi_{\lambda}$ is bi-$K$-invariant and satisfies $|\varphi_{\lambda}|\le\varphi_{0}$, where \begin{align}\label{S2 phi0} \varphi_{0}(\exp{H})\, \asymp\,\Big\lbrace{ \prod_{\alpha\in\Sigma_{r}^{+}}(1+\langle{\alpha,H\rangle})\, }\Big\rbrace\,e^{-\langle{\rho,H}\rangle} \qquad\forall\,H\in\overline{\mathfrak{a}^{+}}. \end{align} Denote by $\mathcal{S}(\mathfrak{a})^{W}$ the subspace of $W$-invariant functions in the Schwartz space $\mathcal{S}(\mathfrak{a})$. Then $\mathcal{H}$ is an isomorphism between $\mathcal{S}(K\backslash{G/K})$ and $\mathcal{S}(\mathfrak{a})^{W}$. The inverse formula of the Harish-Chandra transform is given by \begin{align}\label{Inverse Harish-Chandra} f(g)\, =\,\const\,\int_{\mathfrak{a}}\diff{\lambda}\, |\mathbf{c(\lambda)}|^{-2}\, \varphi_{\lambda}(g)\, \mathcal{H}f(\lambda) \quad \forall\,g\in{G},\ \forall\,f\in\mathcal{S}(K\backslash{G/K}). 
\end{align} \subsubsection{Radon and modified Radon transforms} The Helgason-Fourier transform $\mathcal{F}$ is the flat Fourier transform $\mathcal{F}_{\mathfrak{a}}$ of the Radon transform \begin{align*} \mathcal{R}f(H,kM)\,=\, e^{-\langle{\rho,H}\rangle}\, \int_{N}\diff{n}\,f(n(\exp{H})k) \qquad\forall\,(H,kM)\in\,\mathfrak{a}\times{B}. \end{align*} In other words, for all $\lambda\in\mathfrak{a}$ and $k\in{K}$, we can write \begin{align}\label{S2 transformXtoR} \mathcal{F}f(\lambda,kM)\,=\, \mathcal{F}_{\mathfrak{a}}[ \mathcal{R}f(\cdot,kM)](\lambda), \end{align} see \cite[Ch.II, \S3]{Hel94}. This transform allows us to borrow techniques from Euclidean Fourier analysis when studying harmonic analysis on symmetric spaces. Denote by $\mathcal{J}$ the Fourier multiplier on $\mathfrak{a}$ with the symbol $|\mathbf{c}(\lambda)|^{-1}$. The $\mathcal{JR}$-transform is an isometry from $L^{2}(\mathbb{X})$ to $L^{2}(\mathfrak{a}\times{B},\,|W|^{-1}\diff{H}\diff{b})$. Indeed, by using the Plancherel formula with respect to $\mathcal{F}_{\mathfrak{a}}$ and $\mathcal{F}$, we have \begin{align*} \|\mathcal{JR}f \|_{L^{2}(\mathfrak{a}\times{B},\,|W|^{-1}\diff{H}\diff{b})}\, &=\, \||\mathbf{c}(\lambda)|^{-1}\,\mathcal{F}_{\mathfrak{a}}\mathcal{R}f \|_{L^{2}(\mathfrak{a}\times{B},\, |W|^{-1}\diff{\lambda}\diff{b})}\\[5pt] &=\, \|\mathcal{F}f \|_{L^{2}(\mathfrak{a}\times{B},\, |W|^{-1}\,|\mathbf{c}(\lambda)|^{-2}\diff{\lambda}\diff{b})}\\[5pt] &=\,\|f\|_{L^{2}(\mathbb{X})}. \end{align*} In \cite[Proposition 3.1]{Kai14}, the author proved that, for any $\sigma>0$, the $\mathcal{JR}$ transform is a continuous map from $L^{2}(\mathbb{X},\langle{x}\rangle^{2\sigma}\diff{x})$ to $L^{2}(\mathfrak{a}\times{B},\langle{H}\rangle^{2\sigma}\diff{H}\diff{b})$. The following lemma shows that similar continuity remains valid with homogeneous weights. Its proof is not so different from the original one; we include the details in the Appendix for the sake of completeness. \begin{lemma}\label{S2 JR continuity} For any $\sigma\ge0$, we have \begin{align} \|\mathcal{JR}f\|_{L^{2}(\mathfrak{a}\times{B},\, |H|^{2\sigma}\diff{H}\diff{b})}\, \lesssim\, \|f\|_{L^{2}(\mathbb{X},\,|x|^{2\sigma}\diff{x})}. \end{align} \end{lemma} \section{Kato-type smoothing property on symmetric spaces}\label{Section.3 Smoothing} In this section, we establish the Kato-type smoothing property for the Schrödinger equation on $\mathbb{X}$. We start with the Stein-Weiss inequality, namely, Theorem \ref{main thm SW}. Combining it with the resolvent estimate in Theorem \ref{main thm resolv}, we deduce Theorem \ref{main thm smoothing}. \subsection{Stein-Weiss inequality} Recall the operator $D^{-\sigma}=(-\Delta-|\rho|^2)^{-\sigma/2}$ where $\sigma>0$. Notice that the balance condition $\sigma=\gamma_{1}+\gamma_{2}$ with $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$ implies $\sigma<\min\lbrace{n,\nu}\rbrace$. Denote by $k_{\sigma}$ the bi-$K$-invariant convolution kernel of the operator $D^{-\sigma}$; it satisfies \begin{align}\label{S3 Riesz ker estim} k_{\sigma}(x)\,\asymp\, \begin{cases} |x|^{\sigma-n} &\qquad\textrm{if}\,\,\,|x|\le1\,\,\,\textnormal{and}\,\,\, 0<\sigma<n, \\[5pt] |x|^{\sigma-\nu}\,\varphi_{0}(x)\, &\qquad\textrm{if}\,\,\,|x|\ge1\,\,\,\textnormal{and}\,\,\, 0<\sigma<\nu, \end{cases} \end{align} see \cite[Theorem 4.2.2]{AnJi99}. Then Theorem \ref{main thm SW} is equivalent to the following proposition.
\begin{proposition}\label{S3 SW prop} Let $\mathbb{X}$ be a symmetric space of dimension $n\ge2$ and pseudo-dimension $\nu\ge3$. Let $\sigma>0$, $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$, and $\sigma=\gamma_{1}+\gamma_{2}$. Then the operator $\mathcal{T}$ defined by \begin{align} \mathcal{T}f(x)\,=\, \int_{\mathbb{X}}\diff{y}\, |x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, |y|^{-\gamma_{2}}\,f(y) \label{S32 opT} \end{align} is bounded from $L^{2}(\mathbb{X})$ into $L^{2}(\mathbb{X})$. \end{proposition} Because of the contrasting behaviors of the convolution kernel, as well as of the volume density, we shall need different arguments depending on whether $|x|$ and $|y|$ are small or large. Roughly speaking, when $|x|$ and $|y|$ are both small, the volume density grows polynomially. In this case, we extend a key lemma from \cite{StWe58} to symmetric spaces. If $|x|$ or $|y|$ is large, the volume density grows exponentially, and we need to combine the dyadic decomposition with suitable Kunze-Stein phenomena. The proof of Proposition \ref{S3 SW prop} is based on the following two lemmas. \begin{lemma}[Kunze-Stein phenomena]\label{S3 KS lemma} Let $g$ be a bi-$K$-invariant function in $\mathcal{S}(\mathbb{X})$. Then, for any $f\in\mathcal{S}(\mathbb{X})$, we have \begin{align} \|g*f\|_{L^2(\mathbb{X})}\, \lesssim\, \|g\varphi_{0}\|_{L^1(\mathbb{X})}\,\|f\|_{L^2(\mathbb{X})}, \label{S3 KS1} \end{align} and \begin{align} \|g*f\|_{L^2(\mathbb{X})}\, \lesssim\, \|g\|_{L^2(\mathbb{X},\,\langle{x}\rangle^{\nu}\diff{x})}\, \|f\|_{L^2(\mathbb{X})}. \label{S3 KS2} \end{align} \end{lemma} \begin{remark}\label{rmk KS} These properties are consequences of Herz's principle (see \cite{Her70}) applied to bi-$K$-invariant functions. The inequality \eqref{S3 KS1} follows from \cite{Cow97}, see also \cite{APV11,Zha21}. The weighted version \eqref{S3 KS2} was previously stated in \cite[Corollary 2.6]{Kai14} for $g\in{L^2(\mathbb{X},\,\langle{x}\rangle^{\sigma}\diff{x})}$ with $\sigma>\nu$. Our lemma shows that such an inequality remains valid in the critical case where $\sigma=\nu$, provided that $g$ is bi-$K$-invariant. In fact, in our proof of the Stein-Weiss inequality, the function $g$ will only be the bi-$K$-invariant convolution kernel, and this endpoint improvement is crucial. \end{remark} \begin{proof} According to Remark \ref{rmk KS} and property \eqref{S3 KS1}, it is sufficient to show that \begin{align*} \|g\varphi_{0}\|_{L^1(\mathbb{X})}\, \lesssim\, \|g\|_{L^2(\mathbb{X},\,\langle{x}\rangle^{\nu}\diff{x})}. \end{align*} Let $\chi_{0}\in\mathcal{C}_{c}^{\infty}(\mathbb{R}_{+})$ be a cut-off function such that $\supp\chi_{0}\subset[0,1]$ and $\chi_{0}=1$ on $[0,\frac12]$. For all $x\in\mathbb{X}$, we set $\widetilde{\chi}_{0}(x)=\chi_{0}(|x|)$ and \begin{align*} \widetilde{\chi}_{j}(x)\, =\,\chi_{0}(2^{-j}|x|)\,-\,\chi_{0}(2^{-j+1}|x|) \qquad\forall\,j\ge1, \end{align*} which are all bi-$K$-invariant cut-off functions on $\mathbb{X}$. For every $j\ge1$, $\widetilde{\chi}_{j}$ is compactly supported in $\lbrace{x\in\mathbb{X}\,|\,2^{j-2}\le|x|\le2^{j}}\rbrace$. In particular, we have $\sum_{j\in\mathbb{N}}\widetilde{\chi}_{j}=1$ and \begin{align*} \|g\|_{L^2(\mathbb{X},\,\langle{x}\rangle^{\nu}\diff{x})}\, \asymp\, \Big\lbrace{\sum_{j\in\mathbb{N}}\,2^{\nu{j}}\, \|\widetilde{\chi}_{j}^{1/2}g\|_{L^{2}(\mathbb{X})}^{2} }\Big\rbrace^{1/2} \qquad\forall\,g\in{\mathcal{S}(\mathbb{X})}.
\end{align*} By using the partition of unity and the Cauchy-Schwarz inequality, we have \begin{align*} \|g\varphi_{0}\|_{L^1(\mathbb{X})}^{2}\, &\lesssim\, \sum_{j\in\mathbb{N}}\, \Big\lbrace{ \int_{\mathbb{X}}\diff{x}\,\widetilde{\chi}_{j}(x)\,|g(x)|\,\varphi_{0}(x) }\Big\rbrace^{2}\\[5pt] &\le\, \sum_{j\in\mathbb{N}}\, \Big\lbrace{ \int_{\mathbb{X}}\diff{x}\,\widetilde{\chi}_{j}(x)\,|g(x)|^{2} }\Big\rbrace\, \Big\lbrace{ \int_{\mathbb{X}}\diff{x}\,\widetilde{\chi}_{j}(x)\,\varphi_{0}^{2}(x) }\Big\rbrace. \end{align*} According to the density estimate \eqref{S2 density} and estimate \eqref{S2 phi0} of the ground spherical function, we obtain \begin{align*} \int_{\mathbb{X}}\diff{x}\, (\widetilde{\chi}_{0}(x)+\widetilde{\chi}_{1}(x))\, \varphi_{0}^{2}(x)\, <\,+\infty \end{align*} and \begin{align*} \int_{\mathbb{X}}\diff{x}\,\widetilde{\chi}_{j}(x)\,\varphi_{0}^{2}(x)\, &\lesssim\, \int_{1\le|x^{+}|\le2^{j}}\diff{x^{+}}\,\delta(x^{+})\, \varphi_{0}^{2}(\exp{x^{+}})\\[5pt] &\lesssim\, \int_{1}^{2^{j}}\diff{r}\,r^{\nu-\ell+\ell-1}\, \lesssim\,2^{\nu{j}} \end{align*} for all $j\ge2$. We deduce that \begin{align*} \|g\varphi_{0}\|_{L^1(\mathbb{X})}^{2}\, \lesssim\, \sum_{j\in\mathbb{N}}\, 2^{\nu{j}}\,\|\widetilde{\chi}_{j}^{1/2}g\|_{L^{2}(\mathbb{X})}^{2}\, \asymp\, \|g\|_{L^2(\mathbb{X},\,\langle{x}\rangle^{\nu}\diff{x})}^{2}, \end{align*} which completes the proof of Lemma \ref{S3 KS lemma}. \end{proof} The next lemma extends \cite[Lemma 2.1]{StWe58} to symmetric spaces. Notice that the additional condition \eqref{S3 SWlemma cdt} appears naturally, since the growth of volume on symmetric spaces has different behaviors. We will include its detailed proof in Appendix. \begin{lemma}[Stein-Weiss lemma]\label{S3 SW lemma} Let $\mathcal{K}:\mathbb{R}_{+}\times\mathbb{R}_{+}\rightarrow\mathbb{R}_{+}$ be a homogeneous function of degree $-\ell$ such that \begin{align*} \int_{0}^{+\infty}\diff{s}\,s^{\tfrac{\ell}{2}-1}\,\mathcal{K}(1,s)\,<\,+\infty. \end{align*} Let $\kappa_{1}$, $\kappa_{2}$ be two bi-$K$-invariant functions on $G$ such that \begin{align}\label{S3 SWlemma cdt} \kappa_{j}(\exp{x}^{+})\,\delta^{1/2}(x^{+})\, =\,\bigO(1) \qquad\forall\,x\in{G},\,\, \forall\,j=1,\,2, \end{align} where $x^{+}\in\mathfrak{a}$ is the middle component of $x$ in the Cartan decomposition. Then the operator $S:L^{2}(G)\rightarrow{L^{2}}(G)$ defined by \begin{align*} Sf(x)\, =\,\kappa_{1}(x)\, \int_{G}\diff{y}\, \mathcal{K}(|x|,|y|)\, \kappa_{2}(y)\,f(y) \end{align*} is bounded. \end{lemma} Now, let us turn to the proof of Proposition \ref{S3 SW prop}. \begin{proof}[Proof of Proposition \ref{S3 SW prop}] We prove the $L^2$-boundedness of $\mathcal{T}$ in different cases depending whether $|x|$, $|y|$ and $|y^{-1}x|$ are small or large. For $k=1,2,...,7$, we define \begin{align}\label{S3 operators Tj} \mathcal{T}_{k}f(x)\,=\, \int_{\mathbb{X}}\diff{y}\,\psi_{k}(x,y)\, |x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\,|y|^{-\gamma_{2}}\,f(y), \end{align} where $\psi_{j}(x,y)$ are suitable cut-off functions which will be specified in each case. Recall that $\chi_{0}$ is a cut-off function compactly supported in $[0,1]$, and $\chi_{0}=1$ on $[0,\frac12]$. Denote by $\chi_{\infty}=1-\chi_{0}$. 
\begin{figure}[b] \begin{subfigure}[b]{0.49\textwidth} \centering \input{FigureSwCase1and2} \caption{Cases (i) and (ii)} \label{Fig Cases12} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \centering \input{FigureSwCase3} \caption{Cases (iii) and (iv)} \label{Fig Case3} \end{subfigure} \caption{Different cases in the study of $\mathcal{T}$.} \vspace{-20pt} \end{figure} \noindent\textbf{Case (i): if $|x|$ and $|y|$ are comparable.} Let $\psi_{1}(x,y)=\chi_{0}(\frac{|y|}{4|x|})\chi_{\infty}(\frac{2|y|}{|x|})$. Then $\psi_{1}$ does not vanish when $\frac{1}{4}\le\frac{|y|}{|x|}\le4$ (see the red zone in Figure \ref{Fig Cases12}). We decompose dyadically \begin{align}\label{S3 dyadic} \|\mathcal{T}_{1}f\|_{L^{2}(\mathbb{X})}^{2}\, =\, \sum_{j\in\mathbb{Z}}\, \int_{\lbrace{x\in\mathbb{X}\,:\, 2^{j}\le{|x|}\le2^{j+1}}\rbrace}\diff{x}\, \Big|{ \int_{\mathbb{X}}\diff{y}\,\psi_{1}(x,y)\, |x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, |y|^{-\gamma_{2}}\,f(y) }\Big|^{2}. \end{align} Notice that $\frac{1}{4}\le\frac{|y|}{|x|}\le4$ and $2^{j}\le{|x|}\le2^{j+1}$ imply $2^{j-2}\le{|y|}\le2^{j+3}$ and $|y^{-1}x|\le2^{j+4}$. Then we have $|x|^{-\gamma_{1}}|y|^{-\gamma_{2}}\lesssim2^{-\sigma{j}}$, provided that $\gamma_{1}+\gamma_{2}=\sigma>0$. Hence \begin{align}\label{S3 case1 estim1} \|\mathcal{T}_{1}f\|_{L^{2}(\mathbb{X})}^{2}\, &\lesssim\, \sum_{j\in\mathbb{Z}}\,2^{-2\sigma{j}}\, \int_{\mathbb{X}}\diff{x}\, \Big|{ \int_{\mathbb{X}}\diff{y}\, \chi_{0}(\tfrac{|y|}{2^{j+4}})\, \chi_{\infty}(\tfrac{|y|}{2^{j-2}})\, \chi_{0}(\tfrac{|y^{-1}x|}{2^{j+5}})\,k_{\sigma}(y^{-1}x)\,f(y) }\Big|^{2}\notag\\[5pt] &=\,\sum_{j\in\mathbb{Z}}\,2^{-2\sigma{j}}\, \big\|\big(\chi_{0}(\tfrac{|\cdot|}{2^{j+4}})\, \chi_{\infty}(\tfrac{|\cdot|}{2^{j-2}})\, f\big) *\big(\chi_{0}(\tfrac{|\cdot|}{2^{j+5}})k_{\sigma}\big) \big\|_{L^{2}(\mathbb{X})}^{2}. \end{align} we deduce from the Kunze-Stein phenomenon \eqref{S3 KS1} that \begin{align}\label{S3 case1 estim2} \big\|\big(\chi_{0}(\tfrac{|\cdot|}{2^{j+4}})\, \chi_{\infty}(\tfrac{|\cdot|}{2^{j-2}})\, f\big) &*\big(\chi_{0}(\tfrac{|\cdot|}{2^{j+5}})k_{\sigma}\big) \big\|_{L^{2}(\mathbb{X})}\notag\\[5pt] &\lesssim\, \big\|\chi_{0}(\tfrac{|\cdot|}{2^{j+4}})\, \chi_{\infty}(\tfrac{|\cdot|}{2^{j-2}})\, f\big\|_{L^{2}(\mathbb{X})}\, \big\|\chi_{0}(\tfrac{|\cdot|}{2^{j+5}})\,k_{\sigma}\, \varphi_{0}\big\|_{L^{1}(\mathbb{X})}. \end{align} According to the kernel estimate \eqref{S3 Riesz ker estim}, the density estimate \eqref{S2 density}, and estimate \eqref{S2 phi0} of the ground spherical function, we obtain, on the one hand, \begin{align}\label{S3 case1 estim3} \|\chi_{0}(\tfrac{|\cdot|}{2^{j+5}})\,k_{\sigma}\, \varphi_{0}\|_{L^{1}(\mathbb{X})}\, &\lesssim\, \int_{|x^{+}|\le1}\,\diff{x}^{+}\, |x^{+}|^{\sigma-n}\,|x^{+}|^{n-\ell}\, +\, \int_{1\le|x^{+}|\le2^{j+5}}\,\diff{x}^{+}\, |x^{+}|^{\sigma-\nu}\, |x^{+}|^{\nu-\ell}\notag\\[5pt] &=\, \int_{0}^{2^{j+5}}\,\diff{r}\,r^{\sigma-1}\, \lesssim\,2^{\sigma{j}} \end{align} provided that $\sigma>0$ and $j\ge {-4}$. On the other hand, if $j<{-4}$, we have similarly \begin{align}\label{S3 case1 estim4} \|\chi_{0}(\tfrac{|\cdot|}{2^{j+5}})\,k_{\sigma}\, \varphi_{0}\|_{L^{1}(\mathbb{X})}\, &\lesssim\, \int_{|x^{+}|\le2^{j+5}}\,\diff{x^{+}}\, |x^{+}|^{\sigma-n}\,|x^{+}|^{n-\ell}\notag\\[5pt] &=\,\int_{0}^{2^{j+5}}\,\diff{r}\,r^{\sigma-1}\, \lesssim\,2^{\sigma{j}} \end{align} by using \eqref{S3 Riesz ker estim} and \eqref{S2 density} again. 
We deduce from \eqref{S3 case1 estim1}, \eqref{S3 case1 estim2}, \eqref{S3 case1 estim3}, and \eqref{S3 case1 estim4}, that \begin{align}\label{S3 estim T1} \|\mathcal{T}_{1}f\|_{L^{2}(\mathbb{X})}^{2}\, \lesssim\, \sum_{j\in\mathbb{Z}}\, \int_{\lbrace{x\in\mathbb{X}\,:\, 2^{j}\le{|x|}\le2^{j+1}}\rbrace}\diff{x}\, |f(x)|^{2}\, =\,\|f\|_{L^{2}(\mathbb{X})}^{2} \end{align} provided that $\gamma_{1}+\gamma_{2}=\sigma>0$. \noindent\textbf{Case (ii): if $|x|$ and $|y|$ are not comparable, but both small.} We denote by $\psi_{2}(x,y)=\chi_{0}(\frac{|x|}{2})\chi_{0}(\frac{2|y|}{|x|})$ and $\psi_{3}(x,y)=\chi_{0}(\frac{|y|}{2})\chi_{\infty}(\frac{|y|}{4|x|})$ two cut-off functions. Notice that $\supp\psi_{2}\cup\supp\psi_{3}$ corresponds to the blue zones in Figure \ref{Fig Cases12}. Since $|y^{-1}x|$ is also small in this case, we know from \eqref{S3 Riesz ker estim} that, \begin{align*} \mathcal{T}_{k}f(x)\, \asymp\, \int_{\mathbb{X}}\diff{y}\, \psi_{k}(x,y)\,|x|^{-\gamma_{1}}\, |y^{-1}x|^{\sigma-n}\,|y|^{-\gamma_{2}}\,f(y) \qquad\textnormal{with}\,\,\,k=2,\,3. \end{align*} On the one hand, when $\frac{|y|}{|x|}\le\frac{1}{2}$, we have $|y^{-1}x|\ge\frac{|x|}{2}$ and then \begin{align*} \mathcal{T}_{2}f(x)\, &\lesssim\, \underbrace{\vphantom{\int} \chi_{0}(\tfrac{|x|}{2})\, |x|^{\tfrac{\ell-n}{2}} }_{\kappa_{1}(x)}\, \int_{\mathbb{X}}\diff{y}\, \underbrace{\vphantom{\int} \chi_{0}(\tfrac{2|y|}{|x|})\, |x|^{-\gamma_{1}+\sigma-n+\tfrac{n-\ell}{2}}\, |y|^{-\gamma_{2}+\tfrac{n-\ell}{2}} }_{\mathcal{K}(|x|,|y|)}\, \underbrace{\vphantom{\int} \chi_{0}(\tfrac{|y|}{2})\, |y|^{\tfrac{\ell-n}{2}} }_{\kappa_{2}(y)} f(y)\, \end{align*} provided that $0<\sigma<n$. Here, we observe that $(-\gamma_{1}+\sigma-n+\frac{n-\ell}{2})+(-\gamma_{2}+\frac{n-\ell}{2})=-\ell$, since $\gamma_1+\gamma_2=\sigma$, and \begin{align*} \chi_{0}(\tfrac{|z^{+}|}{2})\,|z^{+}|^{\tfrac{\ell-n}{2}}\, \delta(z^{+})^{\tfrac{1}{2}}\, =\,\bigO(1) \qquad\forall\,z\in\mathbb{X}, \end{align*} according to \eqref{S2 density}. Moreover, we have \begin{align*} \int_{0}^{\frac{1}{2}}\diff{s}\, s^{\tfrac{\ell}{2}-1}\, s^{-\gamma_{2}+\tfrac{n-\ell}{2}}\, <\,+\infty \end{align*} provided that $\gamma_2<\frac{n}{2}$. We deduce from Lemma \ref{S3 SW lemma} that $\mathcal{T}_{2}$ is $L^{2}$-bounded. On the other hand, when $\frac{|y|}{|x|}\ge2$, we have $|y^{-1}x|\ge\frac{|y|}{2}$ and similarly \begin{align*} \mathcal{T}_{3}f(x)\, \lesssim\, \chi_{0}(\tfrac{|x|}{2})\, |x|^{\tfrac{\ell-n}{2}}\, \int_{\mathbb{X}}\diff{y}\, \chi_{\infty}({|y|}/{4|x|})\, |x|^{-\gamma_{1}+\tfrac{n-\ell}{2}}\, |y|^{-\gamma_{2}+\sigma-n+\tfrac{n-\ell}{2}}\, \chi_{0}(\tfrac{|y|}{2})\, |y|^{\tfrac{\ell-n}{2}}\,f(y). \end{align*} We deduce from the same argument that $\mathcal{T}_{3}$ is $L^2$-bounded since \begin{align*} \int_{2}^{\infty}\diff{s}\, s^{\tfrac{\ell}{2}-1}\, s^{\gamma_{1}-n+\tfrac{n-\ell}{2}}\, <\,\infty \end{align*} provided that $\gamma_{1}<\frac{n}{2}$. Therefore, for all $\gamma_{1},\gamma_{2}<\frac{n}{2}$ such that $\gamma_{1}+\gamma_{2}= \sigma>0$, we have \begin{align}\label{S3 estim T2T3} \|\mathcal{T}_{k}f\|_{L^{2}(\mathbb{X})}\, \lesssim\,\|f\|_{L^{2}(\mathbb{X})} \qquad\textnormal{with}\,\,\,k=2,\,3. \end{align} \noindent\textbf{Case (iii): if $|x|$ and $|y|$ are not comparable, and not both small or both large.} We define $\psi_{4}(x,y)=\chi_{\infty}(\frac{|y|}{2})\chi_{0}(2|x|)$ and $\psi_{5}(x,y)=\chi_{\infty}(\frac{|x|}{2})\chi_{0}(2|y|)$. The support of $\psi_{4}$ (resp. $\psi_{5}$) corresponds to the blue (resp. 
red) shaded rectangle in Figure \ref{Fig Case3}. According to the local Harnack inequality of the elementary spherical function, we have $\varphi_{0}(y^{-1}x)\lesssim\varphi_{0}(y)$ when $|x|$ is bounded, see for instance \cite[Proposition 4.6.3]{GaVa88} or \cite[Remark 4.5]{APZ23}. Hence, for all $(x,y)\in\supp\psi_{4}$ and $0<\sigma<\nu$, \begin{align}\label{S3 phi0comp} k_{\sigma}(y^{-1}x)\, &\asymp\, |y^{-1}x|^{\sigma-\nu}\, \varphi_{0}(y^{-1}x)\notag\\[5pt] &\lesssim\, |y|^{\sigma-\nu}\varphi_{0}(y)\, \lesssim\, |y^{+}|^{\frac{2\sigma-\nu-\ell}{2}}\,e^{-\langle{\rho,y^{+}}\rangle}. \end{align} Combining this estimate with the density estimates \eqref{S2 density}, we obtain \begin{align*} \|\mathcal{T}_{4}f(x)\|_{L^{2}(\mathbb{X})}^{2}\, &=\, \int_{\mathbb{X}}\diff{x}\, \Big| \int_{\mathbb{X}}\diff{y}\, \psi_{4}(x,y)\,|x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, |y|^{-\gamma_{2}}\,f(y) \Big|^{2}\\[5pt] &\lesssim\, \int_{\mathbb{X}}\diff{x}\,|x|^{-2\gamma_{1}}\, \Big\lbrace{ \int_{\mathbb{X}}\diff{y}\, \psi_{4}^{2}(x,y)\,k_{\sigma}^{2}(y^{-1}x)\,|y|^{-2\gamma_{2}} }\Big\rbrace\, \|f\|_{L^{2}(\mathbb{X})}^{2}, \end{align*} where \begin{align*} \int_{\mathbb{X}}\diff{y}\, \psi_{4}^{2}(x,y)\,k_{\sigma}^{2}(y^{-1}x)\,|y|^{-2\gamma_{2}}\, &\lesssim\,\chi_{0}(2|x|)\, \int_{\mathfrak{a}^{+}}\diff{y^{+}}\,\delta(y^{+})\, \chi_{\infty}(\tfrac{|y^{+}|}{2})\, |y^{+}|^{2\sigma-\nu-\ell-2\gamma_{2}}\,e^{-2\langle{\rho,y^{+}}\rangle}\\[5pt] &\lesssim\,\chi_{0}(2|x|) \underbrace{ \int_{1}^{+\infty}\diff{r}\,r^{2\gamma_{1}-\nu-1} }_{<\,+\infty}, \end{align*} provided that $\gamma_{1}+\gamma_{2}\ge\sigma$ and $\gamma_{1}<\frac{\nu}{2}$. On the other hand, we have \begin{align*} \int_{\mathbb{X}}\diff{x}\,\chi_{0}(2|x|)\,|x|^{-2\gamma_{1}}\, \lesssim\, \int_{|x^{+}|\le\frac12}\diff{x^{+}}\, |x^{+}|^{-2\gamma_{1}}\,|x^{+}|^{n-\ell}\,<\,+\infty, \end{align*} provided that $\gamma_{1}<\frac{n}{2}$. Therefore, $\mathcal{T}_{4}$ is $L^{2}$-bounded. One can show the $L^2$-boundedness of $\mathcal{T}_{5}$ by using similar arguments. Therefore, for all $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$ satisfying $\gamma_{1}+\gamma_{2}\ge\sigma>0$, we have \begin{align}\label{S3 estim T4T5} \|\mathcal{T}_{k}f\|_{L^{2}(\mathbb{X})}\, \lesssim\,\|f\|_{L^{2}(\mathbb{X})} \qquad\textnormal{with}\,\,\,k=4,\,5. \end{align} \noindent\textbf{Case (iv): if $|x|$ and $|y|$ are not comparable, but both large.} In the last case, let us define $\psi_{6}(x,y)=\chi_{\infty}(\frac{|y|}{4|x|})\,\chi_{\infty}(|x|)$ and $\psi_{7}(x,y)=\chi_{0}(\frac{2|y|}{|x|})\,\chi_{\infty}(|y|)$, see the non-shaded blue and red triangles in Figure \ref{Fig Case3} for zones corresponding to their supports. For any $(x,y)\in\supp\psi_{6}$, we have $\frac{|y|}{|x|}\ge2$ and $|x|\ge\frac12$, which imply that $|y|\ge1$, $|x|\asymp\langle{x}\rangle$, and $\frac{|y|}{2}<|y^{-1}x|<\frac{3|y|}{2}$. Then, for any $\gamma_{1},\gamma_{2}<\frac{\nu}{2}$ satisfying $\gamma_{1}+\gamma_{2}\ge\sigma>0$, and for all $(x,y)$ in the support of $\psi_{6}$, we have \begin{align*} |x|^{-\gamma_{1}}\,|y|^{-\gamma_{2}}&= \langle{x}\rangle^{-\frac{\nu}{2}}|y^{-1}x|^{\frac{\nu}{2}-\sigma} \underbrace{\vphantom{\int} |x|^{-\gamma_1} \langle x \rangle^{\frac{\nu}{2}}|y^{-1}x|^{\sigma-\frac{\nu}{2}} |y|^{-\gamma_2} }_{\lesssim\,|y|^{-\gamma_1+\frac{\nu}{2}+\sigma-\frac{\nu}{2}-\gamma_2}\,\le\,1}. 
\end{align*} Combining this inequality with the partition of unity defined in the proof of Lemma \ref{S3 KS lemma}, we obtain \begin{align*} \|\mathcal{T}_{6}f(x)\|_{L^{2}(\mathbb{X})}^{2}\, &=\, \int_{\mathbb{X}}\diff{x}\, \Big| \int_{\mathbb{X}}\diff{y}\,\psi_{6}(x,y)\, |x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, |y|^{-\gamma_{2}}\,f(y) \Big|^{2}\\[5pt] &\lesssim\, \int_{\mathbb{X}}\diff{x}\,\langle{x}\rangle^{-\nu}\, \Big| \int_{\mathbb{X}}\diff{y}\, \psi_{6}(x,y)\,k_{\sigma}(y^{-1}x)\, |y^{-1}x|^{\frac{\nu}{2}-\sigma}\,f(y) \Big|^{2}\\[5pt] &\lesssim\, \sum_{j\in\mathbb{N}}\, \int_{\mathbb{X}}\diff{x}\, \langle{x}\rangle^{-\nu}\, \Big| \int_{\mathbb{X}}\diff{y}\,\widetilde{\chi}_{j}(y)\, \psi_{6}(x,y)\,k_{\sigma}(y^{-1}x)\, |y^{-1}x|^{\frac{\nu}{2}-\sigma}\,f(y) \Big|^{2}. \end{align*} Recall that for all $y\in\supp\widetilde{\chi}_{j}\subset[2^{j-2},2^{j}]$ and $(x,y)\in\supp\psi_{6}$, we have $\chi_{0}(\frac{|y^{-1}x|}{2^{j+2}})=1$ and $\chi_{\infty}(2|y^{-1}x|)=1$. We deduce from the duality of \eqref{S3 KS2} that \begin{align*} \|\mathcal{T}_{6}f(x)\|_{L^{2}(\mathbb{X})}^{2}\, &\lesssim\, \sum_{j\in\mathbb{N}}\,2^{(\nu-2\sigma)j}\, \int_{\mathbb{X}}\diff{x}\,\langle{x}\rangle^{-\nu}\, \Big| \int_{\mathbb{X}}\diff{y}\, \chi_{0}(\tfrac{|y^{-1}x|}{2^{j+2}})\, \chi_{\infty}(2|y^{-1}x|)\, k_{\sigma}(y^{-1}x)\, \widetilde{\chi}_{j}(y)\,f(y) \Big|^{2}\\[5pt] &=\, \sum_{j\in\mathbb{N}}\,2^{(\nu-2\sigma)j}\, \int_{\mathbb{X}}\diff{x}\,\langle{x}\rangle^{-\nu}\, \big|(\widetilde{\chi}_{j}\,f)\,*\, (\chi_{0}(\tfrac{|\cdot|}{2^{j+2}})\, \chi_{\infty}(2|\cdot|)\,k_{\sigma}) \big|^{2}\\[5pt] &\lesssim\, \sum_{j\in\mathbb{N}}\,2^{(\nu-2\sigma)j}\, \|\chi_{0}(\tfrac{|\cdot|}{2^{j+2}})\, \chi_{\infty}(2|\cdot|)\,k_{\sigma}\|_{L^{2}(\mathbb{X})}^{2}\, \|\widetilde{\chi}_{j}f\|_{L^{2}(\mathbb{X})}^{2}, \end{align*} where \begin{align*} \|\chi_{0}(\tfrac{|\cdot|}{2^{j+2}})\, \chi_{\infty}(2|\cdot|)\, k_{\sigma}\|_{L^{2}(\mathbb{X})}^{2}\, &\asymp\, \int_{\mathbb{X}}\diff{x}\, \chi_{0}(\tfrac{|x|}{2^{j+2}})\, \chi_{\infty}(2|x|)\, |x|^{2\sigma-2\nu}\,\varphi_{0}^{2}(x)\\[5pt] &\lesssim\, \int_{1\le|x^{+}|\le2^{j}}\diff{x^{+}}\, |x^{+}|^{2\sigma-\nu-\ell}\\[5pt] &=\,\int_{1}^{2^{j}}\,\diff{r}\,r^{2\sigma-\nu-1}\, \lesssim\,2^{(2\sigma-\nu)j}. \end{align*} We finally obtain \begin{align*} \|\mathcal{T}_{6}f(x)\|_{L^{2}(\mathbb{X})}^{2}\, \lesssim\, \sum_{j\in\mathbb{N}}\, \|\widetilde{\chi}_{j}f\|_{L^{2}(\mathbb{X})}^{2}\, \asymp\,\|f\|_{L^{2}(\mathbb{X})}^{2}, \end{align*} provided that $\gamma_{1},\gamma_{2}<\frac{\nu}{2}$ and $\gamma_{1}+\gamma_{2}\ge\sigma>0$. We omit the similar proof for the operator $\mathcal{T}_{7}$ and conclude that, for all $\gamma_{1},\gamma_{2}<\frac{\nu}{2}$ satisfying $\gamma_{1}+\gamma_{2}\ge\sigma>0$, \begin{align}\label{S3 estim T6T7} \|\mathcal{T}_{k}f\|_{L^{2}(\mathbb{X})}\, \lesssim\,\|f\|_{L^{2}(\mathbb{X})} \qquad\textnormal{with}\,\,\,k=6,\,7. \end{align} \noindent\textbf{Conclusion:} Since $\bigcup_{1\le{k}\le7}\supp\psi_{k}$ covers $\mathbb{X}$, we deduce, from \eqref{S3 estim T1}, \eqref{S3 estim T2T3}, \eqref{S3 estim T4T5}, and \eqref{S3 estim T6T7} that \begin{align*} \|\mathcal{T}f\|_{L^{2}(\mathbb{X})}\, \lesssim\,\sum_{1\le{k}\le7} \|\mathcal{T}_{k}f\|_{L^{2}(\mathbb{X})}\, \lesssim\,\|f\|_{L^{2}(\mathbb{X})} \end{align*} provided that $\sigma>0$, $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$, and $\sigma=\gamma_{1}+\gamma_{2}$. 
\end{proof} \begin{remark} If we assume that $f$ is in addition bi-$K$-invariant in Theorem \ref{main thm SW} and Proposition \ref{S3 SW prop}, the last two cases in the above proof will be simplified according to the following trick: \begin{align} \int_{G}\diff{y}\,\varphi_{0}(y^{-1}x)\,f(y) =\,\int_{G}\diff{y}\,f(y)\, \int_{K}\diff{k}\,\varphi_{0}(y^{-1}kx)\, =\,\varphi_{0}(x)\, \int_{G}\diff{y}\,\varphi_{0}(y)\,f(y), \label{S32 SplitSph} \end{align} since $\diff{k}$ is a normalized measure on $K$ and $\varphi_{0}$ is a spherical function. In fact, \eqref{S32 SplitSph} implies that \begin{align*} \mathcal{T}_{k}f(x)\, &=\, \int_{\mathbb{X}}\diff{y}\, \psi_{k}(x,y)\,|x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, |y|^{-\gamma_{2}}\,f(y)\\[5pt] &\asymp\,\varphi_{0}(x)\, \int_{\mathbb{X}}\diff{y}\,\psi_{k}(x,y)\, |x|^{-\gamma_{1}}\,|y^{-1}x|^{\sigma-\nu}\, |y|^{-\gamma_{2}}\,\varphi_{0}(y)\,f(y) \end{align*} for each $4\le{k}\le7$. Then we can conclude by using Lemma \ref{S3 SW lemma}. \end{remark} \begin{remark}\label{S3 rmk SWoptimal} The regularity conditions $\gamma_{1},\gamma_{2}<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$ in Theorem \ref{main thm SW} are necessary. In the Euclidean setting, the necessity of the conditions appearing in the Stein-Weiss inequality is well explained in the recent note \cite{Ngo21}. On symmetric spaces, the kernel $k_{\sigma}$ behaves differently depending on whether it is close to or away from the origin. Hence the manifold dimension and the pseudo-dimension both play essential roles. To check this, it is sufficient to show that, for any $\gamma_{1},\gamma_{2}\ge\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$, the double integral \begin{align} \int_{\mathbb{X}}\diff{x}\,\Big| \int_{\mathbb{X}}\diff{y}\, |x|^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, |y|^{-\gamma_{2}}\,f(y) \Big|^{2} \label{S32 doubleInt} \end{align} is not finite once $x$ or $y$ is restricted to suitable regions. Let us define a subset of $\mathfrak{a}^{+}$ consisting of vectors away from the walls: \begin{align*} \mathfrak{a}_{1}\, =\,\lbrace{ H\in\mathfrak{a}^{+}\,|\, \langle{\alpha,H}\rangle\asymp|H|,\,\,\,\forall\,\alpha\in\Sigma^{+} }\rbrace. \end{align*} Then, for any vector $H\in\mathfrak{a}_{1}$, we have \begin{align*} \delta(H)\, \gtrsim\, \begin{cases} |H|^{n-\ell} \qquad&\textnormal{if $|H|$ is bounded from above},\\[5pt] e^{\langle{2\rho,H}\rangle} \qquad&\textnormal{if $|H|$ is bounded from below}. \end{cases} \end{align*} \begin{itemize}[leftmargin=*] \item Let $f$ be a cut-off function such that $\supp{f}=\lbrace{y\in\mathbb{X}\,|\,\frac14\le|y|\le\frac12}\rbrace$. Notice that, for all $0<\sigma<n$, $|x|\le\frac12$, and $\frac14\le|y|\le\frac12$, we have \begin{align*} k_{\sigma}(y^{-1}x)\, \asymp\, |y^{-1}x|^{\sigma-n}\, \gtrsim\, |y|^{\sigma-n} \end{align*} according to \eqref{S3 Riesz ker estim}.
Hence, \begin{align*} \eqref{S32 doubleInt}\, &\gtrsim\, \int_{\lbrace{x\,\in\,K(\exp{\mathfrak{a}_{1}})K\,|\,|x|\le\frac12}\rbrace} \diff{x}\,|x|^{-2\gamma_{1}}\, \Big|\int_{K(\exp{\mathfrak{a}_{1}})K\,\cap\,\supp{f}}\diff{y}\, |y|^{\sigma-n-\gamma_{2}}\Big|^{2}\\[5pt] &\gtrsim\, \int_{|x^{+}|\le\frac12}\diff{x^{+}}\,|x^{+}|^{-2\gamma_{1}+n-\ell}\, \Big| \int_{\frac14\le|y^{+}|\le\frac12}\diff{y^{+}}\, |y^{+}|^{\sigma-\gamma_{2}-\ell} \Big|^{2}\\[5pt] &=\, \Big\lbrace{ \int_{0}^{\frac12}\diff{r}\,r^{-2\gamma_{1}+n-1} }\Big\rbrace\, \Big\lbrace{\underbrace{ \int_{\frac14}^{\frac12}\diff{r}\,r^{\sigma-\gamma_{2}-1}}_{=\,\const} }\Big\rbrace, \end{align*} where the first integral is not finite for any $\gamma_{1}\ge\frac{n}{2}$. The necessity of $\gamma_{2}<\frac{n}{2}$ can be handled in the same way. \item Let $f$ be a bi-$K$-invariant cut-off function such that $\supp{f}=\lbrace{y\in\mathbb{X}\,|\,\frac{1}{2}\le|y|\le1}\rbrace$. For all $0<\sigma<\nu$, $|x|\ge2$, and $\frac{1}{2}\le|y|\le1$, we have $1\le|y^{-1}x|\le2|x|$ and then \begin{align*} k_{\sigma}(y^{-1}x)\, \asymp\, |y^{-1}x|^{\sigma-\nu}\,\varphi_{0}(y^{-1}x)\, \gtrsim\, |x|^{\sigma-\nu}\,\varphi_{0}(y^{-1}x)\, \end{align*} according to \eqref{S3 Riesz ker estim} again. Since $f$ is bi-$K$-invariant, we deduce from \eqref{S32 SplitSph} and \eqref{S2 phi0} that \begin{align*} \eqref{S32 doubleInt}\, &\gtrsim\, \int_{\lbrace{x\,\in\,K(\exp{\mathfrak{a}_{1}})K\,|\,|x|\ge2}\rbrace} \diff{x}\,|x|^{-2\gamma_{1}+2\sigma-2\nu}\, \Big| \int_{\frac12\le|y|\le1}\diff{y}\, |y|^{-\gamma_{2}}\,\varphi_{0}(y^{-1}x)\, \Big|^{2}\\[5pt] &=\, \int_{\lbrace{x\,\in\,K(\exp{\mathfrak{a}_{1}})K\,|\,|x|\ge2}\rbrace} \diff{x}\,|x|^{2\gamma_{2}-2\nu}\,\varphi_{0}^{2}(x)\, \underbrace{\Big| \int_{\frac12\le|y|\le1}\diff{y}\, |y|^{-\gamma_{2}}\,\varphi_{0}(y) \Big|^{2}}_{=\,\const}\\[5pt] &\gtrsim\, \int_{|x^{+}|\ge2}\diff{x^{+}}\, |x^{+}|^{2\gamma_{2}-\nu-\ell}\, \end{align*} which is not finite for any $\gamma_{2}\ge\frac{\nu}{2}$. Similarly, we can show that $\gamma_{1}<\frac{\nu}{2}$ is a necessary condition as well. \end{itemize} \end{remark} \begin{remark}\label{S3 RmkSWCor} In the proof of Proposition \ref{S3 SW prop}, the condition $\gamma_{1}+\gamma_{2}=\sigma$ is only required in the first two cases. In other two cases, it is sufficient to conclude with $\gamma_{1}+\gamma_{2}\ge\sigma$. If we consider the inequality with \textit{inhomogeneous} weights instead of the homogeneous ones, we can get rid of the analysis around the origin, then establish an inhomogeneous version of Stein-Weiss inequality with relaxed conditions $\gamma_{1},\gamma_{2}<\frac{\nu}{2}$ and $\gamma_{1}+\gamma_{2}\ge\sigma$, see the following Corollary. This type of inequality has been considered in \cite{Kai14} for $\gamma_{1}+\gamma_{2}>\sigma$. The endpoint improvement here allows one to deduce straightforwardly the Type (III) smoothing estimate in the full regularity range, see Theorem \ref{main thm SchSmoothing} and \cite[Remark 4.1]{Kai14}. \end{remark} \begin{corollary}\label{S3 SWCor} Let $\sigma>0$, $\gamma_{1},\gamma_{2}<\frac{\nu}{2}$, and $\gamma_{1}+\gamma_{2}\ge\sigma$. Then we have \begin{align}\label{S3 inSW} \|\langle{\cdot}\rangle^{-\gamma_{1}}\,D_{x}^{-\sigma}\, f\|_{L^{2}(\mathbb{X})}\, \lesssim\, \|\langle{\cdot}\rangle^{\gamma_{2}}\,f\|_{L^{2}(\mathbb{X})}. 
\end{align} \end{corollary} \begin{proof} Let us show that the operator $\widetilde{\mathcal{T}}:L^{2}(\mathbb{X})\rightarrow{L}^{2}(\mathbb{X})$ defined by \begin{align*} \widetilde{\mathcal{T}}f(x)\,=\, \int_{\mathbb{X}}\diff{y}\, \langle{x}\rangle^{-\gamma_{1}}\,k_{\sigma}(y^{-1}x)\, \langle{y}\rangle^{-\gamma_{2}}\,f(y) \end{align*} is $L^{2}$-bounded. On the one hand, if $|x|$ and $|y|$ are both large, then $\langle{x}\rangle^{-\gamma_{1}}\langle{y}\rangle^{-\gamma_{2}}\asymp|x|^{-\gamma_{1}}|y|^{-\gamma_{2}}$ for any $\gamma_{1},\gamma_{2}\in\mathbb{R}$ and we go back to cases (i) and (iv) in the proof of Proposition \ref{S3 SW prop}. Notice that in Case (i), if one considers the inhomogeneous weight $\langle{x}\rangle$ instead of the homogeneous one, it is sufficient to take $j\in\mathbb{N}$ instead of $j\in\mathbb{Z}$ in the dyadic decomposition \eqref{S3 dyadic}, and then conclude with the relaxed condition $\gamma_{1}+\gamma_{2}\ge\sigma>0$, see \eqref{S3 estim T1} and \eqref{S3 estim T6T7}. On the other hand, if $|x|$ and $|y|$ are both small, then $|y^{-1}x|$ is also small and $\langle{x}\rangle^{-\gamma_{1}}\langle{y}\rangle^{-\gamma_{2}}$ is bounded for any $\gamma_{1},\gamma_{2}\in\mathbb{R}$. By using the Kunze-Stein phenomenon \eqref{S3 KS1} and the kernel estimate \eqref{S3 Riesz ker estim}, we obtain \begin{align}\label{S3 Jap small1} \int_{\mathbb{X}}\diff{x}\, \Big|\int_{\mathbb{X}}\diff{y}\, \chi_{0}(|y^{-1}x|)\,k_{\sigma}(y^{-1}x)\,f(y)\Big|^{2}\, &=\,\|(\chi_{0}k_{\sigma})*f\|_{L^{2}(\mathbb{X})}^{2}\notag\\[5pt] &\lesssim\,\|\chi_{0}k_{\sigma}\varphi_{0}\|_{L^{1}(\mathbb{X})}^{2}\, \|f\|_{L^{2}(\mathbb{X})}^{2} \end{align} where \begin{align}\label{S3 Jap small2} \|\chi_{0}k_{\sigma}\varphi_{0}\|_{L^{1}(\mathbb{X})}\, \lesssim\, \int_{|x^{+}|\le1}\diff{x^{+}}\, |x^{+}|^{\sigma-n}\,|x^{+}|^{n-\ell}\, <\,+\infty \end{align} for any $\sigma>0$. In the remaining cases, where $|x|$ and $|y|$ are neither both small nor both large, we go back to Case (iii) in the proof of Proposition \ref{S3 SW prop}. Notice that, in contrast to $|x|^{-\gamma_{1}}$, the inhomogeneous weight $\langle{x}\rangle^{-\gamma_{1}}$ has no contribution when $|x|$ is small. Hence the condition $\gamma_{1}<\frac{n}{2}$ is not required. Similarly, we can remove the condition $\gamma_{2}<\frac{n}{2}$ as well when $|y|$ is small. We deduce that $\widetilde{\mathcal{T}}$ is $L^{2}$-bounded, provided that $\gamma_{1},\gamma_{2}<\frac{\nu}{2}$ and $\gamma_{1}+\gamma_{2}\ge\sigma>0$. \end{proof} \subsection{Resolvent estimate and smoothing property} We prove in this part the resolvent estimate stated in Theorem \ref{main thm resolv}. Combining it with the Stein-Weiss inequality Theorem \ref{main thm SW}, we deduce Theorem \ref{main thm smoothing}. As we mentioned in the introduction, the standard scaling argument carried out in \cite{KaYa89,Sug03} fails in the present setting. We prove Theorem \ref{main thm resolv} along the lines in \cite{Chi08,Kai14} with a more careful analysis around the origin, since we are considering the homogeneous weights. We will combine the improved $L^2$-continuity of the $\mathcal{JR}$-transform (Lemma \ref{S2 JR continuity}) with two estimates borrowed from Euclidean Fourier analysis. Recall that $\mathfrak{a}$ is an $\ell$-dimensional flat submanifold of $\mathbb{X}$. Let $g$ be a reasonable function on $\mathfrak{a}$. Then we have the following.
\begin{itemize} \item (Besov embedding) If $\ell=1$ and $\frac12<\theta<\frac32$, we have \begin{align}\label{S3 Morrey} |g(\lambda_{1})-g(\lambda_{2})|\, \lesssim\,|\lambda_{1}-\lambda_{2}|^{\theta-\frac12}\, \||\cdot|^{\theta}\, \mathcal{F}_{\mathfrak{a}}g\|_{L^{2}(\mathfrak{a})} \qquad\forall\,\lambda_{1},\lambda_{2}\in\mathfrak{a}. \end{align} When $\theta=1$, the estimate \eqref{S3 Morrey} is the classical Morrey inequality. For other $\frac12<\theta<\frac32$, it is sufficient to notice that the Hölder-Zygmund space with index $0<\theta-\frac12<1$ is the standard Besov space $B_{\infty\infty}^{\theta-1/2}(\mathfrak{a})$, into which $B_{22}^{\theta}(\mathfrak{a})$ embeds. See, for instance, \cite[Sect. 2.2.2]{Saw18}. \vspace{5pt}\item (Fourier restriction theorem) If $\ell\ge2$ and $\frac12<\theta<\frac{\ell}{2}$, we have \begin{align}\label{S3 Restriction} \int_{|\lambda|=\,r}\diff{\sigma}_{\lambda}\, |(\mathcal{F}_{\mathfrak{a}}g)(\lambda)|^{2}\, \,\lesssim\, r^{2\theta-1}\,\int_{\mathfrak{a}}\diff{\lambda}\, |\lambda|^{2\theta}\,|g(\lambda)|^{2} \qquad\forall\,r>0, \end{align} see for instance \cite[Theorem 5.6]{BlSa92}. Here $\diff{\sigma}_{\lambda}$ denotes the usual surface measure. \end{itemize} \begin{proof}[Proof of Theorem \ref{main thm resolv}] According to the Plancherel formula and the transform \eqref{S2 transformXtoR}, we write \begin{align*} (D^{2\alpha}(D^{2}-\zeta)^{-1}\,f,\,f)_{L^{2}(\mathbb{X})}\, =\,|W|^{-1}\,\int_{B}\diff{b}\,\int_{\mathfrak{a}}\diff{\lambda}\, |\mathbf{c}(\lambda)|^{-2}\,|\mathcal{F}f(\lambda,b)|^{2}\, \frac{|\lambda|^{2\alpha}}{|\lambda|^{2}-\zeta}, \end{align*} where \begin{align*} |\mathbf{c}(\lambda)|^{-2}\,|\mathcal{F}f(\lambda,b)|^{2}\, =\,\big||\mathbf{c}(\lambda)|^{-1}\, \mathcal{F}_{\mathfrak{a}} [\mathcal{R}f(\cdot,b)](\lambda)\big|^{2}\, =\,\big|\mathcal{F}_{\mathfrak{a}} [\mathcal{JR}f(\cdot,b)](\lambda)\big|^{2}, \end{align*} for all $\lambda\in\mathfrak{a}$ and $b\in{B}$. When $\mathbb{X}$ is of rank $\ell=1$, by applying \eqref{S3 Morrey} with $g(\lambda)=\mathbf{c}(\lambda)^{-1}\mathcal{F}f(\lambda,b)$, $\lambda_{2}=0$ and $\theta=1-\alpha$, we have \begin{align*} |\mathbf{c}(\lambda)|^{-2}\,|\mathcal{F}f(\lambda,b)|^{2}\, \lesssim\, |\lambda|^{1-2\alpha}\, \||\cdot|^{1-\alpha}\, \mathcal{JR}f(\cdot,b)\|_{L^{2}(\mathfrak{a})}^{2}, \end{align*} provided that $-\frac12<\alpha<\frac12$. When $\mathbb{X}$ is of rank $\ell\ge2$, we use \eqref{S3 Restriction} with $\theta=1-\alpha$, and obtain \begin{align*} (D^{2\alpha}(D^{2}-\zeta)^{-1}\,f,\,f)_{L^{2}(\mathbb{X})}\, &=\,|W|^{-1}\,\int_{B}\diff{b}\,\int_{0}^{+\infty}\diff{r}\, \frac{r^{2\alpha}}{r^{2}-\zeta}\, \int_{|\lambda|=\,r}\diff{\sigma}_{\lambda}\, \big|\mathcal{F}_{\mathfrak{a}} [\mathcal{JR}f(\cdot,b)](\lambda)\big|^{2}\\[5pt] &\lesssim\,|W|^{-1}\,\int_{B}\diff{b}\,\int_{0}^{+\infty}\diff{r}\, \frac{r}{r^{2}-\zeta}\, \int_{\mathfrak{a}}\diff{\lambda}\, |\lambda|^{2-2\alpha}\, \big|\mathcal{JR}f(\lambda,b)\big|^{2} \end{align*} provided that $1-\frac{\ell}{2}<\alpha<\frac{1}{2}$.
Hence, for all $\ell\ge1$, we have \begin{align*} |\im(D^{2\alpha}(D^{2}-\zeta)^{-1}\,f,\,f)_{L^{2}(\mathbb{X})}|\, &\lesssim\, \|\mathcal{JR}f\|_{L^{2}(\mathfrak{a}\times{B}, \,|\lambda|^{2-2\alpha}\diff{\lambda}\diff{b})}^{2} \Big|\im\int_{0}^{+\infty}\diff{r}\, \frac{r}{r^{2}-\zeta}\Big|\\[5pt] &\lesssim\, \|\mathcal{JR}f\|_{L^{2}(\mathfrak{a}\times{B}, \,|\lambda|^{2-2\alpha}\diff{\lambda}\diff{b})}^{2}, \end{align*} where \begin{align*} \Big|\im\int_{0}^{+\infty}\diff{r}\,\frac{r}{r^{2}-\zeta}\Big|\, \le\,\frac{1}{2}\,\int_{0}^{+\infty}\diff{s}\, \frac{|\im\zeta|}{(s-\re\zeta)^{2}+(\im\zeta)^{2}}\, \le\,\frac{\pi}{2}\, \qquad\forall\,\zeta\in\mathbb{C}\smallsetminus\mathbb{R}. \end{align*} We conclude that, for suitable $\alpha$, \begin{align*} |\im(D^{2\alpha}(D^{2}-\zeta)^{-1}\,f,\,f)_{L^{2}(\mathbb{X})}|\, \lesssim\,\|f\|_{L^{2}(\mathbb{X},|x|^{2-2\alpha}\diff{x})}^{2}, \end{align*} according to Lemma \ref{S2 JR continuity}. \end{proof} \begin{remark} We give here only an estimate for the imaginary part. According to Remark \ref{S1 Kato}, this is enough to deduce the smoothness for the corresponding operator, namely, the smoothing property for the free Schrödinger equation. The real part estimate can be handled along the lines in \cite{Chi08,Kai14}, using the Stein-Weiss inequality \eqref{main thm SW ineq} instead of theirs. This allows one to prove the so-called super-smoothness for the corresponding operator, see \cite{Kat66,KaYa89}. \end{remark} Combining the resolvent estimate \eqref{main thm resolv} with the Stein-Weiss inequality \eqref{main thm SW ineq}, we deduce our Theorem \ref{main thm smoothing}. \begin{proof}[Proof of Theorem \ref{main thm smoothing}] According to Remark \ref{S1 Kato} and Theorem \ref{main thm resolv}, we know that the smoothing property \eqref{main thm smoothing schrodinger} holds for all $-\frac12<\alpha<\frac12$ in rank one ($\ell=1$) and $1-\frac{\ell}{2}<\alpha<\frac12$ in higher ranks ($\ell\ge2$). It remains for us to show that the smoothing property \eqref{main thm smoothing schrodinger} remains valid for all $1-\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace<\alpha\le1-\frac{\ell}{2}$ when $\ell\ge2$. Let $0<\varepsilon<\frac12$ be a small constant. We write \begin{align*} \||x|^{\alpha-1}\,D_{x}^{\alpha}\,u \|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, =\, \||x|^{-(1-\alpha)}\,D_{x}^{-(\frac12-\varepsilon-\alpha)}\, D_{x}^{\frac12-\varepsilon}\,u \|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})} \end{align*} where $\frac12-\varepsilon-\alpha\ge\frac{\ell}{2}-\frac{1}{2}-\varepsilon>\frac{\ell}{2}-1\ge0$ and $1-\alpha<\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace$ fulfil the conditions of the Stein-Weiss inequality \eqref{main thm SW ineq}. Hence, we obtain \begin{align*} \||x|^{\alpha-1}\,D_{x}^{\alpha}\,u \|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \||x|^{-\frac12-\varepsilon}\,D_{x}^{\frac12-\varepsilon}\, u\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}. \end{align*} Notice that \begin{align*} \||x|^{-\frac12-\varepsilon}\,D_{x}^{\frac12-\varepsilon}\, u\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, =\, \||x|^{(\frac12-\varepsilon)-1}\,D_{x}^{\frac12-\varepsilon}\, u\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}, \end{align*} where $1-\frac{\ell}{2}\le0<\frac{1}{2}-\varepsilon<\frac12$ for all $\ell\ge2$. We deduce that the smoothing property \begin{align*} \||x|^{\alpha-1}\,D_{x}^{\alpha}\,u \|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})} \end{align*} remains valid for all $1-\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace<\alpha\le1-\frac{\ell}{2}$ when $\ell\ge2$.
We conclude that the smoothing property \eqref{main thm smoothing schrodinger} holds for all $1-\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace<\alpha<\frac12$ in higher ranks. \end{proof} \begin{remark}\label{S3 remark optimal} On symmetric spaces $G/K$ where $G$ is complex, the regularity condition in Theorem \ref{main thm smoothing} is optimal, i.e., the smoothing property \begin{align*} \||x|^{\alpha-1}\,D_{x}^{\alpha}\,u \|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})} \end{align*} cannot hold for any $\alpha\le1-\frac{\nu}{2}$ or $\alpha\ge\frac12$. When $G$ is complex, we can write the inverse spherical Fourier transform \eqref{S22 HarishChandra} as \begin{align*} f(x)\,=\,\const\,\varphi_{0}(x)\, \int_{\mathfrak{p}}\diff{\lambda}\, \mathcal{H}f(\lambda)\,e^{-i\langle{\lambda,x}\rangle} \qquad\forall\,f\in\mathcal{S}(K\backslash{G/K}), \end{align*} where $\mathfrak{p}$ is an $n$-dimensional flat space, see \cite[Theorem 4.7 and Theorem 9.1]{Hel00}. Let $u_{0}$ be a bi-$K$-invariant function such that its spherical Fourier transform is radial in $\mathfrak{p}$. Then \begin{align*} D_{x}^{\alpha}\,e^{itD_{x}^{2}}u_{0}(x)\, &=\,\const\,\varphi_{0}(x)\, \int_{\mathfrak{p}}\diff{\lambda}\, |\lambda|^{\alpha}\,e^{-it|\lambda|^{2}}\, (\mathcal{H}u_{0})(|\lambda|)\, e^{-i\langle{\lambda,x}\rangle}\\[5pt] &=\,\const\,\varphi_{0}(x)\, \int_{0}^{\infty}\diff{r}\, r^{\alpha}\,e^{-itr^{2}}\, (\mathcal{H}u_{0})(r)\, \int_{|\lambda|=r}\diff{\sigma_{\lambda}}\, e^{-i\langle{\lambda,x}\rangle}, \end{align*} where the inner integral is given, up to a constant, by a Bessel function of the first kind: \begin{align*} \int_{|\lambda|=r}\diff{\sigma_{\lambda}}\, e^{-i\langle{\lambda,x}\rangle}\, =\,r^{\frac{n}{2}}\,|x|^{\frac{2-n}{2}}\,J_{\frac{n-2}{2}}(r|x|). \end{align*} By making the change of variable $r=\sqrt{s}$, we obtain \begin{align*} D_{x}^{\alpha}\,e^{itD_{x}^{2}}u_{0}(x)\, =\,\const\,|x|^{\frac{2-n}{2}}\,\varphi_{0}(x)\, \int_{0}^{\infty}\diff{s}\,e^{-its} s^{\frac{\alpha}{2}+\frac{n}{4}-\frac{1}{2}}\, (\mathcal{H}u_{0})(\sqrt{s})\,J_{\frac{n-2}{2}}(\sqrt{s}|x|). \end{align*} Together with the Plancherel formula (in variable $t$), we deduce that \begin{align} &\||x|^{\alpha-1}\,D_{x}^{\alpha}\,e^{itD^{2}}u_{0} \|_{L^2(\mathbb{R}_{t}\times\mathbb{X})}^{2} \notag\\[5pt] &=\,\const\, \int_{\mathbb{X}}\diff{x}\,|x|^{2\alpha-n}\,\varphi_{0}^{2}(x)\, \int_{0}^{\infty}\diff{s}\,s^{\alpha+\frac{n}{2}-1}\, |(\mathcal{H}u_{0})(\sqrt{s})|^{2}\, \Big|J_{\frac{n-2}{2}}(\sqrt{s}|x|)\Big|^{2} \notag\\[5pt] &=\,\const\, \int_{0}^{\infty}\diff{s}\,s^{\alpha+\frac{n}{2}-1}\, |(\mathcal{H}u_{0})(\sqrt{s})|^{2}\, \int_{\mathbb{X}}\diff{x}\,|x|^{2\alpha-n}\,\varphi_{0}^{2}(x)\, \Big|J_{\frac{n-2}{2}}(\sqrt{s}|x|)\Big|^{2}. \label{S31 innerintegral} \end{align} For any fixed $s>0$, the Bessel function $|J_{\frac{n-2}{2}}(\sqrt{s}|x|)|$ behaves asymptotically as $|x|^{\frac{n-2}{2}}$ when $|x|\rightarrow{0}$ and $|x|^{-{1}/{2}}$ when $|x|\rightarrow\infty$. Moreover, notice on the one hand that \begin{align*} \int_{\lbrace{x\,\in\,{K(\exp\mathfrak{a}_{1})K}\,|\,|x|\le1}\rbrace}\diff{x}\, |x|^{2\alpha-n}\,\varphi_{0}^{2}(x)\,|x|^{n-2}\, &\gtrsim\, \int_{|x^{+}|\le1}\diff{x^{+}}\,|x^{+}|^{2\alpha-2}\,|x^{+}|^{n-\ell}\\[5pt] &=\, \int_{0}^{1}\diff{r}\,r^{2\alpha-2+n-1} \end{align*} which is finite if and only if $\alpha>1-\frac{n}{2}$.
On the other hand, \begin{align*} \int_{\lbrace{x\,\in\,{K(\exp\mathfrak{a}_{1})K}\,|\,|x|\ge1}\rbrace} \diff{x}\,|x|^{2\alpha-n}\,\varphi_{0}^{2}(x)\,|x|^{-1}\, &\gtrsim\, \int_{|x^{+}|\ge1}\diff{x^{+}}\,|x^{+}|^{2\alpha-n+\nu-\ell-1}\\[5pt] &=\, \int_{1}^{+\infty}\diff{r}\,r^{2\alpha-n+\nu-2} \end{align*} which is finite provided that $\alpha<\frac{\nu-n}{2}+\frac{1}{2}$. Here $\mathfrak{a}_{1}\subset\mathfrak{a}^{+}$ is the subset consisting of vectors away from the walls, see Remark \ref{S3 rmk SWoptimal}. Since $n=\nu$ in the case where $G$ is complex, then the inner integral on the right hand side of \eqref{S31 innerintegral} is finite if and only if $1-\frac{\nu}{2}<\alpha<\frac{1}{2}$. Hence, when $G$ is complex, the smoothing property \eqref{main thm smoothing schrodinger} cannot hold for any $\alpha\le1-\frac{\nu}{2}$ or $\alpha\ge\frac12$. \end{remark} \section{Comparison principle on symmetric spaces}\label{Section.4 Comparison} Consider two evolution equations corresponding to operators $a_{1}(D_{x})$ and $a_{2}(D_{x})$: \begin{align*} \begin{cases} (i\partial_{t}+a_{1}(D_{x}))\,u(t,x)=\,0,\\[5pt] u(0,x)\,=\,u_{0}(x), \end{cases} \qquad\textnormal{and}\qquad \begin{cases} (i\partial_{t}+a_{2}(D_{x}))\,v(t,x)=\,0,\\[5pt] v(0,x)\,=\,v_{0}(x), \end{cases} \end{align*} whose solutions are given by $u(t,x)=e^{ita_{1}(D_{x})}u_{0}(x)$ and $v(t,x)=e^{ita_{2}(D_{x})}v_{0}(x)$. The comparison principle allows one to compare smoothing properties between these two different equations when the symbols of $a_{1}(D_{x})$ and $a_{2}(D_{x})$ satisfy certain relations. We extend this tool to symmetric spaces along the lines in \cite[Theorem 2.5]{RuSu12}, since most of our arguments are made in the Cartan subspace $\mathfrak{a}$, which is an $\ell$-dimensional flat submanifold of $\mathbb{X}$. \begin{theorem}[First comparison principle] \label{S4 Comparison principle} Let $\tau_{1},\tau_{2}$ be two continuous functions on $\mathbb{R}_{+}$. Let $a_{1},a_{2}\in\mathcal{C}^{1}(\mathbb{R}_{+})$ be real-valued and strictly monotone functions on the support of a measurable function $\chi$ on $\mathbb{R}_{+}$. If there exist some constants $C>0$ such that two pairs of functions $\lbrace{\tau_{1},a_{1}}\rbrace$ and $\lbrace{\tau_{2},a_{2}}\rbrace$ fulfil the comparison condition \begin{align}\label{S4 CC} \frac{|\tau_{1}(r)|}{|a_{1}'(r)|^{1/2}}\, \le\,C\,\frac{|\tau_{2}(r)|}{|a_{2}'(r)|^{1/2}} \tag{CC} \end{align} for all $r\in\supp\chi$ satisfying $a_{1}'(r)\neq0$ and $a_{2}'(r)\neq0$, then for any measurable function $\omega$ on $\mathbb{X}$, we have \begin{align}\label{S4 CC weight} \|\omega(x)\,\chi(D_{x})\,\tau_{1}(D_{x})\,e^{ita_{1}(D_{x})}\, &u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\notag\\[5pt] &\le\,C\, \|\omega(x)\,\chi(D_{x})\,\tau_{2}(D_{x})\,e^{ita_{2}(D_{x})}\, u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}, \end{align} where the equality holds if \eqref{S4 CC} holds with equality. \end{theorem} \begin{remark} When functions $a_1$ and $a_2$ satisfy \eqref{S4 CC} for all $r\in\mathbb{R}_{+}$, Theorem \ref{S4 Comparison principle} and Corollary \ref{S4 Secondary comparison principle} hold globally without function $\chi$. The reason to introduce such a function into the estimates is that the comparison relation between symbols may vary in different frequencies. This is the case for wave-type equations, see Corollary \ref{corollary freq}. 
\end{remark} \begin{proof} As usual, we assume that all the integrals below make sense, and we perform the calculation on the set where $a_{1}'(r)\neq0$, so that the inverse of $a_{1}$ is differentiable. By using the inverse formula \eqref{S2 Inverse Helgason Fourier} of the Helgason-Fourier transform and polar coordinates, we write \begin{align*} &\chi(D_{x})\,\tau_{1}(D_{x})\, e^{ita_{1}(D_{x})}\,u_{0}(x)\\[5pt] &=\, |W|^{-1}\,\int_{\mathfrak{a}}\diff{\lambda}\,|\mathbf{c}(\lambda)|^{-2}\, (\chi\tau_{1})(|\lambda|)\,e^{ita_{1}(|\lambda|)}\, \int_{B}\diff{b}\,e^{\langle{i\lambda+\rho,\,A(k^{-1}x)}\rangle}\, \mathcal{F}u_{0}(\lambda,b)\\[5pt] &=\, |W|^{-1}\,\int_{0}^{+\infty}\diff{r}\, r^{\ell-1}\,(\chi\tau_{1})(r)\,e^{ita_{1}(r)}\, \underbrace{ \int_{\mathbb{S}^{\ell-1}}\,\diff{\sigma_{\eta}}\,|\mathbf{c}(r\eta)|^{-2}\, \int_{B}\,\diff{b}\, e^{\langle{ir\eta+\rho,\,A(k^{-1}x)}\rangle}\,\mathcal{F}u_{0}(r\eta,b) }_{=\,U(r,x)}. \end{align*} By substituting $r=a_{1}^{-1}(s)$ on the support of $\chi$, we have \begin{align*} &\chi(D_{x})\,\tau_{1}(D_{x})\, e^{ita_{1}(D_{x})}\,u_{0}(x)\\[5pt] &=\, |W|^{-1}\,\int_{a_{1}(\mathbb{R}_{+})}\diff{s}\, |(a_{1}^{-1})'(s)|\,(a_{1}^{-1}(s))^{\ell-1}\, (\chi\tau_{1})(a_{1}^{-1}(s))\, U(a_{1}^{-1}(s),x)\,e^{its}. \end{align*} Applying the Plancherel formula (in variable $t$), we obtain \begin{align*} &\|\chi(D_{x})\,\tau_{1}(D_{x})\,e^{ita_{1}(D_{x})}\, u_{0}(x)\|_{L^{2}(\mathbb{R})}^{2}\\[5pt] &=\,(2\pi)^{-1}|W|^{-2}\, \int_{a_{1}(\mathbb{R}_{+})}\,\diff{s}\, |(a_{1}^{-1})'(s)|^{2}\,|a_{1}^{-1}(s)|^{2\ell-2}\, |(\chi\tau_{1})(a_{1}^{-1}(s))|^{2}\, |U(a_{1}^{-1}(s),x)|^{2}\\[5pt] &=\,(2\pi)^{-1}|W|^{-2}\, \int_{0}^{+\infty}\,\diff{r}\, r^{2\ell-2}\,|\chi(r)|^{2}\, \frac{|\tau_{1}(r)|^{2}}{|a_{1}'(r)|}\, |U(r,x)|^{2}. \end{align*} Here, we have used the substitution $s=a_{1}(r)$ and the identity $(a_{1}^{-1})'(a_{1}(r))=a_{1}'(r)^{-1}$. We deduce from the comparison condition \eqref{S4 CC} that \begin{align*} &\|\chi(D_{x})\,\tau_{1}(D_{x})\, e^{ita_{1}(D_{x})}\,u_{0}(x)\|_{L^{2}(\mathbb{R}_{t})}^{2}\\[5pt] &\le\, C^{2}\,(2\pi)^{-1}|W|^{-2}\, \int_{0}^{+\infty}\,\diff{r}\, r^{2\ell-2}\,|\chi(r)|^{2}\, \frac{|\tau_{2}(r)|^{2}}{|a_{2}'(r)|}\, |U(r,x)|^{2}\\[5pt] &=\,C^{2}\, \|\chi(D_{x})\,\tau_{2}(D_{x})\, e^{ita_{2}(D_{x})}\,u_{0}(x)\|_{L^{2}(\mathbb{R}_{t})}^{2} \end{align*} for all $x\in\mathbb{X}$. Then $\eqref{S4 CC weight}$ follows. \end{proof} \subsection{General order Schrödinger-type equations} The above comparison principle allows us to deduce some new smoothing estimates from the model case. Consider the Schrödinger-type equation of order $m>0$: \begin{align}\label{S4 SchroOrder} (i\partial_{t}+D_{x}^{m})\,u(t,x)=\,0, \qquad\,u(0,x)\,=\,u_{0}(x), \end{align} whose solution is given by $u(t,x)=e^{itD_{x}^{m}}u_{0}(x)$. We will show that the solution to the Cauchy problem \eqref{S4 SchroOrder} satisfies the smoothing property: \begin{align}\label{S4 SmoothSchA} \|A(x,D_{x})\,u\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}, \end{align} namely, Theorem \ref{main thm SchSmoothing}.
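Before recalling the admissible weights $A(x,D)$, we note that the scalar analogue of the key identity in the proof of Theorem \ref{S4 Comparison principle}, namely (with the Fourier normalisation used below)
\begin{align*}
\int_{\mathbb{R}}\diff{t}\,\Big|\int_{0}^{+\infty}\diff{r}\,\tau(r)\,e^{ita(r)}\,U(r)\Big|^{2}\,
=\,2\pi\,\int_{0}^{+\infty}\diff{r}\,\frac{|\tau(r)|^{2}}{|a'(r)|}\,|U(r)|^{2},
\end{align*}
can be checked numerically in a one-dimensional toy model. This is only an illustration of the Plancherel and change-of-variables step, not part of the argument; the Python sketch below uses the illustrative choices $\tau(r)=r^{1/2}$, $a(r)=r^{2}$ and a Gaussian profile $U$, and all numerical parameters are ad hoc.
\begin{verbatim}
import numpy as np

# Toy check of: int_R |int_0^inf tau(r) e^{i t a(r)} U(r) dr|^2 dt
#             = 2*pi * int_0^inf |tau(r)|^2 / |a'(r)| * |U(r)|^2 dr
r = np.linspace(1e-3, 6.0, 4000)      # radial grid (U is negligible near the ends)
dr = r[1] - r[0]
U = np.exp(-(r - 3.0)**2 / 0.5)       # smooth profile concentrated near r = 3
tau = np.sqrt(r)                      # tau(r) = r^{1/2}
a = r**2                              # a(r) = r^2, so a'(r) = 2r
t = np.linspace(-25.0, 25.0, 2500)    # F(t) is negligible outside this range
dt = t[1] - t[0]

# F(t) = int tau(r) U(r) exp(i t a(r)) dr, by direct quadrature
F = np.array([np.sum(tau * U * np.exp(1j * tt * a)) * dr for tt in t])
lhs = np.sum(np.abs(F)**2) * dt
rhs = 2.0 * np.pi * np.sum(tau**2 * U**2 / (2.0 * r)) * dr
print(lhs, rhs)   # the two values should agree to a few digits
\end{verbatim}
The weight $|\tau(r)|^{2}/|a'(r)|$ appearing on the right-hand side is exactly the quantity compared in \eqref{S4 CC}.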
Recall that $A(x,D)$ is defined as one of the following: \begin{table}[ht] \setlength{\tabcolsep}{5pt} \renewcommand{\arraystretch}{2} \begin{tabular}{|c|c|c|c|} \hline \cellcolor{gray!25} Type & \cellcolor{gray!25} $A(x,D)$ & \cellcolor{gray!25} $\ell=1$ & \cellcolor{gray!25} $\ell\ge2$ \\ \hline \textnormal{(I)} & $|x|^{\alpha-\frac{m}{2}}D^{\alpha}$ & $\frac{m-3}{2}<\alpha<\frac{m-1}{2}$ & $\frac{m-\min\lbrace{n,\nu}\rbrace}{2}<\alpha<\frac{m-1}{2}$\\ \hline \textnormal{(II)} &$\langle{x}\rangle^{-s}D^{\frac{m-1}{2}}$ & \multicolumn{2}{c|}{$s>\frac{1}{2}$ \textnormal{and} $m>0$} \\ \hline \textnormal{(III)} &$\langle{x}\rangle^{-s}\langle{D}\rangle^{\frac{m-1}{2}}$ & \multicolumn{2}{c|}{{$s\ge\frac{m}{2}$ \textnormal{and} $1<m<\nu$}} \\ \hline \end{tabular} \vspace{10pt} \caption{Regularity conditions on $\mathbb{X}$ for the Schrödinger-type equations with order $m>0$.} \label{S4 TableSch} \end{table} Let us clarify what is already known about Theorem \ref{main thm SchSmoothing} and what remains for us to prove. The Type (I) estimate is proved for $m=2$ in Theorem \ref{main thm smoothing}. In \cite{Kai14}, the author showed \begin{itemize}[itemsep=5pt] \item the Type (II) estimate for $m>1$; \item the Type (III) estimate for $\ell\ge2$ and $1<m<\ell$; \item the Type (III) estimate for $1<m\le\nu$, but with the restricted condition $s>\frac{m}{2}$. \end{itemize} Notice that if the Type (II) estimate holds for $m=1$, then the Type (III) estimate holds for $s>\frac{m}{2}$ with $m=1$. As we mentioned in Remark \ref{S3 RmkSWCor}, the Type (III) estimate in the critical (optimal) case $s=\frac{m}{2}$ is a consequence of the improved Stein-Weiss inequality \eqref{S3 inSW}, see also \cite[Remark 4.1]{Kai14}. It remains for us to prove the Type (I) estimate for all $m>0$ and complete the Type (II) estimate when $0<m\le1$. Both are consequences of the comparison principle. \begin{proof}[Proof of Theorem \ref{main thm SchSmoothing}] Let $-\frac{1}{2}<\alpha'<\frac12$ when $\ell=1$ and $1-{\min\lbrace{\frac{n}{2},\frac{\nu}{2}}\rbrace}<\alpha'<\frac{1}{2}$ when $\ell\ge2$. Notice that, for any $m>0$ and $r>0$, the two pairs of functions \begin{align*} \lbrace{\tau_{1}(r)=r^{\frac{m}{2}+\alpha'-1},a_{1}(r)=r^{m}}\rbrace \qquad\textnormal{and}\qquad \lbrace{\tau_{2}(r)=r^{\alpha'},a_{2}(r)=r^{2}}\rbrace \end{align*} satisfy the comparison condition \eqref{S4 CC} with $C=\sqrt{\frac{2}{m}}$. Hence \begin{align*} \||x|^{\alpha'-1}\,D_{x}^{\frac{m}{2}+\alpha'-1}\, &e^{-itD_{x}^{m}}\,u_{0}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\\[5pt] &=\,\sqrt{\frac{2}{m}}\, \||x|^{\alpha'-1}\,D_{x}^{\alpha'}\, e^{-itD_{x}^{2}}\,u_{0}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})} \end{align*} according to Theorem \ref{S4 Comparison principle} and Theorem \ref{main thm smoothing}. We deduce the Type (I) smoothing estimate by taking $\alpha=\frac{m}{2}+\alpha'-1$. Now, suppose that $0<m\le1<m'$. For any $r>0$, the two pairs of functions $\lbrace\tau_{1}(r)=r^{{(m-1)}/{2}},$ $a_{1}(r)=r^{m}\rbrace$ and $\lbrace{\tau_{2}(r)=r^{{(m'-1)}/{2}},a_{2}(r)=r^{m'}}\rbrace$ satisfy \eqref{S4 CC} with $C=\sqrt{\frac{m'}{m}}$.
Then, we obtain \begin{align*} \|\langle{x}\rangle^{-s}\,|D_{x}|^{\frac{m-1}{2}}\, &e^{-itD_{x}^{m}}\,u_{0}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\\[5pt] &=\,\sqrt{\frac{m'}{m}}\, \|\langle{x}\rangle^{-s}\,|D_{x}|^{\frac{m'-1}{2}}\, e^{-itD_{x}^{m'}}\,u_{0}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})} \end{align*} for all $s>\frac12$. Hence, the Type (II) smoothing estimate holds for all $m>0$. \end{proof} \subsection{Wave and Klein-Gordon equations} Another important example is the Cauchy problem \begin{align}\label{S4 KGequ} \begin{cases} (\partial_{t}^{2}+D_{x}^{2}+\zeta)\,u(t,x)=\,0,\\[5pt] u(0,x)\,=\,u_{0}(x),\,\partial_{t}|_{t=0}\,u(t,x)\,=\,u_{1}(x), \end{cases} \end{align} which is the wave equation when $\zeta=0$ and the Klein-Gordon equation when $\zeta>0$. To establish the smoothing properties for \eqref{S4 KGequ}, we introduce the following secondary comparison principle. \begin{corollary}[Secondary comparison principle] \label{S4 Secondary comparison principle} Suppose that $s>1/2$ and $\alpha$ satisfies \begin{align}\label{S4 RankCdt} \begin{cases} -\frac12<\alpha<\frac12 &\qquad\textnormal{if $\ell=1$},\\[5pt] 1-\frac{\min\lbrace{n,\nu}\rbrace}{2}<\alpha<\frac12 &\qquad\textnormal{if $\ell\ge2$}. \end{cases} \end{align} Let $a\in\mathcal{C}^{1}(\mathbb{R}_{+})$ be a real-valued and strictly monotone function on the support of a measurable function $\chi$ on $\mathbb{R}_{+}$. Let $\tau\in\mathcal{C}^{0}(\mathbb{R}_{+})$ be such that, for some $C>0$, we have \begin{align}\label{S4 SCC} |\tau(r)|\,\le\,C\,\sqrt{|a'(r)|} \tag{SCC} \end{align} for all $r\in\supp\chi$. Then, the solution to the Cauchy problem \begin{align}\label{S4 SecEqu} (i\partial_{t}+a(D_{x}))u(t,x)\,=\,0, \qquad\,u(0,x)\,=\,u_{0}(x), \end{align} satisfies \begin{align} \|\langle{x}\rangle^{-s}\,\chi(D_{x}) \tau(D_{x})\,e^{ita(D_{x})}\, u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\,\|u_{0}\|_{L^{2}(\mathbb{X})}, \label{S4 SCC1}\\[5pt] \||x|^{\alpha-1}\,\chi(D_{x})\,D_{x}^{\alpha-\frac{1}{2}}\, \tau(D_{x})\,e^{ita(D_{x})}\, u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\,\|u_{0}\|_{L^{2}(\mathbb{X})}. \label{S4 SCC2} \end{align} \end{corollary} \begin{proof} This corollary is a straightforward consequence of the first comparison principle and the smoothing property of the Schrödinger equation. Notice that if $\tau(r)$ and $a(r)$ satisfy the second comparison condition \eqref{S4 SCC} with constant $C$, then $\lbrace{\tau_{1}(r)=r^{\alpha-1/2}\tau(r),a_{1}(r)=a(r)}\rbrace$ and $\lbrace{\tau_{2}(r)=r^{\alpha},a_{2}(r)=r^{2}}\rbrace$ fulfil \eqref{S4 CC} with constant $\sqrt{2}\,C$. Hence, for all $\alpha$ satisfying \eqref{S4 RankCdt}, we have \begin{align*} \||x|^{\alpha-1}\,\chi(D_{x})\,D_{x}^{\alpha-\frac{1}{2}}\, &\tau(D_{x})\, e^{ita(D_{x})}\,u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\\[5pt] &\lesssim\, \||x|^{\alpha-1}\,\chi(D_{x})\,D_{x}^{\alpha}\, e^{itD_{x}^{2}}\,u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\,\|u_{0}\|_{L^{2}(\mathbb{X})} \end{align*} according to Theorem \ref{S4 Comparison principle} and Theorem \ref{main thm smoothing}.
Similarly, since $\lbrace{\tau(r),a(r)}\rbrace$ and $\lbrace{\tau_{3}(r)=r^{{(m-1)}/{2}},a_{3}(r)=r^{m}}\rbrace$ satisfy \eqref{S4 CC} for any $m>0$, we deduce from Theorem \ref{S4 Comparison principle} and Theorem \ref{main thm smoothing} again that \begin{align*} \|\langle{x}\rangle^{-s}\,\chi(D_{x}) &\tau(D_{x})\,e^{ita(D_{x})}\, u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\\[5pt] &\lesssim\, \|\langle{x}\rangle^{-s}\,\chi(D_{x}) D_{x}^{\frac{m-1}{2}}\,e^{itD_{x}^{m}}\, u_{0}(x)\|_{L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\,\|u_{0}\|_{L^{2}(\mathbb{X})} \end{align*} for any $s>\frac12$. \end{proof} By applying the above corollary with $a(r)=\sqrt{r^{2}+\mu}$, where $\mu\ge0$, we obtain the following smoothing estimates in different frequencies. Notice that, with implicit constants depending only on $\mu$, \begin{align*} \sqrt{|a'(r)|}\,\gtrsim\, \begin{cases} 1 &\quad\,\textnormal{if}\,\,\,\mu=0\,\,\, \textnormal{or}\,\,\,r>1,\\[5pt] \sqrt{r} &\quad\,\textnormal{if}\,\,\,0<r\le1. \end{cases} \end{align*} \begin{corollary}\label{corollary freq} Suppose that $s>1/2$ and $\alpha$ satisfies \eqref{S4 RankCdt}. Let $\chi$ be a smooth cut-off function on $\mathbb{R}_{+}$ such that $\chi=1$ around the origin and denote by $U_{l}=\chi(D)u_{0}$ and $U_{h}=(1-\chi(D))u_{0}$ the initial data corresponding to the low and high frequencies. Then, for all $\mu\ge0$, \begin{align} \||x|^{\alpha-1}D^{\alpha}\, e^{\pm{it}\sqrt{D_{x}^{2}+\mu}}\,U_{l}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|U_{l}\|_{L^{2}(\mathbb{X})}, \label{S4 low1}\\[5pt] \||x|^{\alpha-1}D^{\alpha-\frac{1}{2}}\, e^{\pm{it}\sqrt{D_{x}^{2}+\mu}}\,U_{h}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|U_{h}\|_{L^{2}(\mathbb{X})}, \label{S4 high1}\\[5pt] \|\langle{x}\rangle^{-s}\,D_{x}^{\frac{1}{2}}\, e^{\pm{it}\sqrt{D_{x}^{2}+\mu}}\,U_{l}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|U_{l}\|_{L^{2}(\mathbb{X})}, \label{S4 low2}\\[5pt] \|\langle{x}\rangle^{-s}\, e^{\pm{it}\sqrt{D_{x}^{2}+\mu}}\,U_{h}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|U_{h}\|_{L^{2}(\mathbb{X})}. \label{S4 high2} \end{align} Moreover, in the limiting case where $\mu=0$, we have better estimates in the low-frequency part: \begin{align} \||x|^{\alpha-1}D_{x}^{\alpha-\frac{1}{2}}\, e^{\pm{it}D_{x}}\,U_{l}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|U_{l}\|_{L^{2}(\mathbb{X})}, \label{S4 low3}\\[5pt] \|\langle{x}\rangle^{-s}\, e^{\pm{it}D_{x}}\,U_{l}(x)\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|U_{l}\|_{L^{2}(\mathbb{X})}. \label{S4 low4} \end{align} \end{corollary} The usual way to relate smoothing estimates of wave and Schrödinger equations relies on changing variables in the corresponding restriction theorems. The previous corollary allows us to relate them simply according to the comparison principle. By combining estimates \eqref{S4 high1} and \eqref{S4 low3}, as well as \eqref{S4 high2} and \eqref{S4 low4}, we deduce the following smoothing estimates for the wave equation. \begin{theorem}\label{S4 wave thm} Consider the Cauchy problem \eqref{S4 KGequ} with $\zeta=0$, namely, the wave equation.
We have the smoothing properties \begin{align} \|\langle{x}\rangle^{-s}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}\,+\,\|D_{x}^{-1}\,u_{1}\|_{L^{2}(\mathbb{X})}, \label{S4 wave1}\\[5pt] \||x|^{\beta-\frac{1}{2}}\,D_{x}^{\beta}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}\,+\,\|D_{x}^{-1}\,u_{1}\|_{L^{2}(\mathbb{X})}, \label{S4 wave2} \end{align} for any $s>\frac12$ and $\beta$ satisfies \begin{align} \begin{cases} -1<\beta<0 &\qquad\textnormal{if}\,\,\,\ell=1, \\[5pt] \frac{1-\min\lbrace{n,\nu}\rbrace}{2}<\beta<0 &\qquad\textnormal{if}\,\,\,\ell\ge2. \end{cases} \label{S4 RankCdtbeta} \end{align} \end{theorem} Similar estimates such as \eqref{S4 wave1} and \eqref{S4 wave2} are well-known in $\mathbb{R}^{N}$ for $N\ge2$ and $\frac{1-N}{2}<\beta<0$, see \cite{Ben94,RuSu12}. In particular, since the regularity range of the Kato-type smoothing property of the Schrödinger equation is wider on $\mathbb{H}^2$ (see Remark \ref{S1 Kato}), analogous phenomenon appears in studying the wave equation: estimate \eqref{S4 wave2} holds for all $-1<\beta<0$ on $\mathbb{H}^2$, while similar estimate holds on $\mathbb{R}^2$ if and only if $-\frac{1}{2}<\beta<0$. By combining \eqref{S4 high2} and \eqref{S4 low1} (with $\alpha=0$), we can deduce the following smoothing property for the Klein-Gordon equation. \begin{theorem}\label{S4 KG thm} Consider the Cauchy problem \eqref{S4 KGequ} with $\zeta>0$, namely, the Klein-Gordon equation. We have the smoothing property \begin{align}\label{S4 smoothingKG} \|\langle{x}\rangle^{-1}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, &\lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}\,+\,\|D_{x}^{-1}\,u_{1}\|_{L^{2}(\mathbb{X})}. \end{align} \end{theorem} Similar estimate as \eqref{S4 smoothingKG} has been established in $\mathbb{R}^{N}$ for $N\ge3$, see \cite{Ben94,RuSu12}. Moreover, if one considers the weight $\langle{x}\rangle^{-s}$ with $s>1$ instead of $\langle{x}\rangle^{-1}$, similar smoothing property remains valid on $\mathbb{R}^{2}$. In our setting, the estimate \eqref{S4 smoothingKG} always holds even in the $2$-dimensional case. The reason is that $|x|^{-1}$ is $D^{2}$-smooth on $\mathbb{H}^{2}$, which is not the case on $\mathbb{R}^{2}$. \subsection{Other examples}\label{subsection other examples} Many other equations in the Euclidean setting admit the smoothing properties, but are less considered on more general manifolds because of the lack of physical backgrounds. From the point of view of mathematical analysis, the above arguments also allow us to deduce their smoothing properties on symmetric spaces easily. The following are two examples. \begin{corollary}[Smoothing estimate of the relativistic Schrödinger equation]\label{S4 relaSch} Consider the Cauchy problem \begin{align*} \begin{cases} (i\partial_{t}-\sqrt{1+D_{x}^{2}})\,u(t,x)=\,0,\\[5pt] u(0,x)\,=\,u_{0}(x). \end{cases} \end{align*} Then, we have the smoothing property \begin{align} \|\langle{x}\rangle^{-1}\,u\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}. \label{S4 rela} \end{align} \end{corollary} Analogous estimate holds in $\mathbb{R}^{N}$ for all $N\ge3$, and the order of its weight $\langle{x}\rangle^{-1}$ is sharp, see \cite{BeNe97,Wal02}. Notice that we do not need the limiting absorption principle used in \cite{BeNe97}. The smoothing property \eqref{S4 rela} is a straightforward consequence of estimates \eqref{S4 high2} and \eqref{S4 low1}. 
Therefore, the same phenomenon as for the Klein-Gordon equation occurs on $\mathbb{H}^{2}$. Beyond the Schrödinger-type equation with \textit{constant coefficients}, we can also deduce similar smoothing properties for some equations with \textit{time-variable coefficients} on symmetric spaces. Consider the Cauchy problem \begin{align}\label{S4 SchroOrderTime} (i\partial_{t}+\theta'(t)D_{x}^{m})\,u(t,x)=\,0, \qquad\,u(0,x)\,=\,u_{0}(x), \end{align} where $\theta$ is a suitable function on $\mathbb{R}$. In \cite{FeRu20}, the authors established the comparison principle and some smoothing properties for \eqref{S4 SchroOrderTime} in $\mathbb{R}^{N}$ when $\theta$ satisfies \begin{align}\label{S4 timecoefhyp} \begin{cases} \theta\in\mathcal{C}^{1}(\mathbb{R}),\\ \theta(0)=0,\\ \textnormal{$\theta$ is strictly monotone or $\theta'$ vanishes at finitely many points}. \end{cases} \end{align} See also \cite{KPRV05,MMT12,CiRe14,FeSt21} and the references therein for equations with other variable coefficients. Notice that the Cauchy problem \eqref{S4 SchroOrderTime} with $\theta$ satisfying \eqref{S4 timecoefhyp} covers the Schrödinger-type equation \eqref{S4 SchroOrder} if one sets $\theta(t)=t$. The following analogous property is a straightforward consequence of \cite[Lemma 2.1]{FeRu20} and Theorem \ref{main thm SchSmoothing}. \begin{corollary}\label{S4 timeSch} Suppose that $\theta$ meets the condition \eqref{S4 timecoefhyp} and $A(x,D)$ is described as in Table \ref{S1 TableSch}. Then, we have \begin{align} \||\theta'(t)|^{\frac{1}{2}}\,A(x,D_{x})\,e^{i\theta(t)D_{x}^{m}}\,u_{0}\|_{ L^{2}(\mathbb{R}_{t}\times\mathbb{X})}\, \lesssim\, \|u_{0}\|_{L^{2}(\mathbb{X})}. \end{align} \end{corollary}
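The mechanism behind Corollary \ref{S4 timeSch} is, roughly speaking, the elementary change of variables $s=\theta(t)$, which converts the weighted time integral into an unweighted one. The following Python sketch illustrates this substitution in a scalar toy model with the illustrative choice $\theta(t)=t^{3}$ (so that $\theta'$ vanishes only at $t=0$); it is a sanity check of the change of variables only, not of the full statement.
\begin{verbatim}
import numpy as np

# Scalar toy check of  int_R |theta'(t)| g(theta(t)) dt = int_R g(s) ds
# for a monotone theta with theta'(0) = 0, here theta(t) = t^3.
t = np.linspace(-4.0, 4.0, 400001)
dt = t[1] - t[0]
theta = t**3
theta_prime = 3.0 * t**2
g = lambda s: np.exp(-s**2)          # any integrable profile

lhs = np.sum(np.abs(theta_prime) * g(theta)) * dt
s = np.linspace(-64.0, 64.0, 400001)
rhs = np.sum(g(s)) * (s[1] - s[0])
print(lhs, rhs)    # both close to sqrt(pi) ~ 1.7725
\end{verbatim}
Applying this identity with $g(s)=\|A(\cdot,D_{x})\,e^{isD_{x}^{m}}u_{0}\|_{L^{2}(\mathbb{X})}^{2}$ is, heuristically, how the weight $|\theta'(t)|^{1/2}$ in Corollary \ref{S4 timeSch} arises.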
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Quantum sensing} Early in the 20th Century a series of experiments revealed that all waves exhibit particle-like behaviour, all particles exhibit wave-like behaviour, and that these phenomena are associated with an intrinsic indeterminacy in the outcomes of measurements. The nature of physical reality was questioned, and the language of classical mechanics was replaced by the formalism of quantum mechanics. It is now accepted that no matter how great the skill of an observer, the outcome of a single measurement on any simple physical system of any basic physical quantity is profoundly uncertain. When describing how systems evolve with time, we are forced into a new mechanics where probability distributions evolve in a deterministic manner, rather than the dynamical variables themselves. When several variables are measured, either simultaneously or sequentially, conditional probabilities come into play, and experimental measurements of one kind can influence those of another without any classical interactions being present. No physical quantity can be regarded as having an {\em actual} value until a measurement is made. This way of thinking is not merely a rebranding of classical statistical physics, where it is not humanly possible to keep track of every microscopic degree of freedom, such as the motion of every water molecule in a steam engine, but is intrinsic to the way in which we gather information about the physical world. Should scientists have ever gone down the quantum rabbit hole? Well, quantum mechanics is not optional, but is essential if we are to build mathematical models that replicate the behaviour of experimental systems. Quantum mechanics applies to all dynamical variables (not merely the mechanics of elementary particles), and therefore it applies to electrical quantities such as voltage, current, power, electric and magnetic fields, and dipole moments, etc. When devices and circuits are operated at low physical temperatures (10~mK to 4~K) to minimise thermal noise, the mysterious world of quantum mechanics is revealed, and it becomes {\em necessary} to use the techniques of quantum mechanics to describe the behaviour of electrical circuits. Because of the need to track probability distributions, the analysis of circuit elements, such as transformers, transmission lines, power detectors, mixers and amplifiers becomes complicated, and one is forced into asking questions about the influence of quantisation (vacuum fluctuations, squeezing, back action and entanglement) on the fidelity and sensitivity of electrical measurements. From a measurement perspective there is some target that we wish to probe, and this target must be described quantum mechanically. Likewise, there is a sensor, which is usually part of a larger electrical circuit, which itself must be described quantum mechanically. The purpose of the sensor is to create a macroscopic quantity that can be recorded, and which carries faithful information about properties of the target. {\em Quantum Sensing} refers to the quantisation of the dynamical variables of the target and the interaction with the quantum behavior of the instrument carrying out the measurement. This interaction is potentially complicated because of the need to minimise the variance of the recorded signal, taking into account quantum uncertainty, and the inevitable quantum disturbance caused by the sensor. The quantum states of the target and sensor evolve in time in a mutually interactive way, which is the essence of quantum sensing. 
This article introduces the emerging field of quantum sensors and electronics for fundamental physics. The work is first put in context by describing a number of fundamental problems in physics where ultra-low-noise experiments are needed. Measurement principles are then discussed, focusing on electromagnetic fields in the microwave (30 cm, 1 GHz) to far-infrared (30 $\mu$m, 10 THz) wavelength range, where the transition from wave-like to particle-like behaviour is most pronounced. Special consideration is given to passive circuits, power detectors and signal amplifiers, but the principles are closely related to other kinds of quantum measurement such as stress, strain, speed and orientation. Towards the end of the article, the relative sensitivities of power detectors and amplifiers are compared, and a number of ultra-low-noise superconducting devices are described. Throughout the article two points are emphasised: (i) current experiments fall short of theoretical sensitivity limits, and a new generation of technology is needed to push down into the quantum-dominated regime; (ii) considerable innovation is possible by bringing together concepts from quantum information theory, quantum field theory, classical circuit theory, and device physics into a mathematical framework that can be used for modelling. \section{Why are ultra-sensitive measurements needed?} A new generation of fundamental physics experiments requires access to a family of ultra-low-noise sensors that can operate over the radio, microwave and far-infrared regions of the electromagnetic spectrum. For example, there is a need to produce an all-sky map of the polarisation state of the Cosmic Microwave Background Radiation (CMBR) to one part in 10$^{9}$ to understand the role of gravitational waves in forming structure in the early Universe \cite{Aba1,Xu1}. There is a need to observe the most distant galaxies (z$>$5) to understand how galaxies first formed and evolved, and arrived at the local universe we see today. There is a need to understand the nature of Dark Matter (DM), with one strongly motivated possibility being the existence of a family of low-mass particles ($\mu$eV to meV) that interact only weakly with electromagnetic fields \cite{Ant1,Sik1}. Determining the absolute mass of the neutrino remains one of the most pressing problems in laboratory physics, and an international effort is underway to understand how it can be achieved by measuring, to one part in 10$^{6}$, the energies of individual electrons released during the radioactive decay of Tritium \cite{Ash1,Asn1}. There is a need for laboratory experiments that can probe the nature of spacetime and its relationship with the fundamental postulates of quantum field theory. These experiments, and others, require sensors that push at the limits imposed by quantum mechanics. \section{Power detection - a classical perspective} \label{sec_pow_det} At long wavelengths, scientists measure power, both dissipative and reactive, whereas at short wavelengths, they measure photon rates, and if possible count photons. Whilst it is true that average power $P$ is related to average photon rate $W$ by $P = \hbar \omega W$, the quantisation of the electromagnetic field goes well beyond this simple equality, and on moving from microwave to optical wavelengths, the behaviour of low-noise instruments changes significantly. 
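To put rough numbers on this, the short sketch below (a Python illustration added here, not part of the original analysis) evaluates the photon energy and the mean photon arrival rate at the two ends of the wavelength range considered; the 1 fW signal power is an arbitrary illustrative choice.
\begin{verbatim}
# Sketch: photon energy and photon arrival rate for a fixed signal power.
# The 1 fW power level is purely illustrative (not taken from the text).
h = 6.62607015e-34   # Planck constant [J s]
e = 1.602176634e-19  # conversion factor [J per eV]

P_signal = 1e-15     # assumed signal power [W]

for nu in (1e9, 10e12):                  # 1 GHz (microwave), 10 THz (far-infrared)
    E_photon = h * nu                    # photon energy [J]
    W = P_signal / E_photon              # mean photon rate [photons/s]
    print(f"nu = {nu:9.3e} Hz : h*nu = {E_photon/e*1e6:10.2f} ueV, "
          f"rate = {W:9.3e} photons/s")
\end{verbatim}
At 1 GHz the arrival rate for even this tiny power is of order $10^{9}$ photons per second, far too high for individual events to be resolved, whereas at 10 THz the rate is modest and photon counting becomes conceivable.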
\subsection{Radiation and noise} \label{sec_rad_nse} At long wavelengths, waveguiding systems tend to be {\em single mode}, meaning that there is a single spatial transverse degree of freedom available for carrying power from the source to the detector. At infrared and optical wavelengths, free-space beams tend to be highly {\em multimode}, meaning that there is a large number of transverse degrees of freedom available. By counting modes in wavevector space ($k$-space), the longitudinal mode rate $J$ can be calculated for (i) a transverse electromagnetic (TEM) transmission line, (ii) radiation into a half space, and (iii) radiation into solid angle $\Omega = 4 \pi \sin^{2} (\theta_{\rm m}/2)$, where $\theta_{\rm m}$ is the half opening angle of the beam: Table~\ref{modes}. Because most modes travel obliquely to the assumed optical axis, it is necessary to weight each mode by a projected area, giving an effective solid angle $\Omega_{\rm eff} = \pi \sin^{2} (\theta_{\rm m})$: $\Omega_{\rm eff} \rightarrow \Omega$ as $\theta_{\rm m} \rightarrow 0$. The overall differential modal flux can then be written ${\rm d}J = N {\rm d} \omega / 2 \pi$, where $N = A \Omega / \lambda^{2}$ is the effective number of transverse modes (not including polarisation). The effective number of transverse modes can be calculated rigorously using diffraction theory: the simple expression for $N$ is accurate only when the throughput is large, and when the throughput is small the transmission spectrum tapers off gradually with mode number. The modal structure of fields is classical, but it is important to appreciate that each mode constitutes a degree of freedom, which must be quantised accordingly. \begin{table} \begin{center} {\begin{tabular}{cccc} & Mode Rate & Effective Mode Rate & Number of Modes \\ \hline TEM line & ${\rm d}J = \frac{1}{2 \pi} {\rm d} \omega$ & $ {\rm d} J = \frac{1}{2 \pi} {\rm d} \omega$ & 1 \\ \hline Half Space & ${\rm d}J = \frac{A \omega^{2}}{4 \pi^{2} c^{2}} {\rm d} \omega $& ${\rm d} J = \frac{A \omega^{2}}{8 \pi^{2} c^{2}} {\rm d} \omega $ & $N = \frac{1}{2} \frac{A \Omega}{\lambda^{2}} = \frac{A \Omega_{\rm eff}}{\lambda^{2}}$ \\ \hline Solid Angle $\Omega$ & $ {\rm d}J = \frac{A \omega^{2} \Omega}{8 \pi^{3} c^{2}} {\rm d} \omega $ & $ {\rm d} J = \frac{A \omega^{2} \Omega_{\rm eff}}{8 \pi^{3} c^{2}} {\rm d} \omega $ & $N = \frac{A \Omega_{\rm eff}}{\lambda^{2}}$ \\ \hline \end{tabular}} \end{center} \caption{\label{modes} Longitudinal mode rate ${\rm d}J$, effective longitudinal mode rate after projecting onto a plane, and effective number of transverse modes $N$ for a TEM transmission line, radiation into a half space and radiation into physical solid angle $\Omega$. $A$ is the area of the source, and $\Omega_{\rm eff}$ is the effective solid angle of the beam. 
} \end{table} If each mode, comprising both longitudinal and transverse parts, is quantised and in a thermal state at temperature $T_{\rm p}$, the average power $P$ and its variance $(\Delta P)^{2}$ become \begin{align} \nonumber P & = \frac{A \Omega_{\rm eff}}{\lambda^{2}} \int \frac{{\rm d} \omega}{2 \pi} \, \frac{\hbar \omega}{e^{\hbar \omega / k T_{\rm p}} - 1} \\ \label{A1} & = \int {\rm d} \nu \, P(\nu) \\ \nonumber (\Delta P)^{2} & = \frac{1}{\tau} \frac{A \Omega_{\rm eff}}{\lambda^{2}} \int \frac{{\rm d} \omega}{2 \pi} \, \frac{ ( \hbar \omega )^{2} e^{\hbar \omega / k T_{\rm p}}}{ \left( e^{\hbar \omega / k T_{\rm p}} -1 \right)^{2}} \\ \label{A2} & = \frac{1}{\tau} \int {\rm d} \nu \, \left[ \frac{P(\nu)^{2}}{N} + h \nu P(\nu) \right], \end{align} where $P(\nu) = N h \nu n(\nu)$ is the average spectral power, $n(\nu)$ is the single-mode thermal occupancy, and $\tau$ is the time over which energy is collected to give the recorded power. If a perfectly matched planar detector is illuminated by a thermal field, these expressions give the average power and fluctuations in power measured. Although $P$ can be subtracted from the recorded output to reveal any additional signal, the fluctuations are troublesome, and act as a noise source in addition to any noise generated by the detector itself. The first term in Equation (\ref{A2}) reduces to the radiometer equation $\Delta T_{\rm p} = T_{\rm p} / \sqrt{B_{\rm pre} \tau}$ for a single-mode detector when $\hbar \omega < k T_{\rm p}$, where $B_{\rm pre}$ is the pre-detection bandwidth. $\Delta T_{\rm p}$ is the amount by which the source temperature must change to produce an output that is discernible above the noise. Notice that $B_{\rm pre} \tau \approx \tau / \tau_{\rm c}$ is the number of independent samples having coherence time $\tau_{\rm c}$ in the integration period $\tau$. The first term in Equation (\ref{A2}) can be regarded as coming from Gaussian distributed fluctuations in the envelope of a classical wave. The second term is dominant when $\hbar \omega > k T_{\rm p}$, and has a variance that is proportional to the mean, which is characteristic of Poisson statistics. It is indicative of the photon counting statistics of a coherent quantum state, where the variance in occupancy is equal to the mean. Thus, classical fluctuations tend to dominate at low frequencies and quantum fluctuations at high frequencies, but in general they appear together and add as uncorrelated fluctuations, which is suggestive of different physical origins. For a 10 mK source, the changeover occurs at about 200 MHz. This blended behaviour can also be understood in terms of a Poisson mixture comprising an ensemble of Poisson distributions having continuously distributed means. Assume that there is a single transverse mode, for example a TEM transmission line. Suppose that the individual longitudinal modes, each lasting about $1/{\rm d} \nu$, are in coherent states, but that the complex amplitudes $\alpha$ vary. By definition, a coherent state is an eigenstate of the annihilation operator, $\hat{a} | \alpha \rangle = \alpha | \alpha \rangle$; it corresponds most closely to a coherent classical wave having complex amplitude $\alpha$. $\hat{a}$ is not Hermitian, and so does not represent a directly-measurable single quantity, unlike the in-phase and out-of-phase components. Let $P(\xi)$ be the probability distribution of $\xi \equiv |\alpha|^{2}$, which for a coherent state is also the average occupancy $\langle n \rangle$. 
The probability of detecting $n$ photons in a longitudinal mode is given by the conditional probability $P(n|\xi)$, but because we are interested in the probability of detecting $n$ photons over the whole ensemble, equivalently over a long integration time $\tau > 1/{\rm d} \nu$, \begin{equation} \label{A3} P(n) = \int {\rm d} \xi \, P(n|\xi) P(\xi). \end{equation} Using $E[n] = \sum_{n} n P(n)$ for the expectation value, and remembering that for a Poisson distribution $E[n|\xi] = \xi$, it can be shown using straightforward algebra that \begin{equation} \label{A4} E[n] = E[\xi] = E[|\alpha|^{2}], \end{equation} which reproduces Equation (\ref{A1}). The average power in a wave having a randomly varying amplitude gives the average occupancy of the underlying Poisson process. More interestingly, the variance $V[n] = E[n^{2}] - (E[n])^{2}$ becomes \begin{align} \nonumber V[n] & = V[|\alpha|^{2}] + E[|\alpha|^{2}] \\ \label{A5} & = V [|\alpha|^{2}] + V[n|E[|\alpha|^{2}]]. \end{align} The variance of the occupancy is the sum of the variance of $|\alpha|^{2}$, the classical noise, and the variance of a pure Poisson process $V[n|E[|\alpha|^{2}]]$, the quantum noise, having $E[|\alpha|^{2}]$ as its parameter. If the power is averaged for time $\tau$, the variance in the observations is \begin{align} \nonumber (\Delta P)^{2} & = \frac{1}{\tau} \int {\rm d} \nu \, N (\hbar \omega)^{2} V[n] \\ \label{A6} & = \frac{1}{\tau} \int {\rm d} \nu \, N (\hbar \omega)^{2} V [|\alpha|^{2}] \\ \nonumber & + \frac{1}{\tau} \int {\rm d} \nu \, N (\hbar \omega)^{2} V \left[ n|E[|\alpha|^{2}] \right]. \end{align} The first term of Equation (\ref{A6}) is awkward because $V [|\alpha|^{2}]$ is needed, which depends on the unspecified distribution $P(\xi)$. Assume, in the spirit of the central limit theorem, that the quadrature components of the complex amplitude $\alpha$ are Gaussian variates. $P(\xi)$ is then a chi-squared distribution with two degrees of freedom, i.e.\ an exponential distribution. A Gaussian distribution, however, has the feature that all of its moments can be calculated from the first and second moments. So, a more elegant approach is to say that $V [|\alpha|^{2}] = E[\alpha \alpha^{\ast} \alpha \alpha^{\ast}] - (E[\alpha \alpha^{\ast}])^{2}$, and then use the moment theorem (Isserlis' theorem) for complex Gaussian random processes \cite{Ree1,Iss1} to give $V [|\alpha|^{2}] = (E[|\alpha|^{2}])^{2} = n(\nu)^{2}$. The second term of Equation (\ref{A6}) is more straightforward because the variance is that of a pure Poisson distribution, $V \left[ n|E[|\alpha|^{2}] \right] = n(\nu)$, and so overall \begin{align} \label{A7} (\Delta P)^{2} & = \frac{1}{\tau} \int {\rm d} \nu \, \left[ \frac{P(\nu)^{2}}{N} + h \nu P(\nu) \right], \end{align} which reproduces the classical and quantum noise terms in Equation (\ref{A2}). An electromagnetic wave exhibits both wave-like and particle-like behaviour. Thus, heuristically, and with great caution, the image is that of a `blizzard' of particulate photons having an inhomogeneous density distribution. Photon counting statistics can be used to characterise different kinds of bunching at low light levels. At long wavelengths, photon energies are small (4 $\mu$eV at 1 GHz) whereas at short wavelengths they are large (40 meV at 10 THz). The photon rate in a wave carrying fixed power increases significantly as the frequency falls, making it difficult to distinguish individual events. 
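Returning to the Poisson-mixture picture, the following sketch (Python, with an arbitrary illustrative mean occupancy) is a minimal numerical check of Equation (\ref{A5}): complex amplitudes are drawn with Gaussian quadratures, photon counts are drawn conditionally from Poisson distributions, and the resulting variance is compared with the Gaussian moment-theorem result.
\begin{verbatim}
# Sketch: Monte-Carlo check of Equation (A5) for a Poisson mixture.
# Each longitudinal mode is given a complex amplitude alpha with Gaussian
# quadratures (a thermal field), and a photon number drawn from a Poisson
# distribution with mean |alpha|^2.  The mean occupancy is illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_mean, n_samples = 2.0, 1_000_000          # target mean occupancy (assumed)

# Complex Gaussian amplitudes: each quadrature has variance n_mean / 2.
alpha = rng.normal(scale=np.sqrt(n_mean / 2), size=(n_samples, 2))
xi = alpha[:, 0]**2 + alpha[:, 1]**2        # |alpha|^2, exponentially distributed
n = rng.poisson(xi)                         # photon counts, conditionally Poisson

print("E[n]                :", n.mean())                  # ~ n_mean
print("V[n]  (simulated)   :", n.var())                   # ~ n_mean + n_mean^2
print("V[xi] + E[xi]       :", xi.var() + xi.mean())      # Equation (A5)
print("n_mean + n_mean^2   :", n_mean + n_mean**2)        # moment-theorem result
\end{verbatim}
The simulated variance reproduces the sum of the classical term $n(\nu)^{2}$ and the shot term $n(\nu)$, mirroring the two contributions in Equation (\ref{A7}).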
Generally speaking, as the frequency increases, a beam accrues more transverse modes and the quantum statistics changes, leading to rich and complex behaviour. At long wavelengths, background-limited detectors are characterised in terms of average photon flux and temporal variations in the flux, whereas at short wavelengths, they are characterised in terms of photon counting statistics and average dark count rate. \subsection{Multimode power detectors} \label{sec_mul_det} Section (\ref{sec_rad_nse}) assumes that the output of a detector replicates the statistical behaviour of the power at its input. When a detector can absorb energy through a number of transverse modes simultaneously, the situation is more complicated because it will in general respond differently to each of the modes. The characteristics of multimode classical fields are best described by second-order correlation functions, which for convenience can be written in terms of dyads $\overline{\overline{E}}({\bf r}_{1},t_{1};{\bf r}_{2},t_{2}) \equiv {\rm E} \left[ {\bf E} ({\bf r}_{1},t_{1}) {\bf E}({\bf r}_{2},t_{2}) \right]$, where ${\bf E}({\bf r}_{1},t)$ is the vector valued electric field at space-time point $({\bf r},t)$. Correlation functions can be written using tensor or matrix notation, but dyadic algebra \cite{Mor1} is common in electromagnetism, and elasticity, and it is particularly convenient for vector-valued correlations, not least because of its similarity with scalar expressions. In paraxial systems, energy is assumed to flow with respect to some optical axis, $z$, and the field vectors are taken to be transverse, and so two dimensional. Using generic linear systems theory, or by formulating detailed electromagnetic models, it can be shown that the recorded power can always be written in the form \begin{align} \label{B1} P(t) & = \int_{\cal D} {\rm d}^{3} {\bf r}_{1} \int_{\cal D} {\rm d}^{3} {\bf r}_{2} \int {\rm d} t_{1} \int {\rm d}t_{2} \, \overline{\overline{D}} (t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2}) \cdot \cdot \, \overline{\overline{E}} ({\bf r}_{1},t_{1};{\bf r}_{2},t_{2}), \end{align} where $\overline{\overline{D}} (t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2})$ is a dyadic field, called the {\em response tensor}, which characterises the energy-absorbing properties of the device. The spatial integrals are evaluated over the input reference surface of the device, whose outline defines some domain ${\cal D}$. $\overline{\overline{X}} \cdot \cdot \, \overline{\overline{Y}}$ denotes contraction of the dyads $\overline{\overline{X}}$ and $\overline{\overline{Y}}$ to a scalar, and corresponds to the `trace of the product' ${\rm Tr} \left[ X Y \right]$ when matrices are used to represent correlations between polarisations. $\overline{\overline{D}}$ and $ \overline{\overline{E}}$ can be transformed into the spatial ($k$) domain and/or the temporal frequency ($\omega$) domain, and the same functional form returns, suggesting that Equation (\ref{B1}) describes a basic physical process. For common detectors, $\overline{\overline{D}} (t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2})$ is time-shift invariant, and if the field is statistically stationary, $\overline{\overline{E}} ({\bf r}_{1},t_{1};{\bf r}_{2},t_{2})$ is also time shift invariant. 
The detected power is then time invariant, and (\ref{B1}) reduces to the spectral form \begin{align} \label{B2} P & = \int_{\cal D} {\rm d}^{3} {\bf r}_{1} \int_{\cal D} {\rm d}^{3} {\bf r}_{2} \int {\rm d} \omega \, \overline{\overline{D}} ({\bf r}_{1},{\bf r}_{2},\omega) \cdot \cdot \, \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{2},\omega). \end{align} Mathematically, Equation (\ref{B2}) describes the full contraction of two tensor fields to a real-valued quantity, and is the most obvious way of creating a scalar, the measured power, from the second-order correlation function of the partially coherent field. In the abstract vector space of tensor fields, Equation (\ref{B2}) describes the orthogonal projection of a tensor that describes the state of coherence of the field onto a tensor that describes the state of coherence to which the detector is maximally receptive. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 20cm 7cm 4cm, width=60mm]{Figure1.pdf} \caption{Natural modes of the field ${\bf U}_{n} ({\bf r})$, carrying power $\beta_{n}$, couple to the natural modes of the detector ${\bf R}_{m} ({\bf r})$, having responsivity $\alpha_{m}$, through the scattering parameters $S_{mn}$. The total power absorbed, and recorded, is given by a sum of these processes.} \label{figure1} \end{figure} Because $\overline{\overline{D}}$ and $ \overline{\overline{E}}$ are Hermitian, and square integrable, they can be diagonalised by the decompositions \begin{align} \label{B3} \overline{\overline{D}} ({\bf r}_{1},{\bf r}_{2},\omega) & = \sum_{m} \alpha_{m} {\bf R}_{m} ({\bf r}_{1}) {\bf R}_{m}^{\ast} ({\bf r}_{2}) \\ \label{B4} \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{2},\omega) & = \sum_{n} \beta_{n} {\bf U}_{n} ({\bf r}_{1}) {\bf U}_{n}^{\ast} ({\bf r}_{2}), \end{align} where all quantities on the right are frequency dependent. The basis functions ${\bf R}_{m} ({\bf r})$ form an orthogonal set over ${\cal D}$, and likewise for ${\bf U}_{n} ({\bf r})$. Substituting Equations (\ref{B3}) and (\ref{B4}) in Equation (\ref{B2}) gives \begin{align} \label{B5} P & = \int {\rm d} \omega \, \sum_{mn} \alpha_{m} (\omega) \beta_{n} (\omega) | S_{mn} (\omega) |^{2} \\ \nonumber S_{mn} (\omega) & = \int_{\cal D} {\rm d} {\bf r} \, {\bf R}_{m}^{\ast} ({\bf r}) \cdot {\bf U}_{n} ({\bf r}), \end{align} which is called the {\em coupled mode model} \cite{Wth1,Sak1}. In Equation (\ref{B4}), the partially coherent field $\overline{\overline{E}}$ is described by an incoherent superposition of fully coherent fields ${\bf U}_{n} ({\bf r})$, each of which carries power $\beta_{n}$. In Equation (\ref{B3}), the response function $\overline{\overline{D}}$ is described by a set of complex-valued reception patterns ${\bf R}_{m} ({\bf r})$, each of which has some responsivity $\alpha_{m}$. The reception patterns are the individual degrees of freedom through which the device can absorb energy. In the $k$ domain, they correspond to the complex-valued angular beam patterns of the individual modes of the device. They are determined by the shape and size of the device (the boundary conditions), and the spatial coherence length of the solid-state processes responsible for absorption, such as electron-phonon interactions or spin-wave damping. According to Equation (\ref{B5}), the detected power is given by scattering the natural modes of the field onto the natural modes of the detector through $S_{mn} (\omega)$: Figure \ref{figure1}. 
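The coupled-mode statement of Equation (\ref{B5}) is easy to verify numerically. The sketch below (Python, illustrative only) builds random Hermitian, positive-semidefinite matrices standing in for spatially sampled $\overline{\overline{D}}$ and $\overline{\overline{E}}$ at one frequency, and confirms that the direct contraction and the mode-by-mode sum $\sum_{mn}\alpha_{m}\beta_{n}|S_{mn}|^{2}$ agree; integration weights and polarisation structure are ignored for simplicity.
\begin{verbatim}
# Sketch: coupled-mode evaluation of Equation (B5) at a single frequency.
# The response and field correlation matrices are generated randomly as
# positive-semidefinite Hermitian matrices purely for illustration; in a
# real model they would come from sampling D and E on the detector surface.
import numpy as np

rng = np.random.default_rng(1)
n_pts = 40                                     # spatial sample points

def random_hermitian_psd(rank):
    a = rng.normal(size=(n_pts, rank)) + 1j * rng.normal(size=(n_pts, rank))
    return a @ a.conj().T                      # Hermitian, positive semidefinite

D = random_hermitian_psd(rank=3)               # detector response (3 modes)
E = random_hermitian_psd(rank=5)               # field correlation (5 modes)

# Direct contraction: P ~ Tr[D E]  (the sampled form of Equation (B2)).
P_direct = np.trace(D @ E).real

# Coupled-mode form: diagonalise both, then sum alpha_m beta_n |S_mn|^2.
alpha, R = np.linalg.eigh(D)                   # responsivities and modes R_m
beta,  U = np.linalg.eigh(E)                   # mode powers and modes U_n
S = R.conj().T @ U                             # overlaps S_mn (Equation (B5))
P_modes = np.einsum("m,n,mn->", alpha, beta, np.abs(S)**2).real

print(P_direct, P_modes)                       # the two values agree
\end{verbatim}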
The coupling is maximised when the field modes and detector modes couple in one-to-one correspondence. From a photon perspective, one might now be concerned about the appearance of additional partition-noise effects. Suppose that two different detectors, $a$ and $b$, are somewhere in an incoming optical beam: for example, two pixels in an imaging array. It can be shown \cite{Sak1}, using the Poisson mixture technique and Gaussian moment theorem, that the covariance $C[P_{a},P_{b}]$ of the outputs of the detectors due to fluctuations in the incident field is \begin{align} \label{B6} & C[P_{a},P_{b}] = \\ \nonumber & \frac{1}{\tau} \int {\rm d} \omega \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{1} \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{2} \int_{{\cal D}_{b}} {\rm d}^{3} {\bf r}_{3} \int_{{\cal D}_{b}} {\rm d}^{3} {\bf r}_{4} \, \overline{\overline{D}}_{a} ({\bf r}_{1},{\bf r}_{2},\omega) \cdot \overline{\overline{E}} ({\bf r}_{2},{\bf r}_{3},\omega) \cdot \cdot \, \overline{\overline{D}}_{b} ({\bf r}_{3},{\bf r}_{4},\omega) \cdot \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{4},\omega) \\ \nonumber & + \frac{\delta_{a,b}}{\tau} \int {\rm d} \omega \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{1} \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{2} \, \hbar \omega \overline{\overline{D}}_{a} ({\bf r}_{1},{\bf r}_{2},\omega) \cdot \cdot \, \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{2},\omega). \end{align} The first term characterises the classical fluctuations, and the second term the photon shot noise. $a=b$ gives the variance in the output of a single detector. When $a \neq b$, the Kronecker delta $\delta_{a,b}$ indicates that photon absorption in different detectors is not correlated. Equations (\ref{B2}) and (\ref{B6}) may seem complicated, but when $\overline{\overline{D}}$ and $ \overline{\overline{E}}$ are sampled spatially for numerical modelling they reduce to the trace of a product of matrices. They are valuable when choosing the sizes, spacings and layouts of pixels in an imaging array to optimise efficiency and information recovery. For example, the modal approach is well suited to understanding stray light and radiation noise in pixels that couple poorly to the high-transmission modes of the preceding optical system. In addition, the response function technique can be used to model the behaviour of complete instruments, rather than just the detectors, leading to many applications. \section{Power detection - a quantum perspective} \label{sec_pow_qua} Section \ref{sec_pow_det} describes a way of modelling power detectors, but if we adopt a quantum-mechanical approach, do we arrive at the same mathematical form? Consider an energy-absorbing system having certain physical properties described by the Hermitian operators $\hat{A}$ and $\hat{B}$, and a source described by a generalised force $\hat{F}$, which may be an electric field, magnetic field, vector potential or some other perturbing quantity such as a strain field: Figure~\ref{figure2}. If $\hat{H}_{\rm sys}$ and $\hat{H}_{\rm src}$ are the Hamiltonians of the system and source, the overall Hamiltonian is $\hat{H} = \hat{H}_{\rm sys} + \hat{H}_{\rm src} + \hat{H}_{\rm int} = \hat{H}_{0} + \hat{H}_{\rm int}$, where \begin{align} \label{C1} \hat{H}_{\rm int} (t) & = \kappa \int_{\cal V} {\rm d}^{3} {\bf r} \, \hat{A} ({\bf r},t) \cdot \hat{F} ({\bf r},t), \end{align} and $\kappa$ is a coupling parameter that keeps track of the order of the perturbation. The interaction Hamiltonian $\hat{H}_{\rm int} (t)$ means that the source influences the time evolution of the system, and vice versa. 
If the force is constant over the volume of the system, $\hat{F} ({\bf r},t) = \hat{F} (t)$, it can be taken outside of the integral; if the force is a scalar, $\hat{F} ({\bf r},t) = F ({\bf r},t)$, it merely scales the interaction energy. $\hat{F} ({\bf r},t)$ acts on the state space of the source, which in the case of electromagnetic radiation is the multi-mode Fock space of the field. The connected property of the system $\hat{A} ({\bf r},t)$ acts on the multi-particle state space of the system, such as the electrons in a conductor. If the source and system are not entangled prior to the force being applied, the initial composite state is the tensor product $| \psi (t_{0}) \rangle = | \psi_{\rm sys} (t_{0}) \rangle | \psi_{\rm src} (t_{0}) \rangle$. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 20cm 7cm 4cm, width=60mm]{Figure2.pdf} \caption{Generalised force $\hat{F} ({\bf r},t)$ acts on an energy-absorbing system having physical characteristics $\hat{A} ({\bf r},t)$ and $\hat{B} ({\bf r},t)$. The rate at which work is done on the system gives rise to a classical measure of instantaneous power $P(t)$.} \label{figure2} \end{figure} In the Schr\"{o}dinger Picture, the composite state evolves from $t_{0}$ to $t$ according to the time evolution operator, $| \psi(t) \rangle = \hat{U}(t,t_{0}) | \psi(t_{0}) \rangle$. In the Heisenberg Picture, time evolution is attached to the operators themselves, $\hat{A}^{\rm H}(t) = \hat{U}^{\dagger}(t,t_{0}) \hat{A}(t_{0}) \hat{U}(t,t_{0})$, leaving the states invariant $| \psi(t) \rangle = | \psi(t_{0}) \rangle$, and leading to the idea of measuring a quantity at some point in time. For perturbative influences, it is best to use the Interaction Picture, where part of the time evolution is attached to the states, and part to the Hermitian operators. Define a new time-shift operator $\hat{S}(t,t_{0}) = e^{+ i \hat{H}_{0} (t-t_{0}) / \hbar} \hat{U}(t,t_{0})$, where the part that would have happened anyway in the absence of the perturbation is removed from $\hat{U}(t,t_{0})$ through time reversal, $e^{+ i \hat{H}_{0} (t-t_{0}) / \hbar}$. The states then evolve according to the perturbation, $| \psi(t)\rangle^{\rm I} = \hat{S}(t,t_{0}) | \psi(t_{0}) \rangle$, and the operators according to the free evolution $\hat{A}^{\rm I}(t) = e^{+ i \hat{H}_{0} (t-t_{0}) / \hbar} \hat{A}(t_{0}) e^{- i \hat{H}_{0} (t-t_{0}) / \hbar}$. It can be shown that the time-shift operator, or scattering operator, is given by \begin{align} \label{C2} \hat{S}(t,t_{0}) = \stackrel{\leftarrow}{\cal T} \left[ \exp \left\{ \left( \frac{-i}{\hbar} \right) \int_{t_{0}}^{t} {\rm d}t' \hat{H}^{I}_{\rm int} (t') \right\} \right] \hspace{10mm} t \ge t_{0}, \end{align} where $\stackrel{\leftarrow}{\cal T}$ indicates that once the exponential is written as a power series, all operators should be arranged in order of increasing time, from right to left. Although Equation (\ref{C2}) comes from the iterated solution of a differential equation, it can be appreciated, with some care, that if time is discretised, evolution occurs through a sequence of exponential factors where the interaction energy is approximately constant during each step. First-order theory only uses the first two terms of Equation (\ref{C2}), \begin{align} \label{C3} \hat{S} (t,t_{0}) & \approx 1 - \frac{i}{\hbar} \int_{t_{0}}^{t} {\rm d}t' \, \hat{H}^{I}_{\rm int} (t'), \end{align} linearising $\hat{S} (t,t_{0})$ in $\kappa$. 
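To illustrate the accuracy of the linearised scattering operator, the sketch below (Python, with an invented two-level system and illustrative parameters, $\hbar = 1$) compares the time-ordered product of Equation (\ref{C2}), evaluated by brute-force time stepping, with the first-order form of Equation (\ref{C3}); halving $\kappa$ reduces the discrepancy by roughly a factor of four, as expected when the neglected terms are second order in $\kappa$.
\begin{verbatim}
# Sketch: accuracy of the first-order (linearised) scattering operator of
# Equation (C3) for a toy two-level system with H_int = kappa*cos(w_d t)*sigma_x.
# The model and parameter values are illustrative only (hbar = 1).
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
w0, w_d, T, steps = 1.0, 0.9, 10.0, 20_000
dt = T / steps
t = np.arange(steps) * dt + dt / 2            # midpoint times

def H_int_I(kappa, tt):
    """Interaction-picture H_int(t) = e^{+iH0 t} (kappa cos(w_d t) sx) e^{-iH0 t}."""
    U0 = np.diag(np.exp(-1j * 0.5 * w0 * np.array([1, -1]) * tt))
    return kappa * np.cos(w_d * tt) * (U0.conj().T @ sx @ U0)

for kappa in (0.02, 0.01):
    S_exact = np.eye(2, dtype=complex)
    integral = np.zeros((2, 2), dtype=complex)
    for tt in t:                               # time-ordered: later times to the left
        Hi = H_int_I(kappa, tt)
        S_exact = (np.eye(2) - 1j * Hi * dt) @ S_exact
        integral += Hi * dt
    S_first = np.eye(2) - 1j * integral        # Equation (C3)
    err = np.abs(S_exact - S_first).max()
    print(f"kappa = {kappa:5.3f} : max|S_exact - S_first| = {err:.2e}")
\end{verbatim}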
The higher-order terms, describing more complicated virtual processes, could be included if nonlinear behaviour is of interest. Suppose that some other physical quantity $\hat{B} ({\bf r},t)$ responds to the perturbation. Without additional assumptions, straightforward algebra shows that \begin{align} \nonumber \hat{B}^{H} ({\bf r},t) & = \hat{S}^{\dagger} (t,t_{0}) \hat{B}^{I} ({\bf r},t) \hat{S}(t,t_{0}) \\ \label{C4} & \approx \hat{B}^{I} ({\bf r},t) - \kappa \frac{i}{\hbar} \int_{t_{0}}^{t} {\rm d}t' \, \int_{\cal V} {\rm d}^{3} {\bf r}' \, \left[ \hat{B}^{I} ({\bf r},t), \hat{A}^{\rm I} ({\bf r}',t') \right] \cdot \hat{F}^{I} ({\bf r}',t') \\ \nonumber & \equiv \hat{B}_{0}^{\rm H} ({\bf r},t) + \kappa \Delta \hat{B}^{\rm H} ({\bf r},t). \end{align} The first term, $\hat{B}_{0}^{H} ({\bf r},t)$, describes the free evolution of the system, and the second term, $\Delta \hat{B}^{H} ({\bf r},t)$, describes the linearised change brought about by the perturbation. Being in the Heisenberg picture, the expectation value of $\hat{B}^{H} ({\bf r},t)$ is found with respect to the state of the system at $t_{0}$, which is the reference time for the phase factors, and precedes the time at which the perturbation turns on. Taking the expectation value of the right-hand side of Equation (\ref{C4}) gives \begin{align} \label{C5} \langle \Delta \hat{B}^{H} ({\bf r},t) \rangle_{t_{0}} & = \frac{-i}{\hbar} \int_{-\infty}^{+\infty} {\rm d}t' \, \theta(t-t') \int_{\cal V} {\rm d}^{3} {\bf r}' \, \langle \left[ \hat{B}^{I} ({\bf r},t), \hat{A}^{\rm I} ({\bf r}',t') \right] \rangle_{t_{0}} \cdot \langle \hat{F} ({\bf r}',t') \rangle_{t_{0}}, \end{align} which factorises because the source and system are not entangled at $t_{0}$. The upper limit on the integral has been changed by including the step function $\theta(t-t')$, which enforces causality, and the lower limit has been changed because the source is assumed to turn on after $t_{0}$. Equation (\ref{C5}) is called Kubo's formula \cite{Kub1}, and is the quantum equivalent of a classical response function. How should the expectation values be evaluated? The source and system are in definite quantum states at $t_{0}$, but because of the extremely large number of degrees of freedom involved (for example the numerous electrons in an absorbing film), we cannot hope to know, or wish to know, what they are! The expectation values are therefore calculated through $\langle \hat{X} \rangle = \sum_{n} P_{n} \langle n | \hat{X} | n \rangle = {\rm Tr} [ \hat{\rho} \hat{X} ]$ where $\hat{\rho} = \sum_{n} P_{n} | n \rangle \langle n |$, and $P_{n}$ is the probability that the state is one of a complete set of eigenstates, $| n \rangle$. The density operator $\hat{\rho}$ incorporates two uncertainties: (i) our lack of certainty about which state the configuration is in; (ii) nature's lack of certainty about which eigenstate the system will collapse into when a measurement is made. If the absorber is cooled by a refrigerator, the system's density operator is $\hat{\rho}_{\rm sys} \propto \exp [-\hat{H}_{\rm sys}/kT_{\rm sys}]$; and if the source is thermal radiation emitted by a warm load, $\hat{\rho}_{\rm src} \propto \exp[-\hat{H}_{\rm src}/kT_{\rm src}]$. By introducing these thermodynamic operators, an {\it open quantum system} has been created, hiding the quantum mechanics of the refrigerator and thermal source from view. 
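As a concrete, single-mode illustration of Equation (\ref{C5}), the sketch below (Python, with $\hbar = \omega_{0} = 1$ and a truncated Fock space, all choices being illustrative) evaluates the retarded response kernel $-(i/\hbar)\,\theta(\tau)\,{\rm Tr}\{\hat{\rho}_{\rm sys}[\hat{x}(\tau),\hat{x}(0)]\}$ for a harmonic-oscillator quadrature in a thermal state, and compares it with the known analytic form; because the commutator is a c-number, the result is independent of temperature.
\begin{verbatim}
# Sketch: Kubo's formula (Equation (C5)) evaluated numerically for a single
# harmonic-oscillator mode with A = B = x, in a thermal state.  Units with
# hbar = omega0 = 1 and a truncated Fock basis are used for illustration.
import numpy as np

dim, n_bar, omega0 = 60, 2.0, 1.0
n = np.arange(dim)

a = np.diag(np.sqrt(n[1:]), k=1)              # annihilation operator
x = (a + a.conj().T) / np.sqrt(2)             # dimensionless quadrature (A = B = x)

p_n = (n_bar / (1 + n_bar))**n
rho = np.diag(p_n / p_n.sum())                # thermal density operator rho_sys

def D_retarded(tau):
    """-(i/hbar) theta(tau) Tr[ rho [x(tau), x(0)] ], cf. Equations (C5) and (C7)."""
    if tau < 0:
        return 0.0
    U = np.diag(np.exp(-1j * (n + 0.5) * omega0 * tau))   # free evolution exp(-iH0 tau)
    x_tau = U.conj().T @ x @ U                             # interaction-picture x(tau)
    comm = x_tau @ x - x @ x_tau
    return (-1j * np.trace(rho @ comm)).real

for tau in (0.5, 1.0, 2.0, 4.0):
    print(f"tau = {tau:4.1f} : numerical = {D_retarded(tau):+7.4f}, "
          f"analytic -sin(omega0*tau) = {-np.sin(omega0 * tau):+7.4f}")
\end{verbatim}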
Using density operators, and elevating the response to vector-valued quantities, \begin{align} \label{C6} \langle \Delta \hat{B}^{H} ({\bf r},t) \rangle_{t_{0}} & = \int_{-\infty}^{+\infty} {\rm d}t' \, \int_{\cal V} {\rm d}^{3} {\bf r}' \, \overline{\overline{D}}_{\rm BA}({\bf r},t; {\bf r}';t') \cdot {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F} ({\bf r}',t') \right] \\ \label{C7} \overline{\overline{D}}_{BA}({\bf r}, t; {\bf r}',t') & = \frac{-i}{\hbar} \theta(t-t') {\rm Tr} \left[ \hat{\rho}_{\rm sys} (t_{0}) \left[ \hat{B}^{I} ({\bf r},t), \hat{A}^{\rm I} ({\bf r}',t') \right] \right]. \end{align} Equation (\ref{C7}) is the dyadic form of Kubo's formula \cite{Kub1}, which describes how macroscopic characteristics such as impedance, dielectric constant, and permeability emerge from the microscopic behaviour of the solid state system. The elements of $\overline{\overline{D}}(t,t')$ are called {\em retarded Green's functions}, and it can be shown that they describe rather beautifully how a system responds when an excitation is introduced. For example, the injection of an electron or hole at one space-time point may be correlated with the appearance of an electron or hole elsewhere. Crucially, there is no need to follow every degree of freedom, but simply to know how the system responds when excitations are introduced. Green's functions are used extensively for modelling the behaviour of materials, such as the bulk impedance of superconducting films \cite{Zub1} and proximity effects \cite{Bel1}. To calculate the behaviour of a power detector, it is necessary to know the instantaneous rate at which work is done on the system by the source, which is given by \begin{align} \label{C8} \hat{P} (t^{\prime \prime}) & = \int_{\cal V} {\rm d}^{3} {\bf r} \, \hat{F}( {\bf r}, t^{\prime \prime}) \frac{\rm d}{{\rm d}t^{\prime \prime}} \Delta \hat{A} ({\bf r},t^{\prime \prime}), \end{align} which should be compared with force times rate of change of displacement. Substituting $\Delta \hat{A} ({\bf r},t^{\prime \prime})$ and calculating the expectation value gives \begin{align} \label{C9} \langle \hat{P} (t^{\prime \prime}) \rangle & = \int_{\cal V} {\rm d}^{3} {\bf r} \int_{\cal V} {\rm d}^{3} {\bf r}' \int_{-\infty}^{+\infty} {\rm d}t' \, \, \left\{ \frac{- i}{\hbar} \frac{\rm d}{{\rm d}t^{\prime \prime}} \theta(t^{\prime \prime}-t') {\rm Tr} \left[ \hat{\rho}_{\rm sys}( t_{0}) \left[ \hat{A}^{I} ({\bf r},t^{\prime \prime}), \hat{A}^{I} ({\bf r}',t') \right] \right] \right\} \\ \nonumber & \times {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F}( {\bf r}, t^{\prime \prime}) \hat{F}( {\bf r}', t') \right]. \end{align} Equation (\ref{C9}) nearly has the form of Equation (\ref{B1}), but not quite: (i) Equation (\ref{C9}) describes energy flow into the system, but does not give the quantity that is recorded at the output. (ii) Equation (\ref{B1}) has surface integrals whereas Equation (\ref{C9}) has volume integrals, and so the system is a volumetric absorber, rather than having a reference surface. Strictly, the volume integrals need transforming into surface integrals, but if the source is uniform, the problem simplifies anyway. (iii) The source term corresponds to excitation in the absence of any scattering in the medium, and so ignores screening. If screening is included, the functional form of the coupled-mode model does not change, but the expression for the response tensor does; say in the case of multi-layered patterned detector arrays \cite{Wth2}. 
(iv) Equation (\ref{C9}) is based on the instantaneous work done, and so potentially includes energy flowing in and out of the detector in a reactive manner. All of these items can be dealt with in a straightforward way, returning the functional form of (\ref{B1}). For example, in the case of (i), a detector has some responsivity, which converts the instantaneous power into the recorded output, such as a voltage, and this conversion is band limited. Equation (\ref{C9}) can be convolved with some causal response function $g(t-t^{\prime \prime})$, which describes the conversion process, to give \begin{align} \label{C10} P(t) & = \int_{\cal V} {\rm d}^{3} {\bf r}_{1} \int_{\cal V} {\rm d}^{3} {\bf r}_{2} \int {\rm d} t_{1} \int {\rm d}t_{2} \, K(t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2}) \, F({\bf r}_{1},t_{1};{\bf r}_{2},t_{2}), \end{align} where \begin{align} \label{C11} K(t; {\bf r}_{1},t_{1} ; {\bf r}_{2},t_{2}) & = g(t-t_{1}) \frac{\rm d}{{\rm d}t_{1}} \left\{ \frac{- i}{\hbar} \theta(t_{1}-t_{2}) {\rm Tr} \left[ \hat{\rho}_{\rm sys} (t_{0}) \left[ \hat{A}^{I} ({\bf r}_{1},t_{1}), \hat{A}^{I} ({\bf r}_{2},t_{2}) \right] \right] \right\} \\ \label{C12} F({\bf r}_{1}, t_{1}; {\bf r}_{2},t_{2}) & = {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F}( {\bf r}_{1}, t_{1}) \hat{F}( {\bf r}_{2}, t_{2}) \right]. \end{align} Equation (\ref{C10}) is very similar to (\ref{B1}), but now the process responsible for energy absorption and the source fields are both described by quantum correlation functions, and so quantum properties are included. Creating an output signal by simply smoothing the expected value of the absorbed power is somewhat arbitrary. An information-theoretic approach is as follows. Suppose that $P(u|v)$ is the conditional probability that an observer records output $u$ when the object being measured is in eigenstate $| v \rangle$. If the object is actually in state $\hat{\rho}_{\rm int}$, then \begin{align} \nonumber P( u ) & = \int {\rm d} v \, P(u|v) \langle v | \hat{\rho}_{\rm int} | v \rangle \\ \label{C13} & = {\rm Tr} \left[ \hat{W} (u) \hat{\rho}_{\rm int} \right], \end{align} where \begin{align} \label{C14} \hat{W} (u) & = \int {\rm d} v \, P(u|v) | v \rangle \langle v |. \end{align} $\hat{W} (u)$ looks like a density operator, but is a {\em measurement operator}. Equation (\ref{C9}) can be cast in this way, where $| v \rangle$ is the state of the composite system, $| \psi(t) \rangle$, and $P(u|v)$ is closely related to $g(t-t^{\prime \prime})$. $\hat{W} (t)$ describes the act of acquiring classical information about the absorbing system, as it interacts quantum mechanically with the source. This shift of perspective is not merely pedantry; it is intimately related to the notion of back action, where the state of the object being probed changes as a consequence of information being accrued. In information-theoretic descriptions of quantum measurement, the quantum system being probed (the source) first interacts with some other quantum system (the device), which may itself be warm and in a mixed state, and which then provides a classical estimator of some aspect of the source's behaviour. According to von Neumann, when a measurement is made, the system collapses onto the eigenstate associated with the eigenvalue recorded. An identical measurement, immediately after the first, returns the same result with certainty. 
But this projective approach to state collapse does not seem to fit with the language of imperfectly constrained and continuous measurement, which is needed in the case of electrical circuits and sensing. Measurement operators such as $\hat{W} (t)$ allow a more nuanced approach. If the object being probed is initially in some mixed quantum state $\hat{\rho}_{\rm int}$, then after measurement it `collapses' into some new mixed, as distinct from pure, quantum state $\hat{\rho}_{\rm fin}$. Subsequent identical measurements allow additional information to be gathered; rather than simply repeating the same result as if there is no information left to be extracted. Each time $\hat{W} (t)$ is applied, information is acquired, and the entropy falls. Understanding the relationship between quantum information theory and sensor physics is an important part of progressing quantum technology. In the case of (iv), assume that the source is a single-mode, time-harmonic wave, \begin{equation} \label{C16} \hat{F}( {\bf r}, t) = f( {\bf r}) \hat{a} e^{-i \omega_{0} t} + f^{\ast}( {\bf r}) \hat{a}^{\dagger} e^{+i \omega_{0} t}, \end{equation} where $f( {\bf r})$ is some general factor. Then \begin{align} \nonumber & {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F}( {\bf r}_{1}, t_{1}) \hat{F}( {\bf r}_{2}, t_{2}) \right] \equiv \\ \label{C17} & f( {\bf r}_{1}) f( {\bf r}_{2}) e^{-i \omega_{0} (t_{1}+t_{2})} \langle \hat{a} \hat{a} \rangle + f^{\ast}( {\bf r}_{1}) f^{\ast}( {\bf r}_{2}) e^{+i \omega_{0} (t_{1}+t_{2})} \langle \hat{a}^{\dagger} \hat{a}^{\dagger} \rangle \\ \nonumber & + f( {\bf r}_{1}) f^{\ast}( {\bf r}_{2}) e^{-i \omega_{0} (t_{1}-t_{2})} \langle \hat{a} \hat{a}^{\dagger} \rangle + f^{\ast}( {\bf r}_{1}) f( {\bf r}_{2}) e^{+i \omega_{0} (t_{1}-t_{2})} \langle \hat{a}^{\dagger} \hat{a} \rangle , \end{align} The first two terms, in $t_{1}+t_{2}$, are fast as $t_{1}$ and $t_{2}$ increase, and are removed by the output smoothing; the last two terms, in $t_{1}-t_{2}$, are slow, and result in the recorded power. If the source is in a coherent quantum state, having complex amplitude $a$, then $\langle \hat{a}^{\dagger} \hat{a} \rangle \rightarrow a^{\ast} a$ and $\langle \hat{a} \hat{a}^{\dagger} \rangle \rightarrow a^{\ast} a + 1$, and for a high-occupancy state the source correlation function becomes a fully coherent classical correlation function. Therefore, the response function $K(t;{\bf r}_{1}, t_{1};{\bf r}_{2},t_{2})$ characterises the response to both classical and quantum sources; and its general properties can be imported from classical considerations, or by knowing how the device responds to quantum excitations. Equation (\ref{B1}) is defined in terms of {\em average power absorbed}, but Equation (\ref{C10}) is based on instantaneous {\em work done}, which potentially includes energy flowing in and out of the device, say a thin-film absorber, in a reactive manner. Because the response function is time-shift invariant, it can be Fourier transformed in $t_{1} - t_{2}$: $K({\bf r}_{1}; {\bf r}_{2}; t_{1} - t_{2}) \mapsto K({\bf r}_{1}; {\bf r}_{2}; \omega)$. Describing (\ref{C10}) in the Fourier domain, and using the classical-coherent limit of Equation (\ref{C17}), it can be shown that $P(t) \propto \left[ 1 + \cos (2 \omega_{0} t) \right] \cos (\theta) + \sin(2 \omega_{0} t) \sin(\theta)$, where $\theta$ is the phase of $K({\bf r}_{1}; {\bf r}_{2}; \omega)$. 
For $-\pi/2 \le \theta \le + \pi/2$ the first term, which is proportional to $\cos(\theta)$, the {\em power factor}, is always positive and describes time-varying power dissipated in the detector: the power that flows into and stays in the detector varies in time (a principle closely related to {\em homodyne} detection); the factor $\left[ 1 + \cos (2 \omega_{0} t) \right]$ has a time-averaged value of unity (a principle exploited in power detectors). The second term, which is proportional to $\sin(\theta)$, has a time-averaged value of zero, and describes energy sloshing in and out of the detector. Thus the real part of the response function, $\Re \left[ K({\bf r}; {\bf r}'; \omega) \right]$, characterises dissipation, and is a manifestation of Fermi's Golden Rule, and the imaginary part, $ \Im \left[ K({\bf r}; {\bf r}'; \omega) \right]$, energy storage. These processes happen at the input regardless of the smoothing action of the output filter, which ensures that only the time-averaged part of the dissipated power contributes to the recorded output. One word of warning is that in physics and engineering, the roles of the real and imaginary parts of $K({\bf r}_{1}; {\bf r}_{2}; \omega)$ are swapped. In engineering, it is the real part of an impedance that describes loss, $R + j \omega L$, but in physics, it is the imaginary part of Kubo's susceptibility that describes loss, and radiates noise as described by the {\em fluctuation-dissipation theorem} \cite{Flc1}. This difference can be traced to Equation (\ref{C8}), where physicists use the dynamical variables $\hat{F}( {\bf r}, t)$ and $\Delta \hat{A} ({\bf r},t)$, whereas engineers use $\hat{F}( {\bf r}, t)$ and ${ \rm d} \Delta \hat{A} ({\bf r},t) / {\rm d} t$, and so is a matter of convention only. Finally, we note that for vector-valued fields, and adopting the engineering convention, it is the Hermitian part of the response tensor that corresponds to dissipated energy, and the anti-Hermitian part that corresponds to stored energy. Thus $\overline{\overline{D}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ is the Hermitian part of $\overline{\overline{K}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ when Equation (\ref{C10}) is upgraded to its full tensor form, and used in Equation (\ref{B1}). The coupled mode model, Section \ref{sec_mul_det}, is based on the fact that the response function and correlation function in Equation (\ref{B1}) are both Hermitian and so can be diagonalised. In the quantised case, the response function is the same as the classical case, and so it can be diagonalised as before, giving the modes of the detector. The field correlation function, however, is not Hermitian, Equation (\ref{C17}): the positive and negative frequency parts have different physical interpretations. The positive frequency part describes photon absorption, whereas the negative frequency part describes photon emission, including spontaneous emission. This difference leads to characteristic features of thermal fields. For a more complete description of the quantised case, the response function and quantum correlation function should be split into their Hermitian and anti-Hermitian parts. The behaviour becomes more involved, but the overall scheme still describes the way in which the degrees of freedom present in the source couple to the degrees of freedom in the system available for absorbing energy. 
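The time-averaging argument above is easily checked numerically. The sketch below (Python, with arbitrary normalisation and drive frequency) averages the quoted form of $P(t)$ over many drive periods for several response-function phases, confirming that only the dissipative $\cos(\theta)$ part survives.
\begin{verbatim}
# Sketch: time-averaging the quoted form of P(t) for a few response-function
# phases theta, showing that only the cos(theta) (dissipative) part survives.
# The drive frequency is arbitrary; only averaging over many periods matters.
import numpy as np

omega0 = 2 * np.pi * 1.0                  # normalised drive frequency
t = np.linspace(0, 50, 20_001)            # many periods, fine sampling

for theta in (0.0, np.pi / 6, np.pi / 3, np.pi / 2):
    P_t = (1 + np.cos(2 * omega0 * t)) * np.cos(theta) \
          + np.sin(2 * omega0 * t) * np.sin(theta)
    print(f"theta = {theta:5.3f} rad : <P(t)> = {P_t.mean():+6.3f}, "
          f"cos(theta) = {np.cos(theta):+6.3f}")
\end{verbatim}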
One effect is that a detector can radiate energy back into the source, and this radiated energy, which is fluctuating, can act as a noise source because it causes the energy stored in the device to vary: fluctuating power travels in both directions. Normally, noise is considered to be something that comes in from outside! If the device is at a higher temperature than the source, including internal heating such as hot-electron effects, the radiation noise can be greater than the noise associated with the source itself. Once again it is clear that the source and detector must be considered parts of a collected whole if all aspects of behaviour are to be understood. The fact that $\overline{\overline{D}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ characterises power absorption when classical sources are used is of considerable importance, because it shows that the complex-valued reception patterns, or antenna patterns, ${\bf R}_{m} ({\bf r}_{1})$ and the associated responsivities $\alpha_{m}$ can be determined interferometrically through power measurements alone. The technique is called Energy Absorption Interferometry (EAI) \cite{EAI1} and is closely related to aperture synthesis astronomy, but now the device under test is radiated with two phase locked coherent sources, rather than measuring the angular correlations emitted by thermal sources. Moreover, $\overline{\overline{D}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ connects principles of reciprocity, where the dynamical degrees of freedom responsible for absorbing energy are the same as the degrees of freedom responsible for imposing near-field and far-field correlations on the thermally radiated fields \cite{Rec1}. \section{Linear amplifiers - a classical perspective} \label{sec_amp_cla} Many aspects of sensing relate to measuring voltages and currents, or at least to amplifying weak signals to a level where classical signal processing can be performed. Rather than analysing complex circuits, it is common to place standard configurations in black boxes, to drive the ports with a set of independent variables (voltage and/or current) and to observe the response through a set of dependent variables. This approach focuses on those degrees of freedom accessible from the outside, and gives rise to small-signal impedance, admittance and hybrid circuit parameters. The multiport network approach leads to many general theorems, and can answer questions such as `is this device capable of producing gain' and if so `what embedding network is needed to achieve it'. In addition, voltage and current sources can be added to the ports to represent internally generated noise. There must be one noise source for every dependent variable, and every pair of noise sources has a complex correlation coefficient. The noise sources can be referenced to other ports for convenience. Additionally, external circuit elements may also produce noise. In the case of amplifiers, a two-port network is sufficient. The noise sources are usually referenced to the input port, and take the form of a parallel current source and series voltage source \cite{Two1}. The correlation coefficient between them can be represented by introducing a fictitious noise impedance. Given an amplifying device, in a black box, one usually wishes to achieve three things: (i) Maximise the power gain by ensuring that the signal-source impedance $Z_{\rm s, pow}$ is conjugately matched to the impedance seen at the input of the loaded device. 
(ii) Choose a source impedance that causes the currents induced in the output load by the two noise sources to interfere, cancelling to the highest possible degree, as this minimises the recorded noise. There is some particular signal-source impedance $Z_{\rm s, nse}$ that minimises an amplifier's overall noise: a process called {\em noise matching}. (iii) Ensure that these optimisations do not result in the active device oscillating. Usually $Z_{\rm s, pow} \ne Z_{\rm s, nse}$ and ingenious schemes are needed to align the impedances, or simply to select the best compromise. The matter of whether it is best to have high gain or low noise is quantified through the concept of {\em noise measure}. At high frequencies, when the wavelength is smaller than the dimensions of the circuit, it is beneficial to use a travelling wave representation. The ports of the black box are loaded with transmission lines having real characteristic impedance $Z_{0}$. The independent variables are the amplitudes $a(t)$ of the waves incident on the ports; and the dependent variables are the amplitudes $b(t)$ of the waves travelling away from the ports \cite{Kur1}. The relationships, in the frequency domain, between the voltage $v(\omega)$ and current $i(\omega)$ at some reference plane and the complex wave amplitudes are \begin{align} \label{D1} a(\omega) & = \frac{1}{2 \sqrt{Z_{0}}} \left[ v(\omega) + i(\omega) Z_{0} \right] \\ \nonumber b(\omega) & = \frac{1}{2 \sqrt{Z_{0}}} \left[ v(\omega) - i(\omega) Z_{0} \right]. \end{align} Generally, $a(\omega)$ and $b(\omega)$ are stochastic quantities, which must be averaged over an ensemble. The average power spectral density incident on a port is $S_{\rm a} = E \left[ a a^{\ast} \right]$. Internal noise is represented by allowing noise waves to travel away from the ports even in the absence of external excitation. As in the discrete case, the travelling wave amplitudes may be correlated, and so a correlation matrix is needed whose diagonal elements are spectral powers. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 14cm 7cm 6cm, width=70mm]{Figure3.pdf} \caption{Scattering parameter representations: (a) One-port network with a signal source $a_{s}$. (b) Two-port network with internal noise sources $n_{1}$ and $n_{2}$.} \label{figure3} \end{figure} A multiport device is then represented by a signal flow graph. Figure \ref{figure3} shows signal flow graphs of (a) a one-port network with source, and (b) a two-port network with internal noise sources. Figure \ref{figure4} (a) shows the signal flow graph of a two-port network with the internal noise sources {\em referenced} to the input. Often, two-port networks are cascaded, and it is necessary to know the complex amplitude of the wave travelling away from the output in terms of the wave incident on the input. The existence of loops in connected networks, creating internal resonances, makes the analysis of signal flow graphs awkward. In the 1950s Mason introduced a {\em non-touching loop rule} \cite{Msn1} that allows expressions for interdependencies to be derived (Transducer Gain, Available Gain, Maximum Stable Gain, reflection coefficients, etc.). Consider the signal flow graph shown in Figure \ref{figure4} (a). The amplifying device is represented by a two-port network, with its internal noise sources referenced to the input. The sources comprise a noise wave effectively incident on the input $a_{n1}$, a noise wave travelling away from the input $b_{n1}$, and a complex correlation coefficient $\Gamma$ between them. 
As discussed in Section \ref{sec_rad_nse}, a perfectly matched passive warm termination radiates power into a single-mode transmission line. Therefore, it is possible to describe power spectral densities in terms of equivalent temperatures: $T_{\rm a} = E \left[ a a^{\ast} \right] / k$ and $T_{\rm b} = E \left[ b b^{\ast} \right] / k$, where it is conventional to use the Rayleigh-Jeans limit in the definition, and it is only a definition. Also, the complex correlation coefficient between the wave amplitudes, $\Gamma$, can be written as a complex-valued `temperature' $T_{\rm c} = \Gamma \sqrt{ T_{\rm a} T_{\rm b}}$. If the device is connected to noiseless terminations having impedance $Z_{0}$, the noise power effectively incident on the device $E \left[ a a^{\ast} \right]$ accounts for all of the noise appearing at the output, and the noise temperature becomes $T_{\rm n} = T_{\rm a}$. However, a noise wave also travels away from the input, having noise temperature $T_{\rm b}$, and there is no reason why this should not be significantly larger than $T_{\rm a}$. Commercial suppliers report the noise temperature $T_{\rm a}$, but do not report $T_{\rm b}$, and so care is needed because noise power is fed back into the source. If the supplier has done a good job of minimising the external effects of the internal sources, $T_{\rm c} = 0$; otherwise you could find a source impedance that reduces the noise temperature below that claimed by the manufacturer! If an amplifier is connected to a source having a non-zero reflection coefficient $\Gamma_{\rm src}$, referenced to $Z_{0}$, Figure \ref{figure4} (b), the outward travelling wave is partially reflected back in, and if $b$ is correlated with $a$, the noise wave travelling away from the output can, because of constructive interference, be significantly enhanced. The noise temperature is given by \begin{align} \label{D2} T_{\rm n} = T_{\rm a} + |\Gamma_{\rm src}|^{2} T_{\rm b} + 2 T_{\rm c} |\Gamma_{\rm src}| \cos(\phi_{\rm c} + \phi_{\rm src}), \end{align} where $\phi_{\rm c}$ is the phase of $T_{\rm c}$. If the phase of the source reflection coefficient, $\phi_{\rm src}$, changes rapidly with frequency, say due to a long interconnecting cable, the noise temperature can vary widely and rapidly, with a peak-to-peak variation $T_{\rm pk} = 4 T_{\rm c} | \Gamma_{\rm src} |$. Equation (\ref{D2}) does not include the input reflection coefficient of the amplifier, and so this result holds regardless of whether the input of the amplifier is matched or not. Commercial amplifiers, as distinct from transistors, are noise matched internally, meaning that $T_{\rm c}=0$ at band centre. It can also be shown that when the source reflection coefficient $\Gamma_{\rm src}$ and input reflection coefficient of the amplifier $\Gamma_{\rm amp}$ are non-zero, a resonant noise wave exists on the input transmission line, scaling as $\propto 1/|1-\Gamma_{\rm src} \Gamma_{\rm amp}|^{2}$, which can be large when $\Gamma_{\rm src} = \Gamma_{\rm amp}^{\ast}$. Here $\Gamma_{\rm amp} = S_{11}$. Another hazard relates to reverse gain. Most amplifiers have a large forward gain $|S_{21}|^{2}$ and a tiny reverse gain $|S_{12}|^{2}$, but some ultra-low-noise amplifiers, such as certain parametric amplifiers, can have $|S_{12}|^{2} \approx 1$, and even gain in the reverse direction. Noise travelling backwards from a second stage can then reach the input of the first stage, and increase the overall noise temperature. 
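The practical consequence of Equation (\ref{D2}) is easily visualised. The sketch below (Python) sweeps the phase of the source reflection coefficient for a set of invented noise parameters, and confirms that the noise temperature ripples with a peak-to-peak excursion of $4 T_{\rm c} |\Gamma_{\rm src}|$.
\begin{verbatim}
# Sketch: evaluation of Equation (D2) as the phase of the source reflection
# coefficient is swept.  The noise temperatures and reflection magnitude are
# illustrative values, not data for any particular amplifier.
import numpy as np

T_a, T_b, T_c, phi_c = 2.0, 6.0, 1.0, 0.0      # [K], assumed
gamma_src = 0.2                                # |Gamma_src|, assumed

phi_src = np.linspace(0, 2 * np.pi, 1001)
T_n = T_a + gamma_src**2 * T_b \
      + 2 * T_c * gamma_src * np.cos(phi_c + phi_src)

print("min T_n          :", T_n.min())
print("max T_n          :", T_n.max())
print("peak-to-peak     :", T_n.max() - T_n.min())
print("4 * T_c * |Gamma|:", 4 * T_c * gamma_src)
\end{verbatim}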
To make sure that the system noise temperature is insensitive to the source reflection coefficient, it is prudent to place a cooled circulator in front of the first amplifier. The noise-isolating role of the circulator has nothing to do with matching the input for maximum power gain, and because circulators are lossy and cumbersome, they are usually considered to be a technological nuisance. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 12cm 6cm 6cm, width=70mm]{Figure4.pdf} \caption{(a) Two-port scattering parameters with the internal noise sources referenced to the input. (b) Complete amplifier with signal source reflection coefficient $\Gamma_{\rm src}$.} \label{figure4} \end{figure} Before closing this section, it is beneficial to reconsider flow graphs. In the technical literature, analysis is carried out using Mason's rule, and for cascaded networks only having sources at the input and output ports this is sufficient. In the case of complicated networks, having internal noise sources, it is algebraically tedious to trace the effect of every source to the external ports. To make matters worse, the internal sources may be correlated and only the correlation matrix of the internal sources is known. Then it is only possible to calculate, even in principle, the correlation matrix of the noise waves appearing at the external ports. In the quantum case, one only has correlation functions of the kind $\langle \hat{b}_{{\rm s},i} \hat{b}_{{\rm s},j} \rangle$, and so a direct mapping from the source correlation functions to output correlation functions is needed. To this end, the {\em connection matrix method}, which is built on the rich mathematical topic of {\em directed flow graphs}, is valuable. For any general network having $M$ nodes, and with $N$ sources entering the nodes, collect the travelling wave amplitudes at the nodes into column vector ${\mathsf d}$, and the travelling wave amplitudes of the sources entering the nodes into column vector ${\mathsf n}$, then \begin{align} \label{D3} {\mathsf d} & = {\mathsf C} {\mathsf d} + {\mathsf n} \\ \nonumber & = \left[ {\mathsf I} - {\mathsf C} \right]^{-1} {\mathsf n} \\ \nonumber & = {\mathsf K} {\mathsf n}, \end{align} where the $i,j$th entry in ${\mathsf C}$ is the complex scattering parameter that connects node $j$ to node $i$. $N$ may be smaller than $M$, resulting in ${\mathsf n}$ having some zero entries. ${\mathsf I}-{\mathsf C}$ is sparse because only a small number of nodes are connected directly. Only the correlation matrix of the sources is known, ${\mathsf N}_{\rm s}$, and so it is only possible to calculate the correlation matrix of the resulting travelling waves, ${\mathsf N}_{\rm c}$. Equation (\ref{D3}) gives \begin{align} \label{D4} {\mathsf N}_{\rm c} & = {\mathsf K} {\mathsf N}_{\rm s} {\mathsf K}^{\dagger}, \end{align} from which the correlation matrix of the waves of interest can be extracted. To carry out calculations when quantum noise is present, it is necessary to be careful about vacuum-state noise, because vacuum noise enters through even seemingly unused ports. The bosonic commutation relationships between the travelling wave amplitudes on the port transmission lines must be maintained. 
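The connection-matrix bookkeeping of Equations (\ref{D3}) and (\ref{D4}) is compact enough to show in a few lines. The sketch below (Python) applies it to a small, entirely fictitious four-node flow graph with partially correlated sources; the matrices are invented for illustration and are not meant to represent any particular amplifier chain.
\begin{verbatim}
# Sketch: the connection-matrix method of Equations (D3) and (D4) applied to
# a toy directed flow graph.  The connection matrix C and source correlation
# matrix N_s below are made up for illustration; in practice they would be
# assembled from the scattering parameters and noise-wave correlations of
# the actual network.
import numpy as np

# Four nodes; node j feeds node i through C[i, j].
C = np.array([[0.0,  0.0,  0.0, 0.3],
              [0.9,  0.0,  0.0, 0.0],
              [0.0,  3.0,  0.0, 0.0],    # a gain element between nodes 2 and 3
              [0.0,  0.0,  0.1, 0.0]], dtype=complex)

# Correlated noise-wave sources entering the nodes (Hermitian, PSD).
N_s = np.array([[1.0, 0.2, 0.0, 0.0],
                [0.2, 0.5, 0.0, 0.0],
                [0.0, 0.0, 0.0, 0.0],    # no source enters node 3
                [0.0, 0.0, 0.0, 0.3]], dtype=complex)

K = np.linalg.inv(np.eye(4) - C)         # d = K n         (Equation (D3))
N_c = K @ N_s @ K.conj().T               # N_c = K N_s K+  (Equation (D4))

print("wave correlation matrix at the nodes:")
print(np.round(N_c.real, 3))
\end{verbatim}
The same machinery carries over to the quantum case, provided the vacuum-noise contributions mentioned above are included in ${\mathsf N}_{\rm s}$ so that the bosonic commutation relations are preserved.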
\section{Linear amplifiers - a quantum perspective} \label{sec_amp_qua} \subsection{Quantum equivalent circuits} \label{sec_eqv_qua} At low frequencies simple electrical circuits are modelled using discrete components, which are then quantised through Lagrangian methods \cite{Lng1}, but in the case of complicated circuits, one is left searching for the Lagrangian that gives the correct answer! Consider an $L$-$C$ resonator made of discrete components. It can be shown that a perfect classical voltage or current source places the resonator in a coherent state $|a\rangle$. The operator $\hat{a}$ is not, however, Hermitian, and the complex amplitude $a$ is not directly measurable: it has two degrees of freedom, amplitude and phase. Instead, we are left with three possible real-valued measurements: the energy in the mode, the voltage across the resonator, and the current through the resonator. The voltage and current operators are \begin{align} \label{E1} \hat{v} (t) = i \left( \frac{\hbar \omega_{0}}{2 C} \right)^{1/2} \left( \hat{a}^{\dagger} e^{i \omega_{0} t} - \hat{a} e^{-i \omega_{0} t} \right) \\ \nonumber \hat{i} (t) = \left( \frac{\hbar \omega_{0}}{2 L} \right)^{1/2} \left( \hat{a}^{\dagger} e^{i \omega_{0} t} + \hat{a} e^{-i \omega_{0} t} \right). \end{align} If the resonator is in a coherent state, the number of excitations $n \equiv |a|^{2}$ is Poisson distributed. If one of the quadrature components, either $v$ or $i$, is measured repeatedly, with the system in the same coherent state before every measurement is made, the distributions show $(\Delta v)^{2} = \hbar \omega_{0} / 2 C$ and $(\Delta i)^{2} = \hbar \omega_{0} / 2 L$; these uncertainties do not depend on occupancy, and correspond to minimum uncertainty states. This behaviour is illustrated in Figure \ref{figure5}. Now, however, quantum mechanics throws up an issue because $[\hat{i},\hat{v}] = i \hbar \omega_{0}^{2}$, and so according to the generalised uncertainty relationship $ \Delta A \, \Delta B \ge \left| \langle i \left[ \hat{A},\hat{B} \right] \rangle \right| / 2$, it follows that $\Delta v \Delta i \ge \hbar \omega_{0}^{2} /2$. If the voltage is measured with an accuracy greater than the intrinsic uncertainty, $\sqrt{\hbar \omega_{0} / 2 C}$, then a subsequent measurement of current without re-establishing the state gives a variation that is considerably larger than $\sqrt{\hbar \omega_{0} / 2 L}$, and vice versa. It seems that immediate sequential measurements of voltage and current must be constrained by Heisenberg's uncertainty principle, in the same way that measurements of position and momentum are constrained in a freely oscillating mechanical resonator. A measurement of voltage or current leads to a {\em backaction} that changes the distribution that must be used on subsequent measurements. This general reasoning neglects the role of dissipation, which leads to a finite quality factor $Q$, and introduces a time scale over which excitations are lost to the heat bath. In some cases with carefully chosen apparatus, it is possible to extract information without causing the state to change: a technique called {\em Quantum Non-Demolition} measurement. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 19cm 7cm 4cm, width=70mm]{Figure5.pdf} \caption{Annihilation operator as a phasor in the complex plane. The real and imaginary parts are the current and voltage respectively. 
The red circle shows the variance on measurements.} \label{figure5} \end{figure} If a resonator is weakly coupled to a heat bath having temperature $T_{\rm p}$, expectation values can be calculated using the density operator $\hat{\rho}_{\rm thm}$, giving \begin{align} \label{E2} (\Delta v)^{2} & = \frac{\hbar \omega_{0}}{2C} \left[ 2 \langle n \rangle + 1 \right] = \frac{\hbar \omega_{0}}{2C} \coth (\hbar \omega_{0} / 2kT_{\rm p}) \\ \nonumber (\Delta i)^{2} & = \frac{\hbar \omega_{0}}{2L} \left[ 2 \langle n \rangle + 1 \right] = \frac{\hbar \omega_{0}}{2L} \coth (\hbar \omega_{0} / 2kT_{\rm p}). \end{align} Weak coupling implies a low-value series resistor or a high-value parallel resistor, but Equation (\ref{E2}) has been derived without analysing the behaviour of an R-L-C circuit---only the density operator was used. In classical circuit analysis, a spectral noise voltage source, $(\Delta v)^{2} = 4 k T_{\rm p} R$, must be included in series with any resistor; in the quantum case, this expression changes to $(\Delta v)^{2} = 2 \hbar \omega_{0} R \coth (\hbar \omega_{0} / 2kT_{\rm p})$. When circuit analysis is carried out, these spectral quantities are, effectively, multiplied by a bandwidth to give the actual variance in voltage. For a single-pole resonator, which has a Lorentzian frequency response, the bandwidth is $ \pi f_{0} / 2Q = \pi f_{0} \omega_{0} L / 2 R$, and Equation (\ref{E2}) is recovered. In the limit $\hbar \omega_{0} / k T_{\rm p} \rightarrow 0$, the classical expression $(\Delta v)^{2} \rightarrow 4kT_{\rm p}R$ is found. The transition from classical to quantum behaviour occurs when $\hbar \omega_{0} \approx kT_{\rm p}$, which for a 5 GHz resonator happens at $T_{\rm p}=$ 240 mK. Dilution refrigerator technology routinely achieves 10 mK, showing that even RF circuits can be operated in the quantum regime, motivating the need for quantum circuit theory at low temperatures. The density operator $\hat{\rho}_{\rm thm}$ is based on thermodynamic considerations, and so tacitly assumes that the mechanism responsible for dissipation behaves as a weakly coupled heat bath having a large number of degrees of freedom. The overall quantum system is no longer closed, because no attempt is being made to track the behaviour of every degree of freedom, such as those of the electron-phonon system in a resistor. Strictly, the density operator no longer obeys the von Neumann equation, and must be described by the more complicated dynamics of the {\em Lindblad Master Equation}, which, unlike Schr\"{o}dinger's equation, describes the time evolution of mixed quantum states. In the case of high-frequency circuits, a transmission-line representation with scattering parameters is usually best. Transmission lines are easily quantised, but with a note of caution. The voltage and current at every point are in phase, and related through the real-valued characteristic impedance $Z_{0}$. The voltage and current commute, and so are not restricted by Heisenberg's uncertainty relationship. In fact, if the instantaneous voltage is measured, the current is already known through $ V / Z_{0}$. In the case of transmission lines, voltage and current are compatible observables, whereas the voltage and its time derivative, or the current and its time derivative, the {\em quadrature components}, are not. When combining quantised transmission line theory with scattering parameter representations, there is an unfortunate clash of notation. 
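The Python sketch below evaluates Equation (\ref{E2}) for an illustrative 5 GHz resonator (the capacitance is an assumption), showing the zero-point value, the 240 mK crossover, and the classical equipartition limit $k T_{\rm p}/C$ of the variance.
\begin{verbatim}
import numpy as np

# Sketch of Equation (E2): voltage variance of an L-C resonator weakly coupled
# to a bath at temperature T_p.  Component values are illustrative assumptions.
hbar = 1.054571817e-34   # J s
k    = 1.380649e-23      # J/K

f0 = 5e9                 # resonant frequency (Hz)
w0 = 2.0 * np.pi * f0
C  = 1e-12               # capacitance (F), assumed
L  = 1.0 / (w0**2 * C)   # inductance fixed by the resonant frequency

def dv2(T_p):
    """Voltage variance (V^2) from Equation (E2)."""
    return (hbar * w0 / (2.0 * C)) / np.tanh(hbar * w0 / (2.0 * k * T_p))

print("crossover  hbar*w0/k         = %.1f mK" % (1e3 * hbar * w0 / k))
print("zero-point (T_p -> 0)        = %.3e V^2" % (hbar * w0 / (2.0 * C)))
print("dv2 at 10 mK                 = %.3e V^2" % dv2(0.010))
print("dv2 at 4 K                   = %.3e V^2" % dv2(4.0))
print("equipartition k*T_p/C at 4 K = %.3e V^2" % (k * 4.0 / C))
\end{verbatim}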
In classical scattering parameter theory, $a(\omega) = v^{+} (\omega) / \sqrt{Z_{0}}$ and $b (\omega) = v^{-} (\omega) / \sqrt{Z_{0}}$ are the normalised complex amplitudes of the counter-propagating waves $v^{+} (\omega)$ and $v^{-} (\omega)$, such that $|a(\omega)|^{2}$ and $|b(\omega)|^{2}$ are power spectral densities. But each wave has a creation and annihilation operator, and so we introduce the operator pairs $(\hat{a}(\omega),\hat{a}^{\dagger}(\omega))$ and $( \hat{b}(\omega),\hat{b}^{\dagger}(\omega))$ for the forward and backward travelling waves respectively. After quantisation, Equation (\ref{D1}) becomes \begin{align} \label{E3} \frac{\hat{v}^{+}(\omega)}{\sqrt{Z_{0}}} & = \frac{1}{2 \sqrt{Z_{0}}} \left[ \hat{v}(\omega) + \hat{i}(\omega) Z_{0} \right]= \left[ \frac{\hbar \omega}{2} \right]^{1/2} \hat{a} (\omega) \\ \nonumber \frac{\hat{v}^{-}(\omega)}{\sqrt{Z_{0}}} & = \frac{1}{2 \sqrt{Z_{0}}} \left[ \hat{v}(\omega) - \hat{i}(\omega) Z_{0} \right] = \left[ \frac{\hbar \omega}{2} \right]^{1/2} \hat{b} (\omega), \end{align} where $\hat{v}(\omega)$ and $\hat{i}(\omega)$ are the voltage and current at a plane, say the port of a network. The average one-sided power spectral density flowing in the forward direction is given by the symmetrised form $\hat{s}^{+} (\omega) = ( \hbar \omega / 2 ) \left( \hat{a} (\omega) \hat{a}^{\dagger} (\omega) + \hat{a}^{\dagger} (\omega)\hat{a} (\omega) \right) = ( \hbar \omega / 2 ) \left\{ \hat{a} (\omega) , \hat{a}^{\dagger} (\omega) \right\}$, and similarly for the reverse direction. In a travelling-wave representation, the annihilation operators of the outgoing waves depend linearly on the annihilation operators of the incoming waves, with the constants of proportionality being the scattering parameters. This view is plausible because if the incoming waves are in high-occupancy coherent states, the outputs must correspond to those of a classically driven system. For a multiport network, the input operators act on the tensor product of the `input states': $| p_{1} \rangle \cdots | p_{m} \rangle \cdots | p_{M} \rangle$. The output operators therefore act on the same state space: the outcomes of measurements on the outgoing waves are described in terms of the states of the incoming waves. The scattering parameters are essentially complex probability amplitudes. In general terms, the waves incident on the ports do not have to be in coherent states, and may even be in mixed states, such as thermal states, but in order for the scheme to be self consistent, the vacuum states of seemingly undriven ports must be included. It is often the case that incoming radiation is described solely in terms of quantum correlation functions (for example $\langle \hat{a} (\omega) \hat{a}^{\dagger}(\omega) \rangle$), and then only outgoing correlation functions can be determined. With care relating to ports in the vacuum state, this mapping can be achieved through the connection matrix method, Equation (\ref{D4}). Consider a multiport network, where one port comprises the input, another the output, and where $M > 2$; in other words, a microwave two-port network connects internally to a set of `internal' ports that influence the output but whose states are never measured. One way of eliminating the internal ports would be to take the partial trace over their states, to yield the two-port behaviour. 
Another approach is to say that \begin{align} \label{E4} \left( \begin{array}{c} \hat{b}_{1} \\ \hat{b}_{2} \\ \end{array} \right) & = \left( \begin{array}{cc} S_{11} & S_{12} \\ S_{21} & S_{22} \\ \end{array} \right) \left( \begin{array}{c} \hat{a}_{1} \\ \hat{a}_{2} \\ \end{array} \right) + \left( \begin{array}{c} \hat{n}_{1} \\ \hat{n}_{2} \\ \end{array} \right), \\ \nonumber \hat{\mathsf b} & = {\mathsf S} \hat{\mathsf a} + \hat{\mathsf n}, \end{align} where for brevity explicit reference to $\omega$ is dropped. The vector-valued operator $\hat{\mathsf n}$ contains linear contributions from the internal ports, and acts on a suitably extended state space. One may be tempted to ignore vacuum contributions from the internal ports, but if this is done, the output operators do not then satisfy bosonic commutation relationships. It is now clear that even vacuum states are likely to contribute to the output in the form of an additive `noise' term. In many cases, this noise will be thermal, and in some cases may be at an effective temperature higher than the physical temperature of the device. \subsection{Quantum noise temperature} \label{sec_qua_nse_tmp} Even at low physical temperatures $T_{\rm p} \ll \hbar \omega / k $, and without internal heating such as hot electron effects, `noise' appears at the output in terms of a weighted linear combination of vacuum states, because of $\hat{\mathsf n}$. Therefore, every device must have some minimum noise temperature. What is the minimum noise temperature of a multiport network? The answer depends on the properties of ${\mathsf S}$, such as reciprocity, unitarity, and even linearity, but in the case of an ideal two-port amplifier, with $S_{11}=S_{22}=0$, a simple but compelling argument is as follows. Equation (\ref{E4}) gives $\hat{b}_{2} = S_{21} \hat{a}_{1} + \hat{n}_{2}$, and because the source and noise terms correspond to different degrees of freedom, \begin{align} \label{E5} [\hat{b}_{2},\hat{b}^{\dagger}_{2}] & = |S_{21}|^{2} [\hat{a}_{1},\hat{a}^{\dagger}_{1}] + [\hat{n}_{2},\hat{n}^{\dagger}_{2}] \\ \nonumber [\hat{n}_{2},\hat{n}^{\dagger}_{2}] & = 1 - |S_{21}|^{2}, \end{align} where the second line follows because the operators correspond to travelling waves on transmission lines: $[\hat{a}_{1},\hat{a}^{\dagger}_{1}] = 1$ and $[\hat{b}_{2},\hat{b}^{\dagger}_{2}]=1$. The one-sided spectral density of the power travelling away from the output is given by \begin{align} \label{E6} s^{b} (\omega) = |S_{21}|^{2} s^{a} (\omega) + s^{n} (\omega). \end{align} $|S_{21}|^{2}$ appears as the transducer power gain of the amplifier, as in the classical case. For any operator, $\hat{X} \hat{X}^{\dagger} = \left[ \hat{X}, \hat{X}^{\dagger} \right] / 2 + \left\{ \hat{X}, \hat{X}^{\dagger} \right\}/2$, and so $\langle \left\{ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right\} \rangle = 2 \langle \hat{n}_{2} \hat{n}^{\dagger}_{2}\rangle - \langle \left[ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right] \rangle$. For an amplifier having power gain, $|S_{21}|^{2} > 1$, \begin{align} \label{E7} s^{n} (\omega) = \frac{\hbar \omega}{2} \langle \left\{ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right\} \rangle & \ge - \frac{\hbar \omega}{2} \langle \left[ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right] \rangle = \frac{\hbar \omega}{2} \left( |S_{21}|^{2} - 1 \right), \end{align} where Equation (\ref{E5}) has been used. The added noise $s^{n} (\omega)$ is zero for unity power gain: a lossless, passive device. 
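The Python sketch below carries out the commutator bookkeeping implied by Equations (\ref{E4}) and (\ref{E5}): if the outputs are written as $\hat{\mathsf b} = {\mathsf S} \hat{\mathsf a} + \hat{\mathsf n}$, preserving bosonic commutators requires the matrix of noise commutators $[\hat{n}_{i},\hat{n}_{j}^{\dagger}]$ to equal ${\mathsf I} - {\mathsf S}{\mathsf S}^{\dagger}$. The scattering matrices used are illustrative assumptions.
\begin{verbatim}
import numpy as np

# Bookkeeping sketch consistent with Equations (E4)-(E5): b = S a + n requires
# the noise commutator matrix [n_i, n_j^dagger] to equal I - S S^dagger.
def noise_commutators(S):
    """Matrix whose (i, j) entry is the required [n_i, n_j^dagger]."""
    S = np.asarray(S, dtype=complex)
    return np.eye(S.shape[0]) - S @ S.conj().T

# Ideal matched amplifier: S11 = S22 = 0, power gain |S21|^2 = 100, tiny S12.
S_amp = np.array([[0.0, 0.01],
                  [10.0, 0.0]])
print(noise_commutators(S_amp).real)
# The bottom-right entry is 1 - |S21|^2, as in Equation (E5); it is large and
# negative, so the added noise cannot vanish.

# Lossless reciprocal beam splitter (unitary S): no added noise is required.
theta = 0.3
S_bs = np.array([[np.cos(theta), 1j * np.sin(theta)],
                 [1j * np.sin(theta), np.cos(theta)]])
print(np.allclose(noise_commutators(S_bs), 0.0))   # True
\end{verbatim}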
Usually, the noise power at the output is referred to the input to define a noise temperature $T_{\rm n} = s^{n} (\omega) / |S_{21}|^{2} k$, giving \begin{align} \label{E9} T_{\rm n} & \ge \frac{\hbar \omega}{2k} \left( 1 - \frac{1}{|S_{21}|^{2}} \right). \end{align} For a high-gain amplifier there is a minimum noise temperature of $T_{\rm q} = \hbar \omega / 2k$, which is called the {\em standard quantum limit} (SQL). No phase-preserving amplifier can have a noise temperature of less than the quantum limit \cite{Cav1}. This noise power adds to any intrinsic power from the source. If the source is also in its vacuum state, an additional half a photon of noise is added. One interpretation of the SQL is to say that at least one other source must be connected to an amplifier to provide the energy needed for amplification, and at the very least this must have vacuum fluctuations. Often, several internal sources are present, and so the SQL is not realised. To achieve the quantum limit it is prudent to choose a configuration that has the smallest number of connected degrees of freedom. A single-mode resonator parametric amplifier is a good example of this principle. $T_{\rm n}$ is a {\em noise temperature}, but actually describes an average spectral power, $k T_{\rm n}$, and so how does it relate to noise? The main use of an amplifier is to amplify a coherent voltage or current waveform, {\em preserving phase}, meaning that the gain is the same for each of the quadrature components (time shift invariance). Equation (\ref{E3}) shows that $\hat{v}^{+}(\omega)$ is not Hermitian, and so not measurable; but combining the positive and negative frequency parts, the quadrature components of $\hat{v}(t)$ at the output, namely $ \sqrt{\hbar \omega Z_{0} / 2} \left( \hat{b}(\omega) + \hat{b}^{\dagger}(\omega) \right) \cos (\omega t)$ and $ -i \sqrt{\hbar \omega Z_{0} / 2} \left( \hat{b}(\omega) - \hat{b}^{\dagger}(\omega) \right) \sin (\omega t)$, are individually Hermitian, and so measurable. To calculate the fluctuation in the travelling noise-wave voltage at the output, in the absence of a signal at the input, $\Delta v_{\rm out}(t) = \sqrt{\langle \hat{v}_{\rm out}^{2} (t) \rangle - \langle \hat{v}_{\rm out} (t)\rangle^{2}}$ is required. For a statistically stationary state, such as a thermal state, it can be shown through straightforward algebra that $\left( \Delta v_{\rm out}(t) \right) ^{2} / Z_{0} = s^{n} (\omega) = k T_{\rm n} |S_{21}|^{2}$, and finally $\Delta v_{\rm in}(t) = \sqrt{ k T_{\rm n} Z_{0} } $. The crucial point is that noise temperature is a measure of the variance of the amplitude of the noise voltage wave, and so is the relevant measure of sensitivity when an amplifier is used to amplify travelling voltage/current waveforms. This contrasts with a measurement of average power, where the fluctuation in power is the relevant measure of sensitivity. If a power detector follows an amplifier, the radiometer equation must be used: $\Delta T = T_{\rm n} / \sqrt{B \tau}$. In fact, if a signal is amplified, digitally sampled, and then analysed using an autocorrelation algorithm to give a power spectrum, the radiometer equation still holds for each spectral bin. In summary, the sensitivity of amplifiers is characterised by {\em noise temperature}, which is a spectral power, and the sensitivity of power detectors by {\em noise equivalent power}, which is a fluctuation in power for a given post-detection integration time. These are based on second-order and fourth-order statistics respectively. 
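The short sketch below puts illustrative numbers on these quantities: the SQL at an assumed operating frequency, the radiometer equation for an assumed bandwidth and integration time, and the input-referred noise-wave voltage $\sqrt{k T_{\rm n} Z_{0}}$. All values are assumptions chosen for illustration.
\begin{verbatim}
import numpy as np

# Illustrative numbers for the standard quantum limit and the radiometer
# equation.  Frequency, bandwidth and integration time are assumptions.
h = 6.62607015e-34    # J s
k = 1.380649e-23      # J/K

nu = 5e9                          # operating frequency (Hz)
T_q = h * nu / (2.0 * k)          # standard quantum limit, hbar*omega/2k
print("SQL at 5 GHz          : %.1f mK" % (1e3 * T_q))

T_n   = 2.0 * T_q                 # an amplifier a factor of two above the SQL
B_pre = 1e6                       # pre-detection bandwidth (Hz)
tau   = 1.0                       # post-detection integration time (s)

# Radiometer equation: smallest detectable change in temperature when the
# amplified signal is square-law detected and averaged.
dT = T_n / np.sqrt(B_pre * tau)
print("radiometer dT         : %.1f uK" % (1e6 * dT))

# Noise-wave voltage spectral amplitude referred to the input, sqrt(k*T_n*Z0).
Z0 = 50.0
print("input noise voltage   : %.1f pV/sqrt(Hz)" % (1e12 * np.sqrt(k * T_n * Z0)))
\end{verbatim}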
\subsection{Microscopic physics} \label{sec_mic_qua} \begin{figure} \noindent \centering \includegraphics[trim = 6cm 19cm 7cm 4cm, width=50mm]{Figure6.pdf} \caption{Two generalised forces $\hat{F}_{\rm in} ({\bf r},t)$ and $\hat{F}_{\rm out} ({\bf r},t)$ act on the physical properties $\hat{I} ({\bf r},t)$ and $\hat{O} ({\bf r},t)$, respectively, of a device to create a two-port network. In the frequency domain, the quantum response functions, retarded Green's functions, are essentially the two-port scattering parameters $S_{ij}$.} \label{figure6} \end{figure} It is usually sufficient to adopt a microwave systems approach to modelling, but to achieve the best possible performance, it is necessary to understand the relationship between quantum systems theory and the solid-state physics of the device. Rather than regarding the scattering parameters as coefficients, or complex probability amplitudes, it is possible to regard them as response functions in the spirit of Kubo's linear response theory. Consider the two-port network shown in Figure \ref{figure6}. There is some input quantity, $\hat{F}_{\rm in}$, such as the magnetic vector potential of a TEM wave, that couples to some property of the device $\hat{I}$, such as current density. They couple in the sense of combining to add an interaction term to the overall Hamiltonian, as in Equation (\ref{C1}). Likewise there is some output quantity $\hat{F}_{\rm out}$ that couples to some other property of the device $\hat{O}$. Various Kubo-like response formulae then follow, as in Equation (\ref{C5}), \begin{align} \label{E1b} \langle \Delta \hat{I}^{H} ({\bf r},t) \rangle_{t_{0}} & = \frac{-i}{\hbar} \int_{-\infty}^{+\infty} dt' \, \theta(t-t') \int_{\cal V} {\rm d}^{3} {\bf r}' \, \langle \left[ \hat{I}^{\rm I} ({\bf r},t), \hat{I}^{\rm I} ({\bf r}',t') \right] \rangle_{t_{0}} \cdot \langle \hat{F}_{\rm in} ({\bf r}',t') \rangle_{t_{0}} \\ \nonumber \langle \Delta \hat{O}^{H} ({\bf r},t) \rangle_{t_{0}} & = \frac{-i}{\hbar} \int_{-\infty}^{+\infty} dt' \, \theta(t-t') \int_{\cal V} {\rm d}^{3} {\bf r}' \, \langle \left[ \hat{O}^{\rm I} ({\bf r},t), \hat{I}^{\rm I} ({\bf r}',t') \right] \rangle_{t_{0}} \cdot \langle \hat{F}_{\rm in} ({\bf r}',t') \rangle_{t_{0}}. \end{align} Additional steps are needed, depending on the device, to turn these expressions into scattering parameters, such as $S_{11}$ and $S_{12}$. For example, the volume integrals must be turned into surface integrals over the ports (although it may be possible to express the interaction Hamiltonian directly in terms of say the voltage at the terminals of the device, avoiding the explicit need for a volumetric formulation), and $\hat{F}_{\rm in}$ and $\hat{F}_{\rm out}$ must be described in terms of creation and annihilation operators to give travelling wave amplitudes. The central point, however, is that scattering parameters can now be identified as, essentially, Kubo response functions. Following procedures similar to those outlined in Section \ref{sec_pow_qua}, it is possible to calculate power gain, reactive and resistive input impedance, noise generation, etc., in terms of Kubo response functions. These calculations can be carried out using the scattering parameters, but now there is a direct connection with operators that describe the quantum behaviour of the device. Remember that Kubo response functions are retarded Green's functions, which describe how a solid-state system responds to the injection of an excitation. 
For example, an electron or hole may be created at one space-time point and one wishes to know the complex probability of an electron or hole appearing at another point. This deep connection between quantum systems theory and device physics is compelling and highly valuable. One important consideration relates to the distinction between backaction and noise. Noise is in a sense straightforward because it relates to the fluctuations present in outgoing waves when no external excitations are present. Backaction is more subtle because it relates to a change in the state of the applied field as a consequence of a measurement taking place. For example, the position of a particle can be measured precisely, but then all information about the momentum is lost. After this measurement, it is known where the particle is, but it is not known which way it is going. A subsequent measurement then suffers from the extreme nature of the first measurement. Often it is better not to measure the quantity of interest too precisely, so that further information can be gained at the second measurement. Electrical sensors act in the same way, and it is better for the first measurement not to constrain the subsequent behaviour of the system too precisely. It seems that the operator $\hat{I}$ describes the way in which the amplifier feeds noise out of the input terminals, and determines the degree of backaction imposed. The manifestation of radiated noise and backaction depends on the basis used to represent the amplifier. The travelling wave representation, which is defined only to within an arbitrary real reference impedance $Z_{0}$, presents the effects in a different way to a discrete representation where a voltage and current source are placed at the input, as is commonly done in the case of operational amplifiers \cite{Cle1}. \subsection{Comparing performance} \label{sec_nse_com} Numerous fundamental physics experiments are based on measuring power spectra, but should these be carried out using low-noise detectors or low-noise amplifiers? From the perspective of sensitivity, it might be expected that the two are the same because power can be derived from voltage, and vice versa, but the measurement statistics are different, as has been seen. When an amplifier-detector combination is used to measure power from a thermal source, the smallest change in temperature that can be detected is given by the radiometer equation $\Delta T = T / \sqrt{B_{\rm pre} \tau}$, where $B_{\rm pre}$ is the pre-detection bandwidth, and $\tau$ is the post-detection integration time. The smallest change in power that can be detected is then $\Delta P = k T_{\rm n} B_{\rm pre} / \sqrt{{B_{\rm pre}} \tau}$, where the Rayleigh-Jeans limit is assumed when defining noise temperature. The smallest change in power that can be detected using a power detector having an intrinsic noise equivalent power of $NEP_{\rm i}$ is $\Delta P = NEP_{\rm i} \sqrt{B_{\rm pst}} = NEP_{\rm i} / \sqrt{2 \tau}$, where $B_{\rm pst}$ is the noise equivalent post-detection bandwidth. Comparing these two gives an equivalent noise temperature of \begin{align} \label{F1} T_{\rm en} & = \frac{NEP_{\rm i}}{k \sqrt{2 B_{\rm pre}}}. \end{align} $NEP_{\rm i}$ characterises internally generated noise, and does not include any background noise. The reason is that the noise temperature of the amplifier does not include any background noise either. 
If background noise is included in both cases, thereby comparing system NEP with system noise temperature, Equation (\ref{F1}) can be used. In the case of a single-pole filter having a Lorentzian profile, it can be shown that the relevant noise bandwidth is $B_{\rm pre} \approx \pi \nu_{0} / 4 R$, where $R = \nu_{0} / \Delta \nu$ is the spectral resolution, and $\Delta \nu$ is the full width half maximum (FWHM). Using Equation (\ref{F1}), \begin{align} \label{F2} T_{\rm en} & = \left[ \frac{2 R}{k^{2} \pi \nu_{0} } \right]^{1/2} NEP_{\rm i}. \end{align} \begin{figure} \noindent \centering \includegraphics[trim = 0cm 0cm 0cm 0cm, width=140mm]{figure7.pdf} \caption{Black lines: System noise temperature of a quantum-limited amplifier preceded by a source having a physical temperature of 10 mK (solid), 50 mK (dotted), 4 K (dashed), and 200 K (dash-dot). Blue lines: Noise power radiated by single-mode sources, expressed as a noise temperature, having physical temperatures of 10 mK (solid), 50 mK (dotted) and 4 K (dashed). Red lines: Equivalent noise temperatures of detectors having noise equivalent powers of $10^{-18}$, $10^{-19}$, $10^{-20}$, $10^{-21}$, $10^{-22}$ WHz$^{-1/2}$ with $R=100$. Green dotted lines: Equivalent noise temperatures of detectors having dark photon rates of 10, 100 and 1000 Hz with $R=100$. The lowest solid faint black line shows the equivalent noise temperature of a squeezed amplifier (10 dB) operating with a squeezed source at 1 mK; it can be regarded as the absolute limit of any coherent system. The squares show a range of reported noise temperatures of coherent receivers operating at a variety of temperatures. Spectral line astronomy requires coherent systems having the SQL over 100-1000 GHz. Far-infrared (FIR) space-based astronomy requires incoherent systems having NEPs of order $10^{-20}$ WHz$^{-1/2}$. Single electron Cyclotron Radiation Emission Spectroscopy (CRES) requires coherent systems having the SQL at 20-30 GHz. The photon production rates in haloscopes designed for dark matter detection are extremely small, and so long integration times must be used for all realistic measurements. } \label{figure7} \end{figure} Figure \ref{figure7} shows, as black lines, the system noise temperature of a quantum-limited amplifier preceded by a source having a physical temperature of 10 mK (solid), 50 mK (dotted), 4 K (dashed) and 200 K (dash-dot). As frequency increases, the curves converge on a single line corresponding to $\hbar \omega / k$. Ordinarily, this indicates the sensitivity limit of coherent receivers. The SQL increases with $\nu_{0}$, but does not depend on bandwidth. A 10 mK source (typical of a dilution refrigerator) allows the SQL to be achieved at frequencies down to about 500 MHz. The dashed curve corresponds to a 4 K source (typical of a pulse tube cooler), and shows that the SQL can be achieved down to 100 GHz. SIS mixers (see later) approach the SQL over the range 100 GHz to 1 THz. An upward-looking radiometer or a space-based radiometer always has the $\approx 3$ K CMBR as its background, and so there is no benefit in using quantum-noise-limited amplifiers below 10 GHz. The dash-dot line, corresponding to a 200 K source, is essentially the temperature seen by an Earth Observation instrument; it is clear that uncooled amplifiers are suitable for most remote sensing applications. The equivalent noise temperatures of detectors having noise equivalent powers of $10^{-18}$, $10^{-19}$, $10^{-20}$, $10^{-21}$, $10^{-22}$ WHz$^{-1/2}$ with $R=100$ are shown in red. 
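As a worked example of Equations (\ref{F1}) and (\ref{F2}), the Python sketch below converts an assumed detector NEP into an equivalent noise temperature at an assumed centre frequency and resolution; the values chosen are illustrative and correspond to the kind of parameters plotted in Figure \ref{figure7}.
\begin{verbatim}
import numpy as np

# Sketch of Equations (F1)-(F2): converting a detector NEP into an equivalent
# noise temperature.  The NEP, centre frequency and resolution are assumptions.
k = 1.380649e-23      # J/K

NEP = 1e-19           # intrinsic noise equivalent power (W Hz^-1/2)
nu0 = 1e12            # centre frequency (Hz), i.e. 1 THz
R   = 100             # spectral resolution nu0 / delta_nu

B_pre = np.pi * nu0 / (4.0 * R)                     # noise bandwidth, single-pole filter
T_en  = NEP / (k * np.sqrt(2.0 * B_pre))            # Equation (F1)
T_en2 = np.sqrt(2.0 * R / (np.pi * nu0)) * NEP / k  # Equation (F2), same result

print("B_pre = %.3e Hz" % B_pre)
print("T_en  = %.3f K  (F1)" % T_en)
print("T_en  = %.3f K  (F2)" % T_en2)
\end{verbatim}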
The equivalent noise temperature improves as the centre frequency $\nu_{0}$ is increased, and as the resolution $R$ is lowered. The blue lines show the noise power radiated by single-mode sources, expressed as noise temperature, having physical temperatures of 10 mK (solid), 50 mK (dotted) and 4 K (dashed). A 10 mK source allows detectors having NEPs of better than $5 \times 10^{-21}$ WHz$^{-1/2}$ to be exploited down to 1 GHz, for $R=100$. There is no point in using a detector of any kind having a noise temperature that is significantly below the background limit. The plot shows that ultra-low-noise FIR power detectors, NEP$\approx 10^{-20}$ WHz$^{-1/2}$, can be used over 1-10 THz, where the blackbody spectrum of the CMBR falls away steeply. In cases such as these, where the noise is dominated by intrinsic detector noise, it can be beneficial to have a number of optical modes available for absorbing signal power, as described in Section \ref{sec_mul_det}, motivating the use of multimode detectors in space-based FIR astronomy. Figure \ref{figure7} shows why power detectors, often called {\em incoherent receivers}, are used for high-frequency, low-resolution measurements, whereas amplifiers, often called {\em coherent receivers}, are used for low-frequency, high-resolution measurements. The crossover occurs at millimetre wavelengths, where the two approaches have similar sensitivities. The squares show a range of state-of-the-art noise temperatures of coherent receivers operating at a variety of physical temperatures. The crucial point is that the various technologies all fall short of the SQL by a factor of a few, but track it with frequency. The plot suggests that coherent receiver technology starts to suffer from radiometric leakage at frequencies below 10 GHz even though the physical temperature is often lower. Controlling inadvertent stray light in a cryostat can be surprisingly challenging. The best detectors have NEPs of $10^{-19}$ to $10^{-20}$ WHz$^{-1/2}$, showing that two orders of magnitude improvement at microwave frequencies would be highly beneficial, leading to instruments that are far superior to even squeezed amplifiers for certain low-spectral-resolution ($R=100$) applications. Incoherent receivers can also suffer badly from radiometric leakage. It should not be assumed that the SQL, $T_{\rm q}$, cannot be beaten. This limit exists only in the case of phase-preserving amplifiers. Some amplifiers, however, get the energy needed for amplification from a coherent pump tone, and can be engineered to have different gains for the in-phase and out-of-phase components. The noise temperature of one quadrature can be lowered below the SQL, but only at the expense of the noise temperature of the other, as required by Heisenberg's uncertainty principle. This process is called {\em squeezing}, and squeezing factors, gain ratios, of 10-15 dB are achievable. Higher squeezing factors are challenging because any phase imperfections, which may be time dependent, degrade the fidelity of the effect. The circle in Figure \ref{figure5} becomes an ellipse, and the higher the squeezing factor, the higher the sensitivity to changes in the orientation of the ellipse. The most sophisticated systems squeeze both the source, and the noise from the amplifier, allowing exceptionally sensitive measurements to be made. 
The faint solid line is the SQL of a 10 dB squeezed system at 1 mK, showing that lower noise temperatures are possible if this exotic mode of operation can be achieved and developed for practical applications. \section{Superconducting devices and circuits} Superconducting thin-film devices provide an excellent technological platform for exploiting concepts in quantum sensing. When a BCS superconductor is cooled below its critical temperature $T_{\rm c}$, an energy gap forms, $E_{\rm g} = 7 k T_{\rm c} /2$, and the material's electrical, magnetic and thermal characteristics change significantly. Most quantum devices operate at $T_{\rm p} \approx 0.1 \, T_{\rm c}$, but a few operate at higher temperatures, $T_{\rm p} \approx T_{\rm c}$. Some of the most important materials, deposited using ultra-high-vacuum sputtering, and patterned using ultraviolet lithography, are listed in Table \ref{Super}. The gap frequency $ f_{\rm g} = E_{\rm g} / h$ is important because below $f_{\rm g}$ the material has near-zero electrical resistivity, whereas above $f_{\rm g}$ superconducting pairs are broken to create single-electron excitations, quasiparticles, which have appreciable resistivity. Power detectors and photon counters usually operate at frequencies above $f_{\rm g}$, and extend up through the infrared and X-ray regions. Amplifiers, frequency convertors and transmission lines must avoid breaking pairs and so operate from kHz to sub-THz frequencies. Reactively sputtered materials such as NbN and NbTiN are popular because they allow operation to around 1.2 THz. It is also routine to fabricate devices using multilayers (TiAu, MoAu, TiAl). Although the films do not diffuse and stay physically distinct, a proximity effect causes superconducting quasiparticles and pairs to leak, and the multilayer behaves as a homogeneous superconductor having properties that are intermediate between those of the constituent layers. For example, $T_{\rm c}$ can be adjusted over the range 50-500 mK to a precision of about 5 mK. A long range lateral proximity effect also occurs, where a wiring contact can, without material diffusion, change the properties of the active device. Table \ref{Devices} lists a number of important device types, and indicates whether they rely on pair breaking or pair preservation. \begin{table} \begin{center} {\begin{tabular}{cccc} Material & $T_{\rm c}$ (K) & $E_{\rm g}$ (meV) & $f_{\rm g}$ (GHz) \\ \hline NbN & 16 & 4.8 & 1160 \\ Nb & 9.3 & 2.8 & 680 \\ Ta & 4.48 & 1.35 & 325 \\ Al & 1.2 & 0.36 & 90 \\ Mo & 0.9 & 0.27 & 65 \\ Ti & 0.39 & 0.11 & 26 \\ \hline \end{tabular}} \end{center} \caption{Illustrative characteristics of superconductors commonly used to fabricate quantum sensors. $T_{\rm c}$ is the critical temperature, $E_{\rm g}$ the energy gap, and $f_{\rm g}$ the associated pair breaking frequency.} \label{Super} \end{table} \begin{table} \begin{center} {\begin{tabular}{ccc} Device & Pair breaking & Wavelength range \\ \hline Passive & No & Microwave to submm \\ SQUID & No & RF \\ SIS & No & Submm \\ TES & Yes & Submm, FIR, Optical and X-ray \\ KID & Yes & Submm, FIR \\ Paramp & No & Microwave, MMwave \\ SNSPD & Yes & Optical \end{tabular}} \end{center} \caption{Various superconducting devices used in fundamental physics experiments. Descriptions are given in the text. The mode of operation and typical operating wavelength are listed. 
} \label{Devices} \end{table} In this short overview, it is not possible to list all of the superconducting components available, but it is useful to illustrate the breadth of the technology available. It should also be appreciated that many of the device types described below can be combined to create complex microcircuits having a high degree of functionality: large format imaging arrays and chip spectrometers. \begin{figure} \noindent \centering \includegraphics[trim = 1cm 1cm 1cm 1cm, width=70mm]{figure8.jpg} \caption{Millimetre-wave thin film superconducting filter based on coplanar transmission line. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements).} \label{figure8} \vspace{2mm} \includegraphics[trim = 1cm 1cm 1cm 1cm, width=70mm]{figure9.jpg} \caption{Millimetre-wave superconducting filter based on parallel capacitors and series capacitors and inductors. The films are typically 100 nm thick. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements).} \label{figure9} \end{figure} {\bf Passive components:} RF components, such as micron-scale transmission lines, directional couplers, filters and loads can be realised in microstrip ($ 5 < Z_{0} <40 \, \Omega$) and coplanar ($ 70 < Z_{0} <150 \, \Omega$) configurations. These structures use 50-500 nm thick superconducting, normal metal and dielectric films, such as SiO and SiO$_{2}$, to create passive components that can operate to 1.2 THz: Figure \ref{figure8} and Figure \ref{figure9}. Superconducting RF components have a number of advantages: (i) For frequencies below $f_{\rm g}$, and at low powers, the films are essentially lossless, enabling exceptional behaviour. Dielectric loss in deposited and surface oxides then becomes the biggest dissipative factor, and amorphous dielectrics lead to troublesome Two Level System (TLS) noise. In fact, the deposition and control of low-loss oxides is one of the biggest challenges facing the technology. (ii) The surface impedance of a superconductor is complex valued. The real part is caused by dissipation, usually in the form of quasiparticle scattering, and the imaginary part by reactive energy stored in the inertial behaviour of undamped pairs. If the cross section of the dielectric region is sufficiently small, the energy stored in the kinetic inductance of the film can be comparable with the energy stored in the electromagnetic field. The consequential reduction in wavelength results in devices being physically smaller than would otherwise be the case. (iii) Superconducting thin-film transmission lines are only lossless for powers below, roughly speaking, -50 dBm. At higher powers the quasiparticle population increases and heats due to the photon absorption rate being greater than the quasiparticle recombination rate, and the surface impedance changes. Superconducting resonators show a rich variety of behaviour because on tuning through the resonance the stored energy changes, modifying the equivalent circuit parameters: the resonant frequency and quality factor of the underlying resonance depend on the frequency and strength of the exciting tone \cite{Cnt1}. {\bf Superconducting Quantum Interference Device:} SQUIDs were the first superconducting sensors to be used for science. They operate well below $T_{\rm c}$ because of the need to maintain a long-range coherent superconducting state \cite{Sqd1}. Imagine a closed loop of superconducting material. 
The line integral of the magnetic vector potential is equal to the flux enclosed, but according to the Aharonov-Bohm effect, the line integral contributes a phase factor to the bosonic wavefunction of the superconducting pairs. Because the phase around a closed loop must be single valued, only certain values of flux are allowed to exist inside the loop. The quantum of magnetic flux is $ \Phi_{0} = h / 2 e = 2.1 \times 10^{-15}$ Wb. A DC SQUID comprises a superconducting ring in which two tunnel junctions are inserted on opposite sides. If the ring is current biased through two additional connections, arranged to give a symmetric configuration, the voltage across the tunnel junctions provides a measure of the screening current in the ring. The voltage is then periodic as an external signal flux is applied and individual quanta enter the ring. This device can be used for extremely sensitive field measurements (fT Hz$^{-1/2}$). Another level of sophistication uses the amplified voltage to feed back flux into the ring through a thin-film transformer. The feedback holds the total flux constant, and the feedback voltage gives a linear measure of the externally applied flux. Finally, a low-inductance input transformer can be added to create an ultra-low-noise current to voltage convertor. Noise currents of pA Hz$^{-1/2}$ are routinely achieved. SQUIDs have been developed for applications such as biological and biomedical magnetometry, geology, and even oil exploration. SQUIDs are also used for reading out and multiplexing Transition Edge Sensors. {\bf Superconducting Parametric Amplifier:} SPAs can be based on the nonlinear behaviour of SQUIDs by modulating the flux in the ring with an external RF pump source, or on the nonlinear behaviour of thin-film transmission lines \cite{Prm1}. In both cases, they must be used below $f_{\rm g}$. In the case of transmission lines, the signal is combined with a high-level pump tone, which modulates the parameters of the device and transfers energy to the signal, resulting in gain (10-20 dB). Half-wavelength superconducting resonators make excellent narrow band amplifiers at microwave frequencies, and are predicted to work well at millimetre wavelengths \cite{Prm2}. The advantages of resonators are that only small pump powers are needed, keeping phase noise low, and the number of degrees of freedom can be controlled, minimising the number of modes that can contribute to vacuum fluctuation noise. The bandwidths of resonators are small ($R \ge 200$), and so for broadband applications ($R \le 5$) travelling-wave structures are needed. The difficulty with travelling wave devices is that the cross section of the transmission line must be small (50 nm thick films, 1-2 $\mu$m wide) to maximise the kinetic inductance fraction, but the lengths must be large (0.5 m) to maximise gain. It is difficult to achieve uniform, defect-free fabrication, and there are other complications associated with dispersion engineering and harmonic suppression. Also, relatively large pump powers are needed, leading to heating. Superconducting films display both resistive and inductive nonlinearities, and these contribute simultaneously to the operation of amplifiers based on transmission lines \cite{Prm2}. Additionally, at least two non-linear mechanisms are present (gap modulation and quasiparticle generation), and these have different speeds and power thresholds. Overall, SPAs achieve exceptional behaviour at microwave frequencies, frequently approaching the quantum limit. 
Most configurations give phase-preserving amplification, but the intrinsic ultra-low-noise behaviour has also allowed squeezing to be demonstrated. {\bf Superconductor Insulator Superconductor mixer:} SIS mixers were the first superconducting devices to find widespread use in astrophysics \cite{Sis1,Sis2}, and now form a technological cornerstone of high resolution ($R \approx 10^{8}$) submillimetre-wave spectral line astronomy. As discussed in Section \ref{sec_nse_com}, high-resolution instruments favour coherent systems. SIS mixers exploit the complicated dynamics of quasiparticle tunnelling in dielectric barriers, but it is necessary to suppress Josephson pair tunnelling by the application of a small DC magnetic field in the plane of the barrier. A typical tunnel junction has an area of 1 $\mu $m$^{2}$, to minimise capacitance, allowing near quantum-noise-limited down conversion from submillimetre (100 GHz to 1 THz) to microwave (1-10 GHz) frequencies. SIS mixers are fascinating devices because they bridge the gap between classical mixers, based on notions of IV curve nonlinearity, and photon-energy convertors, based on notions of creation and annihilation operators acting on field states. As the frequency of the LO is increased, a changeover in behaviour occurs when the photon energy becomes greater than the scale size of the nonlinearity in the IV curve ($\approx$ 100 GHz). Photon-assisted tunnelling steps appear due to the quasiparticle states on one side of the barrier being energy (essentially frequency) modulated. The change from classical to quantum behaviour brings gain on down conversion (in contrast to classical mixers which must have a 3 dB loss at best), quantum-limited noise temperature (due to the presence of the LO), and the appearance of quantum capacitance due to mismatched quasiparticle states on either side of the barrier creating a sloshing probability current. SIS mixer technology has enabled pioneering submillimetre-wave telescopes to be built, such as the James Clerk Maxwell Telescope in Hawaii, and the Atacama Large Millimeter Array in Chile, and has been flown to Lagrange Point 2 on the Herschel Space Telescope. \begin{figure} \noindent \centering \includegraphics[trim = 0cm 0cm 0cm 0cm, width=60mm]{figure10.jpg} \caption{Free-space far-infrared superconducting transition edge sensor. The superconducting MoAu bilayer at the bottom of the picture has gold bars to suppress noise. It is fabricated on a 200 nm thick SiN membrane, which is supported by legs that are 200 nm thick, 1 $\mu$m wide and can be up to 1 mm long. Nb wiring runs out along the two legs at the bottom of the picture. The infrared absorber comprises a few nm of disordered $\beta$-phase Ta. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements). } \label{figure10} \end{figure} {\bf Transition Edge Sensor:} TESs have been the workhorse of submillimetre-wave astronomy for many years, particularly for mapping spatial variations in the intensity and polarisation of the CMBR, revealing multipole acoustic oscillations in the plasma of the early Universe. As discussed in Section \ref{sec_nse_com}, low-resolution instruments favour incoherent systems. The basic device comprises a superconducting film isolated from the heat bath by either SiN legs (200 nm thick and 2 $\mu$m wide) or by judicious engineering of electron-phonon decoupling in the superconductor. 
When the superconducting film is connected to a low impedance (m$\Omega$) voltage source, electrothermal feedback causes the film to self bias on its superconducting transition. External radiation is then applied optically to a nearby absorbing film made of a different superconductor, Figure \ref{figure10}, or to a load that terminates a superconducting microstrip transmission line, Figure \ref{figure11}. When energy is absorbed, electrothermal feedback holds the operating point constant by swapping optical power for bias power. The bias current falls and is read out using a SQUID. Electrothermal feedback causes a TES to respond more quickly than the open-loop thermal time constant would suggest. TESs have been developed extensively for most of the electromagnetic spectrum, and although various noise mechanisms are present they can be suppressed to the point where the phonon shot noise in the thermal isolation dominates, giving NEPs of 10$^{-17}$ to 10$^{-20}$ WHz$^{-1/2}$. TESs can be assembled into very large arrays, and the microstrip versions have been combined with superconducting RF components to make chip spectrometers for astronomy and Earth Observation. TESs have also been used at optical wavelengths for laser interferometry and dark matter searches, and have been developed into a sophisticated technology for far-infrared and X-ray space telescopes. \begin{figure} \noindent \centering \includegraphics[trim = 0cm 0cm 0cm 0cm, width=70mm]{figure11.jpg} \caption{Microstrip coupled millimetre-wave transition edge sensor. The primary device is a TiAl bilayer supported on a 200 nm SiN membrane. The legs are 4 $\mu$m wide and support Nb wiring and a superconducting microstrip transmission line, which is fabricated in Nb with a SiO$_{2}$ insulator and terminated in a 20 $\Omega$ gold/copper load. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements). } \label{figure11} \end{figure} {\bf Kinetic Inductance Detector:} KIDs are being developed to replace TESs in applications where very large format imaging arrays are needed \cite{Kid1}. They can be used from submillimetre to X-ray wavelengths, with time-resolved photon-counting spectroscopy being possible at the shortest wavelengths. A low-power coherent tone is applied to a superconducting microwave resonator so that its complex transmission factor can be monitored. The resonator can be a distributed transmission line, or a discrete circuit that takes the form of an optical pixel. When signal power, or indeed an individual signal photon, is absorbed by the material of the resonator, the surface impedance and resonant frequency change, and this modulates the microwave transmission amplitude and phase. The real promise of this device is that thousands of pixels can be weakly coupled to a single superconducting readout line, and a densely packed comb of microwave tones generated digitally. The output signal can then be sampled, and a real-time FFT used to measure the complex transmission factors of all of the devices simultaneously. The ambition is to create submillimetre-wave and far-infrared cameras having tens of thousands of pixels. Various trade-offs have to be considered, but NEPs ranging from 10$^{-17}$ WHz$^{-1/2}$ to 10$^{-20}$ WHz$^{-1/2}$ have been achieved. A challenge with these devices is to ensure that optical behaviour can be maintained, in terms of beam patterns and efficiency, whilst not degrading the microwave response, such as responsivity and noise. 
These devices are subject to the complicated dynamics of superconducting resonators, and the generation of quasiparticles by the readout tone is a particular consideration. It is difficult to optimise the optical and readout characteristics simultaneously at frequencies much below 100 GHz, because the signal needs to break pairs but the readout needs to preserve pairs. The above list of devices is certainly not exhaustive. TESs operate at $T_{\rm c}$, which is usually chosen to be twice the bath temperature $T_{\rm b}$, and although $T_{\rm b}$ is typically in the range 50-300 mK, the active part of the device is not as cold as it might be, leading to noise. The Cold Electron Bolometer (CEB) is an ingenious device that overcomes this problem \cite{Ceb1}, enabling NEPs of 10$^{-21}$ WHz$^{-1/2}$ to be achieved. Also, Superconductor Nanowire Single Photon Detectors (SNSPDs) are being used for optical photon counting \cite{Had1}, and Superconducting Qubits for microwave photon detection \cite{Qub2}. The squares in Figure \ref{figure7} show the noise temperatures of a range of different coherent receiver technologies. Again, this is far from exhaustive, but it can be seen that there is a particular need to further develop amplifiers and solid-state squeezed systems for frequencies in the range 1-100 GHz. Additionally, there is a need to develop ultra-low-noise incoherent receivers for the whole of the microwave-FIR region: 0.1 GHz to 10 THz. \section{Concluding Remarks} A new generation of ultra-low-noise sensors is required. The systems and their associated devices must push at quantum limits and so must be designed using quantum mechanical methods. There is a particular need for detectors, amplifiers, frequency convertors and imaging arrays for radio to infrared wavelengths, where existing devices fall short of theoretical limits. In some cases, the needed advances will be achieved through refinements in existing technology, but in other cases new device types must be invented. Crucially, raw sensitivity is rarely sufficient, and other characteristics such as quantum efficiency, bandwidth, linearity, saturation power and stability must be obtained simultaneously. One of the biggest challenges is to achieve artefact-free behaviour at the quantum level, particularly when an instrument is to be used in a harsh environment or flown in space. The needed innovations go beyond engineering methods and relate to the development of theoretical and numerical tools. Quantum information theory, quantum field theory, device physics and classical circuit theory must be brought together, and described using a common language, if the quantum sensors challenge is to be tackled in a robust way. \section*{Acknowledgements} I am grateful to UKRI/STFC for the awards Quantum Technology for Measurement of Neutrino Mass (QTNM) ST/T006307/1, Quantum Sensors for the Hidden Sector (QSHS) ST/T006625/1, and Ultra-low-noise Superconducting Spectrometer Technology for Astrophysics ST/V000837/1. Over the years I have had numerous enlightening discussions with colleagues on superconducting device physics. In particular, I would like to thank Christopher Thomas, David Goldie, Songyuan Zhao, Michael Crane and Dorota Glowacka for their exceptional work on developing, fabricating and testing the many devices studied by the Quantum Sensors Group in Cambridge over a period of 20 years. 
I would also like to thank Dennis Molloy and David Sawford for their outstanding work on engineering and operating a long list of ultra-low-noise cryogenic systems. \section*{Biography} Stafford Withington has worked on ultra-low-noise experiments for astronomy and fundamental physics for many years, including the development of ultra-low-noise instruments for submillimetre-wave and far-infrared space-based applications. His quantum sensors group at Cambridge has been developing and fabricating superconducting devices, microcircuits and imaging arrays for over 20 years. Stafford is now Emeritus Professor of Physics at the University of Cambridge, and Visiting Professor and Senior Researcher in the Department of Physics at the University of Oxford. He has held fellowships at Downing College Cambridge, All Souls College Oxford, Queens College Oxford and a Royal Society Fellowship at Chalmers University Sweden. He worked for various companies early in his career, including Ferranti Electronics Ltd., Marconi Space and Defence Systems Ltd., and Rolls Royce aircraft Engines (1971) Ltd. \section{Quantum sensing} Early in the 20th Century a series of experiments revealed that all waves exhibit particle-like behaviour, all particles exhibit wave-like behaviour, and that these phenomena are associated with an intrinsic indeterminacy in the outcomes of measurements. The nature of physical reality was questioned, and the language of classical mechanics was replaced by the formalism of quantum mechanics. It is now accepted that no matter how great the skill of an observer, the outcome of a single measurement on any simple physical system of any basic physical quantity is profoundly uncertain. When describing how systems evolve with time, we are forced into a new mechanics where probability distributions evolve in a deterministic manner, rather than the dynamical variables themselves. When several variables are measured, either simultaneously or sequentially, conditional probabilities come into play, and experimental measurements of one kind can influence those of another without any classical interactions being present. No physical quantity can be regarded as having an {\em actual} value until a measurement is made. This way of thinking is not merely a rebranding of classical statistical physics, where it is not humanly possible to keep track of every microscopic degree of freedom, such as the motion of every water molecule in a steam engine, but is intrinsic to the way in which we gather information about the physical world. Should scientists have ever gone down the quantum rabbit hole? Well, quantum mechanics is not optional, but is essential if we are to build mathematical models that replicate the behaviour of experimental systems. Quantum mechanics applies to all dynamical variables (not merely the mechanics of elementary particles), and therefore it applies to electrical quantities such as voltage, current, power, electric and magnetic fields, and dipole moments, etc. When devices and circuits are operated at low physical temperatures (10~mK to 4~K) to minimise thermal noise, the mysterious world of quantum mechanics is revealed, and it becomes {\em necessary} to use the techniques of quantum mechanics to describe the behaviour of electrical circuits. 
Because of the need to track probability distributions, the analysis of circuit elements, such as transformers, transmission lines, power detectors, mixers and amplifiers becomes complicated, and one is forced into asking questions about the influence of quantisation (vacuum fluctuations, squeezing, back action and entanglement) on the fidelity and sensitivity of electrical measurements. From a measurement perspective there is some target that we wish to probe, and this target must be described quantum mechanically. Likewise, there is a sensor, usually part of a larger electrical circuit, which itself must be described quantum mechanically. The purpose of the sensor is to create a macroscopic quantity that can be recorded, and which carries faithful information about properties of the target. {\em Quantum Sensing} refers to the quantisation of the dynamical variables of the target and the interaction with the quantum behaviour of the instrument carrying out the measurement. This interaction is potentially complicated because of the need to minimise the variance of the recorded signal, taking into account quantum uncertainty, and the inevitable quantum disturbance caused by the sensor. The quantum states of the target and sensor evolve in time in a mutually interactive way, which is the essence of quantum sensing. This article introduces the emerging field of quantum sensors and electronics for fundamental physics. The work is first put in context by describing a number of fundamental problems in physics where ultra-low-noise experiments are needed. Measurement principles are then discussed, focusing on electromagnetic fields in the microwave (30 cm, 1 GHz) to far-infrared (30 $\mu$m, 10 THz) wavelength range, where the transition from wave-like to particle-like behaviour is most pronounced. Special consideration is given to passive circuits, power detectors and signal amplifiers, but the principles are closely related to other kinds of quantum measurement such as stress, strain, speed and orientation. Towards the end of the article, the relative sensitivities of power detectors and amplifiers are compared, and a number of ultra-low-noise superconducting devices are described. Throughout the article two points are emphasised: (i) current experiments fall short of theoretical sensitivity limits, and a new generation of technology is needed to push down into the quantum-dominated regime; (ii) considerable innovation is possible by bringing together concepts from quantum information theory, quantum field theory, classical circuit theory, and device physics into a mathematical framework that can be used for modelling. \section{Why are ultra-sensitive measurements needed?} A new generation of fundamental physics experiments requires access to a family of ultra-low-noise sensors that can operate over the radio, microwave and far-infrared regions of the electromagnetic spectrum. For example, there is a need to produce an all-sky map of the polarisation state of the Cosmic Microwave Background Radiation (CMBR) to one part in 10$^{9}$ to understand the role of gravitational waves in forming structure in the early Universe \cite{Aba1,Xu1}. There is a need to observe the most distant galaxies (z$>$5) to understand how galaxies first formed and evolved, and arrived at the local Universe we see today. 
There is a need to understand the nature of Dark Matter (DM), with one strongly motivated possibility being the existence of a family of low-mass particles ($\mu$eV to meV) that interact only weakly with electromagnetic fields \cite{Ant1,Sik1}. Determining the absolute mass of the neutrino remains one of the most pressing problems in laboratory physics, and an international effort is underway to understand how it can be achieved by measuring, to one part in 10$^{6}$, the energies of individual electrons released during the radioactive decay of Tritium \cite{Ash1,Asn1}. There is a need for laboratory experiments that can probe the nature of spacetime and its relationship with the fundamental postulates of quantum field theory. These experiments, and others, require sensors that push at the limits imposed by quantum mechanics. \section{Power detection - a classical perspective} \label{sec_pow_det} At long wavelengths, scientists measure power, both dissipative and reactive, whereas at short wavelengths, they measure photon rates, and if possible count photons. Whilst it is true that average power $P$ is related to average photon rate $W$ by $P = \hbar \omega W$, the quantisation of the electromagnetic field goes well beyond this simple equality, and on moving from microwave to optical wavelengths, the behaviour of low-noise instruments changes significantly. \subsection{Radiation and noise} \label{sec_rad_nse} At long wavelengths, waveguiding systems tend to be {\em single mode}, meaning that there is a single spatial transverse degree of freedom available for carrying power from the source to the detector. At infrared and optical wavelengths, free-space beams tend to be highly {\em multimode}, meaning that there is a large number of transverse degrees of freedom available. By counting modes in wavevector space ($k$-space), the longitudinal mode rate $J$ can be calculated for (i) a transverse electromagnetic (TEM) transmission line, (ii) radiation into a half space, and (iii) radiation into solid angle $\Omega = 4 \pi \sin^{2} (\theta_{\rm m}/2)$ where $\theta_{\rm m}$ is the half opening angle of the beam: Table~\ref{modes}. Because most modes travel obliquely to the assumed optical axis, it is necessary to weight each mode by a projected area, giving an effective solid angle $\Omega_{\rm eff} = \pi \sin^{2} (\theta_{\rm m})$: $\Omega_{\rm eff} \rightarrow \Omega$ as $\theta_{\rm m} \rightarrow 0$. The overall differential modal flux can then be written ${\rm d}J = N {\rm d} \omega / 2 \pi$, where $N = A \Omega_{\rm eff} / \lambda^{2}$ is the effective number of transverse modes (not including polarisation). The effective number of transverse modes can be calculated rigorously using diffraction theory, and the above expression for $N$ is accurate even when the throughput is small and the transmission spectrum tapers off gradually with mode number. The modal structure of fields is classical, but it is important to appreciate that each mode constitutes a degree of freedom, which must be quantised accordingly.
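As a rough numerical illustration of the mode-counting argument, the short sketch below evaluates $\Omega_{\rm eff} = \pi \sin^{2}(\theta_{\rm m})$ and $N = A \Omega_{\rm eff}/\lambda^{2}$; the aperture area, beam angle and wavelength are arbitrary illustrative choices, not values taken from any particular instrument.
\begin{verbatim}
import numpy as np

def effective_mode_count(area_m2, theta_m_rad, wavelength_m):
    """Effective number of transverse modes N = A * Omega_eff / lambda^2,
    with Omega_eff = pi * sin^2(theta_m) accounting for obliquity."""
    omega_eff = np.pi * np.sin(theta_m_rad) ** 2
    return area_m2 * omega_eff / wavelength_m ** 2

# Illustrative numbers only: a 1 cm^2 aperture fed by an f/3-like beam
# (half opening angle of about 9.5 degrees) at a wavelength of 350 microns.
A = 1.0e-4                    # aperture area in m^2
theta_m = np.radians(9.5)     # half opening angle of the beam
lam = 350e-6                  # wavelength in m

N = effective_mode_count(A, theta_m, lam)
print(f"Effective number of transverse modes (one polarisation): N = {N:.1f}")
# N >> 1 indicates a highly multimode (optical-like) system, whereas
# N of order unity indicates an essentially single-mode system.
\end{verbatim}
Evaluating the same function at longer wavelengths for the same geometry shows how a beam that is highly multimode in the far infrared becomes effectively single mode at microwave frequencies.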
\begin{table} \begin{center} {\begin{tabular}{cccc} & Mode Rate & Effective Mode Rate & Number of Modes \\ \hline TEM line & ${\rm d}J = \frac{1}{2 \pi} {\rm d} \omega$ & $ {\rm d} J = \frac{1}{2 \pi} {\rm d} \omega$ & 1 \\ \hline Half Space & ${\rm d}J = \frac{A \omega^{2}}{4 \pi^{2} c^{2}} {\rm d} \omega $& ${\rm d} J = \frac{A \omega^{2}}{8 \pi^{2} c^{2}} {\rm d} \omega $ & $N = \frac{1}{2} \frac{A \Omega}{\lambda^{2}} = \frac{A \Omega_{\rm eff}}{\lambda^{2}}$ \\ \hline Solid Angle $\Omega$ & $ {\rm d}J = \frac{A \omega^{2} \Omega}{8 \pi^{3} c^{2}} {\rm d} \omega $ & $ {\rm d} J = \frac{A \omega^{2} \Omega_{\rm eff}}{8 \pi^{3} c^{2}} {\rm d} \omega $ & $N = \frac{A \Omega_{\rm eff}}{\lambda^{2}}$ \\ \hline \end{tabular}} \end{center} \caption{Longitudinal mode rate ${\rm d}J$, effective longitudinal mode rate after projecting onto a plane, and effective number of transverse modes $N$ for a TEM transmission line, radiation into a half space and radiation into physical solid angle $\Omega$. $A$ is the area of the source, and $\Omega_{\rm eff}$ is the effective solid angle of the beam.} \label{modes} \end{table} If each mode, comprising both longitudinal and transverse parts, is quantised and in a thermal state at temperature $T_{\rm p}$, the average power $P$ and its variance $(\Delta P)^{2}$ become \begin{align} \nonumber P & = \frac{A \Omega_{\rm eff}}{\lambda^{2}} \int \frac{{\rm d} \omega}{2 \pi} \, \frac{\hbar \omega}{e^{\hbar \omega / k T_{\rm p}} - 1} \\ \label{A1} & = \int {\rm d} \nu \, P(\nu) \\ \nonumber (\Delta P)^{2} & = \frac{1}{\tau} \frac{A \Omega_{\rm eff}}{\lambda^{2}} \int \frac{{\rm d} \omega}{2 \pi} \, \frac{ ( \hbar \omega )^{2} e^{\hbar \omega / k T_{\rm p}}}{ \left( e^{\hbar \omega / k T_{\rm p}} -1 \right)^{2}} \\ \label{A2} & = \frac{1}{\tau} \int {\rm d} \nu \, \left[ \frac{P(\nu)^{2}}{N} + h \nu P(\nu) \right], \end{align} where $P(\nu) = N h \nu n(\nu)$ is the average spectral power, $n(\nu)$ is the single-mode thermal occupancy, and $\tau$ is the time over which energy is collected to give the recorded power. If a perfectly matched planar detector is illuminated by a thermal field, these expressions give the average power and fluctuations in power measured. Although $P$ can be subtracted from the recorded output to reveal any additional signal, the fluctuations are troublesome, and act as a noise source in addition to any noise generated by the detector itself. The first term in Equation (\ref{A2}) reduces to the radiometer equation $\Delta T_{\rm p} = T_{\rm p} / \sqrt{B_{\rm pre} \tau}$ for a single-mode detector when $\hbar \omega < k T_{\rm p}$, where $B_{\rm pre}$ is the pre-detection bandwidth. $\Delta T_{\rm p}$ is the amount by which the source temperature must change to produce an output that is discernible above the noise. Notice that $B_{\rm pre} \tau \approx \tau / \tau_{\rm c}$ is the number of independent samples having coherence time $\tau_{\rm c}$ in the integration period $\tau$. The first term in Equation (\ref{A2}) can be regarded as coming from Gaussian distributed fluctuations in the envelope of a classical wave. The second term is dominant when $\hbar \omega > k T_{\rm p}$, and corresponds to a variance that is proportional to the mean, which is characteristic of Poisson statistics. It is indicative of the photon counting statistics of a coherent quantum state, where the variance in occupancy is equal to the mean.
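A minimal numerical sketch of Equations (\ref{A1}) and (\ref{A2}) for a single mode ($N=1$) is given below; the 10~mK source temperature anticipates the discussion that follows, and the spot frequencies are arbitrary. For a single mode, the ratio of the classical (wave) term to the shot (photon) term is simply the thermal occupancy $n(\nu)$, so the two contributions are equal when the occupancy falls to unity, i.e.\ when $h\nu = kT_{\rm p}\ln 2$, which is of order $kT_{\rm p}/h$.
\begin{verbatim}
import numpy as np

h = 6.62607015e-34   # Planck constant [J s]
k = 1.380649e-23     # Boltzmann constant [J/K]

def thermal_terms(nu, T, N=1):
    """Average spectral power P(nu) = N h nu n(nu), and the two contributions
    to the power variance of Equation (A2) per unit optical bandwidth and per
    unit integration time: a classical (wave) term P^2/N and a shot (photon)
    term h nu P."""
    n = 1.0 / np.expm1(h * nu / (k * T))   # single-mode thermal occupancy
    P = N * h * nu * n
    return P, P**2 / N, h * nu * P

# Illustrative case only: a single-mode source at 10 mK.
T = 0.010
for nu in [50e6, 200e6, 1e9, 10e9]:
    P, wave, shot = thermal_terms(nu, T)
    print(f"nu = {nu/1e9:7.3f} GHz   occupancy = {P/(h*nu):9.4f}   "
          f"wave/shot = {wave/shot:9.4f}")

# The two terms are equal when n(nu) = 1, i.e. h nu = k T ln(2),
# which is of order k T / h:
print("equal-contribution frequency =", k * T * np.log(2) / h / 1e6, "MHz")
\end{verbatim}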
Thus, classical fluctuations tend to dominate at low frequencies and quantum fluctuations at high frequencies, but in general they appear together and add as uncorrelated fluctuations, which is suggestive of different physical origins. For a 10 mK source, the changeover occurs at about 200 MHz. This blended behaviour can also be understood in terms of a Poisson mixture comprising an ensemble of Poisson distributions having continuously distributed means. Assume that there is a single transverse mode, for example a TEM transmission line. Suppose that the individual longitudinal modes, each lasting about $1/{\rm d} \nu$, are in coherent states, but that the complex amplitudes $\alpha$ vary. By definition, a coherent state is an eigenstate of the annihilation operator, $\hat{a} | \alpha \rangle = \alpha | \alpha \rangle$; it corresponds most closely to a coherent classical wave having complex amplitude $\alpha$. $\hat{a}$ is not Hermitian, and so does not represent a directly-measurable single quantity, unlike the in-phase and out-of-phase components. Let $P(\xi)$ be the probability distribution of $\xi \equiv |\alpha|^{2}$, which for a coherent state is also the average occupancy $\langle n \rangle$. The probability of detecting $n$ photons in a given longitudinal mode is the conditional probability $P(n|\xi)$, but we are interested in the probability of detecting $n$ photons over the whole ensemble, equivalently over a long integration time $\tau > 1/{\rm d} \nu$: \begin{equation} \label{A3} P(n) = \int {\rm d} \xi \, P(n|\xi) P(\xi). \end{equation} Using $E[n] = \sum n P(n)$ for the expectation value, and remembering that for a Poisson distribution $E[n|\xi] = \xi$, it can be shown using straightforward algebra that \begin{equation} \label{A4} E[n] = E[\xi] = E[|\alpha|^{2}], \end{equation} which reproduces Equation (\ref{A1}). The average power in a wave having a randomly varying amplitude gives the average occupancy of the underlying Poisson process. More interestingly, the variance $V[n] = E[n^{2}] - (E[n])^{2}$ becomes \begin{align} \label{A5} \nonumber V[n] & = V[|\alpha|^{2}] + E[|\alpha|^{2}] \\ & = V [|\alpha|^{2}] + V[n|E[|\alpha|^{2}]]. \end{align} The variance of the occupancy is the sum of the variance of $|\alpha|^{2}$, the classical noise, and the variance of a pure Poisson process $V[n|E[|\alpha|^{2}]]$, the quantum noise, having $E[|\alpha|^{2}]$ as its parameter. If the power is averaged for time $\tau$, the variance in the observations is \begin{align} \label{A6} \nonumber (\Delta P)^{2} & = \frac{1}{\tau} \int d \nu \, N (\hbar \omega)^{2} V[n] \\ & = \frac{1}{\tau} \int d \nu \, N (\hbar \omega)^{2} V [|\alpha|^{2}] \\ \nonumber & + \frac{1}{\tau} \int d \nu \, N (\hbar \omega)^{2} V \left[ n|E[|\alpha|^{2}] \right]. \end{align} The first term of Equation (\ref{A6}) is awkward because $V [|\alpha|^{2}]$ is needed, and this depends on the unspecified distribution $P(\xi)$. Assume, in the spirit of the central limit theorem, that the quadrature components of the complex amplitude $\alpha$ are Gaussian variates. $P(\xi)$ is then a chi-squared distribution with two degrees of freedom: an exponential distribution. A Gaussian distribution, however, has the feature that all of its moments can be calculated from the first and second moments.
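This chain of reasoning can also be checked numerically by brute force; the sketch below draws Gaussian quadrature components, as assumed above, forms the Poisson mixture, and compares the sampled statistics with the analytic expectations (the sample size and mean occupancy are arbitrary illustrative choices).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_modes = 200_000        # number of longitudinal modes in the ensemble
mean_occupancy = 3.0     # target <n> per mode (arbitrary illustrative value)

# Gaussian quadrature components give xi = |alpha|^2 an exponential
# (chi-squared, two degrees of freedom) distribution with mean <n>.
sigma = np.sqrt(mean_occupancy / 2.0)
alpha = sigma * (rng.standard_normal(n_modes)
                 + 1j * rng.standard_normal(n_modes))
xi = np.abs(alpha) ** 2

# Each mode is in a coherent state: photon number is Poisson with mean xi.
n = rng.poisson(xi)

print("E[n]            =", n.mean(), " (expected", mean_occupancy, ")")
print("V[n]            =", n.var())
print("V[xi] + E[xi]   =", xi.var() + xi.mean())
print("nbar + nbar^2   =", mean_occupancy + mean_occupancy**2)
\end{verbatim}
The sampled variance reproduces the Bose-Einstein form $\bar{n}+\bar{n}^{2}$, which is the single-mode content of Equation (\ref{A2}).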
So, a more elegant approach is to say that $V [|\alpha|^{2}] = E[\alpha \alpha^{\ast} \alpha \alpha^{\ast}] - (E[\alpha \alpha^{\ast}])^{2}$, and then use the moment theorem (Isserlis' theorem) for complex Gaussian random processes \cite{Ree1,Iss1} to give $V [|\alpha|^{2}] = (E[|\alpha|^{2}])^{2} = n(\nu)^{2}$. The second term of Equation (\ref{A6}) is more straightforward because the variance is that of a pure Poisson distribution, $V \left[ n|E[|\alpha|^{2}] \right] = n(\nu)$, and so overall \begin{align} \label{A7} (\Delta P)^{2} & = \frac{1}{\tau} \int {\rm d} \nu \, \left[ \frac{P(\nu)^{2}}{N} + h \nu P(\nu) \right], \end{align} which reproduces the classical and quantum noise terms in Equation (\ref{A2}). An electromagnetic wave exhibits both wave-like and particle-like behaviour. Thus, heuristically, and with great caution, the image is that of a `blizzard' of particulate photons having an inhomogeneous density distribution. Photon counting statistics can be used to characterise different kinds of bunching at low light levels. At long wavelengths, photon energies are small (4 $\mu$eV at 1 GHz) whereas at short wavelengths they are large (40 meV at 10 THz). The photon rate in a wave carrying fixed power increases significantly as the frequency falls, making it difficult to distinguish individual events. Generally speaking, as the frequency increases, a beam accrues more transverse modes and the quantum statistics changes, leading to rich and complex behaviour. At long wavelengths, background-limited detectors are characterised in terms of average photon flux and temporal variations in the flux, whereas at short wavelengths, they are characterised in terms of photon counting statistics and average dark count rate. \subsection{Multimode power detectors} \label{sec_mul_det} Section \ref{sec_rad_nse} assumes that the output of a detector replicates the statistical behaviour of the power at its input. When a detector can absorb energy through a number of transverse modes simultaneously, the situation is more complicated because it will in general respond differently to each of the modes. The characteristics of multimode classical fields are best described by second-order correlation functions, which for convenience can be written in terms of dyads $\overline{\overline{E}}({\bf r}_{1},t_{1};{\bf r}_{2},t_{2}) \equiv {\rm E} \left[ {\bf E} ({\bf r}_{1},t_{1}) {\bf E}({\bf r}_{2},t_{2}) \right]$, where ${\bf E}({\bf r},t)$ is the vector-valued electric field at space-time point $({\bf r},t)$. Correlation functions can be written using tensor or matrix notation, but dyadic algebra \cite{Mor1} is common in electromagnetism and elasticity, and it is particularly convenient for vector-valued correlations, not least because of its similarity with scalar expressions. In paraxial systems, energy is assumed to flow with respect to some optical axis, $z$, and the field vectors are taken to be transverse, and so two dimensional.
Using generic linear systems theory, or by formulating detailed electromagnetic models, it can be shown that the recorded power can always be written in the form \begin{align} \label{B1} P(t) & = \int_{\cal D} {\rm d}^{3} {\bf r}_{1} \int_{\cal D} {\rm d}^{3} {\bf r}_{2} \int {\rm d} t_{1} \int {\rm d}t_{2} \, \overline{\overline{D}} (t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2}) \cdot \cdot \, \overline{\overline{E}} ({\bf r}_{1},t_{1};{\bf r}_{2},t_{2}), \end{align} where $\overline{\overline{D}} (t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2})$ is a dyadic field, called the {\em response tensor}, which characterises the energy-absorbing properties of the device. The spatial integrals are evaluated over the input reference surface of the device, whose outline defines some domain ${\cal D}$. $\overline{\overline{X}} \cdot \cdot \, \overline{\overline{Y}}$ denotes contraction of the dyads $\overline{\overline{X}}$ and $\overline{\overline{Y}}$ to a scalar, and corresponds to the `trace of the product' ${\rm Tr} \left[ X Y \right]$ when matrices are used to represent correlations between polarisations. $\overline{\overline{D}}$ and $ \overline{\overline{E}}$ can be transformed into the spatial ($k$) domain and/or the temporal frequency ($\omega$) domain, and the same functional form returns, suggesting that Equation (\ref{B1}) describes a basic physical process. For common detectors, $\overline{\overline{D}} (t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2})$ is time-shift invariant, and if the field is statistically stationary, $\overline{\overline{E}} ({\bf r}_{1},t_{1};{\bf r}_{2},t_{2})$ is also time shift invariant. The detected power is then time invariant, and (\ref{B1}) reduces to the spectral form \begin{align} \label{B2} P & = \int_{\cal D} {\rm d}^{3} {\bf r}_{1} \int_{\cal D} {\rm d}^{3} {\bf r}_{2} \int {\rm d} \omega \, \overline{\overline{D}} ({\bf r}_{1},{\bf r}_{2},\omega) \cdot \cdot \, \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{2},\omega). \end{align} Mathematically, Equation (\ref{B2}) describes the full contraction of two tensor fields to a real-valued quantity, and is the most obvious way of creating a scalar, the measured power, from the second-order correlation function of the partially coherent field. In the abstract vector space of tensor fields, Equation (\ref{B2}) describes the orthogonal projection of a tensor that describes the state of coherence of the field onto a tensor that describes the state of coherence to which the detector is maximally receptive. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 20cm 7cm 4cm, width=60mm]{Figure1.pdf} \caption{Natural modes of the field ${\bf U}_{n} ({\bf r})$, carrying power $\beta_{n}$, couple to the natural modes of the detector ${\bf R}_{m} ({\bf r})$, having responsivity $\alpha_{m}$, through the scattering parameters $S_{mn}$. The total power absorbed, and recorded, is given by a sum of these processes.} \label{figure1} \end{figure} Because $\overline{\overline{D}}$ and $ \overline{\overline{E}}$ are Hermitian, and square integrable, they can be diagonalised by the decompositions \begin{align} \label{B3} \overline{\overline{D}} ({\bf r}_{1},{\bf r}_{2},\omega) & = \sum_{m} \alpha_{m} {\bf R}_{m} ({\bf r}_{1}) {\bf R}_{m}^{\ast} ({\bf r}_{2}) \\ \label{B4} \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{2},\omega) & = \sum_{n} \beta_{n} {\bf U}_{n} ({\bf r}_{1}) {\bf U}_{n}^{\ast} ({\bf r}_{2}), \end{align} where all quantities on the right are frequency dependent. 
The basis functions ${\bf R}_{m} ({\bf r})$ form an orthogonal set over ${\cal D}$, and likewise for ${\bf U}_{n} ({\bf r})$. Substituting Equations (\ref{B3}) and (\ref{B4}) in Equation (\ref{B2}) gives \begin{align} \label{B5} P & = \int {\rm d} \omega \, \sum_{mn} \alpha_{m} (\omega) \beta_{n} (\omega) | S_{mn} (\omega) |^{2} \\ \nonumber S_{mn} (\omega) & = \int_{\cal D} {\rm d} {\bf r} \, {\bf R}_{m}^{\ast} ({\bf r}) \cdot {\bf U}_{n} ({\bf r}), \end{align} which is called the {\em coupled mode model} \cite{Wth1,Sak1}. In Equation (\ref{B4}), the partially coherent field $\overline{\overline{E}}$ is described by an incoherent superposition of fully coherent fields ${\bf U}_{n} ({\bf r})$, each of which carries power $\beta_{n}$. In Equation (\ref{B3}), the response function $\overline{\overline{D}}$ is described by a set of complex-valued reception patterns ${\bf R}_{m} ({\bf r})$, each of which has some responsivity $\alpha_{m}$. The reception patterns are the individual degrees of freedom through which the device can absorb energy. In the $k$ domain, they correspond to the complex-valued angular beam patterns of the individual modes of the device. They are determined by the shape and size of the device (the boundary conditions), and the spatial coherence length of the solid-state processes responsible for absorption, such as electron-phonon interactions or spin-wave damping. According to Equation (\ref{B5}), the detected power is given by scattering the natural modes of the field onto the natural modes of the detector through $S_{mn} (\omega)$: Figure \ref{figure1}. The coupling is maximised when the field modes and detector modes couple in one-to-one correspondence. From a photon perspective, one might now be concerned about the appearance of additional partition-noise effects. Suppose that two different detectors, $a$ and $b$, are somewhere in an incoming optical beam: for example, two pixels in an imaging array. It can be shown \cite{Sak1}, using the Poisson mixture technique and Gaussian moment theorem, that the covariance $C[P_{a},P_{b}]$ of the outputs of the detectors due to fluctuations in the incident field is \begin{align} \label{B6} & C[P_{a},P_{b}] = \\ \nonumber & \frac{1}{\tau} \int {\rm d} \omega \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{1} \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{2} \int_{{\cal D}_{b}} {\rm d}^{3} {\bf r}_{3} \int_{{\cal D}_{b}} {\rm d}^{3} {\bf r}_{4} \, \overline{\overline{D}}_{a} ({\bf r}_{1},{\bf r}_{2},\omega) \cdot \overline{\overline{E}} ({\bf r}_{2},{\bf r}_{3},\omega) \cdot \cdot \, \overline{\overline{D}}_{b} ({\bf r}_{3},{\bf r}_{4},\omega) \cdot \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{4},\omega) \\ \nonumber & + \frac{\delta_{a,b}}{\tau} \int {\rm d} \omega \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{1} \int_{{\cal D}_{a}} {\rm d}^{3} {\bf r}_{2} \, \hbar \omega \overline{\overline{D}}_{a} ({\bf r}_{1},{\bf r}_{2},\omega) \cdot \cdot \, \overline{\overline{E}} ({\bf r}_{1},{\bf r}_{2},\omega). \end{align} The first term characterises the classical fluctuations, and the second term the photon shot noise. $a=b$ gives the variance in the output of a single detector. When $a \neq b$, the Kronecker delta $\delta_{a,b}$ indicates that photon absorption in different detectors is not correlated. Equations (\ref{B2}) and (\ref{B6}) may seem complicated, but when $\overline{\overline{D}}$ and $ \overline{\overline{E}}$ are sampled spatially for numerical modelling they reduce to the trace of a product of matrices.
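As just noted, once $\overline{\overline{D}}$ and $\overline{\overline{E}}$ are sampled spatially they become Hermitian matrices and Equation (\ref{B2}) reduces to the trace of a product. The sketch below builds two small matrices from assumed mode sets and checks that, at a single frequency, the trace form and the coupled-mode sum of Equation (\ref{B5}) agree; the number of sample points, the responsivities and the modal powers are all invented for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
npts = 64   # spatial sample points on the detector surface (illustrative)

def random_orthonormal_modes(n_modes, npts, rng):
    """Orthonormal complex-valued mode functions sampled at npts points."""
    m = (rng.standard_normal((npts, n_modes))
         + 1j * rng.standard_normal((npts, n_modes)))
    q, _ = np.linalg.qr(m)
    return q[:, :n_modes]

# Detector: reception patterns R_m with responsivities alpha_m (assumed).
R = random_orthonormal_modes(3, npts, rng)
alpha = np.array([1.0, 0.6, 0.2])
D = (R * alpha) @ R.conj().T                 # sum_m alpha_m R_m R_m^*

# Field: coherent modes U_n carrying powers beta_n (assumed).
U = random_orthonormal_modes(4, npts, rng)
beta = np.array([2.0, 1.0, 0.5, 0.1])
E = (U * beta) @ U.conj().T                  # sum_n beta_n U_n U_n^*

# Full contraction, Equation (B2): trace of the product of sampled matrices.
P_trace = np.trace(D @ E).real

# Coupled-mode form, Equation (B5): overlaps S_mn = R_m^* . U_n.
S = R.conj().T @ U
P_modes = np.sum(alpha[:, None] * beta[None, :] * np.abs(S) ** 2)

print(P_trace, P_modes)   # the two forms agree
\end{verbatim}
In a realistic calculation the mode functions would of course come from electromagnetic modelling of the pixel rather than from random vectors, but the bookkeeping is identical.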
They are valuable when choosing the sizes, spacings and layouts of pixels in an imaging array to optimise efficiency and information recovery. For example, the modal approach is well suited to understanding stray light and radiation noise in pixels that couple poorly to the high-transmission modes of the preceding optical system. In addition, the response function technique can be used to model the behaviour of complete instruments, rather than just the detectors, leading to many applications. \section{Power detection - a quantum perspective} \label{sec_pow_qua} Section \ref{sec_pow_det} describes a way of modelling power detectors, but if we adopt a quantum-mechanical approach, do we arrive at the same mathematical form? Consider an energy absorbing system having certain physical properties described by the Hermitian operators $\hat{A}$ and $\hat{B}$, and a source described by a generalised force $\hat{F}$, which may be an electric field, magnetic field, vector potential or some other perturbing quantity such as a strain field: Figure \ref{figure2}. If $\hat{H}_{\rm sys}$ and $\hat{H}_{\rm src}$ are the Hamiltonians of the system and source, the overall Hamiltonian is $\hat{H} = \hat{H}_{\rm sys} + \hat{H}_{\rm src} + \hat{H}_{\rm int} =\hat{H}_{0} + \hat{H}_{\rm int}$, where \begin{align} \label{C1} \hat{H}_{\rm int} (t) & = \kappa \int_{\cal V} {\rm d}^{3} {\bf r} \, \hat{A} ({\bf r},t) \cdot \hat{F} ({\bf r},t), \end{align} and $\kappa$ is a variational parameter. The interaction Hamiltonian $\hat{H}_{\rm int} (t)$ means that the source influences the time evolution of the system, and vice versa. If the force is constant over the volume of the system, $\hat{F} ({\bf r},t) = \hat{F} (t)$, it can be taken outside of the integral; if the force is a scalar, $\hat{F} ({\bf r},t) = F ({\bf r},t)$, it merely scales the interaction energy. $\hat{F} ({\bf r},t)$ acts on the state space of the source, which in the case of electromagnetic radiation is the multi-mode Fock space of the field. The system property $\hat{A} ({\bf r},t)$ to which the force couples acts on the multi-particle state space of the system, such as the electrons in a conductor. If the source and system are not entangled prior to the force being applied, the initial composite state is the tensor product $| \psi (t_{0}) \rangle = | \psi_{\rm sys} (t_{0}) \rangle | \psi_{\rm src} (t_{0}) \rangle$. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 20cm 7cm 4cm, width=60mm]{Figure2.pdf} \caption{Generalised force $\hat{F} ({\bf r},t)$ acts on an energy absorbing system having physical characteristics $\hat{A} ({\bf r},t)$ and $\hat{B} ({\bf r},t)$. The rate at which work is done on the system gives rise to a classical measure of instantaneous power $P(t)$.} \label{figure2} \end{figure} In the Schr\"{o}dinger Picture, the composite state evolves from $t_{0}$ to $t$ according to the time evolution operator, $| \psi(t) \rangle = \hat{U}(t,t_{0}) | \psi(t_{0}) \rangle$. In the Heisenberg Picture, time evolution is attached to the operators themselves, $\hat{A}^{\rm H}(t) = \hat{U}^{\dagger}(t,t_{0}) \hat{A}(t_{0}) \hat{U}(t,t_{0})$, leaving the states invariant $| \psi(t) \rangle = | \psi(t_{0}) \rangle$, and leading to the idea of measuring a quantity at some point in time. For perturbative influences, it is best to use the Interaction Picture, where part of the time evolution is attached to the states, and part to the Hermitian operators.
Define a new time-shift operator $\hat{S}(t,t_{0}) = e^{+ i \hat{H}_{0} (t-t_{0}) / \hbar} \hat{U}(t,t_{0})$, where the part that would have happened anyway in the absence of the perturbation is removed from $\hat{U}(t,t_{0})$ through time reversal, $e^{+ i \hat{H}_{0} (t-t_{0}) / \hbar}$. The states then evolve according to the perturbation, $| \psi(t)\rangle^{\rm I} = \hat{S}(t,t_{0}) | \psi(t_{0}) \rangle$, and the operators according to the free evolution $\hat{A}^{\rm I}(t) = e^{+ i \hat{H}_{0} (t-t_{0}) / \hbar} \hat{A}(t_{0}) e^{- i \hat{H}_{0} (t-t_{0}) / \hbar}$. It can be shown that the time-shift operator, or scattering operator, is given by \begin{align} \label{C2} \hat{S}(t,t_{0}) = \stackrel{\leftarrow}{\cal T} \left[ \exp \left\{ \left( \frac{-i}{\hbar} \right) \int_{t_{0}}^{t} {\rm d}t' \hat{H}^{I}_{\rm int} (t') \right\} \right] \hspace{10mm} t \ge t_{0}, \end{align} where $\stackrel{\leftarrow}{\cal T}$ indicates that once the exponential is written as a power series, all operators should be arranged in order of increasing time, from right to left. Although Equation (\ref{C2}) comes from the iterated solution of a differential equation, it can be appreciated, with some care, that if time is discretised, evolution occurs through a sequence of exponential factors where the interaction energy is approximately constant during each step. First-order theory only uses the first two terms of Equation (\ref{C2}), \begin{align} \label{C3} \hat{S} (t,t_{0}) & \approx 1 - \frac{i}{\hbar} \int_{t_{0}}^{t} {\rm d}t' \, \hat{H}^{I}_{\rm int} (t'), \end{align} linearising $\hat{S} (t,t_{0})$ in $\kappa$. The higher-order terms, describing more complicated virtual processes, could be included if nonlinear behaviour is of interest. Suppose that some other physical quantity $\hat{B} ({\bf r},t)$ responds to the perturbation. Without additional assumptions, straightforward algebra shows that \begin{align} \nonumber \hat{B}^{H} ({\bf r},t) & = \hat{S}^{\dagger} (t,t_{0}) \hat{B}^{I} ({\bf r},t) \hat{S}(t,t_{0}) \\ \label{C4} & \approx \hat{B}^{I} ({\bf r},t) - \kappa \frac{i}{\hbar} \int_{t_{0}}^{t} {\rm d}t' \, \int_{\cal V} {\rm d}^{3} {\bf r}' \, \left[ \hat{B}^{I} ({\bf r},t), \hat{A}^{\rm I} ({\bf r}',t') \right] \cdot \hat{F}^{I} ({\bf r}',t') \\ \nonumber & \equiv \hat{B}_{0}^{\rm H} ({\bf r},t) + \kappa \Delta \hat{B}^{\rm H} ({\bf r},t). \end{align} The first term, $\hat{B}_{0}^{H} ({\bf r},t)$, describes the free evolution of the system, and the second term, $\Delta \hat{B}^{H} ({\bf r},t)$, describes the linearised change brought about by the perturbation. Being in the Heisenberg picture, the expectation value of $\hat{B}^{H} ({\bf r},t)$ is found with respect to the state of the system at $t_{0}$, which is the reference time for the phase factors, and precedes the time at which the perturbation turns on. Taking the expectation value of the right-hand side of Equation (\ref{C4}) gives: \begin{align} \label{C5} \langle \Delta \hat{B}^{H} ({\bf r},t) \rangle_{t_{0}} & = \frac{-i}{\hbar} \int_{-\infty}^{+\infty} dt' \, \theta(t-t') \int_{\cal V} {\rm d}^{3} {\bf r}' \, \langle \left[ \hat{B}^{I} ({\bf r},t), \hat{A}^{\rm I} ({\bf r}',t') \right] \rangle_{t_{0}} \cdot \langle \hat{F} ({\bf r}',t') \rangle_{t_{0}}, \end{align} which has factored because the source and system are not entangled at $t_{0}$.
The upper limit on the integral has been changed by including the step function $\theta(t-t')$, which enforces causality, and the lower limit has been changed because the source is assumed to turn on after $t_{0}$. Equation (\ref{C5}) is called Kubo's formula \cite{Kub1}, and is the quantum equivalent of a classical response function. How should the expectation values be evaluated? The source and system are in definite quantum states at $t_{0}$, but because of the extremely large number of degrees of freedom involved (for example the numerous electrons in an absorbing film), we cannot hope to know, or wish to know, what they are! The expectation values are therefore calculated through $\langle \hat{X} \rangle = \sum_{n} P_{n} \langle n | \hat{X} | n \rangle = {\rm Tr} [ \hat{\rho} \hat{X} ]$ where $\hat{\rho} = \sum_{n} P_{n} | n \rangle \langle n |$, and $P_{n}$ is the probability that the state is one of a complete set of eigenstates, $| n \rangle$. The density operator $\hat{\rho}$ incorporates two uncertainties: (i) our lack of certainty about which state the configuration is in; (ii) nature's lack of certainty about which eigenstate the system will collapse into when a measurement is made. If the absorber is cooled by a refrigerator, the system's density operator is $\hat{\rho}_{\rm sys} \propto \exp [-\hat{H}_{\rm sys}/kT_{\rm sys}]$; and if the source is thermal radiation emitted by a warm load, $\hat{\rho}_{\rm src} \propto \exp[-\hat{H}_{\rm src}/kT_{\rm src}]$. By introducing these thermodynamic operators, an {\em open quantum system} has been created, hiding the quantum mechanics of the refrigerator and thermal source from view. Using density operators, and elevating the response to vector-valued quantities, \begin{align} \label{C6} \langle \Delta \hat{B}^{H} ({\bf r},t) \rangle_{t_{0}} & = \int_{-\infty}^{+\infty} {\rm d}t' \, \int_{\cal V} {\rm d}^{3} {\bf r}' \, \overline{\overline{D}}_{\rm BA}({\bf r},t; {\bf r}',t') \cdot {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F} ({\bf r}',t') \right] \\ \label{C7} \overline{\overline{D}}_{\rm BA}({\bf r}, t; {\bf r}',t') & = \frac{-i}{\hbar} \theta(t-t') {\rm Tr} \left[ \hat{\rho}_{\rm sys} (t_{0}) \left[ \hat{B}^{I} ({\bf r},t), \hat{A}^{\rm I} ({\bf r}',t') \right] \right]. \end{align} Equation (\ref{C7}) is the dyadic form of Kubo's formula \cite{Kub1}, which describes how macroscopic characteristics such as impedance, dielectric constant, and permeability emerge from the microscopic behaviour of the solid state system. The elements of $\overline{\overline{D}}(t,t')$ are called {\em retarded Green's functions}, and it can be shown that they describe rather beautifully how a system responds when an excitation is introduced. For example, the injection of an electron or hole at one space-time point may be correlated with the appearance of an electron or hole elsewhere. Crucially, there is no need to follow every degree of freedom, but simply to know how the system responds when excitations are introduced. Green's functions are used extensively for modelling the behaviour of materials, such as the bulk impedance of superconducting films \cite{Zub1} and proximity effects \cite{Bel1}.
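As a concrete, if artificial, illustration of Equation (\ref{C7}), the sketch below evaluates the Kubo commutator for a single harmonic degree of freedom, taking $\hat{A} = \hat{B} = \hat{x}$, a truncated Fock basis and a thermal density operator; the frequency, temperature and basis size are arbitrary choices, natural units are used, and the calculation is not a model of any particular absorber.
\begin{verbatim}
import numpy as np

# Illustrative parameters (natural units hbar = m = k_B = 1).
omega, T, nmax = 1.0, 0.5, 40

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # annihilation operator
adag = a.conj().T
energies = omega * (np.arange(nmax) + 0.5)      # H is diagonal in this basis
x = (a + adag) / np.sqrt(2.0 * omega)           # position-like property A-hat

# Thermal density operator of the open system (Boltzmann weights, normalised).
weights = np.exp(-energies / T)
rho = np.diag(weights / weights.sum())

def retarded_response(t):
    """Kubo response D(t) = -(i/hbar) theta(t) Tr( rho [x(t), x(0)] )."""
    if t < 0:
        return 0.0
    U = np.diag(np.exp(-1j * energies * t))     # free evolution exp(-iHt)
    xt = U.conj().T @ x @ U                     # Heisenberg-evolved operator
    comm = xt @ x - x @ xt
    return (-1j * np.trace(rho @ comm)).real

for t in [0.5, 1.0, 2.0]:
    print(t, retarded_response(t), -np.sin(omega * t) / omega)
# For a harmonic degree of freedom the commutator is a c-number, so the
# response is independent of temperature: D(t) = -sin(omega t) / omega.
\end{verbatim}
For an interacting many-body absorber the same trace structure applies, but the Green's functions must be computed using the methods cited above.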
To calculate the behaviour of a power detector, it is necessary to know the instantaneous rate at which work is done on the system by the source, which is given by \begin{align} \label{C8} \hat{P} (t^{\prime \prime}) & = \int_{\cal V} {\rm d}^{3} {\bf r} \, \hat{F}( {\bf r}, t^{\prime \prime}) \cdot \frac{\rm d}{{\rm d}t^{\prime \prime}} \Delta \hat{A} ({\bf r},t^{\prime \prime}), \end{align} which should be compared with force times rate of change of displacement. Substituting $\Delta \hat{A} ({\bf r},t^{\prime \prime})$ and calculating the expectation value gives \begin{align} \label{C9} \langle \hat{P} (t^{\prime \prime}) \rangle & = \int_{\cal V} {\rm d}^{3} {\bf r} \int_{\cal V} {\rm d}^{3} {\bf r}' \int_{-\infty}^{+\infty} {\rm d}t' \, \, \left\{ \frac{- i}{\hbar} \frac{\rm d}{{\rm d}t^{\prime \prime}} \theta(t^{\prime \prime}-t') {\rm Tr} \left[ \hat{\rho}_{\rm sys}( t_{0}) \left[ \hat{A}^{I} ({\bf r},t^{\prime \prime}), \hat{A}^{I} ({\bf r}',t') \right] \right] \right\} \\ \nonumber & \times {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F}( {\bf r}, t^{\prime \prime}) \hat{F}( {\bf r}', t') \right]. \end{align} Equation (\ref{C9}) nearly has the form of Equation (\ref{B1}), but not quite: (i) Equation (\ref{C9}) describes energy flow into the system, but does not give the quantity that is recorded at the output. (ii) Equation (\ref{B1}) has surface integrals whereas Equation (\ref{C9}) has volume integrals, and so the system is a volumetric absorber, rather than having a reference surface. Strictly, the volume integrals need transforming into surface integrals, but if the source is uniform, the problem simplifies anyway. (iii) The source term corresponds to excitation in the absence of any scattering in the medium, and so ignores screening. If screening is included, the functional form of the coupled-mode model does not change, but the expression for the response tensor does, for example in the case of multi-layered patterned detector arrays \cite{Wth2}. (iv) Equation (\ref{C9}) is based on the instantaneous work done, and so potentially includes energy flowing in and out of the detector in a reactive manner. All of these items can be dealt with in a straightforward way, returning the functional form of (\ref{B1}). For example in the case of (i), a detector has some responsivity, which converts the instantaneous power into the recorded output, such as a voltage, and this conversion is band limited. Equation (\ref{C9}) can be convolved with some causal response function $g(t-t^{\prime \prime})$, which describes the conversion process, to give \begin{align} \label{C10} P(t) & = \int_{\cal V} {\rm d}^{3} {\bf r}_{1} \int_{\cal V} {\rm d}^{3} {\bf r}_{2} \int {\rm d} t_{1} \int {\rm d}t_{2} \, K(t;{\bf r}_{1},t_{1};{\bf r}_{2},t_{2}) \, F({\bf r}_{1},t_{1};{\bf r}_{2},t_{2}), \end{align} where \begin{align} \label{C11} K(t; {\bf r}_{1},t_{1} ; {\bf r}_{2},t_{2}) & = g(t-t_{1}) \frac{\rm d}{{\rm d}t_{1}} \left\{ \frac{- i}{\hbar} \Theta(t_{1}-t_{2}) {\rm Tr} \left[ \hat{\rho}_{\rm sys} (t_{0}) \left[ \hat{A}^{I} ({\bf r}_{1},t_{1}), \hat{A}^{I} ({\bf r}_{2},t_{2}) \right] \right] \right\} \\ \label{C12} F({\bf r}_{1}, t_{1}; {\bf r}_{2},t_{2}) & = {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F}( {\bf r}_{1}, t_{1}) \hat{F}( {\bf r}_{2}, t_{2}) \right]. \end{align} Equation (\ref{C10}) is very similar to (\ref{B1}), but now the process responsible for energy absorption and the source fields are both described by quantum correlation functions, and so quantum properties are included.
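To make the role of the output smoothing concrete, the sketch below convolves an instantaneous power waveform of the kind derived below for a time-harmonic source, $P(t) \propto \left[ 1 + \cos (2 \omega_{0} t) \right] \cos\theta + \sin(2 \omega_{0} t) \sin\theta$, with a causal single-pole response $g(t)$; the drive frequency, time constant and assumed phase $\theta$ are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

# Illustrative values only: a 1 MHz drive, a 20 microsecond output time
# constant, and an assumed response-function phase theta.
f0, tau_g, theta = 1.0e6, 20e-6, 0.6
omega0 = 2 * np.pi * f0
dt = 1.0 / (40 * f0)                 # time step resolving the 2*omega0 ripple
t = np.arange(0.0, 10 * tau_g, dt)

# Instantaneous power: dissipative part (always positive) plus reactive part.
p_inst = ((1 + np.cos(2 * omega0 * t)) * np.cos(theta)
          + np.sin(2 * omega0 * t) * np.sin(theta))

# Causal single-pole smoothing response g(t) modelling band-limited readout.
g = np.exp(-t / tau_g) / tau_g
recorded = np.convolve(p_inst, g)[: len(t)] * dt

print("recorded output at late times:", recorded[-1])
print("cos(theta)                   :", np.cos(theta))
# The smoothed output settles to cos(theta): the 2*omega0 ripple and the
# reactive (sin) term average away, and only the dissipated power is recorded.
\end{verbatim}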
Creating an output signal by simply smoothing the expected value of the absorbed power is somewhat arbitrary. An information-theoretic approach is as follows. Suppose that $P(u|v)$ is the conditional probability that an observer records output $u$ when the object being measured is in eigenstate $| v \rangle$. If the object is actually in state $\hat{\rho}_{\rm int}$, then \begin{align} \nonumber P( u ) & = \int {\rm d} v \, P(u|v) \langle v | \hat{\rho}_{\rm int} | v \rangle \\ \label{C13} & = {\rm Tr} \left[ \hat{W} (u) \hat{\rho}_{\rm int} \right], \end{align} where \begin{align} \label{C14} \hat{W} (u) & = \int {\rm d} v \, P(u|v) | v \rangle \langle v |. \end{align} $\hat{W} (u)$ looks like a density operator, but is a {\em measurement operator}. Equation (\ref{C9}) can be cast in this way, where $| v \rangle$ is the state of the composite system, $| \psi(t) \rangle$, and $P(u|v)$ is closely related to $g(t-t^{\prime \prime})$. $\hat{W} (t)$ describes the act of acquiring classical information about the absorbing system, as it interacts quantum mechanically with the source. This shift of perspective is not merely pedantry; it is intimately related to the notion of back action, where the state of the object being probed changes as a consequence of information being accrued. In information-theoretic descriptions of quantum measurement, the quantum system being probed (the source) first interacts with some other quantum system (the device), which may itself be warm and in a mixed state, and which then provides a classical estimator of some aspect of the source's behaviour. According to von Neumann, when a measurement is made, the system collapses onto the eigenstate corresponding to the eigenvalue recorded. An identical measurement, immediately after the first, returns the same result with certainty. But this projective approach to state collapse does not seem to fit with the language of imperfectly constrained and continuous measurement, which is needed in the case of electrical circuits and sensing. Measurement operators such as $\hat{W} (t)$ allow a more nuanced approach. If the object being probed is initially in some mixed quantum state $\hat{\rho}_{\rm int}$, then after measurement it `collapses' into some new mixed, as distinct from pure, quantum state $\hat{\rho}_{\rm fin}$. Subsequent identical measurements allow additional information to be gathered, rather than simply repeating the same result as if there is no information left to be extracted. Each time $\hat{W} (t)$ is applied, information is acquired, and the entropy falls. Understanding the relationship between quantum information theory and sensor physics is an important part of progressing quantum technology. In the case of (iv), assume that the source is a single-mode, time-harmonic wave, \begin{equation} \label{C16} \hat{F}( {\bf r}, t) = f( {\bf r}) \hat{a} e^{-i \omega_{0} t} + f^{\ast}( {\bf r}) \hat{a}^{\dagger} e^{+i \omega_{0} t}, \end{equation} where $f( {\bf r})$ is some general factor.
Then \begin{align} \nonumber & {\rm Tr} \left[ \hat{\rho}_{\rm src} (t_{0}) \hat{F}( {\bf r}_{1}, t_{1}) \hat{F}( {\bf r}_{2}, t_{2}) \right] \equiv \\ \label{C17} & f( {\bf r}_{1}) f( {\bf r}_{2}) e^{-i \omega_{0} (t_{1}+t_{2})} \langle \hat{a} \hat{a} \rangle + f^{\ast}( {\bf r}_{1}) f^{\ast}( {\bf r}_{2}) e^{+i \omega_{0} (t_{1}+t_{2})} \langle \hat{a}^{\dagger} \hat{a}^{\dagger} \rangle \\ \nonumber & + f( {\bf r}_{1}) f^{\ast}( {\bf r}_{2}) e^{-i \omega_{0} (t_{1}-t_{2})} \langle \hat{a} \hat{a}^{\dagger} \rangle + f^{\ast}( {\bf r}_{1}) f( {\bf r}_{2}) e^{+i \omega_{0} (t_{1}-t_{2})} \langle \hat{a}^{\dagger} \hat{a} \rangle . \end{align} The first two terms, in $t_{1}+t_{2}$, are fast as $t_{1}$ and $t_{2}$ increase, and are removed by the output smoothing; the last two terms, in $t_{1}-t_{2}$, are slow, and result in the recorded power. If the source is in a coherent quantum state, having complex amplitude $a$, then $\langle \hat{a}^{\dagger} \hat{a} \rangle \rightarrow a^{\ast} a$ and $\langle \hat{a} \hat{a}^{\dagger} \rangle \rightarrow a^{\ast} a + 1$, and for a high-occupancy state the source correlation function becomes a fully coherent classical correlation function. Therefore, the response function $K(t;{\bf r}_{1}, t_{1};{\bf r}_{2},t_{2})$ characterises the response to both classical and quantum sources, and its general properties can be imported from classical considerations, or by knowing how the device responds to quantum excitations. Equation (\ref{B1}) is defined in terms of {\em average power absorbed}, but Equation (\ref{C10}) is based on instantaneous {\em work done}, which potentially includes energy flowing in and out of the device, say a thin-film absorber, in a reactive manner. Because the response function is time-shift invariant, it can be Fourier transformed in $t_{1} - t_{2}$: $K({\bf r}_{1}; {\bf r}_{2}; t_{1} - t_{2}) \mapsto K({\bf r}_{1}; {\bf r}_{2}; \omega)$. Describing (\ref{C10}) in the Fourier domain, and using the classical-coherent limit of Equation (\ref{C17}), it can be shown that $P(t) \propto \left[ 1 + \cos (2 \omega_{0} t) \right] \cos (\theta) + \sin(2 \omega_{0} t) \sin(\theta)$, where $\theta$ is the phase of $K({\bf r}_{1}; {\bf r}_{2}; \omega)$. For $-\pi/2 \le \theta \le + \pi/2$ the first term, which is proportional to $\cos(\theta)$, the {\em power factor}, is always positive and describes time-varying power dissipated in the detector: the power that flows into and stays in the detector varies in time (a principle closely related to {\em homodyne} detection); the bracketed factor $\left[ 1 + \cos (2 \omega_{0} t) \right]$ has a time-averaged value of unity (a principle exploited in power detectors). The second term, which is proportional to $\sin(\theta)$, has a time-averaged value of zero, and describes energy sloshing in and out of the detector. Thus the real part of the response function, $\Re \left[ K({\bf r}; {\bf r}'; \omega) \right]$, characterises dissipation, and is a manifestation of Fermi's Golden Rule, and the imaginary part, $ \Im \left[ K({\bf r}; {\bf r}'; \omega) \right]$, energy storage. These processes happen at the input regardless of the smoothing action of the output filter, which ensures that only the time-averaged part of the dissipated power contributes to the recorded output. One word of warning is that in physics and engineering, the roles of the real and imaginary parts of $K({\bf r}_{1}; {\bf r}_{2}; \omega)$ are swapped.
In engineering, it is the real part of an impedance that describes loss, $R + j \omega L$, but in physics, it is the imaginary part of Kubo's susceptibility that describes loss, and radiates noise as described by the {\em fluctuation-dissipation theorem} \cite{Flc1}. This difference can be traced to Equation (\ref{C8}), where physicists use the dynamical variables $\hat{F}( {\bf r}, t)$ and $\Delta \hat{A} ({\bf r},t)$, whereas engineers use $\hat{F}( {\bf r}, t)$ and ${ \rm d} \Delta \hat{A} ({\bf r},t) / {\rm d} t$, and so is a matter of convention only. Finally, we note that for vector-valued fields, and adopting the engineering convention, it is the Hermitian part of the response tensor that corresponds to dissipated energy, and the anti-Hermitian part that corresponds to stored energy. Thus $\overline{\overline{D}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ is the Hermitian part of $\overline{\overline{K}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ when Equation (\ref{C10}) is upgraded to its full tensor form, and used in Equation (\ref{B1}). The coupled mode model, Section \ref{sec_mul_det}, is based on the fact that the response function and correlation function in Equation (\ref{B1}) are both Hermitian and so can be diagonalised. In the quantised case, the response function is the same as in the classical case, and so it can be diagonalised as before, giving the modes of the detector. The field correlation function, however, is not Hermitian, Equation (\ref{C17}): the positive and negative frequency parts have different physical interpretations. The positive frequency part describes photon absorption, whereas the negative frequency part describes photon emission, including spontaneous emission. This difference leads to characteristic features of thermal fields. For a more complete description of the quantised case, the response function and quantum correlation function should be split into their Hermitian and anti-Hermitian parts. The behaviour becomes more involved, but the overall scheme still describes the way in which the degrees of freedom present in the source couple to the degrees of freedom in the system available for absorbing energy. One effect is that a detector can radiate energy back into the source, and this radiated energy, which is fluctuating, can act as a noise source because it causes the energy stored in the device to vary: fluctuating power travels in both directions. Normally, noise is considered to be something that comes in from outside! If the device is at a higher temperature than the source, including internal heating such as hot-electron effects, the radiation noise can be greater than the noise associated with the source itself. Once again it is clear that the source and detector must be considered parts of a connected whole if all aspects of behaviour are to be understood. The fact that $\overline{\overline{D}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ characterises power absorption when classical sources are used is of considerable importance, because it shows that the complex-valued reception patterns, or antenna patterns, ${\bf R}_{m} ({\bf r})$ and the associated responsivities $\alpha_{m}$ can be determined interferometrically through power measurements alone. The technique is called Energy Absorption Interferometry (EAI) \cite{EAI1} and is closely related to aperture synthesis astronomy, but now the device under test is illuminated by two phase-locked coherent sources, rather than measuring the angular correlations emitted by thermal sources.
Moreover, $\overline{\overline{D}}({\bf r}_{1}; {\bf r}_{2}; \omega)$ connects with the principle of reciprocity, whereby the dynamical degrees of freedom responsible for absorbing energy are the same as the degrees of freedom responsible for imposing near-field and far-field correlations on the thermally radiated fields \cite{Rec1}. \section{Linear amplifiers - a classical perspective} \label{sec_amp_cla} Many aspects of sensing relate to measuring voltages and currents, or at least to amplifying weak signals to a level where classical signal processing can be performed. Rather than analysing complex circuits, it is common to place standard configurations in black boxes, to drive the ports with a set of independent variables (voltage and/or current) and to observe the response through a set of dependent variables. This approach focuses on those degrees of freedom accessible from the outside, and gives rise to small-signal impedance, admittance and hybrid circuit parameters. The multiport network approach leads to many general theorems, and can answer questions such as `is this device capable of producing gain' and if so `what embedding network is needed to achieve it'. In addition, voltage and current sources can be added to the ports to represent internally generated noise. There must be one noise source for every dependent variable, and every pair of noise sources has a complex correlation coefficient. The noise sources can be referenced to other ports for convenience. Additionally, external circuit elements may also produce noise. In the case of amplifiers, a two-port network is sufficient. The noise sources are usually referenced to the input port, and take the form of a parallel current source and series voltage source \cite{Two1}. The correlation coefficient between them can be represented by introducing a fictitious noise impedance. Given an amplifying device, in a black box, one usually wishes to achieve three things: (i) Maximise the power gain by ensuring that the signal-source impedance $Z_{\rm s, pow}$ is conjugately matched to the impedance seen at the input of the loaded device. (ii) Choose a source impedance that causes the currents induced in the output load by the two noise sources to interfere and decorrelate to the highest possible degree, as this minimises the recorded noise. There is some particular signal-source impedance $Z_{\rm s, nse}$ that minimises an amplifier's overall noise: a process called {\em noise matching}. (iii) Ensure that these optimisations do not result in the active device oscillating. Usually $Z_{\rm s, pow} \ne Z_{\rm s, nse}$ and ingenious schemes are needed to align the impedances, or simply to select the best compromise. The matter of whether it is best to have high gain or low noise is quantified through the concept of {\em noise measure}. At high frequencies, when the wavelength is smaller than the dimensions of the circuit, it is beneficial to use a travelling wave representation. The ports of the black box are loaded with transmission lines having real characteristic impedance $Z_{0}$. The independent variables are the amplitudes $a(t)$ of the waves incident on the ports, and the dependent variables are the amplitudes $b(t)$ of the waves travelling away from the ports \cite{Kur1}.
The relationships, in the frequency domain, between the voltage $v(\omega)$ and current $i(\omega)$ at some reference plane and the complex wave amplitudes are \begin{align} \label{D1} a(\omega) & = \frac{1}{2 \sqrt{Z_{0}}} \left[ v(\omega) + i(\omega) Z_{0} \right] \\ \nonumber b(\omega) & = \frac{1}{2 \sqrt{Z_{0}}} \left[ v(\omega) - i(\omega) Z_{0} \right]. \end{align} Generally, $a(\omega)$ and $b(\omega)$ are stochastic quantities, which must be averaged over an ensemble. The average power spectral density incident on a port is $S_{\rm a} = E \left[ a a^{\ast} \right]$. Internal noise is represented by allowing noise waves to travel away from the ports even in the absence of external excitation. As in the discrete case, the travelling wave amplitudes may be correlated, and so a correlation matrix is needed whose diagonal elements are spectral powers. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 14cm 7cm 6cm, width=70mm]{Figure3.pdf} \caption{Scattering parameter representations: (a) One-port network with a signal source $a_{s}$. (b) Two-port network with internal noise sources $n_{1}$ and $n_{2}$.} \label{figure3} \end{figure} A multiport device is then represented by a signal flow graph. Figure \ref{figure3} shows signal flow graphs of (a) a one-port network with source, and (b) a two-port network with internal noise sources. Figure \ref{figure4} (a) shows the signal flow graph of a two-port network with the internal noise sources {\em referenced} to the input. Often, two-port networks are cascaded, and it is necessary to know the complex amplitude of the wave travelling away from the output in terms of the wave incident on the input. The existence of loops in connected networks, creating internal resonances, makes the analysis of signal flow graphs awkward. In the 1950s Mason introduced a {\em non-touching loop rule} \cite{Msn1} that allows expressions for interdependencies to be derived (Transducer Gain, Available Gain, Maximum Stable Gain, reflection coefficients, etc.). Consider the signal flow graph shown in Figure \ref{figure4} (a). The amplifying device is represented by a two-port network, with its internal noise sources referenced to the input. The sources comprise a noise wave effectively incident on the input $a_{n1}$, a noise wave travelling away from the input $b_{n1}$, and a complex correlation coefficient $\Gamma$ between them. As discussed in Section \ref{sec_rad_nse}, a perfectly matched passive warm termination radiates power into a single mode transmission line. Therefore, it is possible to describe power spectral densities in terms of equivalent temperatures: $T_{\rm a} = E \left[ a a^{\ast} \right] / k$ and $T_{\rm b} = E \left[ b b^{\ast} \right] / k$, where it is conventional to use the Rayleigh-Jeans limit in the definition; it is only a definition. Also, the complex correlation coefficient between the wave amplitudes, $\Gamma$, can be written as a complex-valued `temperature' $T_{\rm c} = \Gamma \sqrt{ T_{\rm a} T_{\rm b}}$. If the device is connected to noiseless terminations having impedance $Z_{0}$, the noise power effectively incident on the device $E \left[ a a^{\ast} \right]$ accounts for all of the noise appearing at the output, and the noise temperature becomes $T_{\rm n} = T_{\rm a}$. However, a noise wave also travels away from the input, having noise temperature $T_{\rm b}$, and there is no reason why this should not be significantly larger than $T_{\rm a}$.
Commercial suppliers report the noise temperature $T_{\rm a}$, but do not report $T_{\rm b}$, and so care is needed because noise power is fed back into the source. If the supplier has done a good job of minimising the external effects of the internal sources, $T_{\rm c} = 0$; otherwise you could find a source impedance that reduces the noise temperature below that claimed by the manufacturer! If an amplifier is connected to a source having a non-zero reflection coefficient $\Gamma_{\rm src}$, referenced to $Z_{0}$, Figure \ref{figure4} (b), the outward travelling wave is partially reflected back in, and if $b$ is correlated with $a$, the noise wave travelling away from the output can, because of constructive interference, be significantly enhanced. The noise temperature is given by \begin{align} \label{D2} T_{\rm n} = T_{\rm a} + |\Gamma_{\rm src}|^{2} T_{\rm b} + 2 T_{\rm c} |\Gamma_{\rm src}| \cos(\phi_{\rm c} + \phi_{\rm src}). \end{align} If the phase of the source reflection coefficient, $\phi_{\rm src}$, changes rapidly with frequency, say due to a long interconnecting cable, the noise temperature can vary widely and rapidly, with a peak variation $T_{\rm pk} = 4 T_{\rm c} | \Gamma_{\rm src} |$. Equation (\ref{D2}) does not include the input reflection coefficient of the amplifier, and so this expression holds regardless of whether the input of the amplifier is matched or not. Commercial amplifiers, as distinct from transistors, are noise matched internally, meaning that $T_{\rm c}=0$ at band centre. It can also be shown that when the source reflection coefficient $\Gamma_{\rm src}$ and input reflection coefficient of the amplifier $\Gamma_{\rm amp}$ are non-zero, a resonant noise wave exists on the input transmission line, scaling as $\propto 1/|1-\Gamma_{\rm src} \Gamma_{\rm amp}|^{2}$, which can be large when $\Gamma_{\rm src} = \Gamma_{\rm amp}^{\ast}$. Here $\Gamma_{\rm amp} = S_{11}$. Another hazard is that most amplifiers have a large forward gain $|S_{21}|^{2}$ and tiny reverse gain $|S_{12}|^{2}$. Some ultra-low-noise amplifiers, such as certain parametric amplifiers, can have $|S_{12}|^{2} \approx 1$, and even gain in the reverse direction. Noise originating in a second stage can then be transferred back to the input of the first stage, increasing the overall noise temperature. To make sure that the system noise temperature is insensitive to the source reflection coefficient, it is prudent to place a cooled circulator in front of the first amplifier. The noise-isolating role of the circulator has nothing to do with matching the input for maximum power gain, and because circulators are lossy and cumbersome, they are usually considered to be a technological nuisance. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 12cm 6cm 6cm, width=70mm]{Figure4.pdf} \caption{(a) Two-port scattering parameters with the internal noise sources referenced to the input. (b) Complete amplifier with signal source reflection coefficient $\Gamma_{\rm src}$.} \label{figure4} \end{figure} Before closing this section, it is beneficial to reconsider flow graphs. In technical literature, analysis is carried out using Mason's rule, and for cascaded networks only having sources at the input and output ports this is sufficient. In the case of complicated networks, having internal noise sources, it is algebraically tedious to trace the effect of every source to the external ports.
To make matters worse, the internal sources may be correlated and only the correlation matrix of the internal sources is known. Then it is only possible to calculate, even in principle, the correlation matrix of the noise waves appearing at the external ports. In the quantum case, one only has correlation functions of the kind $\langle \hat{b}_{{\rm s},i} \hat{b}_{{\rm s},j} \rangle$, and so a direct mapping from the source correlation functions to output correlation functions is needed. To this end, the {\em connection matrix method}, which is built on the rich mathematical topic of {\em directed flow graphs}, is valuable. For any general network having $M$ nodes, and with $N$ sources entering the nodes, collect the travelling wave amplitudes at the nodes into column vector ${\mathsf d}$, and the travelling wave amplitudes of the sources entering the nodes into column vector ${\mathsf n}$, then \begin{align} \label{D3} {\mathsf d} & = {\mathsf C} {\mathsf d} + {\mathsf n} \\ \nonumber & = \left[ {\mathsf I} - {\mathsf C} \right]^{-1} {\mathsf n} \\ \nonumber & = {\mathsf K} {\mathsf n}, \end{align} where the $i,j$th entry in ${\mathsf C}$ is the complex scattering parameter that connects node $j$ to node $i$. $N$ may be smaller than $M$, resulting in ${\mathsf n}$ having some zero entries. ${\mathsf I}-{\mathsf C}$ is sparse because only a small number of nodes are connected directly. Only the correlation matrix of the sources is known, ${\mathsf N}_{\rm s}$, and so it is only possible to calculate the correlation matrix of the resulting travelling waves, ${\mathsf N}_{\rm c}$. Equation (\ref{D3}) gives \begin{align} \label{D4} {\mathsf N}_{\rm c} & = {\mathsf K} {\mathsf N}_{\rm s} {\mathsf K}^{\dagger}, \end{align} from which the correlation matrix of the waves of interest can be extracted. To carry out calculations when quantum noise is present, it is necessary to be careful about vacuum-state noise, because vacuum noise enters through even seemingly unused ports. The bosonic commutation relationships between the travelling wave amplitudes on the port transmission lines must be maintained. \section{Linear amplifiers - a quantum perspective} \label{sec_amp_qua} \subsection{Quantum equivalent circuits} \label{sec_eqv_qua} At low frequencies simple electrical circuits are modelled using discrete components, which are then quantised through Lagrangian methods \cite{Lng1}, but in the case of complicated circuits, one is left searching for the Lagrangian that gives the correct answer! Consider an $L$-$C$ resonator made of discrete components. It can be shown that a perfect classical voltage or current source places the resonator in a coherent state $|a\rangle$. $\hat{a}$ is not however Hermitian, and the complex amplitude $a$ is not directly measurable: it has two degrees of freedom, amplitude and phase. Instead, we are left with three possible real-valued measurements: the energy in the mode, the voltage across the resonator, and the current through the resonator. The voltage and current operators are \begin{align} \label{E1} \hat{v} (t) = i \left( \frac{\hbar \omega_{0}}{2 C} \right)^{1/2} \left( \hat{a}^{\dagger} e^{i \omega_{0} t} - \hat{a} e^{-i \omega_{0} t} \right) \\ \nonumber \hat{i} (t) = \left( \frac{\hbar \omega_{0}}{2 L} \right)^{1/2} \left( \hat{a}^{\dagger} e^{i \omega_{0} t} + \hat{a} e^{-i \omega_{0} t} \right). \end{align} If the resonator is in a coherent state, the number of excitations $n \equiv |a|^{2}$ is Poisson distributed. 
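As a check on these statements, the sketch below constructs the operators of Equation (\ref{E1}) in a truncated Fock basis and evaluates their statistics in a coherent state; the resonance frequency, capacitance and coherent amplitude are invented for illustration. It confirms the Poisson number statistics just mentioned, and the quadrature variances quoted in the next paragraph, whose product sits exactly at the Heisenberg bound discussed below.
\begin{verbatim}
import numpy as np

# Illustrative values only: a 5 GHz L-C resonator with C = 0.4 pF, and a
# coherent-state amplitude chosen arbitrarily.
hbar = 1.054571817e-34
omega0 = 2 * np.pi * 5e9
C = 0.4e-12
L = 1.0 / (omega0**2 * C)
alpha = 2.0
nmax = 60

a = np.diag(np.sqrt(np.arange(1, nmax)), k=1)   # annihilation operator
adag = a.conj().T
num = adag @ a                                  # number operator

# Coherent state |alpha>: c_n = exp(-|alpha|^2/2) alpha^n / sqrt(n!).
c = np.zeros(nmax, dtype=complex)
c[0] = np.exp(-abs(alpha)**2 / 2)
for n in range(1, nmax):
    c[n] = c[n - 1] * alpha / np.sqrt(n)

def mean_and_var(op, psi):
    m = np.vdot(psi, op @ psi).real
    m2 = np.vdot(psi, op @ op @ psi).real
    return m, m2 - m**2

# Voltage and current operators of Equation (E1) at t = 0.
v = 1j * np.sqrt(hbar * omega0 / (2 * C)) * (adag - a)
i = np.sqrt(hbar * omega0 / (2 * L)) * (adag + a)

n_mean, n_var = mean_and_var(num, c)
v_mean, v_var = mean_and_var(v, c)
i_mean, i_var = mean_and_var(i, c)

print("occupancy: mean", n_mean, "variance", n_var, "(Poisson: equal)")
print("(delta v)^2 =", v_var, " expected", hbar * omega0 / (2 * C))
print("(delta i)^2 =", i_var, " expected", hbar * omega0 / (2 * L))
print("delta v * delta i =", np.sqrt(v_var * i_var),
      " bound hbar*omega0^2/2 =", hbar * omega0**2 / 2)
\end{verbatim}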
If one of the quadrature components, either $v$ or $i$, is measured repeatedly, with the system in the same coherent state before every measurement is made, the distributions show $(\Delta v)^{2} = \hbar \omega_{0} / 2 C$ and $(\Delta i)^{2} = \hbar \omega_{0} / 2 L$; these uncertainties do not depend on occupancy, and correspond to minimum uncertainty states. This behaviour is illustrated in Figure \ref{figure5}. Now, however, quantum mechanics throws up an issue because $[i,v] = i \hbar \omega_{0}^{2}$, and so according to the generalised uncertainty relationship $ \Delta A \, \Delta B \ge \left| \langle i \left[ \hat{A},\hat{B} \right] \rangle \right| / 2$, it follows that $\Delta v \Delta i \ge \hbar \omega_{0}^{2} /2$. If the voltage is measured with an accuracy greater than the intrinsic uncertainty, $\sqrt{\hbar \omega_{0} / 2 C}$, then a subsequent measurement of current without re-establishing the state gives a variation that is considerably larger than $\sqrt{\hbar \omega_{0} / 2 L}$, and vice versa. It seems that immediate sequential measurements of voltage and current must be constrained by Heisenberg's uncertainty principle, in the same way that measurements of position and momentum are constrained in a freely oscillating mechanical resonator. A measurement of voltage or current leads to a {\em backaction} that changes the distribution that must be used on subsequent measurements. This general reasoning neglects the role of dissipation, which leads to a finite quality factor $Q$, and introduces a time scale over which excitations are lost to the heat bath. In some cases with carefully chosen apparatus, it is possible to extract information without causing the state to change: a technique called {\em Quantum Non-Demolition} measurement. \begin{figure} \noindent \centering \includegraphics[trim = 4cm 19cm 7cm 4cm, width=70mm]{Figure5.pdf} \caption{Annihilation operator as a phasor in the complex plane. The real and imaginary parts are the current and voltage respectively. The red circle shows the variance on measurements.} \label{figure5} \end{figure} If a resonator is weakly coupled to a heat bath having temperature $T_{\rm p}$, expectation values can be calculated using the density operator $\hat{\rho}_{\rm thm}$, giving \begin{align} \label{E2} (\Delta v)^{2} & = \frac{\hbar \omega_{0}}{2C} \left[ 2 \langle n \rangle + 1 \right] = \frac{\hbar \omega_{0}}{2C} \coth (\hbar \omega_{0} / 2kT_{\rm p}) \\ \nonumber (\Delta i)^{2} & = \frac{\hbar \omega_{0}}{2L} \left[ 2 \langle n \rangle + 1 \right] = \frac{\hbar \omega_{0}}{2L} \coth (\hbar \omega_{0} / 2kT_{\rm p}). \end{align} Weak coupling implies a low-value series resistor or a high-value parallel resistor, but Equation (\ref{E2}) has been derived without analysing the behaviour of an R-L-C circuit---only the density operator was used. In classical circuit analysis, a spectral noise voltage source, $(\Delta v)^{2} = 4 k T_{\rm p} R$, must be included in series with any resistor; in the quantum case, this expression changes to $(\Delta v)^{2} = 2 \hbar \omega_{0} R \coth (\hbar \omega_{0} / 2kT_{\rm p})$. When circuit analysis is carried out, these spectral quantities are, effectively, multiplied by a bandwidth to give the actual variance in voltage. For a single-pole resonator, which has a Lorentzian frequency response, the bandwidth is $ \pi f_{0} / 2Q = \pi f_{0} \omega_{0} L / 2 R$, and Equation (\ref{E2}) is recovered.
In the limit $\hbar \omega_{0} / kT_{\rm p} \rightarrow 0$, the classical expression $(\Delta v)^{2} \rightarrow 4kT_{\rm p}R$ is found. The transition from classical to quantum behaviour occurs when $\hbar \omega_{0} \approx kT_{\rm p}$, which for a 5 GHz resonator happens at $T_{\rm p}=$ 240 mK. Dilution refrigerator technology routinely achieves 10 mK, showing that even RF circuits can be operated in the quantum regime, motivating the need for quantum circuit theory at low temperatures. The density operator $\hat{\rho}_{\rm thm}$ is based on thermodynamic considerations, and so tacitly assumes that the mechanism responsible for dissipation behaves as a weakly coupled heat bath having a large number of degrees of freedom. The overall quantum system is no longer closed, because no attempt is being made to track the behaviour of every degree of freedom, such as the electron-phonon system in a resistor. Strictly, the density operator no longer obeys von Neumann's differential equation for quantum operators, and must be described by the more complicated dynamics of the {\em Lindblad Master Equation}, which, unlike Schr\"{o}dinger's equation, describes the time evolution of mixed quantum states. In the case of high-frequency circuits, a transmission-line representation with scattering parameters is usually best. Transmission lines are easily quantised, but with a note of caution. The voltage and current at every point are in phase, and related through the real-valued characteristic impedance $Z_{0}$. The voltage and current commute, and so are not restricted by Heisenberg's uncertainty relationship. In fact, if the instantaneous voltage is measured, the current is already known through $ V / Z_{0}$. In the case of transmission lines, voltage and current are compatible observables, whereas the voltage and its time derivative, or the current and its time derivative, the {\em quadrature components}, are not. When combining quantised transmission line theory with scattering parameter representations, there is an unfortunate clash of notation. In classical scattering parameter theory, $a(\omega) = v^{+} (\omega) / \sqrt{Z_{0}}$ and $b (\omega) = v^{-} (\omega) / \sqrt{Z_{0}}$ are the normalised complex amplitudes of the counter-propagating waves $v^{+} (\omega)$ and $v^{-} (\omega)$, such that $|a(\omega)|^{2}$ and $|b(\omega)|^{2}$ are power spectral densities. But each wave has a creation and annihilation operator, and so we introduce the operator pairs $(\hat{a}(\omega),\hat{a}^{\dagger}(\omega))$ and $( \hat{b}(\omega),\hat{b}^{\dagger}(\omega))$ for the forward and backward travelling waves respectively. After quantisation, Equation (\ref{D1}) becomes \begin{align} \label{E3} \frac{\hat{v}^{+}(\omega)}{\sqrt{Z_{0}}} & = \frac{1}{2 \sqrt{Z_{0}}} \left[ \hat{v}(\omega) + \hat{i}(\omega) Z_{0} \right]= \left[ \frac{\hbar \omega}{2} \right]^{1/2} \hat{a} (\omega) \\ \nonumber \frac{\hat{v}^{-}(\omega)}{\sqrt{Z_{0}}} & = \frac{1}{2 \sqrt{Z_{0}}} \left[ \hat{v}(\omega) - \hat{i}(\omega) Z_{0} \right] = \left[ \frac{\hbar \omega}{2} \right]^{1/2} \hat{b} (\omega), \end{align} where $\hat{v}(\omega)$ and $\hat{i}(\omega)$ are the voltage and current at a plane, say the port of a network.
The average one-sided power spectral density flowing in the forward direction is given by the symmetrised form $\hat{s}^{+} (\omega) = ( \hbar \omega / 2 ) \left( \hat{a} (\omega) \hat{a}^{\dagger} (\omega) + \hat{a}^{\dagger} (\omega)\hat{a} (\omega) \right) = ( \hbar \omega / 2 ) \left\{ \hat{a} (\omega) , \hat{a}^{\dagger} (\omega) \right\}$, and similarly for the reverse direction. In a travelling-wave representation, the annihilation operators of the outgoing waves depend linearly on the annihilation operators of the incoming waves, with the constants of proportionality being the scattering parameters. This view is plausible because if the incoming waves are in high-occupancy coherent states, the outputs must correspond to those of a classically driven system. For a multiport network, the input operators act on the tensor product of the `input states': $| p_{1} \rangle \cdots | p_{m} \rangle \cdots | p_{M} \rangle$. The output operators therefore act on the same state space: the outcomes of measurements on the outgoing waves are described in terms of the states of the incoming waves. The scattering parameters are essentially complex probability amplitudes. In general terms, the waves incident on the ports do not have to be in coherent states, and may even be in mixed states, such as thermal states, but in order for the scheme to be self-consistent, the vacuum states of seemingly undriven ports must be included. It is often the case that incoming radiation is described solely in terms of quantum correlation functions (for example $\langle \hat{a} (\omega) \hat{a}^{\dagger}(\omega) \rangle$), and then only outgoing correlation functions can be determined. With care relating to ports in the vacuum state, this mapping can be achieved through the connection matrix method, Equation (\ref{D4}). Consider a multiport network, where one port comprises the input, another the output, and where $M > 2$; in other words, a microwave two-port network connects internally to a set of `internal' ports that influence the output but whose states are never measured. One way of eliminating the internal ports from the description would be to take the partial trace over them, to yield the two-port behaviour. Another approach is to say that \begin{align} \label{E4} \left( \begin{array}{c} \hat{b}_{1} \\ \hat{b}_{2} \\ \end{array} \right) & = \left( \begin{array}{cc} S_{11} & S_{12} \\ S_{21} & S_{22} \\ \end{array} \right) \left( \begin{array}{c} \hat{a}_{1} \\ \hat{a}_{2} \\ \end{array} \right) + \left( \begin{array}{c} \hat{n}_{1} \\ \hat{n}_{2} \\ \end{array} \right), \\ \nonumber \hat{\mathsf b} & = {\mathsf S} \hat{\mathsf a} + \hat{\mathsf n}, \end{align} where for brevity explicit reference to $\omega$ is dropped. The vector-valued operator $\hat{\mathsf n}$ contains linear contributions from the internal ports, and acts on a suitably extended state space. One may be tempted to ignore vacuum contributions from the internal ports, but if this is done, the output operators do not then satisfy bosonic commutation relationships. It is now clear that even vacuum states are likely to contribute to the output in the form of an additive `noise' term. In many cases, this noise will be thermal, and in some cases may be at an effective temperature higher than the physical temperature of the device.
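Before moving on, it is worth sketching the linear-algebra pipeline of Equations (\ref{D3}) and (\ref{D4}) numerically. The connection-matrix entries and source correlations below are arbitrary illustrative values, chosen only to show the mechanics of forming ${\mathsf K}=[{\mathsf I}-{\mathsf C}]^{-1}$ and propagating a correlation matrix; in a quantum calculation the same step must be accompanied by the vacuum-state bookkeeping described above.
\begin{verbatim}
# Minimal sketch of Equations (D3)-(D4): propagate a source correlation matrix
# through a network described by a connection matrix C. All numerical entries
# are arbitrary illustrative values, not the parameters of any real component.
import numpy as np

# Three wave nodes; C[i, j] is the scattering amplitude from node j to node i.
C = np.array([[0.0,  0.2 + 0.1j, 0.0],
              [0.7j, 0.0,        0.1],
              [0.0,  0.9,        0.0]])

K = np.linalg.inv(np.eye(3) - C)      # d = (I - C)^(-1) n = K n

# Assumed source correlation matrix (Hermitian, positive semi-definite).
Ns = np.diag([1.0, 0.5, 0.2]).astype(complex)

Nc = K @ Ns @ K.conj().T              # Equation (D4)
print(np.round(Nc, 3))                # correlation matrix of the node waves
\end{verbatim}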
\subsection{Quantum noise temperature} \label{sec_qua_nse_tmp} Even at low physical temperatures $T_{\rm p} \ll \hbar \omega / k $, and without internal heating such as hot electron effects, `noise' appears at the output in terms of a weighted linear combination of vacuum states, because of $\hat{\mathsf n}$. Therefore, every device must have some minimum noise temperature. What is the minimum noise temperature of a multiport network? The answer depends on the properties of ${\mathsf S}$, such as reciprocity, unitarity, and even linearity, but in the case of an ideal two-port amplifier, with $S_{11}=S_{22}=0$, a simple but compelling argument is as follows. Equation (\ref{E4}) gives $\hat{b}_{2} = S_{21} \hat{a}_{1} + \hat{n}_{2}$, and because the source and noise terms correspond to different degrees of freedom, \begin{align} \label{E5} [\hat{b}_{2},\hat{b}^{\dagger}_{2}] & = |S_{21}|^{2} [\hat{a}_{1},\hat{a}^{\dagger}_{1}] + [\hat{n}_{2},\hat{n}^{\dagger}_{2}] \\ \nonumber [\hat{n}_{2},\hat{n}^{\dagger}_{2}] & = 1 - |S_{21}|^{2}, \end{align} where the second line follows because the operators correspond to travelling waves on transmission lines: $[\hat{a}_{1},\hat{a}^{\dagger}_{1}] = 1$ and $[\hat{b}_{2},\hat{b}^{\dagger}_{2}]=1$. The one-sided spectral density of the power travelling away from the output is given by \begin{align} \label{E6} s^{b} (\omega) = |S_{21}|^{2} s^{a} (\omega) + s^{n} (\omega). \end{align} $|S_{21}|^{2}$ appears as the transducer power gain of the amplifier, as in the classical case. For any operator, $\hat{X} \hat{X}^{\dagger} = \left[ \hat{X}, \hat{X}^{\dagger} \right] / 2 + \left\{ \hat{X}, \hat{X}^{\dagger} \right\}/2$, and so $\langle \left\{ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right\} \rangle = 2 \langle \hat{n}_{2} \hat{n}^{\dagger}_{2}\rangle - \langle \left[ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right] \rangle$. For an amplifier having power gain, $|S_{21}|^{2} > 1$, \begin{align} \label{E7} s^{n} (\omega) = \frac{\hbar \omega}{2} \langle \left\{ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right\} \rangle & \ge - \frac{\hbar \omega}{2} \langle \left[ \hat{n}_{2}, \hat{n}^{\dagger}_{2} \right] \rangle = \frac{\hbar \omega}{2} \left( |S_{21}|^{2} - 1 \right), \end{align} where Equation (\ref{E5}) has been used. The added noise $s^{n} (\omega)$ can be zero for unity power gain: a lossless, passive device. Usually, the noise power at the output is referred to the input to define a noise temperature $T_{\rm n} = s^{n} (\omega) / |S_{21}|^{2} k$, giving \begin{align} \label{E9} T_{\rm n} & \ge \frac{\hbar \omega}{2k} \left( 1 - \frac{1}{|S_{21}|^{2}} \right). \end{align} For a high-gain amplifier there is a minimum noise temperature of $T_{\rm q} = \hbar \omega / 2k$, which is called the {\em standard quantum limit} (SQL). No phase-preserving amplifier can have a noise temperature of less than the quantum limit \cite{Cav1}. This noise power adds to any intrinsic power from the source. If the source is also in its vacuum state, an additional half a photon of noise is added. One interpretation of the SQL is to say that at least one other source must be connected to an amplifier to provide the energy needed for amplification, and at the very least this must have vacuum fluctuations. Often, several internal sources are present, and so the SQL is not realised. To achieve the quantum limit it is prudent to choose a configuration that has the smallest number of connected degrees of freedom. A single-mode resonator parametric amplifier is a good example of this principle.
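A short numerical sketch of the bound in Equation (\ref{E9}) helps to fix the scale; the 20 dB gain and the spot frequencies below are illustrative values only. For these numbers the limit is roughly 0.12 K at 5 GHz, rising to about 24 K at 1 THz.
\begin{verbatim}
# Minimal sketch of Equation (E9): the standard quantum limit hbar*omega/2k and
# its reduction for finite gain. Gain and frequencies are illustrative values.
import math

h = 6.62607015e-34     # J s
k = 1.380649e-23       # J/K

gain_dB = 20.0
G = 10.0 ** (gain_dB / 10.0)          # |S21|^2

for f_GHz in (1.0, 5.0, 100.0, 1000.0):
    f = f_GHz * 1e9
    Tq = h * f / (2.0 * k)            # SQL for a high-gain amplifier
    Tbound = Tq * (1.0 - 1.0 / G)     # Equation (E9)
    print(f"{f_GHz:7.1f} GHz: T_q = {Tq:.3f} K, bound = {Tbound:.3f} K")
\end{verbatim}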
$T_{\rm n}$ is a {\em noise temperature}, but actually describes an average spectral power, $k T_{\rm n}$, and so how does it relate to noise? The main use of an amplifier is to amplify a coherent voltage or current waveform, {\em preserving phase}, meaning that the gain is the same for each of the quadrature components (time shift invariance). Equation (\ref{E3}) shows that $\hat{v}^{+}(\omega)$ is not Hermitian, and so not measurable; but combining the positive and negative frequency parts, the quadrature components of $\hat{v}(t)$ at the output, namely $ \sqrt{\hbar \omega Z_{0} / 2} \left( \hat{b}(\omega) + \hat{b}^{\dagger}(\omega) \right) \cos (\omega t)$ and $ -i \sqrt{\hbar \omega Z_{0} / 2} \left( \hat{b}(\omega) - \hat{b}^{\dagger}(\omega) \right) \sin (\omega t)$, are individually Hermitian, and so measurable. To calculate the fluctuation in the travelling noise-wave voltage at the output, in the absence of a signal at the input, $\Delta v_{\rm opt}(t) = \sqrt{\langle \hat{v}_{\rm opt}^{2} (t) \rangle - \langle \hat{v}_{\rm opt} (t)\rangle^{2}}$ is required. For a statistically stationary state, such as a thermal state, it can be shown through straightforward algebra that $\left( \Delta v_{\rm opt}(t) \right) ^{2} / Z_{0} = s^{n} (\omega) = k T_{\rm n} |S_{21}|^{2}$, and finally $\Delta v_{\rm in}(t) = \sqrt{ k T_{\rm n} Z_{0} } $. The crucial point is that noise temperature is a measure of the variance of the amplitude of the noise voltage wave, and so is the relevant measure of sensitivity when an amplifier is used to amplify travelling voltage/current waveforms. This contrasts with a measurement of average power, where the fluctuation in power is the relevant measure of sensitivity. If a power detector follows an amplifier, the radiometer equation must be used: $\Delta T = T_{\rm n} / \sqrt{B \tau}$. In fact, if a signal is amplified, digitally sampled, and then analysed using an autocorrelation algorithm to give a power spectrum, the radiometer equation still holds for each spectral bin. In summary, the sensitivity of amplifiers is characterised by {\em noise temperature}, which is a spectral power, and the sensitivity of power detectors by {\em noise equivalent power}, which is a fluctuation in power for a given post-detection integration time. These are based on second-order and fourth-order statistics respectively. \subsection{Microscopic physics} \label{sec_mic_qua} \begin{figure} \noindent \centering \includegraphics[trim = 6cm 19cm 7cm 4cm, width=50mm]{Figure6.pdf} \caption{Two generalised forces $\hat{F}_{\rm in} ({\bf r},t)$ and $\hat{F}_{\rm out} ({\bf r},t)$ act on the physical properties $\hat{I} ({\bf r},t)$ and $\hat{O} ({\bf r},t)$, respectively, of a device to create a two-port network. In the frequency domain, the quantum response functions, retarded Greens functions, are essentially the two-port scattering parameters $S_{ij}$.} \label{figure6} \end{figure} It is usually sufficient to adopt a microwave systems approach to modelling, but to achieve the best possible performance, it is necessary to understand the relationship between quantum systems theory and the solid-state physics of the device. Rather than regarding the scattering parameters as coefficients, or complex probability amplitudes, it is possible to regard them as response functions in the spirit of Kubo's linear response theory. Consider the two-port network shown in Figure \ref{figure6}. 
There is some input quantity, $\hat{F}_{\rm in}$, such as the magnetic vector potential of a TEM wave, that couples to some property of the device $\hat{I}$, such as current density. They couple in the sense of combining to add an interaction term to the overall Hamiltonian, as in Equation (\ref{C1}). Likewise there is some output quantity $\hat{F}_{\rm out}$ that couples to some other property of the device $\hat{O}$. Various Kubo-like response formulae then follow, as in Equation (\ref{C5}), \begin{align} \label{E1b} \langle \Delta \hat{I}^{H} ({\bf r},t) \rangle_{t_{0}} & = \frac{-i}{\hbar} \int_{-\infty}^{+\infty} dt' \, \theta(t-t') \int_{\cal V} {\rm d}^{3} {\bf r}' \, \langle \left[ \hat{I}^{I} ({\bf r},t), \hat{I}^{\rm I} ({\bf r}',t') \right] \rangle_{t_{0}} \cdot \langle \hat{F}_{\rm in} ({\bf r}',t') \rangle_{t_{0}} \\ \nonumber \langle \Delta \hat{O}^{H} ({\bf r},t) \rangle_{t_{0}} & = \frac{-i}{\hbar} \int_{-\infty}^{+\infty} dt' \, \theta(t-t') \int_{\cal V} {\rm d}^{3} {\bf r}' \, \langle \left[ \hat{O}^{I} ({\bf r},t), \hat{I}^{\rm I} ({\bf r}',t') \right] \rangle_{t_{0}} \cdot \langle \hat{F}_{\rm in} ({\bf r}',t') \rangle_{t_{0}}. \end{align} Additional steps are needed, depending on the device, to turn these expressions into scattering parameters, such as $S_{11}$ and $S_{12}$. For example, the volume integrals must be turned into surface integrals over the ports (although it may be possible to express the interaction Hamiltonian directly in terms of, say, the voltage at the terminals of the device, avoiding the explicit need for a volumetric formulation), and $\hat{F}_{\rm in}$ and $\hat{F}_{\rm out}$ must be described in terms of creation and annihilation operators to give travelling wave amplitudes. The central point, however, is that scattering parameters can now be identified as, essentially, Kubo response functions. Following procedures similar to those outlined in Section \ref{sec_pow_qua}, it is possible to calculate power gain, reactive and resistive input impedance, noise generation, etc., in terms of Kubo response functions. These calculations can be carried out using the scattering parameters, but now there is a direct connection with operators that describe the quantum behaviour of the device. Remember that Kubo response functions are retarded Greens functions, which describe how a solid-state system responds to the injection of an excitation. For example, an electron or hole may be created at one space-time point and one wishes to know the complex probability of an electron or hole appearing at another point. This deep connection between quantum systems theory and device physics is compelling and highly valuable. One important consideration relates to the distinction between backaction and noise. Noise is in a sense straightforward because it relates to the fluctuations present in outgoing waves when no external excitations are present. Backaction is more subtle because it relates to a change in the state of the applied field as a consequence of a measurement taking place. For example, the position of a particle can be measured precisely, but then all information about the momentum is lost. After this measurement, it is known where the particle is, but it is not known which way it is going. A subsequent measurement then suffers from the extreme nature of the first measurement. Often it is better not to measure the quantity of interest too precisely, so that further information can be gained at the second measurement.
Electrical sensors act in the same way, and it is better for the first measurement not to constrain the subsequent behaviour of the system too precisely. It seems that the operator $\hat{I}$ describes the way in which the amplifier feeds noise out of the input terminals, and determines the degree of backaction imposed. The manifestation of radiated noise and backaction depends on the basis used to represent the amplifier. The travelling wave representation, which is defined only to within an arbitrary real reference impedance $Z_{0}$, presents the effects in a different way to a discrete representation where a voltage and current source are placed at the input, as is commonly done in the case of operational amplifiers \cite{Cle1}. \subsection{Comparing performance} \label{sec_nse_com} Numerous fundamental physics experiments are based on measuring power spectra, but should these be carried out using low-noise detectors or low-noise amplifiers? From the perspective of sensitivity, it might be expected that the two are the same because power can be derived from voltage, and vice versa, but the measurement statistics are different, as has been seen. When an amplifier-detector combination is used to measure power from a thermal source, the smallest change in temperature that can be detected is given by the radiometer equation $\Delta T = T / \sqrt{B_{\rm pre} \tau}$, where $B_{\rm pre}$ is the pre-detection bandwidth, and $\tau$ is the post-detection integration time. The smallest change in power that can be detected is then $\Delta P = k T_{\rm n} B_{\rm pre} / \sqrt{B_{\rm pre} \tau}$, where the Rayleigh-Jeans limit is assumed when defining noise temperature. The smallest change in power that can be detected using a power detector having an intrinsic noise equivalent power of $NEP_{\rm i}$ is $\Delta P = NEP_{\rm i} \sqrt{B_{\rm pst}} = NEP_{\rm i} / \sqrt{2 \tau}$, where $B_{\rm pst}$ is the noise equivalent post-detection bandwidth. Comparing these two gives an equivalent noise temperature of \begin{align} \label{F1} T_{\rm en} & = \frac{NEP_{\rm i}}{k \sqrt{2 B_{\rm pre}}}. \end{align} $NEP_{\rm i}$ characterises internally generated noise, and does not include any background noise. The reason is that the noise temperature of the amplifier does not include any background noise either. If background noise is included in both cases, thereby comparing system NEP with system noise temperature, Equation (\ref{F1}) can be used. In the case of a single-pole filter having a Lorentzian profile, it can be shown that the relevant noise bandwidth is $B_{\rm pre} \approx \pi \nu_{0} / 4 R$, where $R = \nu_{0} / \Delta \nu$ is the spectral resolution, and $\Delta \nu$ is the full width at half maximum (FWHM). Using Equation (\ref{F1}), \begin{align} \label{F2} T_{\rm en} & = \left[ \frac{2 R}{k^{2} \pi \nu_{0} } \right]^{1/2} NEP_{\rm i}. \end{align} \begin{figure} \noindent \centering \includegraphics[trim = 0cm 0cm 0cm 0cm, width=140mm]{figure7.pdf} \caption{Black lines: System noise temperature of a quantum limited amplifier preceded by a source having a physical temperature of 10 mK (solid), 50 mK (dotted), 4 K (dashed), and 200 K (dash-dot). Blue lines: Noise power radiated by single-mode sources, expressed as a noise temperature, having physical temperatures of 10 mK (solid), 50 mK (dotted) and 4 K (dashed). Red lines: Equivalent noise temperatures of detectors having noise equivalent powers of $10^{-18}$, $10^{-19}$, $10^{-20}$, $10^{-21}$, $10^{-22}$ WHz$^{-1/2}$ with $R=100$.
Green dotted lines: Equivalent noise temperatures of detectors having dark photon rates of 10, 100 and 1000 Hz with $R=100$. The lowest solid faint black line shows the equivalent noise temperature of a squeezed amplifier (10 dB) operating with a squeezed source at 1 mK; it can be regarded as the absolute limit of any coherent system. The squares show a range of reported noise temperatures of coherent receivers operating at a variety of temperatures. Spectral line astronomy requires coherent systems having the SQL over 100-1000 GHz. FIR space-based astronomy requires incoherent systems having NEPs of order $10^{-20}$ WHz$^{-1/2}$. Single electron Cyclotron Radiation Emission Spectroscopy (CRES) requires coherent systems having the SQL at 20-30 GHz. The photon production rate in haloscopes designed for dark matter detection is extremely small, and so long integration times must be used for all realistic measurements. } \label{figure7} \end{figure} Figure \ref{figure7} shows, as black lines, the system noise temperature of a quantum limited amplifier preceded by a source having a physical temperature of 10 mK (solid), 50 mK (dotted), 4 K (dashed) and 200 K (dash-dot). As frequency increases, the curves converge on a single line corresponding to $\hbar \omega / k$. Ordinarily, this indicates the sensitivity limit of coherent receivers. The SQL increases with $\nu_{0}$, but does not depend on bandwidth. A 10 mK source (typical of a dilution refrigerator) allows the SQL to be achieved at frequencies down to about 500 MHz. The dashed curve corresponds to a 4 K source (typical of a pulse tube cooler), and shows that the SQL can be achieved down to 100 GHz. SIS mixers (see later) approach the SQL over the range 100 GHz to 1 THz. An upward-looking radiometer or a space-based radiometer always has the $\sim$3 K CMBR as its background, and so there is no benefit in using quantum-noise-limited amplifiers below 10 GHz. The dash-dot line, corresponding to a 200 K source, is essentially the temperature seen by an Earth Observation instrument; it is clear that uncooled amplifiers are suitable for most remote sensing applications. The equivalent noise temperatures of detectors having noise equivalent powers of $10^{-18}$, $10^{-19}$, $10^{-20}$, $10^{-21}$, $10^{-22}$ WHz$^{-1/2}$ with $R=100$ are shown in red. The equivalent noise temperature improves as the centre frequency $\nu_{0}$ is increased, and as the resolution $R$ is lowered. The blue lines show the noise power radiated by single-mode sources, expressed as noise temperature, having physical temperatures of 10 mK (solid), 50 mK (dotted) and 4 K (dashed). A 10 mK source allows detectors having NEPs of better than $5 \times 10^{-21}$ WHz$^{-1/2}$ to be exploited down to 1 GHz, for $R=100$. There is no point in using a detector of any kind having a noise temperature that is significantly below the background limit. The plot shows that ultra-low-noise FIR power detectors, NEP$\approx 10^{-20}$ WHz$^{-1/2}$, can be used over 1-10 THz, where the blackbody spectrum of the CMBR falls away steeply. In cases such as these, where the noise is dominated by intrinsic detector noise, it can be beneficial to have a number of optical modes available for absorbing signal power, as described in Section \ref{sec_mul_det}, motivating the use of multimode detectors in space-based FIR astronomy.
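As a numerical illustration of Equation (\ref{F2}), and of why the red lines in Figure \ref{figure7} fall so steeply with frequency, the sketch below evaluates the equivalent noise temperature for an assumed NEP of $10^{-19}$ WHz$^{-1/2}$ at 1 THz with $R=100$, and compares it with the SQL at the same frequency: roughly 60 mK against 24 K.
\begin{verbatim}
# Minimal sketch of Equation (F2): equivalent noise temperature of a power
# detector compared with the SQL. NEP, R and nu0 are illustrative values.
import math

h = 6.62607015e-34     # J s
k = 1.380649e-23       # J/K

NEP = 1.0e-19          # assumed intrinsic NEP, W Hz^(-1/2)
R = 100.0              # assumed spectral resolution nu0 / delta_nu
nu0 = 1.0e12           # assumed centre frequency, 1 THz

T_en = math.sqrt(2.0 * R / (k**2 * math.pi * nu0)) * NEP   # Equation (F2)
T_q = h * nu0 / (2.0 * k)                                  # SQL at nu0

print(f"T_en = {T_en * 1e3:.0f} mK, SQL = {T_q:.0f} K")
\end{verbatim}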
Figure \ref{figure7} shows why power detectors, often called {\em incoherent receivers}, are used for high-frequency, low-resolution measurements, whereas amplifiers, often called {\em coherent receivers}, are used for low-frequency, high-resolution measurements. The crossover occurs at millimetre wavelengths, where the two approaches have similar sensitivities. The squares show a range of state-of-the-art noise temperatures of coherent receivers operating at a variety of physical temperatures. The crucial point is that the various technologies all fall short of the SQL by a factor of a few, but track it with frequency. The plot suggests that coherent receiver technology starts to suffer from radiometric leakage at frequencies below 10 GHz even though the physical temperature is often lower. Controlling inadvertent stray light in a cryostat can be surprisingly challenging. The best detectors have NEPs of $10^{-19}$ to $10^{-20}$ WHz$^{-1/2}$, showing that two orders of magnitude improvement at microwave frequencies would be highly beneficial, leading to instruments that are far superior to even squeezed amplifiers for certain low-spectral-resolution ($R=100$) applications. Incoherent receivers can also suffer badly from radiometric leakage. It should not be assumed that the SQL, $T_{\rm q}$, cannot be beaten. This limit exists only in the case of phase-preserving amplifiers. Some amplifiers, however, get the energy needed for amplification from a coherent pump tone, and can be engineered to have different gains for the in-phase and out-of-phase components. The noise temperature of one quadrature can be lowered below the SQL, but only at the expense of the noise temperature of the other: as required by Heisenberg's uncertainty principle. This process is called {\em squeezing}, and squeezing factors, gain ratios, of 10-15 dB are achievable. Higher squeezing factors are challenging because any phase imperfections, which may be time dependent, degrade the fidelity of the effect. The circle in Figure \ref{figure5} becomes an ellipse, and the higher the squeezing factor, the higher the sensitivity to changes in the orientation of the ellipse. The most sophisticated systems squeeze both the source, and the noise from the amplifier, allowing exceptionally sensitive measurements to be made. The faint solid line is the SQL of a 10 dB squeezed system at 1 mK, showing that lower noise temperatures are possible if this exotic mode of operation can be achieved and developed for practical applications. \section{Superconducting devices and circuits} Superconducting thin-film devices provide an excellent technological platform for exploiting concepts in quantum sensing. When a BCS superconductor is cooled below its critical temperature $T_{\rm c}$, an energy gap forms, $E_{\rm g} = 7 k T_{\rm c} /2$, and the material's electrical, magnetic and thermal characteristics change significantly. Most quantum devices operate at $T_{\rm p} \approx 0.1 \, T_{\rm c}$, but a few operate at higher temperatures, $T_{\rm p} \approx T_{\rm c}$. Some of the most important materials, deposited using ultra-high-vacuum sputtering, and patterned using ultraviolet lithography, are listed in Table \ref{Super}. The gap frequency $ f_{\rm g} = E_{\rm g} / h$ is important because below $f_{g}$ the material has near-zero electrical resistivity, whereas above $f_{g}$ superconducting pairs are broken to create single-electron excitations, quasiparticles, which have appreciable resistivity.
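The gap scale can be checked directly from $T_{\rm c}$; the short sketch below evaluates $E_{\rm g}=7kT_{\rm c}/2$ and $f_{\rm g}=E_{\rm g}/h$ for a few of the materials considered here, and reproduces the values listed in Table \ref{Super} to within a few per cent.
\begin{verbatim}
# Minimal sketch: gap energy E_g = 7 k T_c / 2 and pair-breaking frequency
# f_g = E_g / h for a few common superconductors.
k = 1.380649e-23       # J/K
h = 6.62607015e-34     # J s
e = 1.602176634e-19    # C

for name, Tc in (("NbN", 16.0), ("Nb", 9.3), ("Ta", 4.48), ("Al", 1.2)):
    Eg = 3.5 * k * Tc                  # 7 k T_c / 2, joules
    fg = Eg / h                        # pair-breaking frequency, Hz
    print(f"{name:3s}: E_g = {Eg / e * 1e3:5.2f} meV, f_g = {fg / 1e9:6.0f} GHz")
\end{verbatim}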
Power detectors and photon counters usually operate at frequencies above $f_{\rm g}$, and extend up through the infrared and X-ray regions. Amplifiers, frequency convertors and transmission lines must avoid breaking pairs and so operate from kHz to fractional THz frequencies. Reactively sputtered materials such as NbN and NbTiN are popular because they allow operation to around 1.2 THz. It is also routine to fabricate devices using multilayers (TiAu, MoAu, TiAl). Although the films do not diffuse and stay physically distinct, a proximity effect causes superconducting quasiparticles and pairs to leak, and the multilayer behaves as a homogeneous superconductor having properties that are intermediate between those of the constituent layers. For example, $T_{\rm c}$ can be adjusted over the range 50-500 mK to a precision of about 5 mK. A long range lateral proximity effect also occurs, where a wiring contact can, without material diffusion, change the properties of the active device. Table \ref{Devices} lists a number of important device types, and indicates whether they rely on pair breaking or pair preservation. \begin{table} \begin{center} {\begin{tabular}{cccc} Material & $T_{\rm c}$ (K) & $E_{\rm g}$ (meV) & $f_{g}$ (GHz) \\ \hline NbN & 16 & 4.8 & 1160 \\ Nb & 9.3 & 2.8 & 680 \\ Ta & 4.48 & 1.35 & 325 \\ Al & 1.2 & 0.36 & 90 \\ Mo & 0.9 & 0.27 & 65 \\ Ti & 0.39 & 0.11 & 26 \\ \hline \end{tabular}} \end{center} \caption{Illustrative characteristics of superconductors commonly used to fabricate quantum sensors. $T_{\rm c}$ is the critical temperature, $E_{\rm g}$ the energy gap, and $f_{g}$ the associated pair breaking frequency.} \label{Super} \end{table} \begin{table} \begin{center} {\begin{tabular}{ccc} Device & Pair breaking & Wavelength range \\ \hline Passive & No & Microwave to submm \\ SQUID & No & RF \\ SIS & No & Submm \\ TES & Yes & Submm, FIR, Optical and X-ray \\ KID & Yes & Submm, FIR \\ Paramp & No & Microwave, MMwave \\ SNSPD & Yes & Optical \end{tabular}} \end{center} \caption{Various superconducting devices used in fundamental physics experiments. Descriptions are given in the text. The mode of operation and typical operating wavelength are listed. } \label{Devices} \end{table} In this short overview, it is not possible to list all of the superconducting components available, but it is useful to illustrate the breadth of the technology available. It should also be appreciated that many of the device types described below can be combined to create complex microcircuits having a high degree of functionality: large format imaging arrays and chip spectrometers. \begin{figure} \noindent \centering \includegraphics[trim = 1cm 1cm 1cm 1cm, width=70mm]{figure8.jpg} \caption{Millimetre-wave thin film superconducting filter based on coplanar transmission line. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements).} \label{figure8} \vspace{2mm} \includegraphics[trim = 1cm 1cm 1cm 1cm, width=70mm]{figure9.jpg} \caption{Millimetre-wave superconducting filter based on parallel capacitors and series capacitors and inductors. The films are typically 100 nm thick. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements).} \label{figure9} \end{figure} {\bf Passive components:} RF components, such as micron-scale transmission lines, directional couplers, filters and loads can be realised in microstrip ($ 5 < Z_{0} <40 \, \Omega$) and coplanar ($ 70 < Z_{0} <150 \, \Omega$) configurations.
These structures use 50-500 nm thick superconducting, normal metal and dielectric films, such as SiO and SiO$_{2}$, to create passive components that can operate to 1.2 THz: Figure \ref{figure8} and Figure \ref{figure9}. Superconducting RF components have a number of advantages: (i) For frequencies below $f_{\rm g}$, and at low powers, the films are essentially lossless, enabling exceptional behaviour. Dielectric loss in deposited and surface oxides then becomes the biggest dissipative factor, and amorphous dielectrics lead to troublesome Two Level System (TLS) noise. In fact, the deposition and control of low-loss oxides is one of the biggest challenges facing the technology. (ii) The surface impedance of a superconductor is complex valued. The real part is caused by dissipation, usually in the form of quasiparticle scattering, and the imaginary part by reactive energy stored in the inertial behaviour of undamped pairs. If the cross section of the dielectric region is sufficiently small, the energy stored in the kinetic inductance of the film can be comparable with the energy stored in the electromagnetic field. The consequential reduction in wavelength results in devices being physically smaller than would otherwise be the case. (iii) Superconducting thin-film transmission lines are only lossless for powers below, roughly speaking, -50 dBm. At higher powers the quasiparticle population increases and heats due to the photon absorption rate being greater than the quasiparticle recombination rate, and the surface impedance changes. Superconducting resonators show a rich variety of behaviour because on tuning through the resonance the stored energy changes, modifying the equivalent circuit parameters: the resonant frequency and quality factor of the underlying resonance depend on the frequency and strength of the exciting tone \cite{Cnt1}. {\bf Superconducting Quantum Interference Device:} SQUIDs were the first superconducting sensors to be used for science. They operate well below $T_{\rm c}$ because of the need to maintain a long-range coherent superconducting state \cite{Sqd1}. Imagine a closed loop of superconducting material. The line integral of the magnetic vector potential is equal to the flux enclosed, but according to the Aharonov-Bohm effect, the line integral contributes a phase factor to the bosonic wavefunction of the superconducting pairs. Because the phase around a closed loop must be single-valued, only certain values of flux are allowed to exist inside the loop. The quantum of magnetic flux is $ \Phi_{0} = h / 2 e = 2.1 \times 10^{-15}$ Wb. A DC SQUID comprises a superconducting ring in which two tunnel junctions are inserted on opposite sides. If the ring is current biased through two additional connections, arranged to give a symmetric configuration, the voltage across the tunnel junctions provides a measure of the screening current in the ring. The voltage is then periodic as an external signal flux is applied and individual quanta enter the ring. This device can be used for extremely sensitive field measurements (fT Hz$^{-1/2}$). Another level of sophistication uses the amplified voltage to feed back flux into the ring through a thin-film transformer. The feedback holds the total flux constant, and the feedback voltage gives a linear measure of the externally applied flux. Finally, a low-inductance input transformer can be added to create an ultra-low-noise current-to-voltage convertor. Noise currents of pA Hz$^{-1/2}$ are routinely achieved.
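As a rough indication of how flux quantisation translates into field resolution, the sketch below evaluates $\Phi_{0}$ and the field corresponding to a given flux in a small pickup loop; the loop area and the flux noise level are assumed, illustrative values, not the specification of any particular SQUID.
\begin{verbatim}
# Minimal sketch: the flux quantum, the field giving one flux quantum in an
# assumed pickup loop, and the field noise for an assumed flux noise level.
h = 6.62607015e-34     # J s
e = 1.602176634e-19    # C

Phi0 = h / (2.0 * e)                 # flux quantum, Wb
area = (100e-6) ** 2                 # assumed 100 um x 100 um loop, m^2

B_per_Phi0 = Phi0 / area             # field giving one flux quantum, T
flux_noise = 1.0e-6 * Phi0           # assumed flux noise, Wb / sqrt(Hz)
B_noise = flux_noise / area          # equivalent field noise, T / sqrt(Hz)

print(f"Phi0       = {Phi0:.3e} Wb")
print(f"B per Phi0 = {B_per_Phi0 * 1e6:.2f} uT")
print(f"B noise    = {B_noise * 1e15:.0f} fT/sqrt(Hz)")
\end{verbatim}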
SQUIDs have been developed for applications such as biological and biomedical magnetometry, geology, and even oil exploration. SQUIDs are also used for reading out and multiplexing Transition Edge Sensors. {\bf Superconducting Parametric Amplifier:} SPAs can be based on the nonlinear behaviour of SQUIDs by modulating the flux in the ring with an external RF pump source, or on the nonlinear behaviour of thin-film transmission lines \cite{Prm1}. In both cases, they must be used below $f_{\rm g}$. In the case of transmission lines, the signal is combined with a high-level pump tone, which modulates the parameters of the device and transfers energy to the signal, resulting in gain (10-20 dB). Half-wavelength superconducting resonators make excellent narrow-band amplifiers at microwave frequencies, and are predicted to work well at millimetre wavelengths \cite{Prm2}. The advantages of resonators are that only small pump powers are needed, keeping phase noise low, and the number of degrees of freedom can be controlled, minimising the number of modes that can contribute to vacuum fluctuation noise. The bandwidths of resonators are small ($R \ge 200$), and so for broadband applications ($R \le 5$) travelling wave structures are needed. The difficulty with travelling wave devices is that the cross section of the transmission line must be small (50 nm thick films, 1-2 $\mu$m wide) to maximise the kinetic inductance fraction, but the lengths must be large (0.5 m) to maximise gain. It is difficult to achieve uniform, defect-free fabrication, and there are other complications associated with dispersion engineering and harmonic suppression. Also, relatively large pump powers are needed, leading to heating. Superconducting films display both resistive and inductive nonlinearities, and these contribute simultaneously to the operation of amplifiers based on transmission lines \cite{Prm2}. Additionally, at least two non-linear mechanisms are present (gap modulation and quasiparticle generation), and these have different speeds and power thresholds. Overall, SPAs achieve exceptional behaviour at microwave frequencies, frequently approaching the quantum limit. Most configurations give phase-preserving amplification, but the intrinsic ultra-low-noise behaviour has also allowed squeezing to be demonstrated. {\bf Superconductor Insulator Superconductor mixer:} SIS mixers were the first superconducting devices to find widespread use in astrophysics \cite{Sis1,Sis2}, and now form a technological cornerstone of high resolution ($R \approx 10^{8}$) submillimetre-wave spectral line astronomy. As discussed in Section \ref{sec_nse_com}, high-resolution instruments favour coherent systems. SIS mixers exploit the complicated dynamics of quasiparticle tunnelling in dielectric barriers, but it is necessary to suppress Josephson pair tunnelling by the application of a small DC magnetic field in the plane of the barrier. A typical tunnel junction has an area of 1 $\mu$m$^{2}$, to minimise capacitance, allowing near quantum-noise-limited down conversion from submillimetre (100 GHz to 1 THz) to microwave (1-10 GHz) frequencies. SIS mixers are fascinating devices because they bridge the gap between classical mixers, based on notions of IV curve nonlinearity, and photon-energy convertors, based on notions of creation and annihilation operators acting on field states.
As the frequency of the LO is increased, a changeover in behaviour occurs when the photon energy becomes greater than the scale size of the nonlinearity in the IV curve ($\approx$ 100 GHz). Photon-assisted tunnelling steps appear due to the quasiparticle states on one side of the barrier being energy (essentially frequency) modulated. The change from classical to quantum behaviour brings gain on down conversion (in contrast to classical mixers which must have a 3 dB loss at best), quantum-limited noise temperature (due to the presence of the LO), and the appearance of quantum capacitance due to mismatched quasiparticle states on either side of the barrier creating a sloshing probability current. SIS mixer technology has enabled pioneering submillimetre-wave telescopes to be built, such as the James Clerk Maxwell Telescope in Hawaii, and the Atacama Large Millimeter Array in Chile, and has been flown to Lagrange Point 2 on the Herschel Space Telescope. \begin{figure} \noindent \centering \includegraphics[trim = 0cm 0cm 0cm 0cm, width=60mm]{figure10.jpg} \caption{Free-space far-infrared superconducting transition edge sensor. The superconducting MoAu bilayer at the bottom of the picture has gold bars to suppress noise. It is fabricated on a 200 nm thick SiN membrane, which is supported by legs which are 200 nm thick, 1 $\mu$m wide and can be up to 1 mm long. Nb wiring runs out along the two legs at the bottom of the picture. The infrared absorber comprises a few nm of disordered $\beta$-phase Ta. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements). } \label{figure10} \end{figure} {\bf Transition Edge Sensor:} TESs have been the workhorse of submillimetre-wave astronomy for many years, particularly for mapping spatial variations in the intensity and polarisation of the CMBR, revealing multipole acoustic oscillations in the plasma of the early Universe. As discussed in Section \ref{sec_nse_com}, low-resolution instruments favour incoherent systems. The basic device comprises a superconducting film isolated from the heat bath by either SiN legs (200 nm thick and 2 $\mu$m wide) or by judicious engineering of electron-phonon decoupling in the superconductor. When the superconducting film is connected to a low-impedance (m$\Omega$) voltage source, electrothermal feedback causes the film to self-bias on its superconducting transition. External radiation is then applied optically to a nearby absorbing film made of a different superconductor, Figure \ref{figure10}, or to a load that terminates a superconducting microstrip transmission line, Figure \ref{figure11}. When energy is absorbed, electrothermal feedback holds the operating point constant by swapping optical power for bias power. The bias current falls and is read out using a SQUID. Electrothermal feedback causes a TES to respond more quickly than the open-loop thermal time constant would suggest. TESs have been developed extensively for most of the electromagnetic spectrum, and although various noise mechanisms are present they can be suppressed to the point where the phonon shot noise in the thermal isolation dominates, giving NEPs of 10$^{-17}$ to 10$^{-20}$ WHz$^{-1/2}$. TESs can be assembled into very large arrays, and the microstrip versions have been combined with superconducting RF components to make chip spectrometers for astronomy and Earth Observation.
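To connect the NEP range quoted above to the thermal design, a minimal sketch of the phonon-noise limit is given below, using ${\rm NEP}\approx\sqrt{4kT^{2}G}$ with $G$ the thermal conductance of the isolation and ignoring the order-unity factor associated with the temperature gradient along the legs; the temperature and conductance pairs are purely illustrative.
\begin{verbatim}
# Minimal sketch of the phonon-noise limit of a thermally isolated detector:
# NEP ~ sqrt(4 k T^2 G), with illustrative (T, G) pairs only.
import math

k = 1.380649e-23       # J/K

for T, G in ((0.3, 1e-10), (0.1, 1e-12), (0.05, 1e-14)):
    NEP = math.sqrt(4.0 * k * T**2 * G)
    print(f"T = {T * 1e3:5.0f} mK, G = {G:.0e} W/K -> NEP = {NEP:.1e} W/Hz^0.5")
\end{verbatim}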
TESs have also been used at optical wavelengths for laser interferometry, dark matter searches, and have been developed into a sophisticated technology for far-infrared and X-ray space telescopes. \begin{figure} \noindent \centering \includegraphics[trim = 0cm 0cm 0cm 0cm, width=70mm]{figure11.jpg} \caption{Microstrip-coupled millimetre-wave transition edge sensor. The primary device is a TiAl bilayer supported on a 200 nm SiN membrane. The legs are 4 $\mu$m wide and support Nb wiring and a superconducting microstrip transmission line, which is fabricated in Nb with SiO$_{2}$ insulator and is terminated in a 20 $\Omega$ gold/copper load. Fabricated by the Quantum Sensors Group in Cambridge (see Acknowledgements). } \label{figure11} \end{figure} {\bf Kinetic Inductance Detector:} KIDs are being developed to replace TESs in applications where very large format imaging arrays are needed \cite{Kid1}. They can be used from submillimetre to X-ray wavelengths, with time-resolved photon-counting spectroscopy being possible at the shortest wavelengths. A low-power coherent tone is applied to a superconducting microwave resonator so that its complex transmission factor can be monitored. The resonator can be a distributed transmission line, or a discrete circuit that takes the form of an optical pixel. When signal power, or indeed an individual signal photon, is absorbed by the material of the resonator, the surface impedance and resonant frequency change, and this modulates the microwave transmission amplitude and phase. The real promise of this device is that thousands of pixels can be weakly coupled to a single superconducting readout line, and a densely packed comb of microwave tones generated digitally. The output signal can then be sampled, and a real-time FFT used to measure the complex transmission factors of all of the devices simultaneously. The ambition is to create submillimetre-wave and far-infrared cameras having tens of thousands of pixels. Various trade-offs have to be considered, but NEPs ranging from 10$^{-17}$ WHz$^{-1/2}$ to 10$^{-20}$ WHz$^{-1/2}$ have been achieved. A challenge with these devices is to ensure that optical behaviour can be maintained, in terms of beam patterns and efficiency, whilst not degrading the microwave response, such as responsivity and noise. These devices are subject to the complicated dynamics of superconducting resonators, and the generation of quasiparticles by the readout tone is a particular consideration. It is difficult to optimise the optical and readout characteristics simultaneously at frequencies much below 100 GHz, because the signal needs to break pairs but the readout needs to preserve pairs. The above list of devices is certainly not exhaustive. TESs operate at $T_{\rm c}$, which is usually chosen to be twice the bath temperature $T_{\rm b}$, and although $T_{\rm b}$ is typically in the range 50-300 mK, the active part of the device is not as cold as it might be, leading to noise. The Cold Electron Bolometer (CEB) is an ingenious device that overcomes this problem \cite{Ceb1}, enabling NEPs of 10$^{-21}$ WHz$^{-1/2}$ to be achieved. Also, Superconductor Nanowire Single Photon Detectors (SNSPD) are being used for optical photon counting \cite{Had1}, and Superconducting Qubits for microwave photon detection \cite{Qub2}. The squares in Figure \ref{figure7} show the noise temperatures of a range of different coherent receiver technologies.
Again, this is far from exhaustive, but it can be seen that there is a particular need to further develop amplifiers, and solid-state squeezed systems for frequencies in the range 1-100 GHz. Additionally, there is a need to develop ultra-low-noise incoherent receivers for the whole of the microwave-FIR region: 0.1 GHz to 10 THz. \section{Concluding Remarks} A new generation of ultra-low-noise sensors is required. The systems and their associated devices must push at quantum limits and so must be designed using quantum mechanical methods. There is a particular need for detectors, amplifiers, frequency convertors and imaging arrays for radio to infrared wavelengths, where existing devices fall short of theoretical limits. In some cases, the needed advances will be achieved through refinements in existing technology, but in other cases new device types must be invented. Crucially, raw sensitivity is rarely sufficient, and other characteristics such as quantum efficiency, bandwidth, linearity, saturation power and stability must be obtained simultaneously. One of the biggest challenges is to achieve artefact-free behaviour at the quantum level, particularly when an instrument is to be used in a harsh environment or flown in space. The needed innovations go beyond engineering methods and relate to the development of theoretical and numerical tools. Quantum information theory, quantum field theory, device physics and classical circuit theory must be brought together, and described using a common language, if the quantum sensors challenge is to be tackled in a robust way. \section*{Acknowledgements} I am grateful to UKRI/STFC for the awards Quantum Technology for Measurement of Neutrino Mass (QTNM) ST/T006307/1, Quantum Sensors for the Hidden Sector (QSHS) ST/T006625/1, and Ultra-low-noise Superconducting Spectrometer Technology for Astrophysics ST/V000837/1. Over the years I have had numerous enlightening discussions with colleagues on superconducting device physics. In particular, I would like to thank Christopher Thomas, David Goldie, Songyuan Zhao, Michael Crane and Dorota Glowacka for their exceptional work on developing, fabricating and testing the many devices studied by the Quantum Sensors Group in Cambridge over a period of 20 years. I would also like to thank Dennis Molloy and David Sawford for their outstanding work on engineering and operating a long list of ultra-low-noise cryogenic systems. \section*{Biography} Stafford Withington has worked on ultra-low-noise experiments for astronomy and fundamental physics for many years, including the development of ultra-low-noise instruments for submillimetre-wave and far-infrared space-based applications. His quantum sensors group at Cambridge has been developing and fabricating superconducting devices, microcircuits and imaging arrays for over 20 years. Stafford is now Emeritus Professor of Physics at the University of Cambridge, and Visiting Professor and Senior Researcher in the Department of Physics at the University of Oxford. He has held fellowships at Downing College Cambridge, All Souls College Oxford, Queens College Oxford and a Royal Society Fellowship at Chalmers University Sweden. He worked for various companies early in his career, including Ferranti Electronics Ltd., Marconi Space and Defence Systems Ltd., and Rolls Royce aircraft Engines (1971) Ltd.
\section{Introduction} In this paper we study the connection between the Kato-Milne cohomology groups $H_p^{n+1}(F)$ over a field $F$ with $\operatorname{char}(F)=p$ for some prime integer $p$, and homogeneous polynomial forms of degree $p$ over $F$. The three main objectives of this work are: \begin{enumerate} \item Finding a number $n_0$ such that for any $n \geq n_0$, $H_p^{n+1}(F)=0$. \item Finding an upper bound for the symbol length of $H_p^2(F)$, which in turn provides an upper bound for the symbol length of $\prescript{}{p}Br(F)$. \item Finding a number $s$ such that any collection of $s$ inseparably linked decomposable differential forms in $H_p^{n+1}(F)$ are also separably linked. \end{enumerate} \subsection{The Kato-Milne Cohomology Groups} Given a prime number $p$ and a field $F$ of $\operatorname{char}(F)=p$, we consider the space of absolute differential forms $\Omega_F^1$, which is defined to be the $F$-vector space generated by the symbols $da$ subject to the relations $d(a+b)=da+db$ and $d(ab)=adb+bda$ for any $a,b \in F$. The space of $n$-differential forms $\Omega_F^n$ for any positive integer $n$ is then defined by the $n$-fold exterior power $\Omega_F^n=\bigwedge^n(\Omega_F^1)$, which is consequently an $F$-vector space spanned by $da_1\wedge\ldots\wedge da_n$, $a_i\in F$. The derivation $d$ extends to an operator $d\,:\,\Omega_F^n \to \Omega_F^{n+1}$ by $d(a_0da_1\wedge\ldots\wedge da_n)= da_0\wedge da_1\wedge\ldots\wedge da_n$. We define $\Omega_F^0=F$, $\Omega_F^n=0$ for $n<0$, and $\Omega_F=\bigoplus_{n\geq 0}\Omega_F^n$, the algebra of differential forms over $F$ with multiplication naturally defined by $$(a_0da_1\wedge\ldots\wedge da_n)(b_0db_1\wedge\ldots\wedge db_m)= a_0b_0da_1\wedge\ldots\wedge da_n\wedge db_1\wedge\ldots\wedge db_m\,.$$ There exists a well-defined group homomorphism $\Omega_F^n\to \Omega_F^n/d\Omega_F^{n-1}$, the Artin-Schreier map $\wp$, which acts on decomposable differential forms as follows: $$\alpha\frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d \beta_n}{\beta_n}\,\longmapsto\, (\alpha^p-\alpha)\frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d \beta_n}{\beta_n}.$$ The group $H_p^{n+1}(F)$ is defined to be $\coker(\wp)$. By \cite{Kato:1982}, in the case of $p=2$, there exists an isomorphism \begin{eqnarray*} H_2^{n+1}(F) &\stackrel{\cong}{\longrightarrow} & I_q^{n+1}(F)/I_q^{n+2}(F), \enspace \text{given by}\\ \alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d \beta_n}{\beta_n} & \longmapsto & \langle \langle \beta_1,\dots,\beta_n,\alpha]] \mod I_q^{n+2}(F) \end{eqnarray*} where $\langle \langle \beta_1,\dots,\beta_n,\alpha]]$ is a quadratic $n$-fold Pfister form. By \cite[Section 9.2]{GilleSzamuely:2006}, when $n=1$, there exists an isomorphism \begin{eqnarray*} H_p^2(F) &\stackrel{\sim}{\longrightarrow}& \prescript{}{p} Br(F), \enspace \text{given by}\\ \alpha \frac{d\beta}{\beta} &\longmapsto & [\alpha,\beta)_{p,F}, \end{eqnarray*} where $[\alpha,\beta)_{p,F}$ is the degree $p$ cyclic $p$-algebra $$F \langle x,y : x^p-x=\alpha, y^p=\beta, y x y^{-1}=x+1 \rangle.$$ In the special case of $p=2$ and $n=1$, these cyclic $p$-algebras are quaternion algebras $[\alpha,\beta)_{2,F}$ that can be identified with their norm forms which are quadratic 2-fold Pfister forms $\langle \langle \beta,\alpha]]$ (see \cite[Corollary 12.2 (1)]{EKM}). 
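For orientation, it may help to record the lowest-degree case explicitly; this is a standard fact of Artin--Schreier theory and is stated here only as an illustration. For $n=0$ we have $\Omega_F^0=F$ and $d\Omega_F^{-1}=0$, so the Artin--Schreier map is simply $\wp(\alpha)=\alpha^p-\alpha$ and \[ H_p^1(F)=F/\wp(F), \] where a nonzero class $\alpha$ gives rise to the degree $p$ Artin--Schreier extension $F(\theta)/F$ with $\theta^p-\theta=\alpha$.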
\subsection{$C_m$ and $\widetilde C_{p,m}$ Fields}\label{up_invariant} A $C_m$ field is a field $F$ over which every homogeneous polynomial form of degree $d$ in more than $d^m$ variables is isotropic (i.e. has a nontrivial zero). It was suggested in \cite[Chapter II, Section 4.5, Exercise 3 (b)]{Serre} that if $F$ is a $C_m$ field with $\operatorname{char}(F) \neq p$ then for any $n \geq m$, $H^{n+1}(F,\mu_p^{\otimes n})=0$. This fact is known for $p=2$ because of the Milnor conjecture, proven in \cite{Voevodsky}. It was proven in \cite{KrashenMatzri:2015} that for any prime $p>3$, $C_m$ field $F$ with $\operatorname{char}(F) \neq p$ and $n \geq \lceil (m-2) \log_2(p)+1 \rceil$, we have $H^{n+1}(F,\mu_p^{\otimes n})=0$. (The same result holds when $p=3$ for $n\geq \lceil (m-3)\log_2(3)+3\rceil$.) The analogous statement for fields $F$ with $\operatorname{char}(F)=p$ is that if $F$ is a $C_m$ field then $H_p^{n+1}(F)=0$ for every $n \geq m$. This is true, as stated in \cite[Chapter II, Section 4.5, Exercise 3 (a)]{Serre} and proven explicitly in \cite{ArasonBaeza:2010}. It follows from the fact that $C_m$ fields $F$ have $p$-rank at most $m$, i.e. $[F:F^p] \leq p^m$. We consider a somewhat different property of fields that avoids directly bounding their $p$-rank. We say that a homogeneous polynomial form of degree $p$ over $F$ is {\bf \boldmath{$p$}-regular} if there is no nonzero point where all the partial derivatives of order $p-1$ vanish. We denote by $u_p(F)$ the maximal dimension of an anisotropic $p$-regular form over $F$. We say $F$ is a $\widetilde{C}_{p,m}$ field if $u_p(F) \leq p^m$. We prove that if $F$ is $\widetilde{C}_{p,m}$ then for any $n \geq \lceil (m-1) \log_2(p) \rceil+1$, we have $H_p^{n+1}(F)=0$. (See Section \ref{tildeCm} for examples of $\widetilde C_{p,m}$ which are not $C_m$.) \begin{rem} Note that when $p=2$, the notion of a $p$-regular form coincides with nonsingular quadratic form, and $u_p(F)$ boils down to the $u$-invariant $u(F)$ of $F$. In this case, $\lceil (m-1) \log_2(p) \rceil+1=m$, which recovers the known fact that when $u(F) \leq 2^m$, we have $H_2^{m+1}(F) \cong I_q^{m+1}(F)/I_q^{m+2}(F)=0$. \end{rem} \subsection{Symbol Length in $H_p^2(F)$} By \cite[Theorem 30]{Albert:1968} (when $\operatorname{char}(F)=p$) and \cite{MS} (when $\operatorname{char}(F) \neq p$ and $F$ contains a primitive $p$th root of unity), $\prescript{}{p} Br(F)$ is generated by cyclic algebras of degree $p$. The symbol length of a class in $\prescript{}{p} Br(F)$ is the minimal number of cyclic algebras required in order to express this class as a tensor product of cyclic algebras. The symbol length of $\prescript{}{p} Br(F)$ is the supremum of the symbol length of all the classes in $\prescript{}{p} Br(F)$. Recall that when $\operatorname{char}(F)=p$, $\prescript{}{p} Br(F) \cong H_p^2(F)$. It was shown in \cite[Corollary 3.3]{Chapman:2017} that if the maximal dimension of an anisotropic form of degree $p$ over $F$ is $d$ then the symbol length of $\prescript{}{p} Br(F)$ is bounded from above by $\left \lceil \frac{d-1}{p} \right \rceil-1$, providing a characteristic $p$ analogue to a similar result obtained in \cite{Matzri:2016} in the case of $\operatorname{char}(F) \neq p$. As a result, if $F$ is $C_m$ then $d \leq p^m$ and so this upper bound boils down to $p^{m-1}-1$. 
However, the symbol length of $\prescript{}{p} Br(F)$ when $F$ is a $C_m$ field with $\operatorname{char}(F)=p$ is bounded from above by the $p$-rank, which is at most $m$ (see \cite[Theorem 28]{Albert:1968}). We show that the forms discussed in \cite{Chapman:2017} are actually $p$-regular forms, which gives the upper bound $\left \lceil \frac{u_p(F)-1}{p} \right \rceil-1$ for the symbol length (which coincides with $\frac{u(F)}{2}-1$ when $p=2$ as in \cite[Corollary 4.2]{Chapman:2017}). In particular, if $F$ is $\widetilde C_{p,m}$ then the symbol length is bounded from above by $p^{m-1}-1$. (This bound is in fact sharp for $p=2$ as proven in \cite[Proposition 4.5]{Chapman:2017}.) \subsection{Separable and Inseparable Linkage}\label{SepInsLinkage} A differential form in $H_p^{n+1}(F)$ is called ``decomposable" if it can be written as $\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}$ for some $\alpha \in F$ and $\beta_1,\dots,\beta_n \in F^\times$. We say that decomposable differential forms $\omega_1,\dots,\omega_m$ in $H_p^{n+1}(F)$ are inseparably $\ell$-linked if they can be written as $$\omega_i=\alpha_i \frac{d \beta_{i,1}}{\beta_{i,1}} \wedge \dots \wedge \frac{d \beta_{i,n}}{\beta_{i,n}}, \enspace i \in \{1,\dots,m\},$$ such that $\beta_{1,k}=\dots=\beta_{m,k}$ for all $k \in \{1,\dots,\ell\}$. We say they are separably $\ell$-linked if they can be written in a similar way such that $\alpha_1=\dots=\alpha_m$ and $\beta_{1,k}=\dots=\beta_{m,k}$ for all $k \in \{1,\dots,\ell-1\}$. By the identification of decomposable differential forms with quaternion algebras (when $p=2$ and $n=1$), cyclic $p$-algebras (when $n=1$) and quadratic Pfister forms (when $p=2$), these notions of inseparable and separable linkage coincide with the notions of inseparable and separable linkage previously defined for these objects. In \cite{Draxl:1975} it was proven that inseparable (1-)linkage of pairs of quaternion algebras implies separable (1-)linkage as well. A counterexample to the converse was given in \cite{Lam:2002}. These results extend naturally to Hurwitz algebras (\cite{ElduqueVilla:2005}) and quadratic Pfister forms (\cite[Corollary 2.1.4]{Faivre:thesis}). In \cite[Corollary 5.4]{ChapmanGilatVishne:2017} it was shown that when $H_2^{n+2}(F)=0$, separable $n$-linkage and inseparable $n$-linkage for pairs of quadratic $(n+1)$-fold Pfister forms are equivalent. In \cite{Chapman:2015} it was proven that inseparable (1-)linkage for pairs of cyclic $p$-algebras of degree $p$ implies separable (1-)linkage as well, and that the converse is not necessarily true. It follows immediately that if two decomposable differential forms in $H_p^{n+1}(F)$ are inseparably $k$-linked then they are also separably $k$-linked. In this paper we generalize this statement to larger collections of forms: every collection of $1+\sum_{i=\ell}^n 2^{i-1}$ inseparably $n$-linked decomposable differential forms in $H_p^{n+1}(F)$ is also separably $\ell$-linked. In particular, this means that if three octonion algebras share a biquadratic purely inseparable field extension of $F$, then they also share a quaternion subalgebra. \section{Fields with bounded $u_p$-invariant}\label{tildeCm} There are examples in the literature of fields $F$ with $u(F) \leq 2^m$ and unbounded $2$-rank (see \cite{MammoneTignolWadsworth:1991}). In particular, these fields are $\widetilde{C}_{2,m}$ but not $C_m$.
\begin{ques} Is there a similar construction of $\widetilde{C}_{p,m}$ fields which are not $C_m$ for prime numbers $p>2$? \end{ques} The following construction gives an example of a field $F$ which is $\widetilde C_{p,0}$ but clearly not $C_0$: \begin{exmpl} Let $K$ be a field of characteristic $p$, $L=K(\lambda_1,\dots,\lambda_n)$ be the function field in $n$ algebraically independent variables ($n$ can also be $\infty$), and $F=L^{\operatorname{sep}}$ the separable closure of $L$. Then $F$ is $\widetilde{C}_{p,0}$ and not $C_0$. \end{exmpl} \begin{proof} Clearly it is not $C_0$ because its $p$-rank is at least $n$. To show that $F$ is $\widetilde{C}_{p,0}$ it is enough to show that every $p$-regular form $\varphi(x_1,\dots,x_m)$ of dimension $m \geq 1$ over $F$ is isotropic. Let $\varphi(x_1,\dots,x_m)$ be a $p$-regular form of dimension $m$ over $F$. Since there are no $p$-regular forms of dimension 1, $m>1$. Since $\varphi$ is $p$-regular, there exists a term with mixed variables and nonzero coefficient. Without loss of generality, assume the power of $x_1$ in this term is $d$ where $1 \leq d \leq p-1$. Write $\varphi$ as a polynomial in $x_1$ and coefficients in $F[x_2,\dots,x_m]$: $\varphi=c_p x_1^p+\dots+c_1 x_1+c_0$. The coefficient $c_d$ is a nonzero homogeneous polynomial form of degree $p-d$ in $m-1$ variables. Since it is nonzero, we have $c_d(a_2,\dots,a_m) \neq 0$ for some $a_2,\dots,a_m \in F$, not all zero. Without loss of generality, assume that $a_2 \neq 0$, which means we could assume $a_2=1$. Then $c_d(x_2,a_3 x_2,\dots,a_m x_2)$ is a nonzero 1-dimensional form. Hence $\varphi(x_1,x_2,a_3 x_2,\dots,a_m x_2)$ is a nondiagonal 2-dimensional form of degree $p$. We shall explain now why this form must be isotropic: Suppose it is anisotropic. Consider the polynomial $\varphi(x_1,1,a_3,\dots,a_m)$. This is a polynomial of degree $\leq p$ and at least $d$. If its degree is smaller than $p$ then since $F$ is separably closed, the polynomial decomposes into linear factors over $F$, and then it has a root in $F$, which means that $\varphi$ is isotropic. Assume the degree is $p$. Since it is of degree $p$, by \cite[Chapter V, Corollary 6.2]{Lang:2002} this field extension must be either separable or purely inseparable. It cannot be purely inseparable because it has a nonzero term besides the degree $p$ and $0$ terms. Therefore it must be separable, but that contradicts the fact that $F$ is separably closed. \end{proof} One can construct fields $F$ with bounded $u_p(F)$ and infinite $p$-rank in the following way: \begin{lem} Let $K$ be a field of characteristic $p$ with $p$-rank $r$. Then for any anisotropic $p$-regular form $\varphi(x_1,\dots,x_n)$ of dimension $n$, the function field $K(\varphi)=K(x_1,\dots,x_n : \varphi(x_1,\dots,x_n)=0)$ has $p$-rank $r+n-1$. One can also take $r=\infty$ and then the $p$-rank of $K(\varphi)$ is $\infty$ as well. \end{lem} \begin{proof} The function field $K(\varphi)$ of $\varphi$ is a degree $p$ separable extension of the function field $K(x_1,\dots,x_{n-1})$ in $n-1$ algebraically independent variables over $K$. The $p$-rank of $K(x_1,\dots,x_{n-1})$ is $r+n-1$ by \cite[Lemma 2.7.2]{FriedJarden}. By \cite[Lemma 2.7.3]{FriedJarden} the $p$-rank of $K(\varphi)$ is also $r+n-1$. \end{proof} \begin{cor} Let $K$ be a field of characteristic $p$ and $p$-rank $r$, and let $M$ be a positive integer. Then $K$ is a subfield of some field $F$ with $p$-rank at least $r$ and $u_p(F) \leq M$. 
\end{cor} \begin{proof} This $F$ is taken to be the compositum of the function fields of all anisotropic $p$-regular forms of dimension greater than $M$. By the previous lemma, the $p$-rank of $F$ is at least $r$. Clearly $u_p(F) \leq M$. \end{proof} The last corollary provides examples of fields $F$ which are $\widetilde C_{p,m}$ by taking $M=p^m$. The fact that if $F$ is $C_m$ then the field of Laurent series $F((\lambda))$ over $F$ is $C_{m+1}$ does not hold for $\widetilde{C}_{p,m}$ fields in general. For example, if one takes one of the fields constructed in \cite{MammoneTignolWadsworth:1991} with $u(F)=2^m<\hat{u}(F)$, then $F$ is $\widetilde{C}_{2,m}$. However, by \cite[Corollary 2.10]{Baeza:1982}, $u(F((\lambda)))=2\hat{u}(F)>2^{m+1}$, hence $F((\lambda))$ is not $\widetilde{C}_{2,m+1}$. \section{Symbol Length and $p$-Regular forms} In this section we describe certain properties of $p$-regular forms and make a note on the symbol length of classes in $\prescript{}{p}Br(F)$ when $F$ is a field of characteristic $p$ with bounded $u_p(F)$. Let $p$ be a prime integer and $F$ be a field of characteristic $p$. Let $V=F v_1+\dots+F v_m$ be an $m$-dimensional $F$-vector space. A map $\varphi : V \rightarrow F$ is called a homogeneous polynomial form of degree $p$ if it satisfies $$\varphi(a_1 v_1+\dots+a_m v_m)=\sum_{i_1+\dots+i_m=p} c_{i_1,\dots,i_m} a_1^{i_1} \dots a_m^{i_m}$$ for any $a_1,\dots,a_m \in F$ where $c_{i_1,\dots,i_m}$ are constants in $F$. We say that $\varphi$ is isotropic if there exists a nonzero $v$ in $V$ such that $\varphi(v)=0$. Otherwise $\varphi$ is anisotropic. We say $\varphi$ is $p$-regular if there is no $v \in V \setminus \{0\}$ for which all the order $p-1$ partial derivatives of $\varphi$ vanish. The nonexistence of such points does not depend on the choice of basis. In the special case of $p=2$, this notion coincides with nonsingularity. In particular, diagonal forms of degree $p$ over $F$ are not $p$-regular. Given a homogeneous polynomial form $\varphi : V \rightarrow F$, we can consider the scalar extension $\varphi_L$ of $\varphi$ from $F$ to $L$. \begin{lem}\label{closure} Given a field extension $L/F$, if $\varphi_L$ is a $p$-regular form for some homogeneous polynomial form $\varphi$ of degree $p$ over $F$ then $\varphi$ is $p$-regular as well. \end{lem} \begin{proof} Assume the contrary, that $\varphi$ is not $p$-regular, i.e. there exists $v \neq 0$ such that all the order $p-1$ partial derivatives of $\varphi$ vanish. Since the partial derivatives do not change under scalar extension, all the partial derivatives of $\varphi_L$ vanish at $v$, which means that $\varphi_L$ is not $p$-regular. \end{proof} \begin{defn} Given homogeneous polynomial forms $\varphi : V \rightarrow F$ and $\phi : W \rightarrow F$ of degree $p$, we define the direct sum $\varphi \perp \phi$ to be the homogeneous polynomial form $\psi : V \oplus W \rightarrow F$ defined by $\psi(v+w)=\varphi(v)+\phi(w)$ for any $v \in V$ and $w \in W$. \end{defn} \begin{lem}\label{directsum} The form $\varphi \perp \phi$ is $p$-regular if and only if both $\varphi$ and $\phi$ are $p$-regular. \end{lem} \begin{proof} If $\varphi$ is not $p$-regular, there exists a nonzero $v \in V$ such that all the order $p-1$ partial derivatives of $\varphi$ vanish. Then all the order $p-1$ partial derivatives of $\varphi \perp \phi$ vanish at the point $v \oplus 0$. 
In the opposite direction, if $\varphi \perp \phi$ is not $p$-regular, then there exists a nonzero $v \oplus w$ where all the order $p-1$ partial derivatives of $\varphi \perp \phi$ vanish. Without loss of generality, assume $v \neq 0$. The order $p-1$ partial derivatives of $\varphi \perp \phi$ with respect to the variables of $V$ are equal to the corresponding order $p-1$ partial derivatives of $\varphi$, so all the order $p-1$ partial derivatives of $\varphi$ vanish at $v$, which means that $\varphi$ is not $p$-regular. \end{proof} \begin{lem}\label{fieldextension} Given a separable field extension $L/F$ of degree $p$, the norm form $N : L \rightarrow F$ is a homogeneous polynomial form of degree $p$. This form is $p$-regular. \end{lem} \begin{proof} By Lemma \ref{closure} it is enough to show that the scalar extension of $N$ to the algebraic closure $\overline{F}$ of $F$ is $p$-regular. Now, $L \otimes_F \overline{F}$ is $\underbrace{\overline{F} \times \dots \times \overline{F}}_{p \enspace \text{times}}$ and can be identified with the $p \times p$ diagonal matrices with entries in $\overline{F}$. Therefore we have $$N_{\overline{F}}(a_1 e_1+a_2 e_2+\dots+a_p e_p)=a_1 a_2 \dots a_p.$$ The latter is clearly $p$-regular. \end{proof} \begin{rem}\label{scalar} If $\varphi : V \rightarrow F$ is $p$-regular then for any nonzero scalar $c \in F$, the form $c \varphi$ defined by $(c\varphi)(v)=c \varphi(v)$ for any $v \in V$ is also $p$-regular. \end{rem} \begin{lem}\label{twodim} Given a prime number $p$ and a field $F$ with $\operatorname{char}(F)=0$ or $\operatorname{char}(F) \geq p$, the homogeneous polynomial form $$\varphi(a_1 v_1+a_2 v_2)=\alpha a_1^p-a_1 a_2^{p-1}+a_2^p$$ over the two-dimensional space $V=F v_1+F v_2$ is $p$-regular for any $\alpha \in F$. \end{lem} \begin{proof} It is enough to note that the partial derivative obtained by differentiating $p-1$ times with respect to $a_2$ is $-(p-1)!\, a_1+p!\, a_2$, and the partial derivative obtained by differentiating $p-2$ times with respect to $a_2$ and once with respect to $a_1$ is $-(p-1)!\, a_2$. Since $(p-1)!$ is invertible in $F$, the second derivative vanishes only when $a_2=0$, and then the first vanishes only when $a_1=0$. Therefore the only point where all the order $p-1$ partial derivatives vanish is $(0,0)$ and the form is $p$-regular. \end{proof} In \cite[Corollary 3.3]{Chapman:2017} it was proven that the symbol length of a $p$-algebra of exponent $p$ over $K$ is bounded by $\left \lceil \frac{d-1}{p} \right \rceil-1$ where $d$ is the maximal dimension of an anisotropic homogeneous polynomial form of degree $p$ over $K$. We show that $d$ can in fact be replaced with $u_p(K)$, which is at most $d$. \begin{thm}[{cf. \cite[Theorem 3.2 and Theorem 4.1]{Chapman:2017}}]\label{SymbolLength} Let $p$ be a prime integer and let $F$ be a field with $\operatorname{char}(F) = p$ and finite $u_p(F)$. Then every two tensor products $A=\bigotimes_{i=1}^m [\alpha_i,\beta_i)_{p,F}$ and $B=\bigotimes_{i=1}^\ell [\gamma_i,\delta_i)_{p,F}$ with $(m+\ell) p \geq u_p(F)-1$ can be rewritten as such tensor products with $\alpha_1=\gamma_1$. \end{thm} \begin{proof} It is enough to show that the homogeneous polynomial form considered in the proof of \cite[Theorem 3.2]{Chapman:2017} is $p$-regular. This form $\varphi$ is the direct sum $\varphi' \perp \phi_1 \perp \dots \perp \phi_m \perp \psi_1 \perp \dots \perp \psi_\ell$ where $\varphi' : F \times F \rightarrow F$ is defined by $\varphi'(a,b)=(\sum_{i=1}^m \alpha_i-\sum_{i=1}^\ell \gamma_i) a^p-a^{p-1} b+b^p$, for each $i \in \{1,\dots,m\}$, $\phi_i$ is $\beta_i N_{F[x : x^p-x=\alpha_i]/F}$, and for each $i \in \{1,\dots,\ell\}$, $\psi_i$ is $\delta_i N_{F[x : x^p-x=\gamma_i]/F}$. By Lemma \ref{fieldextension} and Remark \ref{scalar}, the forms $\phi_1,\dots,\phi_m,\psi_1,\dots,\psi_\ell$ are $p$-regular.
By Lemma \ref{twodim}, $\varphi'$ is also $p$-regular. Consequently, the form $\varphi$ is $p$-regular as direct sum of $p$-regular forms by Lemma \ref{directsum}. \end{proof} \begin{cor}[{cf. \cite[Corollary 3.3 and Corollary 4.2]{Chapman:2017}}] Let $p$ be a prime integer and let $F$ be a field with $\operatorname{char}(F) = p$ and finite $u_p(F)$. Then the symbol length in $H_p^2(F)$ is bounded from above by $\left \lceil \frac{u_p(F)-1}{p} \right \rceil -1$. \end{cor} \begin{proof} It follows from Theorem \ref{SymbolLength} in the same manner \cite[Corollary 3.3]{Chapman:2017} follows from \cite[Theorem 3.2]{Chapman:2017}. \end{proof} \begin{cor}\label{Corup} Let $p$ be a prime integer and let $F$ be a $\widetilde C_{p,m}$ field with $\operatorname{char}(F) = p$. Then the symbol length in $H_p^2(F)$ is bounded from above by $p^{m-1}-1$. \end{cor} \begin{proof} An immediate result of the previous corollary, given that for a $\widetilde C_{p,m}$ field $F$, $u_p(F) \leq p^m$. \end{proof} \section{Symbol Length and $p$-Rank} By \citep[Chapter 7, Theorem 28]{Albert:1968}, the $p$-rank of $F$ is an upper bound for the symbol length of $H_p^2(F)$. In fact, it can be shown that if the $p$-rank of $F$ is a finite integer $m$ then the symbol length of $H_p^{n+1}(F)$ is bounded from above by $\binom{m}{n}$ for any positive integer $n$. \begin{rem}\label{remCm} If $F$ is $C_m$ then its $p$-rank is $\leq m$. \end{rem} \begin{proof} Suppose $F$ is $C_m$ and that its $p$-rank is $n$. Then $F=F^p v_1 \oplus \dots \oplus F^p v_{p^n}$ where $v_1,\dots,v_{p^n}$ are linearly independent over $F^p$. Therefore the form $\varphi(a_1,\dots,a_{p^n})=v_1 a_1^p+\dots+v_{p^n} a_{p^n}^p$ is anisotropic. Since the dimension of $\varphi$ is $p^n$, $m$ must be $\geq n$. \end{proof} For a $C_m$ field (compared to $\widetilde C_{p,m}$) $F$, its $p$-rank ($\leq m$) provides a better upper bound for the symbol length of $H_p^2(F)$ than the upper bound given in Corollary \ref{Corup} ($p^{m-1}-1$). The following proposition shows that there exist cases where the symbol length is actually equal to the $p$-rank: \begin{prop} Let $K=F((\beta_1))\dots((\beta_n))$ be the field of iterated Laurent series in $n$ variables over a perfect field $F$ of characteristic $p$. Let $L/F$ be a $(\mathbb{Z}/p \mathbb{Z})^n$-Galois field extension given by $L=F[\wp^{-1}(\alpha_1),\dots,\wp^{-1}(\alpha_n)]$. Then the symbol length of the $p$-algebra $$D=[\alpha_1,\beta_1)_{p,K} \otimes \dots \otimes [\alpha_n,\beta_n)_{p,K}$$ is equal to the $p$-rank of $K$.\label{p1} \end{prop} \begin{proof} The $p$-rank of $K$ in the proposition above is $n$ by \cite[Chapter 2, Lemma 2.7.2]{FriedJarden}. The $p$-algebra $D$ is a generic abelian crossed product with maximal $(\mathbb{Z}/p \mathbb{Z})^n$-Galois subfield $K[\wp^{-1}(\alpha_1),\dots,\wp^{-1}(\alpha_n)]$, hence it is a division algebra (see \cite{AmitsurSaltman:1978}) and its symbol length is exactly $n$. \end{proof} \begin{rem} In order to construct a Galois extension satisfying the conditions of Proposition \ref{p1}, take $\alpha_1,\ldots,\alpha_n$ to be algebraically independent variables over $\mathbb F_p$. Let $F$ be the perfect closure of $\mathbb F_p(\alpha_1,\ldots,\alpha_n)$. Then each field extension $F[\wp^{-1}(\alpha_i)]$ is $(\mathbb{Z}/p \mathbb{Z})$-Galois and as a set they are mutually linearly independent. Therefore $L=F[\wp^{-1}(\alpha_1),\ldots,\wp^{-1}(\alpha_n)]$ is a $(\mathbb{Z}/p \mathbb{Z})^n$-Galois extension of the perfect field $F$. 
\label{r1} \end{rem} The following example presents a $C_m$ field with $p$-rank $n$ such that $m=2n+1$: \begin{exmpl} Let $F$ be the perfect closure of the function field $\mathbb{F}_p(\alpha_1,\dots,\alpha_n)$ as in remark \ref{r1}. Let $K=F((\beta_1))\dots((\beta_n))$ be the field of iterated Laurent series in $n$ variables over $F$. As mentioned above, the $p$-rank of $K$ is $n$, and the symbol length of the algebra $[\alpha_1,\beta_1)_{p,F} \otimes \dots \otimes [\alpha_n,\beta_n)_{p,F}$ is $n$. The field $\mathbb{F}_p(\alpha_1,\dots,\alpha_n)$ is a $C_{n+1}$ field, and hence so is its perfect closure $F$. Consequently, $K$ is a $C_{2n+1}$ field. \end{exmpl} \section{Class-Preserving Modifications of Decomposable Differential Forms}\label{Diff} In this section we study the class-preserving modifications of decomposable differential forms in $H_p^{n+1}(F)$. These modifications will be used in proving the main results of the following sections. In the special case of $p=2$ they coincide with the known modifications of quadratic Pfister forms (see \cite{AravireBaeza:1992}). \begin{lem}\label{calcs} Let $\omega=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_n}{\beta_n}$ be a form in $H_p^{n+1}(F)$. Then: \begin{enumerate} \item[$(a)$] For any $i \in \{1,\dots,n\}$, $\omega=(\alpha+\beta_i) \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_n}{\beta_n}.$ \item[$(b)$] For any $i \in \{1,\dots,n\}$ and nonzero $f \in F[\wp^{-1}(\alpha)]$, if $\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)=0$ then $\omega=0$. Otherwise, $\omega=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d(\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f) \beta_i)}{\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f) \beta_i} \wedge \dots\wedge \frac{d\beta_n}{\beta_n}.$ \item[$(c)$] For any $i \in \{1,\dots,n\}$ and $\gamma \in F$, if $\beta_i+\gamma^p=0$ then $\omega=0$. Otherwise, there exists some $\alpha' \in F$ such that $\omega=\alpha' \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d(\beta_i+\gamma^p)}{\beta_i+\gamma^p} \wedge \dots\wedge \frac{d\beta_n}{\beta_n}.$ \item[$(d)$] For any distinct $i,j \in \{1,\dots,n\}$, $$\omega=\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_i}{\beta_i} \wedge \dots \wedge \frac{d (\beta_i \beta_j)}{\beta_i \beta_j} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}.$$ \item[$(e)$] For any distinct $i,j \in \{1,\dots,n\}$, if $\beta_i+\beta_j=0$ then $\omega=0$. Otherwise, $$\omega=\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d (\beta_i+\beta_j)}{\beta_i+\beta_j} \wedge \dots \wedge \frac{d (\beta_i^{-1} \beta_j)}{\beta_i^{-1} \beta_j} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}.$$ \item[$(f)$] For any distinct $i,j \in \{1,\dots,n\}$ and $f \in F[\wp^{-1}(\alpha)]$, if $\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)=0$ then $\omega=0$. Otherwise, $$\omega=\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_i}{\beta_i} \wedge \dots \wedge \frac{d ((\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)) \beta_j)}{(\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)) \beta_j} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}.$$ \end{enumerate} \end{lem} \begin{proof} Parts $(a), (b)$ and $(c)$ are elementary and follow immediately from the equivalent statements for cyclic $p$-algebras (see \cite[Lemma 2.2]{Chapman:2017}). Part $(d)$ follows from the linearity of logarithmic differential forms: $\frac{d (\beta_i \beta_j)}{\beta_i \beta_j}=\frac{d \beta_i}{\beta_i}+\frac{d \beta_j}{\beta_j}$. 
For part $(e)$, if $\beta_i=-\beta_j$ then $d\beta_i \wedge d \beta_j=0$ and so $\omega=0$. Otherwise, it is enough to show that $$\frac{d \beta_i}{\beta_i} \wedge \frac{d \beta_j}{\beta_j}=\frac{d(\beta_i+\beta_j)}{\beta_i+\beta_j} \wedge \frac{d(\beta_i^{-1} \beta_j)}{\beta_i^{-1} \beta_j}.$$ To see this, notice that $d(\beta_i+\beta_j)\wedge d(\beta_i^{-1} \beta_j)=(\beta_i^{-1}+\beta_i^{-2} \beta_j) d \beta_i \wedge d \beta_j$ and divide both sides by $(\beta_i+\beta_j)(\beta_i^{-1} \beta_j)$. For $(f)$, recall that for any $a \in F$ and $b \in F^\times$, $[a,b)_{p,F}$ is split if and only if $b$ is a norm in the \'{e}tale extension $F[\wp^{-1}(a)]/F$, and otherwise it is a division algebra. If $\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)=0$ then the algebra $[\alpha,\beta_i)_{p,F}=0$ by the norm condition, and so $\omega=0$. Otherwise, consider the form $$\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_i}{\beta_i} \wedge \dots \wedge \frac{d ((\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)) \beta_j)}{(\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)) \beta_j} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}.$$ By $(d)$, it is equal to the sum of $\omega$ and the form $$\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_i}{\beta_i} \wedge \dots \wedge \frac{d (\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f))}{\beta_i+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}.$$ If $\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)=0$ then this form is clearly zero. Otherwise, by $(b)$ it is equal to $$\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)^{-1}\beta_i}{\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)^{-1}\beta_i} \wedge \dots \wedge \frac{d (\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)^{-1}\beta_i+1)}{\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)^{-1}\beta_i+1} \wedge \dots \wedge \frac{d \beta_n}{\beta_n},$$ which is zero by $(c)$. \end{proof} Part (b) of Lemma \ref{calcs} shows that $\beta_n$ can be replaced by any nonzero element represented by the $p$-regular form $\varphi:F[\wp^{-1}(\alpha)] \to F$ with $f\mapsto \beta_n\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f)$. In the following proposition we present a $p$-regular form of larger dimension whose values can also alter the last slot (at the possible cost of changing some of the other inseparable slots). \begin{prop}\label{polynomialform} Let $\alpha \in F$ and $\beta_1,\dots,\beta_n \in F^\times$, and write $\omega=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_n}{\beta_n}$ for the corresponding form in $H_p^{n+1}(F)$. For each $(d_1,\dots,d_n) \in \underbrace{\{0,1\} \times \dots \times \{0,1\}}_{n \enspace \text{times}}$, let $V_{d_1,\dots,d_n}$ be a copy of $F[\wp^{-1}(\alpha)]$ and $\varphi_{d_1,\dots,d_n} : V_{d_1,\dots,d_n} \rightarrow F$ be the homogeneous polynomial form of degree $p$ defined by $\varphi_{d_1,\dots,d_n}(f)=\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(f) \cdot \beta_1^{d_1} \cdot \ldots \cdot \beta_n^{d_n}$. Write $$(\varphi,V)=\bigperp_{\begin{array}{r}0 \leq d_1,\dots,d_n \leq 1\\ (d_1,\dots,d_n) \neq (0,\dots,0)\end{array}} (\varphi_{d_1,\dots,d_n},V_{d_1,\dots,d_n}).$$ If there exists a nonzero $v$ such that $\varphi(v)=0$ then $\omega=0$. 
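Before stating the proposition, we record the special case $p=2$ as a small sanity check (our own illustration): writing $F[\wp^{-1}(\alpha)]=F[\lambda]$ with $\lambda^2+\lambda=\alpha$, the norm of $x+\lambda y$ is $$\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(x+\lambda y)=(x+\lambda y)\big(x+(\lambda+1) y\big)=x^2+xy+\alpha y^2,$$ so part $(b)$ allows $\beta_n$ to be replaced by $\beta_n(x^2+xy+\alpha y^2)$ for any $x,y \in F$ with $x^2+xy+\alpha y^2 \neq 0$, in line with the identification of decomposable forms with quadratic Pfister forms recalled in the introduction.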
Otherwise, for every nonzero $v \in V$, there exist $\beta_1',\dots,\beta_{n-1}' \in F^\times$ such that : $$\omega=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-1}'}{\beta_{n-1}'} \wedge \frac{d \varphi(v)}{\varphi(v)}.$$ \end{prop} \begin{rem} To get a feel for the form $(\varphi,V)$, note that if we set $N=N_{F[\wp^{-1}(\alpha)]/F}$ then for $n=1$, $(\varphi,V) = (\varphi_1,V_1):F[\wp^{-1}(\alpha)] \to V$ with $x \mapsto N(x)\beta_1$. When $n=2$, $(\varphi,V):F[\wp^{-1}(\alpha)]^{\times 3} \to F$ with $(x,y,z) \mapsto N(x)\beta_1+N(y)\beta_2+N(z)\beta_1\beta_2$.\end{rem} \begin{proof}[Proof of Proposition 5.2] By induction on $n$. The case of $n=1$ holds by the analogy to cyclic $p$-algebras. Assume it holds for a certain $n-1$. We shall show it holds also for $n$. The vector space $V$ decomposes as $V_0 \oplus V_1$ where $V_0=\bigperp_{0 \leq d_1,\dots,d_{n-1} \leq 1} V_{d_1,\dots,d_{n-1},0}$ and $V_1=\bigperp_{0 \leq d_1,\dots,d_{n-1} \leq 1} V_{d_1,\dots,d_{n-1},1}$. The latter decomposes as $V_1=V_{0,\dots,0,1}+V_1'$. There is a natural isomorphism $\tau : V_1' \rightarrow V_0$ identifying each $V_{d_1,\dots,d_{n-1},1}$ with $V_{d_1,\dots,d_{n-1},0}$. Under this homomorphism, we have $\varphi(v)=\beta_n \varphi(\tau(v))$ for any $v \in V_1'$. Let $v$ be a nonzero vector in $V$. Then $v=v_0+v_1'+v_{0,1}$ where $v_0 \in V_0$, $v_1' \in V_1'$ and $v_{0,1} \in V_{0,\dots,0,1} (\cong F[\wp^{-1}(\alpha)])$. Note that $\varphi(v)=\varphi(v_0)+\varphi(v_1')+\varphi(v_{0,1})=\varphi(v_0)+\beta_n (\varphi(\tau(v_1))+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(v_{0,1}))$. Write $v_1=v_1'+v_{0,1}$. If $v_1'=0$ and $v_{0,1}=0$ then the statement follows immediately from the induction hypothesis. If $v_1'=0$ and $v_{0,1} \neq 0$ then the statement follows from the induction hypothesis and Lemma \ref{calcs} $(e)$. Assume $v_1' \neq 0$. If $\varphi(\tau(v_1'))=0$ then by the induction hypothesis, $\omega=0$. Otherwise, by the induction hypothesis, there exist $\beta_1',\dots,\beta_{n-2}'$ such that $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}}=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-2}'}{\beta_{n-2}'} \wedge \frac{d \varphi(\tau(v_1'))}{\varphi(\tau(v_1'))}, \enspace \text{and so}$$ $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}} \wedge \frac{ d\beta_n}{\beta_n}=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-2}'}{\beta_{n-2}'} \wedge \frac{d \varphi(\tau(v_1'))}{\varphi(\tau(v_1'))} \wedge \frac{d \beta_n}{\beta_n}.$$ By Lemma \ref{calcs} $(f)$, if $\varphi(\tau(v_1'))+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(v_{0,1})=0$ then $\omega=0$, and otherwise we have $$\omega=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-2}'}{\beta_{n-2}'} \wedge \frac{d \varphi(\tau(v_1'))}{\varphi(\tau(v_1'))} \wedge \frac{d ((\varphi(\tau(v_1'))+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(v_{0,1})) \beta_n)}{(\varphi(\tau(v_1'))+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(v_{0,1})) \beta_n}.$$ Consequently $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}} \wedge \frac{ d\beta_n}{\beta_n}=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}} \wedge \frac{d (\varphi(v_1))}{\varphi(v_1)}.$$ If $v_0=0$, this completes the picture. 
Assume $v_0 \neq 0$. By the assumption, there exist $\beta_1'',\dots,\beta_{n-2}''$ such that $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}}=\alpha \frac{d \beta_1''}{\beta_1''}\wedge\ldots\wedge \frac{d\beta_{n-2}''}{\beta_{n-2}''} \wedge \frac{d \varphi(v_0)}{\varphi(v_0)}, \enspace \text{and so}$$ $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}} \wedge \frac{ d\varphi(v_1)}{\varphi(v_1)}=\alpha \frac{d \beta_1''}{\beta_1''}\wedge\ldots\wedge \frac{d\beta_{n-2}''}{\beta_{n-2}''} \wedge \frac{d \varphi(v_0)}{\varphi(v_0)} \wedge \frac{d \varphi(v_1)}{\varphi(v_1)}.$$ By Lemma \ref{calcs} $(e)$, if $\varphi(v_0)+\varphi(v_1)=0$ then $\omega=0$, and otherwise $$\omega=\alpha \frac{d \beta_1''}{\beta_1''}\wedge\ldots\wedge \frac{d\beta_{n-2}''}{\beta_{n-2}''} \wedge \frac{d (\varphi(v_0)^{-1} \varphi(v_1))}{\varphi(v_0)^{-1} \varphi(v_1)} \wedge \frac{d (\varphi(v_0)+\varphi(v_1))}{\varphi(v_0)+\varphi(v_1)}$$ and since $\varphi(v)=\varphi(v_0)+\varphi(v_1)$, this proves the statement. \end{proof} \begin{cor} Using the same setting as Proposition \ref{polynomialform}, write $$V_1=\bigoplus_{0 \leq d_1,\dots,d_{n-1} \leq 1} V_{d_1,\dots,d_{n-1},1}.$$ Then for every nonzero $v_1 \in V_1$, assuming $\omega \neq 0$, we have $$\omega=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots \wedge\frac{d\beta_{n-1}}{\beta_{n-1}} \wedge \frac{d \varphi(v)}{\varphi(v)}.$$ \end{cor} \begin{proof} The vector space $V_1$ decomposes as $V_1' \oplus V_{0,\dots,0,1}$ where $V_1'$ is the direct sum of all $V_{d_1,\dots,d_{n-1},1}$ with $(d_1,\dots,d_{n-1}) \neq (0,\dots,0)$. Take $\tau$ to be the natural isomorphism from $V_1'$ to $V_0$. Let $v_1$ be a nonzero element in $V_1$. It can therefore be written as $v_1=v_1'+v_{0,1}$ where $v_1' \in V_1'$ and $v_{0,1} \in V_{0,\dots,0,1} (\cong F[\wp^{-1}(\alpha)])$. By the previous proposition $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}}=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-2}'}{\beta_{n-2}'}\wedge \frac{d \varphi(\tau(v_1'))}{\varphi(\tau(v_1'))},$$ for some $\beta_1',\dots,\beta_{n-2}' \in F^\times$. Consequently, $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-2}}{\beta_{n-2}}\wedge \frac{d \beta_{n-1}}{\beta_{n-1}} \wedge \frac{d \beta_n}{\beta_n}=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-2}'}{\beta_{n-2}'}\wedge \frac{d \varphi(\tau(v_1'))}{\varphi(\tau(v_1'))} \wedge \frac{d \beta_n}{\beta_n}.$$ By Lemma \ref{calcs} $(f)$ we have $$\omega=\alpha \frac{d \beta_1'}{\beta_1'}\wedge\ldots\wedge \frac{d\beta_{n-2}'}{\beta_{n-2}'}\wedge \frac{d \varphi(\tau(v_1'))}{\varphi(\tau(v_1'))} \wedge \frac{d (\varphi(\tau(v_1'))+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(v_{0,1}))\beta_n}{(\varphi(\tau(v_1'))+\operatorname{N}_{F[\wp^{-1}(\alpha)]/F}(v_{0,1}))\beta_n}$$ and so $$\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d \beta_{n-1}}{\beta_{n-1}} \wedge \frac{d \beta_n}{\beta_n}=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-1}}{\beta_{n-1}} \wedge \frac{d \varphi(v_1)}{\varphi(v_1)}.$$ \end{proof} \begin{cor}\label{separable} Using the same setting as Proposition \ref{polynomialform}, let $\ell$ be an integer between $1$ and $n$. 
Write $$W=\bigoplus_{\begin{array}{r}0 \leq d_1,\dots,d_n \leq 1\\ (d_\ell,\dots,d_n) \neq (0,\dots,0)\end{array}} V_{d_1,\dots,d_n}.$$ Then for every nonzero $v \in W$, assuming $\omega \neq 0$, there exist $\beta_\ell',\dots,\beta_{n-1}' \in F^\times$ such that $$\omega=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots \wedge \frac{d \beta_{\ell-1}}{\beta_{\ell-1}} \wedge \frac{d \beta_\ell'}{\beta_\ell'} \wedge \dots \wedge \frac{d\beta_{n-1}'}{\beta_{n-1}'} \wedge \frac{d \varphi(v)}{\varphi(v)}.$$ \end{cor} \begin{proof} The vector space $W$ decomposes as $W_n \oplus W_{n-1} \oplus \dots \oplus W_\ell$ such that for each $k$ between $\ell$ and $n$, $$W_k=\bigoplus_{\begin{array}{r}0 \leq d_1,\dots,d_{k-1} \leq 1\\ d_k=1, d_{k+1}=\dots=d_n=0\end{array}} V_{d_1,\dots,d_n}.$$ Let $v$ be a nonzero vector in $W$. Then $v$ can be written accordingly as $v=v_n+\dots+v_\ell$ where each $v_k$ belongs to $W_k$. For each $k \in \{\ell,\dots,n\}$, by the previous corollary $$\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_{k-1}}{\beta_{k-1}} \wedge \frac{d \beta_k}{\beta_k}= \alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_{k-1}}{\beta_{k-1}} \wedge \frac{d \varphi(v_k)}{\varphi(v_k)}.$$ Therefore $$\alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}= \alpha \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_{\ell-1}}{\beta_{\ell-1}} \wedge \frac{d \varphi(v_\ell)}{\varphi(v_\ell)} \wedge \dots \wedge \frac{d \varphi(v_n)}{\varphi(v_n)}.$$ Then by Lemma \ref{calcs} $(e)$ we can change the last term to be $\frac{d(\varphi(v_\ell)+\dots+\varphi(v_n))}{\varphi(v_\ell)+\dots+\varphi(v_n)}$ at the cost of possibly changing the slots $\ell$ to $n-1$. \end{proof} \section{Linkage of Decomposable Differential Forms} \begin{thm}\label{collections} Given a field $F$ of $\operatorname{char}(F)=p$, a positive integer $n$ and an integer $\ell \in \{1,\dots,n\}$, every collection of $1+\sum_{i=\ell}^n 2^{i-1}$ inseparably $n$-linked decomposable differential forms in $H_p^{n+1}(F)$ is separably $\ell$-linked as well. \end{thm} \begin{proof} Write $m=2^{n-1}+\dots+2^{\ell-1}$. Let $\left\{\alpha_i \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_n}{\beta_n} : i \in \{0,\dots,m\}\right\}$ be the collection of $m+1$ inseparably $n$-linked decomposable differential forms in $H_p^{n+1}(F)$ under discussion. The number of $n$-tuples $(d_1,\dots,d_n)$ with $(d_\ell,\dots,d_n) \neq (0,\dots,0)$ is $m$. We denote arbitrarily the elements of the set $\{\beta_1^{d_1} \cdot \ldots \cdot \beta_n^{d_n} : 0 \leq d_1,\dots,d_n \leq 1, (d_\ell,\dots,d_n) \neq (0,\dots,0)\}$ by $\gamma_1,\dots,\gamma_m$ (with possible repetitions). Recall that the norm of an element $x+\lambda y$ in $F[\lambda : \lambda^p-\lambda=\alpha]$ is $x^p-x y^{p-1}+y^p \alpha$. Therefore by Corollary \ref{separable}, for every $i \in \{0,\dots,m\}$ and any choice of $x_{i,1},y_{i,1},\dots,x_{i,m},y_{i,m} \in F$ with $(x_{i,1},\dots,y_{i,m}) \neq (0,\dots,0)$ we can change the form $\alpha_i \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_n}{\beta_n}$ to $\alpha_i \frac{d \beta_1}{\beta_1} \wedge \dots \wedge \frac{d \beta_{\ell-1}}{\beta_{\ell-1}} \wedge \frac{d b_{i,\ell}}{b_{i,\ell}} \wedge \dots \wedge \frac{d b_{i,n-1}}{b_{i,n-1}} \wedge \frac{d \delta_i}{\delta_i}$ where $$\delta_i=\gamma_1 (x_{i,1}^p-x_{i,1} y_{i,1}^{p-1}+\alpha_i y_{i,1}^p)+\dots+\gamma_m (x_{i,m}^p-x_{i,m} y_{i,m}^{p-1}+\alpha_i y_{i,m}^p)$$ for some $b_{i,\ell},\dots,b_{i,n-1} \in F$.
Therefore, in order to show that the forms in the collection are separably $\ell$-linked, it is enough to show that the following system of $m$ equations in $2m(m+1)$ variables has a solution: \begin{eqnarray*} \alpha_0+\delta_0 & = &\alpha_1+\delta_1\\ \alpha_0+\delta_0 & = &\alpha_2+\delta_2\\ & \vdots & \\ \alpha_0+\delta_0 & = &\alpha_m+\delta_m\\ \end{eqnarray*} If we take $x_{0,i}=x_{1,i}=\dots=x_{m,i}$, $y_{0,i}=1$ and $y_{i,i}=0$ and $y_{i,j}=1$ for all $i,j \in \{1,\dots,m\}$ with $i \neq j$, then the $i$th equation in this system becomes a linear equation in the single variable $x_{0,i}$ (whose coefficient is $\gamma_i$). This system therefore has a solution. \end{proof} \begin{cor} Given a field $F$ of $\operatorname{char}(F)=2$, a positive integer $n$ and an integer $\ell \in \{1,\dots,n\}$, every collection of $1+\sum_{i=\ell}^n 2^{i-1}$ inseparably $n$-linked quadratic $(n+1)$-fold Pfister forms is also separably $\ell$-linked. \end{cor} \begin{proof} This follows from Theorem \ref{collections} and the identification of quadratic Pfister forms with decomposable differential forms appearing in \cite{Kato:1982}. \end{proof} By the identification of octonion algebras with their 3-fold Pfister norm forms (\cite[Theorem 33.19]{BOI}), we conclude that if three octonion $F$-algebras share a biquadratic purely inseparable field extension of $F$ then they also share a quaternion subalgebra. (Recall that when $\operatorname{char}(F)=2$, an octonion algebra $A$ over $F$ is of the form $Q+Qz$ where $Q=[\alpha,\beta)_{2,F}$ is a quaternion algebra, $z^2=\gamma$ and $z \ell=\ell^\sigma z$ for every $\ell \in Q$ where $\sigma$ is the canonical involution on $Q$, $\alpha \in F$ and $\beta,\gamma \in F^\times$. In particular, when $A$ is a division algebra, $F[\sqrt{\beta},\sqrt{\gamma}]$ is a subfield of $A$ and a biquadratic purely inseparable field extension of $F$.) It is not known to the authors whether the sizes of the collections mentioned in Theorem \ref{collections} are sharp, in the sense that larger collections of inseparably $n$-linked differential forms in $H_p^{n+1}(F)$ need not be separably $\ell$-linked. Even the very special case of three inseparably linked quaternion algebras would be very interesting, in either direction. \begin{ques} Are any three inseparably linked quaternion algebras also separably linked? \end{ques} \section{Vanishing Cohomology Groups} One can use Proposition \ref{polynomialform} to show that for fields $F$ with finite $u_p(F)$ the cohomology groups $H_p^{n+1}(F)$ vanish from a certain point on. See Section \ref{up_invariant} for the definition of $u_p(F)$. \begin{thm}\label{Cohomological} Given a field $F$ with $\operatorname{char}(F)=p$, for any $n$ with $(2^n-1)p\geq u_p(F)$, $H^{n+1}_p(F)=0$. \end{thm} \begin{proof} Let $\omega=\alpha \frac{d \beta_1}{\beta_1}\wedge\ldots\wedge \frac{d\beta_{n-1}}{\beta_{n-1}}\wedge \frac{d \beta_n}{\beta_n}$ be an arbitrary decomposable form in $H_p^{n+1}(F)$. Consider the direct sum $\Phi$ of the form $\varphi$ from Proposition \ref{polynomialform} and the two-dimensional form $\psi(x,y)=\alpha x^p-x^{p-1} y+y^p$. Note that by Lemmas \ref{fieldextension}, \ref{directsum} and \ref{twodim} and Remark \ref{scalar}, $\Phi$ is $p$-regular. Since $(2^n-1) p \geq u_p(F)$, we have ${\operatorname{dim}}(\Phi)=(2^n-1) p+2 > u_p(F)$, hence $\Phi$ is isotropic, which means that there exists $(x_0,y_0,v_0) \neq (0,0,0)$ such that $\psi(x_0,y_0)+\varphi(v_0)=0$. If $v_0=0$ then it means $\alpha \in \wp(F)$, and therefore $\omega$ is trivial.
If $\varphi$ is isotropic then $\omega$ is trivial by Proposition \ref{polynomialform}. Otherwise, $\varphi(v_0) \neq 0$. By Proposition \ref{polynomialform}, $\omega=\alpha \frac{d \beta_1'}{\beta_1'} \wedge \dots \wedge \frac{d \beta_{n-1}'}{\beta_{n-1}'} \wedge \frac{d \varphi(v_0)}{\varphi(v_0)}$ for some $\beta_1',\dots,\beta_{n-1}' \in F$. If $x_0=0$ then $\Phi(0,y_0,v_0)=y_0^p+\varphi(v_0)=0$, and so $\varphi(v_0)=(-y_0)^p$, which means $\omega$ is trivial. If $x_0 \neq 0$ then $\Phi(1,\frac{y_0}{x_0},\frac{v_0}{x_0})=0$ as well. By Proposition \ref{polynomialform}, $\omega=\alpha \frac{d \beta_1'}{\beta_1'} \wedge \dots \wedge \frac{d \beta_{n-1}'}{\beta_{n-1}'} \wedge \frac{d \varphi(\frac{v_0}{x_0})}{\varphi(\frac{v_0}{x_0})}$ for some $\beta_1',\dots,\beta_{n-1}' \in F$. Then, by Lemma \ref{calcs} $(a)$, we can add $\varphi(\frac{v_0}{x_0})$ to $\alpha$, and the resulting coefficient is congruent modulo $\wp(F)$ to $\Phi(1,\frac{y_0}{x_0},\frac{v_0}{x_0})=0$, and therefore $\omega$ is trivial. \end{proof} \begin{cor} If $F$ is a $\widetilde{C}_{p,m}$ field with $\operatorname{char}(F)=p$ then for all $n \geq \lceil (m-1) \log_2(p) \rceil+1$, $H_p^{n+1}(F)=0$. \end{cor} \begin{proof} Since $F$ is a $\widetilde{C}_{p,m}$ field, $u_p(F) \leq p^m$. If $n \geq \lceil (m-1) \log_2(p) \rceil+1$ then $2^n \geq p^{m-1}+1$, and so $2^n \cdot p \geq p^m+p$, which means that $2^n \cdot p-p \geq p^m$, and therefore $(2^n-1) p+1 \geq p^m+1 \geq u_p(F)+1$. \end{proof} \begin{exmpl} If we take $m=1$ then $\lceil (m-1) \log_2(p) \rceil+1=1$, and therefore $H_p^{1+1}(F)=H_p^2(F)=0$ for $\widetilde C_{p,1}$ fields $F$. The group $H_p^2(F)$ is isomorphic to $\prescript{}{p}Br(F)$, which is generated by symbol algebras $[\alpha,\beta)_{p,F}$. Another way of seeing that $\prescript{}{p}Br(F)$ is trivial for $\widetilde C_{p,1}$ fields is to consider an arbitrary generator $A=[\alpha,\beta)_{p,F}=F \langle x,y :x^p-x=\alpha, y^p=\beta, y x y^{-1}=x+1 \rangle$ and to show that it must be split. If $F[x]$ is not a field then this algebra is split, so suppose this is a field. Then consider the norm form $N : F[x] \rightarrow F$. For any choice of $f \in F[x]^\times$, the element $z=f y$ satisfies $z^p=N(f) \beta$ and $z x z^{-1}=x+1$, so $A=[\alpha,N(f) \beta)_{p,F}$. The element $w=x+z$ satisfies $w^p-w=\alpha+N(f) \beta$ (see \cite[Lemma 3.1]{Chapman:2015}). Now, consider the homogeneous polynomial form \begin{eqnarray*} \varphi &:& F \times F \times F[x] \rightarrow F\\ & & (a,b,f) \mapsto \alpha a^p-a^{p-1} b+b^p+N(f) \beta. \end{eqnarray*} This form is $p$-regular by Lemma \ref{fieldextension}, Remark \ref{scalar} and Lemma \ref{twodim}. It is isotropic because its dimension is $p+2$ and $F$ is $\widetilde C_{p,1}$, i.e. there exist $a_0,b_0 \in F$ and $f_0 \in F[x]$, not all zero, such that $\varphi(a_0,b_0,f_0)=0$. The case of $a_0=b_0=0$ is impossible because then $N(f_0)=0$, which means that $F[x]$ is not a field, contrary to the assumption. If $a_0=0$ then the element $t=b_0+f_0 y$ satisfies $t^p=0$, which means $A$ is split. If $a_0 \neq 0$ then the element $t=x+\frac{b_0}{a_0}+\frac{1}{a_0} f_0 y$ satisfies $t^p-t=0$, which again means $A$ is split. \end{exmpl} Recall that for $p=2$, the notion of $u_p(F)$ coincides with the $u$-invariant of $F$. The statement then recovers the known fact that if $n$ satisfies $2^{n+1} > u(F)$ then $H_2^{n+1}(F)=0$. \section*{Bibliography} \bibliographystyle{amsalpha}
\section{WHY CLOUD?} The unprecedented growth of the global smartphone market over the last decade has been mirrored by the more recent emergence and rapid expansion of enterprise Cloud Computing Platforms (CCPs). CCPs provide on-demand computing, storage and software accessible over the internet, allowing for the remote offloading of process-intensive tasks. This approach to server technology is an increasingly common long-term strategy for replacing the traditional manually maintained client-server hardware set-up \cite{labati2020cloud}. A clear advantage of CCPs is that they allow users of mobile devices to gain access to significant processing power, well beyond the means of any existing mobile device. This approach allows patients to use even very dated mobile hardware to access the latest advances in automated medical image analysis. This essentially means that continual advances in this field are not tied to the computing capability of mobile devices, as such devices are simply consuming services from CCPs. Additionally, scalability becomes easier to manage, given the virtualised nature of cloud services. There is a growing trend in the use of ensemble CNNs in medical image analysis, whereby multiple CNNs are used to form a final prediction. Distributing mobile apps that use multiple models is not practical or possible given the limited permissible size of apps when distributed via online app stores. There is also the issue of intellectual property protection. Android apps are particularly easy to reverse engineer, so having the CNNs run on the server instead of the user's mobile device means that trained models are never publicly exposed. \section{SYSTEM ARCHITECTURE} The two major components created for the evaluation were (1) a cross-platform mobile app, and (2) a cloud-based deep learning framework that performed inference on foot photographs sent from mobile clients. A cross-platform framework was chosen for the development of the mobile client since the ultimate goal of this research is to provide patients with a means of remotely monitoring and diagnosing DFU using their own smartphones, which are primarily Android or iOS devices. An overview of the system physical architecture is shown in Fig. \ref{fig:PhysArchitecture}. The following sections describe how these components were utilised in the creation of our proposed framework. \begin{figure*} \centering \includegraphics[scale=0.56]{PhysicalArchitecture.pdf} \caption{An overview of the physical architecture showing the major structural components involved in the implementation.} \label{fig:PhysArchitecture} \end{figure*} \subsection{Mobile App} Cross-platform development can help to reduce the time and costs associated with developing apps for multiple mobile platforms. The mobile app developed for our evaluation was created using Ionic, a cross-platform framework based on the earlier Cordova framework. Screens within Ionic apps are rendered onto a standard WebView, in the same way that web pages are rendered in web browsers. However, the framework also provides native capabilities, including the ability to interface with the device's hardware features, such as sensors and cameras. Fig. \ref{fig:MobileAppScreens} shows the main data capture screens within the mobile app. The primary objective of our initial proof-of-concept evaluation was to determine the usability and reliability of our cross-platform mobile client and cloud-based framework in real-world settings.
Ease of use was a primary motivating factor behind the design of the mobile app. Each screen displays context-sensitive information in an information bar at the top, which guides the user through the process of acquiring and uploading foot photographs. The UI and validation were designed so that it was not possible for the user to take the wrong action. Examples of this include: \begin{itemize} \item It was not possible to retake a photograph for the current foot if one had already been taken and uploaded. \item The user could not upload a photograph for any foot more than once. \item It was not possible to change the left foot ``checked" tickbox if the left foot photo had been uploaded. \item It was not possible to change the right foot ``checked" tickbox if the right foot photo had been uploaded. \end{itemize} Ionic utilises a Model View Controller (MVC) architecture, implemented using AngularJS, which separates data, presentation of data and business logic. App data, including application state, is stored in a local SQLite database. \begin{figure*} \centering \includegraphics[scale=0.37]{footsnap_6.png} \caption{UI screenshots from the proposed cross-platform mobile client. From left to right, (top-left) patient QR code is scanned, (top-middle) clinician enters details of each foot being examined, (top-right) clinician enters number of visible ulcers, (bottom-left) photo acquisition of foot, (bottom-middle) inference result returned by cloud service, (bottom-right) examination complete.} \label{fig:MobileAppScreens} \end{figure*} \subsection{Oracle Mobile Cloud Service Software Development Kit (SDK)} Oracle provides an SDK for several mobile development platforms, including Ionic, which enables mobile clients to interface with Oracle Mobile Hub (OMH). The Oracle Mobile Cloud Service SDK is a HyperText Transfer Protocol Secure (HTTPS) client layer, through which requests can be made to OMH and associated services using JavaScript Object Notation (JSON) via REpresentational State Transfer (REST) to transfer data between clients and the cloud service. \subsection{Cloud Platform} The cloud platform services developed for our proof-of-concept clinical evaluation were implemented using Oracle Cloud Infrastructure (OCI). OCI is an online enterprise-scale cloud service offering Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). A breakdown of these service models is provided in the following sub-sections. \subsubsection{Platform as a Service} Oracle Mobile Hub (OMH) and the Autonomous Transaction Processing Instance (ATPI) represent the PaaS elements used in the evaluation. OMH provides a gateway for mobile clients to access other internal cloud services, and includes features such as identity management, analytics, and Application Programming Interface (API) management. The ATPI hosts the Oracle 18c database, which is used for storing all data relating to the evaluation, including foot details entered by clinicians, photographs taken during patient appointments, inference results returned from the deep learning model and clinician confirmation of agreement with inference results. ATPI offers multiple deployment options that automatically configure the database depending on its targeted use case. For our evaluation, the ATPI was configured to use the Autonomous Transaction Processing workload type, which optimises the database with a bias towards processing high volumes of random data access.
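To make the client-to-cloud data exchange described above more concrete, the following is a minimal sketch of the kind of JSON-over-REST interaction between a mobile client and the cloud tier. It is written in Python purely for illustration (the actual client is the Ionic app using the Oracle SDK), and the endpoint paths, payload field names and authentication header are assumptions made for exposition rather than the API used in the evaluation.

\begin{verbatim}
# Illustrative sketch only: endpoint paths, field names and the token
# header are assumptions for exposition, not the actual OMH/REST API.
import base64
import requests

BASE_URL = "https://example-mobile-hub/mobile/custom/dfu"  # hypothetical
HEADERS = {"Authorization": "Bearer <mobile-backend-token>"}  # placeholder

def upload_foot_photo(patient_id, foot, image_path):
    """Send one foot photograph as a JSON payload and return a job id."""
    with open(image_path, "rb") as fh:
        payload = {
            "patientId": patient_id,
            "foot": foot,  # "left" or "right"
            "photo": base64.b64encode(fh.read()).decode("ascii"),
        }
    resp = requests.post(BASE_URL + "/photos", json=payload, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()["jobId"]

def poll_inference_result(job_id):
    """Retrieve the localisation result for a previously submitted photo."""
    resp = requests.get(BASE_URL + "/results/" + str(job_id), headers=HEADERS)
    resp.raise_for_status()
    return resp.json()  # e.g. bounding boxes and confidence scores
\end{verbatim}

In the deployed system such requests are routed through OMH to the server-side components described in the next subsection, which also return app version checks and server status codes alongside inference results.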
\subsubsection{Infrastructure as a Service} Oracle Compute represents the IaaS component of the project, consisting of a Virtual Machine (VM). Virtualisation software allows multiple guest systems to run on a single physical platform, where isolated environments can be created by multiplexing the host's computing cycles and virtualising hardware resources. The VM hosts the core of the business logic, together with the frozen model graph used for inference. For our proof-of-concept clinical evaluation, the operating system used was Ubuntu 16.04.6 LTS (xenial) with the Nvidia GPU Cloud Machine Image shape, which defines the hardware configurations that are available to the VM instance. Hardware available on this shape included an Intel Xeon Gold 5120 2.20GHz CPU and an Nvidia Tesla P100 SXM2 16GB GPU. We created two Python programs to run on the VM, which were responsible for processing network, database, and image inference operations. The first of the two Python programs, ServerPy, handled incoming requests from mobile clients via OMH over REST. All incoming requests are handled by Flask, a Python web framework that allows for the routing of incoming REST requests to Python classes. Requests either add new data to the database, update existing data, or retrieve data from the database. Adding data to the database includes adding new patient foot data, patient foot photographs, and clinician confirmation of agreement with inference results. Sending data from the database to requesting clients takes the form of server status codes, app version checks to ensure the user is using the correct version of the mobile app, and the results of completed inference requests. Additionally, when new photographs are received by ServerPy, the details are added to a jobs table in the ATPI database. The second Python program, AnnotatePy, was responsible for periodically reading the jobs table and retrieving the oldest incomplete job. The job is then processed, using TensorFlow for inference. Once inference is completed, the results are added to the database, and the job is marked as complete. This process operates as a queue, using a first-in-first-out principle (a short illustrative sketch of this polling pattern is given below). \subsection{Deep Learning Framework} The DFU localisation model trained by Goyal et al. \cite{goyal2018robust} was selected for use during our proof-of-concept clinical evaluation. This single classifier model showed the highest mAP (91.8) in a comparison of supervised deep learning models trained and evaluated with DFU. The model was trained using 1,775 DFU images, with ground truth labelling provided by clinical diabetic foot experts at Lancashire Teaching Hospitals NHS Foundation Trust. It implements Faster R-CNN as the object localisation network to process feature extraction, with Inception-ResNetV2 used to classify the extracted feature maps. This model was trained using two-tier (partial and full) transfer learning using the MS COCO dataset, and implements three distinct steps to perform localisation: \begin{enumerate} \item Feature extraction using Inception V2 which serves as input for later stages (proposals and classifier). \item Generation of proposals and refinement. \item Region of interest classifier and bounding box regressor to fine-tune bounding box accuracy. \end{enumerate} The model was trained using a heterogeneous dataset consisting of non-standardised images of DFU.
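As noted above, the following is a rough sketch of the first-in-first-out polling pattern used by AnnotatePy. The table and column names, the use of \texttt{sqlite3} as a stand-in for the Oracle ATPI connection, and the stubbed inference call are our own assumptions for exposition rather than the evaluation code.

\begin{verbatim}
# Illustrative sketch only: table/column names, the sqlite3 stand-in and
# the stubbed inference call are assumptions, not the evaluation code.
import sqlite3
import time

def fetch_oldest_incomplete_job(conn):
    # Oldest job that has not yet been processed (first in, first out).
    return conn.execute(
        "SELECT job_id, photo_path FROM jobs "
        "WHERE completed = 0 ORDER BY created_at LIMIT 1"
    ).fetchone()

def run_inference(photo_path):
    # Placeholder for loading the frozen graph and running the
    # Faster R-CNN localisation model; returns hypothetical detections.
    return [{"box": [0.12, 0.20, 0.45, 0.55], "score": 0.93}]

def worker_loop(conn, poll_seconds=5):
    while True:
        job = fetch_oldest_incomplete_job(conn)
        if job is None:
            time.sleep(poll_seconds)  # queue empty: wait and poll again
            continue
        job_id, photo_path = job
        detections = run_inference(photo_path)
        conn.execute(
            "UPDATE jobs SET completed = 1, result = ? WHERE job_id = ?",
            (str(detections), job_id),
        )
        conn.commit()  # the job is now marked as complete
\end{verbatim}

In the evaluation itself the jobs table lives in the Oracle 18c database and inference is performed with TensorFlow on the GPU-equipped VM; the sketch only illustrates the queueing control flow.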
Aspects such as orientation, distance from the foot, capture device type, resolution, focal length, exposure time, ISO speed ratings, variances in the amount of the foot visible in the image, and lighting conditions resulted in a high level of variability in image characteristics. It could be argued that a non-standardised dataset is more desirable in the training of such a model, since this would increase the viability of its use in real-world settings, where a system would need to take into account numerous uncontrolled environmental variables. \begin{table*} \caption{Summary of Results for Individual Question Responses, reported in mean $\pm$ SD (standard deviation).} \label{tab:questions} \scalebox{1.0}{ \begin{tabular}{|l|l|l|} \hline & Question & mean$\pm$SD \\ \hline Q1 & The app was easy to use & 6.50$\pm$0.55\\ \hline Q2 & It was easy for me to learn to use the app & 6.83$\pm$0.41\\ \hline Q3 & The navigation was consistent when moving between screens & 5.83$\pm$0.98\\ \hline Q4 & The interface of the app allowed me to use all the functions offered by the app & 6.00$\pm$0.89\\ \hline Q5 & Whenever I made a mistake using the app, I could recover easily and quickly & 4.75$\pm$2.22\\ \hline Q6 & I like the interface of the app & 5.83$\pm$0.98\\ \hline Q7 & The information in the app was well organised, so I could easily find the information I needed & 6.00$\pm$1.27\\ \hline Q8 & The app adequately acknowledged and provided information to let me know the progress of my action & 5.00$\pm$2.35\\ \hline Q9 & Overall, I am satisfied with this app & 6.00$\pm$1.27\\ \hline Q10 & This app has all the functions and capabilities I expected it to have & 4.80$\pm$2.28\\ \hline \end{tabular} } \end{table*} \section{FINDINGS} Usability is a key factor in the adoption of mobile health apps, especially in cases where users are not within the typical age range of mobile users \cite{zapata2015mhealth}. Therefore, at the end of our six-month proof-of-concept evaluation, the users of the system (clinicians) were asked to complete a usability questionnaire. The University of Pittsburgh Usability Questionnaire (Standalone Mobile Health App for Health Care Providers template) \cite{pitt2019usability} was used to obtain the qualitative measures for our evaluation. Zhou et al. validated the PITT Usability Questionnaire, and it was shown to have high internal consistency reliability \cite{zhou2019usability}. Questions that were not relevant to the app in its current prototype stage, such as questions 9, 15 and 18, were excluded. We also added an additional free-text section at the end of the questionnaire asking clinicians to provide any other details of their experience using the app, and any recommendations for improvement. Six participating clinicians completed the questionnaire, with questions scored between 1 and 7, where 1 indicates disagreement and 7 indicates agreement. They were also able to select a not applicable option if they believed that a statement did not relate to them. \subsection{Quantitative Analysis} Table \ref{tab:questions} shows a summary of the mean and standard deviation of the ratings for each question. The questionnaire results indicate that all participating clinicians report high levels of satisfaction when using the app, with most of the mean scores being above 5 out of a maximum of 7.
Questions 1 (m = 6.50; sd = 0.55) and 2 (m = 6.83; sd = 0.41), which relate to ease of use, received the highest scores, which we regard as a good indicator that the app would be easy for patients to use in home settings and that it meets one of the main criteria considered in the design of the app. The lowest scoring questions were Question 5 (m = 4.75; sd = 2.22) and Question 10 (m = 4.80; sd = 2.28), which related to how the app responds to user mistakes and to expected app functionality respectively. This would indicate that the app design might benefit from further adjustments to enable users to more easily correct their mistakes. However, these issues would be negated in a patient-focused app since it would not contain any of the data entry elements currently present in the clinician-focused app prototype. \subsection{Qualitative Analysis} The free-text responses provided by participating clinicians revealed varied views that were not easy to gauge from the answers to the Likert-scale questions. Most participating clinicians were in agreement that the app was easy to use and functioned as expected. However, some clinicians experienced connectivity issues with the app due to the restrictive nature of free hospital WiFi, which resulted in occasionally slow upload of foot photographs. Such restrictions may mean that connected devices are automatically disconnected after a period of inactivity, with the only way to reconnect being via the device web browser, a process that has to be completed manually by the user. Clinicians also agreed that when the app detected an ulcer, the ulcer localisation results were generally highly accurate. Other responses noted the number of false positive detections, with clinicians indicating that such detections would occur around calluses or extravasation areas around the wound. However, one response noted that extravasations detected as ulcers would at least direct patients to the clinic for assessment. One response noted that the app would be less useful for clinicians in its current state as they already knew how to recognise the presence of an ulcer. However, other responses disagreed with this statement, highlighting the importance of regular photographic capture of DFUs for screening and remote serial analysis. A device that allows patients to self-screen at home could encourage diabetic patients to check their feet more regularly, and would enable clinicians to check patients' feet without the need for hospital visits. It was also noted that older patients might have difficulty using the app without assistance. This could be addressed with the help of a partner, family member or carer. Another solution would be to use a selfie-stick attachment for the mobile device.
A larger evaluation might require a more robust server-side solution, where multiple GPUs could be employed to complete a larger volume of jobs. This could be achieved by sending each job to a single GPU, or by distributing the workload of multiple jobs across multiple GPUs simultaneously. Further investigation is required to determine the most performant solution in this scenario. Our framework has been designed to encourage frequent patient self-monitoring, supporting early detection of DFU that will lead to earlier signposting to treatment and therefore improved ulcer healing. Early intervention is an important factor in improving healing rates. Tools and education programmes that give patients the knowledge and motivation to manage their condition are essential to the goal of reducing the negative effects of diabetes and DFU \cite{boodoo2017mhealth}. Many people diagnosed with DFU are older adults; it will therefore be important to ensure that any future apps created for use with our framework are extremely simple and easy to use. They should require the absolute minimum input from the user, and results should be presented in a form that is easy to understand. Usability will be the primary defining objective for a patient-focused version of the app. Minimal complexity will ensure the greatest adoption and impact of the system. In the current system implementation, foot photographs are uploaded and stored in an Oracle 18c database as Binary Large Objects, together with all the other data captured during the evaluation. In the context of an initial evaluation, this approach was deemed appropriate, given that the average photograph file size was 60KB. However, in the context of a larger study, it may be beneficial to separate photographs from the database data, since excessive file storage within the database may impact its performance over time. Photographs could instead be saved to the VM file system. However, this may require careful design to ensure that photographs are archived and easily retrievable, which would likely involve managing directories. A more desirable alternative might be to utilise a cloud storage layer, where photographs are given unique IDs and are sent to the storage layer, which removes the need to manage file system elements. Photographs can then be easily retrieved from the storage layer using an API and the photograph UID. Further ahead, the amount of data collected by our system could increase exponentially, especially if used internationally. Such datasets would be classified as Big Data, for which a NoSQL database may prove to be a better fit when processing large volumes of data. During the analysis phase of the project, we explored the possibility of using a serverless solution, whereby the setup of a VM to host the Python applications could be bypassed. Instead, packages would be uploaded to a server space where application methods can be triggered by events received via REST requests. However, this approach to cloud computing is still in its early stages, with most providers not exposing access to GPU resources using this method. Following the positive user-acceptance results from our proof-of-concept evaluation, we plan to undertake a larger scale study.
This follow-up study will be patient-focused: the app will be simplified and distributed to a larger number of users. In this study, the app will be used by patients, their friends, family, or carers, instead of clinicians. This next stage will provide confirmation of whether the app and associated technologies are suitable for large-scale real-world use. The technologies developed will form the basis of a platform to support future research into areas such as: \begin{itemize} \item Automated early detection of DFU, including the detection of signs of pre-ulceration. \item Automated classification and segmentation of DFU types: (1) DFU with no infection and no ischemia, (2) DFU with infection and no ischemia, (3) DFU with ischemia and no infection, (4) DFU with infection and ischemia. \item Automated segmentation of DFU tissue types determined by colour and texture features, including necrotic, epithelial, granulation and slough. \item Automated non-contact methods of monitoring DFU healing status over time. \item Automated non-contact methods of monitoring the periwound (surrounding tissue of a wound) as a potential indicator of wound healing. \item Automated non-contact methods of monitoring DFU-related oedema, a condition characterised by the swelling of all or parts of the foot. \item Automated non-contact methods for studying DFU pathophysiology. \end{itemize} \section{CONCLUSION} In this work, we developed a cross-platform mobile app and a cloud-based deep learning framework for the automatic detection of DFU. The system was assessed for usability via quantitative and qualitative methods, which showed that it scored highly when used by clinicians in clinical settings. This work will provide the basis for a more extensive patient-focused evaluation of the system to determine its effectiveness when used by patients and their carers. The dataset obtained over the six-month evaluation period will be used to retrain the existing deep learning model to improve its effectiveness in detecting DFU at various stages of development. The longitudinal data will form the basis of a refined model that will be used to detect the early signs of DFU. To the best of our knowledge, the framework created for this research is the first of its kind: a fully integrated framework of state-of-the-art technologies, with an easy-to-use app, in which DFU are automatically detected and localised with high confidence scores and inference is performed in the cloud. This could lead to the eventual expansion of our system into a tool not just for patient self-monitoring, but also for diagnosis by medical experts. Our framework can now be used as a platform for further research, including early detection of DFU, and monitoring of DFU healing status over time. Further ahead, our platform could be expanded into other areas of research and automatic medical wound analysis, including other pathologies, on any part of the human body. \section{ACKNOWLEDGMENT} The authors would like to thank Oracle Research for providing the IaaS and PaaS technologies that enabled our clinical evaluation to take place. Gratitude is also extended to Salford Royal NHS Foundation Trust, Lancashire Teaching Hospitals NHS Foundation Trust and Manchester University NHS Foundation Trust for their extensive support during the usability study. This project is funded by The Manchester Metropolitan Strategic Operation Fund and Oracle Innovator Accelerator Programme.
\section{Introduction} Halide perovskites are exciting materials with exceptional optoelectronic properties, wide tunability, and a broad range of applications spanning solar cells \cite{Kojima2009, Liu2013, Stranks2015b}, light-emitting diodes (LEDs) \cite{Tan2014, Ling2016}, photo-detectors \cite{Zhang2018, Saidaminov2017, Ding2017} and X-ray scintillators \cite{Zhou2021}. Solar cells based on lead-halide perovskites ABX$_3$ with A=(CH$_3$NH$_3$)$^+$ (methylammonium, MA$^+$), (NH$_2$CHNH$_2$)$^+$ (formamidinium, FA$^+$), Cs$^+$, B=Pb$^{2+}$, and X=Cl$^-$, Br$^-$, I$^-$, can be processed at low-temperature, and have exceeded power conversion efficiencies of 25\% \cite{nrel2021}. Yet, commercialisation of perovskite-based solar cells and other devices is hampered by the lack of stability of the perovskite absorbers towards moisture, oxygen, light, heat, and electric fields \cite{Senocrate2019}. Various strategies have been applied to improve the stability of these materials, including encapsulation, (partial) replacement of the A site cation \cite{Saliba2016}, and passivation \cite{Schileo2020}. All-inorganic lead-halide perovskites CsPbX$_3$ have seen their own surge of interest, in particular because colloidal CsPbX$_3$ nanocrystals can exhibit very high photoluminescence quantum yields, with band gap energies and emission spectra tunable over the entire visible spectral region \cite{Huang2016a}. However, even all-inorganic halide perovskites can exhibit poor stability under electric fields \cite{He2018}. Material degradation and phase separation in both organic-inorganic and all-inorganic halide perovskites have been attributed to the migration of mobile ionic species \cite{Zhang2019}. Ion migration in halide perovskites has been studied since the 1980s \cite{Mizusaki1983}. The dominant migrating species in these materials are halogen ions \cite{Mosconi2016b, Meloni2016, Luo2017, Senocrate2017}, mediated by the presence of halogen defects. The mechanism of halogen migration has been studied both experimentally \cite{Mizusaki1983, Narayan1987, Eames2015, Yuan2016, Yang2015, Lee2019} and using first principles simulation techniques such as density functional theory (DFT) \cite{Eames2015, Egger2015, Azpiroz2015, Haruyama2015, Meloni2016, Oranskaia2018}. And while reported activation energies for these migration processes span a wide range from $\sim$0.1 to $\sim$1.0\,eV \cite{Eames2015, Mosconi2016a, Mosconi2016b, Meloni2016, Luo2017, Senocrate2017, Chen2019a}, there is a consensus that halogen migration is the primary channel for the ionic conductivity observed in halide perovskites. The large spread of the experimental values has been linked to synthesis conditions, experimental techniques, and the role of grain sizes for defect formation in polycrystalline thin films \cite{Futscher2019, Zhang2020b}. Prior first principles calculations of ion migration in both organic-inorganic and all-inorganic halide perovskites have, with a few exceptions, focused on ion migration in the bulk. However, in perovskite nanocrystals, ion migration at surfaces is expected to play an increasingly large role with decreasing particle size. Furthermore, in halide perovskite thin films, Kelvin probe force microscopy was used to demonstrate that ion migration is dominant at grain boundaries \cite{Xing2016, Yun2016}. More recently, the effect of surfaces and grain boundaries has also been explored via first principles calculations \cite{Oranskaia2018, Meggiolaro2019}. 
However, these computational studies have provided a mixed picture: Meggiolaro et al.~reported that migration energy barriers of iodine vacancies and interstitials are little affected by surfaces \cite{Meggiolaro2019}. On the other hand, Oranskaia et al.~showed a clear effect of the surface on Br vacancy and interstitial migration in MAPbBr$_3$ and FAPbBr$_3$ \cite{Oranskaia2018}. For both materials, the activation energies of Br vacancy migration at the surface were computed to be 0.3\,eV lower than in the bulk. The variation in calculated results can have a number of sources, such as differences in the applied level of theory, for example the choice of the DFT exchange-correlation functional and of the method used for calculating migration barriers. A complication particular to the organic-inorganic perovskites is that, in the majority of DFT calculations, the rotational dynamics of the organic cation at room and higher temperatures are not taken into account. Instead, structural models with fixed orientations of the molecular moieties are used, leading to significant differences in the potential energy landscape depending on the choice of molecular orientation \cite{Quarti2014}. Indeed, Oranskaia et al.~also showed that activation energies for Br migration significantly depend on the orientation of the organic moiety. The uncertainties associated with the choice of a suitable structural model for organic-inorganic perovskites at elevated temperatures, and the important role of surfaces in all-inorganic halide perovskite nanocrystals, motivate our first principles study of bromine vacancy migration in the bulk and at the surfaces of cubic CsPbBr$_3$. Our DFT calculations of vacancy-mediated bromine migration paths and energy barriers in CsPbBr$_3$ show a significant dependence on the presence of a surface. We find that the migration barrier in the bulk is about twice as large as the one at the surface. We show that variations of the Br migration barriers are correlated with variations in the lengths of Pb-Br bonds in the vicinity of the vacancy: halide migration at the surface is facilitated by the larger structural flexibility of the surface as compared to the bulk. Furthermore, migration paths differ considerably between the surface and the bulk, which can also be traced back to more flexible bonds at the surface. Finally, we study the effect of surface modification with alkali halide monolayers, demonstrating that a NaCl passivation layer increases the Br vacancy migration energy to almost its value in the bulk of the material. \section{Methods}\label{methods} CsPbBr$_3$ is orthorhombic with $Pbnm$ symmetry at room temperature and undergoes two successive phase transitions to tetragonal ($P4/mbm$) at 88\textdegree C and to cubic ($Pm\bar{3}m$) at 130\textdegree C \cite{Hirotsu1974, Stoumpos2013}. Figure~\ref{fig1:unit-cell-setup} depicts the bulk and surface slab structures of cubic CsPbBr$_3$ used in this work. For constructing these structural models, we first performed a geometry optimization starting from the experimental high-temperature crystal structure of CsPbBr$_3$ with $Pm\bar{3}m$ symmetry using DFT within the PBEsol approximation \cite{Perdew2008} as implemented in the Vienna Ab initio Simulation Package (VASP) \cite{Kresse1993, Kresse1996}. The resulting optimized lattice parameter of 5.86\,\AA~is in very good agreement with the experimental value from X-ray diffraction \cite{Lopez2020}.
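For orientation, the cubic building block and the slab construction described in the next paragraph can be assembled in a few lines with the Atomic Simulation Environment (ASE). This is an illustrative sketch only, not the authors' VASP workflow; the lattice parameter is the PBEsol value quoted above, and the 15\,\AA~of vacuum added on each side reproduces the 30\,\AA~separation between periodic images used below.

\begin{verbatim}
from ase import Atoms

a = 5.86  # PBEsol-optimized cubic lattice parameter (Angstrom)

# Primitive Pm-3m perovskite cell: Pb at the cell corner, Cs at the body
# centre, and Br bridging neighbouring Pb atoms along the three cubic axes.
unit = Atoms("CsPbBr3",
             scaled_positions=[(0.5, 0.5, 0.5),   # Cs
                               (0.0, 0.0, 0.0),   # Pb
                               (0.5, 0.0, 0.0),   # Br
                               (0.0, 0.5, 0.0),   # Br
                               (0.0, 0.0, 0.5)],  # Br
             cell=(a, a, a), pbc=True)

# 2x1x6 repetition along [100], [010] and [001], as in the slab supercells;
# which termination (PbBr2 vs. CsBr) is exposed depends on how the basis is
# shifted before the vacuum is introduced, which this sketch does not handle.
slab = unit.repeat((2, 1, 6))
slab.center(vacuum=15.0, axis=2)  # ~30 Angstrom between periodic images
\end{verbatim}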
To model the surface of CsPbBr$_3$, we designed two (001) surface slab supercells with distinct surface terminations by repeating the primitive $Pm\bar{3}m$ unit cell with the PBEsol-optimized lattice parameter twice along the [100] direction, once along the [010] direction and six times along the out-of-plane [001] direction, with the bottom three layers fixed to bulk positions and the top three layers fully mobile. Unless otherwise specified, all our calculations were performed with this unit cell setup. Surface A is PbBr$_2$-terminated and surface B is CsBr-terminated. The surface energy is converged to within 25\,meV with respect to the slab thickness. To avoid spurious interactions between periodic images, we inserted 30\,\AA~of vacuum along the (001) direction. The bulk system features the same number of layers without vacuum. We used the projector augmented wave (PAW) method \cite{Kresse2014}, a cutoff energy for the plane-wave expansion of 300\,eV and a k-point grid with $4\times4\times4$ points for the bulk and $4\times4\times1$ points for the slab system. For geometry optimizations, we used a convergence criterion of 0.05\,eV/\AA. In all structural optimizations, the volume and shape of the unit cells were kept fixed. \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{figure1.eps} \caption{Bulk and slab supercells with A (PbBr$_2$) and B (CsBr) terminations. The label L$_i$ ($i=1-6$) enumerates layers in the slab structure. Surface slabs are separated by 30\,\AA~of vacuum along the [001] direction.} \label{fig1:unit-cell-setup} \end{figure} We computed migration paths and migration energies using the climbing-image Nudged Elastic Band (cNEB) approach \cite{Henkelman2000}, an optimization method for identifying the minimum energy path between a given initial and final state. We used three cNEB images, i.e., intermediate structure snapshots between the initial and the final state of the system, to simulate the migration of the Br vacancy. In the cNEB method, the images along the reaction path are optimized such that the highest energy image is driven up to the saddle point. The migration barrier represents the amount of energy necessary for an ion or a defect to move from the initial to the final state and is calculated as the difference between the energy of the saddle point and the energy of the initial state of the transition. \section{Results and discussion} \subsection{Structural changes upon vacancy formation} We start by performing geometry optimizations of the bulk and the A- and B-terminated surfaces shown in Figure~\ref{fig1:unit-cell-setup}. As expected, the bulk system remains unaffected by further geometry optimization, while the slab structure is compressed at the surface, with axial Pb-Br bonds more than 4\,\% shorter than in the bulk and almost unchanged equatorial bonds. The average relative variation of Pb-Br bonds per layer as compared to the bulk Pb-Br bond length of 2.93\,\AA~is shown in Figure~\ref{fig2:bonds-variations}(a), where we have averaged separately over all axial and all equatorial Pb-Br bonds in each layer. \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{figure2a_2d.eps} \caption{(a) Relative variation of the bond lengths in the surface slab with respect to the undistorted bulk bond lengths. Relative bond length variation when a Br vacancy is introduced (b) at surface A; (c) at surface B; (d) in the bulk. 
The geometry-optimized structure is shown below each panel, with the position of the Br vacancy highlighted in red.} \label{fig2:bonds-variations} \end{figure*} A DFT study on intrinsic point defects in CsPbBr$_3$ showed that under Br-poor growth conditions, bromine vacancies have the lowest formation energy among all possible point defects \cite{Kang2017}. In our surface slab structures, a Br vacancy can occupy three symmetry-inequivalent positions in each layer of surface A and B: axial, equatorial along the [100] direction, and equatorial along the [010] direction. Note that these three vacancy positions also have different energies in our structural model for the bulk, which is an artifact of our asymmetric unit cell. We define the formation energy of a vacancy in the slab ($E_f^{slab}$) and in the bulk ($E_f^{bulk}$) as the difference between the energies of the unit cell with and without the vacancy. In Table~\ref{tbl1:formation_energy}, we report the binding energy $E_B = E_f^{bulk} - E_f^{slab}$ to quantify by how much a vacancy prefers to bind to the surface as compared to the bulk. $E_B$ is largest in L$_1$ and converges to zero in subsequent layers, in agreement with results for MAPbI$_3$ by Meggiolaro et al.~\cite{Meggiolaro2019}, suggesting that the surface is more prone to defects. We further find that binding to surface A is preferred over binding to surface B, in line with observations of iodine vacancy clustering at MAI-terminated surfaces in MAPbI$_3$ \cite{Zhang2019a}. For completeness, we also report $E_B$ for the two equatorial vacancy positions and note that this value is significantly larger for the vacancy along [010] because of our $2\times1\times6$ unit cell setup. In the following, we will only discuss ion migration between axial vacancies. \begin{table}[ht] \centering \caption{Binding energies $E_B$ of Br vacancies to the surface as defined in the text.} \label{tbl1:formation_energy} \begin{tabular}{cccc} \toprule Position & Layer & Termination & $E_B$ (eV)\\ \midrule \multirow{6}{*}{axial} & \multirow{2}{*}{1} & A & 0.42 \\ & & B & 0.23 \\ & \multirow{2}{*}{2} & A & 0.22 \\ & & B & 0.23 \\ & \multirow{2}{*}{3} & A & 0.05 \\ & & B & 0.02 \\ \multirow{2}{*}{equatorial along [100]} & \multirow{2}{*}{1} & A & 0.35 \\ & & B & 0.15 \\ \multirow{2}{*}{equatorial along [010]} & \multirow{2}{*}{1} & A & 2.57 \\ & & B & 2.43 \\ \bottomrule \end{tabular} \end{table} In Figure~\ref{fig2:bonds-variations}(b) and (c) we show the average Pb-Br bond length variation with respect to the undistorted bulk bond length upon introduction of the axial Br vacancy and geometry optimization. The creation of a Br vacancy at surface A leads to severe distortions of the system, featuring axial Pb-Br bonds reduced by up to 20\,\%. In contrast, introducing a vacancy at surface B leads to smaller variations and a less distorted structure. We have also calculated the variation of the Pb-Br bond length for a Br vacancy generated in the deeper-lying surface layers, and find that even though the absolute value of the bond length variation differs for the two surface terminations, in both cases the variation in axial bond length is $\sim$5 times larger than the variation in equatorial bonds. Furthermore, the deeper the vacancy is created, the less compressed the structure is at the surface. In Figure~\ref{fig2:bonds-variations}(d) we show that formation of a Br vacancy within the bulk has similar consequences, i.e., a bond length compression in the vicinity of the vacancy. 
However, the distortions in the bulk are highly suppressed due to a more rigid structure, with fewer degrees of freedom in comparison with the surface slabs, leading to a zero average variation of the bond lengths when averaging over all mobile layers. Note that the large compression of more than 20\% in L$_1$ of surface A is an artifact of the asymmetric unit cell. In a $2\times2\times6$ cell, the compression of axial bonds is smaller than in the $2\times1\times6$ cell, with a relative bond length compression of 4.6\% in L$_1$ of surface A without a vacancy, 3.4\% in L$_1$ of surface A with a vacancy and hardly any variation with respect to the undistorted bulk for the case of a vacancy in the bulk. However, the trends for subsequent layers are similar to the $2\times1\times6$ unit cell. \subsection{Br vacancy migration in CsPbBr$_3$} Next, we use the cNEB method to determine the energy barrier of Br vacancy migration between two adjacent axial vacancy positions at both surfaces and for vacancy migration in layers L$_2$ and L$_3$ of the A-terminated surface. The migration barrier is calculated as the difference between the total energies of the initial state and the saddle point. Table~\ref{tbl2:activation_energy} summarizes our results. For the bulk structure, we compute a migration barrier of 0.65\,eV. Our specific unit cell setup is by design not suitable for direct comparison with experimental results or bulk calculations of halide migration in symmetric structural models, such as the one used in Ref.~\cite{Zhang2020}. However, our setup allows us to realize the same defect concentration and in-plane boundary conditions in the bulk and slab unit cells, and hence compare trends. It is worth mentioning that our calculations are in good agreement with the experimental values ranging between 0.72 and 0.66\,eV reported in the literature \cite{Mizusaki1983, Narayan1987}. However, our result is 140\,meV larger than the calculated migration barrier reported by Zhang et al.~\cite{Zhang2020}. This discrepancy may be explained based on different structures, defect concentrations, and approximations for the exchange correlation energy (Ref.~\cite{Zhang2020} uses the orthorhombic phase of CsPbBr$_3$ and the PBE approximation). Furthermore, we expect that our calculations represent an upper bound on the migration barriers, since we are neglecting the large, anharmonic vibrations reported for CsPbBr$_3$ and other halide perovskites at room and higher temperatures \cite{Yaffe2017}. \begin{table*}[htb] \caption{Calculated energies (in eV) of Br vacancy migration across two adjacent axial Br positions, and deviation (in \AA) from the straight migration path in CsPbBr$_3$ perovskite} \label{tbl2:activation_energy} \centering \begin{tabular}{cccccc} \toprule System & Mobile layers & Termination & Layer & Migration energy (eV) & $\delta$ (\AA)\\ \midrule bulk & & & & 0.65 & 0.13\\\midrule \multirow{5}{*}{slab} &\multirow{5}{*}{3} & \multirow{3}{*}{A} & 1 & 0.40 & 1.24\\ & & & 2 & 0.29 & 0.94\\ & & & 3 & 0.30 & 1.04\\ & & \multirow{2}{*}{B} & 1 & 0.31 & 0.77\\ & & & 2 & 0.30 & 0.67\\\midrule \multirow{4}{*}{slab} &\multirow{4}{*}{4} & \multirow{4}{*}{A} & 1 & 0.38 & 1.24\\ & & & 2 & 0.26 & 1.04\\ & & & 3 & 0.27 & 1.06\\ & & & 4 & 0.29 & 1.10\\ \bottomrule\end{tabular} \end{table*} Table~\ref{tbl2:activation_energy} and Figure~\ref{fig3:activation-energy-migration-path}(a) and (b) show the activation energy for Br migration at the A- and B-terminated surfaces, and within subsurface layers of surface A. 
Our first main finding is that the migration energy at both surfaces is substantially lower than that in the bulk, by more than a factor of two at the B surface. We have confirmed that our finding of a significantly lower migration energy at the surface also holds in a $2\times2\times6$ unit cell setup, where we calculate migration energies of 0.48\,eV in the bulk and 0.28\,eV at the surface (with values of 0.20\,eV and 0.26\,eV in surface layers L$_2$ and L$_3$). Furthermore, we find that migration barriers at surface A and B are very similar; the migration barrier at surface A is only 90\,meV larger than that at surface B. Interestingly, the migration barrier in subsurface layer L$_2$ is $\sim$110 meV lower than directly at the surface, and only slowly increases in subsequent subsurface layers. We show in Figure~\ref{fig3:activation-energy-migration-path}(a) that the variation of the migration barrier with the layer number is correlated with the relative variations of axial and equatorial bond lengths with respect to the bulk. Smaller migration energies are associated with a significant axial compression of the surface. The slightly higher migration barrier at surface A as compared to subsequent surface layers is correlated with a subtle interplay between longer equatorial and shorter axial bond lengths as compared to the bulk. However, note that trends in migration barriers as a function of surface depth should be viewed with caution: in our calculations, the bottom layers of the surface slab are fixed to the bulk atomic positions, a constraint that might affect the magnitude of the calculated migration barriers in the layers adjacent to the fixed layers. We therefore calculated migration barriers for a surface slab with four mobile layers as well. The migration barriers for this system, also shown in Figure~\ref{fig3:activation-energy-migration-path}(a) and Table~\ref{tbl2:activation_energy}, are slightly lower but follow the same trends. Observation of bulk-like migration barriers deeper into the surface would likely require structural models with more surface layers. Our results are also in line with observations by Oranskaia et al.~showing that larger lattice distortions lead to smaller migration energies for the through-cell migration of a Br vacancy in organic-inorganic perovskites \cite{Oranskaia2018}. \begin{figure}[ht] \centering \includegraphics[width=0.8\columnwidth]{figure3a_3c.eps} \caption{(a) Migration energy and average bond length variation in the slab structure as a function of the layer in which Br migration takes place. The migration energy in the bulk is also shown in blue for comparison. Open (closed) symbols correspond to the surface slab with three (four) mobile top layers. (b) Energy profiles and (c) migration paths, as computed with the cNEB approach for migration in the bulk (red), at the A surface (blue) and at the B surface (green). The structures correspond to the average atomic configurations of the two equivalent endpoints of each cNEB calculation overlaid with the position of the migrating Br ion along the migration path. 
The definition of the deviation $\delta$ from the linear path is shown in the structure corresponding to the A surface.} \label{fig3:activation-energy-migration-path} \end{figure} A detailed analysis of the migration paths of the Br ion, associated with the energy profiles shown in Figure~\ref{fig3:activation-energy-migration-path}(b), reveals significant qualitative differences between the migration paths in the bulk and at the two surfaces, see Figure~\ref{fig3:activation-energy-migration-path}(c). In the bulk, the halide ion moves along an almost straight line from one axial vacancy position to the other, the shortest possible path. In contrast, the migration path is curved at both surfaces, with the saddle point deviating from the straight line. Analogous curved paths for vacancy migration between an equatorial and an axial position have previously been reported for oxide perovskites based on neutron diffraction \cite{Malavasi2010,Yashima2003} and computational \cite{SaifulIslam2000, Munoz-Garcia2014} studies. More recently, similar curved paths have been reported for Pb-based halide perovskites as well \cite{Eames2015, Zhang2020}. We quantify the curvature of the migration path, $\delta$, by computing the perpendicular distance of the Br ion in the cNEB saddle point configuration to the straight line between the initial and final positions of the Br ion, schematically represented in Figure~\ref{fig3:activation-energy-migration-path}(c). As reported in Table~\ref{tbl2:activation_energy}, we find that in the bulk, the Br ion follows an almost straight line, with a deviation more than 7 times smaller than at the A and B surfaces. This finding highlights the more flexible nature of the surface, which can deform and accommodate a defect more easily, explaining the lower energy of Br vacancy migration as compared to the bulk. Finally, we observe that at both surfaces the saddle point is bowed away from the surface, with $\delta$ at the A surface almost double what it is at the B surface. We find that $\delta$ is correlated with the compression of axial Pb-Br bond lengths and can be traced back to the structural symmetry of the two surfaces: formation of a Br vacancy is associated with the breaking of one bond at the B surface, leading to less restructuring than at surface A, where two bonds are broken and both the Pb-Br layers above and below the vacancy adjust to vacancy formation and migration. \subsection{Surface passivation with alkali halide monolayers} Motivated by the correlation between migration barriers and surface restructuring, we investigate the effect of surface modification on Br vacancy migration energies. Surface modification is a common strategy for passivating surface and interfacial defect states in halide perovskites \cite{Xue2020}. Chemical surface treatment with organic ligands has been shown to increase photoluminescence lifetimes and quantum yields \cite{Dane2015, DeQuilettes2016}. However, organic ligands may lead to problems with stability. Therefore, alkali halides have recently been suggested as interface modifiers between the halide perovskite absorber and the electron- or hole-transport layers in solar cells, with some studies showing that they lead to enhanced stability and device performance \cite{Liu2018, Chen2019b}. 
Moreover, a first principles study by Apergi et al.~demonstrated that alkali halide surface modifiers allow for improved electronic level alignment between the halide perovskite absorber and the NiO hole transport layer, with wide tunability to match the levels of various perovskite compositions \cite{Apergi2020}. Here, we investigate four alkali halide monolayers, NaBr, NaCl, KBr and KCl, and their effect on Br vacancy migration at the surface of CsPbBr$_3$. We construct our passivated systems by placing the monolayer on top of surface A and relaxing the structures. In Figure~\ref{fig4:passivation}(a) we show the particular case of a slab structure passivated with a NaCl monolayer. Upon geometry optimization we find that, for NaCl- and NaBr-passivated surfaces, the axial Pb-Br bonds of the surface slab are significantly less compressed than those of the unpassivated structure. In Figure~\ref{fig4:passivation}(b), we show that the variation of Pb-Br axial bonds at the A-surface is less than 2\,\% and that of equatorial bonds is negligible. In comparison with the undistorted CsPbBr$_3$ bulk structure, K-based monolayers feature longer bonds, leading to larger distortions induced by the lattice mismatch between the perovskite and the passivation layer. In fact, passivating the surface with a KBr monolayer does not reduce distortions and yields very similar bonds as compared to the unpassivated system. In contrast, NaCl and NaBr monolayers have bond lengths that differ by only 0.08\,\AA~from the bulk. We therefore hypothesize that passivation with NaCl and NaBr should lead to an increase of the Br vacancy migration barrier at the surface. \begin{figure}[ht] \centering \includegraphics[width=0.9\columnwidth]{figure4a_4b.eps} \caption{(a) Slab supercell passivated with a NaCl monolayer at the A-terminated surface (top and side views). (b) Pb-Br bond length variation at surface A of the slab structure as a function of the passivation layer. For comparison, the Pb-Br bond lengths at surface A of the unpassivated slab structure are represented as squares.} \label{fig4:passivation} \end{figure} Following the same approach as before, we introduce a Br vacancy in the first A-surface layer of the NaCl-passivated and NaBr-passivated surface slab structures, respectively, and calculate the energy of Br vacancy migration in the surface layer. For the NaCl-passivated system, we find a migration barrier of 0.57\,eV, only 80\,meV lower than that computed for migration in the bulk. Interestingly, however, the vacancy follows a curved migration path, with a larger deviation from the straight line ($\delta=1.68$\,\AA) than in the unpassivated system. Furthermore, we find that passivating the surface with NaBr leads to a migration barrier of 0.48\,eV, slightly larger than that computed for migration at the surface of the unpassivated slab system. This result reinforces our hypothesis that larger variations of Pb-Br bond lengths lead to smaller migration energies, and indicates the potential of simple alkali halide salts for suppressing halogen vacancy migration at surfaces of halide perovskites. \section{Conclusions} In conclusion, we performed a first principles DFT study of Br vacancy migration in CsPbBr$_3$ and showed that the migration barrier within the close-packed bulk structure of cubic CsPbBr$_3$ is roughly twice as large as that at either of the distinctly terminated (001) surfaces of the system. 
Our calculations suggest that the significant reduction of the migration barrier at the surface is due to the ``softer'' structure of the surface, which allows for larger bond length variations than in the bulk. Motivated by this observation, we studied the effect of surface modification with alkali halide monolayers and demonstrated that passivation with NaCl significantly decreases the structural distortions seen in the unpassivated surface, in particular the compression of the axial Pb-Br bonds. Consequently, NaCl passivation leads to an increase of the Br vacancy migration barrier at the surface back to almost the value it has in the bulk. Our results highlight the important role of surfaces in determining perovskite stability by facilitating ion migration. The dependence of vacancy activation barriers on Pb-Br bond lengths, in particular the importance of axial bond length compression, suggests that strain engineering, for example via epitaxial growth, could be another viable route for suppressing ion migration in halide perovskites \cite{Chen2020}. We believe that future computational studies should be directed towards elucidating the role of grain boundaries, in particular in polycrystalline MAPbI$_3$. With the advent of machine-learning force fields with DFT accuracy, reliable structural models of large supercells of organic-inorganic halide perovskites and the inclusion of temperature effects in large-scale molecular dynamics simulations have become computationally feasible \cite{Jinnouchi2019}. \begin{acknowledgments} We thank S. H\"uttner for valuable discussions. This work was supported by the Bavarian State Ministry of Science and the Arts through the Collaborative Research Network Solar Technologies go Hybrid (SolTech), the Elite Network Bavaria, and the German Research Foundation (DFG) through SFB840 B7, and through computational resources provided by the Bavarian Polymer Institute (BPI). R.-I. Biega acknowledges support by the DFG program GRK1640. \end{acknowledgments}
\section{Introduction} Modern highly contentious transactional workloads suffer from hotspots. A hotspot is one or a small number of database records that are frequently accessed by a large number of concurrent transactions. Conventional concurrency control protocols need to serialize such transactions in order to support strong isolation like serializability, even though the hotspot may comprise only a small fraction of a transaction’s execution time. \figref{fig:hotspot} illustrates the effect using a single hotspot of tuple $A$. For both pessimistic (\figref{fig:hotspot-2pl}) and optimistic (\figref{fig:hotspot-occ}) concurrency control, transactions wait or abort/restart at the granularity of entire transactions. Ideally, we want a concurrency control protocol to \textit{serialize transactions only for the duration of the hotspots} (e.g., in \figref{fig:hotspot-ideal}, transaction \txn{2} can access the hotspot immediately after \txn{1} finishes writing it) but execute the rest of the transactions in parallel. If the hotspot comprises only a small fraction of the transaction's runtime, such an ideal protocol can improve performance substantially. \begin{figure}[t]% \centering \begin{subfigure}[t]{0.32\columnwidth} \includegraphics[width=.75\linewidth,valign=t]{./new_figures/hotspot-ww.pdf} \caption{2PL} \label{fig:hotspot-2pl} \end{subfigure} \begin{subfigure}[t]{0.32\columnwidth} \includegraphics[width=.75\linewidth,valign=t]{./new_figures/hotspot-occ.pdf} \caption{OCC} \label{fig:hotspot-occ} \end{subfigure} \begin{subfigure}[t]{0.32\columnwidth} \includegraphics[width=.75\linewidth,valign=t]{./new_figures/hotspot-ideal.pdf} \caption{ideal} \label{fig:hotspot-ideal} \end{subfigure} \vspace{-.1in} \caption{Schedules of transactions with a hotspot A under 2PL, OCC, and an ideal case. \revised{("Write" means read-modify-write)}} \label{fig:hotspot} \end{figure} \revised{Many production systems (MS Orleans \cite{orleans-txn}, IBM IMS/VS~\cite{gawlick1985varieties}, Hekaton \cite{diaconu13, larson11}, etc.) and research work \cite{agrawal1995ordered, ding2018improving, faleiro15, graefe2013controlled, opt-distributed, johnson2010aether, kimura2012efficient, mu2019deferred, shasha1995transaction, xie2015high, yan2016leveraging} mitigate hotspots by adding extra complication, but cannot achieve the ideal protocol mentioned above.} In particular, the ideal protocol needs to read dirty data written by another transaction that has not committed yet. For pessimistic concurrency control, this violates the conventional definition of 2PL --- \txn{1} can acquire new locks after releasing locks to other transactions. For OCC and \edits{hybrid concurrency control protocols such as MOCC~\cite{wang2016mostly} and CormCC~\cite{tang2018toward}}, a transaction makes its writes visible only after the execution finishes, which is inherently incompatible with the notion of accessing dirty writes early. Transaction chopping~\cite{shasha1995transaction,weikum2001transactional,zhang2013transaction} and its recent improvements (e.g., IC3~\cite{wang2016scaling} and Runtime Pipelining~\cite{xie2015high}) are a line of research that tried to enable early reads of dirty data. Transaction chopping performs program analysis to decompose a transaction into sub-transactions and allows a sub-transaction to make local updates visible immediately after it finishes. While these techniques can substantially improve performance, they have several severe limitations. 
\textbf{First}, these methods require the \textit{full knowledge of the workload}, including the number of transaction templates and the columns/tables each transaction will access. Any new ad-hoc transaction incurs an expensive re-analysis of the entire workload. \textbf{Second}, chopping must follow specific criteria to avoid deadlocks and ensure serializability, which limits the level of parallelism that can potentially be exploited (see \secref{sec:txn-chopping} for details). \textbf{Third}, conservative conflict detection based on the limited information available before execution can enforce unnecessary waiting. For example, in IC3, two transactions accessing the same column of different tuples may end up causing contention. \edits{In this paper, we aim to explore the design space of allowing dirty reads for general database transactions without the extra assumptions made in transaction chopping.} To this end, we propose \textbf{Bamboo\xspace}, a pessimistic concurrency control protocol allowing transactions to read dirty data during the execution phase (thus violating 2PL), while still providing serializability. Bamboo\xspace is based on the Wound-Wait\xspace variant of 2PL and can be easily integrated into existing locking schemes. It allows a transaction to \textit{retire} its lock on a tuple after its last update to that tuple, so that other transactions can access the data. Annotations of the last write can be provided by programmers or by program analysis. To enforce serializability, Bamboo\xspace tracks the dependencies introduced by dirty reads through the lock table and aborts transactions when a dependency is violated. One well-known problem of violating 2PL is the introduction of \textit{cascading aborts}~\cite{agrawal1995ordered} --- an aborted transaction causes all transactions that have read its dirty data to also abort. If not properly controlled, cascading aborts lead to a significant waste of resources and performance degradation. Through Bamboo\xspace, this paper explores the design space and trade-offs of cascading aborts, evaluates their overhead, and proposes optimizations to mitigate these aborts. In summary, this paper makes the following contributions. \begin{itemize}[leftmargin=.2in] \item We developed Bamboo\xspace, a new concurrency control protocol that violates 2PL to improve parallelism for transactional workloads without requiring knowledge of the workload ahead of time. Bamboo\xspace is provably correct. \item We conducted a thorough analysis (both qualitative and quantitative) of the cascading abort effect, and proposed optimizations to mitigate such aborts. \item We evaluated Bamboo\xspace in the context of both \textit{interactive transactions} and \textit{stored procedures}. \edits{In TPC-C, Bamboo\xspace demonstrated a performance improvement of up to 2$\times$ for stored procedures and 4$\times$ for interactive transactions compared to the best baseline (i.e., Wait-Die and Silo, respectively). Bamboo\xspace also outperforms IC3 by 2$\times$ when the attributes of hotspot tuples in TPC-C are truly shared by transactions.} \end{itemize} \vspace{-.15in} \subsection{Experimental Analysis on Bamboo without Cascading Aborts (Single Hotspot)} \label{ssec:exp-1hs} In this section, we evaluate the potential benefits of Bamboo\xspace in the ideal case where only one hotspot is present and thus no cascading aborts are induced. 
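For concreteness, a minimal sketch of how a single-hotspot transaction of this kind might be generated is shown below. The 16-operation length, the single read-modify-write hotspot and the surrounding random reads follow the workload description in this section, while the key ranges, constants and function names are purely illustrative.

\begin{verbatim}
import random

NUM_COLD = 10_000_000   # illustrative size of the non-contended keyspace
TXN_LEN = 16            # operations per transaction, as in the experiments

def make_single_hotspot_txn(hotspot_pos=0):
    # One read-modify-write on the shared hotspot tuple at the given
    # position; every other operation is a read of a random cold key.
    ops = [("READ", random.randrange(1, NUM_COLD + 1)) for _ in range(TXN_LEN)]
    ops[hotspot_pos] = ("RMW", 0)   # key 0 plays the role of the hotspot
    return ops
\end{verbatim}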
\vspace{.05in} \noindent\textbf{Single Hotspot at Beginning} \vspace{.05in} We first design a synthetic workload with all random reads but a single \revised{read-modify-write} hotspot at the beginning. In stored-procedure mode, Bamboo\xspace shows a 6$\times$ improvement over the best-performing 2PL-based protocol (Wait-Die) due to savings on waiting. In the interactive mode, Bamboo\xspace is up to 7$\times$ better than the best baseline (Wound-Wait). \vspace{.05in} \noindent\textbf{Varying Transaction Length} \vspace{.05in} \begin{figure}[t]% \begin{subfigure}[t]{0.48\linewidth} \includegraphics[width=0.85\linewidth]{./figures/hs1_req.pdf} \caption{Varying transaction length} \label{fig:one_hs_vs_threads} \end{subfigure} \begin{subfigure}[t]{0.03\linewidth} \includegraphics[width=\linewidth]{./new_figures/1hs-top.pdf} \end{subfigure} \hspace{-.05in} \begin{subfigure}[t]{0.43\linewidth} \centering \includegraphics[width=0.9\linewidth]{./figures/hs1_pos.pdf} \caption{Varying hotspot position} \label{fig:one_hs_vs_hs_pos} \end{subfigure} \hspace{-.05in} \begin{subfigure}[t]{0.03\linewidth} \includegraphics[width=\linewidth]{./new_figures/1hs-bot.pdf} \end{subfigure} \vspace{-.1in} \caption{Performance on the synthetic benchmark with one hotspot at the beginning under various settings, stored-procedure mode} \end{figure} \revised{In Figure~\ref{fig:one_hs_vs_threads}, we vary the length of the transactions and report the speedup of Bamboo\xspace (BB) over Wound-Wait\xspace (WW). First, the results show that Bamboo\xspace achieves greater speedup, up to 19$\times$, for longer transactions, which corresponds to a larger $A$ increasing the benefit in the modeling shown in~\secref{ssec:cascading-effect}. } Second, the speedup first increases as the number of threads increases, and then saturates or drops as even more threads are added. This is due to the limited inherent parallelism in the workload. \vspace{.05in} \noindent\textbf{Varying Hotspot Position} \vspace{.05in} \revised{Instead of fixing the hotspot at the beginning, we vary the position of the hotspot in this experiment. The result is shown in Figure~\ref{fig:one_hs_vs_hs_pos}.} The two small figures on each side of the x-axis correspond to the position of the hotspot when $x=0$ (beginning of transaction) and $x=1$ (end of transaction), respectively. We show the results for workloads with transactions of 16 operations; other transaction lengths show similar trends. Bamboo\xspace provides a higher speedup over Wound-Wait\xspace when the hotspot is accessed earlier in a transaction. \revised{The result also aligns with the model, as an early access gives a larger $A_{ww}$ and thus a greater benefit from $A_{ww} - A_{bb}$. } {\bf Summary:} Through reducing lock waiting time, Bamboo\xspace can improve performance significantly (up to 19$\times$ over Wound-Wait\xspace). Several factors affect the performance gain --- Bamboo\xspace shows larger speedup under a higher level of parallelism (i.e., more threads), longer transactions, and ``earlier'' hotspot accesses. \subsection{Experimental Analysis on Bamboo with Cascading Aborts (Multiple Hotspots)}\label{ssec:exp-cascading} Next, we present an empirical evaluation of Bamboo\xspace serving workloads that can induce cascading aborts, to understand their effects. We start with synthetic workloads with two \revised{read-modify-write} hotspots and fourteen random reads. 
\revised{Here we use a dataset of more than 100 GB.} To study the tradeoff between waits and aborts, we precisely control the workloads in two ways: (1) we fix the first hotspot at the beginning of each transaction while moving the second around. In this case, the benefit Bamboo\xspace gains over Wound-Wait\xspace is fixed and the chance of cascading aborts increases as the distance between the two hotspots increases. We use this case to study how different magnitudes of cascading aborts can affect the gains. (2) we fix the second hotspot at the end of each transaction and move the other around, to study the case where the benefits and the chance of cascading aborts increase simultaneously for Bamboo\xspace. \vspace{.05in} \noindent\textbf{Fix One Hotspot at the Beginning} \vspace{.05in} \revised{In this experiment, we also show BAMBOO-base, which does not have the second optimization of not retiring the last few operations. As shown in \figref{fig:hs2_fix0_th}, Bamboo\xspace outperforms Wound-Wait\xspace for all distances. \figref{fig:hs2_fix0_runtime} shows how Bamboo\xspace gains speedup by trading more aborts for less blocking. The improvement of Bamboo\xspace can be up to 3$\times$. When the distance $x=0.75$ (i.e., there are 10 operations between the two hotspots), Bamboo\xspace outperforms Wound-Wait by 37\%, although the abort rate of Bamboo\xspace is 72\% higher. This result can be explained by the model, which indicates an improvement of at least 21.7\% in Bamboo\xspace following $(A_{ww}-A_{bb})P_{\textit{conflict}} - B_{bb}\cdot P_{\textit{cas\_abort}} = 15/16 \cdot 1 - 0.72\,B_{bb} \ge 0.217$. The two versions of Bamboo\xspace differ only when the second hotspot is at the end of the transaction ($x=1.0$). With the optimization, the last hotspot will not be retired, which greatly reduces the bookkeeping overhead. \figref{fig:hs2_fix0_runtime} also illustrates that, with the optimization, Bamboo\xspace retains its savings from reduced blocking while not suffering from aborts. } \begin{figure}[t]% \centering \begin{subfigure}[t]{0.03\columnwidth} \includegraphics[width=\linewidth]{./figures/2hs-top.pdf} \end{subfigure} \begin{subfigure}[t]{0.45\columnwidth} \includegraphics[width=\linewidth]{./figures/hs2_fix0.pdf} \caption{throughput} \label{fig:hs2_fix0_th} \end{subfigure} \begin{subfigure}[t]{0.03\columnwidth} \includegraphics[width=\linewidth]{./new_figures/2hs-tb.pdf} \end{subfigure} \begin{subfigure}[t]{0.45\columnwidth} \includegraphics[width=\linewidth]{./figures/hs2_fix0_runtime.pdf} \caption{runtime analysis} \label{fig:hs2_fix0_runtime} \end{subfigure} \caption{\textbf{One hotspot at the beginning} for Bamboo\xspace (left) and Wound-Wait\xspace (right), stored-procedure mode (32 threads)} \label{fig:hs2_fix0} \end{figure} \vspace{.05in} \noindent\textbf{Fix One Hotspot at the End} \vspace{.05in} We now fix the second hotspot at the end and change the position of the first hotspot. Compared to the workload in \figref{fig:hs2_fix0}, this workload has less advantage for Bamboo\xspace to begin with and yet introduces more cascading aborts as the benefit increases. \revised{\figref{fig:hs2_fix1_runtime} shows that the time spent on aborts in Bamboo\xspace never exceeds the time spent on waiting in Wound-Wait\xspace. However, Bamboo\xspace without the second optimization (i.e., BAMBOO-base) may suffer from the overhead when it has barely any benefit, e.g., at $x = 0$, where the theoretical improvement is only 1/16. We note that such cost is a function of the workload and the underlying system. 
It may be significant in stored-procedure mode, as shown here, which makes our optimization necessary to mitigate the problem. With other system setups, such as the interactive mode shown in later experiments, the trade-off between such overhead and the gain from retiring can change greatly. } \begin{figure}[t]% \centering \begin{subfigure}[t]{0.03\columnwidth} \includegraphics[width=\linewidth]{./new_figures/2hs-tail.pdf} \end{subfigure} \begin{subfigure}[t]{0.45\columnwidth} \center \includegraphics[width=\linewidth]{./figures/hs2_fix1.pdf} \caption{throughput} \label{fig:hs2_fix1_th} \end{subfigure} \begin{subfigure}[t]{0.03\columnwidth} \includegraphics[width=\linewidth]{./figures/2hs-tb.pdf} \end{subfigure} \begin{subfigure}[t]{0.45\columnwidth} \center \includegraphics[width=\linewidth]{./figures/hs2_fix1_runtime.pdf} \caption{runtime analysis} \label{fig:hs2_fix1_runtime} \end{subfigure} \caption{\textbf{The second hotspot at the end} for Bamboo\xspace (left) and Wound-Wait\xspace (right), stored-procedure mode (32 threads)} \label{fig:hs2_fix1} \end{figure} {\bf Summary:} The net gain of Bamboo\xspace over Wound-Wait\xspace is a tradeoff between the benefit of reducing lock waiting time and the cost of cascading aborts and other overhead. Our measurements show that the benefit of reducing lock waiting time is usually greater than the cost of aborts. However, Bamboo\xspace can suffer from overhead with certain system setups when the benefit is minimal; in such cases, our optimization of conditionally retiring some write operations should be applied. \vspace{-.15in} \subsection{Experiments on YCSB} \label{ssec:ycsb} We now move to an even more complex workload --- YCSB with a Zipfian distribution. We will show how Bamboo\xspace performs compared to other baselines as the number of threads, the data accessing distribution, and the read ratio vary. Note that in general Bamboo\xspace only targets the high-contention setup in this workload, as it is where hotspots (that most transactions would access) are present. The Yahoo! Cloud Serving Benchmark (YCSB)~\cite{cooper10} is a collection of workloads that are representative of large-scale services created by Internet-based companies. For all experiments in this section, we use a \revised{large scale database of more than 100 GB}, containing a single table with \revised{100 million} records. Each YCSB tuple has a single primary key column and then 10 additional columns each with 100 bytes of randomly generated string data. The DBMS creates a single hash index for the primary key. Each transaction in the YCSB workload by default accesses 16 records in the database. Each access can be either a read or an update. We control the overall read/write ratio of a transaction based on a specified $read\_ratio$. We also control the workload contention level by adjusting $\theta$, a parameter controlling the Zipfian data distribution. For example, when $\theta = 0$, all tuples have equal chances of being accessed. When $\theta=0.6$ or $\theta=0.8$, a hotspot of 10\% of the tuples in the database is accessed by $\sim$40\% and $\sim$60\% of all transactions, respectively. 
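As a rough illustration of how $\theta$ translates into contention, the sketch below computes the share of tuple accesses that fall on the hottest 10\% of keys, assuming the standard Zipfian form in which the $i$-th hottest key is drawn with probability proportional to $1/i^{\theta}$. The table size is reduced to $10^7$ keys for speed (the fraction is nearly independent of table size), and the exact generator used in the evaluated system may differ in detail; for $\theta=0.6$ and $\theta=0.8$ the computed shares (about 40\% and 62\%) are close to the figures quoted above.

\begin{verbatim}
import numpy as np

def hot_fraction(theta, n_keys=10_000_000, hot_share=0.10):
    # p_i proportional to 1 / i^theta for the i-th hottest key
    ranks = np.arange(1, n_keys + 1, dtype=np.float64)
    weights = ranks ** (-theta)
    n_hot = int(n_keys * hot_share)
    return weights[:n_hot].sum() / weights.sum()

for theta in (0.0, 0.6, 0.8, 0.9):
    print(f"theta = {theta}: hottest 10% of keys receive "
          f"{100 * hot_fraction(theta):.0f}% of accesses")
\end{verbatim}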
\begin{figure}[t]% \begin{subfigure}[t]{0.48\columnwidth} \center \includegraphics[width=\linewidth]{./figures/ycsb_thds_100g.pdf} \caption{throughput} \label{fig:ycsb_threads} \end{subfigure} \begin{subfigure}[t]{0.48\columnwidth}% \center \includegraphics[width=\linewidth]{./figures/ycsb_thds_100g_runtime.pdf} % \caption{runtime analysis} \label{fig:ycsb_threads_runtime} \end{subfigure} \vspace{-.1in} \caption{YCSB with varying thread count, stored-procedure mode ($\theta = 0.9$, $read\_ratio=0.5$)} \end{figure} \begin{figure}[t]% \centering \begin{subfigure}[t]{0.49\columnwidth} \center \includegraphics[width=\linewidth]{./figures/ycsb_long_txn.pdf} \caption{throughput} \label{fig:ycsb_long_txn} \end{subfigure} \begin{subfigure}[t]{0.49\columnwidth}% \center \includegraphics[width=\linewidth]{./figures/ycsb_long_txn_runtime.pdf} % \caption{runtime analysis} \label{fig:ycsb_long_txn_runtime} \end{subfigure} \vspace{-.1in} \caption{YCSB with 5\% long read-only transactions accessing 1000 tuples, stored-procedure mode ($\theta = 0.9$, $read\_ratio=0.5$)} \end{figure} {\bf Varying Number of Threads.} \revised{ \figref{fig:ycsb_threads} shows Bamboo\xspace's improvement over Wound-Wait\xspace with different numbers of threads in highly contentious YCSB ($\theta=0.9$) configured in stored-procedure mode. \figref{fig:ycsb_threads_runtime} shows that Bamboo\xspace's benefits come from reducing waiting time without introducing many aborts. With 64 threads, Bamboo\xspace achieves the maximum speedup over Wound-Wait\xspace, which is up to 1.77$\times$. All 2PL-based protocols show degradation after 32 threads, and Bamboo\xspace underperforms SILO when the thread count is more than 96. This is mainly due to the intrinsic lock thrashing problem in 2PL~\cite{thomasian1993two}.} \revised{ {\bf Long Read-Only Transaction.} This experiment uses a workload with 5\% long read-only transactions accessing 1000 tuples and 95\% read-write transactions accessing 16 tuples. \figref{fig:ycsb_long_txn} shows that Bamboo\xspace outperforms all other protocols most of the time. Compared with waiting-based protocols, Bamboo\xspace benefits greatly from reducing waiting while rarely aborting. It shows an improvement of up to 5$\times$ over Wound-Wait\xspace. Bamboo\xspace's optimization of no RAW conflict also contributes to this scenario, as long read-only transactions will neither block writes nor cause cascading aborts. SILO experiences performance degradation in this case since long transactions may starve and aborts dominate the runtime, as shown in \figref{fig:ycsb_long_txn_runtime}. Bamboo\xspace also outperforms No-Wait, as Bamboo\xspace respects the priorities of transactions and commits 20\% more long transactions than No-Wait when the thread count is 120. } {\bf Varying Read Ratio.} We examine how varying $read\_ratio$ influences the performance of different protocols in stored-procedure mode. Bamboo\xspace shows improvements over all other protocols regardless of the read ratio. \revised{The percentage of improvement ranges from 27\% to 71\%.} {\bf Varying Data Accessing Distribution.} \figref{fig:ycsb_zipfian} shows how Bamboo\xspace performs in both stored-procedure mode and interactive mode as the $\theta$ of the Zipfian distribution changes. As shown in \figref{fig:ycsb_zipfian_a}, Bamboo\xspace outperforms all 2PL-based protocols under high contention (e.g. $\theta > 0.7$). \revised{Compared to Wound-Wait\xspace, Bamboo\xspace provides up to 72\% improvements in throughput. 
For cases of lower contention, Bamboo\xspace has $\sim$10\% degradation in throughput compared with Wound-Wait\xspace due to overhead. However, such degradation diminishes in the interactive mode, where the expensive network communication dominates. In the interactive mode, Bamboo\xspace is comparable to Wound-Wait\xspace, with $\sim$8\% improvement when $\theta \leq 0.8$, and shows up to a 2$\times$ speedup over Wound-Wait\xspace. Similarly, }SILO's low-level performance optimizations (e.g., lock-free data structures) and the cache warming-up effect due to many aborts make its performance significantly better than the 2PL-based protocols in stored-procedure mode. However, the performance advantage of SILO disappears in the interactive mode, where aborts are significantly more expensive due to the gRPC calls involved. Bamboo\xspace outperforms all other protocols \revised{including SILO} in interactive mode, where the cache warming-up effect has less impact.
\begin{figure}[t]%
\vspace{.05in}
\centering
\begin{subfigure}[t]{0.49\linewidth}
\center
\includegraphics[width=0.97\linewidth]{./figures/ycsb_zipf_100g.pdf}
\caption{throughput}
\label{fig:ycsb_zipfian_a}
\end{subfigure}
\begin{subfigure}[t]{0.49\linewidth}
\center
\includegraphics[width=\linewidth]{./figures/ycsb_zipf_100g_runtime.pdf}
\caption{runtime analysis}
\label{fig:ycsb_zipfian_b}
\end{subfigure}
\vspace{-.1in}
\caption{\textbf{YCSB with varying distribution} --- throughput vs. Zipfian skew level for YCSB, stored-procedure mode, ($read\_ratio = 0.5$, 16 threads)}
\label{fig:ycsb_zipfian}
\end{figure}
\vspace{-.15in} \section{Background and Motivation} \label{sec:background} \secref{sec:2pl} describes the background of two-phase locking, with a special focus on the Wound-Wait\xspace variant on which Bamboo\xspace is based. Then, \edits{\secref{sec:txn-chopping} discusses how transaction chopping mitigates the hotspot issue.} \subsection{Two-Phase Locking (2PL)} \label{sec:2pl} Two-phase locking (2PL) is the most widely used class of concurrency control in database systems. In 2PL, reads and writes are synchronized through explicit locks in shared (\textit{SH}) or exclusive (\textit{EX}) mode. A transaction operates on a tuple only if it has become an ``owner'' of the corresponding lock. According to Bernstein and Goodman \cite{bernstein1981concurrency}, 2PL enforces two rules in acquiring locks: 1) conflicting locks are not allowed at the same time for the same data; 2) a transaction cannot acquire more locks once it releases any. The second rule requires every transaction obtaining locks to follow two phases: a \textit{growing phase} and a \textit{shrinking phase}. A transaction can acquire locks in the growing phase but enters the shrinking phase once it releases a lock. In the shrinking phase, no more locks may be acquired. This rule guarantees serializability of executions by ensuring that there are no dependency cycles among transactions. For a lock request that violates the first rule, 2PL may put the requesting transaction in the waiting queue until the lock is available. Two major approaches exist to avoid deadlocks due to cycles of waiting: \textit{deadlock detection} and \textit{deadlock prevention}. The former explicitly maintains a central \textit{wait-for} graph and checks for cycles periodically. The graph becomes a scalability bottleneck on highly parallel modern hardware~\cite{yu2014}. Deadlock prevention techniques instead allow waiting only when certain criteria are met.
The two most popular protocols in this category are Wound-Wait\xspace and Wait-Die\xspace~\cite{bernstein81, rosenkrantz1978system}. \vspace{.05in} \noindent\textbf{Wound-Wait Variant of 2PL} \vspace{.05in} In Wound-Wait\xspace, each transaction is assigned a timestamp when it starts execution; transactions with smaller timestamps have higher priority. When a conflict occurs, the requesting transaction $T$ compares its own timestamp with the timestamps of the current lock owners --- owners whose timestamps are larger than $T$'s are aborted, namely \texttt{Wound}. Then $T$ either becomes the new owner (i.e., all current owners are aborted) or waits for the lock (i.e., some owners remain), namely \texttt{Wait}. The lock entry for each tuple maintains \texttt{owners}\xspace and \texttt{waiters}\xspace as two lists of the transactions that own or are waiting for the lock on the tuple (\figref{fig:data-structure}). \texttt{waiters}\xspace can be sorted based on the transactions' timestamps to simplify the process of moving transactions from \texttt{waiters}\xspace to \texttt{owners}\xspace. Wound-Wait\xspace is deadlock-free because a transaction can only wait for other transactions that have smaller timestamps. In the wait-for graph, this means all edges go from transactions with larger timestamps to transactions with smaller timestamps, which inherently prevents cycles. Besides deadlock-freedom, Wound-Wait\xspace is also starvation-free, as the oldest transaction has the highest priority and will never abort due to a conflict. Wound-Wait\xspace is the concurrency control protocol used in Google Spanner~\cite{corbett12, malkhi2013spanner}. \vspace{.05in} \noindent\textbf{Wait-Die Variant of 2PL} \vspace{.05in} Different from Wound-Wait\xspace, when a conflict occurs, Wait-Die\xspace allows transactions with smaller timestamps to wait (i.e., \texttt{Wait}) and transactions with larger timestamps to self-abort (i.e., \texttt{Die}). Wait-Die\xspace is also deadlock- and starvation-free; it is the concurrency control protocol used in Microsoft Orleans~\cite{orleans-txn}. \vspace{-.15in} \subsection{Transaction Chopping} \label{sec:txn-chopping} \edits{Similar to Bamboo\xspace, transaction chopping~\cite{shasha1995transaction} aims to increase concurrency when hotspots are present. In particular, it chops a transaction into smaller sub-transactions and allows an update to be visible after the sub-transaction finishes but before the entire transaction commits. Specifically, an SC-graph is created based on static analysis of the workload, where each sub-transaction represents a node in the graph. Sub-transactions of the same transaction are connected by sibling (S) edges. Sub-transactions of different transactions are connected by conflict (C) edges if they have potential conflicts. Chopping requires that there be no cycles in the graph and that only the first piece may roll back or abort. It obtains the finest chopping that can guarantee safety based on static information. IC3~\cite{wang2016scaling} is the state-of-the-art concurrency control protocol in this line of research. IC3 achieves fine-grained chopping through column-level static analysis. Sub-transactions accessing different columns of the same table no longer introduce C-edges. As IC3 allows cycles in the SC-graph, it tracks dependencies among transactions at runtime and forces pieces involving C-edges to execute in order to maintain serializability.
Moreover, IC3 proposes optimistic execution that enforces waiting only in the validation and commit phases for non-conflicting transactions accessing the same columns of different tuples. Although it induces more aborts, the optimistic approach still shows advantages under high contention. However, IC3 still has several limitations. First, it assumes the column accesses of all transactions are known before execution and does not support ad-hoc transactions. Second, chopping must guarantee that no C-edges cross, to avoid potential deadlocks. For example, if one transaction accesses table A before B while another accesses table B before A, the accesses to tables A and B must be merged into one piece, limiting concurrency. Third, column-level static analysis does not exploit the concurrency available when transactions access the same columns of different tuples; it reduces contention only when transactions access different columns of the same tuple. We will show quantitative evaluations of IC3 compared with Bamboo\xspace in \secref{sec:exp-ic3}. } \section{Conclusion} We proposed Bamboo\xspace, a concurrency control protocol that extends traditional 2PL but allows the two-phase rule to be violated by retiring locks early. Through extensive analysis and performance evaluation, we demonstrated that Bamboo\xspace can lead to significant performance improvement when the workload contains hotspots. Evaluation on TPC-C shows a performance advantage of up to 3$\times$. \subsection{Experimental Setup} \label{ssec:exp-setup} We implement Bamboo\xspace in DBx1000~\cite{dbx1000, yu2014}, a multi-threaded, in-memory DBMS prototype. DBx1000 stores all data in a row-oriented manner with hash table indexes. The code is open sourced~\cite{code}. In this paper, we extend DBx1000 to run transactions in both stored-procedure and interactive modes. \revised{In the stored-procedure mode, all accesses in a transaction and the execution logic are ready before execution. The interactive mode involves two types of nodes: (1) the \emph{DB server} processes requests like \texttt{get\_row()}, \texttt{update\_row()}, and \texttt{commit()}, and (2) the \emph{client server} executes transaction logic and sends requests to the DB server through gRPC. Bamboo\xspace does not require knowing the position of the last write for correctness (cf. \secref{ssec:decide_retire}); thus, when \texttt{update\_row()} is called, the DB server immediately retires the lock after the write, essentially treating every write as the last write. In the interactive mode, the second optimization (no retiring when there is no benefit) does not apply. } DBx1000 includes a pluggable lock manager that supports different concurrency control schemes. This allows us to compare Bamboo\xspace with various baselines {\bf within the same system}. We implemented the following five approaches: \vspace{-.05in} \begin{itemize}[leftmargin=.2in] \item {\bf WOUND\_WAIT}~\cite{bernstein81}: The Wound-Wait\xspace variant of 2PL (\secref{sec:2pl}). \item {\bf NO\_WAIT}~\cite{bernstein81}: The No-Wait variant of 2PL, where any conflict causes the requesting transaction to abort. \item {\bf WAIT\_DIE}~\cite{bernstein81}: The Wait-Die\xspace variant of 2PL (\secref{sec:2pl}). \item {\bf SILO}~\cite{tu13}: An in-memory database for fast and scalable transaction processing. It implements a variant of OCC. \item {\bf IC3}~\cite{wang2016scaling}: The state-of-the-art transaction-chopping-based concurrency control protocol, as described in \secref{sec:txn-chopping}.
\end{itemize} \vspace{-.05in} \revised{Experiments in stored-procedure mode were run on a machine with four Intel Xeon CPUs at 2.8GHz (15 cores) with 1056GB of DRAM, running Ubuntu 16.04. Each core supports two hardware threads. For the interactive mode, experiments were run on workstations provided by CloudLab~\cite{cloudlab}, with each machine containing} two Intel Xeon CPUs at 2.6GHz (32 cores) with 376GB of DRAM, running Ubuntu 16.04. Each core supports two hardware threads. We collect transaction statistics such as throughput, latency, and abort rate by running each workload for at least 30 seconds. \revised{In this paper, we assume that each hotspot contains one tuple and treat a set of hot tuples as multiple hotspots. For the experiments, transactions log to main memory --- modern non-volatile memory would offer similar performance. Bamboo\xspace applies all the optimizations introduced in~\secref{ssec:optimization}. To decide the choice of $\delta$, we ran the microbenchmark with a wide range of $\delta$ values. In general, as $\delta$ increases, the overhead in Bamboo\xspace decreases, which improves performance in low-contention cases. However, a larger $\delta$ also increases the time spent waiting for locks under high contention, which leads to a drop of less than 13\% in performance in our experiments. To balance these opposing effects under different workloads, we chose $\delta = 0.15$ across all workloads. As dynamic timestamp assignment can also be applied to other 2PL-based protocols, we turn it on for them whenever it yields improvements. However, as only Bamboo\xspace involves cascading aborts, the other protocols barely benefit from the optimization.} \section{Bamboo\xspace} \label{sec:protocol} The basic idea of Bamboo\xspace is simple --- in certain controlled circumstances, we allow other transactions to violate an exclusive lock held by a particular transaction. A transaction's dirty updates can be accessed after it has finished its updates on the tuple, following the idea shown in \figref{fig:hotspot-ideal}. \subsection{Challenges of Violating 2PL} \label{sec:challenges} Although violating 2PL offers great performance potential, it also brings two key challenges that we highlight below. \vspace{.05in} \noindent\textbf{Challenge 1: Dependency Tracking} \vspace{.05in} A conventional 2PL protocol uses locks to track dependencies among transactions. 2PL protocols use various techniques (cf. Section~\ref{sec:2pl}) to prevent/break a cycle in the dependency graph. We call an edge in a conventional dependency graph a \textit{lock-induced} edge. In contrast, Bamboo\xspace allows a transaction to read dirty values without waiting for locks. Such a \textit{read-after-write} dependency can be part of a cycle and yet is not captured by a conventional lock. For example, \txn{1} may read \txn{2}'s dirty write on record A and \txn{2} may read \txn{1}'s dirty write on record B. Such a cycle is not captured by the ``wait-for'' relationship. We call such a dependency edge a \textit{dirty-read-induced} edge. For Bamboo\xspace to work efficiently, we need a new deadlock avoidance mechanism that can avoid cycles caused by both lock-induced and dirty-read-induced edges uniformly. Section~\ref{sec:protocol-desc} describes the detailed protocol we build to achieve this goal.
\vspace{.05in} \noindent\textbf{Challenge 2: Cascading Aborts} \vspace{.05in} Allowing a transaction to read dirty data may lead to cascading aborts, as pointed out in multiple previous protocols~\cite{orleans-txn, larson11}. Specifically, if \txn{2} reads \txn{1}'s update before \txn{1} commits, a commit dependency between the two transactions is established --- \txn{2} is able to commit only if \txn{1} has successfully committed. If \txn{1} decides to abort (e.g., due to conflicts or an integrity violation), then all the transactions that have commit dependencies on \txn{1} must also abort; this includes \txn{2} and all the transactions that have read \txn{2}'s dirty writes, and so forth. This means that a potentially long chain of transactions with commit dependencies needs to abort cascadingly, wasting work and degrading performance. In Section~\ref{sec:cascade}, we present a deeper analysis of cascading aborts. \subsection{Protocol Description} \label{sec:protocol-desc} This section describes the basic Bamboo\xspace protocol in detail. In particular, we focus on addressing the first challenge in Section~\ref{sec:challenges} (i.e., dependency tracking). Bamboo\xspace is developed based on Wound-Wait\xspace (cf. \secref{sec:2pl}); our description mainly focuses on the differences between the two. We first describe the new data structures Bamboo\xspace requires to track dirty-read dependencies, followed by a detailed description of the protocol's pseudocode. \subsubsection{Data Structures} All edges in the dependency graph of a conventional 2PL protocol are lock-induced edges. They are captured by locks and maintained in the lock entries of individual tuples. For Bamboo\xspace, we handle both lock-induced and dirty-read-induced edges uniformly by adding extra metadata to each lock entry and transaction.
\begin{figure}[t]%
\centering
\includegraphics[width=\linewidth]{./new_figures/data-structure.pdf}
\caption{\textbf{A lock entry in Bamboo\xspace} --- \normalfont{Transaction \txn{} moves between lists (i.e., \texttt{owners}\xspace, \texttt{waiters}\xspace, and \texttt{retired}\xspace) through function calls. The \texttt{retired}\xspace list does not exist in baseline Wound-Wait\xspace.}}
\label{fig:data-structure}
\end{figure}
\textbf{\texttt{tuple.\texttt{retired}\xspace:}} Bamboo\xspace adds a new list called \texttt{retired}\xspace to each lock entry, next to the existing lists of \texttt{owners}\xspace and \texttt{waiters}\xspace, as shown in \figref{fig:data-structure}. \texttt{retired}\xspace is sorted based on the timestamps of the transactions in it. After a transaction has finished updating a tuple, the transaction can be moved from \texttt{owners}\xspace to \texttt{retired}\xspace. This allows other transactions to join \texttt{owners}\xspace and read the dirty updates of the retired transactions. By maintaining the \texttt{retired}\xspace list, a dirty-read dependency can be captured in the lock entry --- if a retired transaction \txn{} has an exclusive lock, then all transactions in \texttt{retired}\xspace after \txn{} and all transactions in \texttt{owners}\xspace depend on \txn{}. Adding \texttt{retired}\xspace allows both lock-induced and dirty-read-induced dependencies to be maintained in the lock entry. \textbf{\texttt{Transaction.commit\_semaphore:}} Bamboo\xspace uses a new variable \texttt{commit\_semaphore} to ensure that transactions with dirty-read dependencies commit in order.
A transaction \txn{} increments its own semaphore when it conflicts with any transaction in \texttt{retired}\xspace of any tuple. The semaphore is decremented only when a transaction that \txn{} depends on leaves \texttt{retired}\xspace such that \txn{} becomes one of the leading non-conflicting transactions holding the lock. \revised{The semaphore is implemented using a 64-bit integer for each transaction and incremented/decremented through atomic operations. The number of accesses to the semaphore is bounded by the number of tuple accesses of a transaction. The overhead of the semaphore is within 0.2\% of the total execution time with 120 threads under a high-contention workload. Details of how this variable is manipulated are discussed with the concrete protocol below. } \subsubsection{Locking Logic}
\begin{algorithm}[t!]
\setstretch{0.9}
\small
\renewcommand{\nl}{\let\nl\oldnl}\codeComment{req\_type is SH or EX} \\
\textit{LockAcquire(txn, req\_type1, tuple1)} \\
\renewcommand{\nl}{\let\nl\oldnl}... \\
\renewcommand{\nl}{\let\nl\oldnl}\codeComment{Lock can retire after the last write to the tuple} \\
\hl{\textit{LockRetire(txn, tuple1)}} \\
\textit{LockAcquire(txn, req\_type2, tuple2)} \\
\renewcommand{\nl}{\let\nl\oldnl}... \\
\renewcommand{\nl}{\let\nl\oldnl}\codeComment{Wait for transactions that txn depends on} \\
\While{\hl{txn.commit\_semaphore $\not=$ 0}}{\hl{pause}}
\textit{\revised{if(!abort)} writeLog()} \codeComment{Log to persistent storage device}\\
\textit{LockRelease(txn, tuple1, is\_abort)} \\
\textit{LockRelease(txn, tuple2, is\_abort)} \\
\textit{txn.terminate(is\_abort)}
\caption{\textbf{A transaction's lifecycle in Bamboo\xspace} --- Differences between Bamboo\xspace and Wound-Wait\xspace are highlighted in \hl{gray}.}
\label{alg:lifecycle}
\end{algorithm}
\algoref{alg:lifecycle} shows the lifecycle of how the database executes a transaction with Bamboo\xspace. It largely remains the same as a conventional Wound-Wait\xspace 2PL protocol, with the differences highlighted using a gray background color. For the basic protocol, we assume each transaction is assigned a timestamp when it first starts, similar to Wound-Wait\xspace; we later optimize the timestamp assignment process in \secref{ssec:optimization}. Different from conventional 2PL, Bamboo\xspace allows a transaction to immediately retire the lock on a tuple that it will not write again (line 2); the transaction can still read the tuple since Bamboo\xspace keeps a local copy of the tuple for each read request. The transaction can acquire more locks on other tuples after retiring a lock (line 3). After the execution finishes, a transaction must wait for its \texttt{commit\_semaphore} to become 0 before it can start committing. The transaction then moves forward to perform logging (line 6) and release locks (lines 7--8). Finally, the transaction either commits or aborts depending on the execution result. If an abort occurs during the transaction execution, the transaction directly jumps to line 7 to release locks.
\begin{algorithm}[t!]
\setstretch{0.9}
\small
\renewcommand{\nl}{\let\nl\oldnl}\textit{tuple.retired} \codeComment{List of retired transactions \revised{ordered} by ascending timestamp order} \\
\renewcommand{\nl}{\let\nl\oldnl}\textit{tuple.owners} \codeComment{List of owners of the lock} \\
\renewcommand{\nl}{\let\nl\oldnl}\textit{tuple.waiters} \codeComment{List of transactions waiting for the lock, sorted by ascending timestamp order}\\
\renewcommand{\nl}{\let\nl\oldnl}\codeComment{Each list above contains \{txn, type\} where type is either SH or EX}\\
\SetKwProg{myfun}{Function}{}{}
\renewcommand{\nl}{\let\nl\oldnl}\texttt{\\}
\myfun{LockAcquire(txn, req\_type, tuple)}{
\textit{has\_conflicts = false} \\
\For{(t, type) in \hl{concat(tuple.retired,} tuple.owners\hl{)}}{
\If {conflict(req\_type, type)}{
\textit{has\_conflicts = true}
}
\If{has\_conflicts \textbf{and} txn.ts $<$ t.ts}{
\textit{t.set\_abort()}
}
}
\textit{tuple.waiters.add(txn)} \\
\textit{PromoteWaiters(tuple)}
}
\renewcommand{\nl}{\let\nl\oldnl}\texttt{\\}
\renewcommand{\nl}{\let\nl\oldnl} \codeComment{move txn from tuple.owners to tuple.retired} \\
\myfun{\hl{LockRetire(txn, tuple)}}{ \label{alg:retire}
\hl{\textit{tuple.owners.remove(txn)}} \\
\hl{\textit{tuple.retired.add(txn)}} \\
\hl{\textit{PromoteWaiters(tuple)}}
}
\renewcommand{\nl}{\let\nl\oldnl}\texttt{\\}
\myfun{LockRelease(txn, tuple, is\_abort)}{ \label{alg:release}
\hl{\textit{all\_owners = tuple.retired $\cup$ tuple.owners}}\\
\If{\hl{is\_abort \textbf{and} txn.getType(tuple) == EX}}{
\hl{\textit{abort all transactions in all\_owners after txn}}
}
\textit{remove txn from \hl{tuple.retired or} tuple.owners} \\
\If{\hl{txn was the head of tuple.retired \textbf{and} conflict(txn.getType(tuple), tuple.retired.head)}}{
\renewcommand{\nl}{\let\nl\oldnl}\codeComment{heads: leading non-conflicting transactions}\\
\renewcommand{\nl}{\let\nl\oldnl}\codeComment{Notify transactions whose dependency is clear} \\
\For{\hl{t in all\_owners.heads}}{
\hl{\textit{t.commit\_semaphore$--$}}
}
}
\textit{PromoteWaiters(tuple)}
}
\renewcommand{\nl}{\let\nl\oldnl}\texttt{\\}
\myfun{PromoteWaiters(tuple)}{
\For{t in tuple.waiters}{
\If{conflict(t.type, tuple.owners.type)}{
break \\
}
\textit{tuple.waiters.remove(t)} \\
\textit{tuple.owners.add(t)} \\
\If{\hl{$\exists$(t', type) $\in$ tuple.retired s.t. conflict(type, t.type)}}{
\hl{\textit{t.commit\_semaphore} ++ }
}
}
}
\caption{\textbf{Function calls in Bamboo\xspace} --- Differences between Bamboo\xspace and Wound-Wait\xspace are highlighted in \hl{gray}.}
\label{alg:protocol}
\end{algorithm}
\algoref{alg:protocol} shows the detailed implementation of the functions in Bamboo\xspace, i.e., \textit{LockAcquire()}, \textit{LockRetire()}, and \textit{LockRelease()}, as well as an auxiliary function \textit{PromoteWaiters()}, which is called by the other three functions. \revised{Note that the first three functions are in critical sections protected by latches, the same as in other 2PL protocols.} The baseline Wound-Wait\xspace is the algorithm in \algoref{alg:protocol} ignoring the code in gray. Bamboo\xspace adds extra logic to \textit{LockAcquire()} and \textit{LockRelease()}, and adds a new function \textit{LockRetire()}. In the following, we walk through these functions step-by-step.
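Before walking through the individual functions, the following C-style sketch summarizes the per-tuple and per-transaction metadata that they manipulate. It is an illustrative reconstruction rather than the actual DBx1000 code: the struct and field names are assumed, and only the elements described in the text are shown (the three lists, the per-entry latch, the priority timestamp, and the 64-bit \texttt{commit\_semaphore} updated with atomic operations).
\begin{lstlisting}[language=c, basicstyle=\scriptsize\ttfamily, caption={Illustrative sketch of Bamboo\xspace's lock-entry and transaction metadata (assumed names)}]
#include <stdint.h>
#include <stdbool.h>
#include <pthread.h>

typedef enum { LOCK_SH, LOCK_EX } lock_type_t;

struct txn_t;  /* forward declaration */

/* One element of the retired / owners / waiters lists: {txn, type}. */
typedef struct lock_elem {
  struct txn_t     *txn;
  lock_type_t       type;
  struct lock_elem *next;
} lock_elem_t;

/* Per-tuple lock entry (cf. the lock-entry figure). */
typedef struct {
  pthread_mutex_t latch;    /* protects the three lists (critical sections) */
  lock_elem_t    *retired;  /* finished-writing transactions, by timestamp  */
  lock_elem_t    *owners;   /* current lock owners                          */
  lock_elem_t    *waiters;  /* blocked requests, sorted by timestamp        */
} lock_entry_t;

/* Per-transaction metadata used by Bamboo. */
typedef struct txn_t {
  uint64_t ts;                        /* Wound-Wait priority timestamp      */
  volatile uint64_t commit_semaphore; /* number of retired transactions this
                                         transaction still depends on; updated
                                         with atomic add/sub and must reach 0
                                         before the transaction may commit   */
  volatile bool abort_flag;           /* set on wound or cascading abort     */
} txn_t;
\end{lstlisting}
With these definitions, the functions in \algoref{alg:protocol} operate on one \texttt{lock\_entry\_t} while holding its latch, and the commit wait loop in \algoref{alg:lifecycle} spins on \texttt{commit\_semaphore}.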
\vspace{.05in} \textbf{\textit{LockAcquire()}} \vspace{.05in} As discussed in Section~\ref{sec:2pl}, in Wound-Wait\xspace, when a conflict occurs, the requesting transaction aborts (i.e., \texttt{wounds}) current owners that have a larger timestamp than its own (lines 2--7). In Bamboo\xspace, we need only a small change, which is to wound transactions in \textit{both} \texttt{owners}\xspace and \texttt{retired}\xspace (line 3). After some or all of the current owners are aborted, some waiting transaction(s) or the requesting transaction may become the new owner. In the pseudocode, for brevity, we show this logic by always adding the requesting transaction to the waiter list (line 8) and then trying to promote transactions with small timestamps from \texttt{waiters}\xspace to \texttt{owners}\xspace (by calling \textit{PromoteWaiters()} in line 9). In our actual implementation, unnecessary movements between \texttt{waiters}\xspace and \texttt{owners}\xspace are avoided. Inside \textit{PromoteWaiters()}, the algorithm scans \texttt{waiters}\xspace in increasing timestamp order (line 24). Each transaction that does not conflict with the current owner(s) (line 25) is moved from \texttt{waiters}\xspace to \texttt{owners}\xspace (lines 27--28). Otherwise, the loop breaks when the first conflict is encountered (line 26). In Bamboo\xspace, we also need to increment the \textit{commit\_semaphore} of each transaction that just became an owner if it conflicts with any transaction in \texttt{retired}\xspace (lines 29--30). This allows the transaction to be notified when the transactions that it depends on have committed. \vspace{.05in} \textbf{\textit{LockRetire()}} \vspace{.05in} This function simply moves the transaction from \texttt{owners}\xspace to \texttt{retired}\xspace of the tuple (lines 11--12). It then calls \textit{PromoteWaiters()} to potentially add more transactions to \texttt{owners}\xspace (line 13). It is important to note that the \textit{LockRetire()} function call is completely optional. If the function is never called by any transaction, Bamboo\xspace degenerates to Wound-Wait\xspace. Bamboo\xspace also allows any particular transaction to choose whether to call \textit{LockRetire()} on any particular tuple. Such compatibility allows the system to choose between Bamboo\xspace and Wound-Wait\xspace at a very fine granularity. \vspace{.05in} \textbf{\textit{LockRelease()}} \vspace{.05in} In Wound-Wait\xspace, releasing a lock simply means removing the transaction from \texttt{owners}\xspace of the tuple (line 18) and promoting current waiters to become new owners (line 22). In Bamboo\xspace, the \textit{LockRelease()} function does more work: (1) handling cascading aborts and (2) notifying other transactions when their dependency is clear. Specifically, we define a list \texttt{all\_owners} to be the concatenation of \texttt{retired}\xspace and \texttt{owners}\xspace (line 15). If the releasing transaction decides to abort and its lock on the tuple has type \textit{EX}, then all the transactions in \texttt{all\_owners} after the releasing transaction must abort cascadingly. In Bamboo\xspace, these transactions are notified to abort; the abort logic is performed later by the corresponding worker threads. Note that if the aborting transaction locks the tuple with type \textit{SH}, then cascading aborts are not triggered --- an \textit{SH} lock has no effect on the following transactions.
The algorithm then removes the transaction from \texttt{retired}\xspace or \texttt{owners}\xspace depending on where it resides (line 18). If the removed transaction was the old head of \texttt{retired}\xspace and has a lock type that conflicts with the new head of \texttt{retired}\xspace (line 19), then the algorithm notifies all the current leading non-conflicting transactions in \texttt{retired}\xspace (i.e., \texttt{heads} of \texttt{retired}\xspace) that their dependency on this tuple is clear, by decrementing their corresponding \textit{commit\_semaphore} (lines 20--21). \subsection{Deciding Timing for Lock Retire} \label{ssec:decide_retire} \revised{ In principle, every write can be immediately followed by \textit{LockRetire()} without affecting correctness. If a transaction writes a tuple for a second time after retiring the lock, it can still ensure serializability by simply aborting all transactions that have seen its first write. Performance is not affected if each transaction updates each tuple only once. For better performance, a lock can be retired after the transaction's last write to the tuple if the tuple may be updated more than once by the same transaction. To find the last write, Bamboo\xspace can rely on \textit{programmer annotation} or \textit{program analysis} and insert \textit{LockRetire()} after it. In this section, we discuss the latter approach. } Determining the last write to a tuple can be challenging, since the position may depend on the query parameters or on tuples accessed earlier in the transaction. We illustrate the challenge using a transaction snippet shown in~\lstref{code:example-before}, where $op_1$ and $op_2$ (lines 1 and 5) both work on some tuple from the same table {\code table1}. In the example, ideally we want to add {\code LockRetire()} immediately after $op_1$, yet at the desired retire point we cannot be sure whether the later operation $op_2$ will execute and access the same tuple. To solve this challenge, Bamboo\xspace synthesizes a condition to add to the transaction program that dynamically decides whether to retire a lock. The process is described below. {\bf Program analysis.} Bamboo\xspace first performs standard control and data flow analysis~\cite{dataflow-book} to obtain all the control and dataflow dependencies in the transaction program. It inlines all functions to perform inter-procedural analysis, and constructs a single dependency graph for each transaction. {\bf Identifying queries.} Bamboo\xspace next identifies every tuple access by recognizing the database query API calls. It analyzes the query (e.g., the SQL query string passed to the API call) as well as its parameters to understand the table involved and the variable that stores the key of the tuple being accessed. We assume most queries access a single tuple by primary key, so Bamboo\xspace can check potential tuple-level re-access using the key. For non-key-access queries, we conservatively assume they touch all tuples and detect table-level re-access. {\bf Synthesizing retire condition.} If an operation $op$ works on a table that is no longer accessed after $op$ in the transaction, then the lock in $op$ can safely retire. Otherwise, Bamboo\xspace will synthesize a condition to decide whether to retire the lock. This condition checks whether any later access will be executed and touch the same tuple as $op$; an example is shown on line 3 of~\lstref{code:example-after}.
In the condition, the lock in $op_1$ can safely retire either when {\code cond} evaluates to false, which means the later access $op_2$ will not happen, or when {\code cond} evaluates to true but the keys of {\code tup1} and {\code tup2} are not equal, which means $op_2$ will happen but not touch the same tuple. To generate such a condition, the value of {\code cond} and the key of {\code tup2} must be computed before the {\code LockRetire()} call. To do so, Bamboo\xspace traces the data source along the data dependency path of {\code cond} and the key, and then moves any computation on the path that happens later than $op_1$ to an earlier position, without changing the program semantics. Internally, Bamboo\xspace tries to move the computation to every position later than $op_1$, and stops when it finds the earliest one where all the data dependencies hold after the movement. Then it adds the synthesized condition as well as the {\code LockRetire()} call to a position after the computation. For instance, line 3 in~\lstref{code:example-before} originally computes the key of {\code tup2} late in the transaction. Bamboo\xspace moves the computation of {\code tup2.key} to line 2 in~\lstref{code:example-after}, the earliest position after $op_1$ where no data dependency is violated.
\begin{lstlisting}[language=c, basicstyle=\scriptsize\ttfamily, escapeinside={@}{@}, numbers=left, caption={A transaction program snippet}, label={code:example-before}]
LockAcquire(table1, tup1, EX); // op1
... // other queries and computations
tup2.key = f(input);
if (cond)
  LockAcquire(table1, tup2, EX); // op2
\end{lstlisting}
\begin{lstlisting}[language=c, basicstyle=\scriptsize\ttfamily, escapeinside={@}{@}, numbers=left, caption={An example of a synthesized condition from~\lstref{code:example-before}}, label={code:example-after}]
LockAcquire(table1, tup1, EX); // op1
tup2.key = f(input);
if (!cond || (cond && tup1.key!=tup2.key)) //synthesized
  LockRetire(table1, tup1);
... // other queries and computations
if (cond)
  LockAcquire(table1, tup2, EX); // op2
\end{lstlisting}
{\bf Handling loops.} Bamboo\xspace performs loop fission to allow synthesizing retire conditions inside loops. \lstref{code:loop-before} shows an example. Because the keys used in later accesses are computed in later loop iterations, Bamboo\xspace breaks the loop into two parts, as shown in~\lstref{code:loop-after}, where the first loop computes all the keys and the second loop executes the tuple accesses. Bamboo\xspace then adds a nested loop to produce the retire condition, as shown in lines 6--9. This nested loop checks the keys of the remaining iterations and clears the variable added by Bamboo\xspace, {\code can\_retire}, if any later key is the same as the key in the current iteration. Bamboo\xspace only handles {\code for} loops where the number of iterations is fixed (i.e., not changed inside the loop). For other types of loops, we do not retire locks inside the loop for now and leave this as future work.
\begin{lstlisting}[language=c, basicstyle=\scriptsize\ttfamily, escapeinside={@}{@}, numbers=left, caption={A transaction snippet involving a for loop}, label={code:loop-before}]
for(i=0; i<input1; i++) {
  key[i] = f(input2[i]);
  tup.key = key[i];
  LockAcquire(table, tup, EX);
}
\end{lstlisting}
\begin{lstlisting}[language=c, basicstyle=\scriptsize\ttfamily, escapeinside={@}{@}, numbers=left, caption={An example of a synthesized condition from~\lstref{code:loop-before}}, label={code:loop-after}]
for(i=0; i<input1; i++)
  key[i] = f(input2[i]);
for(i=0; i<input1; i++) {
  tup.key = key[i];
  LockAcquire(table, tup, EX);
  bool can_retire = true;
  for(j=i+1; j<input1; j++)
    can_retire = can_retire && (key[j]!=tup.key);
  if (can_retire)
    LockRetire(table, tup);
}
\end{lstlisting}
\vspace{-.1in} \subsection{Discussions} This section discusses a few other important aspects of Bamboo\xspace. An important feature of Bamboo\xspace is its strong compatibility with Wound-Wait\xspace, meaning an existing database can be extended to use Bamboo\xspace without a major rewrite. Many important design aspects can be directly inherited from 2PL. \textbf{Fault Tolerance:} Bamboo\xspace does not require special treatment of logging. As shown in \algoref{alg:lifecycle}, a transaction \revised{does not log its commit record until it has satisfied the concurrency control protocol.} This is similar to conventional 2PL, which logs in the same way. \textbf{Phantom Protection:} Phantom protection~\cite{eswaran76} in Bamboo\xspace uses the same mechanism as in other 2PL protocols, namely, \textit{next-key locking}~\cite{mohan1989aries} in indexes; this technique achieves the same effect as \textit{predicate locking} but is more widely used in practice. In this context, lock retiring can also be applied to inserts or deletes in an index in the same way as reads/writes to tuples. \textbf{Weak Isolation:} Bamboo\xspace can support isolation levels weaker than serializability. For example, \textit{repeatable read} is supported by giving up phantom protection; \textit{read committed} (RC) is supported by releasing shared locks early. \revised{For RC, Bamboo\xspace needs to retire only writes since read locks are always immediately released}. Finally, \textit{read uncommitted} means each retire becomes a release. \textbf{Other Variants of 2PL:} To ensure deadlock freedom, Bamboo\xspace permits \txn{2} to read \txn{1}'s dirty write only if such a dependency edge is permitted in the underlying 2PL protocol. Bamboo\xspace can be extended to other variants of 2PL, but some variants fit better than others. Wait-Die\xspace, for example, allows only older transactions to wait for younger transactions. When retiring and dirty reads are applied to this setting, the older transactions are subject to cascading aborts, meaning an unlucky old transaction may starve and never commit. Such problems do not exist in Wound-Wait\xspace. \textbf{Compatibility with Underlying 2PL:} As we pointed out in \secref{sec:protocol-desc}, it is possible to smoothly transition between Bamboo\xspace and the underlying 2PL protocol. Specifically, the \textit{LockRetire()} function call is completely optional for any transaction on any tuple. When it is not called, dirty reads are disabled for that particular lock and the system behavior degenerates to 2PL. This allows a system to dynamically turn the dirty-read optimization of Bamboo\xspace on or off based on the observed performance and frequency of cascading aborts.
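To illustrate this compatibility concretely, the write path could wrap the retire step in a policy hook, as in the sketch below. This is a minimal illustration under assumed names (everything except the \textit{LockRetire()} call is hypothetical); when the policy declines, the lock is simply held until commit, which preserves plain Wound-Wait\xspace behavior for that lock.
\begin{lstlisting}[language=c, basicstyle=\scriptsize\ttfamily, caption={Illustrative per-tuple retire policy hook (assumed names)}]
#include <stdbool.h>

struct txn_t;    /* transaction handle, as in the protocol description */
struct tuple_t;  /* tuple / lock-entry handle */

/* Provided by the lock manager (cf. the LockRetire() pseudocode above). */
void LockRetire(struct txn_t *txn, struct tuple_t *tuple);

/* Illustrative policy: retire only when the tuple is hot and this was the
 * transaction's last write to it, so the benefit of early visibility
 * outweighs the risk of cascading aborts. */
static bool retire_policy(bool tuple_is_hot, bool is_last_write) {
  return tuple_is_hot && is_last_write;
}

/* Called right after a write completes. If the policy declines, the lock
 * is simply kept until commit, i.e., the behavior degenerates to 2PL. */
void maybe_retire(struct txn_t *txn, struct tuple_t *tuple,
                  bool tuple_is_hot, bool is_last_write) {
  if (retire_policy(tuple_is_hot, is_last_write))
    LockRetire(txn, tuple);
}
\end{lstlisting}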
\revised{ \textbf{Opacity:} Opacity is the property that reads must be consistent even before a transaction commits. By definition, ensuring opacity means a transaction is not allowed to read uncommitted data. If opacity is required for a transaction, Bamboo\xspace can enforce it by running the transaction in Wound-Wait\xspace mode (i.e., waiting on a tuple until its retired and owners lists are empty). Note that some production systems~\cite{orleans-txn, larson11, diaconu13, corbett12, malkhi2013spanner} also do not apply further optimizations to transactions requiring opacity. } \vspace{-.05in} \subsection{Optimizations} \label{ssec:optimization} Here we introduce four optimizations for Bamboo\xspace --- the first two reduce extra overhead and the other two reduce aborts. \revised{These ideas are not entirely new, but we discuss them here since they can substantially improve the performance of Bamboo\xspace. We also apply them to the baseline protocols when applicable.} \textbf{Optimization 1: No extra latches for read operations.} In Bamboo\xspace, read operations retire automatically in \texttt{LockAcquire()}. We keep a local copy for every new read \revised{unless the data is written by the transaction itself}. A read operation can be moved to the retired list directly whenever it can become an owner. This optimization requires no extra latches for retiring and will not cause more aborts, since aborting a transaction holding a read lock does not cause cascading aborts. \revised{Although copying may incur extra overhead, existing work shows such overhead scales linearly with the core count~\cite{xie2015high}, and the overhead is less than 0.1\% of the runtime with 120 cores under high contention. With the optimization, the cost of accessing the extra latches is within 0.8\% of the total execution time in the same setting.} \revised{\textbf{Optimization 2: No retire when there is no benefit.} Write operations do not need to retire if they bring little benefit but increase the chance of cascading aborts or the overhead of acquiring latches; read operations can always safely retire since they cannot cause cascading effects. Different heuristics can be used to decide which writes may or may not be retired. We use a simple heuristic where writes in the last $\delta$ ($0\leq\delta\leq1$) fraction of accesses are not retired. The intuition is that hotspots at the end of a transaction do not cause long blocking, so retiring them has little benefit. However, if a transaction turns out to spend significant time (i.e., longer than a $\delta$ fraction of the total execution time) waiting on the \texttt{commit\_semaphore}, we retire those write operations at the end of the transaction. } \textbf{Optimization 3: Eliminate aborts due to read-after-write conflicts.} In the basic Bamboo\xspace protocol, when a transaction tries to acquire a shared lock, it needs to abort the write operations of lower-priority transactions in both the retired list and the owners list. We observe that such aborts are unnecessary. As Bamboo\xspace keeps local copies for reads and writes, such a read operation can read the local copies of other operations without triggering aborts. \revised{The optimization naturally fits Bamboo\xspace, as it allows read-modify-write over dirty data, and multiple uncommitted updates can exist on a tuple. However, the idea cannot be easily applied to existing 2PL, as reading uncommitted data is not allowed and hence only one copy of the data exists.
Existing works~\cite{larson11, sadoghi2014} share similar ideas and also allow a transaction to choose which version to read, as they permit reading uncommitted data in certain scenarios, though they typically support at most one uncommitted version. } \textbf{Optimization 4: Assign timestamps to a transaction on its first conflict.} \revised{Here we explain how to extend Bamboo\xspace to support an existing idea, dynamic timestamp assignment, to avoid aborting on the first conflict.} The pseudocode is shown in \algoref{alg:dynamic_ts}. Specifically, lines 1--5 in \algoref{alg:dynamic_ts} are inserted at the beginning of function \textit{LockAcquire()} in \algoref{alg:protocol}. If the incoming transaction conflicts with any other transaction in \texttt{retired}\xspace, \texttt{owners}\xspace, or \texttt{waiters}\xspace (lines 1--2), we assign timestamps to all transactions in the three lists, in the order specified by the algorithm (lines 3--4), and then assign a timestamp to the incoming transaction (line 5). Timestamp assignment is a single compare\_and\_swap() call (lines 6--8). \revised{Bamboo\xspace may be further improved through more complex timestamping strategies~\cite{lomet2012multi}.}
\begin{algorithm}[t!]
\small
\textit{all\_txns = concat(tuple.retired, tuple.owners, tuple.waiters)}\\
\If{txn conflicts with any transaction in all\_txns}{
\For{t in all\_txns}{
\textit{set\_ts\_if\_unassigned(t)}
}
\textit{set\_ts\_if\_unassigned(txn)}
}
\SetKwProg{myfun}{Function}{}{}
\renewcommand{\nl}{\let\nl\oldnl}\texttt{\\}
\myfun{set\_ts\_if\_unassigned(txn)}{
\If{txn.ts == UNASSIGNED}{
atomic\_compare\_and\_swap(\&txn.ts, UNASSIGNED, atomic\_add(global\_ts))
}
}
\caption{\textbf{Support for dynamic timestamp assignment} --- lines 1--5 are added to the beginning of function \textit{LockAcquire()} in \algoref{alg:protocol}.}
\label{alg:dynamic_ts}
\end{algorithm}
\input{sections/proof} \section{Cascading Aborts} \label{sec:cascade} This section analyzes the effect of cascading aborts qualitatively and discusses an optimization that we propose to mitigate the effect. \vspace{-.1in} \subsection{Cases Inducing Cascading Aborts} \label{sec:abort-cases} \textit{Cascading aborts} (also called \textit{cascading rollback}) is a situation where the abort of one transaction causes other transactions to abort. In Bamboo\xspace, transactions can read uncommitted data; if the transaction that wrote the data aborts, all dependent transactions must also abort cascadingly. In our algorithm, the procedure of cascading aborts corresponds to line 17 of \algoref{alg:protocol}. In Bamboo\xspace, a transaction $T$ may abort in three cases: (1) when $T$ is wounded by another transaction with higher priority \revised{to prevent deadlocks}, (2) when a transaction that $T$ depends on aborts and $T$ aborts cascadingly, or (3) when $T$ self-aborts due to transactional logic (e.g., inventory level below 0) or user intervention. Since cases (1) and (3) can occur in the baseline Wound-Wait\xspace protocol as well, we focus on discussing the difference between case (2) and the other two cases. \vspace{-.15in} \subsection{Effects of Cascading Aborts as a Trade-off of Reducing Blocking} \label{ssec:cascading-effect} \revised{The effect of cascading aborts can be evaluated through three metrics --- length of abort chain, abort rate, and abort time.
The \textit{length of the abort chain} is the number of transactions that must abort cascadingly due to one transaction's abort; our empirical results show that this number can be as large as the number of concurrent transactions under high contention. The \textit{abort time} describes} the total CPU time wasted on executing transactions that abort in the end. The throughput of a transaction processing system is largely determined by the number of CPU cycles performing useful vs. useless work; \emph{abort time} is an example of such useless work. Another example is the time spent waiting for a lock, which we define as \emph{wait time}. Unlike the first two metrics, which indicate the magnitude of the effect, the time measurements illustrate the tradeoff between waits and aborts more directly. \revised{We use this metric to show the trade-off in the evaluation section.} Compared to Wound-Wait\xspace, Bamboo\xspace substantially reduces the \emph{wait time} but increases the \emph{abort time}. Although Bamboo\xspace may have more aborts than Wound-Wait\xspace due to cascading aborts, trading waits for aborts may be a good deal in many cases. \revised{There are two reasons why aborts may be preferred in certain cases.} First, with hotspots, all the transactions aborted cascadingly are those that have speculatively read dirty data. These transactions would have been waiting in Wound-Wait\xspace without making forward progress in the first place. A large portion of the cycles wasted on cascading aborts in Bamboo\xspace would also be wasted on waiting in Wound-Wait\xspace. Second, even if a transaction aborts, it warms up the CPU cache with accessed tuples, so subsequent executions become faster~\cite{kung1981optimistic, tu13}. We observe this effect on SILO, an OCC-based protocol, as described in \secref{sec:exp} --- SILO has a higher abort rate than many 2PL protocols but also higher throughput. We build a model based on previous theoretical analyses of 2PL~\cite{gray1992book, Gray1981ASM, bernstein2009principles} to illustrate when the benefits of Bamboo\xspace outweigh its overhead. We define $K$ as the number of lock requests per transaction, $N$ as the number of transactions running concurrently, $D$ as the number of data items, and $t$ as the average time spent between lock requests. The throughput is proportional to $\frac{N}{(K+1)t}\times(1-AP_{\textit{conflict}}-BP_{\textit{abort}})$, where $P_{\textit{conflict}}$ and $P_{\textit{abort}}$ denote the probability that a transaction encounters a conflict and an abort, respectively; $A$ denotes the fraction of execution time that a transaction spends waiting given a conflict; $B$ denotes the fraction of time spent on aborted execution. Bamboo\xspace (bb) can reduce $AP_{\textit{conflict}}$ (due to early retire) but increase $BP_{\textit{abort}}$ (due to cascading aborts). It has positive gains over Wound-Wait\xspace (ww) when the benefits outweigh the overhead. The gain in $AP_{\textit{conflict}}$ is $(A_{\textit{ww}}-A_{\textit{bb}}) P_{\textit{conflict}}$. $P_{\textit{conflict}}$ is a property of the workload and is approximately $NK^2/(2D)$ in both protocols. Specifically, as there are $N-1$ transactions running concurrently holding $NK/2$ locks on average, the probability of a single lock request conflicting with others is $NK/(2D)$ given a uniformly random distribution of accesses.
Thus, the probability of a transaction encountering a conflict during its lifetime is $1 - (1 - NK/(2D))^K \approx NK^2/(2D)$~\cite{gray1992book, Gray1981ASM}; $A_{\textit{bb}}$ is approximately $1/(K+1)$ (i.e., waiting for only the duration of one access) and $A_{\textit{ww}}$ is on average $1/2$ (i.e., waiting for half of the transaction execution time). To model $BP_{\textit{abort}}$, we observe that Bamboo\xspace and Wound-Wait\xspace share two common sources of aborts, i.e., aborts due to deadlocks and user-initiated aborts. Bamboo\xspace introduces another source of aborts due to cascading, represented as $B P_{\textit{cas\_abort}}$, where $P_{\textit{cas\_abort}}$ is the probability that a transaction aborts cascadingly. We calculate an upper bound of this cost. We can bound $B$ by $1$ and bound $P_{\textit{cas\_abort}}$ by $(1-P_{\textit{deadlock}})\times P_{\textit{conflict}} \times P_{\textit{deadlock}} \times (N-1)$ (i.e., the current transaction experiences a conflict while some other transaction experiences a deadlock), which is bounded by $NP_{\textit{conflict}}P_{\textit{deadlock}}$. The value of $P_{\textit{deadlock}}$ is approximately $NK^4/(4D^2)$, approximated by the probability of a transaction conflicting with a transaction that is already in conflict with it~\cite{gray1992book, Gray1981ASM}. Combining the above, Bamboo\xspace has a performance advantage when $(A_{\textit{ww}}-A_{\textit{bb}})P_{\textit{conflict}}>BP_{\textit{cas\_abort}}$, which is satisfied when $(\frac{1}{2}-\frac{1}{K+1})P_{\textit{conflict}} > NP_{\textit{conflict}}P_{\textit{deadlock}}$, which in turn is satisfied when $\frac{N^2K^4}{2D^2} < \frac{K-1}{K+1}$. For most databases, the data size $D$ is orders of magnitude larger than $N$ and $K$, so the inequality holds (e.g., with $N=64$, $K=16$, and $D=10^{7}$, the left-hand side is about $1.3\times10^{-6}$ while the right-hand side is about $0.88$). The high-level intuition here is that the probability of a deadlock is much lower than the probability of a conflict~\cite{gray1992book, Gray1981ASM}. Bamboo\xspace optimizes for the common case by reducing the cost of a conflict and sacrifices performance in the corner case by increasing the cost of aborts during deadlocks. \revised{In \secref{sec:exp}, we perform quantitative evaluations of the impact of cascading aborts and the tradeoff between aborts and waits using the metrics described above. We will show how the evaluation results corroborate the arguments and modeling here.} \section{Related Work} \label{sec:related} In this section, we discuss a few lines of research related to Bamboo\xspace. \subsection{Violating Two-Phase Locking} Previous work has explored mechanisms that violate 2PL to improve transaction performance but targeted aspects different from those of Bamboo\xspace. In distributed systems, Jones et al.~\cite{speculative-distributed} proposed a locking scheme that allows dependent transactions to execute while one transaction is waiting for its execution to finish in multiple partitions. The technique avoids making transactions wait for earlier transactions' distributed coordination. Gupta et al.~\cite{opt-distributed} proposed a distributed commit protocol where transactions are permitted to read the uncommitted writes of transactions in the prepare phase of two-phase commit, but not during transaction execution. Locking violation is also used to avoid holding locks while logging. Early Lock Release (ELR)~\cite{kimura2012efficient, soisalon1995partial} is based on the observation that a canonical transaction holds locks after execution (i.e., after it has pre-committed) while waiting for the log to be flushed.
ELR allows such pre-committed transactions to release their locks early so that other transactions can proceed, and uses tags to enforce the commit order among dependent transactions. ELR has been applied in both research protocols~\cite{johnson2010aether} and commercial systems~\cite{orleans-txn}. Controlled lock violation (CLV)~\cite{graefe2013controlled} achieves the same goal as ELR, but instead of using tags to enforce the commit order, CLV extends data structures in the lock table to track and enforce the dependency order, therefore working in more general cases. The dependency tracking mechanism of Bamboo\xspace is inspired by CLV. In contrast to ELR and CLV, Bamboo\xspace explores violating 2PL to avoid holding locks on hotspots during the \textit{execution phase}, before a transaction pre-commits, to exploit more parallelism. \textit{Ordered shared locks}~\cite{agrawal1995ordered} explore violating 2PL in the execution phase but lack specifications of key design components, such as how to track dependencies and avoid deadlocks effectively. They offer no qualitative or quantitative analysis of cascading aborts, a major concern of the approach, on modern systems. In contrast, this paper thoroughly analyzes the effect of cascading aborts and the inherent tradeoff between waits and aborts, both qualitatively and quantitatively, together with proposed optimizations. We further discuss key designs and new techniques in detail, such as the safe retiring with program analysis used in Bamboo\xspace. Finally, we evaluate on modern systems against state-of-the-art baselines. \vspace{-.1in} \subsection{Reading Uncommitted Data} Previous work also proposed non-locking concurrency control protocols that can read uncommitted data. Faleiro et al.~\cite{faleiro2017high} proposed a protocol for deterministic databases that enables early write visibility, meaning that a transaction's writes are visible prior to the end of its execution. This protocol leverages determinism: transactions are ordered prior to execution, and writes can be made immediately visible to later transactions if the writing transaction will certainly commit. In contrast, Bamboo\xspace does not rely on the assumption of determinism. Hekaton~\cite{diaconu13, larson11} proposed two protocols --- a pessimistic version and an optimistic version --- for main-memory databases based on multiversioning. The pessimistic protocol allows for eager updates. Operations like appending updates to read-locked data or reading the last-committed version of write-locked data are not blocked. If the owner of uncommitted dirty data is in the \emph{preparing state}, dirty reads are allowed as well. However, as dirty data is not visible if its owner is in the \emph{active state}, a write operation over write-locked data held by an active transaction is still blocked. Bamboo\xspace makes uncommitted data visible to reduce the blocking time for transactions with write-after-write conflicts. Similar to Hekaton, Bamboo\xspace also tracks dependencies among transactions due to visible dirty data for serializable correctness, but in a different way. In addition to IC3, runtime pipelining (RP)~\cite{xie2015high} is another variant of transaction chopping. It is based on table-level static analysis combined with runtime enforcement. Specifically, RP first derives a total ranking of all the read-write tables, then orders the sub-transactions based on the rank and enforces execution to follow that order.
However, sub-transactions still cannot be arbitrarily small to allow for more concurrency, for two reasons. First, similar to IC3, accesses must be merged into one piece if they cause C-edges to cross. Second, table-level analysis allows for less concurrency than column-level analysis. Deferred runtime pipelining (DRP)~\cite{mu2019deferred} extends runtime pipelining to support transactions whose access sets are either known or unknown. Similar to Bamboo\xspace, DRP allows transactions to read \textit{tame} transactions' uncommitted data whenever the updates are done. However, DRP imposes a stronger assumption on tame transactions: all their accesses must be known before execution to ensure serializability and deadlock freedom. DRP also introduces deferred execution to know when the updates are done and to reduce cascading aborts. However, that technique can be applied only when later operations do not depend on the previous ones. There is also work that redesigns the architecture to allow reading uncommitted data for hardware transactions. For example, Jeffrey et al. proposed Swarm~\cite{swarm1, swarm2}, which divides a sequential program into many small ordered transactions and speculatively runs them in parallel. Unlike Swarm, Bamboo\xspace is implemented in software and can be easily integrated into existing 2PL-based database systems. \vspace{-.15in} \subsection{Transaction Scheduling} Previous work has investigated techniques that reschedule operations within a transaction for better performance. Quro~\cite{yan2016leveraging} changes the order of operations within a transaction to make hotspots appear as close to commit as possible, reducing the duration of the locking period. However, Quro is constrained by data dependencies within the transaction and thus often cannot move hotspots freely. Bamboo\xspace does not require changing the order of transaction operations, but instead changes the concurrency control to handle hotspots, enabling it to improve performance for a wider range of transactions than the aforementioned work. Ding et al.~\cite{ding2018improving} reorder the transactions within a batch to minimize inter-transaction conflicts and improve OCC in highly contentious cases. They model the problem of finding the best order while preserving correctness as a feedback vertex set problem over a directed graph. For each batch, they run a proposed greedy algorithm to approximate the solution of this NP-hard problem. However, empirical evaluation shows the reordering process can take up to 17$\times$ the transaction processing time, making the end-to-end performance lower than Bamboo\xspace's. \subsection{Proof of Correctness} \label{sec:proof} According to serializability theory, a schedule of transactions is serializable if and only if the serialization graph (SG)\footnote{\scriptsize A serialization graph (SG) is a directed graph whose nodes are committed transactions and whose edges represent conflicts among pairs of transactions.} is acyclic~\cite{bernstein1987concurrency, bernstein79}. In the following, we first revisit how serializability is proved for 2PL and then develop a similar proof for Bamboo\xspace. \vspace{-.05in} \begin{property}\label{property:2pl} \textbf{[Two-Phase Rule]} A transaction does not acquire more locks once it has released a lock. \end{property} \vspace{-.05in} Property~\ref{property:2pl} describes the basic rule that all 2PL protocols must follow.
This property, together with the behavior of locking, ensures that all 2PL schedules of committed transactions are serializable.
\vspace{-.05in}
\begin{theorem}\label{thm:2pl-sr} Every schedule in 2PL is serializable. \end{theorem}
\begin{proof}
\vspace{-.05in}
If the serialization graph contains an edge $T_i \rightarrow T_j$, then the two transactions must have a conflict on some tuple $x$, on which $T_j$ acquired a lock after $T_i$ released a lock. A graph with a cycle must contain a path that starts and ends with the same transaction, e.g., $T_i \rightarrow T_j \rightarrow ... \rightarrow T_i$. This means $T_i$ released a lock before $T_j$ acquired a lock, which happens before $T_j$ releases a lock (according to the two-phase rule), and so on along the path; all of these happen before $T_i$ acquires a lock (according to the last edge). However, this means $T_i$ acquires a lock after it has released a lock, violating the two-phase rule.
\end{proof}
With Bamboo\xspace, we cannot prove serializability following the exact same proof, since in Bamboo\xspace an edge $T_i \rightarrow T_j$ does not imply that $T_i$ released a lock before $T_j$ acquired one. We instead introduce the concept of a \textit{commit point} to conduct the proof.
\begin{definition}\label{def:commit-point} \textbf{[Commit Point]} A transaction's commit point is a point in time between when the transaction completes all of its operations and when it records the operations in the log. (In Algorithm~\ref{alg:lifecycle}, the commit point is after finishing line 6 but before starting line 7.) \end{definition}
\begin{lemma}\label{lemma:commit-point} \textbf{[Commit Point Ordering]} In Bamboo\xspace, if the serialization graph contains $T_i \rightarrow T_j$, then $T_j$ reaches the commit point after $T_i$. \end{lemma}
\begin{proof}
Without loss of generality, we assume $T_i$ and $T_j$ conflict on tuple $x$. We know that $T_j$ can reach its commit point only after $T_j$.\textit{commit\_semaphore} becomes 0 (lines 4--5 in Algorithm~\ref{alg:lifecycle}). According to Algorithm~\ref{alg:protocol}, $T_j$.\textit{commit\_semaphore} can become $0$ only after $T_i$ has released its lock on $x$, which can happen only after $T_i$ has reached its commit point (according to Algorithm~\ref{alg:lifecycle}). Together, this means $T_j$ reaches its commit point after $T_i$.
\end{proof}
\begin{theorem}\label{thm:bamboo-sr} Every schedule in Bamboo\xspace is serializable. \end{theorem}
\begin{proof}
According to Lemma~\ref{lemma:commit-point}, every edge $T_i \rightarrow T_j$ in the serialization graph means that $T_j$ reaches the commit point after $T_i$. Therefore, no cycle may exist, since a transaction cannot reach the commit point after it has already reached the commit point, finishing the proof.
\end{proof}
Note that the proof for Bamboo\xspace above can also be used to prove serializability of 2PL. The key here is that all transactions reach the commit point following the data dependency order. Even if a transaction $T_j$ reads dirty data from transaction $T_i$, $T_j$ can only reach the commit point after $T_i$ does. Similar to the original 2PL proof, the proof for Bamboo\xspace only shows serializability for committed transactions. The fact that Bamboo\xspace is deadlock-free follows from the deadlock-freedom of Wound-Wait\xspace, the proof of which is beyond the scope of this paper.
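To make the bookkeeping behind Lemma~\ref{lemma:commit-point} concrete, the sketch below illustrates how a per-transaction \textit{commit\_semaphore} can enforce commit-point ordering. Algorithms~\ref{alg:protocol} and~\ref{alg:lifecycle} are not reproduced in this section, so the names and structure below are illustrative assumptions rather than Bamboo\xspace's actual implementation; in particular, the sketch collapses reaching the commit point and releasing locks into a single step, and it omits the abort path (an aborting transaction would instead cascade the abort to its dependents). It only captures the invariant used in the proof: a transaction reaches its commit point only after every transaction it depends on has reached its own commit point.
\begin{verbatim}
# Minimal sketch (not Bamboo's code): commit-point ordering via a semaphore.
class Txn:
    def __init__(self, tid):
        self.tid = tid
        self.commit_semaphore = 0  # uncommitted txns this txn read dirty data from
        self.dependents = []       # txns that read this txn's dirty data
        self.committed = False

    def read_dirty_from(self, owner):
        # Called when this txn reads a tuple whose latest write belongs to
        # an uncommitted (retired) owner: record the dependency.
        if not owner.committed:
            self.commit_semaphore += 1
            owner.dependents.append(self)

    def try_commit(self):
        # The commit point is reachable only once the semaphore is zero,
        # i.e., every predecessor has already reached its commit point.
        if self.commit_semaphore > 0:
            return False           # must wait for its predecessors
        self.committed = True      # commit point: log record written here
        for t in self.dependents:  # "releasing locks" lets dependents proceed
            t.commit_semaphore -= 1
        return True
\end{verbatim}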
\subsection{Experiments on TPC-C Results}\label{ssec:tpcc}
Finally, we compare Bamboo\xspace with other concurrency control schemes on the TPC-C benchmark~\cite{tpc-c}. We only ran experiments with 50\% new-order transactions and 50\% payment transactions. Note that in the benchmark, 1\% of new-order transactions are chosen at random to simulate user-initiated aborts. We first vary the number of threads. Figure~\ref{fig:tpcc_threads} presents the behavior of Bamboo\xspace under high contention. In stored-procedure mode, Bamboo\xspace obtains up to 2$\times$ improvement over Wound-Wait\xspace. \revised{Similar to previous observations, SILO outperforms other 2PL protocols given its cache warming-up effect in stored-procedure mode.} In interactive mode, Bamboo\xspace's performance scales up to 32 threads and achieves up to 4$\times$ and 14$\times$ improvements over Wound-Wait\xspace and SILO, respectively.
\begin{figure}[t]%
\centering
\begin{subfigure}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_thds_wh1.pdf}
\caption{stored-procedure mode}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_thd_interactive.pdf}
\caption{interactive mode}
\end{subfigure}
\vspace{-.1in}
\caption{Vary \# of threads in TPC-C (1 warehouse)}
\label{fig:tpcc_threads}
\end{figure}
\begin{figure}[t]%
\centering
\begin{subfigure}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_wh_thd32.pdf}
\caption{stored-procedure mode}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_wh_interactive.pdf}
\caption{interactive mode}
\label{fig:tpcc_wh_b}
\end{subfigure}
\vspace{-.1in}
\caption{Vary \# of warehouses in TPC-C (32 threads)}
\label{fig:tpcc_wh}
\end{figure}
Figure~\ref{fig:tpcc_wh} presents how Bamboo\xspace performs with different numbers of warehouses at 32 threads. In stored-procedure mode, Bamboo\xspace outperforms the other 2PL-based protocols in \revised{high-contention} cases. The improvement depends on the number of warehouses. For example, when the number of warehouses is one (which is similar to the single-hotspot case), Bamboo\xspace outperforms Wound-Wait\xspace by up to 2$\times$. When the workload is less contentious (e.g., with more warehouses), the difference between Bamboo\xspace and the other protocols is smaller. This is expected since Bamboo\xspace targets highly contentious cases. Figure~\ref{fig:tpcc_wh_b} shows that Bamboo\xspace has more notable improvements of up to 4$\times$ over the best baseline when running in interactive mode.
\vspace{-.15in}
\subsection{Comparison with IC3}\label{sec:exp-ic3}
We choose to compare the performance of Bamboo\xspace and IC3 in a separate section due to the fact that IC3 requires knowledge of the entire workload --- an assumption not made by the other protocols. We implemented IC3 with all its optimizations in DBx1000 and show the results with the best setting. Note that we omit the optimizations for commutative operations for a fair comparison to all algorithms. \figref{fig:tpcc_ic3_original} shows the comparison between Bamboo\xspace and IC3 on the mix of payment and new-order transactions with a global warehouse table. As payment and new-order access different columns of the warehouse and district tables (the most contentious tables in TPC-C), IC3's prior knowledge of all column accesses helps it avoid much of the contention.
Given this, IC3 outperforms Bamboo\xspace even though it enforces some waiting when two transactions access the same column of different tuples. However, for workloads where contentious transactions access the same columns of hotspot tuples, IC3 cannot gain such benefits and underperforms Bamboo\xspace due to its column-level static analysis. To illustrate the effect, we simply modified new-order transactions to read one more column (\texttt{W\_YTD}) that is updated by payment transactions. It is conceivable for a real-world transaction to read a hot field updated by other transactions. The result and runtime analysis are shown in \figref{fig:tpcc_ic3_modified} and \figref{fig:tpcc_ic3_modified_runtime}. While the performance of Bamboo\xspace is barely affected, the performance of IC3 drops significantly. As expected, IC3 spends more time on waiting than Bamboo\xspace due to its column-level static analysis. Note that the increase in aborts is due to IC3's optimistic execution. The version without optimistic execution shows worse performance with more waiting, since it serializes the execution of potentially (but not actually) conflicting sub-transactions instead of serializing only the validation phases of these sub-transactions. Overall, Bamboo\xspace has up to \revised{1.5$\times$} improvement over IC3 on the slightly modified TPC-C workload that has ``true'' conflicts between payment and new-order transactions on the warehouse table.
\begin{figure}[t]%
\vspace{.1in}
\begin{subfigure}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_ic3.pdf}
\caption{with original new-order}
\label{fig:tpcc_ic3_original}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_ic3_runtime.pdf}
\caption{runtime analysis}
\end{subfigure}
\vspace{-.1in}
\begin{subfigure}[t]{0.49\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_ic3_modified.pdf}
\caption{with modified new-order}
\label{fig:tpcc_ic3_modified}
\end{subfigure}
\begin{subfigure}[t]{0.45\linewidth}
\includegraphics[width=\linewidth]{./figures/tpcc_ic3_modified_runtime.pdf}
\caption{runtime analysis}
\label{fig:tpcc_ic3_modified_runtime}
\end{subfigure}
\caption{Bamboo vs. IC3 in TPC-C, stored-procedure mode (1 warehouse)}
\label{fig:tpcc_ic3}
\end{figure}
{\bf Summary.} In stored-procedure mode, Bamboo\xspace is better than IC3 when there is a global hotspot that most transactions truly conflict on. The column-level static analysis in IC3 helps reduce contention when transactions are more likely to access different columns of the same tuple; however, it makes IC3 inapplicable to ad-hoc transactions.
\vspace{-.15in}
\section{Experimental Evaluation}
\label{sec:exp}
\input{sections/exp_workloads}
\input{sections/exp_synthetic}
\input{sections/exp_ycsb}
\input{sections/exp_tpcc}
\section{Introduction}
\vskip 0.3cm
A compact convex set with non-empty interior is called a convex body in $n$-dimensional Euclidean space $\mathbb{R}^n$. Let $\mathcal{K}^n$ denote the set of all convex bodies in $\mathbb{R}^n$, and let $\mathcal{K}_o^n$ denote the set of all convex bodies in $\mathbb{R}^n$ containing the origin in their interiors. It is clear that $\mathcal{K}_o^n$ is a subset of $\mathcal{K}^n$. The Brunn-Minkowski theory is the very core of convex geometric analysis, and the Minkowski problem is one of its main parts. A Minkowski problem characterizes a geometric measure generated by convex bodies. In the smooth case, it corresponds to a Monge-Amp\`{e}re type equation in partial differential equations. The study of Minkowski problems has greatly promoted the development of the Brunn-Minkowski theory (see \cite{Schneider}) and of fully non-linear partial differential equations (see \cite{TrudingerWang}). In the 1990s, Lutwak \cite{Lutwak3} introduced the $L_p$ surface area measure $S_{p}(K, \cdot)$ of a convex body $K\in \mathcal{K}_o^n$, a fundamental concept in convex geometric analysis, defined by the following variational formula for the $n$-dimensional volume (Lebesgue measure) $V_n$: for $p\in \mathbb{R}\setminus\{0\}$, \begin{align}\label{Lp-measure} \lim_{t\rightarrow0^+}\frac{V_n(K+_pt\cdot L)-V_n(K)}{t} =\frac{1}{p}\int_{S^{n-1}}h_L^p(u)dS_{p}(K,u), \end{align} where $K+_pt\cdot L$ is the $L_p$ Minkowski combination of $K,L\in \mathcal{K}_o^n$ (see \eqref{Lp-Min-comb}), and $h_L$ is the support function of $L$ on the unit sphere $S^{n-1}$ (see \eqref{support-function}). Note that the case $p=0$ can be defined in a similar way. When $p=1$, the $L_1$ surface area measure $S_1(K,\cdot)$ is the well-known classical surface area measure $S_K$, that is, $S_1(K,\cdot)=S_K$. The Minkowski problem for the $L_p$ surface area measure is called the $L_p$ Minkowski problem:
\vskip 0.2cm
\noindent {\bf $L_p$ Minkowski problem:} \emph{For a fixed $p$ and a given non-zero finite Borel measure $\mu$ on $S^{n-1}$, what are the necessary and sufficient conditions on $\mu$ such that there exists a convex body $K$ in $\mathbb{R}^n$ whose $L_p$ surface area measure $S_p(K,\cdot)$ is equal to $\mu$, that is, \begin{align*} \mu=S_p(K,\cdot)? \end{align*}}
When $p=1$, the $L_p$ Minkowski problem is the classical Minkowski problem studied by Minkowski \cite{Minkowski1897,Minkowski1903}, Alexandrov \cite{Aleksandrov1938,Aleksandrov1939}, Fenchel-Jensen \cite{FenchelJ} and others. Besides, the centro-affine Minkowski problem ($p=-n$) and the logarithmic Minkowski problem ($p=0$) are two other special cases of the $L_p$ Minkowski problem, see \cite{ChouWang,BoroczkyLYZ2,BoroczkyHZ,Zhu1,Zhu3,JianLuZhu,Stancu1,Stancu2}. For the existence, uniqueness and regularity of the (normalized) $L_p$ Minkowski problem, one can see \cite{Lutwak3,LutwakOliker,LYZ3, HuangLiuXu,LuWang,Zhu2,Zhu5,HugLYZ,Chen}. As an important application, the solutions to the $L_p$ Minkowski problem play a vital role in discovering some new (sharp) affine $L_p$ Sobolev inequalities, see \cite{Zhang1,LYZ2,CianchiLYZ,HaberlSchuster1,HaberlSchuster2,HaberlSchusterXiao,Wang1}. Besides, the $L_p$ Minkowski problem has also been studied via curvature flows.
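For orientation, we record a standard elementary consequence of \eqref{Lp-measure} (it will not be needed in what follows): the $L_p$ surface area measure has density $h_K^{1-p}$ with respect to the classical surface area measure, that is,
\begin{align*}
dS_{p}(K,\cdot)=h_K^{1-p}\,dS_{K}.
\end{align*}
In particular, for the unit ball $B_n$, whose support function is identically $1$ on $S^{n-1}$, the measure $S_{p}(B_n,\cdot)$ coincides with the classical surface area measure $S_{B_n}$, i.e.\ with the spherical Lebesgue measure, for every $p$.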
The $L_p$ surface area measure has an important property as follows: \emph{If the sequence $\{K_i\}\subseteq \mathcal{K}_o^n$ converges to $K_0\in \mathcal{K}_o^n$ in the Hausdorff metric, then $\{S_{p}(K_i,\cdot)\}$ converges to $S_{p}(K_0,\cdot)$ weakly.} This is the continuity of the $L_p$ surface area measure with respect to the Hausdorff metric. The converse question is interesting:
\begin{theoremalph}\label{A} Does the sequence $\{K_i\}\subseteq \mathcal{K}_o^n$ converge to $K_0\in \mathcal{K}_o^n$ in the Hausdorff metric when $\{S_{p}(K_i,\cdot)\}$ converges to $S_{p}(K_0,\cdot)$ weakly? \end{theoremalph}
This problem is closely related to the $L_p$ Minkowski problem and can be restated as follows:
\begin{theoremalph}\label{B} Suppose that $p\in \mathbb{R}$, $K_i\in \mathcal{K}_o^n$ is the solution to the $L_p$ Minkowski problem associated with the Borel measure $\mu_i$ on $S^{n-1}$ and $K_0\in \mathcal{K}_o^n$ is the solution to the $L_p$ Minkowski problem associated with the Borel measure $\mu_0$ on $S^{n-1}$. Does the sequence $\{K_i\}$ converge to $K_0$ in the Hausdorff metric when $\{\mu_i\}$ converges to $\mu_0$ weakly? \end{theoremalph}
In this sense, Problem \ref{A} (or Problem \ref{B}) is called the continuity of the solution to the $L_p$ Minkowski problem. Since the $L_p$ surface area measure is positively homogeneous of degree $n-p$, Zhu \cite{Zhu4} showed that the answer is negative for $p=n$ via the following counterexample: \emph{Let $p=n$, let $K_1$ be an origin-symmetric convex body and let $K_i=\frac{1}{i}K_1$; then $S_n(K_i,\cdot)=S_n(K_1,\cdot)$ for all $i$ but $\{K_i\}$ converges to the origin as $i\rightarrow +\infty$.} Moreover, Zhu \cite{Zhu4} gave an affirmative answer to Problem \ref{A} for $p>1$ with $p\neq n$. Partial results on Problem \ref{A} for $p=0$ and $0<p<1$ were obtained in \cite{WangLv,WFZ2019-2}. In 2016, Huang-Lutwak-Yang-Zhang \cite{HuangLYZ} defined the dual curvature measure by the variational formula of the dual volume (see \cite{Lutwak1}) for the $L_1$ Minkowski combination and studied the corresponding Minkowski problem, called the dual Minkowski problem. The existence, uniqueness and regularity of this problem and its generalization were studied in \cite{BoroczkyF,BoroczkyHP,BoroczkyLYZZ,LYZ4,WJ2020,Zhao1,Zhao2,ZhuXY2018}. The continuity of the solution to this problem for $q<0$ was studied in \cite{WFZ2019-1}. Recently, the Brunn-Minkowski theory for the Gaussian probability measure $\gamma_n$ has attracted much attention; here $\gamma_n$ is defined by \begin{align*} \gamma_n(E)=\frac{1}{(\sqrt{2\pi})^n}\int_{E}e^{-\frac{|x|^2}{2}}dx, \end{align*} where $E$ is a subset of $\mathbb{R}^n$ and $|x|$ is the Euclidean norm of $x\in E$. $\gamma_n(E)$ is called the Gaussian volume of $E$. Since the Gaussian volume $\gamma_n$ is neither translation invariant nor homogeneous, there are more difficulties in studying the corresponding Brunn-Minkowski theory. This makes the Brunn-Minkowski theory for $\gamma_n$ quite mysterious and further stimulates interest in it. The Brunn-Minkowski inequality and the Minkowski inequality for the Gaussian volume $\gamma_n$ were studied in \cite{EskenazisG,BoroczkyK,Saroglou,GardnerZ}.
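To illustrate the failure of homogeneity and translation invariance mentioned above, consider the one-dimensional case (a simple sanity check that is not used later). For $\lambda>0$,
\begin{align*}
\gamma_1(\lambda[-1,1])=\frac{1}{\sqrt{2\pi}}\int_{-\lambda}^{\lambda}e^{-\frac{t^2}{2}}dt,
\end{align*}
which is strictly increasing in $\lambda$ yet bounded by $1$, so it cannot be of the form $\lambda^{a}\gamma_1([-1,1])$ for any fixed exponent $a$. Moreover, for every $x_0\neq 0$,
\begin{align*}
\gamma_1([x_0-1,x_0+1])<\gamma_1([-1,1]),
\end{align*}
since the Gaussian density is strictly decreasing in $|t|$; thus translating a set changes its Gaussian volume.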
By the variational formula of the Gaussian volume $\gamma_n$ for the $L_p$ Minkowski combination, the $L_p$ Gaussian surface area measure $S_{p,\gamma_n}(K,\cdot)$ of a convex body $K\in \mathcal{K}_o^n$ is defined in \cite{HuangXZ,Liu,LvWang} as follows: for $K,L\in \mathcal{K}^n_o$ and $p\neq 0$, \begin{align}\label{Lp-Gaussian-Measure} \lim_{t\rightarrow0}\frac{\gamma_n(K+_pt\cdot L)-\gamma_n(K)}{t} =\frac{1}{p}\int_{S^{n-1}}h_L^p(u)dS_{p,\gamma_n}(K,u). \end{align} When $p=1$, it is the Gaussian surface area measure defined in \cite{HuangXZ}, that is, $S_{1,\gamma_n}(K,\cdot)=S_{\gamma_n,K}$. Note that the $L_p$ Gaussian surface area measure $S_{p,\gamma_n}(K,\cdot)$ is not positively homogeneous. The corresponding Minkowski problem in Gaussian probability space is called the $L_p$ Gaussian Minkowski problem (see \cite{HuangXZ,Liu,LvWang}):
\vskip 0.2cm
\noindent {\bf $L_p$ Gaussian Minkowski problem:} \emph{For a fixed $p$ and a given non-zero finite Borel measure $\mu$ on $S^{n-1}$, what are the necessary and sufficient conditions on $\mu$ in order that there exists a convex body $K\in \mathcal{K}^n_o$ such that \begin{align*} \mu=S_{p,\gamma_n}(K,\cdot)? \end{align*} }
If $f$ is the density of the given measure $\mu$, then the corresponding Monge-Amp\`{e}re type equation on $S^{n-1}$ is as follows: for $u\in S^{n-1}$, \begin{align*} \frac{1}{(\sqrt{2\pi})^n}e^{-\frac{|\nabla h(u)|^2+h^2(u)}{2}} h^{1-p}(u)\text{det}(\nabla^2 h(u)+h(u)I)=f(u), \end{align*} where $h:S^{n-1}\rightarrow (0, +\infty)$ is the function to be found, $\nabla h, \nabla^2 h$ are the gradient vector and the Hessian matrix of $h$ with respect to an orthonormal frame on $S^{n-1}$, and $I$ is the identity matrix. In this paper, we mainly consider the continuity of the solution to the $L_p$ Gaussian Minkowski problem and obtain the following result:
\begin{thm}\label{thm0-1} Suppose $p\geq1$ and $K_i\in\mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=0,1,2,\cdots$. If the sequence $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, then the sequence $\{K_i\}$ converges to $K_0$ in the Hausdorff metric. \end{thm}
Besides, we obtain that the solution to the $L_p$ Gaussian Minkowski problem is continuous with respect to $p$.
\begin{thm}\label{thm0-2} Suppose $p_i\geq1$ and $K_i\in\mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=0,1,2,\cdots$. If $S_{p_i,\gamma_n}(K_i,\cdot)=S_{p_0,\gamma_n}(K_0,\cdot)$, then the sequence $\{K_i\}$ converges to $K_0$ in the Hausdorff metric as $\{p_i\}$ converges to $p_0$. \end{thm}
\vskip 0.5cm
\section{Preliminaries}\label{Preliminaries}
\vskip 0.3cm
In this section, we list some notation and recall some basic facts about convex bodies. Depending on the context, $|\cdot|$ can have different meanings: the Euclidean norm of a vector and the total mass of a finite measure. For vectors $x,y\in\mathbb{R}^n$, $x\cdot y$ denotes the standard inner product in $\mathbb{R}^n$. $S^{n-1}$ denotes the boundary of the Euclidean unit ball $B_n=\{x\in\mathbb{R}^n: \sqrt{x\cdot x}\leq 1\}$ and is called the unit sphere. Let $\omega_n$ denote the $n$-dimensional volume (Lebesgue measure) of $B_n$. Let $\partial K$ and $\text{int}~K$ denote the boundary and the interior of a convex body $K$ in $\mathbb{R}^n$, respectively. $\partial' K$ is the subset of $\partial K$ consisting of the points with a unique outer unit normal.
The support function $h_K: \mathbb{R}^n\rightarrow \mathbb{R}$ of $K\in \mathcal{K}^n$ is defined by \begin{align}\label{support-function} h_K(x)=\max\{x\cdot y: y\in K\},\quad x\in \mathbb{R}^n. \end{align} A convex body is uniquely determined by its support function. Support functions are positively homogeneous of degree one and subadditive. For $K\in \mathcal{K}^n_o$, its support function $h_K$ is continuous and strictly positive on the unit sphere $S^{n-1}$. The radial function $\rho_K: \mathbb{R}^n\setminus\{0\}\rightarrow \mathbb{R}$ is another important function associated with a convex body $K\in \mathcal{K}^n_o$; it is given by \begin{align*} \rho_K(x)=\max\{\lambda>0: \lambda x\in K\},\quad x\in \mathbb{R}^n\setminus\{0\}. \end{align*} Note that the radial function $\rho_K$ of $K\in \mathcal{K}^n_o$ is positively homogeneous of degree $-1$, and it is continuous and strictly positive on the unit sphere $S^{n-1}$. For each $u\in S^{n-1}$, $\rho_K(u)u\in\partial K$. The set $\mathcal{K}^n_o$ can be endowed with the Hausdorff metric and the radial metric, which measure the distance between two convex bodies. The Hausdorff metric of $K, L\in \mathcal{K}^n_o$ is defined by \begin{align*} || h_K-h_L ||=\mathop{\max}\limits_{u\in S^{n-1}}|h_K(u)-h_L(u)|. \end{align*} The radial metric of $K, L\in \mathcal{K}^n_o$ is defined by \begin{align*} || \rho_K-\rho_L ||=\mathop{\max}\limits_{u\in S^{n-1}}|\rho_K(u)-\rho_L(u)|. \end{align*} The two metrics are mutually equivalent, that is, for $K, K_i\in \mathcal{K}^n_o$, \begin{align*} h_{K_i}\rightarrow h_K~\text{uniformly}\quad \text{if and only if} \quad \rho_{K_i}\rightarrow \rho_K~\text{uniformly}. \end{align*} If $|| h_{K_i}-h_K ||\rightarrow0$ or $||\rho_{K_i}-\rho_K ||\rightarrow0$ as $i\rightarrow+\infty$, we say that the sequence $\{K_{i}\}$ converges to $K$. The polar body $K^*$ of $K\in \mathcal{K}^n_o$ is given by \begin{align*} K^*=\{ x\in\mathbb{R}^n: x\cdot y\leq1~\text{for~all}~y\in K\}. \end{align*} It is clear that $K^*\in \mathcal{K}^n_o$ and $K=(K^{*})^*$. The following important relations hold on $\mathbb{R}^n\setminus\{0\}$ between $K$ and its polar body $K^*$: \begin{align*} h_K=1/\rho_{K^*}\quad \text{and}\quad \rho_K=1/h_{K^*}. \end{align*} Then, for $K,K_i\in \mathcal{K}^n_o$, we can obtain the following result: \begin{align*} K_i\rightarrow K\quad \text{if and only if} \quad K^*_i\rightarrow K^*. \end{align*} For $f\in C^+(S^{n-1})$, the Wulff shape $[f]$ of $f$ is defined by \begin{align*} [f]=\{ x\in\mathbb{R}^n: x\cdot u\leq f(u)~\text{for~all}~ u\in S^{n-1}\}. \end{align*} It is not hard to see that $[f]$ is a convex body in $\mathbb{R}^n$ and $h_{[f]}\leq f$. In addition, $[h_K]=K$ for all $K\in \mathcal{K}^n_o$. By the concept of the Wulff shape, the $L_p$ Minkowski combination can be defined for all $p\in\mathbb{R}$. When $p\neq 0$, for $K,L\in \mathcal{K}^n_o$ and $s,t\in\mathbb{R}$ satisfying that $sh_K^p+th_L^p$ is strictly positive on $S^{n-1}$, the $L_p$ Minkowski combination $s\cdot K+_pt\cdot L$ is defined by \begin{align}\label{Lp-Min-comb} s\cdot K+_pt\cdot L=[(sh_K^p+th_L^p)^{1/p}]. \end{align} When $p=0$, the $L_p$ Minkowski combination $s\cdot K+_0t\cdot L$ is defined by \begin{align*} s\cdot K+_0t\cdot L=[h_K^sh_L^t]. \end{align*}
\vskip 0.5cm
\section{The proof of Theorem \ref{thm0-1}}\label{}
\vskip 0.3cm
In this section, we consider the continuity of the solution to the $L_p$ Gaussian Minkowski problem and obtain the result for $p\geq1$.
By the variational formula \eqref{Lp-Gaussian-Measure} and the Ehrhard inequality, the following Minkowski-type inequality was obtained in \cite{HuangXZ,Liu}.
\begin{lem}[\cite{HuangXZ}]\label{Conti-Min-ineq} Suppose $K,L\in\mathcal{K}_o^n$. Then, for $p\geq 1$, \begin{align*} \frac{1}{p}\int_{S^{n-1}}\big(h_L^p(u)-h^p_K(u)\big)dS_{p,\gamma_n}(K,u)\geq\gamma_n(K)\log\frac{\gamma_n(L)}{\gamma_n(K)}, \end{align*} with equality if and only if $K=L$. \end{lem}
The following lemmas will be needed.
\begin{lem}\label{Conti-cos-bound} Suppose $p\geq 1$ and $K_i\in \mathcal{K}^n_o$ for $i=0,1,2,\cdots$. If the sequence $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, then there exists a constant $c_1>0$ such that \begin{align} \int_{S^{n-1}}(u\cdot v)_+^pdS_{p,\gamma_n}(K_i,v)\geq c_1, \end{align} for all $u\in S^{n-1}$ and $i\in\{0,1,2,\cdots\}$, where $(u\cdot v)_+=\max\{0,u\cdot v\}$. \end{lem}
\begin{proof} For $x\in \mathbb{R}^{n}$, let $$ g_i(x)=\int_{S^{n-1}}(x\cdot v)_+^pdS_{p,\gamma_n}(K_i,v). $$ It is not hard to see that $g_i$ is a sublinear function on $\mathbb{R}^{n}$. Then, there exists a compact convex set in $\mathbb{R}^{n}$ whose support function is equal to $g_i$. By the fact that $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, the sequence $\{g_i\}$ converges to $g_0$ pointwise. Since pointwise and uniform convergence of support functions are equivalent on $S^{n-1}$, $\{g_i\}$ converges to $g_0$ on $S^{n-1}$ uniformly. Since $S_{p,\gamma_n}(K_0,\cdot)$ is not concentrated in any closed hemisphere of $S^{n-1}$, we have $g_0>0$ on $S^{n-1}$. Together with the compactness of $S^{n-1}$ and the continuity of $g_0$ on $S^{n-1}$, we obtain that there exists a constant $c_2>0$ such that \begin{align*} g_0(u)\geq c_2, \end{align*} for all $u\in S^{n-1}$. Since $\{g_i\}$ converges to $g_0$ uniformly on $S^{n-1}$, and each $g_i$ is positive and continuous on $S^{n-1}$ (because $S_{p,\gamma_n}(K_i,\cdot)$ is also not concentrated in any closed hemisphere), there exists a constant $c_1>0$ such that \begin{align*} g_i(u)\geq c_1, \end{align*} for all $u\in S^{n-1}$ and $i=0,1,2,\cdots$. \end{proof}
The weak convergence of the $L_p$ Gaussian surface area measures implies that the sequence of the corresponding convex bodies is bounded.
\begin{lem}\label{Conti-bound} Suppose $p\geq 1$ and $K_i\in \mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=0,1,2,\cdots$. If the sequence $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, then the sequence $\{K_i\}$ is bounded. \end{lem}
\begin{proof} For $K\in\mathcal{K}^n_o$, the function $\Phi_p$ is defined by \begin{align}\label{Phi} \Phi_p(K)=-\frac{1}{p\gamma_n(K)}\int_{S^{n-1}}h^p_K(u)dS_{p,\gamma_n}(K,u)+\log\gamma_n(K). \end{align} From Lemma \ref{Conti-Min-ineq} and $\gamma_n(K_i)\geq1/2$, we have \begin{align*} \Phi_p(K_i)&=-\frac{1}{p\gamma_n(K_i)}\int_{S^{n-1}} h^p_{K_i}(u)dS_{p,\gamma_n}(K_i,u)+\log\gamma_n(K_i)\\ &\geq-\frac{1}{p\gamma_n(K_i)}\int_{S^{n-1}}h^p_{B_n}(u)dS_{p,\gamma_n}(K_i,u)+\log\gamma_n(B_n)\\ &=-\frac{|S_{p,\gamma_n}(K_i,\cdot)|}{p\gamma_n(K_i)}+\log\gamma_n(B_n)\\ &\geq-\frac{2}{p}|S_{p,\gamma_n}(K_i,\cdot)|+\log\gamma_n(B_n). \end{align*} Since the sequence $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, the sequence of total masses $\{|S_{p,\gamma_n}(K_i,\cdot)|\}$ converges to $|S_{p,\gamma_n}(K_0,\cdot)|$. Together with the fact that $|S_{p,\gamma_n}(K_0,\cdot)|$ is finite, there exists a large enough constant $M_1>0$ satisfying \begin{align*} |S_{p,\gamma_n}(K_i,\cdot)|\leq \frac{p}{2}M_1, \end{align*} for all $i$.
Hence, for $i=1,2,\cdots$, \begin{align}\label{Conti-bound-1} \Phi_p(K_i)\geq-M_1+\log\gamma_n(B_n). \end{align} Since $K_i\in\mathcal{K}^n_o$, the radial function $\rho_{K_i}$ is continuous on $S^{n-1}$. By the compactness of $S^{n-1}$, we obtain that there exists $u_i\in S^{n-1}$ such that \begin{align*} \rho_{K_i}(u_i)=\max\{\rho_{K_i}(u): u\in S^{n-1}\}. \end{align*} Let $R_i=\rho_{K_i}(u_i)$. Then $R_iu_i\in K_i$ and $K_i\subseteq R_iB_n$. By the definition of the support function, \begin{align*} h_{K_i}(v)\geq R_i(u_i\cdot v)_+, \end{align*} for all $v\in S^{n-1}$. Combining $p\geq1$ and $K_i\subseteq R_iB_n$ with Lemma \ref{Conti-cos-bound}, we have \begin{align*} \Phi_p(K_i)&=-\frac{1}{p\gamma_n(K_i)}\int_{S^{n-1}} h^p_{K_i}(v)dS_{p,\gamma_n}(K_i,v)+\log\gamma_n(K_i)\\ &\leq-\frac{R_i^p}{p\gamma_n(K_i)}\int_{S^{n-1}}(u_i\cdot v)_+^pdS_{p,\gamma_n}(K_i,v)+\log\gamma_n(K_i)\\ &\leq-\frac{R_i^p}{p\gamma_n(R_iB_n)}\int_{S^{n-1}}(u_i\cdot v)_+^pdS_{p,\gamma_n}(K_i,v)+\log\gamma_n(R_iB_n)\\ &\leq-\frac{c_1R_i^p}{p\gamma_n(R_iB_n)}+\log\gamma_n(R_iB_n). \end{align*} Suppose that $\{R_i\}$ is not a bounded sequence. Without loss of generality, we may assume that $\lim_{i\rightarrow+\infty}R_i=+\infty$. By polar coordinates, \begin{align*} \gamma_n(R_iB_n)=\frac{1}{(\sqrt{2\pi})^n}\int_{R_iB_n}e^{-\frac{|x|^2}{2}}dx =\frac{n\omega_n}{(\sqrt{2\pi})^n}\int_0^{R_i}t^{n-1}e^{-\frac{t^2}{2}}dt. \end{align*} By the fact that $n\omega_n=2\pi^\frac{n}{2}\big/\Gamma(\frac{n}{2})$ where $\Gamma(s)=\int_0^{+\infty}t^{s-1}e^{-t}dt$ is the Gamma function for $s>0$, \begin{align}\label{gamma=1} \lim_{i\rightarrow+\infty}\gamma_n(R_iB_n) &=\frac{n\omega_n}{(\sqrt{2\pi})^n}\int_0^{+\infty}t^{n-1}e^{-\frac{t^2}{2}}dt\nonumber\\ &=\frac{n\omega_n 2^{\frac{n}{2}-1}}{(\sqrt{2\pi})^n} \int_0^{+\infty}\left(\frac{t^2}{2}\right)^{\frac{n}{2}-1}e^{-\frac{t^2}{2}}tdt\nonumber\\ &=\frac{n\omega_n 2^{\frac{n}{2}-1}}{(\sqrt{2\pi})^n} \int_0^{+\infty}t^{\frac{n}{2}-1}e^{-t}dt\nonumber\\ &=\frac{2\pi^\frac{n}{2} 2^{\frac{n}{2}-1}}{\Gamma(\frac{n}{2})(\sqrt{2\pi})^n}\Gamma(\frac{n}{2})=1. \end{align} Together with $\lim_{i\rightarrow+\infty}R_i=+\infty$ and $p\geq 1$, we have \begin{align*} \Phi_p(K_i)\leq-\frac{c_1R_i^p}{p\gamma_n(R_iB_n)}+\log\gamma_n(R_iB_n) \rightarrow -\infty, \end{align*} as $i\rightarrow+\infty$, since the first term tends to $-\infty$ while $\log\gamma_n(R_iB_n)\rightarrow 0$ by \eqref{gamma=1}. This is a contradiction to \eqref{Conti-bound-1}. Therefore, $\{R_i\}$ is bounded, that is, the sequence $\{K_i\}$ is bounded. \end{proof}
\begin{lem}\label{interior} Suppose $K$ is a compact convex set in $\mathbb{R}^n$. If $\gamma_n(K)>0$, then $K$ is a convex body in $\mathbb{R}^n$, that is, $K\in \mathcal{K}^n$. \end{lem}
\begin{proof} By the definition of the Gaussian volume $\gamma_n$, we have \begin{align*} \gamma_n(K)=\frac{1}{(\sqrt{2\pi})^n}\int_Ke^{-\frac{|x|^2}{2}}dx \leq\frac{1}{(\sqrt{2\pi})^n}\int_Kdx=\frac{1}{(\sqrt{2\pi})^n}V_n(K). \end{align*} Together with $\gamma_n(K)>0$, \begin{align*} V_n(K)\geq(\sqrt{2\pi})^n\gamma_n(K)>0. \end{align*} Therefore, the compact convex set $K$ has nonempty interior in $\mathbb{R}^n$, that is, $K$ is a convex body in $\mathbb{R}^n$. \end{proof}
\begin{lem}\label{Conti-polar-bound} Suppose $K_i\in \mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=1,2,\cdots$. If the sequence $\{K_i\}$ converges to a compact convex set $L$ in the Hausdorff metric, then $L\in \mathcal{K}^n_o$. \end{lem}
\begin{proof} By the continuity of the Gaussian volume, \begin{align*} \gamma_n(L)=\lim_{i\rightarrow+\infty}\gamma_n(K_i)\geq1/2.
\end{align*} Together with Lemma \ref{interior}, we have that $L$ is a convex body in $\mathbb{R}^n$. Moreover, since $o\in K_i$ for every $i$ and $\{K_i\}$ converges to $L$, we have $o\in L$. Assume that $o\in \partial L$. Then, there exists a $u_0\in S^{n-1}$ such that $h_L(u_0)=0$. Since $\{K_i\}$ converges to $L$, we have \begin{align*} \lim_{i\rightarrow+\infty}h_{K_i}(u_0)=h_L(u_0)=0. \end{align*} For arbitrary $\varepsilon>0$, there exists a large enough integer $N_\varepsilon>1$ so that $h_{K_i}(u_0)<\varepsilon$ for all $i>N_\varepsilon$. Thus, $$K_i\subseteq \{x\in\mathbb{R}^n: x\cdot u_0\leq \varepsilon \}$$ for all $i>N_\varepsilon$. By the fact that $\{K_i\}$ converges to $L$ again, there exists a constant $R>0$ such that $K_i\subseteq B_n(R)$ for all $i$, where $B_n(R)$ is the ball of radius $R$ in $\mathbb{R}^n$. Hence, for all $i>N_\varepsilon$, \begin{align*} K_i\subseteq B_n(R)\cap\{x\in\mathbb{R}^n: x\cdot u_0\leq \varepsilon \}. \end{align*} By the following result: \begin{align*} \gamma_n(\mathbb{R}^n)=\frac{1}{(\sqrt{2\pi})^n}\int_{\mathbb{R}^n}e^{-\frac{|x|^2}{2}}dx=1, \end{align*} we obtain, for the halfspace $H^-(u_0)=\{x\in\mathbb{R}^n: x\cdot u_0\leq 0 \}$, \begin{align*} \gamma_n(H^-(u_0))=\frac{1}{(\sqrt{2\pi})^n}\int_{H^-(u_0)}e^{-\frac{|x|^2}{2}}dx =\frac{1}{2}\gamma_n(\mathbb{R}^n)=\frac{1}{2}. \end{align*} Together with $\gamma_n(H^-(u_0))=\gamma_n(H^-(u_0)\cap B_n(R))+\gamma_n(H^-(u_0)\backslash B_n(R))$, we have \begin{align*} \gamma_n(H^-(u_0)\cap B_n(R))<\frac{1}{2}. \end{align*} Hence, for $\varepsilon$ small enough, \begin{align*} \gamma_n(K_i)\leq \gamma_n\big( B_n(R)\cap\{x\in\mathbb{R}^n: x\cdot u_0\leq \varepsilon \}\big)<\frac{1}{2}, \end{align*} for all $i>N_\varepsilon$. This is a contradiction to the condition $\gamma_n(K_i)\geq1/2$ for $i=1,2,\cdots$. Therefore $o\notin\partial L$; together with $o\in L$, this shows that $o$ is an interior point of $L$, that is, $L\in \mathcal{K}^n_o$. \end{proof}
\begin{remark}\label{rem-1} From the proof of Lemma \ref{Conti-polar-bound}, we see that the condition $\gamma_n(K)\geq \frac{1}{2}$ for $K\in \mathcal{K}^n_o$ means that the origin $o$ cannot be too close to the boundary $\partial K$. If $L\in \mathcal{K}^n$ with $o\in \partial L$, then $\gamma_n(L)<\frac{1}{2}$. Hence, the condition ``$\gamma_n(K_i)\geq \frac{1}{2}$ for $K_i\in \mathcal{K}^n_o$'' is essential for Lemma \ref{Conti-polar-bound}. \end{remark}
The weak convergence of the Gaussian surface area measure was obtained in \cite{HuangXZ}:
\begin{lem}[\cite{HuangXZ}]\label{weak-conve} Let $K_i\in \mathcal{K}^n_o$ for $i=1,2,\cdots$ be such that $\{K_i\}$ converges to $K_0\in \mathcal{K}^n_o$ in the Hausdorff metric. Then $\{S_{\gamma_n,K_i}\}$ converges to $S_{\gamma_n,K_0}$ weakly. \end{lem}
By the variational formula \eqref{Lp-Gaussian-Measure} of the Gaussian volume $\gamma_n$ for the $L_p$ Minkowski combination, the integral expression of the $L_p$ Gaussian surface area measure was obtained in \cite{HuangXZ,Liu,LvWang}.
\begin{lem}[\cite{HuangXZ,Liu,LvWang}]\label{definition-Lp-Gaussian} Suppose $p\in \mathbb{R}$ and $K\in \mathcal{K}^n_o$. For each Borel set $\eta\subseteq S^{n-1}$, the $L_p$ Gaussian surface area measure $S_{p,\gamma_n}(K,\cdot)$ of $K$ is given by \begin{align*} S_{p,\gamma_n}(K,\eta) &=\frac{1}{(\sqrt{2\pi})^n} \int_{\nu^{-1}_K(\eta)}(x\cdot\nu_K(x))^{1-p}e^{-\frac{|x|^2}{2}}d\mathcal{H}^{n-1}(x)\\ &=\int_{\eta}h_K^{1-p}(u)dS_{\gamma_n,K}(u). \end{align*} Here, $\nu_K: \partial' K \rightarrow S^{n-1}$ is the Gauss map of $K$ and $\mathcal{H}^{n-1}$ is the ($n-1$)-dimensional Hausdorff measure.
\end{lem} Since $\{K_i\}$ converges to $K_0$ in the Hausdorff metric for $K_i\in \mathcal{K}^n_o$$(i=0,1,\cdots)$, then $\{h_{K_i}\}$ converges to $h_{K_0}$ uniformly on $S^{n-1}$. Together with the Lemma \ref{weak-conve} and Lemma \ref{definition-Lp-Gaussian}, we obtain the weak convergence of $L_p$ Gaussian surface area measures as follows: \begin{pro}\label{weak-conve-p} Suppose $p\in \mathbb{R}$ and $K_i\in \mathcal{K}^n_o$ for $i=0,1,\cdots$. If $\{K_i\}$ converges to $K_0$ in the Hausdorff metric, then $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly. \end{pro} The uniqueness of the solution to the $L_p$ Gaussian Minkowski problem is obtained in \cite{HuangXZ,Liu}: \begin{lem}[\cite{HuangXZ,Liu}]\label{uniqueness} Let $p\geq1$ and $K,L\in\mathcal{K}^n_o$ with $\gamma_n(K),\gamma_n(L)\geq1/2$. If \begin{align*} S_{p,\gamma_n}(K,\cdot)=S_{p,\gamma_n}(L,\cdot), \end{align*} then, $K=L$. \end{lem} The continuity of the solution to the $L_p$ Gaussian Minkowski problem is obtained for $\gamma_n(\cdot)\geq1/2$. Theorem \ref{thm0-1} is rewritten as Theorem \ref{continuity-1}: \begin{thm}\label{continuity-1} Suppose $p\geq1$ and $K_i\in\mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=0,1,2,\cdots$. If the sequence $\{S_{p,\gamma_n}(K_i,\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, then the sequence $\{K_i\}$ converges to $K_0$ in the Hausdorff metric. \end{thm} \begin{proof} Assume that the sequence $\{K_i\}$ does not converge to $K_0$. Then, without loss of generality, we may assume that there exists a constant $\varepsilon_0>0$ such that \begin{align*} \|h_{K_i}-h_{K_0}\|\geq \varepsilon_0, \end{align*} for all $i=1,2,\cdots$. By Lemma \ref{Conti-bound}, $\{K_i\}$ is bounded. Thus, from Blaschke selection theorem, the sequence $\{K_i\}$ has a convergent subsequence $\{K_{i_j}\}$ which converges to a compact convex set $L_0$. Clearly, $L_0\neq K_0$. Together with the continuity of $\gamma_n$ and $\gamma_n(K_{i_j})\geq1/2$ for $i=1,2,\cdots$, we have \begin{align*} \gamma_n(L_0)=\lim_{j\rightarrow+\infty}\gamma_n(K_{i_j})\geq1/2. \end{align*} By $\lim_{j\rightarrow+\infty}K_{i_j}=L_0$ and $\gamma_n(K_{i_j})\geq1/2$ for $i=1,2,\cdots$ again, $L_0\in\mathcal{K}^n_o$ with $L_0\neq K_0$ from Lemma \ref{Conti-polar-bound}. Since $\{K_{i_j}\}$ converges to $L_0$ in Hausdorff metric, then $\{S_{p,\gamma_n}(K_{i_j},\cdot)\}$ converges to $S_{p,\gamma_n}(L_0,\cdot)$ weakly by Lemma \ref{weak-conve-p}. Together with $\{S_{p,\gamma_n}(K_{i_j},\cdot)\}$ converges to $S_{p,\gamma_n}(K_0,\cdot)$ weakly, we have \begin{align*} S_{p,\gamma_n}(K_0,\cdot)=S_{p,\gamma_n}(L_0,\cdot). \end{align*} By $\gamma_n(K_0),\gamma_n(L_0)\geq1/2$ and Lemma \ref{uniqueness}, we obtain $K_0=L_0$. This is a contradiction to $K_0\neq L_0$. Therefore, the sequence $\{K_i\}$ converges to $K_0$ in the Hausdorff metric. \end{proof} \vskip 0.5cm \section{The proof of Theorem \ref{thm0-2}}\label{} \vskip 0.3cm In this section, we mainly prove Theorem \ref{thm0-2}. The following lemma will be needed. \begin{lem}\label{bound-2} Suppose $p_i\geq1$ and $K_i\in\mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=0,1,2,\cdots$. If $S_{p_i,\gamma_n}(K_i,\cdot)=S_{p_0,\gamma_n}(K_0,\cdot)$ with $\lim_{i\rightarrow+\infty}p_i=p_0$, then the sequence $\{K_i\}$ is bounded. \end{lem} \begin{proof} By $p_i\geq1$ and $\lim_{i\rightarrow+\infty}p_i=p_0$, without loss of generality, we may assume that \begin{align}\label{pi} 1\leq p_i<2p_0, \end{align} for all $i$. 
From \eqref{Phi}, $S_{p_i,\gamma_n}(K_i,\cdot)=S_{p_0,\gamma_n}(K_0,\cdot)$, Lemma \ref{Conti-Min-ineq} and $\gamma_n(K_i)\geq1/2$, we have \begin{align}\label{Conti-bound-2} \Phi_{p_i}(K_i) &=-\frac{1}{p_i\gamma_n(K_i)}\int_{S^{n-1}} h^{p_i}_{K_i}(u)dS_{p_i,\gamma_n}(K_i,u)+\log\gamma_n(K_i)\nonumber\\ &\geq-\frac{1}{p_i\gamma_n(K_i)}\int_{S^{n-1}}h^{p_i}_{B_n}(u)dS_{p_i,\gamma_n}(K_i,u)+\log\gamma_n(B_n)\nonumber\\ &=-\frac{|S_{p_i,\gamma_n}(K_i,\cdot)|}{p_i\gamma_n(K_i)}+\log\gamma_n(B_n)\nonumber\\ &\geq-\frac{2}{p_i}|S_{p_0,\gamma_n}(K_0,\cdot)|+\log\gamma_n(B_n)\nonumber\\ &\geq-2|S_{p_0,\gamma_n}(K_0,\cdot)|+\log\gamma_n(B_n) \end{align} Let \begin{align*} R_i=\rho_{K_i}(u_i)=\max\{\rho_{K_i}(u): u\in S^{n-1}\}, \end{align*} where $u_i\in S^{n-1}$. Then, $R_iu_i\in K_i$ and $K_i\subseteq R_iB_n$. Thus, \begin{align*} h_{K_i}(v)\geq R_i(u_i\cdot v)_+, \end{align*} for all $v\in S^{n-1}$ and $\gamma_n(K_i)\leq \gamma_n(R_iB_n)$. Since $S_{p_0,\gamma_n}(K_0,\cdot)$ is not concentrated in any closed hemisphere of $S^{n-1}$ and $S^{n-1}$ is a compact set, then there exists a constant $c_3>0$ such that \begin{align*} \int_{S^{n-1}}(u\cdot v)_+^{2p_0}dS_{p_0,\gamma_n}(K_0,v)\geq c_3, \end{align*} for $u\in S^{n-1}$. Thus, \begin{align*} \Phi_{p_i}(K_i) &=-\frac{1}{p_i\gamma_n(K_i)}\int_{S^{n-1}} h^{p_i}_{K_i}(v)dS_{p_i,\gamma_n}(K_i,v)+\log\gamma_n(K_i)\\ &=-\frac{1}{p_i\gamma_n(K_i)}\int_{S^{n-1}} h^{p_i}_{K_i}(v)dS_{p_0,\gamma_n}(K_0,v)+\log\gamma_n(K_i)\\ &\leq-\frac{R_i^{p_i}}{p_i\gamma_n(K_i)}\int_{S^{n-1}}(u_i\cdot v)_+^{p_i}dS_{p_0,\gamma_n}(K_0,v)+\log\gamma_n(K_i)\\ &\leq-\frac{R_i^{p_i}}{2p_0\gamma_n(R_iB_n)}\int_{S^{n-1}}(u_i\cdot v)_+^{2p_0}dS_{p_0,\gamma_n}(K_0,v)+\log\gamma_n(R_iB_n)\\ &\leq-\frac{c_3R_i^{p_i}}{2p_0\gamma_n(R_iB_n)}+\log\gamma_n(R_iB_n). \end{align*} Assume that $\{R_i\}$ is not bounded. Without loss of generality, we may assume that $\lim_{i\rightarrow+\infty}R_i=+\infty$. Together with \eqref{gamma=1} and \eqref{pi}, we have \begin{align*} \Phi_{p_i}(K_i)\leq-\frac{c_3R_i^{p_i}}{2p_0\gamma_n(R_iB_n)}+\log\gamma_n(R_iB_n) \rightarrow -\infty, \end{align*} as $i\rightarrow+\infty$. This is a contradiction to \eqref{Conti-bound-2}. Hence, $\{R_i\}$ is bounded, that is, the sequence $\{K_i\}$ is bounded. \end{proof} Theorem \ref{thm0-2} is rewritten as the following Theorem \ref{continuity-2}. \begin{thm}\label{continuity-2} Suppose $p_i\geq1$ and $K_i\in\mathcal{K}^n_o$ with $\gamma_n(K_i)\geq1/2$ for $i=0,1,2,\cdots$. If $S_{p_i,\gamma_n}(K_i,\cdot)=S_{p_0,\gamma_n}(K_0,\cdot)$, then the sequence $\{K_i\}$ converges to $K_0$ in the Hausdorff metric as $\{p_i\}$ converges to $p_0$. \end{thm} \begin{proof} Suppose that $\{K_i\}$ does not converge to $K_0$ in the Hausdorff metric. Then, there exist a constant $\varepsilon_1>0$ and a subsequence of $\{K_i\}$, denoted by $\{K_i\}$ again, such that \begin{align*} \|h_{K_i}-h_{K_0}\|\geq \varepsilon_1, \end{align*} for all $i=1,2,\cdots$. By Lemma \ref{bound-2}, $\{K_i\}$ is bounded. By the Blaschke selection theorem, we have $\{K_i\}$ has a convergent subsequence $\{K_{i_j}\}$ which converges to a compact convex set $L_0$. Clearly, $L_0\neq K_0$. Together with the continuity of $\gamma_n$ and $\gamma_n(K_{i_j})\geq1/2$ for $i=1,2,\cdots$, we have \begin{align*} \gamma_n(L_0)=\lim_{j\rightarrow+\infty}\gamma_n(K_{i_j})\geq1/2. \end{align*} By $\lim_{j\rightarrow+\infty}K_{i_j}=L_0$ and $\gamma_n(K_{i_j})\geq1/2$ for $i=1,2,\cdots$ again, $L_0\in\mathcal{K}^n_o$ with $L_0\neq K_0$ from Lemma \ref{Conti-polar-bound}. 
Since $\{K_{i_j}\}$ converges to $L_0$ in the Hausdorff metric, $h_{K_{i_j}}\rightarrow h_{L_0}$ uniformly and $S_{\gamma_n,K_{i_j}}\rightarrow S_{\gamma_n,L_{0}}$ weakly as $j\rightarrow+\infty$. Together with $\lim_{i\rightarrow+\infty}p_i=p_0$, we have that $\{S_{p_{i_j},\gamma_n}(K_{i_j},\cdot)\}$ converges to $S_{p_0,\gamma_n}(L_0,\cdot)$ weakly. By $S_{p_i,\gamma_n}(K_i,\cdot)=S_{p_0,\gamma_n}(K_0,\cdot)$, \begin{align*} S_{p_0,\gamma_n}(K_0,\cdot)=S_{p_0,\gamma_n}(L_0,\cdot). \end{align*} By $\gamma_n(K_0),\gamma_n(L_0)\geq1/2$ and Lemma \ref{uniqueness}, we obtain $K_0=L_0$. This is a contradiction to $K_0\neq L_0$. Therefore, the sequence $\{K_i\}$ converges to $K_0$ in the Hausdorff metric. \end{proof}
\begin{remark} Since Lemma \ref{Conti-polar-bound} plays a vital role in the proofs of Theorem \ref{thm0-1} and Theorem \ref{thm0-2}, the condition ``$\gamma_n(K_i)\geq \frac{1}{2}$ for $K_i\in \mathcal{K}^n_o$'' is necessary for Theorem \ref{thm0-1} and Theorem \ref{thm0-2} by Remark \ref{rem-1}. \end{remark}
\vskip 0.3 cm
\section{Introduction} One of the joys of working in a metric space is that the closure of a set coincides with its \textit{sequential closure}. In particular, if $X$ is a metric space, $A$ is a subset of $X$, and $b$ is in the closure of $A$, then there exists a sequence of elements in $A$ which converges to $b$. In \cite{Invariant}, Simon showed that global types which are finitely satisfiable over a countable model of a countable NIP theory admit a similar property. Let $T$ be a complete, first-order theory, $\mathcal{U}$ a monster model of $T$, and $M$ a small submodel of $\mathcal{U}$. Simon proved the following (\cite[Lemma 2.8]{Invariant}): \begin{theorem}\label{sim:conv} Let $T$ be a countable NIP theory. Suppose $p$ is a type in $S_{x}(\mathcal{U})$ and $p$ is finitely satisfiable over $M$ where $|M| = \aleph_0$. Then there exists a sequence of points $(a_{i})_{i \in \omega}$ in $M^{x}$ such that $\lim_{i\to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. \end{theorem} One of the goals of this paper is to \textit{morally} generalize the proof of the above theorem in two different directions. By mimicking Simon's proof, we are able to prove the following, \begin{enumerate}[(T$1$)] \item Let $T$ be any countable theory. Suppose $p$ is a type in $S_{x}(\mathcal{U})$ and $p$ is generically stable over $M$. Then there exists a sequence of points $(a_i)_{i \in \omega}$ in $M^{x}$ such that $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. \item Let $T$ be a countable NIP theory. Suppose $\mu$ is a Keisler measure in $\mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is finitely satisfiable over $M$ where $|M| = \aleph_0$. Then there exists a sequence of points $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{< \omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$. More explicitly, for any formula $\varphi(x)$ in $\mathcal{L}_{x}(\mathcal{U})$, we have that \begin{equation*} \lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i})(\varphi(x)) = \mu(\varphi(x)). \end{equation*} \end{enumerate} The proofs of both of these theorems are slightly more \textit{enjoyable} than one would anticipate. For example, we already know many diverse and useful approximation theorems for measures in NIP theories (and some for generically stable types in arbitrary theories) and so one might expect that our proofs rely on composing approximation techniques. However, stringing together different approximation methods can result in an array with some kind of \textit{modes-of-convergence} problem. As stated previously, the technique used to prove both these theorems mimics the argument used in \cite[Lemma 2.8]{Invariant}. In the generically stable case, the set up is identical: Suppose $p$ is in $S_{x}(\mathcal{U})$ where $p$ is generically stable over $M$ and $I$ is a Morley sequence in $p$ over $M$. As in Simon's proof, we use both $M$ and $I$ to find an eventually indiscernible sequence of points in $M^{x}$ which converge to $p|_{MI}$. The \textit{eventual EM-type} of this sequence over $M$ is precisely $p^{(\omega)}|_{M}$. Using generic stability and compactness, we conclude that this sequence must converge to $p$. Our proof of the Keisler measure case is slightly more exotic since there is no standard notion of a ``Morley sequence in a Keisler measure". The proof we provide is \textit{essentially} done in first order model theory (with an important exceptional lemma following from Ben Yaacov's work on randomizations \cite{Ben}). 
We expect that there exists other proofs using other methods such as continuous model theory\footnote{In fact, after this paper was posted to arXiv, another proof was discovered by Khanaki using BFT on an infinite product space \cite{Khanaki3}.}. The proof we give here embraces the ideology first developed in \cite{HPS} and shows that this can be resolved by replacing the Morley sequence (in Simon's proof) by a \textit{smooth sequence in $\mu$ over $M$}. This provides more evidence for the intuition that smooth measures can play the role of realized types, at least in the NIP context. After constructing a countable model $N_{\omega}$ ``containing this sequence", we find a sequence of points in $(M^{x})^{<\omega}$ such that the corresponding average measures on these tuples converge to $\mu|_{N_{\omega}}$. After constructing an eventually indiscernible subsequence in this context, we are able to readapt most of Simon's proof technique by making use of known approximation theorems, symmetry properties, and some basic integration techniques. It is interesting to note that one can give another equivalent characterization of generically stable measures in NIP theories using smooth sequences. This characterization highlights the connection between generically stable types and generically stable measures. Recall that a type $p$ is generically stable over a model $M$ if for every Morley sequence $(a_i)_{i \in \omega}$ in $p$ over $M$, $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. We show that in an NIP theory, a measure $\mu$ is generically stable over a model $M$ if and only if for every \textit{smooth sequence} in $\mu$ over $M$, the limit of this sequence is precisely $\mu$. In addition to proving these theorems, we also introduce the classes of \textit{sequentially approximated measures} and \textit{sequentially approximated types}. These definitions can be seen as the \textit{global analogue} to Khanaki's definition of \textit{Baire 1 definability} for local types (see \cite{Khanaki2}). Sequentially approximated measures should be thought of as a ``halfway point" between finitely approximated measures and Keisler measures which are finitely satisfiable over a small model. For instance, we show that a Keisler measure is finitely approximated if and only if it is both definable and sequentially approximated (Proposition \ref{Mazur}) and sequentially approximated measures commute with definable measures (Proposition \ref{prop:com}). Sequentially approximated types remain a little more mysterious. We show that there exists a type such that its corresponding Keisler measure is sequentially approximated (even finitely approximated), but the type itself is not sequentially approximated (Proposition \ref{Gabe}). In the last section, we consider connections to the local measure case and generalize the main result in \cite{GannNIP} (Theorem \ref{main:Gan}). Explicitly, the main result in \cite{GannNIP} demonstrates that if a formula $\varphi$ is NIP and $\mu$ is a $\varphi$-measure which is $\varphi$-definable and finitely satisfiable over a \textit{countable model}, then $\mu$ is $\varphi$-finitely approximated in said model. Here, we demonstrate that \textit{countable} can be replaced by \textit{small}. This paper is structured as follows: In section 2, we discuss preliminaries. In section 3, we describe sequentially approximated measures and sequentially approximated types. 
In section 4, we show that if $p$ is generically stable over $M$, then $p$ is sequentially approximated over $M$. We also give some examples of types which are not sequentially approximated at the end of the section. In section 5, we show that if $T$ is a countable NIP theory, and $\mu$ is finitely satisfiable over a countable model $M$, then $\mu$ is sequentially approximated over $M$. We then give an equivalent characterization of generically stable measures in NIP theories using smooth sequences. In section 6, we generalize the main theorem in \cite{GannNIP}. \subsection*{Acknowledgements} We would like to thank Gabriel Conant, James Hanson, Karim Khanaki, Pierre Simon and our Ph.D. defense committee Daniel Hoffmann, Anand Pillay, Sergei Starchenko, and Minh Chieu Tran for helpful discussions and comments. Thanks also to the referee for many helpful comments. This paper was also partially supported by the NSF research grant DMS-1800806 as well as the NSF CAREER grant DMS-1651321. \section{Preliminaries} If $r$ and $s$ are real numbers and $\epsilon$ is a real number greater than $0$, then we write $r \approx_{\epsilon} s$ to mean $|r - s| < \epsilon$. Fix $\mathcal{L}$ a countable language. Throughout this paper, we always have a countable, complete, first-order theory $T$ and a monster model $\mathcal{U}$ of $T$ in the background. The letters $M$ and $N$ will be used to denote small elementary submodels of $\mathcal{U}$. The letters $x,y,z$ will denote tuples of variables. If $A \subseteq \mathcal{U}$, we let $\mathcal{L}(A)$ be the collection of formulas with parameters from $A$ (modulo logical equivalence). A formula in $\mathcal{L}(A)$ is called an ``$\mathcal{L}(A)$-formula''. If $x_0,...,x_k$ is a finite sequence of pairwise disjoint tuples of variables, we let $\mathcal{L}_{x_0,...,x_k}(A)$ be the collection of $\mathcal{L}(A)$-formulas with free variables in these tuples. We write $\mathcal{L}_{x_{0},...,x_{k}}(\emptyset)$ simply as $\mathcal{L}_{x_{0},...,x_{k}}$. If $(x_i)_{i \in \omega}$ is a countable sequence of pairwise distinct tuples of variables, we let $\mathcal{L}_{(x_i)_{i \in \omega}}(A) = \bigcup_{k \in \omega} \mathcal{L}_{x_0,...,x_k}(A)$. For a tuple $x$, let $A^{x}= \{(a_0,...,a_{|x|-1}): a_i \in A, i \leq |x|-1\}$. We let $(A^{x})^{<\omega}$ be the collection of all finite sequences of points in $A^{x}$. If we call $\varphi(x,y)$ a \textit{partitioned $\mathcal{L}_{x,y}(\mathcal{U})$-formula}, we treat $x$ as object variables and $y$ as parameter variables. The formula $\varphi^{*}(y,x)$ denotes the exact same formula as $\varphi(x,y)$, but with the roles of object and parameter tuples exchanged. Generally speaking, in any instance where we have multiple tuples of variables (e.g. $x$ and $y$, or $(x_1,x_2,x_3,...)$), we will always assume they are pairwise distinct without comment. \textbf{Unlike similar papers about Keisler measures, we do not identify a type and its corresponding Keisler measure}. We let $S_{x}(A)$ denote the usual type space over $A$ and $\mathfrak{M}_{x}(A)$ the space of Keisler measures over $A$. We let $\mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$ be the collection of finitely additive probability measures on $\mathcal{L}_{(x_{i})_{i \in \omega}}(\mathcal{U})$. For any (tuple of) variable(s) $x$, and any subset $A \subseteq \mathcal{U}$, we have a map $\delta: S_{x}(A) \to \mathfrak{M}_{x}(A)$ via $\delta(p) = \delta_{p}$ where $\delta_{p}$ is the \textit{Dirac measure at the type $p$}.
We sometimes refer to $\delta_{p}$ as the \textit{corresponding Keisler measure} of $p$. If $\overline{a} = (a_1,...,a_n)$ is a sequence of points in $\mathcal{U}^{x}$, then we let $\operatorname{Av}(\overline{a})$ be the associated average measure in $\mathfrak{M}_{x}(\mathcal{U})$. Explicitly, for any $\psi(x) \in \mathcal{L}_{x}(\mathcal{U})$, we define \begin{equation*}\operatorname{Av}(\overline{a})(\psi(x)) = \frac{|\{1\leq i \leq n: \mathcal{U} \models \psi(a_i)\}|}{n}. \end{equation*} \subsection{Basics of convergence} Recall that if $A \subseteq \mathcal{U}$, then both $S_{x}(A)$ and $\mathfrak{M}_{x}(A)$ carry a natural compact Hausdorff topology. For $S_{x}(A)$, we have the usual Stone space topology. Similarly, $\mathfrak{M}_{x}(A)$ admits a compact Hausdorff topology. There are two ways to describe this topology. First, this topology is the topology induced from the compact Hausdorff space $[0,1]^{\mathcal{L}_{x}(A)}$ where we identify each measure with the obvious map from $\mathcal{L}_{x}(A)$ to $[0,1]$. This topology on $\mathfrak{M}_{x}(A)$ can also be described as the coarsest topology such that for any continuous function $f: S_{x}(A) \to \mathbb{R}$, the map $\int f : \mathfrak{M}_{x}(A) \to \mathbb{R}$ is continuous. We will routinely need to keep track of which sets of parameters our types and measures are converging over. Hence, we establish the following conventions. \begin{definition} Fix $A \subseteq \mathcal{U}$, $p \in S_{x}(A)$ and $\mu \in \mathfrak{M}_{x}(A)$. \begin{enumerate}[$(i)$] \item We say that a sequence of types $(p_{i})_{i \in \omega}$, where each $p_i$ is in $S_{x}(A)$, \textbf{converges} to $p$ if it converges in the Stone space topology on $S_{x}(A)$, which we write as ``$\lim_{i \to \infty} p_i = p$ in $S_{x}(A)$" or simply as ``$\lim_{i \to \infty} p_i = p$" when the underlying space is obvious. We recall that $\lim_{i \to \infty} p_i = p$ if for every $\psi(x) \in p$, there exists some natural number $N_{\psi}$ such that for any $n > N_{\psi}$, $\psi(x) \in p_n$. \item We say that a sequence of measures $(\mu_i)_{i \in \omega}$, where each $\mu_{i}$ is in $\mathfrak{M}_{x}(A)$, \textbf{converges} to $\mu$ if this sequence converges in the compact Hausdorff topology on $\mathfrak{M}_{x}(A)$, which we write as ``$\lim_{i \to \infty} \mu_{i} = \mu$ in $\mathfrak{M}_{x}(A)$" or simply as ``$\lim_{i \to \infty} \mu_i = \mu$" when there is no possibility of confusion. Notice that $\lim_{i \to \infty} \mu_i = \mu$ if for every $\psi(x) \in \mathcal{L}_{x}(A)$ and $\epsilon >0$, there exists some natural number $N_{\psi,\epsilon}$ such that for any $n > N_{\psi,\epsilon}$, \begin{equation*} |\mu_{n}(\psi(x)) - \mu(\psi(x))| < \epsilon. \end{equation*} \end{enumerate} \end{definition} We now observe the relationship between finitely satisfiable types and measures and topological closure in their respective spaces. \begin{fact}\label{Avcls} Suppose $p \in S_{x}(\mathcal{U})$, $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $M \prec \mathcal{U}$. Assume that $p$ and $\mu$ are finitely satisfiable over $M$. Then the following are true. \begin{enumerate}[($i$)] \item The type $p$ is in the closure of $\{tp(a/\mathcal{U}): a \in M^{x}\}$ in $S_{x}(\mathcal{U})$. \item The associated Keisler measure $\delta_{p}$ is in the closure of $\{\delta_{a}: a \in M^{x}\}$ in $\mathfrak{M}_{x}(\mathcal{U})$. 
\item The measure $\mu$ is in the closure of \begin{equation*} \Big\{\sum_{i=1}^{n} r_i \delta_{a_i}: n \in \mathbb{N}, r_i > 0, \sum_{i=1}^{n} r_i =1, a_i \in M^{x}\Big\} \end{equation*} in $\mathfrak{M}_{x}(\mathcal{U})$. \item The measure $\mu$ is in the closure of $\{\operatorname{Av}(\overline{a}): \overline{a} \in (M^{x})^{<\omega}\}$ in $\mathfrak{M}_{x}(\mathcal{U})$. \end{enumerate} \end{fact} We remark that the proof of $(i)$ is a standard exercise and the proof of $(ii)$ follows directly from $(i)$. A proof of $(iii)$ can be found at \cite[Proposition 2.11]{ChGan} and $(iv)$ follows directly from $(iii)$. \subsection{Types} We recall some basic definitions and facts about special kinds of types (e.g. generically stable types). Our notion of an \textit{EM-type} is not defined in complete generality since we are only concerned with countable sequences in this paper. \begin{definition} Let $(a_i)_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and let $B \subseteq \mathcal{U}$. Then the \textbf{Ehrenfeucht-Mostowski type} or \textbf{EM-type} of the sequence $(a_{i})_{i \in \omega}$ over $B$, denoted $\operatorname{EM}((a_{i})_{i \in \omega }/B)$, is the following partial type: \begin{equation*} \{\varphi(x_0,...,x_k) \in \mathcal{L}_{(x_i)_{i \in \omega}}(B): \mathcal{U} \models \varphi(a_{i_{0}},...,a_{i_{k}}) \text{ for any } i_0 <...<i_{k} \}. \end{equation*} We remark that this partial type corresponds to a subset of $S_{(x_{i})_{i \in \omega}}(B)$. \end{definition} \begin{observation} It is clear from the definition above that for any sequence of points $(a_{i})_{i \in \omega}$ in $\mathcal{U}^{x}$ and any $B \subseteq \mathcal{U}$, the type $\operatorname{EM}((a_{i})_{i \in \omega }/B)$ is complete if and only if the sequence $(a_{i})_{i \in \omega}$ is indiscernible over $B$. \end{observation} The general notion of a \textit{generically stable type} was introduced by Pillay and Tanovi\'{c} in \cite{PiTa}. The definition of a generically stable type provided below was proved to be equivalent in \cite{CoGan} (see Proposition 3.2). We also provide the definition of a $\operatorname{dfs}$ type which will be important throughout this paper. In general, the class of $\operatorname{dfs}$ types strictly contains the class of generically stable types. \begin{definition} Suppose that $p \in S_{x}(\mathcal{U})$. \begin{enumerate}[$(i)$] \item We say that $p$ is \textbf{dfs} if there exists a small model $M \prec \mathcal{U}$ such that $p$ is both definable and finitely satisfiable over $M$. In this case, we say that $p$ is \textbf{dfs over $M$}. \item We say that $p$ is \textbf{generically stable} if there exists a small model $M \prec \mathcal{U}$ such that $p$ is invariant over $M$ and for any Morley sequence $(a_i)_{i \in \omega}$ in $p$ over $M$, we have that $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$. In this case, we say that $p$ is \textbf{generically stable over $M$}. \end{enumerate} \end{definition} Finally, we provide a collection of standard facts about these classes of types. \begin{fact}\label{gfs:facts} Let $p$ be in $S_{x}(\mathcal{U})$ and $M \prec \mathcal{U}$. \begin{enumerate}[$(i)$] \item If $p$ is generically stable over $M$, then $p$ is $\operatorname{dfs}$ over $M$ $($\cite[Proposition 1]{PiTa}$)$. \item If $p$ is $\operatorname{dfs}$ over $M$, then any Morley sequence in $p$ over $M$ is totally indiscernible over $M$ $($\cite[Proposition 3.2]{HP}, proof does not use NIP$)$. 
\item If $p$ is generically stable/$\operatorname{dfs}$ over $M$ and $M_0$-invariant, then $p$ is respectively generically stable/$\operatorname{dfs}$ over $M_0$ $($generically stable case follows from $(i)$ of \cite[Proposition 1]{PiTa}; $\operatorname{dfs}$ case can be found in \cite[Lemma 2.8]{Sibook}$)$. \item $($T is countable$)$ If $p$ is generically stable/$\operatorname{dfs}$ over $M$, there exists an elementary submodel $M_0$ such that $|M_0| = \aleph_0$ and $p$ is generically stable/$\operatorname{dfs}$ over $M_0$ $($Easy to check from $(iii)$$)$. \item $($T is NIP$)$ If $p$ is $\operatorname{dfs}$ over $M$ then $p$ is generically stable over $M$ $($e.g. \cite[Theorem 2.29]{Sibook}$)$. \end{enumerate} \end{fact} \subsection{Keisler measures} In this subsection, we will briefly recall some important definitions and facts about these measures. As with any paper about Keisler measures, we provide the following \textit{standard atlas}. \begin{definition} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. \begin{enumerate}[($i$)] \item $\mu$ is \textbf{invariant} if there exists a model $M \prec \mathcal{U}$ such that for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$ and $b,b' \in \mathcal{U}^{y}$ such that $b \equiv_{M} b'$, $\mu(\varphi(x,b)) = \mu(\varphi(x,b'))$. In this case, we say that $\mu$ is \textbf{$M$-invariant} or \textbf{invariant over $M$}. \item If $\mu$ is invariant over $M$, then for every partitioned $\mathcal{L}(M)$-formula $\varphi(x,y)$, we can define the map $F_{\mu,M}^{\varphi}:S_{y}(M) \to [0,1]$ via $F_{\mu,M}^{\varphi}(q) = \mu(\varphi(x,b))$ where $b \models q$. When $M$ is obvious we will simply write $F_{\mu,M}^{\varphi}$ as $F_{\mu}^{\varphi}$. \item $\mu$ is \textbf{Borel-definable} if there exists a model $M \prec \mathcal{U}$ such that $\mu$ is $M$-invariant and for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, the map $F_{\mu,M}^{\varphi}$ is Borel. In this case, we say that $\mu$ is \textbf{Borel-definable over $M$}. \item $\mu$ is \textbf{definable} if there exists a model $M \prec \mathcal{U}$ such that $\mu$ is $M$-invariant and for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, the map $F_{\mu,M}^{\varphi}$ is continuous. In this case, we say that $\mu$ is \textbf{$M$-definable} or \textbf{definable over $M$}. \item $\mu$ is \textbf{finitely satisfiable over a small model} if there exists $M \prec \mathcal{U}$ such that for every formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{U})$, if $\mu(\varphi(x)) > 0$ then there exists $a \in M^{x}$ such that $\mathcal{U} \models \varphi(a)$. In this case, we say that $\mu$ is \textbf{finitely satisfiable over $M$}. \item $\mu$ is \textbf{finitely approximated} if there exists a model $M \prec \mathcal{U}$ such that for every partitioned $\mathcal{L}$-formula $\varphi(x,y)$ and every $\epsilon > 0$, there exists $\overline{a} \in (M^{x})^{<\omega}$ such that \begin{equation*} \sup_{b \in \mathcal{U}^{y}} |\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a})(\varphi(x,b))| < \epsilon. \end{equation*} In this case, we say that $\mu$ is \textbf{finitely approximated over $M$}. \item $\mu$ is \textbf{smooth} if there exists a model $M \prec \mathcal{U}$ such that for any $\lambda \in \mathfrak{M}_{x}(\mathcal{U})$ if $\lambda|_{M} = \mu|_{M}$, then $\lambda = \mu$. If this is the case, we say that $\mu$ is \textbf{smooth over $M$}. \end{enumerate} \end{definition} We now provide a collection of basic facts. 
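Before stating them, we note a simple instance of these definitions which may help the reader's intuition (the following computation follows directly from the definitions above and is only included for orientation). If $\overline{a} = (a_1,...,a_n) \in (M^{x})^{<\omega}$ and $\varphi(x,y)$ is a partitioned $\mathcal{L}(M)$-formula, then the average measure $\operatorname{Av}(\overline{a})$ is $M$-invariant, and for any $q \in S_{y}(M)$ and $b \models q$ we have \begin{equation*} F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(q) = \operatorname{Av}(\overline{a})(\varphi(x,b)) = \frac{1}{n}\sum_{i=1}^{n} \mathbf{1}_{\varphi(a_i,y)}(q), \end{equation*} where $\mathbf{1}_{\varphi(a_i,y)}$ denotes the characteristic function of the clopen set $[\varphi(a_i,y)] \subseteq S_{y}(M)$. In particular, $F_{\operatorname{Av}(\overline{a}),M}^{\varphi}$ is a finite rational convex combination of characteristic functions of clopen sets and hence continuous, so $\operatorname{Av}(\overline{a})$ is definable over $M$ (and it is clearly finitely satisfiable over $M$).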
Statements $(i)$, $(iii)$, $(iv)$, and $(v)$ in Fact \ref{KM:imp} are relatively straightforward to prove and so we leave them as exercises. \begin{fact}\label{KM:imp} Assume that $T$ is any theory and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ with $M \prec \mathcal{U}$. \begin{enumerate}[$(i)$] \item If $\mu = \operatorname{Av}(\overline{a})$ for some $\overline{a} \in (M^{x})^{<\omega}$, then $\mu$ is smooth over $M$. \item If $\mu$ is smooth over $M$, then $\mu$ is finitely approximated over $M$ $($e.g. \cite[Proposition 7.10]{Sibook}$)$. \item If $\mu$ is finitely approximated over $M$, then $\mu$ is both definable and finitely satisfiable over $M$. \item If $\mu$ is definable or finitely satisfiable over $M$, then $\mu$ is $M$-invariant. \item The measure $\mu$ is definable over $M$ if and only if for every partitioned $\mathcal{L}(M)$-formula $\varphi(x,y)$ and for every $\epsilon > 0$, there exist formulas $\psi_{1}(y),...,\psi_{n}(y) \in \mathcal{L}_{y}(M)$ and real numbers $r_1,...,r_n \in [0,1]$ such that \begin{equation*} \sup_{q \in S_{y}(M)} | F_{\mu,M}^{\varphi}(q) - \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_i(y)}(q)| < \epsilon, \end{equation*} where $\mathbf{1}_{\psi_{i}(y)}$ is the characteristic function of the clopen set $[\psi_{i}(y)]$. \end{enumerate} Moreover, if $T$ is NIP then the following also hold. \begin{enumerate}[$(vi)$] \item If $\mu$ is invariant over $M$, then $\mu$ is Borel-definable over $M$ $($e.g. \cite[Proposition 7.19]{Sibook}$)$. \item The measure $\mu$ is definable and finitely satisfiable over $M$ if and only if $\mu$ is finitely approximated over $M$ $($\cite[Proposition 3.2]{HPS}$)$. \item Every measure has a ``smooth extension''. In particular, for any given $M \prec \mathcal{U}$ and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, there exists some $N$ such that $M \prec N \prec \mathcal{U}$ and a measure $\lambda \in \mathfrak{M}_{x}(\mathcal{U})$ such that $\lambda$ is smooth over $N$ and $\lambda|_{M} = \mu|_{M}$ $($\cite[Lemma 2.2]{HPS}$)$. \end{enumerate} \end{fact} \begin{proposition}[T is countable]\label{m:countable} If $\mu$ is definable, finitely approximated, smooth, or $\operatorname{dfs}$, then there exists a countable model $M_0$ such that $\mu$ is definable, finitely approximated, smooth, or $\operatorname{dfs}$ over $M_0$ $($respectively$)$. \end{proposition} \begin{proof} We notice that the properties of definability and smoothness only require the existence of $\aleph_0$-many $\mathcal{L}(M)$-formulas (by \cite[Lemma 2.3]{HPS} and (v) of Fact \ref{KM:imp} respectively). If we choose an elementary submodel $M_0$ of $M$ containing the parameters from these formulas, then $\mu$ will have the desired property over $M_0$. Finitely approximated measures only require the existence of $\aleph_0$-many elements of $M$. Choosing an elementary submodel $M_0$ of $M$ containing these elements demonstrates that $\mu$ is finitely approximated over $M_0$. Finally, if $\mu$ is $\operatorname{dfs}$ then $\mu$ is definable over a countable model $M_{0}$. In particular, $\mu$ is invariant over $M_0$ and so $\mu$ is also finitely satisfiable over $M_0$ by the same argument as in \cite[Proposition 4.13]{GannNIP}. \end{proof} \begin{remark} Assuming $T$ is countable, there are measures (even types) which are finitely satisfiable over a small submodel, but are not finitely satisfiable over any countable submodel. See Proposition \ref{omega} and Remark \ref{example:coheir1} for an explicit example.
\end{remark} \begin{definition}Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, $\nu \in \mathfrak{M}_{y}(\mathcal{U})$ and assume that $\mu$ is Borel-definable over $M$. Then we define the \textbf{Morley product} of $\mu$ and $\nu$, denoted $\mu \otimes \nu$, to be the unique Keisler measure in $\mathfrak{M}_{x,y}(\mathcal{U})$ with the following property: for any formula $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$, \begin{equation*} \mu \otimes \nu (\varphi(x,y)) = \int_{S_{y}(N)} F_{\mu}^{\varphi} d(\nu|_{N}), \end{equation*} where $N$ is any small elementary submodel of $\mathcal{U}$ containing $M$ and any parameters from $\varphi$, and $\nu|_{N}$ is the regular Borel probability measure on the type space $S_{y}(N)$ associated to the restriction of $\nu$ to $N$. \end{definition} We remark that this product is well-defined and the computation does not depend on our choice of $N$ (assuming $N$ contains $M$ and all parameters in $\varphi(x,y)$) (see discussion after \cite[Proposition 7.19]{Sibook}). This observation allows us to grow or shrink the space over which we are integrating, and we will make substantial use of this property in Section 5. We end this section with a list of facts about measures and products. \begin{fact}\label{KM:imp2} Assume that $T$ is any theory and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, $\nu \in \mathfrak{M}_{y}(\mathcal{U})$, and $\lambda \in \mathfrak{M}_{z}(\mathcal{U})$. Assume that $\mu$ and $\nu$ are both $M$-invariant. \begin{enumerate}[$(i)$] \item If $\mu$ is smooth and $\nu$ is Borel definable, then $\mu \otimes \nu = \nu \otimes \mu$ $($see \cite[Corollary 2.5]{HPS}$)$. \item If $\mu$ and $\nu$ are definable (over $M$), then $\mu \otimes \nu$ is definable (over $M$) and $\mu \otimes (\nu \otimes \lambda) = (\mu \otimes \nu) \otimes \lambda$ $($see \cite[Proposition 2.6]{CoGan}$)$. \item If $\mu$ and $\nu$ are smooth (over $M$), then $\mu \otimes \nu$ is smooth (over $M$) $($e.g. \cite[Corollary 3.1]{CoGaNA}$)$. \item If $\mu$ is Borel definable (over $M$) and $\nu$ is invariant (over $M$), then $\mu \otimes \nu$ is invariant (over $M$) $($discussion before \cite[Exercise 7.20]{Sibook}$)$. \item If $\mu$ and $\nu$ are $\operatorname{dfs}$ (over $M$), then $\mu \otimes \nu$ is $\operatorname{dfs}$ (over $M$) $($e.g. \cite[Proposition 2.10]{CoGan}$)$. \end{enumerate} Moreover, if $T$ is NIP then the following also hold. \begin{enumerate}[$(a)$] \item If $\mu,\nu$ are invariant then $\mu \otimes (\nu \otimes \lambda) = (\mu \otimes \nu) \otimes \lambda$ $($see \cite{CoGaNA}$)$. \item If $\mu$ is $\operatorname{dfs}$ and $\nu$ is invariant, then $\mu \otimes \nu = \nu \otimes \mu$ $($see \cite[Theorem 3.2]{HPS}$)$. \end{enumerate} \end{fact} \begin{definition}[T is NIP]\label{prod:inf} Suppose that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is invariant. Then, we define the following measures: \begin{enumerate} \item $\mu^{(0)}(x_0) = \mu(x_0)$. \item $\mu^{(n)} = \mu(x_{n}) \otimes \mu^{(n-1)}(x_0,...,x_{n-1})$. \item $\mu^{(\omega)} = \bigcup_{n \in \omega} \mu^{(n)}$ (where $\mu^{(\omega)} \in \mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$). \end{enumerate} We note that $\mu^{(n)}$ and $\mu^{(\omega)}$ are well-defined by Fact \ref{KM:imp2}, and moreover we do not need to worry about the ordering of the parentheses in the product. \end{definition} \section{Sequentially approximated types and measures} We begin this section by isolating the property of \textit{sequential approximability}.
We again remark that these classes of objects are a global version of Khanaki's \textit{Baire 1 definability} \cite{Khanaki2}. We assume that $T$ is countable, but make no other global assumptions about $T$. As usual, $\mathcal{U}$ is a fixed sufficiently saturated model of $T$. We now define sequentially approximated types and measures. \begin{definition}\label{SA} Let $p \in S_{x}(\mathcal{U})$ and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. We say that, \begin{enumerate} \item $p$ is \textbf{sequentially approximated} if there exists $M \prec \mathcal{U}$ and a sequence of points $(a_i)_{i \in \omega}$ in $M^{x}$ such that $\lim_{i \to \infty} \operatorname{tp}(a_i/\mathcal{U}) = p$ in $S_{x}(\mathcal{U})$. In this case, we say $p$ is \textbf{sequentially approximated over $M$}. \item $\mu$ is \textbf{sequentially approximated} if there exists $M \prec \mathcal{U}$ and a sequence of points $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. In this case, we say $\mu$ is \textbf{sequentially approximated over $M$}. \end{enumerate} \end{definition} We warn the reader that Definition \ref{SA} is only meaningful in the context of types and measures over large models. Indeed, if $M$ is a countable model and $T$ is a countable theory, then for every $p \in S_{x}(M)$, there exists a sequence of points in $M^{x}$ such that $\lim_{i \to \infty} \operatorname{tp}(a_{i}/M) =p$ in $S_{x}(M)$. The analogous statement also holds for measures. We also emphasize to the reader that there is a real distinction between a type $p$ being sequentially approximated over a model $M$ and its associated Keisler measure $\delta_{p}$ being sequentially approximated over $M$. Proposition \ref{Gabe} gives an example of a type which is not sequentially approximated while its associated Keisler measure is sequentially approximated. However, the other implication holds almost trivially. \begin{observation}\label{forward:easy} If a type $p$ in $S_{x}(\mathcal{U})$ is sequentially approximated over a model $M$, then the associated Keisler measure $\delta_p$ is sequentially approximated over $M$. \end{observation} \begin{proof} If $\lim_{i \to \infty}\operatorname{tp}(a_i/\mathcal{U}) = p$ in $S_{x}(\mathcal{U})$, then $\lim_{i \to \infty} \delta_{a_{i}} = \delta_{p}$ in $\mathfrak{M}_{x}(\mathcal{U})$ since $\delta: S_{x}(\mathcal{U}) \to \mathfrak{M}_{x}(\mathcal{U})$ is a topological embedding. \end{proof} \subsection{Basic properties} We now connect sequentially approximated types and measures to standard model-theoretic properties. For the reader's intuition, sequential approximability (at least in the case of measures) should be thought of as a strong version of finite satisfiability over a small model or a weak version of finite approximability. Sequentially approximated types remain a little more mysterious. \begin{proposition}\label{finitesat} Assume that $p \in S_{x}(\mathcal{U})$ and $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. \begin{enumerate}[($i$)] \item If $p$ and $\mu$ are sequentially approximated over $M$, then $p$ and $\mu$ are finitely satisfiable over $M$. Even more, $p$ and $\mu$ are finitely satisfiable over a countable elementary submodel of $M$. \item If $p$ and $\mu$ are sequentially approximated over $M$, then $p$ and $\mu$ are Borel-definable over $M$. \item If $\mu$ is finitely approximated over $M$, then $\mu$ is sequentially approximated over $M$. 
$($Warning: In general, this fails for types.$)$ \item If $T$ is NIP, then $p$ is sequentially approximated over $M$ if and only if $\delta_{p}$ is sequentially approximated over $M$. \item Assume that $k \subseteq \{1,2,...,n\}$ and let $\pi_{k}:S_{n}(\mathcal{U}) \to S_{k}(\mathcal{U})$ and $\rho_{k}:\mathfrak{M}_{n}(\mathcal{U}) \to \mathfrak{M}_{k}(\mathcal{U})$ be the obvious projection maps. If $p \in S_{n}(\mathcal{U})$ and $p$ is sequentially approximated over $M$, then $\pi_{k}(p)$ is sequentially approximated over $M$. Similarly, if $\mu \in \mathfrak{M}_{n}(\mathcal{U})$ is sequentially approximated over $M$ then so is $\rho_{k}(\mu)$. \end{enumerate} \end{proposition} \begin{proof} We prove the claims. \begin{enumerate}[($i$)] \item The first part of $(i)$ is obvious. For the second part, we only need to choose a submodel containing a sequence which sequentially approximates our type or measure. Since $T$ is countable, we can choose a countable model. \item The proofs for both the type and measure cases are similar, so we prove the measure case. Assume that $(\overline{a}_{i})_{i \in \omega}$ is a sequence of points in $(M^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. By part $(i)$, $\mu$ is finitely satisfiable over $M$ and hence $M$-invariant. So, for any partitioned formula $\varphi(x,y)$ in $\mathcal{L}$, the map $F_{\mu}^{\varphi}:S_{y}(M) \to [0,1]$ is well-defined. By sequential approximability, the sequence of continuous functions $\big(F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big)_{i \in \omega}$ converges pointwise to $F_{\mu}^{\varphi}$. Hence, $F_{\mu}^{\varphi}$ is Baire-1 (and therefore Borel). \item This follows from an encoding argument. Let $(\varphi_{n}(x,y_{n}))_{n \in \omega}$ be an enumeration of the partitioned $\mathcal{L}$-formulas. For each $n \in \mathbb{N}$, consider the partitioned formula $\theta_{n}(x;y_0,...,y_n,z_{*},z_0,...,z_n)$ where $|z_*| = |z_i| = 1$ and \begin{equation*} \theta_{n}(x;\bar{y},\bar{z}) := \bigwedge_{i \leq n}\left( \left( z_* = z_i \wedge \bigwedge_{\substack{j \leq n \\ j \neq i }} z_{j} \neq z_{*} \right) \to \varphi_{i}(x,y_i) \right). \end{equation*} Since $\mu$ is finitely approximated over $M$, for $\epsilon = \frac{1}{n}$, there exists some $\overline{a}_n$ in $(M^{x})^{<\omega}$ such that for every $(\bar{b},\bar{c}) \in \mathcal{U}^{\bar{y}\bar{z}}$, \begin{equation*} |\operatorname{Av}(\overline{a}_n)(\theta_n(x,\bar{b},\bar{c})) - \mu(\theta_n(x,\bar{b},\bar{c}))| < \epsilon. \end{equation*} Notice that $\theta_{n}(x;\bar{y},\bar{z})$ encodes the definable sets which are obtained by the formulas $\varphi_{0}(x,y_0),...,\varphi_{n}(x,y_n)$. In particular, for every $b \in \mathcal{U}^{y_{j}}$ where $j \leq n$, consider the tuple $(\bar{d}_{b},\bar{c}_j) = (d_{0},...,d_{j-1},b,d_{j+1},...,d_n,c_*,c_0,...,c_n)$ where the $d_{i}$'s are arbitrary and $c_* = c_l$ if and only if $l = j$. Then \begin{equation*} |\operatorname{Av}(\overline{a}_{n})(\varphi_{j}(x,b)) - \mu(\varphi_{j}(x,b))| = |\operatorname{Av}(\overline{a}_{n})(\theta_n(x,\bar{d}_{b},\bar{c}_j)) - \mu(\theta_n(x,\bar{d}_{b},\bar{c}_j))|. \end{equation*} So for any $j \leq n$ and $b \in \mathcal{U}^{y_{j}}$, \begin{equation*} |\operatorname{Av}(\overline{a}_n)(\varphi_j(x,b)) - \mu(\varphi_j(x,b))| < \frac{1}{n}. \end{equation*} It is clear that $\lim_{n\to \infty} \operatorname{Av}(\overline{a}_{n}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$.
\item The forward direction is Observation \ref{forward:easy}. We consider the converse. If $\delta_{p}$ is sequentially approximated over $M$ then $\delta_{p}$ is finitely satisfiable over a countable elementary submodel $M_0$ of $M$ by $(i)$ above. Then $p$ is finitely satisfiable over $M_0$ and so by Theorem \ref{sim:conv}, $p$ is sequentially approximated over $M_0$ (and also over $M$). \item Simply consider the approximating sequence restricted to the appropriate coordinates. \qedhere \end{enumerate} \end{proof} \begin{proposition}\label{Mazur} A measure $\mu$ is sequentially approximated and definable over $M$ if and only if $\mu$ is finitely approximated over $M$. \end{proposition} \begin{proof} We first prove the forward direction. The proof is similar to the proof of \cite[Theorem 4.8]{GannNIP}. Fix $\epsilon > 0$. For any partitioned $\mathcal{L}$-formula $\varphi(x,y)$, consider the map $F_{\mu}^{\varphi}:S_{y}(M) \to [0,1]$. Let $(\overline{a}_{i})_{i \in \omega}$ be a sequence of points in $(M^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Observe that each map $F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}:S_{y}(M) \to [0,1]$ is continuous and the sequence $\big(F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big)_{i \in \omega}$ converges pointwise to $F_{\mu}^{\varphi}$. Since $\mu$ is definable, the map $F_{\mu}^{\varphi}$ is continuous. By the Riesz representation theorem and the dominated convergence theorem, we have that $\big(F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi}\big)_{i \in \omega}$ converges weakly to $F_{\mu}^{\varphi}$ in $C(S_{y}(M))$. By a standard application of Mazur's lemma, there exists a sequence of functions $(g_j)_{j \in \omega}$ such that each $g_j$ is a rational convex combination of $\{F_{\operatorname{Av}(\overline{a}_i)}^{\varphi}: i \leq n_{j}\}$ for some natural number $n_{j}$ and the sequence $(g_j)_{j \in \omega}$ converges uniformly to $F_{\mu}^{\varphi}$. Choose $m \in \mathbb{N}$ so that \begin{equation*} \sup_{p \in S_{y}(M)}|F_{\mu}^{\varphi}(p) - g_m(p)| < \epsilon. \end{equation*} By construction, $g_m = F_{\operatorname{Av}(\overline{c})}^{\varphi}$ for some $\overline{c} \in (M^{x})^{< \omega}$. Notice that \begin{equation*} \sup_{b \in \mathcal{U}^{y}}|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{c})(\varphi(x,b))| < \epsilon. \end{equation*} For the converse, $\mu$ is definable over $M$ by $(iii)$ of Fact \ref{KM:imp}. Moreover, $\mu$ is sequentially approximated over $M$ by $(iii)$ of Proposition \ref{finitesat}. \end{proof} We now show that sequentially approximated measures commute with definable measures. It is well-known that in the context of NIP theories, definable measures commute with measures which are finitely satisfiable over a small model (see \cite[Lemma 3.1]{HPS} or \cite[Proposition 7.22]{Sibook}). Recently, it was shown that in general, measures which are finitely satisfiable over a small model (even $\operatorname{dfs}$ measures) do not always commute with definable measures (see \cite[Proposition 7.14]{CGH}). We first present a topological proof (in NIP theories) which shows that measures which are finitely satisfiable over a small model commute with definable measures. We will then modify this proof (by replacing an instance of continuity by the dominated convergence theorem) to show that sequentially approximated measures commute with definable ones in any theory. Recall the following facts.
\begin{fact}\label{cont:meas} Let $\nu \in \mathfrak{M}_{y}(\mathcal{U})$, $N \prec \mathcal{U}$, and $\varphi(x,y)$ be an $\mathcal{L}_{x,y}(N)$ formula. Let $\mathfrak{M}_{x}(\mathcal{U},N)$ denote the collection of measures in $\mathfrak{M}_{x}(\mathcal{U})$ which are finitely satisfiable over $N$. \begin{enumerate}[($i$)] \item If $\nu$ is definable over $N$, then the map from $\mathfrak{M}_{x}(\mathcal{U})$ to $[0,1]$ defined via $\mu \to \nu \otimes \mu(\varphi(x,y))$ is continuous $($\cite[Lemma 5.4]{CGH}$)$. \item $($T is NIP$)$ If $\nu$ is any measure, then the map from $\mathfrak{M}_{x}(\mathcal{U},N)$ to $[0,1]$ defined via $\mu \to \mu \otimes \nu(\varphi(x,y))$ is well-defined and continuous $($\cite[Proposition 6.3]{ChGan}$)$. \end{enumerate} \end{fact} We remark that statement $(ii)$ of Fact \ref{cont:meas} requires NIP for two reasons. First, it is not true in general that measures which are finitely satisfiable over a small model are Borel definable. In NIP theories, this is true ($(vi)$ of Fact \ref{KM:imp}). Secondly, the proof that this map is continuous relies on the existence of a smooth extension of $\nu|_N$. Without NIP, this map need not be continuous. The first proof of the following proposition can be found in \cite{HPS}. \begin{proposition}[T is NIP] Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is finitely satisfiable over a small model and $\nu$ is definable, then $\mu \otimes \nu = \nu \otimes \mu$. \end{proposition} \begin{proof} Fix a formula $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$. Choose $N$ such that $\mu$ is finitely satisfiable over $N$, $\nu$ is definable over $N$, and $N$ contains all the parameters from $\varphi$. Since $\mu$ is finitely satisfiable over $N$, there exists a net of measures $(\operatorname{Av}(\overline{a}_i))_{i \in I}$ such that each $\overline{a}_{i} \in (N^{x})^{< \omega}$ and $\lim_{i \in I} \operatorname{Av}(\overline{a}_i) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$ ($(iv)$ of Fact \ref{Avcls}). By Fact \ref{cont:meas} \begin{align*} \mu \otimes \nu(\varphi(x,y)) = \int_{S_{y}(N)}F_{\mu}^{\varphi} d(\nu|_N) &\overset{(a)}{=}\ \lim_{i \in I} \int_{S_{y}(N)}F_{\operatorname{Av}(\overline{a}_i)}^{\varphi} d(\nu|_N)\\ & \overset{(b)}{=}\ \lim_{i \in I} \int_{S_{x}(N)}F_{\nu}^{\varphi^*} d(\operatorname{Av}(\overline{a}_{i})|_{N})\\ & \overset{(c)}{=}\ \int_{S_{x}(N)}F_{\nu}^{\varphi^*} d(\mu|_N) = \nu \otimes \mu (\varphi(x,y)).\\ \end{align*} Where the equalities $(a)$ and $(c)$ follow from the fact that continuous functions commute with nets. The equality $(b)$ is simple to check and is also justified by statement $(i)$ of Fact \ref{KM:imp2}. \end{proof} \begin{proposition}\label{prop:com} Sequentially approximated and definable measures commute. Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is sequentially approximated and $\nu$ is definable, then $\mu \otimes \nu = \nu \otimes \mu$. \end{proposition} \begin{proof} Fix a formula $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$. Choose $N$ such that $\mu$ is sequentially approximated over $N$, $\nu$ is definable over $N$, and $N$ contains all the parameters from $\varphi$. Let $(\overline{a}_{i})_{i \in \omega}$ be a sequence of points in $(N^{x})^{<\omega}$ such that $\lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i}) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$. Now we consider the following computation. 
\begin{align*} \mu \otimes \nu(\varphi(x,y)) = \int_{S_{y}(N)} F_{\mu}^{\varphi} d(\nu|_N) &\overset{(a)}{=}\ \lim_{i \to \infty}\int_{S_{y}(N)} F_{\operatorname{Av}(\overline{a}_{i})}^{\varphi} d(\nu|_N)\\ & \overset{(b)}{=}\ \lim_{i \to \infty} \int_{S_{x}(N)} F_{\nu}^{\varphi^{*}} d(\operatorname{Av}(\overline{a}_{i})|_N)\\ & \overset{(c)}{=}\ \int_{S_{x}(N)} F_{\nu}^{\varphi^{*}} d(\mu|_N) = \nu \otimes \mu(\varphi(x,y)).\\ \end{align*} Where the equality $(a)$ now holds from the dominated convergence theorem, equality $(c)$ holds from $(i)$ of Fact \ref{cont:meas} and the observation that continuous functions commute with nets, and equality $(b)$ is easy to check (also $(i)$ of Fact \ref{KM:imp2}). \end{proof} \begin{corollary} Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\nu \in \mathfrak{M}_{y}(\mathcal{U})$. If $\mu$ is finitely approximated and $\nu$ is definable, then $\mu \otimes \nu = \nu \otimes \mu$. \end{corollary} \begin{proof} By (iii) of Proposition \ref{finitesat}, $\mu$ is sequentially approximated. Apply Proposition \ref{prop:com}. \end{proof} \subsection{Egorov's theorem} It is interesting to note that sequentially approximated measures are not too far away from finitely approximated measures. In particular, if we fix some measure on the parameter space, any sequentially approximated measure is \textit{almost} finitely approximated. This result is in a similar vein as Khanaki's \textit{almost definable} coheirs in the local setting (\cite{Khanaki1}). A direct application of Egorov's theorem gives our result. \begin{theorem}[Egorov's Theorem] Let $(X,B,\mu)$ be a finite measure space. Assume that $(f_i)_{i \in \omega}$ is a sequence of measurable functions from $X \to \mathbb{R}$ such that $(f_i)_{i \in \omega}$ converges to a function $f$ pointwise. Then for every $\epsilon > 0$ there exists a $Y_{\epsilon} \in B$ such that $f_i|_{Y_{\epsilon}}$ converges to $f|_{Y_{\epsilon}}$ uniformly on $Y_{\epsilon}$ and $\mu(X \backslash Y_{\epsilon}) < \epsilon$. \end{theorem} A proof of Egorov's theorem can be found in \cite[Theorem 3.2.4.1]{Acourse}. Restating this theorem in our context gives the following result. \begin{corollary} Assume that $p$ and $\mu$ are sequentially approximated over $M$. Let $\nu \in \mathfrak{M}_{y}(M)$. Then, for every $\epsilon > 0$, there exists a Borel set $Y_{\epsilon} \subset S_{y}(M)$ such that \begin{enumerate} \item $\nu(Y_{\epsilon}) > 1 - \epsilon$. \item For every $\delta > 0$ and every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, there exists $\overline{a}_{\delta}$ in $(M^{x})^{<\omega}$ such that for every $b \in \mathcal{U}^{y}$ so that $\operatorname{tp}(b/M) \in Y_{\epsilon}$, we have \begin{equation*} |\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a}_{\delta})(\varphi(x,b))| < \delta. \end{equation*} \item For every partitioned $\mathcal{L}$-formula $\varphi(x,y)$, there exists $a$ in $M^{x}$ such that for every $b \in \mathcal{U}^{y}$ so that $\operatorname{tp}(b/M) \in Y_{\epsilon}$, we have \begin{equation*} \varphi(x,b) \in p \iff \models \varphi(a,b). \end{equation*} \end{enumerate} \end{corollary} \section{Generically stable types} Throughout this section, we let $T$ be a countable theory and $\mathcal{U}$ be a monster model of $T$. We show that if a type $p$ is generically stable over a small submodel $M$ of $\mathcal{U}$, then $p$ is sequentially approximated over $M$. Toward proving this result, we actually prove a slightly stronger lemma than what is necessary. 
Namely, let $p$ be a $\operatorname{dfs}$ type and let $M$ be a countable model such that $p$ is $\operatorname{dfs}$ over $M$ (for any $\operatorname{dfs}$ type, such models always exist by (iv) of Fact \ref{gfs:facts}). We show that there exists a special sequence of points in $M$ such that the \textit{limiting behavior} of this sequence \textit{resembles} a Morley sequence in $p$ over $M$. In the case where $p$ is generically stable over $M$, we show that this special sequence converges to $p$. This is enough to show the result since every generically stable type is generically stable over some countable model. We now begin with a discussion of eventually indiscernible sequences, which were introduced in \cite{Invariant}. \begin{definition} Let $(c_i)_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A \subset \mathcal{U}$. We say that $(c_i)_{i \in \omega}$ is an \textbf{eventually indiscernible sequence over $A$} if for any formula $\varphi(x_0,...,x_k)$ in $\mathcal{L}_{(x_i)_{i \in \omega}}(A)$, there exists some natural number $N_{\varphi}$ such that for any indices $n_{k} > ... > n_{0} > N_{\varphi}$ and $m_{k} > ... > m_{0} > N_{\varphi}$, we have that \begin{equation*} \mathcal{U} \models \varphi(c_{n_0},...,c_{n_{k}}) \leftrightarrow \varphi(c_{m_0},...,c_{m_k}). \end{equation*} \end{definition} \begin{fact}\label{eventual} Let $(b_i)_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A \subset \mathcal{U}$ such that $|A| = \aleph_0$. Then there exists a subsequence $(c_i)_{i \in \omega}$ of $(b_i)_{i \in \omega}$ such that $(c_i)_{i \in \omega}$ is eventually indiscernible over $A$. \end{fact} The proof is a standard application of Ramsey's theorem together with a diagonal argument (as mentioned in \cite{Invariant}). We prove a ``continuous'' version of this fact in the next section and the proof is analogous (see Proposition \ref{correct} for details). For any eventually indiscernible sequence $(c_{i})_{i \in \omega}$ over a set of parameters $A$, we can associate to this sequence a unique type in $S_{(x_i)_{i \in \omega}}(A)$. We call this the \textit{eventual Ehrenfeucht-Mostowski type} (or $\operatorname{EEM}$-type) of $(c_{i})_{i \in \omega}$ over $A$. We now give the formal definition. \begin{definition} Let $(b_{i})_{i \in \omega}$ be a sequence of points in $\mathcal{U}^{x}$ and $A \subset \mathcal{U}$. Then the \textbf{eventual Ehrenfeucht-Mostowski type} (or \textbf{EEM-type}) of $(b_i)_{i \in \omega}$ over $A$, which is written as $\operatorname{EEM}((b_i)_{i \in \omega}/A)$, is a subset of $\mathcal{L}_{(x_i)_{i \in \omega}}(A)$ defined as follows: Let $\varphi(x_{i_{0}},...,x_{i_{k}})$ be a formula in $\mathcal{L}_{(x_{i})_{i \in \omega}}(A)$ where the indices are ordered $i_{0} < ... < i_{k}$. Then $\varphi(x_{i_0},...,x_{i_{k}}) \in \operatorname{EEM}((b_{i})_{i \in \omega}/A)$ if and only if there exists an $N_{\varphi}$ such that for any $n_k > ... > n_0 > N_{\varphi}$, we have that $\mathcal{U} \models \varphi(b_{n_0},..., b_{n_k})$. \end{definition} Notice that an $\operatorname{EEM}$-type of a sequence is always indiscernible in the following sense: If we have indices $i_{0},...,i_{k}$ and $j_{0},...,j_{k}$ where $i_{0} < ... < i_{k}$ and $j_{0}<...<j_{k}$, then $\varphi(x_{i_{0}},...,x_{i_{k}})$ is in the $\operatorname{EEM}$-type of $(b_{i})_{i \in \omega}$ over $A$ if and only if $\varphi(x_{j_0},...,x_{j_k})$ is. This follows directly from the definition. We have some basic observations.
\begin{observation} Let $(c_i)_{i \in \omega}$ be an eventually indiscernible sequence over $A$. \begin{enumerate} \item Then $\operatorname{EEM}((c_{i})_{i \in \omega}/A)$ is a complete type in $S_{(x_i)_{i \in \omega}}(A)$. \item If $(c_i)_{i \in \omega}$ is $A$-indiscernible, then $\operatorname{EEM}((c_i)_{i \in \omega}/A) = \operatorname{EM}((c_i)_{i \in \omega}/A)$. \item If $\operatorname{tp}((b_i)_{i \in \omega}/A) = \operatorname{EEM}((c_i)_{i \in \omega}/A)$, then $(b_i)_{i \in \omega}$ is $A$-indiscernible. \end{enumerate} \end{observation} \begin{proof} Clear from the definitions and discussion above. \end{proof} We warn the reader that an eventually indiscernible sequence need not ``realize'' its own $\operatorname{EEM}$-type. Consider the following example: \begin{example} Let $T_{<}$ be the theory of $(\mathbb{R};<)$. Let $\mathcal{U}$ be a monster model of $T_{<}$ and $\mathbb{R} \prec \mathcal{U}$. Then the sequence $(a_i)_{i \in \omega}$ where $a_i = i$ is eventually indiscernible over $\mathbb{R}$ while the sequence $(b_i)_{i \in \omega}$ where $b_i = i(-1)^{i}$ is not. Clearly, $(a_i)_{i \in \omega}$ is not $\mathbb{R}$-indiscernible. Moreover, for each $r \in \mathbb{R}$ and each $i \in \omega$, the formula $x_i > r$ is in $\operatorname{EEM}((a_i)_{i \in \omega}/\mathbb{R})$, while $a_{1} > 2$ clearly does not hold. So if $\operatorname{tp}((c_i)_{i \in \omega}/\mathbb{R}) = \operatorname{EEM}((a_i)_{i \in \omega}/\mathbb{R})$, then $c_i > \mathbb{R}$ for each $i \in \omega$. \end{example} The next two lemmas prove the bulk of this section's main theorem, and their proofs are similar to the proof of Theorem \ref{sim:conv}. The proof strategy for this theorem is the following: If $p$ is in $S_{x}(\mathcal{U})$ and $p$ is $\operatorname{dfs}$, then we can find a countable model $M$ such that $p$ is $\operatorname{dfs}$ over $M$. Let $I$ be a Morley sequence in $p$ over $M$. Using the fact that $p$ is finitely satisfiable over $M$, we can find a sequence of points in $M^{x}$ which converges to $p|_{MI}$ in $S_{x}(MI)$. After moving to an eventually indiscernible subsequence, we show that the $\operatorname{EEM}$-type of this eventually indiscernible sequence is precisely $p^{\omega}|_{M}$. With the stronger assumption that our type $p$ is generically stable (instead of just $\operatorname{dfs}$), we show that this eventually indiscernible subsequence must converge to $p$ in $S_{x}(\mathcal{U})$. \begin{lemma}\label{dfs:lemma} Suppose $p$ is in $S_{x}(\mathcal{U})$ and $p$ is $\operatorname{dfs}$ over $M$ where $|M| = \aleph_0$. Then there exists a sequence $(c_i)_{i \in \omega}$ in $M^{x}$ such that $\operatorname{EEM}((c_i)_{i \in \omega}/M) = p^{\omega}|_{M}$. \end{lemma} \begin{proof} Let $I = (a_i)_{i \in \omega}$ be a Morley sequence in $p$ over $M$. Since $T$, $M$, and $I$ are countable, $\mathcal{L}_{x}(MI)$ is countable. It follows that $p|_{MI}$ is countable and we may enumerate this collection of formulas as $(\varphi_{i}(x))_{i \in \omega}$. Since $p$ is $\operatorname{dfs}$ over $M$, in particular $p$ is finitely satisfiable over $M$. For each natural number $n$, we choose $b_{n}$ in $M^{x}$ such that $\mathcal{U} \models \bigwedge_{j \leq n} \varphi_{j}(b_n)$. By construction, we have that $\lim_{i \to \infty} \operatorname{tp}(b_i/MI) = p|_{MI}$ in $S_{x}(MI)$. By Fact \ref{eventual}, we may choose a subsequence $(c_{i})_{i \in \omega}$ of $(b_{i})_{i \in \omega}$ such that $(c_i)_{i \in \omega}$ is eventually indiscernible over $MI$.
For ease of notation, we write $(c_{i})_{i \in \omega}$ as $J$. We now show that \textit{$\operatorname{EEM}(J/M) = \operatorname{EM}(I/M) = p^{\omega}|_{M}$}. We remind the reader that $\operatorname{EM}(I/M) = p^{\omega}|_{M}$ follows directly from the definition of a Morley sequence. We prove the first equality by induction on the number of free variables occurring in a formula. We begin with the base case. It suffices to show that for every $\varphi(x_0) \in \mathcal{L}_{x_{0}}(M)$, if $\varphi(x_0) \in \operatorname{EM}(I/M)$, then $\varphi(x_0) \in \operatorname{EEM}(J/M)$. Notice that since $\lim_{n\to \infty} \operatorname{tp}(b_n/MI) = p|_{MI}$ and $(c_i)_{i \in \omega}$ is a subsequence of $(b_n)_{n \in \omega}$, we have $\lim_{i\to \infty} \operatorname{tp}(c_i/MI) = p|_{MI}$. This clearly implies the base case. Fix $k$ and suppose that for any formula $\theta(x_0,...,x_k)$ in $\mathcal{L}_{x_{0},...,x_{k}}(M)$, we have that $\theta(x_0,...,x_k) \in \operatorname{EM}(I/M)$ if and only if $\theta(x_0,...,x_k) \in \operatorname{EEM}(J/M)$. Towards a contradiction, we assume that $\neg \theta(x_0,...,x_{k+1}) \in \operatorname{EEM}(J/M)$ and $\theta(x_0,...,x_{k+1}) \in \operatorname{EM}(I/M)$. Since $\neg \theta(\overline{x}) \in \operatorname{EEM}(J/M)$, there exists some natural number $N_{\theta_{1}}$ such that for any $n_{k+1} > ... > n_{0} > N_{\theta_{1}}$, we have that $\mathcal{U}\models \neg \theta(c_{n_{0}},...,c_{n_{k+1}})$. Since $\theta(\overline{x}) \in \operatorname{EM}(I/M)$, we conclude that $\mathcal{U} \models \theta(a_0,...,a_{k+1})$. Since $p$ is $\operatorname{dfs}$ over $M$, $I$ is totally indiscernible over $M$ by Fact \ref{gfs:facts}. Therefore, $\mathcal{U} \models \theta(a_{k+1}, a_{0},...,a_{k})$ and so $\theta(x,a_{0},...,a_{k}) \in p|_{Ma_{0},...,a_{k}}$. Since $\lim_{i \to \infty}\operatorname{tp}(c_i/MI) = p|_{MI}$, there exists some $N_{\theta_2}$ such that for every $n > N_{\theta_{2}}$, we have that $\mathcal{U}\models \theta(c_{n},a_{0},...,a_{k})$. Choose $n_{*} > \max\{N_{\theta_{1}},N_{\theta_{2}}\}$. Then the formula $\theta(c_{n_*},x_{0},...,x_{k}) \in \operatorname{tp}(a_0,...,a_{k}/M)$. By our induction hypothesis, we have that $\theta(c_{n_{*}},\overline{x}) \in \operatorname{EEM}(J/M)$ and so there exists $N_{\theta_{3}}$ such that for any $m_{k}> ... > m_{0} > N_{\theta_{3}}$, we have that $\mathcal{U}\models \theta(c_{n_*}, c_{m_{0}},...,c_{m_{k}})$. Now consider what happens when $m_0 > \max\{N_{\theta_{3}}, n_{*}\}$. Then $m_k > ... > m_{0} > n_{*} > N_{\theta_1}$ and so $\mathcal{U} \models \neg \theta(c_{n_*},c_{m_0},...,c_{m_k})$ by our assumption. However, $m_k > ... > m_{0} > N_{\theta_{3}}$ and therefore $\mathcal{U} \models \theta(c_{n_*},c_{m_0},...,c_{m_{k}})$. This is a contradiction. \end{proof} \begin{lemma}\label{gs:lemma} Suppose $p$ is in $S_{x}(\mathcal{U})$ and $M \prec \mathcal{U}$. Assume that $p$ is generically stable over $M$. If $(c_i)_{i \in \omega}$ is a sequence in $M^{x}$ such that $\operatorname{EEM}((c_i)_{i \in \omega}/M) = p^{\omega}|_{M}$, then $\lim_{i \to \infty} \operatorname{tp}(c_i/\mathcal{U}) = p$. \end{lemma} \begin{proof} Let $p$, $(c_{i})_{i \in \omega}$, and $M$ be as in the statement of the lemma. Let $J = (c_{i})_{i \in \omega}$. We first argue that the sequence of global types $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges and then argue that this sequence converges to $p$.
\textbf{Claim 1:} The sequence $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges to some type in $S_{x}(\mathcal{U})$. It suffices to argue that for any formula $\psi(x) \in \mathcal{L}_{x}(\mathcal{U})$, $\lim_{i \to \infty} \mathbf{1}_{\psi}(c_i)$ exists (recall that $\mathbf{1}_{\psi(x)}$ is the characteristic function of the definable set $\psi(x)$). Assume not. Then we may choose a subsequence $(c_i')_{i \in \omega}$ of $(c_i)_{i \in \omega}$ such that $\mathcal{U} \models \psi(c_{i}') \leftrightarrow \neg \psi(c_{i+1}')$ for every $i \in \omega$. For notational purposes, we also denote $(c'_i)_{i \in \omega}$ as $J'$. It is clear that $(c_{i}')_{i \in \omega}$ is also eventually indiscernible over $M$ and $\operatorname{EEM}((c_i')_{i \in \omega}/M) = \operatorname{EEM}((c_i)_{i \in \omega}/M)$. By using $J'$, one can show that the following type is finitely consistent: \begin{equation*} \Theta_1 = \operatorname{EEM}(J'/M) \cup \bigcup_{\textit{$i$ is even}}\{\psi(x_i) \wedge \neg \psi(x_{i+1})\}. \end{equation*} Let $(d_i)_{i \in \omega}$ realize this type. Then $(d_i)_{i \in \omega}$ is a Morley sequence in $p$ over $M$ because \begin{equation*} \operatorname{EM}((d_i)_{i \in \omega}/M) = \operatorname{EEM}(J'/M) = \operatorname{EEM}(J/M) = p^{\omega}|_{M}. \end{equation*} Then $\mathcal{U} \models \psi(d_{i})$ if and only if $i$ is even. This contradicts generic stability since $(\operatorname{tp}(d_i/\mathcal{U}))_{i \in \omega}$ does not converge to $p$. \textbf{Claim 2:} The sequence $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges to $p$. Again, assume not. By Claim 1, $\lim_{i \to \infty} \operatorname{tp}(c_i/\mathcal{U}) = q$ for some $q \in S_{x}(\mathcal{U})$. By assumption, $q \neq p$ and so there exists a formula $\psi(x)$ such that $\psi(x) \in p$ and $\neg \psi(x) \in q$. Since $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ converges to $q$, there is an $N$ such that for every $n > N$, we have that $\mathcal{U} \models \neg \psi(c_n)$. By a similar argument as in the previous claim, one can show the following type is finitely consistent: \begin{equation*} \Theta_2 = \operatorname{EEM}(J/M) \cup \bigcup_{i \in \omega} \{\neg \psi(x_{i})\}. \end{equation*} Again, we let $(d_i)_{i \in \omega}$ realize this type. Then $(d_i)_{i \in \omega}$ is a Morley sequence in $p$ over $M$ and we have that $(\operatorname{tp}(d_i/\mathcal{U}))_{i \in \omega}$ does not converge to $p$ in $S_{x}(\mathcal{U})$. This again contradicts the definition of generic stability. \end{proof} \begin{theorem}\label{gstheorem} Suppose $p$ is in $S_{x}(\mathcal{U})$ and $p$ is generically stable (over $M$). Then $p$ is sequentially approximated (over $M$). \end{theorem} \begin{proof} If $p$ is generically stable, then $p$ is generically stable over a countable submodel $M_{0}$ contained in $M$ by Fact \ref{gfs:facts}. Then $p$ is $\operatorname{dfs}$ over $M_0$ and so by Lemma \ref{dfs:lemma}, one can choose $(c_i)_{i \in \omega}$ where each $c_i \in M_{0}^{x}$ and $\operatorname{EEM}((c_i)_{i \in \omega}/M_{0}) = p^{\omega}|_{M_0}$. By Lemma \ref{gs:lemma}, $\lim_{i \to \infty}\operatorname{tp}(c_i/\mathcal{U}) = p$. \end{proof} \begin{corollary} Assume that $T'$ is a theory (countable or uncountable) in the language $\mathcal{L'}$, that $\mathcal{U}' \models T'$, and that $M'$ is an elementary submodel of $\mathcal{U}'$. Assume that $p \in S_{x}(\mathcal{U}')$ is generically stable over $M'$.
Then for any countable collection of formulas $\Delta = \{\psi_{i}(x,y_{i})\}_{i \in \omega}$ in $\mathcal{L'}$, there exists a sequence of points $(c_i)_{i \in \omega}$, each in $(M')^{x}$, such that $\lim_{i \to \infty} \operatorname{tp}_{\Delta}(c_i/\mathcal{U}') = p|_{\Delta}$. \end{corollary} \begin{proof} Let $\mathcal{L}$ be a countable sublanguage of $\mathcal{L'}$ containing all the formulas in $\Delta$. The corresponding type $p|_{\mathcal{L}}$ is generically stable over the model $M$ where $M = M'|_{\mathcal{L}}$ (see \cite[Remark 3.3]{CoGan}). Hence we may apply Theorem \ref{gstheorem}. \end{proof} \subsection{Examples and non-examples} We begin this subsection by collecting the known examples of sequentially approximated types. We then go on to give two examples of types which are not sequentially approximated (over any model). \begin{observation} Assume that $p \in S_{x}(\mathcal{U})$ and let $M$ be a small elementary submodel. Then, $p$ is sequentially approximated over $M$ if \begin{enumerate}[($i$)] \item $T$ is stable, and $p$ is invariant over $M$, \item $T$ is NIP, $|M| = \aleph_0$, and $p$ is finitely satisfiable over $M$, or \item $p$ is generically stable over $M$. \end{enumerate} \end{observation} We just proved $(iii)$. Clearly, $(i)$ follows from $(iii)$ (we remark that it also follows from $(ii)$). As noted previously, the proof of $(ii)$ is precisely \cite[Lemma 2.8]{Invariant}. We now exhibit some concrete examples of types which are not sequentially approximated. We begin by describing a type in an NIP theory which is finitely satisfiable over a small model but not sequentially approximated (and its associated Keisler measure is not sequentially approximated either). We then discuss a type whose associated Keisler measure is finitely approximated, but which is itself not sequentially approximated. \begin{proposition}\label{omega} Let $\omega_{1}$ be the first uncountable ordinal, $M = (\omega_{1};<)$ with the usual ordering, and let $T_{<}$ be the theory of $M$ in the language $\{<\}$. Recall that $T_{<}$ is NIP. Let $p \in S_{x}(\omega_{1})$ be the complete type extending $\{\alpha < x: \alpha < \omega_1\}$. Let $\mathcal{U}$ be a monster model of $T_{<}$ such that $M \prec \mathcal{U}$ and let $p_* \in S_{x}(\mathcal{U})$ be the unique global coheir of $p$. Then, $p_{*}$ is not sequentially approximated over any model. \end{proposition} \begin{proof} Assume for the sake of contradiction that $p_{*}$ is sequentially approximated over some model $N$. Then there exists a sequence of points $(b_i)_{i \in \omega}$ in $N$ such that $\lim_{i \to \infty} \operatorname{tp}(b_i/\mathcal{U}) = p_{*}$ in $S_{x}(\mathcal{U})$. Since $p_{*}$ is not realized, we may assume that the points $b_i$ are pairwise distinct, and so there is an infinite subsequence which is either strictly increasing or strictly decreasing; without loss of generality, $(b_i)_{i \in \omega}$ itself has one of these two properties. First assume that $(b_i)_{i \in \omega}$ is strictly increasing. Notice that $b_i < x \in p_{*}$ for each $i \in \omega$ (since $\mathcal{U} \models b_i < b_j$ for every $j > i$ and $\lim_{j \to \infty} \operatorname{tp}(b_j/\mathcal{U}) = p_{*}$). Since $p_{*}$ is a coheir of $p$, $p_*$ is finitely satisfiable over $\omega_1$. So, for each $b_i$ there exists $\alpha$ in $\omega_{1}$ such that $b_{i} < \alpha$. Now, for each $b_i$, we define $\alpha_i := \min \{\alpha \in \omega_1: \mathcal{U} \models b_i<\alpha \}$. Since $\omega_1$ is well-ordered, $\alpha_i$ is well-defined. We let $\beta$ be the supremum (in $\omega_{1}$) of $\{\alpha_{i}: i \in \omega\}$. Then $\mathcal{U} \models b_i < \beta$ for each $i \in \omega$, and so $x < \beta \in p_{*}$; but $\beta < x \in p_{*}$ as well (as $\beta \in \omega_1$), a contradiction. Now we assume that $(b_i)_{i \in \omega}$ is strictly decreasing.
Notice that for each $i \in \omega$, $b_i>x \in p_{*}$. Let $\Theta(x) = \{\alpha < x: \alpha \in \omega_1\} \cup \{x < b_i: i \in \omega\}$. By compactness, choose $c_{\infty}$ in $\mathcal{U}$ satisfying $\Theta(x)$. Since $p_{*}$ is finitely satisfiable over $\omega_1$, we have $c_{\infty} > x \in p_{*}$ (otherwise some $\alpha \in \omega_{1}$ would satisfy $\alpha \geq c_{\infty}$, contradicting the choice of $c_{\infty}$). But since $\mathcal{U} \models b_i > c_{\infty}$ for each $i \in \omega$ and $\lim_{i \to \infty} \operatorname{tp}(b_i/\mathcal{U}) = p_{*}$, we have that $x > c_{\infty} \in p_{*}$, a contradiction. \end{proof} \begin{remark}\label{example:coheir1} The type $p_{*}$ in Proposition \ref{omega} is finitely satisfiable over a small model, but not finitely satisfiable over any countable submodel by Theorem \ref{sim:conv}. \end{remark} \begin{proposition} Let $p_{*}$ be as in Proposition \ref{omega}. Then the associated Keisler measure $\delta_{p_{*}}$ is not sequentially approximated. \end{proposition} \begin{proof} Clear from $(iv)$ of Proposition \ref{finitesat}. \end{proof} \begin{proposition}\label{Gabe} Let $T^{2}_{s}$ be the theory of the random $K_{s}$-free graph in the language $\mathcal{L} = \{E(x,y)\}$. Let $p_{*}$ be the unique global complete type extending the formulas $\{ \neg E(x,b): b \in \mathcal{U}\}$. Then, $\delta_{p_{*}}$ is sequentially approximated (even finitely approximated over any submodel) but $p_{*}$ is not sequentially approximated. Moreover, $T^{2}_{s}$ admits no (non-realized) sequentially approximated types. \end{proposition} \begin{proof} The proof that $\delta_{p_*}$ is finitely approximated can be found in \cite[Theorem 5.8]{CoGan}. By $(iii)$ of Proposition \ref{finitesat}, $\delta_{p_*}$ is sequentially approximated. By $(v)$ of Proposition \ref{finitesat}, it suffices to show that there are no non-realized types in one variable which are sequentially approximated. Let $p$ be any non-realized type in $S_{1}(\mathcal{U})$ and assume that $(b_i)_{i \in \omega}$ is a sequence of points in $\mathcal{U}^{x}$ such that $\lim_{i\to \infty}\operatorname{tp}(b_i/\mathcal{U}) = p$. Since $p$ is non-realized, we may assume that the points in $(b_i)_{i \in \omega}$ are distinct. Then, by Ramsey's theorem, there is a subsequence which is either independent or complete. It cannot be complete, because that would violate $K_{s}$-freeness. Therefore, $(b_i)_{i \in \omega}$ contains an independent subsequence, call it $(c_i)_{i \in \omega}$. By compactness, there exists an $a$ in $\mathcal{U}$ such that $\mathcal{U} \models E(c_i,a)$ if and only if $i$ is even. Then, $(\operatorname{tp}(c_i/\mathcal{U}))_{i \in \omega}$ does not converge in $S_{x}(\mathcal{U})$ and so $(\operatorname{tp}(b_i/\mathcal{U}))_{i \in \omega}$ does not converge in $S_{x}(\mathcal{U})$. \end{proof} \begin{question} We say a global type $p$ in $S_{x}(\mathcal{U})$ is \textbf{sad}\footnote{Credit to James Hanson for the terminology.} if it is both \textbf{s}equentially \textbf{a}pproximated and \textbf{d}efinable. Does there exist a global type $p$ which is sad over a model $M$ but is not generically stable over $M$? It is clear that if $T$ is NIP, then all sad types are generically stable. Therefore an example of such a type must come from \textit{the wild}. \end{question} \section{Sequential approximations of measures in NIP theories} Throughout this section, we assume that $T$ is a countable NIP theory and $\mathcal{U}$ is a monster model of $T$. We show that measures which are finitely satisfiable over a countable model of $T$ are sequentially approximated (Theorem T2). To do this, we introduce the notion of a \textit{smooth sequence}.
These are sequences of global measures which are intended to play the role of a Morley sequence for a measure. Unfortunately, these sequences only exist (a priori) in the NIP context and it is currently not known how to expand this idea to IP theories. At the end of this section, we give a characterization of generic stability using smooth sequences (again, only in the NIP context). To motivate the machinery introduced in this section, we explain why Theorem T2 does not follow directly from some approximation results currently in the literature. One might hope to prove Theorem T2 from Theorem \ref{sim:conv} in tandem with the following fact (see \cite[Proposition 7.11]{Sibook}). \begin{fact}[T is NIP]\label{type:approx} Suppose that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is finitely satisfiable over $M$. Then, for any formula $\varphi(x,y) \in \mathcal{L}$ and every $\epsilon > 0$, there exist types $p_1,...,p_n \in S_{x}(\mathcal{U})$, each finitely satisfiable over $M$, such that \begin{equation*} \sup_{b \in \mathcal{U}^{y}}| \mu(\varphi(x,b)) - \operatorname{Av}(\overline{p})(\varphi(x,b))|< \epsilon, \end{equation*} where $\overline{p} = (p_1,...,p_n)$ and $\operatorname{Av}(\overline{p})(\varphi(x,b)) = \frac{1}{n}|\{1 \leq i \leq n: \varphi(x,b) \in p_i\}|$. \end{fact} If $\mu$ is in $\mathfrak{M}_{x}(\mathcal{U})$ and is finitely satisfiable over a countable model $M$, then one can use Theorem \ref{sim:conv} and Fact \ref{type:approx} together to produce: \begin{enumerate} \item a sequence of global measures $(\operatorname{Av}(\overline{p}_{i}))_{i \in \mathbb{N}}$ such that each $\overline{p}_{i} = (p_{i,1},...,p_{i,k_{i}})$ is a finite tuple of types in $S_{x}(\mathcal{U})$, each finitely satisfiable over $M$, and $\lim_{i \to \infty} \operatorname{Av}(\overline{p}_i) = \mu$ in $\mathfrak{M}_{x}(\mathcal{U})$, \item for each $i \in \mathbb{N}$, a sequence of points $(\overline{a}_{i_j})_{j \in \mathbb{N}}$ each in $(M^{x})^{<\omega}$ so that $\lim_{j \to \infty} \operatorname{Av}(\overline{a}_{i_j})= \operatorname{Av}(\overline{p}_{i})$. \end{enumerate} This construction gives an \textit{array} of points $(\overline{a}_{i_j})_{(i,j) \in \mathbb{N} \times \mathbb{N}}$ in $(M^{x})^{< \omega}$ so that \begin{equation*}\lim_{i \to \infty} \lim_{j \to \infty} \Big(\operatorname{Av}(\overline{a}_{i_j})\Big) = \mu \text{ in $\mathfrak{M}_{x}(\mathcal{U})$}. \end{equation*} A priori, the convergence of an array \textit{does not imply} that there exists a subsequence of that array which converges to the array's limit\footnote{For example, any Baire-2 function which is not Baire-1 can be written as the limit of an array of continuous functions, but cannot be written as the sequential limit of continuous functions.}. A similar situation arises when trying to iterate Theorem \ref{Khanaki}. So, we must work slightly harder. As previously stated, our proof essentially mimics the proof of Theorem \ref{sim:conv} but with Morley sequences replaced by \textit{smooth sequences}. Finally, we remark that if there were an \textit{elementary proof} using an array to show this result, then we would have a moderately simple proof that $\operatorname{dfs}$ measures are finitely approximated in NIP theories. In particular, this proof would bypass the implicit use of randomizations (i.e. $(i)$ of Fact \ref{HPSFact}). We formally begin this section by discussing a ``continuous'' analogue of eventually indiscernible sequences. \subsection{Eventually indiscernible sequences revisited} We fix some notation. Fix distinct tuples of variables $x$ and $x_0,...,x_n$ such that $|x| = |x_i|$ for $i \leq n$.
If $\varphi(x_0,...,x_n)$ is a formula in $\mathcal{L}_{x_0,...,x_n}(\mathcal{U})$ and $\overline{a}_{0},...,\overline{a}_{n}$ is a finite sequence of elements where each $\overline{a}_i \in (\mathcal{U}^{x})^{<\omega}$ and $\overline{a}_{i} = (a_{i,0},...,a_{i,m_{i}})$ for $i \leq n$, then we write $\varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})$ to mean \begin{equation*} \bigotimes_{i=0}^{n}\operatorname{Av}(\overline{a}_i)_{x_{i}}(\varphi(x_0,...,x_n)). \end{equation*} Notice that $\varphi_{c}(\overline{a}_0,...,\overline{a}_n)$ is a real number. We observe that by unpacking the definition of the product measure, our formula can be computed as follows: \begin{equation*} \varphi_{c}(\overline{a}_{0},...,\overline{a}_{n})= \frac{1}{\prod_{i=0}^{n} (m_{i} + 1)} \sum_{j_{0} = 0}^{m_{0}}...\sum_{j_{n} = 0}^{m_{n}}\mathbf{1}_{\varphi}(a_{0,j_0},...,a_{n,j_{n}}). \end{equation*} \begin{definition}\label{convex} Let $(\overline{a}_{i})_{i \in \omega}$ be a sequence of elements in $(\mathcal{U}^{x})^{<\omega}$ and let $A \subset \mathcal{U}$ be a collection of parameters. Then we say that the sequence $(\overline{a}_i)_{i \in \omega}$ is \textbf{eventually indiscernible over $A$} if for any formula $\varphi(x_0,...,x_k)$ in $\mathcal{L}_{(x_i)_{i\in \omega}}(A)$ and any $\epsilon > 0$, there exists $N_{\epsilon,\varphi}$ such that for any $n_{k}>...>n_{0}>N_{\epsilon,\varphi}$ and $m_{k}>...>m_{0}>N_{\epsilon,\varphi}$, \begin{equation*} |\varphi_{c}(\overline{a}_{n_{0}},...,\overline{a}_{n_{k}})-\varphi_{c}(\overline{a}_{m_{0}},...,\overline{a}_{m_{k}})|<\epsilon. \end{equation*} \end{definition} \begin{proposition}\label{correct} Let $(\overline{a}_{i})_{i\in\omega}$ be a sequence of tuples in $(\mathcal{U}^{x})^{< \omega}$. If $A$ is a countable set of parameters, then there exists some subsequence $(\overline{c}_i)_{i \in \omega}$ of $(\overline{a}_{i})_{i \in \omega}$ such that $(\overline{c}_{i})_{i \in \omega}$ is eventually indiscernible over $A$. \end{proposition} \begin{proof} This proof is a standard application of Ramsey's theorem in the ``continuous'' setting. Enumerate all pairs in $\mathcal{L}_{(x_i)_{i \in \omega}}(A) \times \mathbb{N}_{>0}$. Let $(\overline{a}_{i}^{0})_{i\in\omega} :=(\overline{a}_{i})_{i\in\omega}$ and set $B_{0} = \{\overline{a}^{0}_{i}: i \in \omega\}$. Now, assume we have constructed the subsequence $(\overline{a}_{i}^{l})_{i\in\omega}$ and $B_{l}$ (where $B_{l} = \{\overline{a}_{i}^{l}: i \in \omega\}$). We now construct $(\overline{a}_{i}^{l+1})_{i\in\omega}$ and $B_{l+1}$. Assume that $(\varphi(x_{0},...,x_{k}),n)$ is the $(l+1)$-st pair in our enumeration. Then we define the coloring $r_{l+1}:[B_{l}]^{k+1}\to\{0,...,n\}$ via \begin{equation*} r_{l+1}(\{\overline{a}^{l}_{i_{0}},...,\overline{a}^{l}_{i_{k}}\}) = \lfloor n \cdot \varphi_{c}(\overline{a}^{l}_{i_{0}},...,\overline{a}^{l}_{i_{k}}) \rfloor, \end{equation*} where $i_0 < i_1 < ...< i_k$. By Ramsey's theorem, there is an infinite monochromatic subset $B_{l}'$ of $B_{l}$. Let $(\overline{a}_{i}^{l+1})_{i\in\omega}$ be the obvious reindexed subsequence of $(\overline{a}_{i}^{l})_{i\in\omega}$ with elements only from the monochromatic set $B_{l}^{'}$. We let $B_{l + 1} = \{\overline{a}^{l+1}_{i}: i \in \omega\}$. By construction, the diagonal sequence $(\overline{a}_{i}^{i})_{i\in\omega}$ is eventually indiscernible over $A$.
\end{proof} We now present a collection of facts which will help us prove that the associated average measures along eventually indiscernible sequences always converge to a measure in $\mathfrak{M}_{x}(\mathcal{U})$ when the underlying theory is NIP. The first fact is elementary and left to the reader as an exercise. \begin{fact} Assume that $(\mu_{i})_{i \in \omega}$ is a sequence of Keisler measures in $\mathfrak{M}_{x}(\mathcal{U})$. If for every formula $\varphi(x) \in \mathcal{L}_{x}(\mathcal{U})$ the limit $\lim_{i \to \infty} \mu_{i}(\varphi(x))$ exists, then $(\mu_{i})_{i \in \omega}$ converges to a measure in $\mathfrak{M}_{x}(\mathcal{U})$. \end{fact} The next collection of facts can be found in \cite{HPS}. In particular, $(i)$ follows immediately from Lemma 2.10 while $(ii)$ and $(iii)$ are from Corollary 2.14. The proof of Lemma 2.10 is non-trivial and is an interpretation of results in \cite{Ben}. Implicitly, our proof uses the fact that the randomization of an NIP theory is NIP. \begin{fact}[T is NIP]\label{HPSFact} Suppose that $\lambda \in \mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$ where $|x_i| =|x_j|$ for each $i,j < \omega$. The measure $\lambda$ is said to be \textbf{$M$-indiscernible} if for every increasing sequence of indices $i_0,...,i_n$ and any formula $\varphi(x_{i_0},...,x_{i_{n}})$ in $\mathcal{L}_{(x_i)_{i \in \omega}}(M)$, we have that \begin{equation*} \lambda(\varphi(x_{i_0},...,x_{i_n})) = \lambda(\varphi(x_{0},...,x_{n})). \end{equation*} Let $\mu, \nu \in \mathfrak{M}_{x}(\mathcal{U})$ be such that $\mu$ and $\nu$ are invariant over $M$. The following statements are true. \begin{enumerate}[($i$)] \item If $\lambda$ is $M$-indiscernible, then for any formula $\varphi(x,b) \in \mathcal{L}_{x}(\mathcal{U})$, we have that $\lim_{i \to \infty} \lambda(\varphi(x_i,b))$ exists. \item The measures $\mu^{(\omega)}$ and $\nu^{(\omega)}$ are $M$-indiscernible. \item If $\mu^{(\omega)}|_{M} = \nu^{(\omega)}|_{M}$, then $\mu = \nu$. \end{enumerate} \end{fact} We now establish a formal connection between eventually indiscernible sequences of tuples and indiscernible measures. We use this connection to show that the average measures along an eventually indiscernible sequence converge to a measure in $\mathfrak{M}_{x}(\mathcal{U})$. \begin{proposition}\label{converge} Let $(\overline{c}_i)_{i \in \omega}$ be a sequence of points in $(\mathcal{U}^{x})^{<\omega}$. If $(\overline{c}_i)_{i \in \omega}$ is an eventually indiscernible sequence over some model $M$, then the sequence $(\operatorname{Av}(\overline{c}_i))_{i \in \omega}$ converges in $\mathfrak{M}_{x}(\mathcal{U})$. \end{proposition} \begin{proof} Assume not. Then there exists some formula $\psi(x,b)$ in $\mathcal{L}_{x}(\mathcal{U})$, some $\epsilon_{0} >0$, and some subsequence $(\overline{c}_i')_{i \in \omega}$ of $(\overline{c}_{i})_{i \in \omega}$ such that for each natural number $i$, \begin{equation*} |\operatorname{Av}(\overline{c}_i')(\psi(x;b)) - \operatorname{Av}(\overline{c}_{i+1}')(\psi(x;b))| > \epsilon_0. \end{equation*} It is clear that $(\overline{c}_{i}')_{i \in \omega}$ is also eventually indiscernible over $M$. We now aim to contradict $(i)$ of Fact \ref{HPSFact} via (topological) compactness of the space $\mathfrak{M}_{\omega}(\mathcal{U}) := \mathfrak{M}_{(x_{i})_{i \in \omega}}(\mathcal{U})$. For any formula $\varphi(x_{i_{0}},...,x_{i_{k}}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)$, we let $r_{\varphi}$ be the unique real number such that for every $\epsilon > 0$, there exists an $N_{\epsilon,\varphi}$ so that for any $n_k > ... > n_0 > N_{\epsilon,\varphi}$ we have \begin{equation*} | \varphi_{c}(\overline{c}'_{n_0},...,\overline{c}'_{n_k}) - r_{\varphi} | < \epsilon. \end{equation*} Since the sequence $(\overline{c}_{i}')_{i\in \omega}$ is eventually indiscernible over $M$, $r_{\varphi}$ exists for each $\varphi(\overline{x}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)$. Now, for every $\varphi(\overline{x}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)$ and $\epsilon >0$, we define the following family of closed subsets of $\mathfrak{M}_{\omega}(\mathcal{U})$: \begin{equation*} C_{\epsilon,\varphi} = \Big\{ \lambda \in \mathfrak{M}_{\omega}(\mathcal{U}): r_{\varphi} - \epsilon \leq \lambda(\varphi(\overline{x})) \leq r_{\varphi} + \epsilon \Big\}. \end{equation*} We also define another family of sets and argue that they are closed: let \begin{equation*} D_{i} = \Big\{\lambda \in \mathfrak{M}_{\omega}(\mathcal{U}) : |\lambda(\psi(x_i,b)) - \lambda(\psi(x_{i+1},b))| \geq \frac{\epsilon_{0}}{2}\Big\}. \end{equation*} Notice that $D_{i}$ is closed since for every natural number $i$, the evaluation map $E_{i}: \mathfrak{M}_{\omega}(\mathcal{U}) \to [0,1]$ given by $E_{i}(\lambda) = \lambda(\psi(x_i,b))$ is continuous. Indeed, define $F_{i} = E_{i} - E_{i+1}$ and $H_{i} = E_{i+1} - E_{i}$. Then we have $D_{i} = F_{i}^{-1}([\frac{\epsilon_{0}}{2},1]) \cup H_{i}^{-1}([\frac{\epsilon_{0}}{2},1])$ and so $D_{i}$ is a union of two closed sets and therefore closed. Using $(\overline{c}_{i}')_{i \in \omega}$, the collection $\Phi = \{C_{\epsilon,\varphi}: \epsilon > 0, \varphi(\overline{x}) \in \mathcal{L}_{(x_i)_{i \in \omega}}(M)\}\cup\{D_{i}: i \in \omega\}$ has the finite intersection property. Therefore, there exists some $\lambda \in \mathfrak{M}_{\omega}(\mathcal{U})$ in the intersection of all the sets in $\Phi$. Moreover, $\lambda$ is $M$-indiscernible by construction. Since $\lambda$ is in $D_{i}$ for each $i$, its existence contradicts $(i)$ of Fact \ref{HPSFact}. \end{proof} \subsection{Smooth sequences} In this subsection, we define the notion of a smooth sequence and prove the main theorem. If $\mu$ is a global $M$-invariant measure, then a smooth sequence is a collection of models and measures meant to replicate a Morley sequence. The ideology is the following: A Morley sequence in $p$ over $M$ is to the infinite type $p^{\omega}|_{M}$ as a smooth sequence in $\mu$ over $M$ is to the measure $\mu^{(\omega)}|_{M}$. We now provide the formal definition. \begin{definition}Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and assume that $\mu$ is invariant over some small model $M$. Then, a \textbf{smooth sequence in $\mu$ over $M$} is a sequence of pairs of measures and small models, $(\mu_i,N_i)_{i \in \omega}$, such that: \begin{enumerate}[$(i)$] \item $M \prec N_0$, $N_i \prec N_{i +1}$, and each $N_i$ is small. \item $\mu_{i}$ is smooth over $N_i$. \item $\mu_{0}|_M = \mu|_M$ and for $i > 0$, $\mu_{i}|_{N_{i-1}} = \mu|_{N_{i-1}}$. \end{enumerate} Furthermore, we define $\bigotimes_{i=0}^{\omega} \mu_{i} = \bigcup_{n \in \omega} \bigotimes_{i=0}^{n}\mu_i$, which is an element of $\mathfrak{M}_{(x_i)_{i \in \omega}}(\mathcal{U})$. We let $N_{\omega} = \bigcup_{i \in \omega} N_{i}$. Notice that for each $i \in \omega$, the measure $\mu_{i}$ is smooth over $N_{\omega}$.
\end{definition} \begin{proposition}\label{existence} If $T$ is a countable NIP theory, $\mu \in \mathfrak{M}_{x}(\mathcal{U})$, and $\mu$ is invariant over $M$ where $|M|=\aleph_0$, then there exists a smooth sequence $(\mu_{i},N_i)_{i\in\omega}$ in $\mu$ over $M$ such that each $N_{i}$ is countable. \end{proposition} \begin{proof} This follows directly from Proposition \ref{m:countable}. \end{proof} \begin{proposition}[T is NIP]\label{smoothinv} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and $\mu$ is $M$-invariant. Let $(\mu_i,N_i)_{i \in \omega}$ be a smooth sequence in $\mu$ over $M$. Then, $\bigotimes_{i=0}^{\omega} \mu_{i} |_{M} = \mu^{(\omega)}|_{M}$. Hence, $\bigotimes_{i=0}^{\omega} \mu_i$ is $M$-indiscernible. \end{proposition} \begin{proof} We prove this by induction on formulas in $\mathcal{L}_{(x_i)_{i \in \omega}}(M)$. For our base case, it is true by construction that $\mu_{0}|_{M} = \mu|_{M}$. For our induction hypothesis, we assume that $\mu^{(k-1)}|_{M} = \bigotimes_{i=0}^{k-1} \mu_{i}|_{M}$. For ease of notation, we set $\lambda = \bigotimes_{i=0}^{k-1} \mu_{i}$ and show the induction step: Let $\varphi(x_0,...,x_k)$ be any formula in $\mathcal{L}_{x_0,...,x_k}(M)$. Since the product of smooth measures is smooth (by $(iii)$ of Fact \ref{KM:imp2}), we have that $\lambda$ is smooth over $N_{k-1}$. In particular, $\lambda$ is invariant over $N_{k-1}$. We let $\overline{x} = (x_0,...,x_{k-1})$ and $\theta(x_{k};\overline{x}) = \varphi(x_{0},...,x_{k})$. We consider the following computation followed by a list of justifications. \begin{equation*} \mu_{k} \otimes \lambda (\varphi(x_0,...,x_{k})) = \int_{S_{\overline{x}}(N_{k})} F_{\mu_{k}}^{\theta} d(\lambda|_{N_{k}}) \overset{(a)}{=} \int_{S_{x_{k}}(N_{k})} F_{\lambda}^{\theta^*}d(\mu_{k}|_{N_{k}}) \end{equation*} \begin{equation*} \overset{(b)}{=} \int_{S_{x_{k}}(N_{k-1})}F_{\lambda}^{\theta^{*}}d(\mu_{k}|_{N_{k-1}}) \overset{(c)}{=} \int_{S_{x_{k}}(N_{k-1})}F_{\lambda}^{\theta^{*}}d(\mu|_{N_{k-1}}) \overset{(a)}{=}\int_{S_{\overline{x}}(N_{k-1})}F_{\mu}^{\theta}d(\lambda|_{N_{k-1}}) \end{equation*} \begin{equation*} \overset{(d)}{=} \int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\lambda|_M) \overset{(e)}{=} \int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\mu^{(k-1)}|_M) = \mu \otimes \mu^{(k-1)} (\varphi(x_0,...,x_{k})). \end{equation*} We provide the following justifications: \begin{enumerate}[($a$)] \item Smooth measures commute with invariant measures. \item Changing the space of integration, since $\lambda$ is invariant over $N_{k-1}$. \item By construction of smooth sequences, we have that $\mu_{k}|_{N_{k - 1}} = \mu|_{N_{k -1}}$. \item Changing the space of integration, since $\mu$ is invariant over $M$. \item By our induction hypothesis. \qedhere \end{enumerate} \end{proof} We now begin the proof of our main theorem. Again, the proof is similar to both the generically stable case in the previous section and even more so to the proof of Lemma 2.8 in \cite{Invariant}. Here, the major difference is that we replace the Morley sequence in that proof with a countable model, $N_{\omega}$, which ``contains'' a smooth sequence in $\mu$ over $M$. Then we find a sequence of elements in $(M^{x})^{< \omega}$ such that the associated average measures converge to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. After choosing an eventually indiscernible subsequence, we know from our NIP assumption that this new sequence converges to a global measure $\nu$ in $\mathfrak{M}_{x}(\mathcal{U})$.
Finally, we demonstrate that $\nu^{(\omega)}|_{M} = \mu^{(\omega)}|_{M}$, which completes the proof. \begin{theorem}[$T$ is NIP] Let $\mu$ be finitely satisfiable over a countable model $M$. Then there exists a sequence $(\overline{a}_i)_{i \in \omega}$ of elements, each in $(M^{x})^{<\omega}$, such that for any $\theta(x) \in \mathcal{L}_{x}(\mathcal{U})$, we have that \begin{equation*} \lim_{i \to \infty} \operatorname{Av}(\overline{a}_{i})(\theta(x)) = \mu(\theta(x)). \end{equation*} \end{theorem} \begin{proof} Choose a smooth sequence $(\mu_{i},N_i)_{i \in \omega}$ in $\mu$ over $M$. By Proposition \ref{existence} we may choose this sequence so that for each $i \in \omega$, $N_i$ is countable. In particular, this implies that $N_{\omega}$ is a countable model. We begin by constructing a sequence of elements $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{< \omega}$ such that $(\operatorname{Av}(\overline{a}_{i})|_{N_{\omega}})_{i \in \omega}$ converges to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. Since $N_{\omega}$ is countable, we let $(\theta_{i}(x))_{i \in \omega}$ be an enumeration of the formulas in $\mathcal{L}_{x}(N_{\omega})$. Since $\mu$ is finitely satisfiable over $M$, for each $k \geq 1$ we can find $\overline{a}_{k} \in (M^{x})^{<\omega}$ such that for any $j \leq k$, we have that \begin{equation*} |\mu(\theta_{j}(x)) - \operatorname{Av}(\overline{a}_{k})(\theta_{j}(x))| < \frac{1}{k}. \end{equation*} By construction, it is clear that the sequence $(\operatorname{Av}(\overline{a}_i)|_{N_{\omega}})_{i\in \omega}$ converges to $\mu|_{N_{\omega}}$ in $\mathfrak{M}_{x}(N_{\omega})$. Now, we let $(\overline{c}_i)_{i \in \omega}$ be a subsequence of $(\overline{a}_i)_{i \in \omega}$ so that $(\overline{c}_i)_{i \in \omega}$ is eventually indiscernible over $N_{\omega}$. Then the sequence $(\operatorname{Av}(\overline{c}_i))_{i \in \omega}$ converges in $\mathfrak{M}_{x}(\mathcal{U})$ by Proposition \ref{converge}. Assume that $(\operatorname{Av}(\overline{c}_{i}))_{i \in \omega}$ converges to some measure $\nu \in \mathfrak{M}_{x}(\mathcal{U})$. Hence, $\nu$ is finitely satisfiable over $M$ by $(i)$ of Proposition \ref{finitesat} and therefore $\nu$ is invariant over $M$. We show that $\nu^{(\omega)}|_{M} = \mu^{(\omega)}|_{M}$. This will conclude the proof by $(iii)$ of Fact \ref{HPSFact}. Since $(\overline{c}_{i})_{i \in \omega}$ is a subsequence of $(\overline{a}_{i})_{i \in \omega}$, it follows that $\nu|_{N_{\omega}} = \mu|_{N_{\omega}}$ and therefore $\nu|_{M} = \mu|_{M}$. We now proceed by induction. Assume that $\nu^{(k-1)}|_{M} = \mu^{(k-1)}|_{M}$. Fix $\varphi(x_0,...,x_{k})$ in $\mathcal{L}_{x_0,...,x_k}(M)$. For ease of notation, set $\lambda = \bigotimes_{i=0}^{k-1} \mu_{i}$. We recall that $\lambda$ is smooth over $N_{\omega}$ (see Fact \ref{KM:imp2}). By Proposition \ref{smoothinv}, $\mu^{(k-1)}|_{M} = \lambda|_{M}$. We let $\overline{x} = (x_0,...,x_{k-1})$ and let $\theta(x_{k};\overline{x}) = \varphi(x_0,...,x_k)$. We now consider the critical computation followed by a short list of justifications.
\begin{equation*} \nu^{(k)}(\varphi(x_0,...,x_{k})) = \int_{S_{\overline{x}}(M)} F_{\nu}^{\theta} d(\nu^{(k-1)}|_{M}) \overset{(a)}{=} \int_{S_{\overline{x}}(M)} F_{\nu}^{\theta} d(\mu^{(k-1)}|_M) \end{equation*} \begin{equation*} \overset{(b)}{=} \int_{S_{\overline{x}}(M)} F_{\nu}^{\theta}d(\lambda|_{M}) \overset{(c)}{=} \int_{S_{\overline{x}}(N_{\omega})} F_{\nu}^{\theta}d(\lambda|_{N_{\omega}}) \overset{(d)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\lambda}^{\theta^*}d(\nu|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(e)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\lambda}^{\theta^*}d(\mu|_{N_\omega}) \overset{(d)}{=} \int_{S_{\overline{x}}(N_{\omega})} F_{\mu}^{\theta} d(\lambda|_{N_{\omega}}) \overset{(c)}{=} \int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\lambda|_{M}) \end{equation*} \begin{equation*} \overset{(b)}{=}\int_{S_{\overline{x}}(M)} F_{\mu}^{\theta} d(\mu^{(k-1)}|_{M}) = \mu^{(k)}(\varphi(x_0,...,x_{k})). \end{equation*} We provide the following justifications: \begin{enumerate}[(a)] \item Induction hypothesis. \item $\mu^{(k-1)}|_M = \lambda|_M$. \item Changing the space of integration. \item Smooth measures commute with invariant measures. \item $\nu|_{N_{\omega}} = \mu|_{N_{\omega}}$. \qedhere \end{enumerate} \end{proof} We now observe that we have another proof of the theorem that global measures in NIP theories which are definable and finitely satisfiable are also finitely approximated. \begin{corollary} If $T'$ is a countable or uncountable NIP theory and $\mu$ is $\operatorname{dfs}$ over $M$, then $\mu$ is finitely approximated over $M$. \end{corollary} \begin{proof} After restricting to a countable language, we still have a $\operatorname{dfs}$ measure (by \cite[Proposition 2.9]{CoGan}). By Proposition \ref{m:countable}, $\mu$ restricted to this language is $\operatorname{dfs}$ over a countable model, $M_0$. By the previous result, $\mu$ is sequentially approximated over $M_0$. Since $\mu$ is also definable, an application of Proposition \ref{Mazur} yields the result. \end{proof} \begin{observation} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$ and let $M$ be a small elementary submodel. Then, $\mu$ is sequentially approximated over $M$ if \begin{enumerate} \item $T$ is stable, and $\mu$ is invariant over $M$, \item $T$ is NIP, $|M| = \aleph_0$, and $\mu$ is finitely satisfiable over $M$, or \item $\mu$ is finitely approximated over $M$. \end{enumerate} \end{observation} Finally, one may ask what happens in the local context. We remark that there exist two proofs of a local version of Theorem T2, both of which rely on an important result of Bourgain, Fremlin, and Talagrand whose connection to model theory is (by now) well-known (e.g. \cite{IBFT,SimonBFT, Khanaki1,GannNIP}). Chronologically, the first proof of the following theorem is implicit in the work of Khanaki (see \cite[Remark 3.21, Theorem 3.26]{Khanaki1}), through the observation that measures are types over models of the randomization in continuous model theory together with \cite[Proposition 1.1]{Ben2}. \begin{theorem}\label{Khanaki}Suppose $\mu$ is a Keisler measure in $\mathfrak{M}_{\varphi}(\mathcal{U})$, $\mu$ is finitely satisfiable over $M$ where $|M| = \aleph_0$, and $\varphi(x,y)$ is an NIP formula. Then there exists a sequence of points $(\overline{a}_{i})_{i \in \omega}$ in $(M^{x})^{< \omega}$ such that for each $b \in \mathcal{U}^{y}$, \begin{equation*}\lim_{i \to \infty} \operatorname{Av}(\overline{a}_i)(\varphi(x,b)) = \mu(\varphi(x,b)).
\end{equation*} \end{theorem} \noindent There is another, later proof for the case of Keisler measures directly via the VC theorem (see \cite[Lemma 4.7]{GannNIP}). \subsection{Smooth sequences and generically stable measures in NIP theories} We now give an equivalent characterization of generically stable measures in NIP theories. We invite the reader to review the definition of a generically stable type prior to reading this section. Recall the following theorem due to Hrushovski, Pillay, and Simon \cite[Theorem 3.2]{HPS}. \begin{theorem}[T is NIP]\label{genstab:equiv} Assume that $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. Then the following are equivalent. \begin{enumerate}[($i$)] \item $\mu$ is dfs. \item $\mu$ is finitely approximated. \item $\mu$ is fim (see \cite[Definition 2.7]{HPS}). \item $\mu$ is invariant and $\mu_{x} \otimes \mu_{y} = \mu_{y} \otimes \mu_{x}$. \end{enumerate} Moreover, a Keisler measure (in an NIP theory) is called \textbf{generically stable} if it satisfies any/all of $(i) - (iv)$. \end{theorem} We will now show that smooth sequences can also give a characterization of generically stable measures in NIP theories. \begin{lemma}[T is NIP] Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. Suppose that $\mu$ is generically stable over $M$. For any smooth sequence $(\mu_i,N_i)_{i \in \omega}$ in $\mu$ over $M$, we have that $\lim_{i \to \infty} \mu_i = \mu$ in $\mathfrak{M}_x(\mathcal{U})$. \end{lemma} \begin{proof} Since $(\mu_i,N_i)_{i \in \omega}$ is a smooth sequence in $\mu$ over $M$, the measure $\bigotimes_{i=0}^{\omega} \mu_i$ is $M$-indiscernible by Proposition \ref{smoothinv}. By $(i)$ of Fact \ref{HPSFact}, we know that $\lim_{i \to \infty} \mu_{i} = \nu$ for some $\nu \in \mathfrak{M}_{x}(\mathcal{U})$. Since each $\mu_i$ is finitely satisfiable over $N_i$, it follows that $\nu$ is finitely satisfiable over $N_{\omega}$. By $(iii)$ of Fact \ref{HPSFact}, it is enough to show that $\nu^{(\omega)}|_{N_{\omega}} = \mu^{(\omega)}|_{N_{\omega}}$. The base case is trivial. Assume that $\nu^{(k-1)}|_{N_{\omega}} = \mu^{(k-1)}|_{N_{\omega}}$. Fix $\varphi(x_0,...,x_k) \in \mathcal{L}_{x_0,...,x_k}(N_{\omega})$ and $\epsilon > 0$. Let $\overline{x} = (x_0,...,x_{k-1})$ and $\theta(x_k;\overline{x}) = \varphi(x_0,...,x_k)$. Since $\mu$ is generically stable over $M$, $\mu^{(k-1)}$ is generically stable over $M$ ($(v)$ of Fact \ref{KM:imp2}) and so also definable over $N_{\omega}$. Therefore by $(v)$ of Fact \ref{KM:imp}, there exist formulas $\psi_1(x_{k}),...,\psi_{n}(x_{k}) \in \mathcal{L}_{x_{k}}({N_{\omega}})$ and real numbers $r_1,...,r_n \in [0,1]$ so that \begin{equation*} \sup_{q \in S_{x_{k}}(N_{\omega})} | F_{\mu^{(k-1)}}^{\theta^{*}}(q) - \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_i(x_k)}(q)| < \epsilon. \end{equation*} Consider the following sequence of equations followed by a short list of justifications.
\begin{equation*} \nu^{(k)}(\varphi(x_0,...,x_{k})) = \int_{S_{\bar{x}}(N_{\omega})} F_{\nu}^{\theta} d( \nu^{(k-1)}|_{N_{\omega}}) \overset{(a)}{=} \int_{S_{\bar{x}}(N_{\omega})} F_{\nu}^{\theta} d(\mu^{(k-1)}|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(b)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\mu^{(k-1)}}^{\theta^{*}} d(\nu|_{N_{\omega}}) \approx_{\epsilon} \int_{S_{x_{k}}(N_{\omega})} \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_{i}(x_{k})} d(\nu|_{N_{\omega}}) \end{equation*} \begin{equation*} = \sum_{i=1}^{n} r_{i} \nu(\psi_{i}(x_{k})) \overset{(c)}{=} \sum_{i=1}^{n} r_{i} \mu(\psi_{i}(x_{k})) = \int_{S_{x_{k}}(N_{\omega})} \sum_{i=1}^{n} r_i \mathbf{1}_{\psi_{i}(x_{k})} d(\mu|_{N_{\omega}}) \end{equation*} \begin{equation*} \approx_{\epsilon} \int_{S_{x_{k}}(N_{\omega})} F_{\mu^{(k-1)}}^{\theta^{*}} d(\mu|_{N_{\omega}}) \overset{(b)}{=} \int_{S_{x_{k}}(N_{\omega})} F_{\mu}^{\theta} d(\mu^{(k-1)}|_{N_{\omega}}) = \mu^{(k)}(\varphi(x_0,...,x_{k})). \end{equation*} \begin{enumerate}[(a)] \item Induction hypothesis. \item (T is NIP) Generically stable measures commute with invariant measures (see $(b)$ of Fact \ref{KM:imp2}). \item Base case. \end{enumerate} As $\epsilon$ was arbitrary, this proves the result. \end{proof} \begin{lemma}[T is NIP] Assume that $\mu$ is $M$-invariant. If for every smooth sequence $(\mu_{i},N_i)_{i \in \mathbb{N}}$ in $\mu$ over $M$, we have that $\lim_{i \to \infty} \mu_{i} = \mu$, then $\mu$ is generically stable over $M$. \end{lemma} \begin{proof} Since $T$ is NIP, all invariant measures are Borel definable. By Theorem \ref{genstab:equiv}, it suffices to show that $\mu$ commutes with itself, i.e. $\mu_x \otimes \mu_y = \mu_y \otimes \mu_x$. Fix $\varphi(x,y) \in \mathcal{L}_{x,y}(\mathcal{U})$. Let $M_1$ be a small model such that $ M \prec M_1$ and $M_1$ contains all the parameters from $\varphi(x,y)$. We choose a smooth sequence $(\mu_{i,x}; N_i)_{i \in \omega}$ in $\mu_{x}$ over $M_1$ and let $N_\omega = \bigcup_{i \in \omega} N_i$. By construction, the sequence $(\mu_{i,x},N_i)_{i \in \omega}$ is a smooth sequence in $\mu_{x}$ over $M$. Consider the following computation. \begin{equation*} \mu_{x}\otimes\mu_{y}(\varphi(x,y))= \int_{S_{y}(M_{1})}F_{\mu_{x}}^{\varphi}d(\mu_{y}|_{M_{1}}) \overset{(a)}{=} \int_{S_{y}(N_{\omega})}F_{\mu_{x}}^{\varphi}d(\mu_{y}|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(b)}{=}\lim_{i\to\infty}\int_{S_{y}(N_{\omega})}F_{\mu_{i,x}}^{\varphi}d(\mu_{y}|_{N_{\omega}}) \overset{(c)}{=} \lim_{i\to\infty}\int_{S_{x}(N_{\omega})}F_{\mu_{y}}^{\varphi^{*}}d(\mu_{i,x}|_{N_{\omega}}) \end{equation*} \begin{equation*} \overset{(d)}{=} \lim_{i \to \infty} \int_{S_{x}(M_{1})} F_{\mu_y}^{\varphi^{*}} d(\mu_{i,x}|_{M_1}) \overset{(e)}{=} \lim_{i \to \infty} \int_{S_{x}(M_1)} F_{\mu_{y}}^{\varphi^{*}} d(\mu_{x}|_{M_1}) \end{equation*} \begin{equation*} = \int_{S_{x}(M_1)} F_{\mu_{y}}^{\varphi^{*}} d(\mu_{x}|_{M_1}) = \mu_{y} \otimes \mu_{x} (\varphi(x,y)). \end{equation*} \noindent We provide a list of the following justifications: \begin{enumerate}[$(a)$] \item Changing the space of integration. \item Dominated convergence theorem. \item Smooth measures commute with Borel definable measures. \item Since $\mu_{y}$ is $M_1$ invariant. \item Since $\mu_{i,x}|_{M_{1}} = \mu_{x}|_{M_1}$ for any $i \in \omega$. \qedhere \end{enumerate} \end{proof} \begin{theorem}[T is NIP] Let $\mu \in \mathfrak{M}_{x}(\mathcal{U})$. Then the following are equivalent: \begin{enumerate} \item $\mu$ is generically stable over $M$. 
\item For any smooth sequence $(\mu_i,N_i)_{i \in \omega}$ in $\mu$ over $M$, \begin{equation*} \lim_{i \to \infty} \mu_i = \mu \text{ in $\mathfrak{M}_{x}(\mathcal{U})$.} \end{equation*} \end{enumerate} \end{theorem} \begin{proof} Follows directly from the previous two lemmas. \end{proof} \section{Local measures revisited} We generalize the main theorem of \cite{GannNIP}. Fix a partitioned NIP formula $\varphi(x,y)$ and let $\mu$ be a $\varphi$-measure. In \cite{GannNIP}, we proved two main theorems. We showed that if $\varphi(x,y)$ is an NIP formula and $\mu$ is $\varphi$-definable and finitely satisfiable over a \textbf{countable} model $M$, then $\mu$ is $\varphi$-finitely approximated. We then proved that if $\mu$ is definable and finitely satisfiable over any small model $M$, then $\mu$ is finitely approximated in $M$, by reducing to the previous theorem. But this was somewhat unsatisfactory, and the following question was left open: if $\mu$ is $\varphi$-definable and finitely satisfiable over a \textbf{small} model, then is $\mu$ $\varphi$-finitely approximated? We give a positive answer to this question by modifying one of the important technical lemmas in the proof. Let us first recall some definitions. \begin{definition}\label{local} Fix $\mathcal{U}$ and a formula $\varphi(x,y)$ in $\mathcal{L}(\mathcal{U})$. \begin{enumerate} \item $\mathcal{L}_{\varphi}(\mathcal{U})$ denotes the Boolean algebra of definable sets of $\mathcal{U}^{x}$ generated by the collection $\{\varphi(x,b): b \in \mathcal{U}^{y}\}$. \item A $\varphi$-measure is a finitely additive measure on the Boolean algebra $\mathcal{L}_{\varphi}(\mathcal{U})$. \item The collection of all $\varphi$-measures is denoted $\mathfrak{M}_{\varphi}(\mathcal{U})$. \item Let $M \prec \mathcal{U}$ and assume that $M$ contains all the parameters from $\varphi(x,y)$. For any $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$, we say that $\mu$ is $(M,\varphi)$-invariant if for any $b,c \in \mathcal{U}^{y}$ such that $\operatorname{tp}(b/M) = \operatorname{tp}(c/M)$, we have that $\mu(\varphi(x,b)) = \mu(\varphi(x,c))$. \item Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. If $\mu$ is $(M,\varphi)$-invariant, then we can define the fiber map $F_{\mu,M}^{\varphi}: S_{y}(M) \to [0,1]$ via $F_{\mu,M}^{\varphi}(q) = \mu(\varphi(x,b))$ where $b \models q$. When $M$ is clear from context, we write $F_{\mu,M}^{\varphi}$ simply as $F_{\mu}^{\varphi}$. \item Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. Then $\mu$ is said to be $\varphi$-definable (over $M$) if the map $F_{\mu,M}^{\varphi}: S_{y}(M) \to [0,1]$ is continuous. \item Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. Then $\mu$ is said to be definable (over $M$) if for any formula $\theta(x,\overline{y})$ in the algebra generated by $\{\varphi(x,y_i): i \in \mathbb{N}\}$, $\mu$ is $(M,\theta)$-invariant and the map $F_{\mu,M}^{\theta}: S_{\overline{y}}(M) \to [0,1]$ is continuous. \item For any $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$, $\mu$ is said to be finitely satisfiable in $M$ if for every $\theta(x) \in \mathcal{L}_{\varphi}(\mathcal{U})$ such that $\mu(\theta(x)) > 0$, there exists some $a \in M^{x}$ so that $\mathcal{U} \models \theta(a)$. \item For each $a \in M^{x}$ we let $F_{a}^{\varphi}: S_{y}(M) \to [0,1]$ via $F_{a}^{\varphi} = \mathbf{1}_{\varphi(a,y)}$. We denote the collection of such functions as $\mathbb{F}_{M}$. We let $\operatorname{conv}(\mathbb{F}_M)$ be the collection of convex combinations of elements in $\mathbb{F}_{M}$.
We let $F = [0,1]^{S_{y}(M)}$, endowed with the Tychonoff topology. If $A \subset F$, we let $\operatorname{cl}(A)$ denote its closure in this space, so the set $\operatorname{cl}(\operatorname{conv}(A))$ is well-defined. \end{enumerate} \end{definition} \noindent Recall the following facts about $\varphi$-measures which can be found in \cite{GannNIP}. \begin{fact}\label{local:facts} Let $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$ and $M \prec \mathcal{U}$. \begin{enumerate}[$(i)$] \item If $\mu$ is finitely satisfiable or $\varphi$-definable over $M$ then $\mu$ is $(M,\varphi)$-invariant. \item If $\mu$ is $\varphi$-definable over $M$ then $\mu$ is $(M_{0},\varphi)$-invariant for some $M_0 \prec M$ such that $|M_0| = \aleph_0$. \item If $\mu$ is finitely satisfiable over $M$ then $F_{\mu,M}^{\varphi}$ is in $\operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M}))$. \item If $|M| = \aleph_0$ and $\varphi(x,y)$ is NIP, there exists a sequence of elements $(g_i)_{i \in \omega}$ with each $g_i \in \operatorname{conv}(\mathbb{F}_M)$ so that $\lim_{i \to \infty} g_i = F_{\mu,M}^{\varphi}$. \end{enumerate} \end{fact} \noindent The following lemma is essentially the \textit{missing lemma} from \cite{GannNIP}. The missed observation is that one can consider finitely many parameters at once (instead of a single parameter). \begin{lemma}\label{meas:lemma} Suppose that $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$ and $\mu$ is finitely satisfiable in a small submodel $N$ and $(M,\varphi)$-invariant. Then $F_{\mu,M}^{\varphi} \in \operatorname{cl}(\operatorname{conv}(\mathbb{F}_M))$. \end{lemma} \begin{proof} The proof is similar to the proof for types \cite[Lemma 2.18]{Sibook} as well as the proof for measures \cite[Proposition 4.13]{GannNIP} (which has both a stronger assumption and a stronger conclusion). It suffices to show that for any finite collection of types $p_1,...,p_n \in S_{y}(M)$ and $\epsilon > 0$ there exists $\overline{a} \in (M^{x})^{< \omega}$ such that $F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(p_i) \approx_{\epsilon} F_{\mu,M}^{\varphi}(p_i)$ for each $i \leq n$. Fix $p_1,...,p_n \in S_{y}(M)$ and $\epsilon >0$. Choose $b_i \models p_i$ for $i \leq n$. Let $q = \operatorname{tp}(N/M) \in S_{|N|}(M)$. Let $\hat{q} \in S_{|N|}(\mathcal{U})$ such that $\hat{q} \supset q$ and $\hat{q}$ is finitely satisfiable in $M$, i.e. $\hat{q}$ is a global coheir of $q$. Let $N_{1} \models \hat{q}|_{Mb_1,...,b_n}$. By compactness, there exist elements $b_1',...,b_n' \in \mathcal{U}^{y}$ such that $\operatorname{tp}(N_1 b_1,...,b_n/M) = \operatorname{tp}(Nb_1',...,b_n'/M)$. Since $\mu$ is $(M,\varphi)$-invariant, we have that \begin{equation*} F_{\mu,M}^{\varphi}(p_i) = \mu(\varphi(x,b_i)) = \mu(\varphi(x,b'_i)), \end{equation*} for each $i \leq n$. Since $\mu$ is finitely satisfiable in $N$, there exist some $m$ and $\overline{c} \in (N^{x})^{m}$ such that $\operatorname{Av}(\overline{c})(\varphi(x,b'_i)) \approx_{\epsilon} \mu(\varphi(x,b_i'))$ for $i \leq n$. Let $B_i =\{j \leq m: \models \varphi(c_j,b'_i)\}$. Now consider the formula \begin{equation*} \theta(x_1,...,x_m,y_1,...,y_n) = \bigwedge_{i \leq n} \Big( \bigwedge_{j\in B_i} \varphi(x_{j},y_{i}) \wedge \bigwedge_{j \not \in B_i} \neg \varphi(x_{j},y_{i}) \Big). \end{equation*} By construction $\theta(\overline{x},\overline{y}) \in \operatorname{tp}(\overline{c},\overline{b'}/M)$ and so for an appropriate choice of indices, $\theta(\overline{x},\overline{y}) \in \operatorname{tp}(Nb_1',...,b_n'/M)$.
Hence $\theta(\overline{x},\overline{y}) \in \operatorname{tp}(N_1b_1,...,b_n/M)$ and so $\theta(\overline{x},\overline{b}) \in \operatorname{tp}(N_1/Mb_1,...,b_n) \subset \hat{q}$. Since $\hat{q}$ is finitely satisfiable in $M$, there exists $\overline{a} \in (M^{x})^{m}$ such that $\models \theta(\bar{a},\bar{b})$. By construction, we have that for any $i \leq n$, \begin{equation*} F_{\operatorname{Av}(\overline{a}),M}^{\varphi}(p_i) = \operatorname{Av}(\overline{a})(\varphi(x,b_i)) = \operatorname{Av}(\overline{c})(\varphi(x,b'_i)) \approx_{\epsilon} \mu(\varphi(x,b'_i)) = F_{\mu,M}^{\varphi}(p_i). \end{equation*} This concludes the proof. \end{proof} \begin{theorem}\label{main:Gan} Fix a formula $\varphi(x,y)$ and a small model $M$ containing all the parameters from $\varphi(x,y)$. Assume that $\mu \in \mathfrak{M}_{\varphi}(\mathcal{U})$. If \begin{enumerate} \item $\varphi(x;y)$ is NIP, \item $\mu$ is $\varphi$-definable over $M$, \item and $\mu$ is finitely satisfiable in $M$, \end{enumerate} then for every $\epsilon > 0$, there exist $a_1,...,a_n \in M^{x}$ such that, with $\overline{a} = (a_1,...,a_n)$, \begin{equation*} \sup_{b \in \mathcal{U}^{y}}|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a})(\varphi(x,b))| < \epsilon. \end{equation*} \end{theorem} \begin{proof} We remark that the proof is similar to that of Proposition \ref{Mazur}. Since $\mu$ is $\varphi$-definable over $M$, $\mu$ is $(M_0,\varphi)$-invariant where $M_0$ is a countable submodel of $M$. By Lemma \ref{meas:lemma}, the map $F_{\mu,M_{0}}^{\varphi}$ is in $\operatorname{cl}(\operatorname{conv}(\mathbb{F}_{M_0}))$. By $(iv)$ of Fact \ref{local:facts}, there exists a sequence $(g_i)_{i \in \omega}$ with each $g_i \in \operatorname{conv}(\mathbb{F}_{M_0})$ so that $\lim_{i \to \infty} g_i = F_{\mu,M_0}^{\varphi}$. By Mazur's lemma, for every $\epsilon > 0$, there exist a finite set $I \subset \mathbb{N}$ and positive real numbers $\{r_i: i \in I\}$ such that $\sum_{i \in I} r_i = 1$ and \begin{equation*} \sup_{q \in S_{y}(M_{0})} |F_{\mu,M_0}^{\varphi}(q) - \sum_{i \in I} r_i g_i(q)| < \epsilon. \end{equation*} The map $\sum_{i \in I} r_i g_{i}$ can clearly be uniformly approximated by an average function. More explicitly, there exists $ \overline{d} \in (M^{x})^{<\omega}$ such that \begin{equation*} \sup_{q \in S_{y}(M)} |\sum_{i \in I} r_i g_i (q) - F^{\varphi}_{\operatorname{Av}(\overline{d}),M}(q)| <\epsilon. \end{equation*} Hence \begin{equation*} \sup_{b \in \mathcal{U}^{y}}|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{d})(\varphi(x,b))| = \sup_{q \in S_{y}(M)} |F_{\mu,M}^{\varphi}(q) - F_{\operatorname{Av}(\overline{d}),M}^{\varphi}(q)| < 2\epsilon, \end{equation*} which completes the proof. \end{proof}
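To make the quantitative content of Theorem \ref{main:Gan} concrete, the following purely illustrative sketch (in Python, on a finite toy example; the uniform weighting, the choice of formula, and the sample size are assumptions made here for illustration and are not part of the formal development) computes the quantity the theorem bounds: the supremum over parameters $b$ of $|\mu(\varphi(x,b)) - \operatorname{Av}(\overline{a})(\varphi(x,b))|$ for a finite tuple $\overline{a}$.
\begin{verbatim}
import random

# Purely illustrative toy setting: x ranges over 100 points and "mu" is the
# uniform weighting on them (both are assumptions made for this sketch).
points = list(range(100))
weights = [1.0 / len(points)] * len(points)

def mu(phi, b):
    # mu(phi(x, b)) = total weight of the points satisfying phi(x, b)
    return sum(w for p, w in zip(points, weights) if phi(p, b))

def av(abar, phi, b):
    # Av(abar)(phi(x, b)) = fraction of the entries of abar satisfying phi(x, b)
    return sum(1 for a in abar if phi(a, b)) / len(abar)

phi = lambda x, b: x <= b            # a toy instance of a formula phi(x, y)
abar = random.sample(points, 20)     # a candidate finite tuple of points

error = max(abs(mu(phi, b) - av(abar, phi, b)) for b in points)
print("sup_b |mu(phi(x,b)) - Av(abar)(phi(x,b))| =", round(error, 3))
\end{verbatim}
In this finite toy setting the error can be made small simply by enlarging the tuple; the point of the theorem is that, for an NIP formula and a measure $\varphi$-definable and finitely satisfiable in $M$, such finite averages from $M$ exist for every $\epsilon$.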
\section{Introduction} \label{intro} This paper concerns how to generate and understand discourse anaphoric noun phrases, or noun phrases (NPs) that evoke a discourse entity already in the discourse model (Webber~\shortcite{webber78}). Dale~\shortcite{dale89}~\shortcite{dale92} implements Gricean constraints on informativeness for generating discourse anaphoric NPs. However, his model follows the tradition of assuming that distinct constraints govern pronouns versus phrasal NPs (cf.~\cite{reichman85}~\cite{gs86}). Centering~\shortcite{gjw83}~\shortcite{kameyama85}, a model of local attentional state~\shortcite{sidner79}, has been applied primarily to definite pronouns. I argue that Gricean constraints should be applied equally to discourse anaphoric pronouns and phrasal NPs, and that integrating centering and informational constraints covers a broader range of cases. In \S\ref{data}, I present an analysis of all discourse anaphoric NPs (N=1,233) in a corpus of ten narratives showing that semantic explicitness depends largely on informational constraints. Discourse anaphoric NPs almost never provide new information, and are rarely more informative than necessary. In \S\ref{model}, I show how Dale~\& Reiter's~\shortcite{dale&reiter94} generation model can be augmented to apply uniformly to pronouns and phrasal NPs for both generation and understanding. While centering has been used to account for informationally under-specified pronouns, I argue that centering also accounts for certain over-specified phrasal NPs. In~\S\ref{integration}, I integrate centering with the augmented Gricean model and discuss the extended coverage. Results in \S\ref{data} include a one-way correlation of overly informative discourse anaphoric NPs with shifts in global discourse structure. In the conclusion, I discuss directions for extending the integrated model in ways that might indirectly account for this correlation. \section{Analysis of a Coded Corpus} \label{data} In this section, I present the results of an analysis of all discourse anaphoric NPs in a corpus of spoken narratives directed at the question of how informative NPs are, relative to their contexts of occurrence. The first subsection describes the corpus and coding features. The next subsection presents results showing that discourse anaphoric NPs in the corpus, whether pronominal or phrasal, are rarely more informative than necessary, and if so, tend to occur at shifts in global discourse structure. Fig.~\ref{eg1} identifies four possibilities regarding the semantic informativeness of an NP relative to its context. Three of them pertain to the following Gricean principles, referred to by Dale~\shortcite{dale89} as informational adequacy and efficiency: the speaker should be sufficiently informative to unambiguously identify the intended referent (adequacy), and the speaker should be no more informative than necessary (efficiency). The boxed pronouns in (2a) of Fig.~\ref{eg1} are both adequate and efficient (well-specified): it is clear what the pronouns refer to; less informative forms (zero pronouns) would be ungrammatical. The phrasal NPs in (2b) are adequate but not efficient (over-specified). The pronominal NP in (2c) is inadequate (under-specified; efficiency does not apply to inadequate NPs): ``{\it it}'' could refer either to the ladder or the tree. A fourth possibility is that an NP may perform two functions, to identify the referent and to add information about it, as in (2d) (over-determined). 
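Read procedurally, the four possibilities just listed amount to a small decision rule over three binary features. The following sketch (in Python; the boolean encoding is introduced here only for illustration and is not part of the coding scheme, whose feature labels appear in Fig.~\ref{eg1} below) states the rule explicitly.
\begin{verbatim}
def classify_np(adequate, efficient, increasing):
    # Classify a discourse anaphoric NP by the three features of the
    # relative-informativeness figure (label eg1).
    if not adequate:
        return "under-specified"     # (2c); efficiency is not evaluated
    if increasing:
        return "over-determined"     # (2d): identifies and adds information
    if efficient:
        return "well-specified"      # (2a)
    return "over-specified"          # (2b)

# e.g. classify_np(adequate=True, efficient=False, increasing=False)
#      returns "over-specified"
\end{verbatim}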
In Fig.~\ref{eg1}, the feature +/- increasing distinguishes between over-determined and over-specified NPs. {\scriptsize \begin{figure}[t] \begin{tabbing} aaaa \= aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa \=aaaaaaaaaaaaaaaaaaaaaaaaaaa \kill 1 \> A man$_{1}$ saw a ladder$_{2}$ leaning against a pear tree$_{3}$. \\ 2a. \> Later, \fbox{he$_{1}$} moved \fbox{it$_{2}$} to a different tree. \\ \> $+$adequate; $+$efficient \> {\tt well-specified}\\ 2b. \> \fbox{The man$_{1}$} moved \fbox{the ladder$_{2}$} to a different tree. \\ \> $+$adequate; $-$efficient; $-$increasing \> {\tt over-specified}\\ 2c. \> \fbox{It (?)} was tall. \\ \> $-$adequate \> {\tt under-specified}\\ 2d. \> \fbox{The contented pear picker$_{1}$} was done for the day. \\ \> $+$adequate; $+$increasing \> {\tt over-determined} \end{tabbing} \vspace{-.18in} \caption{\label{eg1} Relative Informativeness} \end{figure} } \subsection{Data Coding} The corpus consists of ten narrations from Chafe's Pear stories~\shortcite{chafe80}. Chafe recorded and transcribed subjects who had been asked to view the same movie and describe it to a second person. The movie contained seven sequential episodes about a man picking pears. It had a vivid sound track, but no language. As part of a long-term study of the relationship between linguistic features and discourse structure~\cite{passonneau&litman93}~\cite{litman&passonneau95a}~\cite{litman&passonneau95b}, discourse anaphoric NPs in the corpus had already been coded for coreference relations and location. Location of an NP is represented here in terms of the containing sentential utterance and discourse segment, as described below. Fig.~\ref{excerpt} illustrates an excerpt. {\scriptsize \begin{figure}[h] \begin{center} \begin{tabular}{rrl} \multicolumn{1}{c}{S$_{i}$} & \multicolumn{1}{c}{U$_{j}$} \\ 6 & 28 & And you think "Wow, this little boy's$_{i}$ \\ & & probably going to come and see the pears, \\ & 29 & [ps] and [ps] he$_{i}$ 's going to take a pear or two, \\ & & and then.. go on his way." \hspace{.5in} \\\hline 7 & 30 & [ps] U-m but \fbox{the little boy$_{i}$} comes, \\ & 31 & [ps] a-nd u-h [1.0] he$_{i}$ doesn't want just a pear, \\ & 32 & he$_{i}$ wants a whole basket. \\\hline 8 & 33 & [ps] So \fbox{he$_{i}$} puts the- [ps] bicycle down,\\ & 34 & and he$_{i}$ [ps] you wonder \\ & & how he's$_{i}$ going to take it with this. \\ \end{tabular} \caption{\label{excerpt} Narrative Excerpt Illustrating Informativeness} \end{center} \vspace{-.15in} \end{figure} } Chafe~\shortcite{chafe80} identified three types of prosodic phrases from graphic displays of intonation contours. A period indicates a phrase terminated by a pitch fall, a question mark indicates final level or rising pitch, and a comma indicates phrase-final---not sentence-final---intonation. The transcriptions here show all repeated and incomplete words and phrases, non-lexical articulations such as ``uh, um, tsk'', and vowel lengthening as indicated by `-'. Pause locations are shown as `[ps]'. Sentential utterances are defined so as to form a non-overlapping sequence of units that completely covers the discourse. Briefly, a new sentential utterance begins with a functionally independent clause (FIC) if it is immediately adjacent to the preceding FIC. Otherwise it begins at the onset of the prosodic phrase where the next FIC begins.
An FIC is a tensed clause that is not a verb argument, a restrictive relative clause, or one of a set of formulaic ``interjection'' clauses (e.g., ``{\it You know}'' with no clausal argument; for full details cf.~\cite{passonneau94cod}). Material between clauses includes sentence or word fragments, and non-lexical articulations (e.g., ``{\it um}''). Locations and sequence numbers of the seven sentential utterances in Fig.~\ref{excerpt} are shown in column 2. The global context is structured into sequential segments, multi-utterance units whose utterances are presumed to be more related to one another semantically and pragmatically than to other utterances. The segments numbered 6-8 (col. 1 of Fig.~\ref{excerpt}) were derived from an empirical study described in~\cite{passonneau&litman93}. Each narrative was segmented by 7 new, untrained subjects. Subjects were instructed to place segment boundaries in transcripts whenever the narrator had finished one communicative task and begun a new one. They were restricted to placing boundaries between prosodic phrases. To focus their attention on the criterion, subjects were also instructed to label segments with a brief description of the speaker's intention. The size and number of segments per subject per narrative varied widely, from a rate of 5.5\% to 41.3\% (Avg.=16\%), with segment widths ranging from 1 to 49 phrases (Avg.=5.9). Despite this variation, the number of times 4 to 7 subjects assigned boundaries in the same place was highly significant (using Cochran's Q~\shortcite{cochran50}; cf.~\cite{passonneau&litman93}). We took agreement among at least 4 subjects as the threshold for empirically validated boundaries. A surface constituent is considered to be a discourse anaphoric NP if it occurs in free variation with syntactically prototypical NPs, and corefers with a preceding NP (cf.~\cite{passonneau94cod}). One type of empty category is also included, namely zero pronoun subjects of FICs conjoined by ``,'', ``and'', etc. In Fig.~\ref{excerpt}, the coreferential NPs used to refer to the little boy are coindexed. Segments 7 and 8 in Fig.~\ref{excerpt} both begin with an utterance containing an NP referring to the boy. At the onset of segment 7, a phrasal NP is used to refer to him (U$_{30}$), whereas at the onset of segment 8 (U$_{33}$), a definite pronoun is used. But a pronoun could have replaced the phrasal NP in U$_{30}$ with no loss of information. So the phrasal NP is over-specified but not over-determined; the attributes ``{\it boy}'' and ``{\it little}'' were already mentioned in U$_{28}$. The pronoun subject in U$_{33}$ is locally well-specified because the boy is the only animate entity mentioned in U$_{32}$; it is globally well-specified because the boy is the only entity in the discourse with a bicycle. \subsection{Analysis of Informational Constraints} \label{analysis} The goal of the analysis is to determine whether relative informativeness of NPs correlates with global discourse structure (cf.~\cite{reichman85}~\cite{gs86}). Any phrasal NP that is discourse anaphoric is potentially over-specified, whereas a definite pronoun will only be over-specified if a zero pronoun could have been used. I first sorted the discourse anaphoric NPs in the corpus (N=1,233) into the three categories of phrasal NPs (PhrNPs; N=563), explicit pronouns (PROs: definite, indefinite, demonstrative; N=544), and zero pronominals (ZPs; N=126). Then I identified all pairs of coindexed NPs where NP$_{2}$ was more explicit than NP$_{1}$.
This procedure identified 128 discourse anaphoric NPs in the corpus that were potentially over-specified or over-determined. The sole over-determined NP, illustrated in Fig.~\ref{fig2}, occurs relatively late in the narrative (U$_{85}$); it seems mainly to provide contrast (cf. ``{\it that old man}'' vs. ``{\it those little boys}''). {\scriptsize \begin{figure}[t] \begin{tabular}{rl} \multicolumn{1}{c}{U$_{j}$} & \\ 84 & [ps] You just know that those little boys are going to go back, \\ & [ps] to where the pear tree was, \\ 85 & and you just know \fbox{that old man}'s going to see [ps] these little\\ & boys coming and say "Ha.. you're the ones who stole the pears." \end{tabular} \caption{\label{fig2} Over-determined NP} \end{figure} } Potentially over-specified NPs were sorted into four mutually exclusive categories---well-specified, segment onset, attentional shift, and reiterative. A potentially over-specified NP is well-specified if a less explicit form would have been ambiguous or unclear. The containing utterance is included in the context since the proposition expressed in an utterance can disambiguate a referring expression. A potentially over-specified NP that is not well-specified, but which occurs in the first utterance of a new segment, is classified as a segment onset. The segments in the coded Pear corpus arguably contain intra-segmental shifts of attention associated with changes in temporal aspect, or shifts in discourse reference time (for definitions assumed here, cf.~\cite{kameyama&etal93}). The third category, attentional shift, consists of these cases. A fourth catch-all category includes, e.g., repetitions, repairs, contrastive NPs and unexplained cases. {\scriptsize \begin{table}[b] \begin{center} \begin{tabular}{|lrrrrr|}\hline\hline \multicolumn{1}{|c}{Antecedent} & \multicolumn{1}{c}{Well-} & \multicolumn{1}{c}{Segment} & \multicolumn{1}{c}{Atten.} & \multicolumn{1}{c}{Other} & \multicolumn{1}{c|}{Total} \\ \multicolumn{1}{|c}{Segment} & \multicolumn{1}{c}{Specified} & \multicolumn{1}{c}{Onset} & \multicolumn{1}{c}{Shift} & & \\\hline Same & 22 & - & 21 & 15 & 58 \\ \% & 38\% & - & 36\% & 26\% & 100\% \\\hline Prev & 37 & 20 & 8 & 4 & 69 \\ \% & 53\% & 29\% & 12\% & 6\% & 100\% \\\hline Totals & 59 & 20 & 29 & 19 & 127 \\ \% & 46\% & 16\% & 23\% & 15\% & 100\% \\\hline \end{tabular} \end{center} \caption{\label{table1} Potentially Over-Specified NPs} \end{table} } Table~\ref{table1} indicates that most potentially over-specified NPs (N=127) were either well-specified (46\%) or occurred at an empirically verified segment onset (16\%) or a hypothesized attentional shift (23\%). Of the 69 NPs whose nearest antecedent was in a distinct segment, 29\% occurred at a segment onset. Over a third (36\%) of the NPs whose antecedent was in the same segment, and 12\% of those whose antecedent was in a distinct segment, occurred at an intra-segmental attentional shift. In sum, in the coded Pear corpus, NPs that re-evoke existing entities seem to be rarely over-specified (68/1233, or 5.5\%) or over-determined (1/1233). Of the 68 over-specified cases (columns 3-5), 20 (30\%) correlate with segment onsets independently identified by naive subjects, and 29 (42\%) appear to correlate with intra-segmental attentional shifts. Thus, an over-specified NP is more likely than not to correlate with an attentional shift (72\%). Note, however, that the reverse implication does not hold; that is, it is not the case that a segment shift is likely to be signalled by an over-specified NP.
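The proportions quoted in this subsection follow directly from the cell counts in Table~\ref{table1}; a quick arithmetic check (in Python, with the counts copied from the table) is sketched below.
\begin{verbatim}
# Cell counts from Table 1 (rows: antecedent in the same vs. a previous segment).
same = {"well": 22, "onset": 0, "shift": 21, "other": 15}
prev = {"well": 37, "onset": 20, "shift": 8, "other": 4}

print(sum(same.values()), sum(prev.values()))     # row totals: 58 and 69

over_specified = sum(v for d in (same, prev) for k, v in d.items() if k != "well")
print(over_specified)                             # 68 over-specified NPs
print(round(100 * over_specified / 1233, 1))      # 5.5 (% of all anaphoric NPs)
print(round(100 * (same["onset"] + prev["onset"]
                   + same["shift"] + prev["shift"]) / over_specified))   # 72
\end{verbatim}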
\subsection{Focused Attribute Sets} \label{fav} To account for the choice of modifiers in phrasal discourse anaphoric NPs, it is necessary to determine how attributes are selected from the information known about a discourse entity. According to Grice's~\shortcite{grice75} maxim of relation, speakers should be relevant. With respect to discourse anaphoric NPs in the Pear stories, NP modifiers are derived from what I refer to as focused attribute sets, independent of whether the NP is over-specified. Focused attribute sets fall into the following three categories of relevance. First, an attribute set can be in focus because it was mentioned in the most recent phrasal NP. For example, in Fig.~\ref{excerpt}, the boy is referred to in U$_{30}$ as ``{\it the little boy},'' repeating attributes mentioned in the last phrasal NP referring to the boy (in U$_{28}$). Second, the focused attribute set may specify the most recently mentioned location of an entity. The subject NP in U$_{17}$ of Fig.~\ref{fig2prime} (\S\ref{c-describe}) refers to one man as ``{\it the man up in the tree}'' to distinguish him from the second man who came by with a goat. The tree is the last mutually known location of the former. Finally, an attribute set can be in focus because it pertains to a key narrative event that the entity has been an agent of. Intuitively, an event is more central to a narrative the more difficult it is to describe the narrative without mentioning that event. Operationally, key events occur more frequently than others both within and across narratives. For example, the main adult character is often described as ``{\it the pear picker},'' or as ``{\it the man who was picking pears}'' (see U$_{108}$ of Fig.~\ref{fig3}, \S\ref{integration}), and so on; the other main character is often described as ``{\it the thief},'' ``{\it the boy who stole the pears},'' ``{\it the boy with the pears},'' and so on. How to order the focused attribute sets for a given discourse entity is a topic for further investigation. Here, I simply assume that the three types of attribute sets mentioned above---where applicable---are in focus. I also assume that the focused attribute sets of an entity (FAV$_{e}$) are updated as the discourse progresses. \section{Modelling Informativeness of NPs} \label{model} The data reported above indicate that in the Pear corpus, definite pronouns and phrasal NPs are rarely over-specified or over-determined. In this section, I describe a processing model to account for this observation. In the next section, I discuss how centering can be integrated with this model to account for under-specified pronouns, and certain over-specified phrasal NPs. First, I briefly review Dale's~\shortcite{dale89}~\shortcite{dale92} model, including his more recent work with Reiter~\shortcite{dale&reiter94}. Then I modify this model to apply to understanding as well as generation; to include the current utterance in the context of evaluation; to apply informational constraints uniformly to pronouns and phrasal NPs; and to select modifiers on the basis of focused attribute-value pairs. \subsection{Distinguishing Descriptions} \label{dd} Dale~\shortcite{dale89} generates anaphoric pronouns and phrasal NPs by distinct means. In EPICURE~\shortcite{dale89}, a system for generating recipes, a definite pronoun is always generated to refer to the `discourse center', which is analogous to the backward-looking center of~\cite{gjw83}~\cite{kameyama85}, but is domain specific.
It is the entity that results from the next recipe operation. For example, rice$_{1}$ will be the center after an utterance of {\it Stir the rice$_{1}$}. Dale~\shortcite{dale89} requires phrasal NPs to be distinguishing descriptions. As in Webber~\shortcite{webber78}, Dale assumes that the discourse model represents the discourse entities that have already been evoked, and the attribute-value pairs describing them. For any set of entities U, Dale~\shortcite{dale89} defines a distinguishing description of an entity $e$ in U to be a set of attribute-value pairs that are true of $e$, and of no other members of U. This enforces adequacy. He defines a minimal distinguishing description to be one where the cardinality of the attribute-value pairs cannot be reduced. This addresses efficiency.\footnote{Cf. Reiter~\shortcite{reiter90} for a discussion of problems in generating maximally efficient NPs using Dale's framework, and Dale~\& Reiter~\shortcite{dale&reiter94} for an argument that maximal efficiency is psychologically implausible.} Dale~\shortcite{dale89} defines the discriminatory power (${\cal F}$) of an attribute-value pair $<$A, V$>$ that is true of a discourse entity $e$ in a universe of entities U in terms of the cardinality $N$ of U, and the total number $n$ of entities in U that $<$A, V$>$ is true of: \[ {\cal F}(<A, V>, U) = \frac{N - n}{N - 1}. \] ${\cal F}$ ranges in value from 0 to 1. If $<A, V>$ is true of only one of the entities in the set U, then ${\cal F}(<A, V>, U)$ is 1, and $<A, V>$ is a distinguishing description of the entity. Dale's~\shortcite{dale89} algorithm for constructing a distinguishing description of $e$ in U, given a set ${\cal P}$ of attribute-value pairs that are true of $e$, briefly works as follows. First compute ${\cal F}$ for each member of ${\cal P}$. If all values of ${\cal F}$ are 0, no unique description can be constructed. Otherwise, select the attribute-value pair with the highest value to add to the description, and reset U to be only those entities in the current U that the selected attribute-value pair is true of. Repeat this process, terminating when an attribute-value pair with a discriminatory power of 1 has been selected. The selected attribute-value pairs constitute the input description for a surface NP. In recent work, Dale~\& Reiter~\shortcite{dale&reiter94} enforce a range of Gricean constraints using an algorithm based on human behavior that is simpler and faster than their previous algorithms~\cite{dale89}~\cite{reiter90}. It performs less length-oriented optimization, thus balancing brevity against lexical preference. The output NPs are not guaranteed to be maximally short because humans occasionally use unnecessary modifiers. The 5.5\% rate of over-specified discourse anaphoric NPs in the Pear data also supports the relaxation of brevity, but is partly conditioned by attentional factors (cf.~\S\S\ref{integration}--\ref{conclusion}). \subsection{C\_describe} \label{c-describe} In this section I illustrate the role of $c\_describe$ in processing definite pronouns and phrasal NPs. C\_describe is a 4-place relation among a discourse entity E, a surface NP, the current utterance context $\lambda U$, and the discourse context C that requires $\lambda NP \lambda U$ to be a distinguishing description of E relative to C. For generation, NP is solved for given an instantiation of the remaining three arguments, whereas E is solved for during understanding (assuming Prolog's control structure).
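Before turning to examples, a minimal sketch of the distinguishing-description machinery that $c\_describe$ relies on may be useful (in Python; the attribute representation, the entity names, and the toy knowledge base are illustrative assumptions, not Dale's implementation).
\begin{verbatim}
def discriminatory_power(pair, universe, facts):
    # F(<A,V>, U) = (N - n)/(N - 1), where N = |U| and n is the number of
    # entities in U that the attribute-value pair is true of.
    N = len(universe)
    if N <= 1:
        return 1.0                  # a one-entity universe is trivially resolved
    n = sum(1 for e in universe if pair in facts[e])
    return (N - n) / (N - 1)

def distinguishing_description(e, universe, facts):
    # Greedy selection of attribute-value pairs true of e, after Dale (1989).
    description, U = [], set(universe)
    candidates = set(facts[e])
    while candidates:
        best_f, best = max((discriminatory_power(p, U, facts), p)
                           for p in candidates)
        if best_f == 0:
            break                   # no remaining pair rules out a competitor
        description.append(best)
        U = {x for x in U if best in facts[x]}
        candidates.discard(best)
        if best_f == 1:
            return description      # e is now uniquely identified
    return None                     # no distinguishing description exists

# Toy knowledge base with two entities (names and attributes are illustrative):
facts = {"e1": {("type", "man"), ("loc", "tree")},
         "e2": {("type", "boy"), ("loc", "road")}}
print(distinguishing_description("e1", ["e1", "e2"], facts))
\end{verbatim}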
A definite pronoun that is a distinguishing description is also a minimal distinguishing description because its length is 1. In generation, $c\_describe$ first attempts to find a definite pronoun to satisfy the uninstantiated NP argument, succeeding if the pronoun is a distinguishing description. For generating the pronoun ``{\it he}'' in U$_{4}$ of Fig.~\ref{fig6}, the arguments of $c\_describe$ are: \begin{eqnarray} \label{u-4} \lefteqn{c\_describe(e_{1}, \: NP,}\\ \nonumber \lefteqn{\hspace{.15in} \lambda X \: fills'(X,``his \: thing'',``pears''), \: FS_{1})} \end{eqnarray} \noindent The utterance context is assumed to be a feature structure co-indexed with any relevant discourse entities other than the uninstantiated variable E.\footnote{For simplicity, the utterance context represents certain semantic arguments as quoted strings.} Because the utterance is part of the input in solving for NP, given information that appears anywhere in the current utterance can be used to filter entities from the discourse context, following Dale's~\shortcite{dale89} algorithm. New information about an entity in the utterance is not mutually known, and has no discriminatory power~\cite{dale89}. For present purposes, the last argument of $c\_describe$ is first instantiated to the most recent focus space, and in turn to other focus spaces until a solution is found. Dale~\shortcite{dale89} takes the universe of discourse to be partitioned into focus spaces (cf.~\cite{gs86}), with the most recent focus space being the most accessible, and making no assumptions regarding relative accessibility of earlier focus spaces. Similar assumptions are made here. I assume that segment boundaries in the Pear corpus correspond to focus spaces, and that some focus spaces may be composed of others. I assume the existence of an inference mechanism that constrains how focus spaces are signalled during generation, and how focus spaces are inferred during understanding. In recent work, for example, Litman and I report on algorithmic methods for identifying segment boundaries in the Pear corpus using features of prosody, cue words and referential NPs~\cite{litman&passonneau95b}. Given such a mechanism, a new focus space would be added to the discourse model after a segment onset has been processed. {\scriptsize \begin{figure}[t] \begin{tabular}{rrl} \multicolumn{1}{c}{S$_{i}$} & \multicolumn{1}{c}{U$_{j}$} \\ 2 & 4 & [ps] A- nd \fbox{he (e$_{1}$)} [ps] fills his- thing with pears, \\ & 5 & and ZERO (e$_{1}$) comes down \\ & 6 & and there's a basket he (e$_{1}$) puts them in. \\\hline 3 & 7 & [ps] A-nd you see- [ps] passerbyers on \\ & & \hspace{.18in} bicycles and stuff go by. \\ & 8 & [ps] A-nd [ps] then a boy (e$_{2}$) comes by, \\ & & [ps] on a bicycle, \\ & 9 & \fbox{the man (e$_{1}$)} is in the tree,\\ & 10 & [ps] and \fbox{the boy (e$_{2}$)} gets off the bicycle, \\ & 11 & and ZERO (e$_{2}$) looks at the man (e$_{1}$) , \\ \end{tabular} \caption{\label{fig6} Excerpt from Narrative 9} \end{figure} } In (\ref{u-4}), FS$_{1}$ appears as the initial context argument of $c\_describe$. The only animate entity in FS$_{1}$ is e$_{1}$, previously described as a man picking pears in a pear tree who looks like a farmer, is plump, has a mustache, and is wearing a white apron (utterances 1-3, not shown here).
The feature structures corresponding to all but one of the definite pronouns ``{\it he, she, it}'' or ``{\it they}'' will be rejected as a description of e$_{1}$ because e$_{1}$ is neither plural, non-animate, nor female.\footnote{For simplicity, I am ignoring the difference between grammatical gender and sex.} The pronoun ``{\it he}'', represented as the attribute-value pairs ($<$type: human$>$, $<$gender: male$>$, $<$cardinality: 1$>$), not only describes e$_{1}$, it is also a minimal distinguishing description. An analogous process applies to understanding the same pronoun in U$_{4}$, with the entity variable E uninstantiated, NP instantiated to ``{\it he}'', and the utterance and discourse context instantiated as above. Given a distinguishing description, there is guaranteed to be exactly one solution to E. However, the search problem increases with the size of the context. Partitioning the search space into focus spaces controls the search through the discourse model to some degree. (Integrating centering with $c\_describe$ as described below guides the search even further.) For present purposes, $c\_describe$ returns E instantiated to e$_{1}$ after searching through the entities in FS$_{1}$. The remaining NPs exemplified here are understood in a similar fashion. Given a context where there is no definite pronoun solution to NP, $c\_describe$ will attempt to construct a phrasal NP, preferably with no modifiers. In Fig.~\ref{fig6}, a new, singular, male, human entity is added to the context at U$_{8}$: a boy who comes by on a bicycle (e$_{2}$). Subsequent references to the boy or the man must discriminate between them. The utterance context for the subject NP of U$_{9}$---$\lambda \: X \: in'(X,``the \; tree'')$---does not identify e$_{1}$ because U$_{5}$---``{\it he comes down}''---leads to the inference that the man is no longer in the tree. However, e$_{1}$ is a male adult and e$_{2}$ is a male child, a distinction encoded by the common nouns ``{\it man}'' versus ``{\it boy}''. Since ``{\it man}'' is what Dale~\& Reiter~\shortcite{dale&reiter94} refer to as a basic attribute, ``{\it man}'' will be selected as the head noun. The determiner will be definite because the entity is already in the context (but cf.~\cite{passonneau94foc}). The resulting NP {\it the man} is a minimal distinguishing description because no pronoun is a distinguishing description. {\scriptsize \begin{figure}[t] \begin{tabular}{rrl} \multicolumn{1}{c}{S$_{i}$} & \multicolumn{1}{c}{U$_{j}$} \\ & 13 & and it's just a monotonous kind of thing for him (e$_{1}$). \\\hline 4 & 14 & [ps] And a man (e$_{2}$) comes along with a goat, \\ & 15 & [ps] and the goat obviously is interested in the pears. \\ & 16 & But the man (e$_{2}$) just walks by with the goat. \\\hline 5 & 17 & And \fbox{the man (e$_{1}$) up in the tree}\\ & & \hspace{.12in} doesn't even notice. \end{tabular} \caption{\label{fig2prime} Phrasal NP to Avoid Ambiguity} \end{figure} } Fig.~\ref{fig2prime} illustrates a context where a phrasal NP without modifiers could not both have a head noun that specifies a basic attribute, and be a distinguishing description. It also illustrates the problematic nature of relations among distinct focus spaces. In generating the subject NP in U$_{17}$, the last argument of $c\_describe$ is first instantiated to FS$_{4}$. The pears referred to in U$_{15}$ of segment 4 are some pears that e$_{1}$ picked, so in order to interpret U$_{15}$, e$_{1}$ must be brought into focus.
This side-effect of resolving the reference to the pears could be implemented by adding e$_{1}$ to FS$_{4}$, or by resetting the current focus space to a more encompassing focus structure that includes FS$_{3}$ and FS$_{4}$. I believe further empirical work is needed to resolve such issues. In any case, I assume that the context for generating U$_{17}$ includes both e$_{1}$ and e$_{2}$. Because these two entities are the same type, a distinguishing description of e$_{1}$ must contain discriminatory modifiers. Features for generating the modifiers are selected from FAV$_{e_{1}}$, which here contains only two sets of salient attributes. Since e$_{1}$'s location is the most recently evoked, it is used in generating the NP. Above, I noted that centering can add structure to the search space for understanding discourse anaphoric NPs. Fig.~\ref{fig6} illustrates another reason to integrate centering with $c\_describe$. In U$_{10}$ of Fig.~\ref{fig6}, the subject NP (``{\it the boy}'') is not a pronoun even though the utterance context is a distinguishing description of e$_{2}$. The boy (e$_{2}$) is mutually known to have been on a bicycle at the time of the event mentioned in utterance U$_{8}$. Temporal processing (cf.~\cite{kameyama&etal93}) would lead to the inference that the boy is still on the bicycle after U$_{9}$. Thus a definite pronoun is presumably well-specified, and the model presented so far would generate ``{\it he}''. However, a pronoun would produce a garden path effect in this context; i.e., it would be interpreted as referring to the man until ``{\it bicycle}'' has been processed. \section{Centering and Informativeness} \label{integration} The $c\_describe$ relation has three limitations that centering can compensate for. First, $c\_describe$ constrains the semantic content of a discourse anaphoric NP, but not its grammatical role. Second, as noted below, centering predicts that a pronoun can be under-specified. Third, an explanation is needed for the over-specified NP {\it the boy} in U$_{10}$ of Fig.~\ref{excerpt}. In this section, I indicate how centering is interleaved with $c\_describe$. Centering is a more local process so it applies first. \subsection{Centering} \label{centering} Centering is a model of local focus of attention that constrains the use of definite pronouns~\cite{gjw83}~\cite{kameyama85}. One of the discourse entities~\cite{webber78} evoked by an NP in an utterance U$_{i}$ may be the backward-looking center (CB)~\cite{gjw83} of U$_{i}$, the current local focus of attention. Alternatively, the CB of U$_{i}$ (CB$_{U_{i}}$) might not be explicitly mentioned (realized) in the utterance~\cite{gjw83}. The discourse entities mentioned in U$_{i}$ comprise the forward looking centers (CFs), ordered by increasing obliqueness of grammatical role~\cite{kameyama85}~\cite{passonneau89} to represent the likelihood that they will be mentioned in the subsequent utterance. The centering principle~\cite{gjw83} predicts that if CB$_{U_{i}}$ and CB$_{U_{i-1}}$ are the same entity, then the NP evoking CB$_{U_{i}}$ will be a third person, definite pronoun. \addtocounter{example}{1} \begin{ex} \label{egg} a. Carmella$_{j}$ went to the bookstore. \\ b. Afterwards, she$_{j}$ gave Rachel$_{k}$ a new book. \\ c. She$_{j}$'s a true bibliophile. \end{ex} Example (\ref{egg}) illustrates that where the semantics of the utterance and commonsense reasoning do not discriminate among possible referents for an ambiguous pronoun, there is an independent effect of local attentional constraints. 
Centering predicts that the preferred interpretation of the pronoun in (\ref{egg}c) is Carmella. But in this context, neither the pronoun alone nor the utterance is a distinguishing description of anyone, so the pronoun is under-specified. \begin{ex} \label{lmn} a. Carmella$_{j}$ went to the bookstore. \\ b. On her way home, she$_{j}$ saw Rachel$_{k}$. \\ c. She$_{k}$ looked pale. \end{ex} Kameyama~\shortcite{kameyama85} used examples like (\ref{lmn}) to illustrate how commonsense reasoning and lexical semantics can override the default centering predictions for pronoun interpretation. Centering would predict that `Carmella' is the backward-looking center of (\ref{lmn}b), and that the default interpretation of the pronoun in (\ref{lmn}c) would thus be `Carmella'. Instead, (\ref{lmn}c) is interpreted as a continuation of the description of the perceptual event in (\ref{lmn}b). Kameyama~\shortcite{kameyama86} posits property sharing of features of adjacent utterances as a constraint on CB, where the shared property can be subject (or non-subject) grammatical role (cf.~\cite{passonneau89}), as in (\ref{egg}), or what she refers to as empathy, as in (\ref{lmn}). Note that because `Rachel' is already known to be the object of the perceptual event in (\ref{lmn}b), the utterance context in (\ref{lmn}c) is a distinguishing description of `Rachel.' \subsection{Integrated Model} \label{final-model} {\scriptsize \begin{figure}[b] \begin{tabular}{rrl} \multicolumn{1}{c}{S$_{i}$} & \multicolumn{1}{c}{U$_{j}$} & \\ 21 & 105 & [ps] So they (e$_{1}$)'re walking along, \\ & 106 & and they (e$_{1}$) brush off their pears (e$_{3}$), \\ & 107 & and they (e$_{1}$) start eating it (e$_{3}$). \\\hline 22 & 108 & Then \fbox{they (e$_{1}$)} walk by-[ps] \\ & & \fbox{the man who was picking the pears (e$_{2}$)} \\ \end{tabular} \caption{\label{fig3} Excerpt from Narrative 1} \end{figure} } Fig.~\ref{fig3} shows all of one segment and part of another one where the subject pronouns of all the utterances are coreferential. On the one hand, the CB of the segment initial utterance U$_{108}$ is the same as the CB of U$_{107}$, conflicting with the idea expressed in~\cite{gjw86} that centering transitions reflect global discourse coherence (cf.~\cite{passonneau94a}). On the other hand, integrating centering and $c\_describe$ can account for both NPs in U$_{108}$ and support inferences consistent with a global focus shift. Earlier in the narrative excerpted in Fig.~\ref{fig3}, three boys helped the pear thief after he had fallen off of his bicycle, and were rewarded with three pears. Segment 21 describes their adventures after the pear thief leaves. In generating utterance U$_{108}$, the input to the generator will be a representation of an event in which the boys eat their pears. The set of three boys is designated as the new CB. Because CB$_{U_{108}}$ is the same as CB$_{U_{107}}$, it should be realized as a pronoun~\cite{gjw83}, and by property sharing~\cite{kameyama85}~\cite{passonneau89}, it should be realized as the subject of the current utterance. Centering and Gricean constraints coincide here in that the definite pronoun ``{\it they}'' is also a minimal distinguishing description. To generate the phrasal NP object in U$_{108}$, the process is analogous to that discussed above for generating ``{\it the man up in the tree}'' in Fig.~\ref{fig2prime}. The context argument of $c\_describe$ is first set to Cf$_{U_{107}}$. 
Since neither Cf$_{U_{107}}$ nor the most accessible focus space (FS$_{21}$) contains a representation of e$_{2}$, the context argument will be reset until e$_{2}$ is in a focus space on the focus stack. Focussed attribute sets are then used to generate the relative clause. For understanding the subject NP in U$_{108}$, recall that centering applies before $c\_describe$. The subject pronoun will be assumed to realize the CB of the utterance, and will be assigned the default interpretation of e$_{1}$. Application of $c\_describe$ leads to the recognition that ``{\it they}'' is also a distinguishing description of e$_{1}$ relative to CF$_{U_{107}}$. In this fashion, centering prunes the search space to the single entity satisfying the informational constraints imposed by $c\_describe$. In understanding the object NP, the context argument must be instantiated to a more inclusive focus space, since neither the previous utterance nor the previous segment contains any entities described by this NP. The integrated model also accounts for the problematic phrasal NP in Fig.~\ref{fig6}, discussed above. We saw that for U$_{9}$ and U$_{10}$, repeated below, the phrasal subject of U$_{9}$ was well-specified, but the phrasal subject of U$_{10}$ was over-specified, and a pronoun would be generated instead. But as noted above, a pronoun subject would have a garden path effect. \begin{description} \item[U$_{9}$: \hspace{.005in}] the man (e$_{1}$) is in the tree (e$_{3}$), \vspace{-.02in} \item[U$_{10}$:] and the boy (e$_{2}$) gets off the bicycle (e$_{4}$), \end{description} \noindent Kameyama's version of centering~\shortcite{kameyama86} differs from~\cite{gjw83} in allowing an utterance to have a null CB. U$_{10}$ would have a null CB because there is no definite pronoun constrained by property sharing that corefers with an NP in the previous utterance; in fact no NPs in U$_{10}$ refer to entities mentioned in U$_{9}$. A definite pronoun subject in U$_{10}$ would be assumed to be CB$_{U_{10}}$ and would be inferred to refer to e$_{1}$. This accounts for the garden path effect. Consequently, a pronoun must be blocked. Because no entity in U$_{9}$ is referred to in U$_{10}$, the input for generating U$_{10}$ will be annotated as having a NULL CB. This imposes output constraints requiring the subject and object NPs to be other than definite pronouns. As a consequence, $c\_describe$ will not try to find a pronoun solution to the uninstantiated NP argument. In the first phrasal NP solution, the head would denote a basic category and the NP would have no modifiers, thus generating the existing phrase ``{\it the boy}''. In sum, centering relaxes the constraint otherwise imposed by $c\_describe$ that an NP cannot be over-specified. \section{Conclusion} \label{conclusion} I have presented an analysis of discourse anaphoric phrasal NPs in a corpus of narrative monologues showing that pronouns and phrasal NPs are rarely over-specified. Future research should indicate to what degree this generalization applies to other genres and modalities. Centering predicts conditions under which an under-specified pronoun can be used, but says little about the interpretation of phrasal NPs. I have outlined a processing model that integrates the attentional constraints of centering with aspects of Grice's maxims of quantity and quality. 
For enforcing the maxim of quantity, I rely on Dale's algorithm for constructing distinguishing descriptions~\shortcite{dale89}~\shortcite{dale92}, which I apply uniformly to pronouns and phrasal NPs for both generation and understanding. For enforcing the maxim of quality, I combine aspects of Dale~\& Reiter's~\shortcite{dale&reiter94} preferred attributes with the construct of focussed attribute sets derived from the corpus analysis. In contrast to Dale~\& Reiter~\shortcite{dale&reiter94}, distinguishing descriptions are evaluated using the current utterance context as a filter, and by instantiating the discourse context successively to the Cf list of the preceding utterance, then the current focus space, then other focus spaces, until a solution is found. Centering provides one mechanism for relaxing the requirement that an NP (either pronominal or phrasal) should be a distinguishing description. Another mechanism would be needed to relax informational constraints at shifts in focus structure, so as to account for the one-way implication of over-specified NPs with global shifts of attention (Table~\ref{table1}). However, further investigation is needed to determine how to integrate local and global discourse processing. When neither the Cf list nor the current focus space is the appropriate context for understanding or generating a discourse anaphoric NP, I have assumed that either an earlier focus space or a more inclusive one must be accessed. Some of the examples presented here suggest that the contextual dependencies captured by the use of focussed attributes might constrain the relation of each new utterance to the global discourse model. For example, the segment onset in Fig.~\ref{fig3} (U$_{108}$) contains two NPs, one of which is the same as the CB of the preceding utterance. Maintaining the same CB relates U$_{108}$ and its focus space (FS$_{22}$) to the most recent focus space FS$_{21}$. But the object NP expresses attributes last mentioned in segment 17, thus relating U$_{108}$ to the earlier focus space FS$_{17}$. If the global structure is a tree, the relation of U$_{108}$ to both segments 21 and 17 might indicate how high up in the tree to locate the new focus space. Alternatively, an investigation of such relations might provide evidence about the nature of global structure, such as whether it is a tree or a lattice. \section*{Acknowledgements} {\footnotesize This work was partly supported by NSF grant IRI-91-13064. Thanks to Robert Dale, Ehud Reiter, and Megumi Kameyama for valuable comments on ideas presented here.} {\small \bibliographystyle{named}
\section{Introduction} Considering quantum mechanical indistinguishable particles, one usually speaks of their ``statistical'' (or ``exchange'') interaction. It has no classical analogue, since there is no classical interaction force, but it influences the many-particle spectrum. Under usual circumstances, with bosons and fermions only, the rules of combining single-particle states for indistinguishable particles are affected, while particles of different species remain independent. For anyons \cite{Leinaas77,Wilczek82}, however, this interaction is nontrivial, since it leads to the absence of single-particle states, and even more, to the possibility that particles of different species can become mutually ``entangled''; hence the problem of multispecies anyons demands special consideration. It is also believed that this problem may be of direct physical interest, particularly in connection with the $t-J$ model, which might be relevant to high-$T_c$ superconductivity. The question about statistics of excitations therein, spinons and holons, is still unsettled, and it is anticipated that their ``mutual statistics'' may be nontrivial \cite{Mor}. Another possible application is to layered Hall media, where FQHE quasiparticles in different layers can be viewed as anyons with mutual statistical interactions \cite{Wilczek92}. Finally, the same situation arises in the usual quantum Hall liquid: In a state where exactly $m$ Landau levels are filled, the excitations are characterized by ``charge vectors'' of dimension $m$ and there are different nontrivial phase factors associated with interchanging different sorts of excitations \cite{Froelich91}. In this paper we address certain points concerning thermodynamics of multispecies anyons, by starting from microscopic considerations on the wave functions. For the sake of simplicity, we do the calculations for two species only, but a generalization to more than two species would be straightforward. The fugacity expansion of the thermodynamic potential now involves two fugacities, and the equation of state, correspondingly, two densities; hence, the cluster and virial coefficients are labeled by two indices. The mixed virial coefficient of the lowest order, $a_{11}$, is trivially calculable, but exact expressions for higher-order ones are certainly not easier to obtain than in the one-species case. To proceed further, we investigate the case when all the particles are in the lowest Landau level of a strong magnetic field, and, using microscopic arguments as in \cite{Dasnieres94a}, we obtain the equation of state, which coincides with the one conjectured in \cite{Dasnieres94b} and derived in \cite{Wu94} from a definition of fractional statistics \`a la Haldane \cite{Haldane91} (exclusion statistics), using a state-counting argument. A mean-field interpretation of this equation is also proposed. Also, in this (and only in this) case it becomes possible to interpret the system in terms of statistical mechanics of a gas of free particles, by following essentially the same lines as in \cite{IsakovIJMP94}. A statistical distribution is derived, which again coincides with the one for exclusion statistics \cite{Wu94,IsakovMPL94}. We summarize the expressions for the cluster and virial coefficients of a multispecies system in the Appendix.
\section{Mutual fractional exchange statistics} When particles are distinguishable, their wave functions have no definite behavior with respect to their interchange, but one can impose certain conditions that they should satisfy under a double interchange, or, equivalently, a winding of one particle around the other one. In three dimensions, as usually, wave functions are single-valued, whereas in two dimensions a non-trivial phase factor may arise. In fact, if there are $M$ species of particles, one has an $M\times M$ matrix $\alpha_{ab}$ of statistical parameters, such that the wave function of the whole system picks up a phase \begin{itemize} \item $\exp[i\pi\alpha_{aa}]$ under an {\it interchange\/} of two particles of species $a$ \item $\exp[2i\pi\alpha_{ab}]$ under {\it winding} a particle of species $a$ around a particle of species $b$ \end{itemize} (provided, in both cases, that the complete closed path encloses no other particles). The second condition, of course, holds for $a=b$ as well, but the first one in this case is stronger. Note that there is periodicity in $\alpha_{aa}$'s with period~2, but in $\alpha_{ab}$'s with period~1. Note also that since ``winding particle $a$ around particle $b$'' and ``winding particle $b$ around particle $a$'' is one and the same operation, in the system of their center of mass, the matrix $\alpha_{ab}$ must be symmetric. The above condition on the wave function can also be understood within the approach where statistics appear as kinematic properties of the configuration space \cite{Leinaas77}. The configuration space for a system containing $M$ species of identical particles can be constructed in analogy with the case of a single species \cite{Brekke91}. Irreducible representations of the fundamental group of the configuration space determine properties of the wave function under exchange of particles. In \cite{Brekke91} it was found that the fundamental group for the configuration space for a system with $M$ species of identical particles is $B_{N_1\dots N_M}$, a generalization of the braid group $B_N$. One-dimensional irreducible representations of $B_{N_1\dots N_M}$ (corresponding to spinless particles) are labeled by $M$ parameters associated with exchange of particles of the same species (which may be identified with the above $\alpha_{aa}$) and $\frac12 M(M-1)$ parameters associated with exchange of particles of different species (which may be identified with the above $\alpha_{ab}$) \cite{Brekke91}. One important observation is now in order. A physical picture usually associated with anyons is the model of charge-flux composites \cite{Wilczek82}, where the origin of the anyonic interchange phase is the Aharonov-Bohm interaction of the charges and fluxes ascribed to the particles. In the multispecies case, each species $a$ would be characterized by a charge $e_a$ and a flux $\phi_a$. Now, if one writes the Hamiltonian in the regular gauge, including the relevant vector potentials, it turns out that a singular gauge transformation to the free Hamiltonian (anyon gauge) is possible only if the equality \begin{equation} e_a \phi_b = e_b \phi_a \label{1} \end{equation} holds, for each $a$ and $b$. In this case, one will have \begin{equation} \alpha_{ab} = e_a \phi_b / 2\pi, \label{2} \end{equation} and the matrix $\alpha_{ab}$ is indeed symmetric. Equations (\ref{1}) and (\ref{2}) together imply that \begin{equation} \alpha_{ab} = \pm \sqrt{\alpha_{aa}\alpha_{bb}}. 
\label{2bis} \end{equation} Thus, only the diagonal statistical parameters are independent, save for the sign. However, a situation may be imagined when there are several Aharonov-Bohm gauge fields. Eq.~(\ref{1}) then becomes \begin{equation} e_a^\beta \phi_b^\beta = e_b^\beta \phi_a^\beta, \label{3} \end{equation} $\beta$ labeling the gauge fields, while (\ref{2}) becomes $\alpha_{ab} = \sum_\beta e_a^\beta \phi_b^\beta / 2\pi$, and (\ref{2bis}) no longer holds. Therefore, one may consider all $\alpha_{ab}$'s as independent, but in the Aharonov-Bohm model this would, generally speaking, demand to have several gauge fields. Note at the same time that in a model where the fluxes appear as a result of coupling the particles to Chern-Simons gauge fields \cite{Dasnieres92}, (\ref{3}) is satisfied automatically. Consider now the simplest case of $M=2$, and denote for brevity $\alpha_{11}=\alpha_1$, $\alpha_{22}=\alpha_2$, $\alpha_{12}=\gamma$. A many-particle wave function of a system of $N_1$ particles of species~1 and $N_2$ particles of species~2 that satisfies the above-discussed interchange conditions reads in the anyon gauge \begin{equation} \Psi = \prod\limits_{j<k}^{N_1}(z_j-z_k)^{\alpha_1} \prod\limits_{m<n}^{N_2}(\zeta_m-\zeta_n)^{\alpha_2} \prod\limits_{j,m}(z_j-\zeta_m)^\gamma \Phi(\{z_j\},\{\zeta_m\}) \label{4} \end{equation} $\alpha_1,\alpha_2,\gamma>0$ is meant, while the replacement $(z_j-z_k)^{\alpha_1} \to (z^*_j-z^*_k)^{-\alpha_1}$ is to be made if $\alpha_1 < 0$, etc. $\Phi$ is a single-valued function symmetric with respect to the coordinates $\{z_j\}$ (spe\-cies~1) as well as with respect to the coordinates $\{\zeta_m\}$ (species~2), but arbitrary with respect to $z \leftrightarrow \zeta$. This is the starting point for constructing the $(N_1+N_2)$-particle spectrum. Obviously, as far as $\gamma$ is not an integer, all $(N_1+N_2)$ coordinates are ``entangled'' together, while if $\gamma$ is integer, the problem can be factorized into the one of $N_1$ particles and the one of $N_2$ particles. Knowing the spectrum, one may find the partition function $Z_{N_1N_2}\! = \!{\rm Tr}\:\exp \left( -\beta H_{N_1N_2} \right)$, then proceed to the grand partition function \begin{equation} \Xi = \sum_{N_1,N_2} z_1^{N_1} z_2^{N_2} Z_{N_1N_2}, \label{5} \end{equation} where $z_a = \exp(\beta\mu_a)$ are the fugacities, in order to obtain the cluster expansion \begin{equation} \ln \Xi = \sum_{k_1,k_2} b_{k_1k_2} z_1^{k_1} z_2^{k_2} \label{5bis} \end{equation} and the virial expansion for the equation of state \begin{equation} \beta P = \sum_{k_1,k_2} a_{k_1k_2} \rho_1^{k_1} \rho_2^{k_2} \label{6} \end{equation} (see Appendix). Clearly, for the virial coefficients $a_{n0}$ and $a_{0n}$ one has \begin{equation} a_{n0} = a_n(\alpha_1), \qquad a_{0n} = a_n(\alpha_2), \label{7} \end{equation} where $a_n(\alpha)$ is the $n$-th virial coefficient of anyons with statistical parameter $\alpha$. Among these, only $a_2$ is known exactly \cite{Arovas85}, \begin{equation} a_2(\alpha) = \left[ \frac{1}{4} - \frac{1}{2}(1-\alpha)^2 \right] \lambda^2, \label{7a} \end{equation} with $\lambda=\sqrt{2\pi\beta/m}$. The mixed second-order coefficient, $a_{11}$, involves the $N_1=N_2=1$ case. Eq.~(\ref{4}) then turns into \begin{equation} \Psi = (z - \zeta)^\gamma \Phi(z,\zeta), \label{8} \end{equation} where $\Phi$ is {\it any\/} single-valued function. 
Since it may always be represented as a sum of symmetric and antisymmetric functions, the partition function $Z_{11}$ can be written as \begin{equation} Z_{11}=Z_2(\gamma)+Z_{2}(1+\gamma), \label{9a} \end{equation} $Z_{2}(\gamma)$ being the 2-anyon partition function with statistical parameter $\gamma$. Using the formulas from the Appendix, we get \begin{equation} a_{11}=a_{2}(\gamma)+a_{2}(1+\gamma) = \gamma(1-\gamma) \lambda^2. \label{10} \end{equation} For $\gamma = 0,1$, this coefficient vanishes, reflecting the fact that particles of different species are mutually independent. \section{Quantum and statistical mechanics in the lowest Landau level} Calculating higher-order virial coefficients would demand solving a multiparticle problem, which is in general impossible. In the one-species case, however, when the system is projected onto the lowest Landau level (LLL) of a strong external magnetic field, the whole virial expansion can be obtained \cite{Dasnieres94a}. Note that for the gap above the ground state to be bounded from below and thus the projection onto the LLL to be justified, the particle fluxes must be antiparallel to the external field: for example, if $B<0$, one should have $\phi>0$. If the particle charge $e$ is taken to be positive, then the necessary condition is $\alpha \in [0,1]$ \footnote{Note that this convention for signs is opposite to \cite{Dasnieres94a,Dasnieres94b} and coincides with \cite{Wu94}.}. The equation of state then is \begin{equation} \beta P = \rho_L \ln \left( 1 + \frac{\nu}{1 - \alpha\nu} \right) \label{11} \end{equation} ($\rho_L = m\omega_c / \pi$ is the Landau degeneracy per unit area, $\rho = N/V$ is the density, $\nu = \rho/\rho_L$ is the filling factor, $\omega_c = |eB|/2m$ is half the cyclotron frequency; ``strong magnetic field'' means $\beta\omega_c \gg 1$). It has been shown \cite{Wu94} that the same equation (\ref{11}) is obtained from Haldane's definition of fractional statistics \cite{Haldane91}, usually referred to as exclusion statistics, with $\alpha$ being the quantity by which the number of available single-particle states diminishes as one particle is added to the system. Why the equations coincide can be understood within a mean-field approximation (where it becomes possible to speak about single-particle states): Since the anyon fluxes are antiparallel to the external field, the flux $\phi = 2\pi\alpha/e$ of a particle added partially screens the external flux $\Phi = BV$ and thus diminishes the number of states available for other particles; when $N$ particles are added, the Landau level degeneracy per unit area becomes $\rho_L - \rho\alpha$. A multispecies generalization of (\ref{11}) within the framework of exclusion statistics is as follows: If $\alpha_{ab}$ is the decrease in the number of single-particle states available for the $a$-th species caused by the addition of one particle of the $b$-th species, then \begin{equation} \beta P = \rho_{L} \sum_a \ln \left( 1 + \frac{\nu_a}{1 - \sum\limits_b \alpha_{ab}\nu_b} \right), \label{12} \end{equation} with $\nu_a = \rho_a/\rho_{L}$ \cite{Dasnieres94b,Wu94}. The physical picture would be the following: a particle of the $a$-th species carries statistical charges $e_a^\beta$, which couple to the statistical gauge fields, and an electric charge $e$ which couples to the external magnetic field and has to be common to all species.
In fact, we are forced to restrict ourselves to the Landau density $\rho_L$ being the same for all species (whereas in \cite{Dasnieres94b,Wu94}, where mean field arguments are used, different Landau densities $\rho_{La}$ are possible), because it is only in this case that exact eigenstates can be constructed. In the two-species case, by using microscopic arguments for the LLL spectrum, we will now show that the above equation indeed holds with $\alpha_{ab}$'s being the mutual statistical parameters defined previously: it is then almost certain that (\ref{12}) is correct for any number of species. If there is only one statistical charge, then, as already emphasized, $\gamma$ is not independent from the $\alpha_{aa}$'s. Now, both $\phi_1$ and $\phi_2$ have to be positive, and it follows then from (\ref{1}) that $e_1$ and $e_2$ must have the same sign; choosing it again to be positive, we have $\alpha_1, \alpha_2 \in [0,1]$, and, by virtue of (\ref{2}), $\gamma$ is positive as well, so the constraint (\ref{2bis}) would read \begin{equation} \gamma = \sqrt{\alpha_1\alpha_2}. \label{SFO} \end{equation} However, our considerations will hold in the case of several gauge fields as well, and the result will be true for arbitrary $\gamma \in [0,1]$ (with the condition, however, that the gap above the ground state be bounded from below by a quantity of the order the Landau gap $2\omega_c$, which is certainly true for $\gamma$ small enough). By analogy with \cite{Dasnieres94a}, we add a harmonic attraction $\sum_{j=1}^{N_1+N_2} \frac{1}{2} m \omega^2 r_j^2$ and observe, from (\ref{4}), that the spectrum of the $(N_1+N_2)$-particle system in the LLL becomes that of a system of $N_1$ and $N_2$ bosons (mutually {\it independent\/}) in a ``harmonic-Landau'' potential whose single-particle spectrum reads \begin{equation} \varepsilon_{\ell} = \omega_t + (\omega_t - \omega_c)\ell \label{13} \end{equation} ($\omega_t = \sqrt{\omega_c^2 + \omega^2}, \; \ell = 0,1,\ldots$), plus a constant shift \begin{equation} \Delta E_{N_1N_2} (\alpha_1, \alpha_2, \gamma) = \left[ \frac{N_1(N_1-1)}2\alpha_1 + \frac{N_2(N_2-1)}2\alpha_2 + N_1N_2\gamma \right] (\omega_t - \omega_c) \label{14} \end{equation} coming from the anyonic prefactor in (\ref{4}). One then has \begin{equation} Z_{N_1N_2} = e^{-\beta \Delta E_{N_1N_2} (\alpha_1,\alpha_2,\gamma)} Z^b_{N_1} Z^b_{N_2}, \label{15} \end{equation} where \begin{equation} Z^b_N = \frac{e^{-N\beta\omega_t}} {(1-e^{-\beta(\omega_t - \omega_c)})(1-e^{-2\beta(\omega_t - \omega_c)}) \cdots(1-e^{-N\beta(\omega_t - \omega_c)})} \label{16} \end{equation} is the $N$-boson partition function in the harmonic-Landau potential. For the cluster coefficients, one gets (see Appendix for the prescription for passing to the thermodynamic limit) \begin{eqnarray} b_{k_10} & \!\!\!=\!\!\! & \rho_LV \frac{1}{k_1} \prod_{l_1=1}^{k_1-1} \left( 1 - \frac{k_1\alpha_1}{l_1} \right) \cdot e^{-k_1\beta\omega_c}, \label{17} \\ b_{k_1k_2} & \!\!\!=\!\!\! & -\rho_LV\gamma \frac{(k_1+k_2)}{k_1k_2} \prod_{l_1=1}^{k_1-1}\left( 1 - \frac{k_1\alpha_1 + k_2\gamma}{l_1} \right) \prod_{l_2=1}^{k_2-1}\left( 1 - \frac{k_2\alpha_2 + k_1\gamma}{l_2} \right) \cdot e^{-(k_1+k_2)\beta\omega_c}, \nonumber \\ \label{18} \end{eqnarray} and $b_{0k_2} = b_{k_20} |_{\alpha_1 \leftrightarrow \alpha_2}$. Note that, as expected, (\ref{18}) does not reduce to (\ref{17}) when $k_2=0$ (in particular, $b_{k_1k_2}$ certainly vanishes for $\gamma=0$, while $b_{k_10}$ is $\gamma$ independent). 
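Two of the closed-form results above lend themselves to quick symbolic cross-checks (these are consistency checks only, not part of the derivation; Python with the sympy package is assumed). The first verifies Eq.~(\ref{10}); the second recovers the lowest mixed cluster coefficient of Eq.~(\ref{18}), $b_{11}=-2\gamma\rho_LV e^{-2\beta\omega_c}$, directly from Eqs.~(\ref{14})--(\ref{16}) and the thermodynamic-limit prescription of the Appendix.
\begin{verbatim}
import sympy as sp

g, lam, beta, wc = sp.symbols('gamma lambda beta omega_c', positive=True)

# (i) Eq. (10): a_2(gamma) + a_2(1+gamma) = gamma*(1-gamma)*lambda**2, using Eq. (7a)
a2 = lambda a: (sp.Rational(1, 4) - sp.Rational(1, 2)*(1 - a)**2)*lam**2
print(sp.simplify(a2(g) + a2(1 + g) - g*(1 - g)*lam**2))        # -> 0

# (ii) b_{11} of Eq. (18) from Eqs. (14)-(16).  Write everything in terms of the
# small quantity epsilon = beta*(omega_t - omega_c), which vanishes as omega -> 0.
eps = sp.Symbol('epsilon', positive=True)
Z1  = sp.exp(-beta*wc - eps)/(1 - sp.exp(-eps))                 # Eq. (16), N = 1
Z11 = sp.exp(-g*eps)*Z1**2                                      # Eqs. (14)-(15)
b11 = Z11 - Z1**2                                               # Appendix: b_11 = Z_11 - Z_10*Z_01
print(sp.limit(b11*eps, eps, 0))                                # -> -gamma*exp(-2*beta*omega_c)
# Since epsilon ~ beta*omega**2/(2*omega_c) and the prescription identifies
# pi/(beta*m*omega**2) with V, one has 1/epsilon -> 2*rho_L*V, so
# b_11 -> -2*gamma*rho_L*V*exp(-2*beta*omega_c), i.e. Eq. (18) with k1 = k2 = 1.
\end{verbatim}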
The virial coefficients calculated herefrom take the form ($\beta\omega_c \gg 1$) \begin{equation} a_{k_1k_2} = -\frac{1}{\rho_L^{k_1+k_2-1}} \frac{(k_1+k_2-1)!}{k_1!k_2!} \left\{ [(\alpha_1-1)^{k_1} - \alpha_1^{k_1}] \gamma^{k_2} + [(\alpha_2-1)^{k_2} - \alpha_2^{k_2}] \gamma^{k_1} \right\}, \label{19} \end{equation} and the equation of state then is \begin{equation} \beta P = \rho_L \left[ \ln \left( 1 + \frac{\nu_1}{1 - \alpha_1\nu_1 - \gamma\nu_2} \right) + \ln \left( 1 + \frac{\nu_2}{1 - \alpha_2\nu_2 - \gamma\nu_1} \right) \right], \label{20} \end{equation} which indeed coincides with (\ref{12}). A mean-field interpretation is possible in this case as well. Indeed, what figures in the Hamiltonian for $a$-th species is the sum $e\vec A + \sum_\beta e^\beta_a \vec A^\beta$, where $\vec A$ is the vector potential of the magnetic field and $\vec A^\beta$ is the $\beta$-th gauge field potential created by all other particles. If {\it all\/} the gauge fields are averaged, so that $B^\beta = \sum_b N_b \phi^\beta_b /V = \sum_b \rho_b \phi^\beta_b$, then the Landau level degeneracy per unit area becomes $\frac{1}{2\pi} (|eB| - \sum_\beta e_a^\beta B^\beta) = \rho_L - \sum_b \rho_b \alpha_{ab}$, which is precisely what leads to (\ref{12}). As well as for the one-species case, this equation is valid only when $\nu_1$ and $\nu_2$ are small enough---in fact, both denominators have to be positive; one of them vanishing means that the critical filling is achieved for the corresponding species. In the special case (\ref{SFO}), the condition is \begin{equation} \sqrt{\alpha_1}\nu_1 + \sqrt{\alpha_2}\nu_2 < \frac{1}{\sqrt{\max (\alpha_1,\alpha_2) }}. \end{equation} \section{Interpretation in terms of single-state distributions} We now wish to interpret the exactly solvable model of anyons in the LLL in terms of a gas of free particles with a particular statistical distribution. We will use arguments similar to those of \cite{IsakovIJMP94} for the Calogero model. Description of single-state partition functions for multispecies systems follows Ref. \cite{IsakovMPL94}. Let there be single-particle states labeled with $i$, common for all species $a$, but with energies $\varepsilon_a^{(i)}$ possibly depending on $a$. The particles in different single-particle states are assumed to be statistically independent, which implies that the grand partition function is a product of single-state grand partition functions \begin{equation} \Xi =\prod_i \Xi^{(i)} \label{eq:a1}\end{equation} with $\Xi^{(i)}$ depending only on the Gibbs factors $x_a=e^{\beta(\mu_a-\varepsilon_a^{(i)})}$: $\Xi^{(i)}=\Xi^{(i)}(\{x_a\})$. The statistical distributions (average number of particles of different species in state $i$) are \begin{equation} n_a^{(i)}=x_a\frac{\partial}{\partial x_a}\ln \Xi^{(i)}. \label{eq:a2}\end{equation} We assume that $\ln \Xi^{(i)}$ may be expanded in integer powers of the Gibbs factors (again, we specify the formulas containing the series expansions for two species): \begin{equation} \ln \Xi^{(i)} =\sum_{k_1,k_2=0}^\infty f_{k_1k_2}x_1^{k_1}x_2^{k_2}. \label{eq:a3}\end{equation} It follows then from (\ref{eq:a2}) \begin{equation} n_a^{(i)}=\sum_{k_1,k_2=1}^\infty k_a f_{k_1k_2}x_1^{k_1}x_2^{k_2}. 
\label{eq:a4}\end{equation} Combining (\ref{eq:a1}) with (\ref{eq:a3}), summing over single-particle states and comparing with (\ref{5bis}), we get the relation \begin{equation} b_{k_1k_2}=f_{k_1k_2} \sigma_{k_1k_2}, \label{eq:a6}\end{equation} where \begin{equation} \sigma_{k_1k_2}=\sum_i e^{-\beta(k_1\varepsilon_1^{(i)}+k_2\varepsilon_2^{(i)})}. \label{eq:a7}\end{equation} We use the many-particle partition functions for anyons in the LLL in a harmonic well (\ref{15}), with $\omega\to0$, to calculate the coefficients $f_{k_1k_2}$ in the expansion (\ref{eq:a3}). Expressing the cluster coefficients in terms of the many-particle partition functions (but {\it not yet\/} using the thermodynamic limit prescription given in the Appendix), taking into account the expression (\ref{13}) for the single-particle spectrum, in order to calculate $\sigma_{k_1k_2}$, and then using (\ref{eq:a7}) and (\ref{eq:a6}), we eventually obtain \begin{eqnarray} f_{k_10} &\!\!\!=\!\!\!& \frac1{k_1} \prod_{l_1=1}^{k_1-1}(1-\frac{k_1\alpha_1}{l_1}), \label{eq:a8} \\ f_{k_1k_2} &\!\!\!=\!\!\!& -\gamma\frac{k_1+k_2}{k_1k_2} \prod_{l_1=1}^{k_1-1}(1-\frac{k_1\alpha_1+k_2\gamma}{l_1}) \prod_{l_2=1}^{k_2-1}(1-\frac{k_2\alpha_2+k_1\gamma}{l_2}). \label{eq:a9} \end{eqnarray} We thus conclude that in the limit $\omega\to 0$, multispecies anyons in the LLL can be described by the partition functions (\ref{eq:a3}) and the statistical distributions (\ref{eq:a4}) with the coefficients of their expansions in powers of the Gibbs factors given by (\ref{eq:a8})--(\ref{eq:a9}). Now we consider directly the case $\omega=0$. All the single-particle states have then the same energy $\omega_c$, therefore all the relevant quantities do not depend on the index $i$ any longer, and summation over single-particle states reduces to multiplication by the number of states in the LLL, $\rho_L V$, e.~g.~$\ln\Xi = \rho_L V \ln\Xi^{(1)}$. It turns out then that $\Xi^{(1)}$ can be represented as \begin{equation} \Xi^{(1)}=\prod_a\Xi_a \label{eq:a10}\end{equation} with \begin{equation} \Xi_a=1+\frac{n_a}{1-\sum_b \alpha_{ab}n_b}, \label{eq:a11}\end{equation} so that for the coefficients of the expansions of the single-state grand partition functions \begin{equation} \Xi_a =\sum_{k_1,k_2} P_{a;k_1k_2}x_1^{k_1}x_2^{k_2} \label{eq:a12}\end{equation} we obtain \begin{eqnarray} P_{1;k_10} &\!\!\!=\!\!\!& \prod_{l_1=2}^{k_1} \left( 1 - \frac{k_1\alpha_1}{l_1} \right), \label{eq:Po} \\ P_{1;k_1k_2} &\!\!\!=\!\!\!& -\gamma\frac{k_1}{k_2} \prod_{l_1=2}^{k_1}\left(1-\frac{k_1\alpha_1+k_2\gamma}{l_1}\right) \prod_{l_2=1}^{k_2-1}\left(1-\frac{k_2\alpha_2+k_1\gamma}{l_2}\right) \label{eq:Pg}\end{eqnarray} and $P_{2;k_1k_2}=P_{1;k_2k_1}|_{\alpha_1\leftrightarrow\alpha_2}$. Eqs.~(\ref{eq:a8}) and (\ref{eq:Po}) have been previously derived for fractional statistics in one dimension, starting from the Calogero model \cite{IsakovIJMP94} (see also a recent derivation directly for exclusion statistics \cite{Poly}). Eqs.~(\ref{eq:a9}) and (\ref{eq:Pg}) are generalizations onto the multispecies case. The partition functions $\Xi_a$, in addition, satisfy the equations \begin{equation} \Xi_a=1+x_a\prod_b\Xi_b^{\delta_{ab}-\alpha_{ab}} \label{eq:a13}\end{equation} Substituting (\ref{eq:a11}) into (\ref{eq:a13}), we get the equation for the statistical distributions $n_a$ \begin{equation} n_a/x_a=\prod_b (1-\sum_c \alpha_{bc}n_c)^{\alpha_{ab}} (1-n_a-\sum_c \alpha_{bc}n_c)^{\delta_{ab}-\alpha_{ab}}. 
\label{eq:a14}\end{equation} These statistical distributions coincide with the ones for exclusion statistics \cite{Wu94,IsakovMPL94} after the identification \begin{equation} g_{ab}=\alpha_{ab} \label{eq:a18}\end{equation} (in our model, consequently, the exclusion statistics parameters are symmetric, $g_{ab}=g_{ba}$). It is now easy to calculate the cluster coefficients, using (\ref{eq:a6}). Using the thermodynamic limit prescription at each order $k_1+k_2$, (\ref{eq:a7}) becomes \begin{equation} \sigma_{k_1k_2}=\rho_L V e^{-\beta(k_1+k_2)\omega_c} \label{eq:a19} \end{equation} and, with the use of (\ref{eq:a8})--(\ref{eq:a9}), the formulas (\ref{17})--(\ref{18}) are recovered. Note finally that once (\ref{eq:a10}) and (\ref{eq:a11}) have been correctly ``guessed'', the equation of state is nothing but the thermodynamic identity $\beta P V = \ln \Xi$. \section{Concluding remarks} We have studied quantum and statistical mechanics for mutual fractional exchange statistics. We have presented a microscopic solvable model, anyons in the LLL, which, on one hand, involves mutual fractional exchange statistics and, on the other hand, reproduces the statistical mechanics for mutual fractional exclusion statistics. This establishes correspondence between the two approaches to mutual statistical interactions. We hope that due to the similarity between anyons in the LLL and FQHE quasi\-part\-ic\-les, the present work will help a better understanding of excitations in quantum Hall systems. In addition, intimate links between anyons in the LLL and the Calogero model suggest that the microscopic solutions for multispecies anyons found in this paper can be used to construct solutions for a multispecies Calogero model which would reveal mutual exclusion statistics \cite{furt}. \bigskip {\bf Acknowledgements:} We thank K\aa re Olaussen for valuable discussions. We are also grateful to NORDITA for their hospitality during the Anyon Workshop where this work was initiated. S.M. gratefully acknowledges warm hospitality of the theory division of the IPN at Orsay where a significant part of this work was done and thanks Guillermo Zemba for bringing ref.~\cite{Froelich91} to his attention. S.B.I. was supported in part by the Russian Foundation for Fundamental Research under grant No.~95-02-04337. \bigskip {\Large \bf Appendix} \bigskip \nopagebreak \setcounter{equation}{0} \renewcommand{\theequation}{A.\arabic{equation}} \newcommand{\tilde{b}}{\tilde{b}} \newcommand{\tilde{a}}{\tilde{a}} We summarize here, for reference purposes, the formulas concerning the cluster and virial expansions for a two-species gas, and give the relations between $Z_{N_1N_2}$, $b_{n_1n_2}$, and $a_{n_1n_2}$, up to the fifth order. The grand canonical partition function is \begin{equation} \Xi = \sum_{N_1,N_2} z_1^{N_1} z_2^{N_2} Z_{N_1N_2} \label{A1} \end{equation} and the cluster coefficients $b_{k_1k_2}$ are determined in terms of $Z_{N_1N_2}$'s by writing the logarithm of (\ref{A1}) in the form \begin{equation} \ln \Xi = \sum_{k_1,k_2} b_{k_1k_2} z_1^{k_1} z_2^{k_2}. 
\label{A2} \end{equation} The pressure and the densities are \begin{equation} \beta P = \frac{1}{V} \ln \Xi, \qquad \rho_a = \frac{z_a}{V} \frac{\partial \ln \Xi}{\partial z_a}, \label{A3} \end{equation} respectively ($V$ is the area); writing \begin{equation} \beta P = \sum_{k_1,k_2} a_{k_1k_2} \rho_1^{k_1} \rho_2^{k_2}, \label{A4} \end{equation} using (\ref{A3}) with $\ln \Xi$ substituted from (\ref{A2}), and matching coefficients at equal powers of $z$'s on both sides, one gets the equations which allow to express $a_{k_1k_2}$ in terms of $b_{k_1k_2}$'s. The explicit formulas are given below. For the cluster coefficients, one has \begin{eqnarray*} b_{10} & = & Z_{10}; \\ \\ b_{20} & = & -\frac{1}{2}Z_{10}^2 + Z_{20}, \\ b_{11} & = & -Z_{01}Z_{10} + Z_{11}; \\ \\ b_{30} & = & \frac{1}{3}Z_{10}^3 - Z_{10}Z_{20} + Z_{30}, \\ b_{21} & = & Z_{01}Z_{10}^2 - Z_{10}Z_{11} - Z_{01}Z_{20} + Z_{21}; \\ \\ b_{40} & = & -\frac{1}{4}Z_{10}^4 + Z_{10}^2Z_{20} - \frac{1}{2}Z_{20}^2 - Z_{10}Z_{30} + Z_{40}, \\ b_{31} & = & -Z_{01}Z_{10}^3 + Z_{10}^2Z_{11} + 2Z_{01}Z_{10}Z_{20} -Z_{11}Z_{20} - Z_{10}Z_{21} - Z_{01}Z_{30} + Z_{31}, \\ b_{22} & = & -\frac{3}{2}Z_{01}^2Z_{10}^2 + Z_{02}Z_{10}^2 + 2Z_{01}Z_{10}Z_{11} - \frac{1}{2}Z_{11}^2 - Z_{10}Z_{12} + Z_{01}^2Z_{20} \\ && {} - Z_{02}Z_{20} - Z_{01}Z_{21} + Z_{22}; \\ \\ b_{50} & = & \frac{1}{5}Z_{10}^5 - Z_{10}^3Z_{20} + Z_{10}Z_{20}^2 + Z_{10}^2Z_{30} - Z_{20}Z_{30} - Z_{10}Z_{40} + Z_{50}, \\ b_{41} & = & Z_{01}Z_{10}^4 - Z_{10}^3Z_{11} - 3Z_{01}Z_{10}^2Z_{20} + 2Z_{10}Z_{11}Z_{20} + Z_{01}Z_{20}^2 + Z_{10}^2Z_{21} \\ && {} - Z_{20}Z_{21} + 2Z_{01}Z_{10}Z_{30} - Z_{11}Z_{30} - Z_{10}Z_{31} - Z_{01}Z_{40} + Z_{41}, \\ b_{32} & = & 2Z_{01}^2Z_{10}^3 - Z_{02}Z_{10}^3 - 3Z_{01}Z_{10}^2Z_{11} + Z_{10}Z_{11}^2 + Z_{10}^2Z_{12} - 3Z_{01}^2Z_{10}Z_{20} \\ && {} + 2Z_{02}Z_{10}Z_{20} + 2Z_{01}Z_{11}Z_{20} - Z_{12}Z_{20} + 2Z_{01}Z_{10}Z_{21} - Z_{11}Z_{21} \\ && {} - Z_{10}Z_{22} + Z_{01}^2Z_{30} - Z_{02}Z_{30} - Z_{01}Z_{31} + Z_{32}. \end{eqnarray*} If a harmonic oscillator regularization is used, the thermodynamic limit is meant as $\omega\to0$, with a particular prescription at each order of the cluster expansion \cite{McCabe}. A treatment analogous to the one given in \cite{Olaussen92} shows that to get the correct thermodynamic limit for $b_{k_1k_2}$, the necessary prescription is to identify $\frac{2\pi}{(k_1+k_2)\beta m \omega^2} \to V$, in the limit $\omega \to 0$. 
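The relations just listed between the cluster coefficients and the canonical partition functions can be checked mechanically by expanding $\ln\Xi$; a short sketch doing so up to total order three (assuming Python with the sympy package) is:
\begin{verbatim}
import sympy as sp

z1, z2, t = sp.symbols('z1 z2 t')
# canonical partition functions Z_{N1 N2} as free symbols (Z_{00} = 1)
Z = {(i, j): sp.Symbol(f'Z{i}{j}') for i in range(4) for j in range(4) if 0 < i + j <= 3}

Xi = 1 + sum(Zij*z1**i*z2**j for (i, j), Zij in Z.items())

# expand ln(Xi) to total order 3 in the fugacities (t is a bookkeeping parameter)
lnXi = sp.log(Xi.subs({z1: t*z1, z2: t*z2}))
poly = sp.series(lnXi, t, 0, 4).removeO().subs(t, 1).expand()

def b(k1, k2):
    return poly.coeff(z1, k1).coeff(z2, k2)

assert sp.simplify(b(2, 0) - (-sp.Rational(1, 2)*Z[1, 0]**2 + Z[2, 0])) == 0
assert sp.simplify(b(1, 1) - (-Z[0, 1]*Z[1, 0] + Z[1, 1])) == 0
assert sp.simplify(b(3, 0) - (sp.Rational(1, 3)*Z[1, 0]**3 - Z[1, 0]*Z[2, 0] + Z[3, 0])) == 0
assert sp.simplify(b(2, 1) - (Z[0, 1]*Z[1, 0]**2 - Z[1, 0]*Z[1, 1]
                              - Z[0, 1]*Z[2, 0] + Z[2, 1])) == 0
print("cluster coefficients b_{k1 k2} verified up to total order 3")
\end{verbatim}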
For the virial coefficients, defining \begin{equation} \tilde{b}_{k_1k_2} = b_{k_1k_2} / b_{10}^{k_1} b_{01}^{k_2}, \end{equation} \begin{equation} \tilde{a}_{k_1k_2} = a_{k_1k_2} / V^{k_1+k_2-1}, \end{equation} one has \begin{eqnarray*} \tilde{a}_{10} & = & 1;\\ \\ \tilde{a}_{20} & = & -\tilde{b}_{20},\\ \tilde{a}_{11} & = & -\tilde{b}_{11};\\ \\ \tilde{a}_{30} & = & 4\tilde{b}_{20}^2 - 2\tilde{b}_{30},\\ \tilde{a}_{21} & = & \tilde{b}_{11}^2 + 4\tilde{b}_{11}\tilde{b}_{20} - 2\tilde{b}_{21};\\ \\ \tilde{a}_{40} & = & -20\tilde{b}_{20}^3 + 18\tilde{b}_{20}\tilde{b}_{30} - 3\tilde{b}_{40}, \\ \tilde{a}_{31} & = & -\tilde{b}_{11}^3 - 6\tilde{b}_{11}^2\tilde{b}_{20} - 24\tilde{b}_{11}\tilde{b}_{20}^2 + 3\tilde{b}_{11}\tilde{b}_{21} + 12\tilde{b}_{20}\tilde{b}_{21} + 9\tilde{b}_{11}\tilde{b}_{30} - 3\tilde{b}_{31}, \\ \tilde{a}_{22} & = & -9\tilde{b}_{02}\tilde{b}_{11}^2 - 3\tilde{b}_{11}^3 + 6\tilde{b}_{11}\tilde{b}_{12} - 12\tilde{b}_{02}\tilde{b}_{11}\tilde{b}_{20} - 9\tilde{b}_{11}^2\tilde{b}_{20} + 6\tilde{b}_{12}\tilde{b}_{20} + 6\tilde{b}_{02}\tilde{b}_{21} \\ && {} + 6\tilde{b}_{11}\tilde{b}_{21} - 3\tilde{b}_{22}; \\ \\ \tilde{a}_{50} & = & 112\tilde{b}_{20}^4 + 32\tilde{b}_{40}\tilde{b}_{20} + 18\tilde{b}_{30}^2 - 144\tilde{b}_{30}\tilde{b}_{20}^2 - 4\tilde{b}_{50}, \\ \tilde{a}_{41} & = & \tilde{b}_{11}^4 + 8\tilde{b}_{11}^3\tilde{b}_{20} + 40\tilde{b}_{11}^2\tilde{b}_{20}^2 + 160\tilde{b}_{11}\tilde{b}_{20}^3 - 4\tilde{b}_{11}^2\tilde{b}_{21} - 24\tilde{b}_{11}\tilde{b}_{20}\tilde{b}_{21} - 80\tilde{b}_{20}^2\tilde{b}_{21}\\ && {} + 2\tilde{b}_{21}^2 - 12\tilde{b}_{11}^2\tilde{b}_{30} - 120\tilde{b}_{11}\tilde{b}_{20} \tilde{b}_{30} + 24\tilde{b}_{21}\tilde{b}_{30} + 4\tilde{b}_{11}\tilde{b}_{31} + 24\tilde{b}_{20}\tilde{b}_{31} \\ && {} + 16\tilde{b}_{11}\tilde{b}_{40} - 4\tilde{b}_{41}, \\ \tilde{a}_{32} & = & 16\tilde{b}_{02}\tilde{b}_{11}^3 + 6\tilde{b}_{11}^4 - 12\tilde{b}_{11}^2\tilde{b}_{12} + 48\tilde{b}_{02}\tilde{b}_{11}^2\tilde{b}_{20} + 32\tilde{b}_{11}^3\tilde{b}_{20} - 32\tilde{b}_{11}\tilde{b}_{12}\tilde{b}_{20} \\ && {} + 64\tilde{b}_{02}\tilde{b}_{11}\tilde{b}_{20}^2 + 80\tilde{b}_{11}^2\tilde{b}_{20}^2 - 32\tilde{b}_{12}\tilde{b}_{20}^2 - 24\tilde{b}_{02}\tilde{b}_{11}\tilde{b}_{21} - 20\tilde{b}_{11}^2\tilde{b}_{21} + 8\tilde{b}_{12}\tilde{b}_{21} \\ && {} - 32\tilde{b}_{02}\tilde{b}_{20}\tilde{b}_{21} - 64\tilde{b}_{11}\tilde{b}_{20}\tilde{b}_{21} + 8\tilde{b}_{21}^2 + 8\tilde{b}_{11}\tilde{b}_{22} + 16\tilde{b}_{20}\tilde{b}_{22} - 24\tilde{b}_{02}\tilde{b}_{11}\tilde{b}_{30} \\ && {} - 24\tilde{b}_{11}^2\tilde{b}_{30} + 12\tilde{b}_{12}\tilde{b}_{30} + 8\tilde{b}_{02}\tilde{b}_{31} + 12\tilde{b}_{11}\tilde{b}_{31} - 4\tilde{b}_{32}; \end{eqnarray*} with the obvious symmetry with respect to permutation of the subscripts.
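As a final consistency check, one may verify that expanding the equation of state (\ref{20}) in powers of the filling factors reproduces the virial coefficients (\ref{19}). The sketch below (again assuming Python with sympy; it is only a cross-check of the formulas above) does so up to total order four:
\begin{verbatim}
import sympy as sp

a1, a2, g, nu1, nu2, rhoL, t = sp.symbols(
    'alpha1 alpha2 gamma nu1 nu2 rho_L t', positive=True)

# equation of state (20), with nu_a = rho_a/rho_L and t a bookkeeping parameter
P = rhoL*(sp.log(1 + t*nu1/(1 - a1*t*nu1 - g*t*nu2))
          + sp.log(1 + t*nu2/(1 - a2*t*nu2 - g*t*nu1)))

order = 4
ser = sp.series(P, t, 0, order + 1).removeO().expand()

def a_virial(k1, k2):
    """Virial coefficient of Eq. (19)."""
    pref = sp.factorial(k1 + k2 - 1)/(sp.factorial(k1)*sp.factorial(k2))
    return -pref/rhoL**(k1 + k2 - 1)*(((a1 - 1)**k1 - a1**k1)*g**k2
                                      + ((a2 - 1)**k2 - a2**k2)*g**k1)

ok = True
for k1 in range(order + 1):
    for k2 in range(order + 1 - k1):
        if k1 + k2 == 0:
            continue
        lhs = ser.coeff(t, k1 + k2).coeff(nu1, k1).coeff(nu2, k2)
        rhs = a_virial(k1, k2)*rhoL**(k1 + k2)     # since rho_a = rho_L*nu_a
        ok = ok and sp.expand(lhs - rhs) == 0
print("Eq. (20) reproduces the virial coefficients of Eq. (19):", ok)
\end{verbatim}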
\section{Introduction} In the past few years the interest in the field of ion crystallization has become increasingly strong from both theoretical and experimental points of view. This research effort has led to the successful detection of crystallization in an ion trap \cite{[1]} and in a mini-quadrupole storage ring \cite{[2]}. In these systems, ions are at rest in the laboratory frame and the transition to the ordered state is achieved by laser cooling \cite{[3]}. At the same time, it has been suggested that an ion beam in a storage ring could be cooled to the extent that a crystalline phase is reached even in such a system, in which particles travel at high velocity \cite{[4]}. In this case, compaction of the phase space of the ion beam is obtained by electron \cite{[5]} and laser cooling techniques. Recently, an ion storage ring dedicated to crystallization studies has been proposed \cite{[6]}. Among the items one should consider in designing such a ring are the cooling of the ion beam and the diagnostics of the ordered state. It is known that an ion beam can be efficiently laser-cooled only longitudinally, but there is no experimentally tested technique that allows cooling of the transverse degrees of freedom of the ion beam as efficiently as the longitudinal one. Although some proposals for an efficient transverse cooling are under consideration, the lack of such a technique is a severe obstacle to reaching any ordered state. The diagnostics of the ordered state is another crucial point, because a clear and unambiguous detection of the crystalline state is necessary to validate experimental findings. For the case of ions in traps, one can use CCD-based detection since the ions are practically at rest \cite{[2]}. Unfortunately, to our knowledge, no satisfactory method to detect ordering of an ion beam by resolving individual ions has been proposed so far. It has been suggested that Schottky-noise-based pick-ups might detect the passage of individual particles of the beam but, in a storage ring, the required bandwidth of such an instrument would be greater than $100$ GHz, far higher than any available device can achieve. Among optical methods, some possibilities are described in Ref.\cite{[7]}; these instruments should provide a response correlated with ion order, but they would not provide direct and unambiguous evidence of the position of a single ion. In this paper we propose a fluorescence-based method to detect ordering in a one-dimensional ion beam. \section{Basic principles of the method} A pulsed laser - resonant with the travelling ions - is split into two parts, which simultaneously cross the ion beam at right angles at two nearby positions along the storage ring (see Fig.1). This laser-to-ion crossing area is followed by four photomultipliers which detect the photons emitted by the ions that have previously been excited by the laser beams. The signals recorded by the photomultipliers are analyzed when one laser beam is moved with respect to the other one. In the absence of ordering, no correlation in fluorescence signals should be recorded while changing the relative distance of the two laser beams. On the contrary, if a string were obtained as a result of cooling, a strong correlation between the signals should be observed. Suppose that one of the four photomultipliers detects the fluorescence of an ion excited by one of the laser beams.
If one of the other three photomultipliers detects a simultaneous fluorescence signal, it means that the other laser has interacted with another ion in the string, in turn indicating that the distance between the two ion-to-laser crossing points is an integer multiple of the string interparticle spacing. Then, if the second laser beam is moved slightly, the correlation signal should vanish. A sort of periodic dependence on the distance between the laser beams should appear, whose contrast will mainly depend on the detection efficiency. The diagnostics is conceived with the aim of detecting ordering within a string of ions in a storage ring. When an ion beam in a storage ring undergoes sufficiently strong cooling, a phase transition to an ordered state is expected to occur \cite{[Schiffer]}. The simplest ordered structure is a one-dimensional string. For this system the degree of ordering increases as temperature decreases, without any sharp transition, unlike the case of three-dimensional systems \cite{[Land]} \cite{[Emer]}. Typical values of the interparticle spacing for the string configuration lie in the range $s=10-100~\mu$m; in the following we shall assume $s=50~\mu$m. At non-zero temperature, ions are expected to oscillate both in the transverse and longitudinal directions (incoherent motion). Moreover, for a string, longitudinal oscillations of the equilibrium positions can also occur (coherent motion) through long-range waves. For the diagnostic system considered in this paper, the distance between the two laser beams can be chosen to match the interparticle spacing of the string. In this case, coherent motion has little effect, as the nearest-neighbor spacing is relatively uniform. The effect of long wavelengths on the interparticle oscillations is negligible for a relative distance of a few lattice steps. An analytical evaluation of this effect can be found in Ref. \cite{[Avilov]}. Short wavelengths mostly affect the fluctuations in the interparticle spacing, $\delta s$. Based on Ref. \cite{[Avilov]}, a rough estimate is $\langle\delta s^{2}\rangle/s^{2} \simeq 1/\Gamma$, where $\Gamma$ is the plasma parameter of the ion beam. When the ion beam is cooled, $\delta s$ becomes much smaller than $s$; as an example, at $\Gamma=100$, $\delta s = 5~\mu$m. If the distance between the laser beams were set to a larger multiple of the interparticle spacing, the correlation signal would become progressively weaker. The configuration of the diagnostic device can also compensate for a possible influence of transverse oscillations. Since the laser beams cross the string orthogonally and their spots are small enough that they do not overlap, the ions undergoing transverse oscillations in the direction of the laser can always be resonant, irrespective of their coordinates along that axis. Ion oscillations in the direction orthogonal to both the laser and the string could in principle move the target ion outside the laser beam spot. To avoid this effect - and the consequent loss of efficiency - the laser beam can be focused by a cylindrical lens. This optical element can be arranged to produce a focal segment orthogonal to the directions of both the laser and the string. In this way the locations where the laser beams impinge on the string are two thin regions; these can be as wide as several hundreds of microns in one dimension without overlap between them. Transverse oscillations of a very cold ion beam are expected to be smaller than this value.
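The qualitative signature described in this section, namely a coincidence rate that peaks when the laser separation equals a multiple of the string spacing and stays flat for a disordered beam, can be illustrated with a deliberately simplified toy simulation. The numbers below are illustrative only (they are not the experimental parameters discussed in the next section), and the whole detection chain is reduced to a single effective probability:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# toy parameters, chosen only to make the effect visible; they are NOT the
# experimental values discussed in the next section
s        = 50.0       # lattice spacing [um]
sigma_s  = 1.0        # rms longitudinal jitter of the ions [um]
spot     = 5.0        # laser spot size, treated as a hard window [um]
p_detect = 0.3        # prob. that an excited ion yields a recorded photon
n_pulses = 50_000

def coincidence_fraction(delta, ordered=True):
    """Fraction of laser shots with a two-photon coincidence, lasers separated by delta."""
    hits = 0
    for _ in range(n_pulses):
        if ordered:
            phase = rng.uniform(0.0, s)                    # random phase of the string
            ions = phase + s*np.arange(-2, 3) + rng.normal(0.0, sigma_s, 5)
        else:
            ions = rng.uniform(-2.5*s, 2.5*s, 5)           # same mean density, no order
        hit1 = np.any(np.abs(ions) < spot/2)               # laser 1 at x = 0
        hit2 = np.any(np.abs(ions - delta) < spot/2)       # laser 2 at x = delta
        if hit1 and hit2 and rng.random() < p_detect**2:
            hits += 1
    return hits/n_pulses

for delta in (40.0, 50.0, 60.0):
    print(f"delta = {delta:4.0f} um   ordered: {coincidence_fraction(delta):.4f}"
          f"   disordered: {coincidence_fraction(delta, ordered=False):.4f}")
\end{verbatim}
With these toy values the ordered-string coincidence fraction peaks sharply at a separation of one lattice step and drops to nearly zero off the peak, while the disordered beam gives a small, separation-independent rate; this is the same hierarchy as the rates estimated in the next section.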
\section{An application of the method to a real storage ring} We shall discuss a possible implementation of such a diagnostic device with reference to the case of a $ ^{24}\!Mg^{+}$ string. In order to provide an example of a possible application, we shall refer to the ASTRID Storage Ring at Aarhus: its main parameters can be found in Ref. \cite{[Nielsen]}. With the above assumptions, the velocity of the ion beam is about $ v=8.97\cdot10^{5}$ m/s. The duration of each laser pulse must be much shorter than the time, $T=s/v=56$ ps, taken by an ion to travel one interparticle spacing. On the other hand, the laser pulse must not be too short, because this would lead to a very broad frequency spectrum, in turn making the filtering between laser photons and fluorescence photons more difficult, as discussed below. A laser whose pulses are 2 ps long meets these requirements. In the following we shall refer to a commercially available, frequency-tripled $ Ti:Al_2O_3 $ pulsed laser (50 MHz repetition rate), with a few nJ per pulse. Considering the finite duration of a Gaussian-shaped laser pulse, the probability to excite a $ ^{24}\!Mg^{+}$ ion is about 0.5 for the transition under consideration ($ 3 s ^2\!S_{1/2} \to 3 p ^2\!P_{1/2}$). The laser frequency must be resonant with the ion's transition energy in its rest frame ($\lambda$=279.6 nm). The laser beams need to be focused to a spot much smaller than the interparticle spacing ($50~\mu$m). This can be done since it is experimentally possible to focus a laser beam to within $5~\mu$m (FWHM). The laser focusing systems need to be placed in the vacuum chamber and should be movable in order to avoid interference with the ion beam during normal operation of the storage ring. The decay region is viewed by four photomultipliers, each located behind its own window. The window length ($2L$) matches the decay length for the ion de-excitation. The four windows cover about $50\%$ of the azimuthal acceptance of the fluorescence photons. The distance $d$ between the upstream edge of the photomultiplier acceptance and the laser-ion interaction region should be as short as possible (see Fig. 2). We assume $d=10$ mm, $2L=20$ mm, a beam-pipe radius $R=35$ mm, and a lifetime of the upper level $\tau=3.5$ ns. The photomultipliers are single-photon detectors, which can be assumed to have a quantum efficiency of $23\%$ and a rate of background counts of about $100$ c/s. The signals from the four photomultipliers are discriminated ($20$ ns signal width) and the logical signals are ANDed two by two to form 6 combinations. These are then ORed; a positive logical level for the OR is an event in which the two ions, excited by the laser beams, have both emitted a photon. Each photomultiplier must be equipped with a filter to intercept stray laser photons. Let $\theta$ be the angle between the direction of the ion beam and the direction of a photon emitted by an ion. Due to the finite duration of a laser pulse (2 ps), its frequency spread is of the order of 500 GHz. Considering that the laser beams impinge on the string at right angles, the laser photons overlap in frequency with the natural fluorescence emitted between the angles $\theta=85^{\circ}$ and $\theta=95^{\circ}$. The filters are designed to discard all photons arriving at angles $\theta > 70^{\circ}$. The geometrical acceptance of the system allows detection of photons only for $\theta > 50^{\circ}$. This corresponds to a lower limit for the filter bandwidth of 0.25 nm.
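Two of the numbers quoted above are easy to reproduce with a back-of-envelope calculation (Python is used here only for convenience; the inputs are the values assumed in the text). The transit time over one lattice step is $T=s/v$, and the $0.25$ nm figure is consistent with reading it as the first-order Doppler spread of the accepted fluorescence between $\theta=50^{\circ}$ and $\theta=70^{\circ}$:
\begin{verbatim}
import numpy as np

s   = 50e-6          # interparticle spacing [m]
v   = 8.97e5         # ion velocity [m/s]
lam = 279.6e-9       # transition wavelength [m]
c   = 2.998e8        # speed of light [m/s]

T = s/v
print(f"time to travel one lattice step: {T*1e12:.0f} ps")            # ~56 ps

# first-order Doppler spread of the fluorescence accepted between theta = 50 and 70 deg
beta_ion = v/c
dlam = lam*beta_ion*(np.cos(np.radians(50.0)) - np.cos(np.radians(70.0)))
print(f"Doppler spread over the accepted angles: {dlam*1e9:.2f} nm")  # ~0.25 nm
\end{verbatim}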
The distance between the two laser beam spots can be varied by moving the optical set-up; this movement can be performed with an accuracy better than the range of the longitudinal oscillations of the interparticle spacing. As described above, by moving the second laser beam with respect to the first one and counting the coincidence events of the photomultipliers, one can assess whether the distance between the two laser-to-ion crossing points is an integer multiple of the string's interparticle spacing. Once this condition has been met, slightly moving the second laser would leave only accidental coincidences; this is true since, if the first laser excites an ion, the second laser beam can no longer be synchronous with any other ion in the string. The appearance of a correlation in the signals with the same periodicity as the string would be a firm indication of ordering in the beam. The total probability that one of the four detectors sees the fluorescence of an ion excited by one of the lasers is about $9.3 \cdot 10^{-4}$. Considering a purely random jitter of the laser, a laser beam spot of $5~\mu$m and a typical interparticle spacing of $50~\mu$m, the probability that a laser pulse crosses an ion in the beam can be roughly estimated as $1/10$. Therefore, the probability that a simultaneous de-excitation will be recorded by the device is $1 \cdot 10^{-6}$. Considering a laser repetition rate of $5\cdot10^7$ Hz, one expects a counting rate of approximately 50 Hz. When the distance between the laser-to-ion beam crossing points is not an integer multiple of the string's step, the rate of accidental coincidences is estimated to be $0.25$ Hz. By contrast, a non-ordered ion beam would exhibit a coincidence rate of $10$ Hz, independent of the relative positions of the laser beams. In order to check these results, we have developed a Monte Carlo simulation. Ion oscillations around their equilibrium positions, the excitation process of an ion by the laser light, the spontaneous emission, and the geometrical acceptance and efficiency of the detector (filters + photomultipliers) are taken into account. Fig.3 shows the counting rate of coincidences versus the position of the second laser. A strong correlation signal is achieved for the case of an ordered string. \section{Conclusions} A novel method to detect ordered structures of an ion beam in an unambiguous way has been proposed, and its feasibility demonstrated. The method provides a firm observation of ordering within the ion beam using available technology. It could be profitably applied to operating storage rings. \acknowledgements The authors are grateful to A. Burov, R. W. Hasse, S. Gustafsson, L. Piemontese, and L. Tecchio for critical reading of the manuscript.
\section{Introduction} Galaxy clusters occupy a unique position in the dynamical evolution of the universe. Unlike lower-mass systems such as galaxies, which for the most part retain little dynamical information about their formation, clusters of galaxies are within one or two crossing times of their formation. This suggests that they may retain valuable clues to their initial conditions (as well as hints about the collapse and formation of structure in the early universe). The effect of the dense cluster environment on galaxy evolution, as well as other trends in the physical properties of clusters (see, for instance, Dressler 1984; Giovanelli \& Haynes 1985; Edge \& Stewart 1991), suggests that they are gravitationally bound and that their galaxies no longer participate in the Hubble flow. This distinguishes clusters from superclusters and other large-scale structures. The study of galaxy clusters thus provides a unique opportunity to explore gravitational interactions and dynamical evolution in the universe. Clusters of galaxies contain two luminous components, hot gas and galaxies. If a cluster is sufficiently old and unperturbed, these tracer particles will have equilibrated within the cluster gravitational potential. This enables use of the equations of hydrostatic and dynamical equilibrium to explore the physical properties of these systems. For a hot gas in equilibrium with a spherical gravitational potential, the equation of hydrostatic equilibrium may be written \begin{equation} M_X(r) = { {-kTr} \over {G \bar{m}} } ({ {d~{\rm ln}~n_{gas} } \over {d~{\rm ln}~r} } + { {d~{\rm ln}~T} \over {d~{\rm ln}~r}} ) \end{equation} (e.g.\ Fabricant, Lecar \& Gorenstein 1981), where $M_X$ is the X-ray determined virial mass, $T$ is the temperature of the X-ray emitting gas, $n_{gas}$ is the gas density and $\bar{m}$ is the average mass per gas particle. Similarly, the Jeans equation relates the kinetic energy of the galaxies to the virial mass of the cluster: \begin{eqnarray} - {{G~n_{gal} M_{opt}(r)} \over r^2} & = & { d(n_{gal} \sigma ^2 _r) \over dr} + {2 n_{gal} \sigma ^2 _r \over r} (1 - \sigma _r ^2 / \sigma ^2 _t) \\ \nonumber & = & { d(n_{gal} \sigma ^2 _r) \over dr} + {2 n_{gal} \sigma ^2 _r \over r} A \end{eqnarray} (Merritt 1987), where $M_{opt}$ is the optically-determined virial mass, $r$ is the clustercentric radius, $n_{gal}$ is the galaxy density, $\sigma _r$ and $\sigma _t$ are the radial and tangential velocity dispersions respectively, and $A$ is the anisotropy parameter describing the distribution of galaxy orbits. For an isothermal cluster in dynamical equilibrium, with no source of energy other than gravity, the masses as determined by the galaxies and by the gas are expected to be equal. As shown by Bahcall \& Lubin (1994) among others, the ratio of the kinetic energies of the galaxies and gas is then equal to the ratio of the logarithmic slope of the gas density profile to that of the galaxies: \begin{eqnarray} {{\sigma ^2 _r} \over {kT \over \bar{m} } } & = & { {d~{\rm ln}~n_{gas} } / {d~{\rm ln}~r} \over {d~{\rm ln}~n_{gal} } / {d~{\rm ln}~r + 2A} } \\ \nonumber & = & { {d~{\rm ln}~n_{gas} } / {d~{\rm ln}~r} \over {d~{\rm ln}~n_{gal} } / {d~{\rm ln}~r}} \end{eqnarray} (where $A = 0$ for an isotropic distribution of galaxy orbits). 
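To make the use of the hydrostatic equation above concrete, the following minimal Python sketch (ours, not from any of the papers cited; the numerical inputs are illustrative assumptions) evaluates the X-ray virial mass for an isothermal cluster with a typical gas-density slope.
\begin{verbatim}
# Minimal sketch (illustrative numbers only) of the hydrostatic mass
# estimate: M_X(r) = -kT r/(G mbar) (dln n_gas/dln r + dln T/dln r)
G     = 6.674e-11      # m^3 kg^-1 s^-2
m_p   = 1.673e-27      # proton mass [kg]
mu    = 0.6            # assumed mean molecular weight of the ICM
keV   = 1.602e-16      # J
Mpc   = 3.086e22       # m
M_sun = 1.989e30       # kg

def m_x(T_keV, r_Mpc, dlnn_dlnr, dlnT_dlnr=0.0):
    """X-ray mass within r; isothermal gas corresponds to dlnT_dlnr = 0."""
    return -(T_keV * keV) * (r_Mpc * Mpc) / (G * mu * m_p) * (dlnn_dlnr + dlnT_dlnr)

# e.g. a 6 keV cluster with n_gas ~ r^-2 inside 1 Mpc:
print(m_x(6.0, 1.0, -2.0) / M_sun)   # ~4.5e14 solar masses
\end{verbatim}
The result, a few times $10^{14}$ M$_{\odot}$, lies within the mass range later quoted for the limited cluster sample.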
Therefore, using the assumptions that the gas and galaxies are both in equilibrium with the cluster gravitational potential, and that gravity is the only source of energy, allows us to predict that the velocity dispersion (as measured from galaxy velocities) and the temperature of the intracluster medium (as determined from X-ray spectra) should be correlated, with $\sigma _r \propto T ^{0.5}$. The ratio of the kinetic energies is called $\beta _{spec}$. The ratio of the logarithmic slopes of the density profiles is $\beta _{fit}$. Despite the many difficulties in accurately measuring cluster temperatures and velocity dispersions, studies of X-ray and optical cluster samples reveal a well-behaved correlation between these quantities (Mushotzky 1984; Edge \& Stewart 1991, hereafter ES91; Lubin \& Bahcall 1993, hereafter LB93). The relationship between $\sigma _r$ and $T$ expected from virial considerations is consistent with the data, although there is a large scatter about the $\sigma _r \propto T^{0.5}$ line. This scatter has been attributed to incomplete gas thermalization, cooling flows, velocity anisotropies in the galaxy orbits, foreground/background contamination, and substructure in the clusters (cf.\ ES91; LB93 and references therein). It is important to remember, however, that the predicted $\sigma _r - T$ correlation derives from the virial theorem, and that in order to test it one must consider the dynamical state of the clusters in the dataset (cf.\ Gerbal et al.\ 1994). The high frequency of substructure in clusters of all morphologies, as determined by both X-ray and optical studies (see, e.g.\, Davis \& Mushotzky 1993; Mohr, Fabricant \& Geller 1993; Beers et al.\ 1991; Bird 1993, 1994), is generally believed to indicate that clusters are dynamically-young. If clusters are only within a few crossing times of formation, then in many cases virial equilibrium has not been established. This certainly influences the broad distribution of clusters about the canonical $\sigma _r \propto T^{0.5}$ relation. In this paper we will quantify the effects of morphology and substructure on the velocity dispersion-temperature correlation for clusters. In Section 2 we present the limited cluster sample, in which the morphological type of the cluster sample has been restricted and the effects of substructure have been minimized. We have supplemented the available published X-ray temperature data with new, more accurate temperatures from ASCA and {\it Ginga}. In Section 3 we present the regressions between the velocity dispersion and temperature. Section 4 summarizes proposed mechanisms for modifying the slope of the $\sigma _r - T$ correlation. In Section 5 we present a summary. \newpage \section{The Limited Cluster Sample} The morphology of a cluster may be described by its gas and/or galaxy distribution. As our observations of clusters have improved, it has become clear that morphology is related to the dynamical age of a cluster. Irregular clusters are dynamically young, and tend to be spiral-rich and gas-poor. They tend to have non-Gaussian velocity distributions and kinematically-distinct subconcentrations of galaxies. Regular clusters are dominated by ellipticals, have Gaussian velocity distributions and tend to be luminous X-ray emitters (cf.\ Sarazin 1988 and references therein; Bird 1993,1994). Bird (1994) presents a detailed analysis of the dynamics of nearby clusters ($z < 0.1$) with central galaxies. 
These clusters tend to have smooth morphologies and X-ray cooling flows, and in the past it has been assumed that they represent the most relaxed, dynamically-evolved clusters in the universe. However, Bird (1994) shows that these clusters also possess significant substructure. An objective partitioning algorithm called KMM (McLachlan \& Basford 1988; Ashman, Bird \& Zepf 1994) is used to remove galaxies belonging to subsystems in the clusters, and the dynamical properties of the ``cleaned'' (i.e., substructure corrected) cluster datasets are presented. It is the 25 clusters in this ``cD database'' which form the optical sample of the present analysis. Of the 25 clusters used in Bird (1994), 21 have accurate X-ray temperature measurements. These clusters, which will be referred to as the limited cluster sample, are listed in Table 1. Table 1 includes the following information: column (1), the cluster name; (2), the 1-D velocity dispersion of the cluster (estimated using the robust biweight estimator $S_{BI}$, Beers, Flynn \& Gebhardt 1991) without substructure correction; (3), the velocity dispersion corrected for substructure; (4), the X-ray temperature; and (5), the source code for the X-ray measurement. The optical redshifts are taken from the literature, with sources given in Bird (1994). In addition we have added the Centaurus Cluster (A3526), which was excluded from the cD study because of its proximity. The X-ray temperatures are taken from single-temperature models to ASCA or {\it Ginga} spectra where available, and then from EXOSAT and the {\it Einstein} MPC. For the clusters A1736 and A3558, the GINGA observations are best-fit by a two-temperature model (Day et al.\ 1991), in contradiction to both the {\it Einstein} and ROSAT spectra. Because the data are inconclusive, we have included both temperatures in Table 1 for these two clusters, and we will consider them both in the statistical analysis. Note that the velocity dispersion presented here is measured only along our line of sight to the cluster. We assume for the moment that any velocity anisotropy in these clusters is small and therefore $\sigma _{LOS}$ is comparable to $\sigma _r$ (we will explore this assumption in more detail below). In Table 2 we present the individual values of $\beta _{spec}$ for the limited cluster sample, both with and without substructure correction. With no substructure correction, the mean value of $\beta$ is 1.20$^{+0.30}_{-0.18}$, with an rms scatter of 0.66 (GINGA: 0.99$^{+0.24}_{-0.17}$, rms 0.43). The high mean value and large scatter are due to the inclusion of A2052 in the dataset. The uncorrected velocity dispersion of this cluster is extremely high, 1404 km s$^{-1}$, with corresponding $\beta _{spec} = 3.51$. If this datapoint is excluded from the list, the mean drops to 1.09$^{+0.15}_{-0.15}$ with rms scatter 0.43 (GINGA: 0.97$^{+0.24}_{-0.17}$, rms 0.42). Including the substructure correction to the velocity dispersion (and retaining A2052, which is no longer anomalous), $\langle \beta _{spec} \rangle = 0.90^{+0.10}_{-0.15}$ with an rms scatter of 0.37, where the confidence intervals are the 90\% bootstrapped estimates (GINGA: 0.87$^{+0.12}_{-0.17}$, rms 0.38). To demonstrate the effect of morphology on $\beta _{spec}$, these numbers should be compared to the values from the LB93 study. Lubin \& Bahcall use 41 clusters of widely varying morphology. Their mean value of $\beta _{spec}$ is 1.14$^{+0.08}_{-0.08}$ with an rms scatter of 0.57. 
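The conversion between a measured velocity dispersion, a temperature and the $\beta_{spec}$ values of Table 2 is a one-line calculation; the sketch below (ours, with an assumed mean molecular weight $\mu \simeq 0.6$) reproduces the A85 entry to within a few per cent.
\begin{verbatim}
# beta_spec = mu m_p sigma_r^2 / (k T), cf. Section 1.
m_p = 1.673e-27      # kg
keV = 1.602e-16      # J
mu  = 0.6            # assumed ICM mean molecular weight

def beta_spec(sigma_km_s, T_keV):
    return mu * m_p * (sigma_km_s * 1.0e3) ** 2 / (T_keV * keV)

# A85: sigma = 810 km/s, T = 6.6 keV (Table 1)
print(round(beta_spec(810.0, 6.6), 2))   # ~0.62, vs. 0.60 in Table 2
\end{verbatim}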
The ES91 sample, being based on an X-ray flux-limited catalog of clusters, is biased toward X-ray luminous systems, which are less likely to be affected by major substructure. This sample yields $\langle \beta _{spec} \rangle = 0.91^{+0.11}_{-0.13}$ with an rms scatter of 0.38. It is clear that when examining correlations between temperature and velocity dispersion, uncertainty may be introduced by neglecting the effects of morphology and substructure in the dataset. \section{The Velocity Dispersion -- Temperature Correlation} In Figure 1, we present the velocity dispersion and temperature data for the 22 clusters in the limited sample. The velocity dispersions are corrected for substructure. The dashed lines are the correlations predicted by the virial theorem, for $\beta_{spec} = 1$ and for $\beta_{spec} = 0.67$. Recall that for these data $\langle \beta _{spec} \rangle = 0.90$. The solid line is the best fit to the data using the lower temperatures for A1736 and A3558: \begin{equation} \sigma _r = 10^{2.50 \pm 0.09} T^{0.61 \pm 0.13} \end{equation} Similarly, we find that \begin{equation} T = 10^{-3.15 \pm 0.60} \sigma _r ^{1.31 \pm 0.21} \end{equation} For the higher GINGA temperatures for these two clusters, we find that \begin{equation} \sigma _r = 10^{2.39 \pm 0.09} T^{0.76 \pm 0.11} \end{equation} and \begin{equation} T = 10^{-3.21 \pm 0.61} \sigma _r ^{1.34 \pm 0.21} \end{equation} In both equations the uncertainties quoted are the bootstrapped 1-$\sigma$ values. This fit includes the errors in the measurements, using a linear fitting technique developed by Akritas, Bershady \& Bird (1995, in preparation). This algorithm, based on the ordinary least-squares bisector first defined by Isobe et al.\ (1990), explicitly includes both intrinsic scatter in the relation and uncorrelated measurement errors. The bisector method assumes that neither variable is dependent on the other, which is probably appropriate for the current physical situation. The velocity dispersion and X-ray temperature are both determined by the depth of the gravitational potential (and perhaps other physical effects), and are therefore {\it independent} of each other. This subtlety in the application of linear regression algorithms has been previously noted by astrophysicists for other applications, such as the Tully-Fisher effect (see Isobe et al.\ 1990 for a detailed discussion), but not yet applied to the problem of X-ray and optical correlations. The use of an inappropriate or biased regression technique can have a significant effect on the coefficients of the linear fit, as we demonstrate in Table 3. To simplify this discussion, in Table 3 we present the following: \begin{itemize} \item the published linear regressions given in ES91 and LB93 \item the linear regressions determined from an ordinary least squares fit, without measurement errors \item the linear regressions from the bisector lines, with and without measurement errors \end{itemize} for the ES91 and LB93 datasets, as well as similar regressions for our limited cluster dataset. The uncertainties in the linear coefficients are the 1-$\sigma$ values, determined using a bootstrap method which is the preferred estimator for small datasets. First of all, we see that the published linear regressions are recovered for both the ES91 and the LB93 datasets using the ordinary least squares (OLS) regressions, without errors. 
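The difference between the OLS and bisector slopes discussed here is easy to reproduce. The sketch below (ours) implements the error-free OLS bisector of Isobe et al.\ (1990); the published fits in Table 3 additionally include measurement errors via the Akritas, Bershady \& Bird estimator, which this toy version does not attempt.
\begin{verbatim}
import numpy as np

def ols_bisector(x, y):
    """Slope and intercept of the (error-free) OLS bisector of y on x."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x.mean(), y.mean()
    sxx = np.sum((x - xm) ** 2)
    syy = np.sum((y - ym) ** 2)
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / sxx        # OLS(Y|X) slope
    b2 = syy / sxy        # OLS(X|Y) slope, expressed in the (x, y) plane
    b3 = (b1 * b2 - 1.0 + np.sqrt((1.0 + b1**2) * (1.0 + b2**2))) / (b1 + b2)
    return b3, ym - b3 * xm

# toy data: true slope 0.5 in the (log T, log sigma) plane plus scatter
rng = np.random.default_rng(0)
logT = rng.uniform(0.3, 1.0, 22)
logs = 2.6 + 0.5 * logT + rng.normal(0.0, 0.05, 22)
print(ols_bisector(logT, logs))
\end{verbatim}
The bisector always lies between the OLS($Y|X$) and OLS($X|Y$) lines, so for imperfectly correlated data it is steeper than OLS($Y|X$), the pattern seen in Table 3.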
For these fits, the velocity dispersion is assumed to be {\it dependent} on the temperature, which as discussed above does not seem like a physically well-motivated assumption. In addition, simulations suggest that the OLS regressions are severely biased for such small sample sizes. The bisector slopes for all three datasets are much steeper than the OLS slopes, varying from 0.61 for our limited cluster dataset and the {\it Einstein} data to 0.87 for the LB93 dataset. The regression for our limited cluster dataset is marginally consistent with the slope of 0.5 predicted by the virial theorem. For the ES91 and LB93 datasets, the fitted slopes are at least 3$\sigma$ away from the canonical value of 0.5. Given the large dispersions between the individual linear regressions, as well as the coefficients of the regressions for the three datasets, how significant is this difference? To estimate the significance of the observed deviation, we utilize a Monte Carlo computer routine. This code simulates 22 cluster temperatures between 2.0 and 10.0 keV and generates velocity dispersions using the virial relation and a $\beta$ value of 1. It then includes a velocity term for the intrinsic scatter in the relationship (which is generated by choosing a velocity perturbation from a uniform distribution of width 150 km s$^{-1}$) as well as measurement errors in both velocity and temperature (these are modelled as Gaussians; the dispersion in velocities is 150 km s$^{-1}$ and in temperature is 0.5 keV). For 1000 simulations, only 40 of the random datasets had measured bisector slopes greater than 0.61, the lowest value obtained for the limited cluster dataset. The average value for the 1000 runs was 0.55 $\pm 0.03$. The highest value of the slope obtained for any of the simulated datasets is 0.64, which is comparable to the value obtained for the ES91 dataset but still strongly inconsistent with the LB93 regression and the limited cluster dataset (with the high temperatures for A1736 and A3558). These simulations suggest that while the deviation between the observed correlation between velocity dispersion and temperature and that predicted by the virial theorem is small, it is significant. Clearly larger individual cluster datasets, higher-quality X-ray spectra, and a larger dataset of clusters will be vital to improving our understanding of this fundamental correlation. The deviation of the $\sigma _r - T$ relationship from that predicted by the equilibrium model described in Section 1 implies that $\beta$ is a function of the depth of the gravitational potential, as estimated by either the temperature or the velocity dispersion. In this case, defining an average (unweighted) value of $\beta _{spec}$ for a cluster sample which covers a wide range of physical parameters yields a quantity which is poorly defined. The dependence of $\beta$ on temperature and/or velocity dispersion is no doubt partially responsible for the high scatter about the $\sigma _r - T$ relation, which remains even after elimination of the effects of substructure from the optical dataset. We have seen in Section 2 that consideration of morphology and substructure significantly reduces the scatter in the values of $\beta _{spec}$ for the individual clusters. Examination of Table 3 reveals that the same effect does not hold true for the determination of the $\sigma _r - T$ correlation. 
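As an aside, the Monte Carlo significance estimate described above is simple enough to re-implement; the sketch below is ours, the random-number conventions of the original code are unknown, and its output should therefore be read as indicative only.
\begin{verbatim}
import numpy as np

m_p, keV, mu = 1.673e-27, 1.602e-16, 0.6   # mu is an assumption
rng = np.random.default_rng(1)

def bisector_slope(x, y):
    xm, ym = x.mean(), y.mean()
    sxy = np.sum((x - xm) * (y - ym))
    b1 = sxy / np.sum((x - xm) ** 2)
    b2 = np.sum((y - ym) ** 2) / sxy
    return (b1 * b2 - 1 + np.sqrt((1 + b1**2) * (1 + b2**2))) / (b1 + b2)

def one_realization(n=22):
    T = rng.uniform(2.0, 10.0, n)                     # keV
    sig = np.sqrt(T * keV / (mu * m_p)) / 1.0e3       # km/s, i.e. beta = 1
    sig = sig + rng.uniform(-75.0, 75.0, n)           # intrinsic scatter, 150 km/s full width
    sig_obs = np.clip(sig + rng.normal(0.0, 150.0, n), 1.0, None)   # velocity errors
    T_obs = np.clip(T + rng.normal(0.0, 0.5, n), 0.1, None)         # temperature errors
    return bisector_slope(np.log10(T_obs), np.log10(sig_obs))

slopes = np.array([one_realization() for _ in range(1000)])
print(slopes.mean(), slopes.std(), (slopes > 0.61).mean())
# compare with the mean of 0.55 +/- 0.03 and the ~4% exceedance quoted above
\end{verbatim}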
Inclusion of the substructure correction actually raises the scatter in the parameters of the fit slightly, although it remains comparable to the values obtained by both ES91 and LB93. It is clear that although substructure influences the scatter in the relationship, other physical effects must also be significant (see also Gerbal et al.\ 1994). Previous authors have claimed that their data was consistent with the canonical virial theorem dependence of velocity dispersion on temperature, $\sigma _r \propto T^{0.5}$ (ES91, LB93). We have seen that this ``consistency'' is due to the inaccurate use of the least squares linear regression, and that none of the three datasets are consistent with the canonical prediction. Correction for substructure has very little effect on the slope of the $\sigma _r - T$ correlation. The scatter to high velocity dispersions implied by the ``steeper than virial'' relation has been noted by all previous studies and generally attributed to velocity substructure. However, we demonstrate that correction for substructure has little effect on the correlation. \section{Mechanisms for Explaining the Discrepancy} The virial theorem prediction of the relationship between galaxy velocity dispersion and gas temperature is based on three assumptions: that the galaxy orbits are isotropic, that the gas and the galaxies occupy the same potential well, and that gravity is the only source of energy for either the gas or the galaxies. Any process which may contribute to the deviation of the slope from the virial value must operate to a different degree in hot, high-$\sigma _v$ clusters than in cooler, low-$\sigma _v$ systems, to skew the relationship in the observed fashion (although the effect need not be large). Mechanisms which have been proposed include anisotropy in the distribution of galaxy orbits, incomplete thermalization of the gas, pressure support of the ICM from magnetic fields, biasing and protogalactic winds. \subsection{Anisotropy and Magnetic Pressure Support} The anisotropy parameter $A$ is not well-determined for more than one or two clusters. Recall that $A = 1 - \sigma _r ^2 / \sigma ^2 _t$. For radial orbits, with $\sigma _r > \sigma _t$, $A < 0$ and $\beta$ is increased (relative to the value determined by profile fitting; see eqn. 3). For circularized orbits, $\sigma _r < \sigma _t$, $A > 0$ and $\beta$ is decreased. To reproduce the observed trend in the $\sigma _r - T$ relation, we estimate that hot clusters require $A \leq -0.1$ (slightly radial orbits), and cool clusters require $A \sim 0.6$ (moderately circular orbits). Such an extreme variation in galaxy anisotropy is not predicted by any current theory of cluster formation. Kauffmann \& White (1993) do find some evidence for a dependence of formation history on mass, but this variation is negligible over the range of masses included in the limited cluster sample ($5 \times 10^{13} - 1 \times 10^{15}$ M$_{\odot}$; S.\ White, 1994, private communication). In most observations, the temperature profile of the ICM is flat out to the radius where the background dominates the cluster spectrum (Mushotzky 1994). Nonetheless, simulations by Evrard (1990) suggest that the cluster gas will not be completely thermalized after only one crossing time. This effect is evident in more detailed calculations by Metzler \& Evrard (1995, in preparation), who find that the degree of thermalization is {\it not} systematically dependent on temperature. 
Incomplete thermalization clearly affects the distribution of temperatures measured for the limited cluster sample, but does not affect the slope of the $\sigma _r-T$ relationship in the required direction. In an attempt to resolve the discrepancy between cluster masses determined by gravitational lensing and those determined from X-rays (Miralda-Escud\'{e} \& Babul 1994), Loeb \& Mao (1994) propose magnetic pressure support of the intracluster medium, at least in the cores of cooling flows. To be dynamically significant, tangled magnetic fields must contribute a similar amount of potential energy to the ICM as the gravitational potential. The required field strength (on the order of 50 $\mu$G) is large, but Loeb \& Mao argue that such fields may be generated within cooling flows, where gas and magnetic field lines are confined and compressed. Comparison of the limited cluster sample with Table 1 of Edge, Stewart \& Fabian 1992 reveals that the majority of the limited cluster sample possesses cooling flows (as determined from deprojection analysis) and therefore may benefit from magnetic pressure support. Remember, however, that the Loeb \& Mao (1994) analysis is restricted to the inner 120$h^{-1}$ kpc of A2218 (inside the radius of the cooling flow), whereas our temperatures and velocity dispersions are determined for the entire cluster (again assuming that the cluster ICM temperature profiles are flat outside the cooling radius, as ASCA data suggest). It is unclear whether the variation in $\beta$ deriving from magnetic pressure support would be detected in our analysis of the X-ray and optical data. \subsection{Protogalactic Winds} Protogalactic winds provide an additional source of heating of the ICM. Yahil \& Ostriker (1973), Larson \& Dinerstein (1975) and White (1991) discuss ram pressure stripping and protogalactic winds as mechanisms for the metal enrichment of the ICM. In the winds scenario, the specific energy of the ICM is affected by the initial collapse of the cluster, the relative motions of galaxies in the cluster, and winds from supernova explosions during the formation of elliptical galaxies at early times. Of these three physical processes, White (1991) demonstrates that only protogalactic winds can boost the energy of the gas above the value determined through the virial theorem. In addition he shows that the energy contribution due to winds will be larger in cool clusters than in hot ones. Using White's Equation 2, we generated a distribution of temperatures for velocity dispersions ranging from 350-1200 km sec$^{-1}$ (taking his values for the fraction of intracluster gas coming from winds ($w=0.5$) and the typical wind velocity in terms of the galactic velocity dispersion ($f_w =3$)). Fitting these simulated data, we find that the protogalactic winds model predicts a correlation between the velocity dispersion and the temperature of a cluster: \begin{equation} \sigma _r \propto T^{0.68} \end{equation} This depends slightly on the choice of $w$ and $f_w$; for $f_w=2$ we find that $\sigma _r \propto T^{0.62}$. The protogalactic wind model reproduces nearly exactly the dependence of velocity dispersion on ICM temperature that we find in the limited cluster sample (and which is consistent with the slopes found by earlier studies). 
\subsection{Winds and Biasing} Another effect which may produce the steepness of the $\sigma _r-T$ relationship is a velocity bias between cluster galaxies and the background dark matter, which is driven by dynamical friction (Carlberg 1994; Carlberg \& Dubinski 1991). Simple virial analysis predicts $\sigma _r \propto T^{0.5}$ if the collisionless component has experienced no cooling or heating. If $\sigma_{DM}$ and $\sigma_{gal}$ refer to the background dark matter and galaxy velocity dispersions respectively, and assuming that virial equilibrium holds for the dark matter, then we can write \begin{equation} \sigma_{gal} = \sigma_{DM} {\sigma_{gal} \over \sigma_{DM}} \propto {\sigma_{gal} \over \sigma_{DM}} T^{0.5} \end{equation} If the ratio of velocity dispersions is temperature--dependent, then this will modify the observed $\sigma$--$T$ relation. For the purposes of illustration, we take the distribution of background dark matter velocities to be Maxwellian, \begin{equation} f\left(v\right)\,=\, {n\left(r\right) \over \left(2\pi\sigma^2\right)^{3/2}} {\rm exp}\left(-v^2/2\sigma^2\right), \end{equation} in which case the Chandrasekhar dynamical friction formula for a galaxy of mass $M$ in a dark matter potential well with density $\rho$ can be written as \begin{equation} {d{\bf v}_M \over dt}\,=\, -{4\pi \ln\Lambda G^2 \rho M \over v_M^3} \left[ {\rm erf}\left(X\right) - {2X \over \sqrt{\pi} }{\rm exp}\left(-X^2\right)\right] {\bf v}_M, \end{equation} with $X\,=\,v_M/(\sqrt{2}\sigma)$ (Binney and Tremaine 1987). This can be rearranged for a characteristic timescale, and writing the bias for the individual galaxy of mass $M$, $b\,=\,v_M/\sigma$, we have \begin{equation} t_{fric}\,=\, { b^3\sigma^3 \over 4\pi \ln\Lambda G^2 \rho M } \left[{\rm erf}\left(b/\sqrt{2}\right) - b \sqrt{2 \over \pi} {\rm exp}\left(-b^2/2\right)\right]^{-1}. \end{equation} Again, for the purposes of illustration, we assume a power law density profile for the background dark matter, $\rho\,=\,Ar^{-\alpha}$; then $\sigma^2 \simeq {GM\left(<R\right) \over R}$ implies $G\rho \simeq { \left(3-\alpha\right) \over 4\pi R^2} \sigma^2$. Substituting in, we find that the dynamical friction timescale for galaxies at a radius $R$ roughly scales as $t_{fric} \propto \sigma\left(R\right) R^2$. At a fixed radius $R$, more massive (thus typically higher temperature) clusters will have a higher velocity dispersion, and thus a longer characteristic timescale for dynamical friction to be significant. This translates into a temperature--dependent velocity bias. Simulations provide an ideal mechanism to test these ideas. Metzler \& Evrard (1995) have conducted an ensemble of N--body + hydrodynamic simulations of the formation and evolution of individual clusters, explicitly including galaxies and galactic winds. These simulated clusters are compared to an ensemble drawn from the same initial conditions --- but without galaxies and winds --- to isolate the effects of winds on clusters. The method is explained in Metzler \& Evrard (1994). Figure 2 shows velocity dispersion -- temperature data drawn from their models. A ``virial radius'' is identified for each simulated cluster as the radius with a mean interior overdensity of 170. The temperatures used are mass--averaged over all gas within the virial radius; the velocity dispersions are averages drawn from the full 3D velocity information for all dark matter or galaxies within $r_{vir}$. A solid line corresponding to $\beta_{spec}\,=\,1$ has also been placed on the plots. 
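As a brief numerical aside (ours, not part of the Metzler \& Evrard analysis), the friction timescale written above can be evaluated directly; the values of $\ln\Lambda$, the galaxy mass and the density normalisation below are assumptions chosen only to display the $t_{fric} \propto \sigma(R)\,R^2$ scaling derived in the text.
\begin{verbatim}
import math

G = 6.674e-11   # SI units throughout

def t_fric(sigma, b, rho, M, lnLambda=5.0):
    """Chandrasekhar friction timescale for a galaxy moving at v_M = b*sigma
    through a background of density rho and 1-D dispersion sigma."""
    X = b / math.sqrt(2.0)
    bracket = math.erf(X) - b * math.sqrt(2.0 / math.pi) * math.exp(-b * b / 2.0)
    return (b * sigma) ** 3 / (4.0 * math.pi * lnLambda * G**2 * rho * M * bracket)

def t_fric_at_R(sigma, R, M, b=1.0, alpha=2.0, lnLambda=5.0):
    # G*rho ~ (3 - alpha) sigma^2 / (4 pi R^2), as in the text
    rho = (3.0 - alpha) * sigma**2 / (4.0 * math.pi * G * R**2)
    return t_fric(sigma, b, rho, M, lnLambda)

Mpc, Gyr, Msun = 3.086e22, 3.156e16, 1.989e30
M_gal = 1.0e12 * Msun                      # assumed galaxy mass
for sigma in (600.0e3, 1200.0e3):          # m/s
    print(sigma / 1.0e3, t_fric_at_R(sigma, 0.5 * Mpc, M_gal) / Gyr)
# doubling sigma at fixed R doubles t_fric, i.e. t_fric ~ sigma(R) R^2
\end{verbatim}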
Comparing the dark matter velocity dispersion to the average interior temperature shows that in the simple two--fluid models, the simulated clusters are well--fit by the virial relation $\sigma \propto T^{0.5}$. This is sensible; there is no physics in these models beyond that used to derive the expected relation. Note that the values of $\beta_{spec}$ are consistently larger than one; this is a signature of the incomplete gas thermalization previously seen in other studies. It is not clear whether this is physical or numerical in origin; a series of runs with different resolution would clarify this. The models including galaxies and winds show different behavior. Here, the inclusion of energetic winds, plus dynamical friction of the galaxy component, provide the necessary physics to deviate from the virial $\sigma-T$ relation. For the dark matter, the temperature dependence is steeper than 0.5, a result of the inclusion of energetic winds. When galaxies are used to calculate the velocity dispersion, however, the relation steepens to $\sigma \propto T^{0.65}$, comparable to our observed result. The simulations thus provide evidence for a temperature--dependent velocity bias, $\sigma_{gal}/\sigma_{DM} \propto T^{0.1}$. Both this bias and the increase in gas temperatures due to energetic winds are responsible for the final correlation. It should be noted, of course, that the agreement between the simulated ensemble and our real clusters is to some degree fortuitous. The wind model used in the simulations of Metzler \& Evrard is intentionally of much greater wind luminosity than expected for real early--type galaxies, and the dynamical accuracy of modelling galaxies by heavy collisionless particles in the cluster potential is unclear (Frenk et al.\ 1995). Nonetheless, this corroborates the theoretical expectation that both energetic winds and velocity bias can result in the observed $\sigma-T$ relation. \section{Discussion} Although Lubin \& Bahcall (1993) found that the correlation between cluster velocity dispersion and temperature was somewhat steeper than that predicted by the virial theorem, the scatter in their dataset was too broad for them to rule out consistency with the hydrostatic isothermal model. We show that for our limited dataset, $\sigma _r \propto T^{0.61 \pm 0.13}$ (GINGA: $\sigma _r \propto T^{0.76}$), slightly but significantly (at 96\% confidence) steeper than that predicted by the virial theorem. For the ES91 and LB93 datasets, this discrepancy is significant at the $>99$\% level. It seems improbable that this is an artifact of the substructure correction algorithm. The mixture modelling technique used to remove substructure from the cluster datasets does not preferentially raise the velocity dispersion of high-$\sigma _r$ clusters and lower that in low-$\sigma _r$ systems, as examination of Table 1 reveals. The protogalactic winds model of White (1991), in addition to possible velocity bias due to dynamical friction acting on the cluster galaxies, quantitatively reproduces the observed variation in the $\sigma _r - T$ relationship. Preliminary measurements of cluster emission line diagnostics from ASCA show metal abundances typical of Type II supernovae, also supporting the protogalactic winds model (Mushotzky 1994). (Contrary to the model, however, there is as yet no conclusive evidence that low-temperature clusters have higher global abundances than hot systems.) 
It seems plausible that other physical mechanisms, such as velocity anisotropy, incomplete thermalization of the gas and/or the galaxies, and magnetic pressure support in cluster cores (which are all likely to be present to some unknown and variable degree in clusters), are responsible for the large scatter about the best-fit $\sigma _r - T$ line. This scatter is apparent even after morphology and substructure are considered in the determination of cluster parameters. Finally, we can relate our revised determination of $\beta _{spec}$ to the long-standing $\beta$-discrepancy. Early studies of cluster X-ray spectroscopy and imaging revealed an important inconsistency: $\langle \beta _{spec} \rangle = 1.2$ (Mushotzky 1984) but $\langle \beta _{fit} \rangle = 0.7$ (Jones \& Forman 1984). We have seen that the corrections for morphology and substructure bring $\langle \beta _{spec} \rangle$ down to 0.9, only marginally consistent with $\langle \beta _{fit} \rangle$ (but confirming the earlier results of ES91). For many individual clusters, $\beta _{spec}$ and $\beta _{fit}$ are completely different. Perseus (A426) is the most obvious example, with $\beta _{spec} = 1.53$ and $\beta _{fit} = 0.57$. So what is the current status of the $\beta$-discrepancy? First of all, we can compare current data on the distribution of gas and galaxies in clusters. Schombert (1988) summarizes the data on cluster density profiles determined from a variety of tracer particles: \begin{eqnarray} \rho _{gal} & \propto & r^{-2.6 \pm 0.3} \nonumber \\ \rho _{gas} & \propto & r^{-2.1 \pm 0.2} \end{eqnarray} In the hydrostatic isothermal model, \begin{eqnarray} \rho _{gas} & \propto & \rho _{gal} ^{\beta _{fit}} \nonumber \\ \beta _{fit} & = & \beta _{spec} \end{eqnarray} For our value of $\beta _{spec}$, $\rho _{gas} \propto r^{-2.3}$, which is at best only marginally consistent with the dependence $\rho _{gas} \propto r^{-2.1}$ determined by Jones \& Forman (1984). As Gerbal et al.\ (1994) point out in their theoretical analysis of the $\beta$-discrepancy, however, in order to test the consistency of the gas and galaxy scale lengths one must observe their radial dependences simultaneously but fit them {\it independently}, rather than fitting them together as Jones \& Forman did. In the next stage of this project (Bird \& Mushotzky 1995), we present non-parametric determinations of the galaxy and gas density profiles based on the MAPEL package (Merritt \& Tremblay 1994). MAPEL, a constrained maximum likelihood algorithm, allows us to determine the best-fit model to the surface density profiles without assuming a King-model (or other isothermal) fit to the data (Merritt \& Tremblay 1994). This is important because there is growing evidence from gravitational lensing experiments and computer simulations that the King model fit is not a good description of the gravitational potential of a galaxy cluster (Navarro, Frenk \& White 1994; see also Beers \& Tonry 1986). These profiles will allow us to test on a cluster-by-cluster basis whether the galaxy and gas profiles differ -- a comparison which in the past has only been possible in a statistical sense (cf.\ Bahcall \& Lubin 1994). Note also that in the time since White (1991) appeared, {\it ROSAT} PSPC and {\it ASCA} surface density profiles of cool clusters have become publicly available. These clusters will be included in the continuation of this project (velocity data are published in Beers et al.\ 1994). 
The protogalactic winds model predicts that cool clusters will have a larger scale length of gas density than hot clusters (again, because the relative energy contribution of winds to the ICM is greater in cool systems). Use of the expanded dataset for these clusters will allow us to directly test this prediction and to probe the effects of protogalactic winds on $\beta _{fit}$. \acknowledgements We would like to thank Lori Lubin, Neta Bahcall, Ray White III, Bill Forman, Christine Jones and the other attendees of the Aspen Summer Workshop for their contributions to this project. Claude Canizares, Keith Ashman and Alistair Edge also provided useful conversations during the course of this work. Andy Fabian's critical reading of the manuscript greatly improved our statistical analysis. We are grateful to Simon White for clarification of issues relating to cluster evolution and parametrization of cluster density profiles. This research was supported in part by NSF EPSCoR grant No.\ OSR-9255223 to the University of Kansas. \newpage \begin{table} \caption{The Cluster Sample} \begin{tabular}{lcccc} \tableline \tableline Cluster&$S_{BI}$(uncorr) km s$^{-1}$&$S_{BI}$(corr) km s$^{-1}$&$T_X$ (keV)& Source Code \\ \tableline A85&810$^{+76}_{-80}$&810$^{+76}_{-80}$&6.6$^{+1.8}_{-1.4}$&E91 \\ A119&862$^{+165}_{-140}$&1036$^{+214}_{-221}$&5.1$^{+1.0}_{-0.8}$&E91 \\ A193&726$^{+130}_{-108}$&515$^{+176}_{-153}$&4.2$^{+1.6}_{-0.9}$&E91 \\ A194&530$^{+149}_{-107}$&470$^{+98}_{-78}$&2.0$^{+1.0}_{-1.0}$&JF84 \\ A399&1183$^{+126}_{-108}$&1224$^{+131}_{-116}$&6.0$^{+2.1}_{-1.5}$&E91 \\ A401&1141$^{+132}_{-101}$&785$^{+111}_{-81}$&8.6$^{+1.4}_{-1.6}$&E91 \\ A426&1262$^{+171}_{-132}$&1262$^{+171}_{-132}$&6.3$^{+0.2}_{-0.2}$&D93 \\ A496&741$^{+96}_{-83}$&533$^{+86}_{-76}$&4.0$^{+0.06}_{-0.06}$&W94 \\ A754&719$^{+143}_{-110}$&1079$^{+234}_{-243}$&8.7$^{+1.8}_{-1.6}$&E91 \\ A1060&630$^{+66}_{-56}$&710$^{+78}_{-78}$&3.3$^{+0.2}_{-0.2}$&Ikebe 1994 ASCA \\ A1644&919$^{+156}_{-114}$&921$^{168}_{-141}$&4.1$^{+1.4}_{-0.6}$&E91 \\ A1736$\dagger$&955$^{+107}_{-114}$&528$^{+136}_{-87}$&4.6$^{+0.7}_{-0.6}$&D93 \\ & & & 6.2$^{+0.7}_{-0.7}$&DFER \\ A1795&834$^{+142}_{-119}$&912$^{+192}_{-129}$&5.6$^{+0.1}_{-0.1}$&W94 \\ A2052&1404$^{+401}_{-348}$&714$^{+143}_{-148}$&3.4$^{+0.6}_{-0.5}$&E91 \\ A2063&827$^{+148}_{-119}$&706$^{+117}_{-109}$&3.4$^{+0.35}_{-0.35}$&Yamashita 1992 \\ A2107&684$^{+126}_{-104}$&577$^{+177}_{-127}$&4.2$^{+4.4}_{-1.6}$&D93 \\ A2199&829$^{+124}_{-118}$&829$^{+124}_{-118}$&4.5$^{+0.07}_{-0.07}$&W94 \\ A2634&1077$^{+212}_{-152}$&824$^{+142}_{-133}$&3.4$^{+0.2}_{-0.2}$&D93 \\ A2670&1037$^{+109}_{-81}$&786$^{+203} _{-239}$ &3.9$^{+1.6}_{-0.9}$&D93 \\ A3526&1033$^{+118}_{-79}$&780$^{+100}_{-100}$&3.8$^{+0.3}_{-0.3}$&F94 \\ A3558$\dagger$&923$^{+120}_{-101}$&781$^{+111}_{-98}$&3.8$^{+2.0}_{-2.0}$&D93 \\ & & & 6.2$^{+0.3}_{-0.3}$&DFER \\ DC1842-63&522$^{+98}_{-82}$&565$^{+138}_{-117}$&1.4$^{+0.5}_{-0.4}$&D93 \\ \tableline \end{tabular} \end{table} \newpage \begin{table} \caption{$\beta _{spec}$ with and without Substructure Correction} \begin{tabular}{lccc} \tableline \tableline Cluster&$\beta _{spec}$(uncorr)&$\beta _{spec}$(corr)& \\ \tableline A85&0.60&0.60& \\ A119&0.88&1.27& \\ A193&0.76&0.38& \\ A194&0.85&0.67& \\ A399&1.41&1.51& \\ A401&0.92&0.43& \\ A426&1.53&1.53& \\ A496&0.83&0.43& \\ A754&0.36&0.81& \\ A1060&0.73&0.92& \\ A1644&1.25&1.25& \\ A1736&1.20&0.37& {\it Einstein}\\ &0.89&0.27& GINGA\\ A1795&0.75&0.90& \\ A2052&3.51&0.91& \\ A2063&1.22&0.89& \\ A2107&0.67&0.48& \\ A2199&0.92&0.92& \\ 
A2634&2.07&1.21& \\ A2670&1.67&0.96& \\ A3526&1.70&0.97& \\ A3558&1.36&0.97& {\it Einstein}\\ &0.83&0.60& GINGA\\ DC1842-63&1.18&1.38& \\ \tableline \end{tabular} \end{table} \newpage \begin{table} \caption{Fitting the $\sigma _r-T$ Correlation} \begin{tabular}{lc} \tableline \tableline Source & Best Fit \\ \tableline Edge \& Stewart 1991 & $\sigma _r = 10^{2.60 \pm 0.08} T^{0.46 \pm 0.12}$ \\ $N_{clus}=23$ (pub) & $T = 10^{-3.22 \pm 0.77} \sigma _r^{1.35 \pm 0.27}$ \\ Ordinary least squares (no errors) & $\sigma _r = 10^{2.61 \pm 0.06} T^{0.45 \pm 0.09}$ \\ Bisector (no errors) & $\sigma _r = 10^{2.46 \pm 0.06} T^{0.68 \pm 0.10}$ \\ Bisector (errors) & $\sigma _r = 10^{2.41 \pm 0.51} T^{0.75 \pm 0.08}$ \\ \tableline Lubin \& Bahcall 1993 & $\sigma _r = 10^{2.53 \pm 0.06} T^{0.62 \pm 0.09}$ (unweighted) \\ $N_{clus}=41$ (pub) & $\sigma _r = 10^{2.52 \pm 0.07} T^{0.60 \pm 0.11}$ (weighted$^{\dagger}$) \\ Ordinary least squares (no errors) & $\sigma _r = 10^{2.54 \pm 0.06} T^{0.61 \pm 0.09}$ \\ Bisector (no errors) & $\sigma _r = 10^{2.38 \pm 0.05} T^{0.84 \pm 0.08}$ \\ Bisector (errors) & $\sigma _r = 10^{2.36 \pm 0.05} T^{0.87 \pm 0.08}$ \\ \tableline This paper, no substructure correction & $\sigma _r = 10^{2.48 \pm 0.25} T^{0.73 \pm 0.38}$ \\ $N_{clus}=22$ (bisector with errors) & $T = 10^{-2.79 \pm 1.54} \sigma _r ^{1.16 \pm 0.52}$ \\ Ordinary least squares (no errors) & $\sigma _r = 10^{2.75 \pm 0.08} T^{0.31 \pm 0.13}$ \\ Bisector (no errors) & $\sigma _r = 10^{2.51 \pm 0.07} T^{0.69 \pm 0.12}$ \\ \tableline This paper, substructure correction$\dagger \dagger$ & $\sigma _r = 10^{2.50 \pm 0.09} T^{0.61 \pm 0.13}$ \\ $N_{clus}=22$ (bisector with errors) & $T = 10^{-3.15 \pm 0.60} \sigma _r ^{1.31 \pm 0.21}$ \\ Ordinary least squares (no errors) & $\sigma _r = 10^{2.62 \pm 0.07} T^{0.42 \pm 0.11}$ \\ Bisector (no errors) & $\sigma _r = 10^{2.45 \pm 0.09} T^{0.69 \pm 0.13}$ \\ \tableline This paper, substructure correction$\dagger \dagger$ &$\sigma _r = 10^{2.39 \pm 0.09} T^{0.76 \pm 0.11}$ \\ $N_{clus}=22$ (bisector with errors) & $T = 10^{-3.21 \pm 0.61} \sigma _r ^{1.32 \pm 0.21}$ \\ \tableline \end{tabular} \end{table} \newpage
\section{Introduction}\label{Section:Introduction} During the last five years, people have started to believe there is a serious possibility of building practically useful spoken language translators for limited domains. There are now a number of high-profile projects with large budgets, the most well-known being the German Verbmobil effort. At the moment, the best systems are at the level of advanced prototypes; making projections from current performance, it seems reasonable to hope that these could be developed into commercially interesting systems within a time-scale of five to ten more years. This paper will describe work carried out on one such advanced prototype, the Spoken Language Translator (SLT) system \cite{SLT-HLT,SLT-report}. SLT can translate spoken English utterances from the domain of air travel planning (ATIS; \cite{ATIS}) into spoken Swedish or French, using a vocabulary of about 1200 stem entries. The Swedish version has been operational since June 1993, and has been publicly demonstrated on numerous occasions. The French version became operational fairly recently; the language-processing component was demoed for the first time at the CeBIT trade fair at Hannover in March 1995. The Swedish and French versions have approximately equivalent levels of performance \cite{SLT-ICSLP}. SLT incorporates modules for speech recognition, speech synthesis and translation. In the paper, we will focus on the last of these. All examples given will refer to the French version. One of the most important differences between spoken language translation and text translation is that there are much stronger demands on quality of output. If output is not good enough, people frequently have difficulty understanding what has been said. There is no possibility of the pre- or post-editing which nearly all text translation systems rely on. Quite apart from the problem of generating natural-sounding speech, it is also necessary to ensure that the translated text sent to the speech synthesizer is itself of sufficient quality. A high-quality translation must fulfill several criteria: in particular, it should preserve the meaning of the original utterance, be grammatical, and contain correct word-choices. The basic design philosophy of the SLT project has been to build a framework which is theoretically clean, on the usual grounds that this makes for a system that is portable and easy to scale up. We have attempted to subsume as much of the system as possible under two standard paradigms: {\it unification-based language processing} and the {\it noisy-channel statistical model}. The unification-based part of the system encodes domain-independent grammatical rules; for each source-language word or grammatical construction covered by the system, it describes the possible target-language translations. When the rules permit more than one potentially valid translation, the statistical component is used to rank them in order of relative plausibility. The next two paragraphs give some examples to motivate this division of knowledge sources. The simplest examples of transfer rules are those used to translate individual words; here it is immediately clear that many words can be translated in several ways, and thus that more than one rule will often apply. 
For instance, the English preposition {\it ``on''} can be translated as any of the French prepositions {\it ``avec''} ({\it fly to Boston on Delta $\rightarrow$ aller \`{a} Boston avec Delta}); {\it ``sur''} ({\it information on ground transportation $\rightarrow$ des renseignements sur les transports publics}); {\it ``\`{a} bord de''} ({\it a meal on that flight $\rightarrow$ un repas \`{a} bord de ce vol}); {\it ``pour''} ({\it the aircraft which is used on this flight $\rightarrow$ l'avion qu'on utilise pour ce vol}); or omitted and replaced by an implicit temporal adverbial marker ({\it leave on Monday $\rightarrow$ partir le lundi}). In each of these cases, the correct choice of translation is determined by the context. To take a slightly more complex case, which involves some grammar, there are a number of transfer rules that list possible ways of realizing the English compound nominal construction in French. Among these are adjective + noun ({\it economy flight} $\rightarrow$ {\it vol \'{e}conomique}); noun + PP ({\it arrival time} $\rightarrow$ {\it heure d'arriv\'{e}e}; {\it Boston ground transportation} $\rightarrow$ {\it transports publics \`{a} Boston}); or in special cases simply a compound noun ({\it Monday morning} $\rightarrow$ {\it lundi matin}). Again, the individual lexical items and the context determine the correct rule to use. Experience has shown that it is relatively simple to write the context-independent rules which list sets of choices like the ones above. It is however much more difficult to use rules to specify the context in which each particular choice is appropriate. Moreover, the correct choice is frequently domain-dependent; thus the rules will need to be rewritten if the system is ported to a new application. For these reasons, statistically trained machine translation architectures have recently been receiving a great deal of attention. Some researchers (notably those in the IBM CANDIDE project, \cite{Brown:90}) have even gone so far as to claim that statistical techniques are sufficient on their own. Our view is that this is at best unnecessary. Since many aspects of language (for instance, agreement and question-formation in French) appear to be regular and readily describable by rules, it seems more logical to use a mixture of rules and statistics; it is in this sense that we have a {\it hybrid} transfer model (cf. \cite{Brown:92,Carbonell:92,GrishmanKosaka:92}). The rest of the paper describes the system in more detail, focussing on the question of how rules and statistics are combined in the translation component. Section~\ref{Section:SLT-system} describes the overall architecture of SLT. Section~\ref{Section:Examples} gives examples of typical non-trivial translation problems from the English/French ATIS domain, and the way they are dealt with. Finally, Section~\ref{Section:Results} summarizes the current implementation status of the project, and presents the results of tests carried out on a recent version of the prototype. \section{The SLT system} \label{Section:SLT-system} The SLT system consists of a set of individual processing modules, linked together in a pipelined fashion. The input speech signal is processed by the SRI DECIPHER(TM) recognizer \cite{Murveit:93}, and an N-best list of hypotheses is passed to the source language processor, a copy of the SRI Core Language Engine (CLE; \cite{CLE}) loaded with an English grammar and lexicon. 
The CLE produces for each speech hypothesis a set of possible analyses in Quasi Logical Form, and uses trainable preference methods to select the most plausible hypothesis and analysis \cite{AlshawiCarter:94,HLT-Nbest}. The QLF analysis selected as most plausible is passed to the transfer component, which first annotates it with extra information in a rule-based pre-transfer phase. Next, a set of possible target-language QLFs is created, using the unification-based transfer rules \cite{Transfer-ACL-91}. The target QLFs are stored in a ``packed'' form \cite{Tomita:86} to avoid a combinatoric explosion when many transfer choices are non-deterministic. A rule-based post-transfer phase then performs some simple rewriting of the transferred QLFs, following which a second set of trained statistical preferences extract the most plausible transferred QLF and ``unpack'' it into a normal representation. The selected target-language QLF is passed to a second copy of the CLE, loaded with a target-language grammar and lexicon, which generates a surface string using the Semantic Head-Driven algorithm \cite{SHD}. Finally, the target-language string is passed to a speech synthesizer and converted into output speech. Most of this processing has already been covered in detail in \cite{SLT-report}, with reference to the Swedish version. The rest of this section will describe the new functionalities added since then: trainable transfer preferences, transfer packing, and the use of pre- and post-transfer phases. The final sub-section briefly summarizes the main features of the French language description. \subsection{Trainable transfer preferences} \label{Transfer-preferences} The basic preference model and training method for transfer preferences is the one described in \cite{AlshawiCarter:94} and \cite{HLT-Nbest}, suitably adapted for the transfer task; a brief summary follows. We start with a training corpus, consisting of a set of utterances, each paired with a list of possible output sentences produced by the transfer component. A human judge marks each transfer as either acceptable or unacceptable. In line with the noisy-channel statistical model of translation described in \cite{Brown:90}, the plausibility of a new candidate transfer is now defined to be a real number, calculated as a weighted sum of two contributions: the {\it transfer rule score}, and the {\it target language model score}. The first of these represents the relative plausibility of the rules used to make the transfer, and the second the plausibility of the target QLF produced. The transfer rule score and the target language model score are computed using the same method; for clarity, we first describe this method with reference to transfer rules. The transfer rule score for the bag of transfer rules used to produce a given target QLF is a sum of the {\it discriminant scores} for the individual transfer rules. The discriminant score for a rule $R$ is calculated from the training corpus, and summarizes the reliability of $R$ as an indicator that the transfer is correct or incorrect. The intent is that transfer rules which tend to occur more frequently in correct transfers than incorrect ones will get positive scores; those which occur more frequently in incorrect transfers than correct ones will get negative scores. More formally, we define the discriminant score for $R$, $d(R)$, as follows. 
We find all possible 3-tuples $(S, T_1, T_2)$ in the training corpus where \begin{itemize} \item $S$ is a source language utterance, \item $T_1$ and $T_2$ are possible transfers for $S$, exactly one of which is correct, \item The transfer rule $R$ is used in exactly one of $T_1$ and $T_2$. \end{itemize} If $R$ occurs in the correct hypothesis of the pair $(T_1, T_2)$, we call this a ``good'' occurrence of $R$; otherwise, it is a ``bad'' one. Counting occurrences over the whole set, we let $g$ be the total number of good occurrences of $R$, and $b$ be the total number of bad occurrences. $d(R)$ is then defined as \begin{eqnarray*} d(R) = \left\{ \begin{array}{ccl} \log_2(2(g + 1)/(g + b + 2)) & \mbox{if} & g < b \\ 0 & \mbox{if} & g = b \\ -\log_2(2(b + 1)/(g + b + 2)) & \mbox{if} & g > b \end{array} \right. \end{eqnarray*} This formula is a symmetric, logarithmic transform of the function $(g + 1)/(g + b + 2)$, which is the expected {\it a posteriori} probability that a new $(S, T_1, T_2)$ 3-tuple will be a good occurrence of $R$, assuming that, prior to the quantities $g$ and $b$ being known, this probability has a uniform {\it a priori} distribution on the interval [0,1]. The target language model score is defined similarly. The first step is to extract a bag of ``semantic triples'' \cite{AlshawiCarter:94} from each possible transferred QLF in the training corpus, following which each individual triple is assigned a discriminant score using the method above. Semantic triples encode grammatical relationships between head-words; we have generalized the original definition from \cite{AlshawiCarter:94} to include relationships involving determiners, since these are important for transfer. Thus for example the normal reading of the English sentence \begin{quote} {\it Show flights with a stop.} \end{quote} would include the triples \begin{verbatim} (show,obj,flight) (show,obj,bare_plur) (bare_plur,det,flight) (flight,with,stop) (flight,with,a) (a,det,stop) \end{verbatim} In Section~\ref{Section:Examples} below, we will present examples illustrating how the two components of the transfer preference model combine to solve some non-trivial transfer problems. \subsection{Pre- and post-transfer} Ideally, we would like to say that unification-based rules and trainable transfer preferences constituted the whole transfer mechanism. In fact, we have found it necessary to bracket the unification-based transfer component between pre- and post-transfer phases. Each phase consists of a small set of rewriting rules, which are applied recursively to the QLF structure. It would in principle have been possible to express these as normal unification-based transfer rules, but efficiency considerations and lack of implementation time persuaded us to adopt the current solution. The pre-transfer phase implements a simple treatment of reference resolution or coercion, which at present only deals with a few cases important in the ATIS domain. Most importantly, QLF constructs representing bare code expressions used as NPs are annotated with the type of object the code refers to. Code expressions are frequent in ATIS, and the type of referent is always apparent from the code's syntactic structure. The extra information is necessary to obtain a good French translation: flight codes must be prefaced with {\it le vol} (e.g. {\it C O one three three $\rightarrow$ le vol C O cent trente-trois}) while other codes are translated literally. 
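Before describing the post-transfer phase, we note that the discriminant computation defined above is compact enough to state as code. The sketch below is our illustration, not the actual SLT implementation; the example counts are invented.
\begin{verbatim}
import math

def discriminant(g, b):
    """d(R) for a rule or semantic triple with g good and b bad occurrences."""
    if g == b:
        return 0.0
    if g < b:
        return math.log2(2.0 * (g + 1) / (g + b + 2))    # negative score
    return -math.log2(2.0 * (b + 1) / (g + b + 2))       # positive score

print(discriminant(8, 2))    # = 1.0  : reliable indicator of a correct transfer
print(discriminant(2, 8))    # = -1.0 : reliable indicator of an incorrect one
print(discriminant(1, 0))    # ~= 0.58: sparse evidence gives a more cautious score
\end{verbatim}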
The post-transfer phase reduces the transferred QLF to a canonical form; the only non-trivial aspect of this process concerns the treatment of nominal and verbal PP modifiers. In French, PP modifier sequences are subject to a strong ordering constraint: locative PPs should normally be first and temporal PPs last, with other PPs in between. In the limited context of the ATIS domain, this requirement can be implemented fairly robustly with a half-dozen simple rules, and leads to a marked improvement in the quality of the translation. \subsection{Transfer packing} As already indicated, the basic philosophy of the transfer component is to make the transfer rules more or less context-independent, and let the results be filtered through the statistically trained transfer preferences. The positive side of this is that the transfer rules are robust and simple to understand and maintain. The negative side is that non-deterministic transfer choices multiply out, giving a combinatoric explosion in the number of possible transferred QLFs. To alleviate this problem, transferred QLFs are {\it packed}, in the sense of \cite{Tomita:86}; lexical transfer ambiguity is left ``unexpanded'', as a locally ambiguous structure in the target QLF. It is possible to compute preference scores efficiently on the packed QLFs, and only unpack the highest-scoring candidates; this keeps the transfer phase acceptably efficient even when several thousand transferred QLFs are produced. The following example illustrates how transfer packing works. The source utterance is \begin{quote} {\it flights on Monday} \end{quote} and the packed transferred QLF (in slightly simplified form) is: \begin{verbatim} elliptical_np( term(/|\(1,[def_plur, indef_plural, bare_plur]), C^[and, [vol1,C], form(prep(/|\(2,[a_bord_de, temporal_np, sur, pour, avec])), term(/|\(3,[def_sing, bare_sing]), E^[lundi1,E]))])) \end{verbatim} This contains three lexical transfer ambiguities, reflecting the different ways of translating the bare singular and bare plural determiners, and the preposition {\it ``on''}. In this case, the transfer preferences determine that the best choices are to realise English bare plural as French definite plural, English bare singular as French definite singular, and {\it ``on''} as an implicit temporal NP marker. Substituting these in, the preferred unpacked QLF is \begin{verbatim} elliptical_np( term(def_plur, C^[and, [vol1,C], form(temporal_np, term(def_sing, E^[lundi1,E]))])) \end{verbatim} producing the French surface output \begin{quote} {\it les vols le lundi} \end{quote} \subsection{French language description} The French language description is a straightforward adaptation of the general unification-based CLE grammar and lexicon for English \cite{CLE-grammar}. It covers most important French constructions, including all those occurring frequently in the ATIS domain. The most significant divergences compared to the English language description are in the treatment of clitic pronouns, which will be reported in detail elsewhere. Very briefly, however, an approach analogous to the standard idea of ``gap-threading'' has been implemented, which uses difference lists to ``move'' the clitics from their surface position next to the verb to their notional positions (usually, but not necessarily, as verb complements). A fairly complete treatment of French inflectional morphology has been implemented, based on \cite{French-morphology}. 
The French lexicon currently contains about 750 stem entries (excluding proper nouns), which is adequate to provide good coverage of the ATIS domain in the English to French direction. Of these entries, about half are for function words and the remainder for content words. \section{Examples of non-trivial translation problems} \label{Section:Examples} This section will give examples of non-trivial translation problems from the ATIS domain, and describe how SLT deals with them. We were interested to discover that even a domain as simple as ATIS actually contains many quite difficult transfer problems; also, that English/French is considerably more challenging than the English/Swedish transfer pair used in the original SLT system. We will begin by giving examples\footnote{All examples presented in this section are correctly processed by the current French version of SLT.} where it is fairly clear that the problem is essentially grammatical in nature, and thus primarily involves the rule-based part of the system; later, we give examples where the problem mainly involves the preference component, and examples where both types of knowledge are needed. An obvious case of a grammatical phenomenon is agreement, which is considerably more important in French than in English; the rules for agreement are rigid and well-defined, and easy to code in a feature-based formalism. Quite frequently, however, they relate words which are widely separated in the surface structure, which makes them hard to learn for surface-oriented statistical models. For example, there are many instances in ATIS of nouns which in French are postmodified both by a PP and by a relative clause, e.g. \begin{quote} {\it Flights from Boston to Atlanta leaving before twelve a m \\ $\rightarrow$ Les vols de Boston \`{a} Atlanta qui partent avant midi} \end{quote} Here, the verb {\it partent} has to agree in number and person with the head noun {\it vols}, despite the gap of five surface words in between. Many problems related to word-order also fall under the same heading, in particular those relating to question-formation and the position of clitic pronouns. For example, French YN-questions can be formed in three ways: by inversion of subject and main verb, by prefacing the declarative version of the clause with the question particle {\it est-ce que}, or by ``complex inversion'', fronting the subject and inserting a dummy pronoun after the inverted verb. If the subject is a pronoun, only the first and second alternatives are allowed; if it is {\it not} a pronoun, only the second and third are valid. Thus for example \begin{quote} {\it Does it leave after five p m? \\ $\rightarrow$ Part-il apr\`{e}s dix-sept heures? \\ $\rightarrow$ Est-ce qu'il part apr\`{e}s dix-sept heures? \\ $\rightarrow$ *Il part-il apr\`{e}s dix-sept heures? \\ \\ Does that flight serve meals?\\ $\rightarrow$ *Sert ce vol des repas? \\ $\rightarrow$ Est-ce que ce vol sert des repas? \\ $\rightarrow$ Ce vol sert-il des repas? } \end{quote} Embedded questions constitute another good example of a mainly grammatical problem. Just as in English, French embedded questions normally have the uninverted word-order, e.g. \begin{quote} {\it Tell me {\bf when these flights arrive in Boston}\\ $\rightarrow$ Dites-moi {\bf quand ces vols arrivent \`{a} Boston}} \end{quote} However, if the main verb is {\it \^{e}tre} with an NP complement, the inverted word-order is obligatory, e.g. 
\begin{quote} {\it Tell me {\bf what the cheapest fares are}\\ $\rightarrow$ Dites-moi {\bf quels sont les tarifs les moins chers}\\ $\rightarrow$ *Dites-moi {\bf quels les tarifs les moins chers sont}} \end{quote} In ATIS, embedded questions occur in about 1\% of all corpus sentences; this makes them too frequent to ignore, but rare enough that a pure statistical model will probably have difficulties finding enough training examples to acquire the appropriate regularities. The relevant facts are however quite easy to state as grammatical rules. Moreover, they are domain-independent, and can thus be reused in different applications. In contrast, there are many phenomena, especially involving word-choice, which are hard to code as rules and largely domain- and application-dependent. As mentioned earlier in Section~\ref{Section:Introduction}, the translation of prepositions and determiners is most frequently determined on collocational grounds; in our framework, this means that the information used to decide on an appropriate translation is primarily supplied by the transfer preferences. We will now decribe in more detail how the idea works in practice. Recall that the preference score for a given transfer candidate is a weighted sum of a channel contribution (discriminants on transfer rules) and a target language model score (discriminants from target language semantic triples). The transfer rule discriminants make transfer rules act more or less strongly as defaults. If a transfer rule $R$ is correct more often than not when a choice arises, it will have a positive discriminant, and will thus be preferred if there is no reason to avoid it. If use of $R$ produces a strong negative target-language discriminant, however, the default will be overridden. Let us look at some simple examples. The English indefinite singular article {\it ``a''} can be translated in several ways in French, but most often it is correct to realise it as an indefinite singular ({\it ``un''} or {\it ``une''}). The discriminant associated with the transfer rule that takes indefinite singular to indefinite singular is thus fairly strongly positive. There are however several French prepositions which have a strong preference for a bare singular argument; for instance, {\it ``flights without a stop''} is almost always better translated as {\it ``les vols sans escale''} than {\it ``les vols sans une escale''}. In cases like these, the {\it a}-to-{\it un} rule will be wrong, and the less common rule that takes indefinite singular to bare singular will be right. So if enough training examples are available, the negative discriminant associated with the semantic triple \begin{verbatim} (vol, sans, indef_sing) \end{verbatim} will have a higher absolute value than the positive discriminant associated with {\it a}-to-{\it un}, and can overrule it. Similar considerations apply to prepositions. In the ATIS domain, most prepositions have several possible translations, none of which are strongly preferred. For example, the channel score discriminants associated with the transfer rules {\it on}-to-{\it sur} and {\it on}-to-{\it avec} both have low absolute values; the first is slightly negative, and the second slightly positive. 
Target language triples associated with these prepositions are however in general more definite: the triples \begin{verbatim} (aller avec <airline>) (renseignement sur transports) \end{verbatim} are both strongly positive, while \begin{verbatim} (aller sur <airline>) (renseignement avec transports) \end{verbatim} are strongly negative. The net result is that the target language contribution makes the decision, and as desired we get {\it ``fly on Delta''} and {\it ``information on flights''} going to {\it ``aller avec Delta''} and {\it ``des renseignements sur les vols''} rather than {\it ``aller sur ...''} and {\it ``des renseignements avec ...''}. In general, a combination of rules and collocational information is needed to translate a construction. A good example is the English implicit singular mass determiner, which is common in ATIS. Grammatical rules are used to decide that there is a singular mass determiner present, following which the correct translation is selected on collocational grounds. An elementary French grammar will probably say that the normal translation should either be the French partitive singular determiner, e.g. \begin{quote} {\it I drink {\bf milk} \\ $\rightarrow$ Je bois {\bf du lait}} \end{quote} or else the definite singular, e.g. \begin{quote} {\it I like {\bf cheese} \\ $\rightarrow$ J'aime {\bf le fromage}} \end{quote} In the ATIS domain, it happens that the nouns which most frequently occur with mass singular determiner are {\it ``transportation''} and {\it ``information''}, both of which are conventionally singular in English but plural in French. Because of this, neither of the standard rules for translating mass singular gets a strong positive discriminant score, and once again the target language model tends to make the decision. For instance, if the head noun is {\it ``transportation''}, it is most often correct to translate the mass singular determiner as a definite plural, e.g. \begin{quote} {\it Show me {\bf transportation} for Boston\\ $\rightarrow$ Indiquez-moi {\bf les transports} pour Boston} \end{quote} This is captured in a strong positive discriminant score associated with the target language triple \begin{verbatim} (def_plur, det, transport) \end{verbatim} Note that the translation {\it ``transportation''} to {\it ``les transports''} is only a preference, not a hard rule; it can be overridden by an even stronger preference, such as the preference against having a definite plural subject of an existential construction. So we have e.g. \begin{quote} {\it Is there {\bf transportation} in Boston?\\ $\rightarrow$ Y a-t-il {\bf des transports} \`{a} Boston?\\ $\rightarrow$ *Y a-t-il {\bf les transports} \`{a} Boston?} \end{quote} \section{Implementation status and results} \label{Section:Results} So far, the French version of SLT has consumed about eleven person-months of effort over and above the effort expended on the original English/Swedish SLT project. Of this, about seven person-months were spent on the French language description, two on transfer rules, and two on other tasks. The small quantity of effort required to develop a good French language description underlines the extent to which its structure overlaps with that of the original English grammar and lexicon. We now describe preliminary experiments designed to test the performance of the system. 
A set of 2000 ATIS utterances was used, randomly selected from the subset of the ATIS corpus consisting of A or D class\footnote{This means roughly that the sentence represented a valid inquiry to the database, either alone or in the context in which it was uttered.} utterances of length up to 15 words, which had not previously been examined during the development of the French version of SLT. Utterances were supplied in text form, i.e. the speech recognition part of the system was not tested here. Each utterance was analysed using the English language version of the CLE, and for the 1847 sentences where at least one QLF was produced the most plausible QLF was selected using the preference methods described in \cite{AlshawiCarter:94}. This was then submitted to the transfer phase, and a set of transfer candidates produced. A simple set of hand-coded transfer preferences was applied, and one French surface string was generated for each of the five highest-scoring transfer candidates. A native French speaker fluent in English judged each generated string as being either an acceptable or an unacceptable translation of the source utterance. Translations were only regarded as acceptable if they were fully grammatical, preserved the meaning of the source utterance, and used a stylistically natural choice of words. The judging process took approximately eight hours, averaging three seconds per source/target pair. The annotated N-best transfer corpus was then used to train a new set of preferences using the method described in Section~\ref{Transfer-preferences}; the corpus was divided into five equal pieces, each fifth being held out in turn as test data with the remaining four-fifths used as training. Finally, the derived preferences were tested for accuracy. Of the 1847 transfer sets, there were 1374 for which at least one acceptable transfer was in the top five candidates\footnote{There were a further 246 sets in which at least one candidate translation was produced; in most of these cases, the best translation was comprehensible and grammatically correct, but was rejected on stylistic grounds.}. The trained transfer preferences selected an acceptable candidate in 1248 of these 1374 cases (91\%); in contrast, random choice among the top five gave a baseline score of 826 acceptable transfers, or 60\%. We regard this as a promising initial result, and intend soon to repeat the experiment with a larger set of 5000--10000 sentences. We also anticipate significant improvements over the next few months from planned extensions and refinements to the French language description.
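As a compact summary of the preference mechanism evaluated above: each unpacked transfer candidate receives a weighted sum of the discriminants attached to the transfer rules it used (the channel contribution) and of the discriminants attached to the semantic triples of its target QLF (the target language model contribution), and the highest-scoring candidate is selected. The sketch below is a schematic Python illustration of that computation, not the actual SLT code; the weights, data structures and numerical discriminant values are invented for the example.
\begin{verbatim}
def preference_score(candidate, rule_disc, triple_disc,
                     channel_weight=1.0, target_weight=1.0):
    # candidate is a dict holding the transfer rules applied ("rules")
    # and the semantic triples of the resulting target QLF ("triples");
    # items never seen in training contribute a score of 0.
    channel = sum(rule_disc.get(r, 0.0) for r in candidate["rules"])
    target = sum(triple_disc.get(t, 0.0) for t in candidate["triples"])
    return channel_weight * channel + target_weight * target

# The "a"-to-"un" example discussed earlier: the mildly positive rule
# discriminant is overridden by the strongly negative target triple.
rule_disc = {"a->un": 0.6, "a->bare_sing": -0.1}
triple_disc = {("vol", "sans", "indef_sing"): -1.5,
               ("vol", "sans", "bare_sing"): 0.2}
cand_un = {"rules": ["a->un"],
           "triples": [("vol", "sans", "indef_sing")]}
cand_bare = {"rules": ["a->bare_sing"],
             "triples": [("vol", "sans", "bare_sing")]}
best = max([cand_un, cand_bare],
           key=lambda c: preference_score(c, rule_disc, triple_disc))
# best is cand_bare, i.e. "les vols sans escale"
\end{verbatim}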
\section{Motivation: Schr\"{o}dinger's interpolation problem through Feynman-Kac kernels} The issue of \it deriving \rm a microscopic dynamics from the (phenomenologically or numerically motivated, by approximating the frequency distributions) input-output statistics data was addressed, as the Schr\"{o}dinger problem of a probabilistic interpolation, in a number of publications \cite{schr}-\cite{olk1}. We shall consider Markovian propagation scenarios so remaining within the well established framework, where for any two Borel sets $A,B\subset R$ on which the respective strictly positive boundary densities $\rho (x,0)$ and $\rho (x,T)$ are defined, the transition probability $m(A,B)$ from the set $A$ to the set $B$ in the time interval $T>0$ has a density given in a specific factorized form: $$m(x,y)=f(x)k(x,0,y,T)g(y)$$ $$m(A,B)=\int_Adx\int_Bdy \, m(x,y)$$ $${\int dy m(x,y)=\rho (x,0)\, ,\, \int dx m(x,y)=\rho (y,T)} \eqno (1)$$ Here, $f(x), g(y)$ are the a priori unknown functions, to come out as solutions of the integral (Schr\"{o}dinger) system of equations (1), provided that in addition to the density boundary data we have in hands any strictly positive, continuous in space variables \it function \rm $k(x,0,y,T)$. Our notation makes explicit the dependence (in general irrelevant) on the time interval endpoints. It anticipates an important restriction we shall impose, that $k(x,0,y,T)$ must be a strongly continuous dynamical semigroup kernel: it will secure the Markov property of the sought for stochastic process. It is the major mathematical discovery \cite{jam} that, without the semigroup assumption \it but \rm with the prescribed, nonzero boundary data $\rho (x,0),\rho (y,T)$ \it and \rm with the strictly positive continuous function $k(y,0,x,T)$, the Schr\"{o}dinger system (1) of integral equations admits a unique solution in terms of two nonzero, locally integrable functions $f(x), g(y)$ of the same sign (positive, everything is up to a multiplicative constant). If $k(y,0,x,T)$ is a particular, confined to the time interval endpoints, form of a concrete semigroup kernel $k(y,s,x,t), 0\leq s\leq t<T$, let it be a fundamental solution associated with (5) (whose existence a priori is \it not \rm granted), then there exists \cite{zambr,blanch,olk,olk1,nag} a function $p(y,s,x,t)$: $${p(y,s,x,t)=k(y,s,x,t){{\theta (x,t)}\over {\theta (y,s)}}} \eqno (2)$$ where $${\theta (x,t)=\int dy k(x,t,y,T)g(y)}\eqno (3)$$ $$\theta _*(y,s)=\int dx k(x,0,y,s)f(x)$$ which implements a consistent propagation of the density $\rho (x,t)=\theta (x,t)\theta _*(x,t)$ between its boundary versions, according to: $${\rho (x,t) = \int p(y,s,x,t)\rho (y,s)dy}\eqno (4)$$ $$0\leq s\leq t<T$$ For a given semigroup which is characterized by its generator (Hamiltonian), the kernel $k(y,s,x,t)$ and the emerging transition probability density $p(y,s,x,t)$ are unique in view of the uniqueness of solutions $f(x),g(y)$ of (1). For Markov processes, the knowledge of the transition probability density $p(y,s,x,t)$ for all intermediate times $0\leq s< t\leq T$ suffices for the derivation of all other relevant characteristics. In the framework of the Schr\"{o}dinger problem the choice of the integral kernel $k(y,0,x,T)$ is arbitrary, except for the strict positivity and continuity demand. As long as there is no "natural" physical motivation for its concrete functional form, the problem is abstract and of no direct physical relevance. 
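Although the system (1) enters the discussion only through the existence and uniqueness statements quoted below, it may help to see it solved in practice. On a discretised state space the two marginal conditions can be enforced by simple alternating updates of $f$ and $g$ (an iterative proportional fitting, or Sinkhorn-type, scheme). The following Python sketch is purely illustrative and is not part of the analysis of this paper: the grid, the boundary densities, and the choice of a plain heat kernel for $k(x,0,y,T)$ are ours.
\begin{verbatim}
import numpy as np

# Discretised Schrodinger system (1): given boundary densities rho0, rhoT
# on a grid and a strictly positive kernel K[i, j] ~ k(x_i, 0, y_j, T),
# find f, g with  f * (K @ g) = rho0  and  g * (K.T @ f) = rhoT.
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
T = 1.0
rho0 = np.exp(-0.5 * (x + 1.0) ** 2); rho0 /= rho0.sum() * dx
rhoT = np.exp(-0.5 * (x - 1.0) ** 2); rhoT /= rhoT.sum() * dx

# Heat kernel (the Feynman-Kac kernel with c = 0): strictly positive
# and jointly continuous, as required.
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (4.0 * T)) / np.sqrt(4.0 * np.pi * T)

f = np.ones_like(x)
g = np.ones_like(x)
for _ in range(200):              # alternating (Sinkhorn / IPF) updates
    f = rho0 / (K @ g * dx)
    g = rhoT / (K.T @ f * dx)

m = f[:, None] * K * g[None, :] * dx * dx    # m(x, y) of Eq. (1)
print(abs(m.sum(axis=1) - rho0 * dx).max())  # ~0 : marginal at t = 0
print(abs(m.sum(axis=0) - rhoT * dx).max())  # ~0 : marginal at t = T
\end{verbatim}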
However, in the context of parabolic partial differential equations this "natural" choice is automatically settled if the Feynman-Kac formula can be utilized to represent solutions. Indeed, in this case an unambiguous, strictly positive semigroup kernel, which is a continuous function of its arguments, can be introduced for a broad class of (admissible \cite{simon}) potentials. Time-dependent potentials are included here as well \cite{freid,simon1}. Moreover, in Ref. \cite{blanch} we have discussed a possible phenomenological significance of the Feynman-Kac potentials, as contrasted to the usual identification of Smoluchowski drifts with force fields affecting particles (up to a coefficient) in the standard theory of stochastic diffusion processes. In the existing probabilistic investigations \cite{zambr,zambr1,garb,blanch,olk}, based on the exploitation of the Schr\"{o}dinger problem strategy, it was generally assumed that the kernel actually \it is \rm a fundamental solution of the parabolic equation. It means that the kernel is a function with continuous derivatives: first order with respect to time, second order with respect to space variables. Then, the transition probability density defined by (2) is a fundamental solution of the Fokker-Planck (second Kolmogorov) equation in the pair $x,t$ of variables, and as such is at the same time a solution of the backward (first Kolmogorov) equation in the pair $y,s$. This feature was exploited in \cite{blanch,olk}. There are a number of mathematical subtleties involved in the fundamental solution notion, since in this case the Feynman-Kac kernel must be a solution of the parabolic equation itself. In general, the existence of Feynman-Kac kernels, even as continuous functions \cite{simon,simon1,glimm}, may be granted, but they may not be differentiable, and need not be solutions of any conceivable partial differential equations. To our knowledge, this complication in the study of Markovian representations of the Schr\"{o}dinger interpolating dynamics (and the quantum Schr\"{o}dinger picture dynamics in particular) has never been addressed in the literature. Moreover, it is far from being obvious that this Markovian interpolation actually \it is \rm a diffusion process. \section{Schr\"{o}dinger's interpolation problem: general derivation of the stochastic evolution} \subsection{The Schr\"{o}dinger system of integral equations} We shall complement our previous analysis \cite{blanch,olk} by discussing the issue in more detail. It turns out that the crucial step lies in a \it proper \rm choice of the strictly positive and continuous function $k(y,s,x,t), s<t$ which, if we want to construct a Markov process, has to satisfy the Chapman-Kolmogorov (semigroup composition) equation. To proceed generally, let us consider a pair of partial differential equations for real functions $u(x,t)$ and $v(x,t)$: $${\partial _tu(x,t)=\triangle u(x,t) - c(x,t)u(x,t)}\eqno (5)$$ $$\partial _tv(x,t)= -\triangle v(x,t) + c(x,t)v(x,t)$$ where we have eliminated all unnecessary dimensional parameters. Usually, \cite{glimm,simon}, $c(x,t)$ is assumed to be a continuous function, bounded from below. We shall adopt weaker conditions. Namely, let us decompose $c(x,t)$ into a sum of positive and negative terms: $c(x,t)=c_+(x,t) - c_-(x,t)\; ,\; c_{\pm }\geq 0$ where (a) $c_-(x,t)$ is bounded, while (b) $c_+(x,t)$ is bounded on compact sets of $R\times [0,T]$. It means that $c(x,t)$ is bounded from below and locally bounded from above.
Clearly, $c(x,t)$ needs not to be a continuous function and then we encounter weak solutions of (5) which admit discontinuities. With the first (forward) equation (5) we can immediately associate an integral kernel of the time-dependent semigroup (the exponential operator should be understood as the time-ordered expression): $${k(y,s,x,t)=[exp(-\int_s^t H(\tau )d\tau )](y,x)}\eqno (6)$$ where $H(\tau )=-\triangle +c(\tau )$. It is clear, that for discontinuous $c(x,t)$, no fundamental solutions are admitted by (5). By the Feynman-Kac formula, \cite{simon1,freid}, we get $${k(y,s,x,t)=\int exp[-\int_s^tc(\omega (\tau ),\tau)d\tau ] d\mu ^{(y,s)}_{(x,t)}(\omega )}\eqno (7) $$ where $d\mu ^{(y,s)}_{(x,t)}(\omega)$ is the conditional Wiener measure over sample paths of the standard Brownian motion. It is well known that $k$ is strictly positive in case of $c(x,t)$ which is continuous and bounded from below; typical proofs are given under an additional assumption that $c$ does not depend on time \cite{glimm}. However, our assumptions about $c(x,t)$ were weaker, and to see that nonetheless $k$ is strictly positive we shall follow the idea of Theorem 3.3.3 in \cite{glimm}. Namely, the conditional Wiener measure $d\mu _{(x,t)}^{(y,s)}$ can be written as follows $${ d\mu _{(x,t)}^{(y,s)} = [4\pi (t-s)]^{-1/2} exp[-{{(x-y)^2}\over {4(t-s)}}]\: d\nu _{(x,t)}^{(y,s)}}\eqno (8)$$ where $d\nu _{(x,t)}^{(y,s)}$ is the normalised Wiener measure \cite{simon}. We can always choose a certain number $r>0$ to constrain the event (sample path) set $${ \Omega (r)=[\omega : X_s(\omega )=y, X_t(\omega )=x, sup_{s\leq \tau \leq t}\: |X_{\tau }(\omega )|\leq r]}\eqno (9)$$ It comprises these sample trajectories which are bounded by $r$ on the time interval $[s,t]$. In the above, $X_t(\omega )$ is the value taken by the random variable $X(t)$ at time $t$, while a concrete $\omega $-th path is sampled. By properly tuning $r$, we can always achieve $${ \int_{\Omega (r)} d\nu _{(x,t)}^{(y,s)} \geq {1\over 2}}\eqno (10)$$ which implies that $${ k(y,s,x,t)\geq {1\over 2} [4\pi (t-s)]^{-1/2}\: exp[-{{(x-y)^2}\over {4(t-s)}}]\: exp[-(t-s)C] > 0}\eqno (11)$$ $$C=sup_{s\leq \tau \leq t,\; \omega \in \Omega (r)}\; c_+(X_{\tau }(\omega ),\tau )$$ where, by our assumptions, $c_+$ is bounded on compact sets. Consequently, the kernel $k$ is \it strictly positive \rm . With the Schr\"{o}dinger boundary data problem on mind, we must settle an issue of the \it continuity \rm of the kernel. To this end, let us invoke a well known procedure of rescaling of path integrals \cite{simon,roep}: by passing from the "unscaled" sample paths $\omega (t)$ over which the conditional Wiener measure integrates, to the "scaled" paths of the Brownian bridge, the $(y,x)$ conditioning can be taken away from the measure. Then, instead of sample paths $\omega $ connecting points $y$ and $x$ in the time interval $t-s>0$, we consider the appropriately "scaled" paths of the Brownian bridge $\alpha $ connecting the point $0$ with $0$ again, in the (scaled) time $1$. It is possible, in view of the decomposition \cite{simon,roep}: $${\omega (\tau )=({t\over {t-s}}- {\tau \over {t-s}})y + ({\tau \over {t-s}} - {s\over {t-s}})x + \sqrt{t-s} \; \alpha ({\tau \over {t-s}} - {s\over {t-s}})}\eqno (12)$$ where $\alpha $ stands for the "scaled" Brownian bridge. 
Then, we can write $${k(y,s,x,t)=[4\pi (t-s)]^{-1/2} exp[-{(x-y)^2\over {4(t-s)}}] \int d\mu (\alpha )\cdot }\eqno (13)$$ $$exp[- \int_s^t c({{t-\tau }\over {t-s}}y + {{\tau -s}\over {t-s}}x + \sqrt{t-s}\: \alpha ({{\tau -s}\over {t-s}})\; , \tau )d\tau ]$$ where $d\mu (\alpha )=d\nu ^{(0,0)}_{(0,1)}(\omega )$ is the normalized Wiener measure integrating with respect to the "scaled" Brownian bridge paths, which begin and terminate at the origin $0$ in-between "scaled time" instants: $0$ corresponding to $\tau =s$ and $1$ corresponding to $\tau =t$. This representation of $k$, \it if \rm combined with the assumption that $c(x,t)$ is a continuous function, allows to conclude, \cite{simon}, that the kernel is continuous in all variables. However, our previous assumptions were weaker, and it is instructive to know that through suitable approximation techniques, Theorem B.7.1 in Ref.\cite{simon1} proves that the kernel is jointly continuous in our case as well. It is also clear that $k(y,s,x,t)$ satisfies the Chapman-Kolmogorov composition rule. So, the first equation (5) can be used to define the Feynman-Kac kernel, appropriate for the Schr\"{o}dinger problem analysis in terms of a Markov stochastic process. Let us consider an arbitrary (at the moment) pair of strictly positive, but not necessarily continuous, boundary densities $\rho _0(x)$ and $\rho _T(x)$. By Jamison's principal theorem \cite{jam} there exists a unique pair of strictly positive, locally (i.e. on compact sets) integrable functions $f(x)$ and $g(x)$ solving the Schr\"{o}dinger system (1), e.g. such that ${\rho _0(x)=f(x)\int k(x,0,y,T)g(y)dy}$ and $\rho _T(x)=g(x)\int k(y,0,x,T)f(y)dy$ with the kernel $k(y,s,x,t)$ given by (7). Let us define: $${g(x,t)=\int k(x,t,y,T)g(y)dy\; \; ,\; \; f(x,t)= \int k(y,0,x,t)f(y)dy} \eqno (14)$$ The above integrals exist at least for almost every $x$ so that there appears the problem of the existence of a unique and continuous transition probability density $p(y,s,x,t)$, (2). We shall assume that the function $g(y)$ is bounded at infinity. This means that there exists a constant $C>0$ and a compact set $K\subset R$ such that $g(y)\leq C$ for all $y\in R\backslash K$. Then, for all $t<T$ and any sequences $h_n\rightarrow 0 , s_n\rightarrow 0$, as $n\rightarrow \infty $ we get ($lim$ stands for $lim_{n\rightarrow \infty }$): $$lim \: |g(x+h_n,t+s_n)- g(x,t)| \leq lim\: |\int_K [k(x+h_n,t+s_n,y,T)-k(x,t,y,T)]g(y)dy| \: +\: $$ $${lim\: |\int_{R\backslash K} [k(x+h_n,t+s_n,y,T)- k(x,t,y,T)]g(y)dy|\leq }\eqno (15)$$ $$lim\: sup_{y\in K}\: |k(x+h_n,t+s_n,y,T)-k(x,t,y,T)|\int_K g(y)dy \: +\: $$ $$C\cdot lim\: \int_{R\backslash K} |k(x+h_n,t+s_n,y,T)-k(x,t,y,T)|dy$$ The first term tends to zero because $k$ is jointly continuous and $g$ is locally integrable.The second one tends to zero because of the Lebesgue bounded convergence theorem. Consequently, our assumption suffices to make $g(x,t)$ continuous on $R\times [0,T)$. Similarly, we can prove that $g(x,t)$ is bounded. \\ Now, we can set according to (2), $p(y,s,x,t)=k(y,s,x,t)g(x,t)/g(y,s)$. Then, $p(y,s,x,t)$, $0\leq s<t\leq T$ becomes a transition probability density of a Markov stochastic process with a factorized density $\rho (x,t)=f(x,t)g(x,t)$. Clearly, this stochastic process interpolates between the boundary data $\rho _0$ and $\rho _T$ as time continuously varies from $0$ to $T$. Notice that (15) implies the continuity of $p$ in the time interval $[0,T)$. 
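As a purely numerical illustration of the construction (2)--(4) and (14) (not taken from the paper: the grid, the boundary functions and the choice $c=0$, i.e. the heat kernel, are ours), one can assemble the transition density $p$ on a grid and check directly that it is normalised and propagates $\rho =f\,g$:
\begin{verbatim}
import numpy as np

x = np.linspace(-10.0, 10.0, 501); dx = x[1] - x[0]
T = 1.0

def heat(s, t):
    # heat kernel matrix k(x_i, s, x_j, t) for the generator Laplacian
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-d2 / (4.0 * (t - s))) / np.sqrt(4.0 * np.pi * (t - s))

f0 = np.exp(-(x + 1.0) ** 2)      # f(x): any positive integrable choice
gT = np.exp(-(x - 1.0) ** 2)      # g(y)

def g_at(t):                      # g(x,t) = int k(x,t,y,T) g(y) dy, Eq. (14)
    return heat(t, T) @ gT * dx

def f_at(t):                      # f(x,t) = int k(y,0,x,t) f(y) dy, Eq. (14)
    return heat(0.0, t).T @ f0 * dx

s, t = 0.3, 0.7
p = heat(s, t) * g_at(t)[None, :] / g_at(s)[:, None]    # Eq. (2)

print(abs(p.sum(axis=1) * dx - 1.0).max())   # ~0: p integrates to one in x
rho_s = f_at(s) * g_at(s)
rho_t = f_at(t) * g_at(t)
print(abs(rho_s @ p * dx - rho_t).max())     # ~0: propagation rule, Eq. (4)
\end{verbatim}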
Although $p(y,s,x,t)$ is continuous in all variables, we cannot be sure that the interpolating stochastic process has continuous trajectories, and no specific (e.g. Fokker-Planck) partial differential equation can be readily associated with this dynamics. Therefore, we must explicitly verify whether the associated process is stochastically continuous. If so, we should know whether it is continuous (i.e. admits continuous trajectories). Finally, we should check the validity of conditions under which the investigated interpolation can be regarded as a diffusion process. The subsequent analysis will show that this ultimate goal is achieved only through the gradual strengthening of conditions imposed on the parabolic system (5). \subsection{Stochastic continuity of the process} Apart from the generality of formulation of the Schr\"{o}dinger interpolation problem which appears to preclude an unambiguous identification (diffusion or not) of the constructed stochastic process, we can prove, in the present case, a fundamental property of a stochastic dynamics called the stochastic continuity of the process. In this connection, compare e.g. \cite{zambr,dynk} and \cite{nel1}, where this property is linked to the uniqueness of the corresponding Markov semigroup generator. The stochastic continuity property is a necessary condition for the process to admit continuous trajectories. The stochastic process is stochastically continuous if, for the probability of the occurrence of sample paths $\omega $ such that the random variable values $X_t(\omega )$ along the trajectory obey $|X_t(\omega )-X_s(\omega )|\geq \epsilon \; ,\; s< t$, the following limiting behaviour is recovered $${lim_{t\downarrow s} P[\omega:|X_t(\omega )-X_s(\omega )| \geq \epsilon ] = 0 }\eqno (16)$$ for every positive $\epsilon $. This demand can be written in a more convenient way in terms of the transition probability density $p(y,s,x,t)$ and the density $\rho (x,t)$ of the process: $${ lim_{t\downarrow s} [\int_{-\infty }^{+\infty }dy \rho (y,s) \int_{|x-y|\geq \epsilon } p(y,s,x,t) dx ] = 0}\eqno (17)$$ So, for the process to be stochastically continuous, it suffices that $${ lim_{\triangle s\downarrow 0} \int_{|x-y|\geq \epsilon } p(y,s,x,s+\triangle s)dx = 0}\eqno (18)$$ for almost every $y\in R$.
In view of our construction, (2), we have: $${ lim_{\triangle s\downarrow 0} \int_{|x-y|\geq \epsilon } p(y,s,x,s+\triangle s)dx=}\eqno (19)$$ $${1\over {g(y,s)}} lim_{\triangle s\downarrow 0} \int_{|x-y|\geq \epsilon } dx \; k(y,s,x,s+\triangle s)\int_{-\infty }^{+\infty } k(x,s+\triangle s,z,T)\; g(z) dz$$ By changing the order of integrations (allowed by positivity of the involved functions) we get: $${ lim_{\triangle s\downarrow 0} \int_{|x-y|\geq \epsilon } p(y,s,x,s+\triangle s) dx = {1\over {g(y,s)}} lim_{\triangle s \downarrow 0} \int_{-\infty }^{+\infty } dz\: g(z)\: }\eqno (20)$$ $$[\int_{|x-y|\geq 0} dx \: k(y,s,x,s+\triangle s)\: k(x,s+\triangle s,z,T)]$$ Because the potential is bounded from below, $c\geq -M$ for some $M>0$, we easily arrive at the estimates (use the "scaled" Brownian bridge argument) $${ k(y,s,x,s+\triangle s) \leq (4\pi \triangle s)^{-1/2} exp[-{{(x-y)^2}\over {4\triangle s}}] \: exp(M\triangle s)}\eqno (21) $$ and $${ k(x,s+\triangle s,z,T)\leq [4\pi (T-s-\triangle s)]^{-1/2} \: exp[-{{(z-x)^2}\over {4(T-s-\triangle s)}}]} {exp[M(T-s-\triangle s)]} \eqno (22)$$ Then we get: $${ 0\leq lim_{\triangle s\downarrow 0} \int_{|x-y|\geq \epsilon } k(y,s,x,s+\triangle s) k(x,s+\triangle s,z,T) dx \leq }$$ $${ [4\pi (T-s)]^{-1/2} exp[M(T-s)] lim_{\triangle s\downarrow 0}\: (4\pi \triangle s)^{-1/2} }\eqno (23)$$ $$ \int_{|x-y|\geq \epsilon } dx exp[-{{(x-y)^2}\over {4\triangle s}}]\: exp[-{{(z-x)^2}\over {4(T-s-\triangle s)}}] = 0$$ So, by the classic Lebesgue bounded (dominated) convergence theorem, the required limiting property $lim_{\triangle s\rightarrow 0} \int_{|x-y|\geq \epsilon } p(y,s,x,s+\triangle s) dx = 0 $ follows and (16) holds true. As mentioned before, the stochastic continuity of the Markov process is a necessary condition for the process to be continuous in a more pedestrian sense, i. e. to admit continuous sample paths. However, it is insufficient. Hence, additional requirements are necessary to allow for a standard diffusion process realization of solutions of the general Schr\"{o}dinger problem, (1)-(3). In the next section we shall prove that our process can be regarded as continuous, by requiring a certain correlation between the kernel $k(y,s,x,t)$ and a function $g(x,t)$, (14). \subsection{Continuity of the process} It is well known that a solution of a parabolic equation cannot tend to zero arbitrarily fast, when $|x|\rightarrow \infty $, \cite{watson1}. Roughly speaking, it cannot fall off faster than a fundamental solution (provided it exists). In fact, the solution is known to fall off as fast as the fundamental solution, when the initial boundary data coincide with the Dirac measure. If a support of the initial data is spread (i.e. not point-wise), then the solution falloff is slowlier than this of the fundamental one. In our discussion, where $g(x,t)$ is a generalized solution and $k(y,s,x,t)$ is a Feynman-Kac kernel which does not need to be a fundamental solution, we expect a similar behaviour. Mathematically, our demand will be expressed as follows. Let $t-s$ be small and $K$ be a compact subset in $R$. Because $g(x,t)$ is supported on the whole $R$, so in the decomposition $${g(y,s)= \int_Kk(y,s,x,t)g(x,t)dx + \int_{R\backslash K}k(y,s,x,t)g(x,t)dx}\eqno (24)$$ the second term becomes relevant when $|y|\rightarrow \infty $ . 
It amounts to (in the denominator there appears $g(y,s)$): $${lim_{|y|\rightarrow \infty }\: {{\int_{-\infty }^{+\infty }k(y,s,x,t)g(x,t)\chi _K(x)dx}\over {\int_{-\infty }^{+\infty } k(y,s,x,t)g(x,t)dx}}\; = \; 0}\eqno (25) $$ where $\chi _K$ is an indicator function of the set $K$, which is equal one for $x\in K$ and zero otherwise. By means of the transition probability density $p(y,s,x,t)$ let us introduce a transformation $${(T_s^tf)(y) = \int_{-\infty }^{+\infty }p(y,s,x,t)f(x)dx} \eqno (26)$$ of a function $f(x)$, continuous and vanishing at infinity (we shall use an abbreviation $f\in C_{\infty }(R)$ to express this fact). It is clear that $(T_s^tf)(x)$ is a continuous function. For a suitable compact set $K$ we can always guarrantee the property $|f(x)|<\epsilon $ for every $x\in R\backslash K$. Then, if we exploit the property $\int_{R\backslash K}p(y,s,x,t)dx\leq 1$ if $s<t$ and the definition of $p$ in terms of $k$ and $g$, we arrive at $${|(T_s^tf)(y)|\; \leq \; \int_Kp(y,s,x,t) |f(x,t)|dx\: +\: \int_{R\backslash K} p(y,s,x,t)|f(x,t)|dx\; \leq }\eqno (27)$$ $$[\int_Kp(y,s,x,t)dx]\: \int_K|f(x,t)|dx\: +\: sup_{x\in R\backslash K}\; |f(x,t)|\: \int_{R\backslash K} p(y,s,x,t)dx \: \leq $$ $$[\int_K|f(x,t)|dx]\: {{\int_Kk(y,s,x,t)g(x,t)dx}\over {\int_{-\infty }^{+\infty }k(y,s,x,t)g(x,t)dx}}\; +\; \epsilon $$ It implies that for small $t-s$, $lim_{|y|\rightarrow \infty } (T_s^tf)(y)=0$, and so $T_s^t$ forms an inhomogeneous in time semigroup of positive contractions on $C_{\infty }(R)$. For arbitrary $t$ and $s$ the result follows by the obvious decomposition property $T_s^t=T_s^{s_1}T_{s_1}^{s_2}\cdot \cdot \cdot T_{s_n}^t$. In the well established terminology, our $p(y,s,x,t)$ is a $C_{\infty }$-Feller transition function and leads to a regular Markov process, \cite{dynk}. Moreover, by the stochastic continuity of $p(y,s,x,t)$, $T_s^t$ is strongly continuous. As yet, we do not know whether the process itself is continuous i.e. has continuous sample paths. To this end, it suffices to check whether the so called "Dynkin condition", \cite{karlin} $${ lim_{t\downarrow s}{1\over {t-s}}\: sup_{y\in K}\; [\int_{|x-y|> \epsilon } p(y,s,x,t)dx]\: =\: 0}\eqno (28)$$ is valid for every $\epsilon >0$ and every compact set $K$. We have (remember that $g(x,t)$ is strictly positive, continuous and bounded): $$sup_{y\in K}\: \int_{|x-y|> \epsilon } p(y,s,x,t)dx\; =\; sup_{y \in K}\: {1\over {g(y,s)}}\int_{|x-y|> \epsilon }k(y,s,x,t)g(x,t)dx\: \leq $$ $${{{sup_{x}\: g(x,t)}\over {inf_{y\in K}\: g(y,s)}}\: \int_{|x-y|> \epsilon }k(y,s,x,t)dx\: \leq \: C\: \int_{|x-y|> \epsilon }k_0(x-y,t-s)dx}\eqno (29)$$ where (compare e.g. the previous estimate (22)) $${C={{sup_{x}\: g(x,t)}\over {inf_{y\in K}\: g(y,s)}}\; exp[M(t-s)]}\eqno (30)$$ and $k_0(x-y,t-s)$ is the heat kernel. Finally, we arrive at: $${lim_{t\downarrow s} {1\over {t-s}}\: sup_{y\in K}\; [\int_{|x-y|> \epsilon } p(y,s,x,t)dx]\; \leq }$$ $${ C\: lim_{t\downarrow s}{1\over {t-s}}\int_{|z|> \epsilon }k_0(z,t-s)dz\: =\: 0}\eqno (31)$$ So, the stochastic process we are dealing with, is continuous. Interestingly, "a continuous in time parameter stochastic processes which possesses the (strong) Markov property and for which the sample paths $X(t)$ are almost always (i.e. with probability one) continuous functions of $t$ is called a diffusion process", see e.g. chapter 15 of \cite{karlin}. 
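For completeness, the elementary Gaussian tail estimate behind the last limit in (31) may be spelled out (this is a standard bound, added here only as a reading aid): $${1\over {t-s}}\int_{|z|>\epsilon }k_0(z,t-s)\, dz\: \leq \: {2\over {t-s}}\int_{\epsilon }^{\infty }{z\over {\epsilon }}\, [4\pi (t-s)]^{-1/2}\, exp[-{z^2\over {4(t-s)}}]\, dz\: =\: {4\over {\epsilon }}\, [4\pi (t-s)]^{-1/2}\, exp[-{{\epsilon ^2}\over {4(t-s)}}]$$ which indeed tends to zero as $t\downarrow s$, since the exponential factor decays faster than any inverse power of $t-s$ grows.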
\subsection{The interpolating stochastic dynamics: compatibility with the temporally adjoint parabolic evolutions} The formulas (14) determine what is called, \cite{freid}, the generalized solution of a parabolic equation: it admits functions which are not necessarily continuous and if continuous, then not necessarily differentiable. Before, we have established the continuity of the generalized solution $g(x,t)$ under rather mild assumption about the behaviour of $g(x)$ at spatial infinity. In fact, the same assumption works for $f(x,t)$. But nothing has been said about the differentiability of $f(x,t)$ and $g(x,t)$. Consequently, our reasoning seems to be somewhat divorced from the original partial differential equations (5), for which we can take for granted that certain solutions $u(x,t)$ and $v(x,t)$ exist in the time interval $0\leq t\leq T$. For this, we must assume that $c(x,t)$ is a continuous function. Let us consider the solutions of (5) that are bounded functions of their arguments. It is instructive to point out that we do not impose any restrictions on the growth of $c(x,t)$ when $|x|\rightarrow \infty $, and consequently we do not assume that solutions of parabolic equations (5) have bounded derivatives. Then, \cite{freid}, the solution $u(x,t)$ of the forward parabolic equation (5) is known to admit the Feynman-Kac representation with the integral kernel (7),(13), where $${u(x,t)=\int k(y,s,x,t)u(y,s)dy}\eqno (32)$$ for $0\leq s<t\leq T$. At this point let us define $${U(x,t)=v(x,T-t)}\eqno (33)$$ for all $t\in [0,T]$ and observe that, as a consequence of the time adjoint equation (5) for which $v(x,t)$ \it is \rm a solution, the newly introduced function $U(x,t)$ solves the forward equation (5): $${\partial _tU(x,t)=\triangle U(x,t) - c(x,T-t)U(x,t)}\eqno (34)$$ with a slightly rearranged potential: $c(x,t)\rightarrow c(x,T-t)$. By the assumed boundedness of the solution $v(x,t)$ of (5), we arrive at the Feynman-Kac formula $${U(x,t)=\int K(y,s,x,t)U(y,s)dy}\eqno (35)$$ with the corresponding kernel $K(y,s,x,t)$ of the (time ordering implicit) operator $exp[-\int_s^tH(T-\tau )d\tau ]$, where $H(T-\tau )=-\triangle + c(T-\tau )$. Let us emphasize that in case of the time independent potential, $c(x,t)=c(x)$ for all $0\leq t\leq T$, the kernel $K$ coincides with $k$. The previous Brownian bridge argument (12), (13) retains its validity, and we have: $${K(y,s,x,t)=[4\pi (t-s)]^{-1/2} exp[-{{(x-y)^2}\over {4(t-s)}}]\cdot } \eqno (36)$$ $$\int d\mu (\alpha )\: exp[-\int_s^t c({{t-\tau }\over {t-s}}y + {{\tau -s}\over {t-s}}x + \sqrt{t-s}\: \alpha ({{\tau -s}\over {t-s}})\: ,T-\tau )d\tau ]$$ which, after specializing to the case of $s=0,t=T$ and accounting for the invariance of the Brownian bridge measure with respect to the replacement of sample paths $\omega (\tau )$ by sample paths $\omega (T-\tau )$, \cite{nag,nel}, gives rise to: $${K(y,0,x,T)=(4\pi T)^{-1/2} exp[-{{(x-y)^2}\over {4T}}]\cdot }\eqno (37)$$ $$\int d\mu (\alpha )\: exp[-\int_0^T c({\sigma \over T}y + (1- {\sigma \over T})x + \sqrt{T}\: \alpha ({\sigma \over T}), \sigma )d\sigma ]$$ where $\sigma =T-\tau $.\\ A comparison of (37) with (13) proves that we have derived an identity: $${K(y,0,x,T)=k(x,0,y,T)}\eqno (38)$$ whose immediate consequence is the formula $${U(x,T)=v(x,0)=\int k(x,0,y,T)v(y,T)dy}\eqno (39)$$ for the backward propagation of $v(y,T)$ into $v(x,0)$. 
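The transposition identity (38) is also easy to check numerically. The sketch below (ours, purely for illustration) approximates both time-ordered kernels by a Strang split-operator product of heat and potential factors on a grid, for an explicitly time-dependent potential, and confirms that the two matrices are transposes of one another:
\begin{verbatim}
import numpy as np

x = np.linspace(-6.0, 6.0, 241); dx = x[1] - x[0]
T, N = 1.0, 100
dt = T / N
G = np.exp(-(x[:, None] - x[None, :]) ** 2 / (4.0 * dt)) / np.sqrt(4.0 * np.pi * dt)

def c(xv, t):                     # an explicitly time-dependent potential
    return 0.5 * xv ** 2 * (1.0 + 0.5 * np.sin(3.0 * t))

def kernel(pot):
    # time-ordered kernel matrix M[i, j] ~ k(x_j, 0, x_i, T) for `pot`
    M = np.eye(len(x)) / dx
    for m in range(N):            # Strang steps, potential at midpoint times
        D = np.exp(-0.5 * dt * pot(x, (m + 0.5) * dt))
        M = (D[:, None] * G * D[None, :] * dx) @ M
    return M

k_fwd = kernel(c)                            # kernel of (6) for c(x, t)
K_rev = kernel(lambda xv, t: c(xv, T - t))   # kernel built with c(x, T - t)
print(np.max(np.abs(K_rev - k_fwd.T)))       # ~0, in agreement with (38)
\end{verbatim}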
We shall utilize (39) and (32), under an \it additional \rm assumption that the previous, hitherto arbitrary, probability density data $\rho _0(x), \rho _T(x)$, are actually determined by the initial and terminal values of the solutions $u(x,t),\: v(x,t)$ of (5), according to: $${\rho _0(x)=u(x,0)v(x,0)}$$ $${\rho _T(x)=u(x,T)v(x,T)}\eqno (40)$$ Our present aim is to show that, with this assumption, we can identify the (still abstract) functions $f(x,t)$, $ g(x,t)$, (14), with $u(x,t)$ and $v(x,t)$, respectively. By (32), (39) there holds: $${\rho _0(x)=u(x,0)\int k(x,0,y,T) v(y,T)dy}\eqno (41)$$ $$\rho _T(x)=v(x,T)\int k(y,0,x,T)u(y,0)dy$$ and, in view of the uniqueness of the solution of the Schr\"{o}dinger system, once the boundary densities and the continuous strictly positive kernel are specified, we realize that the propagation formulas (14) involve solutions of (5) through the respective initial and terminal data: $$ {f(x)=u(x,0)}$$ $${g(x)=v(x,T)}\eqno (42)$$ Moreover, (5),(14) imply that $f(x,t)=u(x,t)$ holds true identically for all $t\in [0,T]$. What remains to be settled is whether the function $g(x,t)$ can be identified with the solution $v(x,t)$ of (5) for all $t\in [0,T]$. This property is obvious when the time-independent potential $c(x)$ is investigated instead of the more general $c(x,t)$. Likewise, the identification is beyond doubt when $k(y,s,x,t)$ is a fundamental solution of the parabolic equation in variables $x,t$. In this case, $k(y,s,x,t)$ is the unique solution of the system (5), and solves the adjoint equation in variables $y,s$, \cite{kal,bes1,bes2}. Then, because $f(x),g(x)$ are locally integrable, an immediate consequence is, \cite{watson}, that $f(x,t)$ and $g(x,t)$ are positive solutions of (5). Their identification with $u(x,t)$ and $v(x,t)$, respectively, follows from the uniqueness of positive solutions, \cite{bes1}. Let us begin with a minor generalization of (33) and define: $${U_s(x,t)=v(x,T+s-t)\; \; , \; \; t\in [s,T]}\eqno (43)$$ Clearly, the parabolic equation (34) is satisfied by $U_s(x,t)$ if, instead of $c(x,T-t)$, the potential $c(x,T+s-t)$ is introduced. An immediate propagation formula follows $${U_s(x,t)=\int K_s(y,s,x,t)U_s(y,s)dy}\eqno (44)$$ The integral kernel $K_s$ differs from the previous $K$, (36), in the explicit time dependence of the potential $c(x,T-\tau )\rightarrow c(x,T+s-\tau )$. By putting $t=T$ in (44) we get: $${v(x,s)=\int K_s(y,s,x,T)v(y,T)dy}\eqno (45)$$ and by the previous part of our demonstration we know that $${g(x,s)=\int k(x,s,y,T)v(y,T)dy}\eqno (46)$$ At this point, it is enough to prove that the identity (cf. (38)) $${K_s(y,s,x,T)=k(x,s,y,T)}\eqno (47)$$ holds for any $s$, $0\leq s\leq T$.
Let us exploit the Brownian bridge scaling (13) again, so that $${k(x,s,y,T)=[4\pi (T-s)]^{-1/2} exp[-{(x-y)^2\over {4(T-s)}}]\cdot } \eqno (48)$$ $$\int d\mu (\alpha ) exp[-\int_s^T c({{T-\tau }\over {T-s}}x + {{\tau - s}\over {T-s}}y + \sqrt{T-s}\; \alpha ({{\tau -s}\over {T-s}}),\; \tau )d\tau ]$$ and, analogously $${K_s(y,s,x,T)=[4\pi (T-s)]^{-1/2} exp[-{(x-y)^2\over {4(T-s)}}] \cdot }\eqno (49)$$ $$\int d\mu (\alpha ) exp[-\int_s^T c({{T-\tau }\over {T-s}}y + {{\tau -s}\over {T-s}}x + \sqrt{T-s}\; \alpha ({{\tau -s}\over {T-s}}), \; T+s-\tau ) d\tau ]$$ By changing: $${\alpha ({{\tau -s}\over {T-s}})\Rightarrow \alpha (1-{{\tau -s}\over {T-s}})=\alpha ({{T-\tau }\over {T-s}})} \eqno (50)$$ and substituting $\sigma =T+s-\tau$, where $\tau $ alone is the running variable, we finally recover $${K_s(y,s,x,T)=[4\pi (T-s)]^{-1/2} exp[-{(x-y)^2\over {4(T-s)}}]\cdot } \eqno (51)$$ $$\int d\mu (\alpha )\: exp[-\int_T^s c({{\sigma -s} \over {T-s}}y + {{T-\sigma }\over {T-s}}x + \sqrt{T-s}\; \alpha ({{\sigma -s}\over {T-s}}),\; \sigma )(-d\sigma )]= k(x,s,y,T)$$ Hence, $${g(x,s)=v(x,s)}\eqno (52)$$ is valid for all time instants $0\leq s\leq T$. This implies that $p(y,s,x,t)=k(y,s,x,t){{v(x,t)}\over {v(y,s)}}$ defines a consistent transition probability density of the continuous Markovian interpolation. \\ We have succeeded in proving that:\\ (i) If a continuous, strictly positive Feynman-Kac kernel of the forward parabolic equation (5) is employed to solve the Schr\"{o}dinger boundary data problem (1) for an \it arbitrary \rm pair of nonzero probability densities $\rho _0(x)$ and $\rho _T(x)$, then we can construct a Markov stochastic process, which is continuous and provides for an interpolation between these boundary data in the time interval $[0,T]$. \\ (ii) Given the time adjoint parabolic system (5) with bounded solutions $u(x,t),\: v(x,t)$ in the time interval $[0,T]$. If the boundary densities are defined according to (40), then the Schr\"{o}dinger problem (1)-(3) provides us with a unique continuous Markov interpolation that is compatible with the time evolution of $\rho (x,t)=u(x,t)v(x,t), \; t\in [0,T]$. \subsection{Whence diffusions?} Our strategy of deducing a probabilistic solution of the Schr\"{o}dinger boundary data problem in terms of Markov stochastic processes running in continuous time was accomplished in a number of steps accompanied by the gradual strengthening of restrictions imposed on the Feynman-Kac potential, to yield a continuous process (cf. Section II.3), and eventually to get it compatible with a given a priori parabolic evolution (Section II.4). In a broad sense, \cite{karlin}, it can be called a diffusion. However, this rather broad definition of the diffusion process is significantly narrowed in the physical literature: while demanding the continuity of the process, additional restrictions are imposed to guarantee that the mean and variance of the infinitesimal displacements of the process have the standard meaning of the drift and diffusion coefficient, respectively, \cite{horst}. According to the general wisdom, diffusions arise in conjunction with the parabolic evolution equations, since then only the conditional averages are believed to make sense in the local description of the dynamics. It is not accidental that forward parabolic equations (5) are commonly called the generalized diffusion equations.
Also, the fact that the Feynman-Kac formula involves the integration over sample paths of the Wiener process seems to suggest some diffusive features of the Schr\"{o}dinger interpolation, even if we are unable to establish this fact in a canonical manner. Clearly, the following conditions, valid for any $\epsilon >0$: \\ (a) there holds $lim_{t\downarrow s}{1\over {t-s}}\int_{|y-x|>\epsilon } p(y,s,x,t)dx=0$, (notice that (a) is a direct consequence of the stronger Dynkin condition (28)),\\ (b) there exists a drift function $b(x,s)=lim_{t\downarrow s}{1\over {t-s}}\int_{|y-x| \leq \epsilon }(y-x)p(x,s,y,t)dy$, \\ (c) there exists a diffusion function $a(x,s)=lim_{t\downarrow s}{1\over {t-s}} \int_{|y-x|\leq \epsilon } (y-x)^2 p(x,s,y,t)dy$,\\ are conventionally interpreted to define a diffusion process, \cite{horst}. To our knowledge, no rigorous demonstration is available in the Schr\"{o}dinger problem context, in the case when the involved semigroup kernel is \it not \rm a fundamental solution of the parabolic equation. Let us impose a restriction on a lower bound of the solution $v(x,t)$ of the backward equation (5). Namely, we assume that there exist constants $c_1>0,c_2>0$ such that $v(y,s)\geq c_1 exp(-c_2y^2)$ for all $s\in [0,t], t<T$. This property was found to be respected by a large class of parabolic equations, \cite{watson2}, and it automatically ensures that the condition (25) of Section II.3 is satisfied. Indeed: $$0\leq lim_{|y|\rightarrow \infty }\: {1\over {v(y,s)}} \int_{-\infty }^{+\infty } k(y,s,x,t)v(x,t)\chi _K(x)dx \leq $$ $$ {{1\over {c_1}} [4\pi (t-s)]^{-1/2} \: exp[M(t-s)]\cdot }\eqno (53)$$ $$ [sup_{x\in K}\: v(x,t)]\: lim_{|y|\rightarrow \infty }\: exp(c_2y^2)\: \int_K exp[-{(x-y)^2\over {4(t-s)}}]dx = 0 $$ if $t-s\leq \epsilon $ for sufficiently small $\epsilon >0$ (for example $\epsilon = 1/16c_2$). \\ It is our purpose to complete the previous analysis by demonstrating that, with the above assumption on $v(x,t)$, the continuous Markov process we have constructed actually \it is \rm a diffusion process. Our subsequent arguments will rely on the Dynkin treatise \cite{dynk}. It is well known that the infinitesimal (local) characteristics of a continuous Markov process can be defined in terms of its so-called characteristic operator. It is closely linked with the standard infinitesimal (Markov) generator of the process, and we shall take advantage of this link below. Let us agree, following Dynkin, to call a continuous Markov process a diffusion if its characteristic operator ${\cal U}$ is defined on twice differentiable functions (we skip a more detailed definition, \cite{dynk}). In this case $x\rightarrow x-x_0$ and $x\rightarrow (x-x_0)^2$ allow for the definition of a drift and a diffusion function, respectively: $${[{\cal U} (x-x_0)](x_0,s)=b(x_0,s)}\eqno (54)$$ $$[{\cal U}((x-x_0)^2)](x_0,s)=a(x_0,s)$$ By results of Sections II.3 and II.4 we know that our transition probability density $p(y,s,x,t)=k(y,s,x,t){{v(x,t)}\over {v(y,s)}} $, inspired by the Schr\"{o}dinger boundary data problem, gives rise to a continuous Markov process. To see whether it can be regarded as a diffusion, we must verify the above two defining properties (54). First, let us consider the infinitesimal operator $A$ (Markov generator) of the corresponding strongly continuous semigroup $T_s^t: C_{\infty }(R)\rightarrow C_{\infty }(R)$, which we have introduced via the formula (26).
We are interested in domain properties of $A$, in view of the fact that the characteristic operator ${\cal U}$ is a natural extension of $A$, $A\subset {\cal U}$, \cite{dynk}. We denote by $C_c^2(R)$ the space of continuous functions with compact support which possess continuous derivatives up to second order. For $h\in C_c^2(R)$ we have $${lim_{\delta \downarrow 0}\: {1\over {\delta }}[\int_{-\infty }^{+\infty } p(y,s,x,s+\delta )h(x)dx - h(y)]=}\eqno (55)$$ $${1\over {v(y,s)}}lim_{\delta \downarrow 0}\: {1\over {\delta }}[\int_{-\infty }^{+\infty }k(y,s,x,s+\delta ) v(x,s+\delta )h(x)dx - v(y,s)h(y)]$$ Because $v$ is continuously differentiable with respect to time, we have $${v(x,s+\delta )=v(x,s)+\delta \: \partial _sv(x,s')}\eqno (56)$$ where (cf. the standard Taylor expansion formula) $s'=s+ \vartheta \delta ,\: 0\leq \vartheta \leq 1$. Hence $${lim_{\delta \downarrow 0}{1\over {\delta }} [\int_{-\infty }^{+\infty } p(y,s,x,s+\delta )h(x)dx - h(y)]=}\eqno (57)$$ $${1\over {v(y,s)}} lim_{\delta \downarrow 0}{1\over \delta }[\int_{-\infty }^{+\infty } dx\: k(y,s,x,s+\delta )v(x,s)h(x) - v(y,s)h(y)] + \: $$ $${1\over {v(y,s)}} lim_{\delta \downarrow 0}[ \int_{-\infty }^{+\infty }k(y,s,x,s+\delta )\partial _sv(x,s')h(x)dx]$$ We shall exploit the strongly continuous semigroup evolution associated with the parabolic system (5). Because of the domain property $C_c^{\infty }(R)\subset D(H)$, the smooth functions with compact support are acted upon by $H=\triangle -c(x,s)$, and $H$ is closed as an operator on $C_{\infty }(R)$. But then also $C_c^2(R) \subset D(H)$ and so the first term in (57) takes the form: $${{1\over {v(y,s)}}[\triangle (vh)(y,s) - c(y,s)v(y,s)h(y)]} \eqno (58)$$ while the second equals $${{1\over {v(y,s)}}[\partial _sv(y,s)]h(y)= {1\over {v(y,s)}}[-\triangle v(y,s) + c(y,s)v(y,s)]h(y)}\eqno (59)$$ Thus, (55) is point-wise convergent: $${lim_{\delta \downarrow 0}\: {1\over \delta }[ \int_{-\infty}^{+\infty }p(y,s,x,s+\delta )h(x)dx - h(y)]= }\eqno (60)$$ $${1\over {v(y,s)}}[(\triangle v(y,s))h(y) + 2\nabla v(y,s)\nabla h(y) + v(y,s)\triangle h(y) - c(y,s)v(y,s)h(y) - $$ $$(\triangle v(y,s))h(y) + c(y,s)v(y,s)h(y)]= \triangle h(y) + 2({{ \nabla v}\over v})(y,s) \nabla h(y)$$ Now, we shall establish the boundedness of: $${sup_{y\in R;0<\delta <\epsilon }\; [{1\over \delta }\: |\int_{-\infty }^{+\infty } p(y,s,x,s+\delta )h(x)dx - h(y)|]}\eqno (61)$$ for some small $\epsilon $. Because $C_c^2(R) \subset D(H)$, there holds $${{1\over \delta }[\int_{-\infty }^{+\infty } k(y,s,x,s+\delta )v(x,s)h(x)dx - v(y,s)h(y)]\rightarrow \: [\triangle - c(y,s)](vh)(y,s)}\eqno (62)$$ uniformly in $y$, as $\delta \rightarrow 0$. It implies that for any compact set $K$ there holds $${sup_{y\in K;0<\delta <\epsilon }\; {1\over \delta } | \int_{-\infty }^{+\infty } p(y,s,x,s+\delta )h(x)dx - h(y)| \leq }\eqno (63)$$ $$[sup_{y\in K}\: {1\over {v(y,s)}}]\: sup_{y\in K;0<\delta <\epsilon }\: [{1\over \delta }|\int_{-\infty }^{+\infty }k(y,s,x,s+\delta )v(x,s)h(x)dx - v(y,s)h(y)| + $$ $$|\int_{-\infty }^{+\infty } k(y,s,x,s+\delta )\partial _sv(x,s')h(x)dx|] < \infty $$ We thus have the required boundedness for all $y\in K$, i.e. on compact sets. For $y\in R\backslash K$ we shall make the following estimates. Because the support of $h$ is compact, we can define $supp\: h\subset [-n,n]$ for some natural number $n$. Let $K=[-3n,3n]$.
Then: $$sup_{y\in R\backslash K;0<\delta <\epsilon }\: {1\over \delta }|\int_{-\infty }^{+\infty } p(y,s,x,s+\delta )h(x)dx - h(y)|= $$ $${sup_{y\in R\backslash K;0<\delta <\epsilon }\: {1\over \delta }| \int_K p(y,s,x,s+\delta )h(x)dx| \leq }\eqno (64)$$ $$[sup_{x\in K}\: |h(x)|]\: sup_{y\in R\backslash K;0<\delta <\epsilon }\: {1\over \delta }{1\over {v(y,s)}} \int_K k(y,s,x,s+\delta ) v(x,s+\delta )dx \leq $$ $$[sup_{x\in K}\: |h(x)|]\: [sup_{x\in K;s\leq s'\leq s+\epsilon }\: v(x,s')]\: sup_{y\in R\backslash K;0<\delta <\epsilon } \: {1\over \delta }{1\over {v(y,s)}}\int k(y,s,x,s+\delta )dx$$ In view of our assumption $v(y,s)\geq c_1 exp(-c_2y^2)$, there holds: $${sup_{y\in R\backslash K;0<\delta <\epsilon }\: {1\over \delta }|\int p(y,s,x,s+\delta )h(x)dx| \leq }\eqno (65)$$ $$C\cdot sup_{|y|\geq 3n;0<\delta <\epsilon }\; exp(c_2y^2)\: \delta ^{-3/2}\: \int_{-n}^{+n} exp[-{{(x-y)^2}\over {4\delta }}]\: dx$$ where $${C=c_1^{-1}(4\pi )^{-1/2}exp(M\epsilon )\: [sup_{x\in K}|h(x)|]\: sup_{x\in K;s<s'<s+\epsilon }v(x,s')}\eqno (66)$$ If we choose $\epsilon =1/16c_2$, then $${exp(c_2y^2)\: \int exp[-{{(x-y)^2}\over {4\delta }}]\: dx \leq 4\delta exp(-{n^2\over \delta })}\eqno (67)$$ for every $|y|\geq 3n$, and so $${sup_{y\in R\backslash K;0<\delta <\epsilon }\: {1\over \delta }|\int p(y,s,x,s+\delta )h(x)dx|\leq 4Csup_{0<\delta <\epsilon }\delta ^{-1/2}exp(-{n^2\over \delta }) < \infty }\eqno (68)$$ Consequently, the desired boundedness (61) holds true for all $y\in R$, together with the previously established point-wise convergence (60). Altogether, it means, \cite{dynk}, that the weak generator of $T_s^t$ is defined at least on $C_c^2(R)$. Moreover, while acting on $h\in C_c^2(R)$ it gives $\triangle h + 2(\nabla ln \: v)\nabla h$. Because $T_s^t$ is strongly continuous in $C_{\infty }(R)$, the Markov generator $A$ coincides with the weak generator, \cite{dynk}, i.e. $A=\triangle +2(\nabla ln\: v)\nabla $ on $C_c^2(R)$. Finally, let us choose $h_0\in C_c^2(R)$ such that $h_0(x)=1$ in some neighbourhood of the point $x_0$. Then, $(x-x_0)h_0(x)$ and $(x-x_0)^2h_0(x)$ both belong to $C_c^2(R)$ and therefore: $${A[(x-x_0)h_0 ](x_0,s) = \triangle [(x-x_0)h_0 ](x_0) + } \eqno (69)$$ $$2(\nabla ln\: v)(x_0,s)\nabla [(x-x_0)h_0](x_0)=2(\nabla ln\: v)(x_0,s)\:$$ $$ A[(x-x_0)^2h_0](x_0,s) = 2 $$ Because $A\subset {\cal U}$ and ${\cal U}$ is a local operator, \cite{dynk}, we have the following inclusion $C_c^2(R) \subset D({\cal U})$ and (we can get rid of $h_0$): $${[{\cal U}(x-x_0)](x_0,s)=2(\nabla ln\: v)(x_0,s)}\eqno (70)$$ $$[{\cal U}(x-x_0)^2](x_0,s)=2$$ It means that we indeed obtain a diffusion process with the drift $2\nabla ln\: v$ and a constant diffusion coefficient, according to the standards of \cite{zambr,nel,carlen}.
It is worth emphasizing that since $(x-x_0)h_0(x)$ and $(x-x_0)^2h_0(x)$ belong to $D(A)$, and since functions from $C_c^2(R)$ can be used to approximate, under an integral, an indicator function of the set $[x_0-\epsilon,x_0+\epsilon ],\epsilon >0$, we can directly evaluate: $${lim_{t\downarrow s} {1\over {t-s}}\int_{-\infty }^{+\infty } p(x_0,s,x,t)(x-x_0)h_0(x)dx = }\eqno (71)$$ $$lim_{t\downarrow s} {1\over {t-s}}\int_{|x-x_0|\leq \epsilon } p(x_0,s,x,t)(x-x_0)dx = 2(\nabla ln\: v)(x_0,s)$$ and similarly $${lim_{t\downarrow s}{1\over {t-s}}\int_{|x-x_0|\leq \epsilon }p(x_0,s,x,t)(x-x_0)^2dx = 2}\eqno (72)$$ Because the Dynkin condition (28) implies that $${lim_{t\downarrow s} {1\over {t-s}}\int_{|x-x_0|> \epsilon } p(x_0,s,x,t)dx = 0 }\eqno (73)$$ we arrive at the commonly accepted definition of the diffusion process, summarized in formulas (71)-(73), with the functional expression for the drift, (71), given in the familiar, \cite{zambr,nel,blanch}, gradient form. \section{Nonstationary Schr\"{o}dinger dynamics: from the Feynman-Kac kernel to diffusion process} In our previous paper \cite{olk}, the major conclusion was that in order to give a definitive probabilistic description of the quantum dynamics as a \it unique \rm diffusion process solving Schr\"{o}dinger's interpolation problem, a suitable Feynman-Kac semigroup must be singled out. Let us point out that the measure preserving dynamics, permitted in the presence of conservative force fields, was investigated in \cite{blanch}, see also \cite{carm,freid}. The present analysis was performed quite generally and extends to the dynamics affected by time dependent external potentials, with no clear-cut discrimination between the nonequilibrium statistical physics and essentially quantum evolutions. The formalism of Section II encompasses both groups of problems. Presently, we shall restrict our discussion to the free Schr\"{o}dinger picture quantum dynamics. Following Ref. \cite{olk} we shall discuss the rescaled problem so as to eliminate all dimensional constants. 
The free Schr\"{o}dinger evolution $i\partial _t\psi = -\triangle \psi $ implies the following propagation of a specific Gaussian wave packet: $${\psi (x,0)=(2\pi )^{-1/4} exp\: (-{{x^2}\over {4}})\; \longrightarrow }\eqno (74) $$ $$\psi (x,t)=({2\over \pi })^{1/4} \; (2+ 2it)^{-1/2} exp[-{x^2\over {4(1+it)}}]$$ So that $${\rho _0(x)=|\psi (x,0)|^2=(2\pi )^{-1/2}\: exp[-{x^2\over 2}] \longrightarrow }\eqno (75)$$ $$\rho (x,t)=|\psi (x,t)|^2= [2\pi (1+t^2)]^{-1/2}\: exp [-{x^2\over {2(1+t^2)}}]$$ and the Fokker-Planck equation (easily derivable from the standard continuity equation $\partial _t\rho =-\nabla (v\rho ),\; v(x,t)= xt/(1+t^2)$) holds true: $${\partial _t\rho = \triangle \rho - \nabla (b\rho )\; \; , \; \; b(x,t)= - {{1-t}\over {1+t^2}}\: x} \eqno (76)$$ The Madelung factorization $\psi =exp(R+iS)$ implies (notice that $v=2\nabla S$ and $b=2\nabla (R+S)$) that the related real functions $\theta(x,t)=exp[R(x,t)+S(x,t)]$ and $\theta _*(x,t)= exp[R(x,t)-S(x,t)]$ read: $$\theta (x,t)=[2\pi (1+t^2)]^{-1/4} exp(-{x^2\over 4}\: {{1-t}\over {1+t^2}} - {1\over 2} arctan\: t)$$ $${\theta _*(x,t)=[2\pi (1+t^2)]^{-1/4} exp(-{x^2\over 4}\: {{1+t}\over {1+t^2}} + {1\over 2} arctan\: t)}\eqno (77)$$ They solve a suitable version of the general parabolic equations (5), namely : $${\partial _t \theta =-\triangle \theta + c \theta } \eqno (78)$$ $$\partial _t\theta _* =\triangle \theta _* - c \theta _*$$ with $${c(x,t) = {x^2\over {2(1+t^2)^2}} - {1\over {1+t^2}} = 2{{\triangle \rho ^{1/2}}\over {\rho ^{1/2}}}}\eqno (79)$$ Anticipating further discussion, let us mention that the Feynman-Kac kernel, in this case, \it is \rm a fundamental solution of the time adjoint system (78). For clarity of exposition, let us recall that a \it fundamental solution \rm of the forward parabolic equation (5) is a continuous function $k(y,s,x,t)$, defined for all $x,y,\in R$ and all $0\leq s<t\leq T$, which has the following two properties: \\ (a) for any fixed $(y,s)\in R\times (0,T)$, the function $(x,t)\rightarrow k(y,s,x,t)$ is a regular (i.e. continuous and continuously differentiable the needed number of times) solution of the forward equation (5) in $R\times (s,T]$\\ (b) for all continuous functions $\phi (x)$ with a compact support, there holds $lim_{(t,x)\rightarrow (s,z)}$ $\int_{-\infty }^{+\infty } k(y,s,x,t)\phi (y) dy = \phi (z)$.\\ First, we need to verify (this will be done self-explanatorily) that $c(x,t)$, (79), is H\"{o}lder continuous of exponent one on every compact subset of $R\times [0,T]$. 
It follows from direct estimates: $${|c(x_2,t_2)-c(x_1,t_1)|\leq {1\over 2} |{x_2^2\over {(1+t_2^2)^2}}- {x_1^2\over {(1+t_1^2)^2}}|\: +\: |{1\over {1+t_2^2}}-{1\over {1+t_1^2}}|\leq } \eqno (80)$$ $${1\over 2}|{x_2\over {1+t_2^2}}-{x_1\over {1+t_1^2}}|( {|x_2|\over {1+t_2^2}} + {|x_1|\over {1+t_1^2}})\: + \: |t_2 -t_1|\: {{|t_1|+|t_2|}\over {(1+t_1^2)(1+t_2^2)}}$$ But, in case of $|x_1|,|x_2|\leq K$ and $|t_1|,|t_2|\leq T$ we have $${|c(x_2,t_2)-c(x_1,t_1)|\leq K\: |{x_2\over {1+t_2^2}}- {x_1\over {1+t_1^2}}|\: +\: 2T|t_2-t_1|}\eqno (81)$$ Furthermore: $${|{x_2\over {1+t_2^2}}-{x_1\over {1+t_1^2}}|\leq |{{x_2-x_1}\over {(1+t_2^2)(1+t_1^2)}}|\: +\: |{{x_2t_1^2-x_1t_2^2}\over {(1+t_2^2)(1+t_1^2)}}|\leq }\eqno (82)$$ $$|x_2-x_1|+T^2|x_2-x_1|+2KT|t_2-t_1|$$ implies (the new constant $C$ majorizes all remaining ones) $${|c(x_2,t_2)-c(x_1,t_1)|\leq C\: (|x_2-x_1|\: +\: |t_2-t_1|)\leq \sqrt 2\: C\: [(x_2-x_1)^2+(t_2-t_1)^2]^{1/2}}\eqno (83)$$ Let us also notice that we can introduce an auxiliary function $h(x,t)=arctan\: t$ such that there holds $${\triangle h-c(x,t)h-\partial_t h=-{{x^2h(x,t)}\over {2(1+t^2)^2}}\: \leq \: 0}\eqno (84)$$ We have thus satisfied the crucial assumptions I and II of Ref. \cite{bes2}. As a consequence, we have granted the existence of a fundamental solution $k(y,s,x,t)\geq 0$. Moreover, for every bounded and continuous function $\phi (x), \: |\phi (x)|\leq C$, where $C>0$ is arbitrary, the function $${u(x,t)=\int_{-\infty }^{+\infty } k(y,0,x,t)\phi (y)dy} \eqno (85)$$ is a solution of the Cauchy problem, i.e. solves (79) under the initial condition $u(x,0)=\phi (x)$, so that $|u(x,t)|\leq C$. All that implies the uniqueness of the fundamental solution $k(y,s,x,t)$, and in view of $-c(x,t)\leq 1$ its strict positivity. The function $k(y,s,x,t)$ is also a solution of the adjoint equation with respect to variables $y,s$: $\partial _sk=-\triangle _yk + c(y,s)k$ in $R\times [0,T)$. It is obvious that the Chapman-Kolmogorov composition rule holds true, in view of the validity of the Feynman-Kac representation in the present case.\\ Basically, we must be satisfied with the Feynman-Kac representation of the fundamental solution, whose existence we have granted so far. In our case, the so called parametrix method, \cite{kal}, can be used to construct fundamental solutions. In fact, since $c(x,t)$ is locally Lipschitz i.e. H\"{o}lder continuous of exponent one and quadratically bounded $|c(x,t)|\leq x^2+1$, the infinite series: $${k(y,s,x,t)=\: \sum_{n=0}^{+\infty } (-1)^n k_n(y,s,x,t)} \eqno (86)$$ where $k_0(y,s,x,t)=[4\pi (t-s)]^{-1/2} \: exp[-(x-y)^2/4(t-s)]$ is the heat kernel and $${k_n(y,s,x,t)=\int_s^td\tau \int _{-\infty }^{+\infty } dz\: c(z,\tau )\: k_{n-1}(y,s,z,\tau )k_0(z,\tau , x,t)}\eqno (87)$$ are known to converge for all $x,y\in R$, $0\leq s<t\leq T$, and $t-s<T_0$ where $T_0<T$, and define the fundamental solution, \cite{krzyz}. By putting $p(y,s,x,t)=k(y,s,x,t){{\theta (x,t)} \over {\theta (y,s)}}$ we arrive at the fundamental solution of the second Kolmogorov (Fokker-Planck) equation $${\partial _tp(y,s,x,t) =\triangle _xp(y,s,x,t) -\nabla _x [b(x,t)p(y,s,x,t)]}\eqno (88)$$ where $b=2{{\nabla \theta }\over {\theta }}$ and $\rho =uv$, and in particular $\rho =\theta \theta _*=|\psi |^2$, are consistently propagated by $p$. 
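As a side remark (not needed for the argument), the closed formulas (75)--(79), the factorization $\rho =\theta \theta _*$, and the statement that $\theta ,\theta _*$ of (77) solve the adjoint pair (78) can be checked symbolically. The following minimal SymPy sketch performs exactly this consistency check; each printed residual should reduce to zero.
\begin{verbatim}
import sympy as sp

x, t = sp.symbols('x t', real=True)
s = 1 + t**2

rho    = sp.exp(-x**2/(2*s)) / sp.sqrt(2*sp.pi*s)            # eq. (75)
b      = -x*(1 - t)/s                                        # drift of eq. (76)
c      = x**2/(2*s**2) - 1/s                                 # potential, eq. (79)
theta  = (2*sp.pi*s)**sp.Rational(-1, 4)*sp.exp(-x**2*(1 - t)/(4*s) - sp.atan(t)/2)
theta_ = (2*sp.pi*s)**sp.Rational(-1, 4)*sp.exp(-x**2*(1 + t)/(4*s) + sp.atan(t)/2)

# Fokker-Planck equation (76):  d_t rho = Lap rho - div(b rho)
print(sp.simplify((sp.diff(rho, t) - sp.diff(rho, x, 2) + sp.diff(b*rho, x))/rho))
# potential relation (79):  c = 2 Lap(rho^{1/2}) / rho^{1/2}
print(sp.simplify(c - 2*sp.diff(sp.sqrt(rho), x, 2)/sp.sqrt(rho)))
# factorization rho = theta * theta_*
print(sp.simplify(theta*theta_/rho - 1))
# the adjoint parabolic pair (78)
print(sp.simplify((sp.diff(theta, t)  + sp.diff(theta, x, 2))/theta  - c))
print(sp.simplify((sp.diff(theta_, t) - sp.diff(theta_, x, 2))/theta_ + c))
\end{verbatim}
We now return to the kernel $p(y,s,x,t)$ constructed above.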
It is the transition probability density of the Nelson diffusion associated with the solution (74) of the Schr\"{o}dinger equation, and at the same time a solution of the first Kolmogorov (backward diffusion) equation $${\partial _sp(y,s,x,t)=-\triangle _yp(y,s,x,t) - b(y,s)\nabla _yp(y,s,x,t)}\eqno (89)$$ Equations (88), (89) prove that the pertinent process is a diffusion: it has the standard local (infinitesimal) characteristics of a diffusion process, \cite{horst}. Obviously, the above definition of $p$ in terms of $k$ implies the validity of the compatibility condition $${c(x,t)=\partial _t\ln \theta (x,t) + {1\over 2}\Big[{{b^2(x,t)}\over 2}+\nabla b(x,t)\Big]}\eqno (90)$$ connecting the drift of the diffusion process with the Feynman-Kac potential governing its local dynamics: cf. Refs. \cite{carm,blanch} and \cite{garb2}, where an analogue of the Ehrenfest theorem was formulated for general Markovian diffusions (non-quantal ones included). Let us point out that our quantally motivated example was deliberately chosen so as not to exhibit a property typical of quantum wave functions, namely that of vanishing somewhere. In fact, because we restrict our considerations to strictly positive Feynman-Kac kernels and emphasize the uniqueness of solutions, we have left aside an important group of topics pertaining to the solution of the Schr\"{o}dinger boundary data problem when:\\ (i) the boundary densities have zeros,\\ (ii) the interpolation itself is capable of producing zeros of the probability density, even if the boundary densities have none.\\ Only case (i) can be (locally) addressed by means of strictly positive semigroup kernels; however, the uniqueness of the solution is generally lost in space dimensions higher than one \cite{fort,beur,jam}. General existence theorems are available \cite{carlen,carm} and indicate that one deals with diffusion-type processes in this case; see e.g.\ also \cite{zambr,zambr1,blanch}.\\ Case (ii) seems never to have been considered in the literature; see, however, \cite{garb1}. \vskip0.2cm {\bf Acknowledgement}: Both authors receive financial support from the KBN research grant No 2 P302 057 07. We would like to thank Professors John Klauder and Gert Roepstorff for correspondence concerning the differentiability of Feynman-Kac kernels, and Jean-Claude Zambrini for suggesting references.
\section{Introduction} \vspace{-0.1cm} Thanks to its cost-effective, power-efficient and deployment-convenient features, reconfigurable intelligent surface (RIS) technology is envisioned to be a promising technique for enhancing the spectrum and energy efficiency of 6G-and-beyond communications systems\cite{Zhangrui_MAG,marco,Pan_MAG,You_6G_new_overiview,Pan2019intelleget,Pan2019multicell}. Deploying an RIS provides additional degrees-of-freedom (DoF) that can be used to reconfigure the wireless propagation environment, which brings tremendous benefits for the wireless systems. To reap the benefits promised by RIS, accurate channel state information (CSI) is required \cite{Overview_IEEE_LEE,Beixiong2021survey,Liang_RIS_Overview}, which is challenging to achieve for the following two reasons. First, an RIS equipped with passive elements typically does not have a receiver, so does not process complex baseband signals, which means that traditional channel estimation approaches cannot be adopted in RIS-aided systems. Due to this characteristic, it is not possible to estimate the user-RIS channel and RIS-base station (BS) channel separately, and instead the cascaded channel is estimated, i.e., the equivalent user-RIS-BS channel. Second, with a large number of antennas at the BS and reflecting elements at the RIS, the cascaded channel contains a large number of channel coefficients, which can require a larger number of pilots. Hence, developing an efficient channel estimation method for RIS-aided systems with low pilot overhead is imperative. Recently, there have been many contributions on channel estimation for RIS-aided communication systems; see for example \cite{ON_OFF_MU,Liuliang-IRS,ris-omp-1,Jiguang_atomic,ris-omp-2,ris-omp-3,Zhou_ULA_TSP,CE_MIMO_RIS} and the recent overview tutorial \cite{Overview_IEEE_LEE}. Early work focused mainly on unstructured channel models, but channel estimation for these models requires a pilot overhead that is proportional to the number of RIS reflecting elements, which is often prohibitively large. On the other hand, the sparse structure of high-frequency millimeter wave (mmWave) channels, described by the angles and gains of fewer paths, has been exploited to reduce the pilot overhead of multiple-input multiple-output (MIMO) systems efficiently by leveraging compressed sensing (CS) or direction-of-arrival (DOA) estimation \cite{Super_Resolution_Hybrid,Fan_AoD_CE-TWC2018,UPA-MIMO}. Motivated by the works on structured channel models, the sparsity of the user-RIS-BS cascaded channel was exploited in \cite{ris-omp-1} using CS to reconstruct the channel. The authors in \cite{ris-omp-2} exploited the fact that the cascaded channel matrices for multiple users exhibit a common column-block sparsity since all users share the same RIS-BS channel, and developed an iterative channel estimator based on this observation. Inspired by the common column-block sparsity property, the double-structured sparsity of the cascaded channel was considered in \cite{ris-omp-3}, using the Discrete Fourier Transform (DFT) to analyze the estimation of the angle parameters. The authors of \cite{Zhou_ULA_TSP} achieved a dramatic reduction in pilot overhead by fully utilizing the correlation among the different cascaded channels. The above-mentioned works \cite{ris-omp-2,ris-omp-3,Zhou_ULA_TSP} considered multiple users but assumed that they are equipped with only a single antenna. On the other hand, the RIS-aided MIMO scenario was considered in \cite{ris-omp-1,CE_MIMO_RIS,Jiguang_atomic}. 
The authors in \cite{CE_MIMO_RIS} proposed an alternating minimization and manifold optimization (MO) estimation protocol for this scenario. To increase the estimation accuracy, a super-resolution CS technique based on atomic norm minimization was applied to cascaded channel estimation in \cite{Jiguang_atomic}. However, these three works assumed only a single user and thus did not take advantage of the inherent correlation among the channels of different users in an RIS-aided system. Apart from this, \cite{ris-omp-1,Jiguang_atomic,ris-omp-3,CE_MIMO_RIS} assumed that the numbers of scatterers in the user-RIS and RIS-BS channels are known a priori, i.e., that the sparsity level is known. In practice, however, these parameters may not be known beforehand. Moreover, a uniform linear array (ULA)-type BS, ULA-type users and/or a ULA-type RIS were assumed in the above-mentioned works, which may not be realistic for RIS-assisted communication systems. The extension to the more typical uniform planar array (UPA)-type RIS-aided multi-user (MU) system is not straightforward. First, the number of angle parameters that must be estimated is double that of a ULA-type system, and the asymptotic properties exploited for large ULAs may not be applicable. Second, and more importantly, increasing the number of parameters makes exploiting the channel correlation among multiple users extremely complex, especially for the cascaded channel parameters. Against the above background, in this paper we propose an effective three-stage channel estimation method with low pilot overhead, starting from an RIS-aided single-antenna MU mmWave communication system in which the BS and RIS are both equipped with a UPA. Then, we extend the protocol to the multi-antenna user case, where the users are also equipped with UPAs. This is the first work that investigates the UPA-type MU MIMO case. The main contributions of this work are summarized as follows: \begin{itemize} \item We develop a three-stage uplink channel estimation protocol for an RIS-aided mmWave communication system with a multi-antenna UPA-type BS, a multi-element UPA-type RIS and multiple users. The protocol is divided into two parts: full CSI estimation in the first coherence block, consisting of Stage I and Stage II, and estimation of the updated gains in the remaining coherence blocks, consisting of Stage III. In Stage I, only a typical user sends pilots to the BS for channel estimation, from which we obtain estimated gains and angle information that is used to reduce the pilot overhead in the next stage. In particular, an angle rotation operation is adopted to deal with the power leakage issue when estimating the common AoAs in this stage. In Stage II, we exploit the correlation among different users' cascaded channels and construct a re-parameterized common RIS-BS channel using the estimated CSI of the typical user, based on which we obtain the channel estimates of the other users. Next, in Stage III, during the remaining coherence blocks, only the cascaded channel gains for the different users are re-estimated since the angle information remains constant. \item We propose an effective low-complexity one-dimensional (1-D) search method to perform the angle rotation operation in Stage I. In \cite{Fan_AoD_CE-TWC2018}, a two-dimensional (2-D) DFT together with a 2-D search method was used to compensate for the leaked power, which has high computational complexity. 
To reduce the complexity, we exploit the structure of the steering vectors at the BS and then introduce an equivalent Fourier matrix and rotation matrices to divide the 2-D search into two 1-D searches. \item We extend the estimation protocol to the case of users with UPAs. The angles-of-departure (AoDs) at the users and the common angles-of-arrival (AoAs) at the BS are estimated via the proposed orthogonal matching pursuit (OMP)-based method and DFT-based method, respectively. Then the estimation of a multi-antenna channel with $J$ scatterers is decomposed into the estimation of $J$ single-scatterer channels. The cascaded AoDs at the RIS and the channel gains can be estimated using methods similar to those developed for the single-antenna case. This is the first approach proposed in the literature that exploits the correlation between different users in the multi-antenna user case. The overall number of pilots for both the single- and multi-antenna case is also analyzed. \end{itemize} The rest of this paper is organized as follows. Section \ref{sec:Model-protocol} introduces the system model and the three-stage based channel estimation protocol. Section \ref{sec:First_coherence} presents the full CSI estimation algorithm in Stage I and Stage II for the single-antenna-users case. Channel gain estimation in Stage III is discussed in Section \ref{sec:Remaining_Coherence}. Section \ref{sec:Applying-the-Protocol} applies the protocol to the multi-antenna-users case. Simulation results are given in Section \ref{sec:Simulation-Results}. Finally, Section \ref{sec:Conclusions} concludes this work. \textit{Notations}: Vectors and matrices are denoted by boldface lowercase letters and boldface uppercase letters, respectively. For a matrix $\mathbf{A}$ of arbitrary size, $\mathbf{A}^{*}$, $\mathbf{A}^{\mathrm{T}}$, $\mathbf{A}^{\mathrm{H}}$ and $\mathbf{A}^{\mathrm{\dagger}}$ stand for the conjugate, transpose, conjugate transpose and pseudo-inverse of $\mathbf{A}$. For a square full-rank matrix $\mathbf{A}$, $\mathbf{A}^{-1}$ denotes its inverse. The symbols $||\mathbf{A}||_{F}$, $||\mathbf{a}||$ represent the Frobenius norm of matrix $\mathbf{A}$ and the Euclidean norm of vector $\mathbf{a}$, respectively. $\angle\left(\cdot\right)$ denotes the angle of a complex number. $\mathrm{Diag}\{\mathbf{a}\}$ is a diagonal matrix with the entries of vector $\mathbf{a}$ on its diagonal. $\mathrm{vec}(\mathbf{A})$ denotes the vectorization of $\mathbf{A}$ obtained by stacking the columns of matrix $\mathbf{A}$. $\mathbb{E}\left\{ \cdot\right\} $ denotes the expectation operation. $[\mathbf{a}]_{m}$ denotes the $m$-th element of the vector $\mathbf{a}$, and $[\mathbf{A}]_{m,n}$ denotes the $(m,n)$-th element of the matrix $\mathbf{A}$. The $n$-th column and the $m$-th row of matrix $\mathbf{A}$ are denoted by $\mathbf{A}_{(:,n)}$ and $\mathbf{A}_{(m,:)}$ respectively. $\left\lceil a\right\rceil $ rounds up to the nearest integer. The inner product between two vectors $\mathbf{a}$ and $\mathbf{b}$ is denoted by $\left\langle \mathbf{a},\mathbf{b}\right\rangle \triangleq\mathbf{a}^{\mathrm{H}}\mathbf{b}$. 
Additionally, the Kronecker product, Hadamard product, Khatri-Rao product and transposed Khatri-Rao product between two matrices $\mathbf{A}$ and $\mathbf{B}$ are denoted by $\mathbf{A}\otimes\mathbf{B}$, $\mathbf{A}\odot\mathbf{B}$, $\mathbf{A}\diamond\mathbf{B}$ and $\mathbf{A}\bullet\mathbf{B}$,\footnote{The transposed Khatri-Rao product is also known as the ``row-wise Kronecker product'', which operates row-wise on matrices having the same number of rows. Specifically, for given matrices $\mathbf{A}\in\mathbb{C}^{Q\times M}$ and $\mathbf{B}\in\mathbb{C}^{Q\times N}$, $\mathbf{A}\bullet\mathbf{B}$ is a $Q\times MN$ matrix of which each row is the Kronecker product of the corresponding rows of $\mathbf{A}$ and $\mathbf{B}$.} respectively. $\mathrm{i}\triangleq\sqrt{-1}$ is the imaginary unit. \vspace{-0.25cm} \section{System Model and Estimation Protocol \label{sec:Model-protocol}} \vspace{-0.15cm} \subsection{System Model\label{subsec:System-Model}} \vspace{-0.1cm} We consider a narrow-band time-division duplex (TDD) mmWave system, in which $K$ single-antenna users communicate with a BS equipped with an $N=N_{1}\times N_{2}$ antenna UPA, where $N_{1}$ is the number of antennas in the vertical dimension, and $N_{2}$ in the horizontal dimension. To improve communication performance, an RIS equipped with a passive reflecting UPA of dimension $M=M_{1}\times M_{2}$ ($M_{1}$ vertical elements and $M_{2}$ horizontal elements) is deployed. The channels are assumed to be block-fading, and hence constant in each coherence block. In addition, we assume that the direct channels between the BS and users are blocked. Otherwise, the direct channels can first be estimated by turning off the RIS, and the cascaded channel can then be estimated after removing the direct channels' contribution from the received signal. The Saleh-Valenzuela (SV) model in \cite{mmWave_channel_overview} is used to represent the channels due to the limited scattering characteristics in the mmWave environment. Consider a typical $P=P_{1}\times P_{2}$ UPA whose steering vector $\mathbf{a}_{P}(z,x)\in\mathbb{C}^{P\times1}$ can be represented by\vspace{-0.1cm} \begin{equation} \mathbf{a}_{P}(z,x)=\mathbf{a}_{P_{1}}(z)\otimes\mathbf{a}_{P_{2}}(x),\label{eq:ax} \end{equation} where $\mathbf{a}_{P_{1}}(z)=[1,e^{-\mathrm{i}2\pi z},\ldots,e^{-\mathrm{i}2\pi(P_{1}-1)z}]^{\mathrm{T}}$ and $\mathbf{a}_{P_{2}}(x)=[1,e^{-\mathrm{i}2\pi x},\ldots,e^{-\mathrm{i}2\pi(P_{2}-1)x}]^{\mathrm{T}}$ are the steering vectors with respect to the $z$-axis (vertical direction) and the $x$-axis (horizontal direction) of the UPA, respectively. The variables $z$ and $x$ can be regarded as the equivalent spatial frequencies with respect to the $z$-axis and $x$-axis of the UPA, respectively. Denote $\mathfrak{\varrho}\in[-90^{\mathrm{o}},90^{\mathrm{o}})$ and $\mathfrak{\xi}\in[-180^{\mathrm{o}},180^{\mathrm{o}})$ as the signal elevation and azimuth angles of the UPA, respectively. There exists a relationship between the spatial frequency pair $(z,x)$ and the physical angle pair $(\mathfrak{\varrho},\mathfrak{\xi})$:\vspace{-0.1cm} \begin{equation} z=\frac{d}{\lambda_{c}}\cos(\mathfrak{\varrho}),~x=\frac{d}{\lambda_{c}}\sin(\varrho)\cos(\mathfrak{\xi}),\label{phy_angle} \end{equation} where $\lambda_{c}$ is the carrier wavelength and $d$ is the element spacing. Assuming that $d\le\lambda_{c}/2$, there is a one-to-one relationship between the spatial frequencies and the physical angles on one side of the UPA. 
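For concreteness, the short numerical sketch below (illustrative only; the array sizes, spacing and angles are arbitrary example values, not taken from this paper) builds the UPA steering vector of (\ref{eq:ax}) through a Kronecker product and maps an example physical angle pair to its spatial frequency pair via (\ref{phy_angle}).
\begin{verbatim}
import numpy as np

def ula_steering(P, f):
    # a_P(f) = [1, e^{-i 2 pi f}, ..., e^{-i 2 pi (P-1) f}]^T
    return np.exp(-1j * 2 * np.pi * f * np.arange(P))

def upa_steering(P1, P2, z, x):
    # a_P(z, x) = a_{P1}(z) kron a_{P2}(x)
    return np.kron(ula_steering(P1, z), ula_steering(P2, x))

d_over_lambda = 0.5                                # element spacing d / lambda_c
elev, azim = np.deg2rad(30.0), np.deg2rad(-45.0)   # example physical angles
z = d_over_lambda * np.cos(elev)                   # spatial frequencies
x = d_over_lambda * np.sin(elev) * np.cos(azim)
a = upa_steering(8, 4, z, x)                       # 32-element UPA steering vector
print(a.shape, np.round(np.abs(a[0]), 3))          # (32,) 1.0
\end{verbatim}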
We will assume this relationship to hold in the remainder of the paper, and we will refer to the arguments of the steering vectors interchangeably as either angles or spatial frequencies. Using the geometric channel model, the channel matrix between the RIS and the BS, denoted by $\mathbf{H}\in\mathbb{C}^{N\times M}$, and the channel matrix between user $k$ and the RIS, denoted by $\mathbf{h}_{k}\in\mathbb{C}^{M\times1}$, can be written as \vspace{-0.1cm} \begin{align} \mathbf{H}=\sum_{l=1}^{L}\alpha_{l}\mathbf{a}_{N}(\psi_{l},\nu_{l})\mathbf{a}_{M}^{\mathrm{H}}(\omega_{l},\mu_{l}),~\mathbf{h_{\mathit{k}}}=\sum_{j=1}^{J_{k}}\beta_{k,j}\mathbf{a}_{M}(\varphi_{k,j},\theta_{k,j}),\forall k\in\mathcal{K},\label{eq:H_h} \end{align} where $L$ denotes the number of propagation paths (scatterers) between the BS and the RIS, and $J_{k}$ denotes the number of propagation paths between the RIS and user $k$. In addition, $\alpha_{l}$, $(\psi_{l},\nu_{l})$ and $(\omega_{l},\mu_{l})$ are the complex path gain, AoA, and AoD of the $l$-th path in the RIS-BS channel, respectively. Similarly, $\beta_{k,j}$ and $(\varphi_{k,j},\theta_{k,j})$ represent the complex path gain and AoA of the $j$-th path in the user $k$-RIS channel, respectively. Moreover, the channel models in (\ref{eq:H_h}) can be written in a more compact way as \vspace{-0.2cm} \begin{align} \mathbf{H} & =\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}},\label{eq:H1-1}\\ \mathbf{h}_{k} & =\mathbf{A}_{M,k}\boldsymbol{\beta}_{k},\forall k\in\mathcal{K},\label{eq:hk} \end{align} where $\mathbf{A}_{N}=[\mathbf{a}_{N}(\psi_{1},\nu_{1}),\ldots,\mathbf{a}_{N}(\psi_{L},\nu_{L})]\in\mathbb{C}^{N\times L}$, $\mathbf{A}_{M}=[\mathbf{a}_{M}(\omega_{1},\mu_{1}),\ldots,\mathbf{a}_{M}(\omega_{L},\mu_{L})]\in\mathbb{C}^{M\times L}$ and $\boldsymbol{\Lambda}=\mathrm{Diag}\{\alpha_{1},\ldots,\alpha_{L}\}\in\mathbb{C}^{L\times L}$ are the AoA steering (array response) matrix, AoD steering matrix and complex gain matrix of the common RIS-BS channel, respectively, and $\mathbf{A}_{M,k}=[\mathbf{a}_{M}(\varphi_{k,1},\theta_{k,1}),\ldots,\mathbf{a}_{M}(\varphi_{k,J_{k}},\theta_{k,J_{k}})]\in\mathbb{C}^{M\times J_{k}}$ and $\boldsymbol{\beta}_{k}=[\beta_{k,1},\ldots,\beta_{k,J_{k}}]^{\mathrm{T}}\in\mathbb{C}^{J_{k}\times1}$ are the AoA steering matrix and complex gain vector of the specific user-RIS channel for user $k$, respectively. Denote $\mathbf{e}_{t}\in\mathbb{C}^{M\times1}$ as the phase shift vector of the RIS in time slot $t$ and define the user set as $\mathcal{K}=\{1,\ldots,K\}$. Assume that users transmit pilot sequences of length $\tau_{k}$ one by one for channel estimation. During the uplink transmission, in time slot $t$, $1\leq t\leq\tau_{k}$, the received signal from user $k$ at the BS can be expressed as \vspace{-0.2cm} \begin{equation} \mathbf{y}_{k}(t)=\mathbf{H}\mathrm{Diag}\{\mathbf{e}_{t}\}\mathbf{h}_{k}\sqrt{p}s_{k}(t)+\mathbf{n}_{k}(t),\label{transmission} \end{equation} where $s_{k}(t)$ is the pilot signal of the $k$-th user, $\mathbf{n}_{k}(t)\in\mathbb{C}^{N\times1}\sim\mathcal{CN}(0,\delta^{2}\mathbf{I})$ represents additive white Gaussian noise (AWGN) with power $\delta^{2}$ at the BS when user $k$ is transmitting. The scalar $p$ denotes the transmit power of each user. Assume the pilot symbols satisfy $s_{k}(t)=1,1\leq t\leq\tau_{k}$, so that Eq. 
(\ref{transmission}) can be expressed as \vspace{-0.3cm} \begin{equation} \begin{split}\mathbf{y}_{k}(t)=\mathbf{H}\mathrm{Diag}\{\mathbf{h}_{k}\}\mathbf{e}_{t}\sqrt{p}+\mathbf{n}_{k}(t)\triangleq\mathbf{G}_{k}\mathbf{e}_{t}\sqrt{p}+\mathbf{n}_{k}(t).\end{split} \label{eq:2} \end{equation} Here, $\mathbf{G}_{k}=\mathbf{H}\mathrm{Diag}\{\mathbf{h}_{k}\}$ is regarded as the cascaded user-RIS-BS channel of user $k$, which is the channel to be estimated in this work. Combining (\ref{eq:H1-1}) and (\ref{eq:hk}), $\mathbf{G}_{k}$ can be rewritten as \vspace{-0.1cm} \begin{align} \mathbf{G}_{k}=\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{A}_{M,k}\boldsymbol{\beta}_{k}\},\forall k\in\mathcal{K}.\label{eq:G1} \end{align} Stacking the $\tau_{k}$ time slots of (\ref{eq:2}), the received matrix $\mathbf{Y}_{k}=\left[\mathbf{y}_{k}(1),\ldots,\mathbf{y}_{k}(\tau_{k})\right]$ is given by \vspace{-0.25cm} \begin{equation} \mathbf{Y}_{k}=\sqrt{p}\mathbf{G}_{k}\mathbf{E}_{k}+\mathbf{N}_{k}\in\mathbb{C}^{N\times\tau_{k}},\label{eq:m-y-2} \end{equation} where $\mathbf{E}_{k}=\left[\mathbf{e}_{1},\ldots,\mathbf{e}_{\tau_{k}}\right]\in\mathbb{C}^{M\times\tau_{k}}$ can be treated as the phase shift training matrix of the RIS for user $k$ and $\mathbf{N}_{k}=\left[\mathbf{n}_{k}(1),\ldots,\mathbf{n}_{k}(\tau_{k})\right]\in\mathbb{C}^{N\times\tau_{k}}$. \vspace{-0.5cm} \subsection{Three-stage Channel Estimation Protocol\label{sec:protocol_overview}} \vspace{-0.1cm} The main idea of the proposed channel estimation protocol is depicted in Fig. \ref{flow_chart}, where ``Pilot'' and ``Data'' represent the phases for uplink channel estimation and downlink data transmission at the BS side, respectively. Our work focuses on the uplink estimation of the cascaded channels. Specifically, in Stage I, only one user's cascaded channel is estimated. For convenience, this user is referred to as the typical user.\footnote{The user closest to the RIS is generally chosen as the typical user since its reflected channel suffers from less severe path loss. Thus, the received signal at the BS is stronger, which ensures high estimation performance. The location of the users can be obtained using the global positioning system (GPS) \cite{GPS_location_RIS}, for example.} Information regarding the common RIS-BS channel is extracted from the estimate of the typical user's CSI in order to reduce the pilot overhead of channel estimation for the other users in the next stage. Then, in Stage II, the cascaded channel of each of the other users is divided into two parts, a common part and a unique part. The common parts can be readily obtained from the estimated angle information and cascaded gains of the typical user obtained in the first stage. This helps reduce the pilot overhead of estimating the other users' cascaded channels since only a few pilots are required for estimating their unique parts. Finally, it is observed that in the quasi-static situation, the positions of the BS and the RIS are fixed, and the changes in the physical positions of the users and their surrounding obstacles are negligible over milliseconds, corresponding to several channel coherence blocks \cite{mmWave_test_experiment_new,mmWave_test_Gao}. This observation leads to the reasonable assumption that the angles remain unchanged over multiple coherence blocks while the gains change from block to block. Hence, Stage III is used for estimating the varying channel gains for all users. 
As will become clear in the following sections, the number of pilots required for each user depends on the number of paths between that user and the RIS, which is estimated at the BS in this work. This requires the BS to determine the typical user, allocate the pilot slots required for the different users, and inform the users of this allocation before the next estimation period. The details of the adopted protocol will be discussed later, first for the single-antenna user case and then for the multi-antenna user case. \vspace{-0cm} \begin{figure} \begin{centering} \includegraphics[width=0.55\columnwidth]{flow_chart} \par\end{centering} \caption{The proposed three-stage channel estimation protocol.\label{flow_chart}} \end{figure} \vspace{-0.3cm} \section{Estimation in the First Coherence Block: Stage I and Stage II\label{sec:First_coherence}} \vspace{-0.1cm} In this section, we start with the single-antenna user case and describe the details of full CSI estimation of all users in the first coherence block, formulating it as two sparse recovery problems in Stage I and Stage II. Then, we analyze the pilot overhead and computational complexity of the proposed method. This section lays the foundation for the extension to the multi-antenna user case in Section \ref{sec:Applying-the-Protocol}. \vspace{-0.3cm} \subsection{Stage I: Estimation of Full CSI for Typical User\label{subsec:Stage-I:-Estimation}} \vspace{-0.15cm} In this subsection, we provide details on full CSI estimation for a typical single-antenna user, denoted as user $1$, where the common AoAs are first estimated and then the cascaded gains and AoDs are obtained. \subsubsection{Estimation of Common AoAs} Due to the UPA deployed at the BS and the RIS, the direct DFT approach in \cite{Zhou_ULA_TSP,Fan_AoD_CE-TWC2018} cannot be used for AoA estimation from $\mathbf{Y}_{1}$ in (\ref{eq:m-y-2}). Therefore, we propose a modified DFT approach utilizing the properties of the Kronecker product to estimate the common AoAs of the cascaded channel at the BS, i.e., $\mathbf{A}_{N}$ in (\ref{eq:H1-1}). To this end, we first provide the following two lemmas. \vspace{-0.3cm} \begin{lem} \label{lem:1}When $N_{1}\rightarrow\infty$ and $\ensuremath{N_{2}\rightarrow\infty}$, the following property holds \begin{equation} \lim_{N\rightarrow\infty}\frac{1}{N}\mathbf{a}_{N}^{\mathrm{H}}(\psi_{j},\nu_{j})\mathbf{a}_{N}(\psi_{i},\nu_{i})=\begin{cases} 1 & \psi_{j}=\psi_{i},\nu_{i}=\nu_{j}\\ 0 & \textrm{otherwise} \end{cases},~\mathbf{A}_{N}^{\mathrm{H}}\mathbf{A}_{N}=N\mathbf{I}_{L},\label{eq:lemma1} \end{equation} where $N=N_{1}\times N_{2}$ and $\mathbf{I}_{L}$ is the identity matrix with dimension $L\times L$. \end{lem} \vspace{-0.3cm} \begin{IEEEproof} Please refer to Appendix A. \end{IEEEproof} Define an equivalent Fourier matrix $\widetilde{\mathbf{U}}_{N}\mathbf{\triangleq U_{\mathit{N_{\mathrm{1}}}}\otimes}\mathrm{\mathbf{U}}_{\mathit{N_{\mathrm{2}}}}\in\mathbb{C}^{N\times N}$, where $\mathrm{\mathbf{U}}_{\mathit{N_{\mathrm{1}}}}$ and $\mathrm{\mathbf{U}}_{\mathit{N_{\mathrm{2}}}}$ are the DFT matrices with $(n,m)$-th entries $[\mathrm{\mathbf{U}}_{\mathit{N_{\mathrm{1}}}}]_{n,m}=\frac{1}{\sqrt{N_{1}}}e^{-\mathrm{i}\frac{2\pi(n-1)(m-1)}{N_{1}}}$ and $[\mathrm{\mathbf{U}}_{\mathit{N_{\mathrm{2}}}}]_{n,m}=\frac{1}{\sqrt{N_{2}}}e^{-\mathrm{i}\frac{2\pi(n-1)(m-1)}{N_{2}}}$, respectively. It can be readily verified that $\widetilde{\mathbf{U}}_{N}$ is a symmetric and unitary matrix according to its definition. 
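These two properties are easy to confirm numerically; the following minimal sketch (with arbitrary illustrative sizes) checks that $\widetilde{\mathbf{U}}_{N}=\widetilde{\mathbf{U}}_{N}^{\mathrm{T}}$ and $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\widetilde{\mathbf{U}}_{N}=\mathbf{I}_{N}$.
\begin{verbatim}
import numpy as np

N1, N2 = 8, 6
U1 = np.fft.fft(np.eye(N1)) / np.sqrt(N1)      # U_{N1}
U2 = np.fft.fft(np.eye(N2)) / np.sqrt(N2)      # U_{N2}
U = np.kron(U1, U2)                            # equivalent Fourier matrix

print(np.allclose(U, U.T))                             # symmetric: True
print(np.allclose(U.conj().T @ U, np.eye(N1 * N2)))    # unitary:   True
\end{verbatim}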
Now we show an asymptotic property of $\mathbf{A}_{N}$ via the linear transformation $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}$. \vspace{-0.3cm} \begin{lem} \label{lem:3}When $N_{1}\rightarrow\infty$ and $\ensuremath{N_{2}\rightarrow\infty}$, if the condition $\frac{d_{\mathrm{BS}}}{\lambda_{c}}\leq\frac{1}{2}$ holds,\footnote{This condition holds to avoid AoA ambiguity.} then the linear transformation $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{A}_{N}$ is a tall sparse matrix with only one nonzero element in each column, i.e., \begin{equation} \lim_{N\rightarrow\infty}[\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{A}_{N}]_{n_{l},l}\neq0,\forall l,\label{eq:lemma3} \end{equation} where \vspace{-0.1cm} \begin{equation} n_{l}=(n_{1}(l)-1)N_{\mathrm{2}}+n_{2}(l),\label{eq:lemma3_decom} \end{equation} and \vspace{-0.1cm} \begin{align} n_{1}(l)=\begin{cases} N_{\mathrm{1}}\psi_{l}+1 & \psi_{l}\in[0,\frac{d_{\mathrm{BS}}}{\lambda_{c}})\\ N_{\mathrm{1}}+N_{\mathrm{1}}\psi_{l}+1 & \psi_{l}\in[-\frac{d_{\mathrm{BS}}}{\lambda_{c}},0) \end{cases},~n_{2}(l)=\begin{cases} N_{\mathrm{2}}\nu_{l}+1 & \nu_{l}\in[0,\frac{d_{\mathrm{BS}}}{\lambda_{c}})\\ N_{\mathrm{2}}+N_{\mathrm{2}}\nu_{l}+1 & \nu_{l}\in[-\frac{d_{\mathrm{BS}}}{\lambda_{c}},0) \end{cases}.\label{eq:lemma3-1} \end{align} \end{lem} \begin{IEEEproof} Please refer to Appendix B. \end{IEEEproof} Since typically $L\ll N_{1},N_{2}$, Lemma \ref{lem:3} means that matrix $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{A}_{N}$ is a row sparse matrix with full column rank. By substituting (\ref{eq:G1}) into (\ref{eq:m-y-2}), we observe that $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{\mathbf{Y}_{\mathrm{1}}}$ is an asymptotically row-sparse matrix with $L$ nonzero rows, and each row corresponds to one of the AoA pairs i.e., $(\psi_{l},\nu_{l})$. Based on this fact, the estimation of the common AoAs is equivalent to finding the indices of the nonzero rows of $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{\mathbf{Y}_{\mathrm{1}}}$. Note that $n_{1}(l)$, $n_{2}(l)$ are integers, and can be derived from (\ref{eq:lemma3_decom}) as follows \vspace{-0.1cm} \begin{equation} n_{1}(l)=\left\lceil \frac{n_{l}}{N_{\mathrm{2}}}\right\rceil ,~n_{2}(l)=n_{l}-N_{\mathrm{2}}(n_{1}(l)-1).\label{n_lsub} \end{equation} By combining (\ref{n_lsub}) with Lemma \ref{lem:3}, the AoA spatial frequency pairs $\{(\psi_{l},\nu_{l})\}{}_{l=1}^{L}$ can be readily estimated. Due to the fact that different scatterers have different angles, we can draw the conclusion that any two nonzero elements are not in the same row, i.e., $n_{l}\neq n_{i}$ for any $l\neq i$. \vspace{-0.01cm} \subsubsection{Low-complexity Angle Rotation for Suppressing Power Leakage} To improve the angle estimation accuracy, the power leakage issue \cite{Fan_AoD_CE-TWC2018} should be considered. In practice, finite values for $N_{\mathrm{1}}$ and $N_{\mathrm{2}}$ lead to power leakage, which means that the resolution of the estimated AoA $(\psi_{l},\nu_{l})$ is limited by half of the DFT interval, i.e., $\frac{1}{2N_{\mathrm{1}}}$ and $\frac{1}{2N_{\mathrm{2}}}$. 
To mitigate the power leakage, an angle rotation operation is adopted and the rotation matrix is defined as \vspace{-0.1cm} \begin{equation} \mathbf{R\mathrm{\mathbf{(}\Delta\psi,\Delta\nu\mathbf{)}}=R_{\mathrm{1}}\mathrm{(}\mathrm{\Delta\psi}\mathrm{)}\otimes R_{\mathrm{2}}\mathrm{(}\mathrm{\Delta\nu}\mathrm{)}},\label{eq:rotR-1} \end{equation} where the diagonal matrices $\mathbf{R_{\mathrm{1}}(\mathrm{\Delta\psi})}$ and $\mathbf{R_{\mathrm{2}}(\mathrm{\Delta}\nu)}$ are respectively given by \vspace{-0.1cm} \begin{equation} \mathbf{R_{\mathrm{1}}\mathrm{(}\mathrm{\Delta\psi}\mathrm{)}}=\mathrm{Diag}\{1,e^{\mathrm{-i}\Delta\psi},\ldots,e^{-\mathrm{i}(N_{\mathrm{1}}-1)\Delta\psi}\}~\mathbf{R_{\mathrm{2}}\mathrm{(}\mathrm{\Delta\nu}\mathrm{)}}=\mathrm{Diag}\{1,e^{-\mathrm{i}\Delta\nu},\ldots,e^{\mathrm{-i}(N_{\mathrm{2}}-1)\Delta\nu}\},\label{rot_R-1=00003D00003D00003D00003D00003D00003D00003D00003D00003D00003D0} \end{equation} where $\Delta\psi\in[-\frac{\pi}{N_{1}},\frac{\pi}{N_{1}}]$ and $\Delta\nu\in[-\frac{\pi}{N_{2}},\frac{\pi}{N_{2}}]$. We construct $L$ rotation matrices $\mathbf{R\mathrm{(}\mathrm{\Delta\mathit{\psi}_{\mathit{l}},\Delta\mathit{\nu}_{\mathit{l}}}\mathrm{)}}$ to compensate for the $L$ estimated AoAs $(\psi_{l},\nu_{l})$. After angle rotation, the central point, denoted as the $(n_{l},l)$-th element of $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{R\mathrm{(}\mathrm{\Delta\mathit{\psi}_{\mathit{l}},\Delta\mathit{\nu}_{\mathit{l}}}\mathrm{)}}\mathbf{A}_{N}$, is calculated as \vspace{-0.1cm} \begin{align} & [\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{R\mathrm{(}\mathrm{\Delta\mathit{\psi}_{\mathit{l}},\Delta\mathit{\nu}_{\mathit{l}}}\mathrm{)}}\mathbf{A}_{N}]_{n_{l},l}\nonumber \\ & =[\mathbf{U_{\mathit{N_{\mathrm{1}}}}^{\mathrm{H}}}\mathbf{R_{\mathrm{1}}\mathrm{(}\mathrm{\Delta\psi_{\mathit{l}}}\mathrm{)}}\mathbf{a}_{N_{1}}(\psi_{l})]_{n_{1}(l)}\otimes[\mathrm{\mathbf{U}}_{N_{\mathrm{2}}}^{\mathrm{H}}\mathbf{\mathbf{R_{\mathrm{2}}\mathrm{(}\mathrm{\Delta\nu_{\mathit{l}}}\mathrm{)}}}\mathbf{a}_{N_{2}}(\nu_{l}))]_{n_{2}(l)}\nonumber \\ & =(\sqrt{\frac{1}{N_{1}}}\sum_{m=1}^{N_{1}}e^{-\mathrm{i}2\pi(m-1)(\psi_{l}+\frac{\Delta\psi_{l}}{2\pi}-\frac{n_{1}(l)-1}{N_{1}})})\times(\sqrt{\frac{1}{N_{2}}}\sum_{m=1}^{N_{2}}e^{-\mathrm{i}2\pi(m-1)(\nu_{l}+\frac{\Delta\nu_{l}}{2\pi}-\frac{n_{2}(l)-1}{N_{2}})}).\label{eq:rot_nl} \end{align} It can be found that the entries of $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{R\mathrm{(}\mathrm{\Delta\mathit{\psi}_{\mathit{l}},\Delta\mathit{\nu}_{\mathit{l}}}\mathrm{)}}\mathbf{A}_{N}$ have only $L$ nonzero elements when \vspace{-0.1cm} \begin{equation} \Delta\psi_{l}=2\pi\left(\frac{n_{1}(l)-1}{N_{1}}-\psi_{l}\right),~\Delta\nu_{l}=2\pi\left(\frac{n_{2}(l)-1}{N_{2}}-\nu_{l}\right).\label{parameters} \end{equation} The $(\Delta\psi_{l},\Delta\nu_{l})$ in (\ref{parameters}) are the required optimal angle rotation parameters for $(\psi_{l},\nu_{l})$, which concentrates the power of the respective frequency points and suppress power leakage. 
The optimal angle rotation parameters $(\Delta\widehat{\psi}_{l},\Delta\widehat{\nu}_{l})$ can be found via a 2-D search over the very small region $\Delta\psi_{l}\in[-\frac{\pi}{N_{1}},\frac{\pi}{N_{1}}]$ and $\Delta\nu_{l}\in[-\frac{\pi}{N_{2}},\frac{\pi}{N_{2}}]$ \cite{Fan_AoD_CE-TWC2018}, as follows:\vspace{-0.2cm} \begin{equation} \begin{split}(\Delta\widehat{\psi}_{l},\Delta\widehat{\nu}_{l})=\mathrm{arg}\max_{\Delta\psi_{l}\in[-\frac{\pi}{N_{1}},\frac{\pi}{N_{1}}],\Delta\nu_{l}\in[-\frac{\pi}{N_{2}},\frac{\pi}{N_{2}}]}||[\widetilde{\mathbf{U}}_{N}]_{:,n_{l}}^{\mathrm{H}}\mathbf{R\mathrm{(}\mathrm{\Delta\psi_{\mathit{l}},\Delta\nu_{\mathit{l}}}\mathrm{)}}\mathbf{Y}_{1}||^{2}.\end{split} \label{eq:2-Dsearch} \end{equation} The accuracy of the AoA estimation depends on the number of grid points. The complexity of the 2-D search is approximately $\mathcal{O}(Lg_{1}g_{2})$, where $g_{1}$ and $g_{2}$ denote the number of grid points in the interval $[-\frac{\pi}{N_{1}},\frac{\pi}{N_{1}}]$ and $[-\frac{\pi}{N_{2}},\frac{\pi}{N_{2}}]$, respectively. Obviously, large values for $g_{1}$ and $g_{2}$ lead to high computational complexity. Therefore, we exploit the structure of the steering vector and propose a 1-D search method to reduce the complexity of angle rotation. We note that the first elements of the steering vectors, i.e., $\mathbf{a}_{N_{1}}(\psi_{l})$ or $\mathbf{a}_{N_{2}}(\nu_{l})$, are equal to 1. Using this fact, we can divide the 2-D search into two 1-D searches. Specifically, we construct two rotation matrices shown below to rotate $\psi$ and $\nu$, as\vspace{-0.2cm} \begin{equation} \widetilde{\mathbf{R}}_{1}(\Delta\psi)\mathbf{\triangleq\mathrm{\mathbf{R}_{1}(\Delta\psi)}\otimes}\mathbf{D}_{\mathit{N_{\mathrm{2}}}},~\widetilde{\mathbf{R}}_{2}(\Delta\nu)\mathbf{\triangleq\mathbf{D}_{\mathit{N_{\mathrm{1}}}}\otimes}\mathrm{\mathbf{R_{\mathrm{2}}\mathrm{(}\mathrm{\Delta\nu}})},\label{rot_mat} \end{equation} where $\mathbf{R}_{1}(\Delta\psi)$ and $\mathbf{R_{\mathrm{2}}\mathrm{(}\mathrm{\Delta\nu}})$ are defined in (\ref{rot_R-1=00003D00003D00003D00003D00003D00003D00003D00003D00003D00003D0}). The matrices $\mathbf{D}_{\mathit{N_{\mathrm{1}}}}\in\mathbb{C}^{N_{1}\times N_{1}}$ and $\mathbf{D}_{\mathit{N_{\mathrm{2}}}}\in\mathbb{C}^{N_{2}\times N_{2}}$ are diagonal whose $(1,1)$ entry is equal to $1$ and whose other elements are $0$. Defining $\widetilde{\mathbf{U}}_{1}\mathbf{\triangleq U_{\mathit{N_{\mathrm{1}}}}\otimes}\mathbf{D}_{\mathit{N_{\mathrm{2}}}}$ and $\widetilde{\mathbf{U}}_{2}\mathbf{\triangleq D_{\mathit{N_{\mathrm{1}}}}\otimes}\mathbf{U}_{\mathit{N_{\mathrm{2}}}}$, we have the following proposition. 
\vspace{-0.25cm} \begin{proposition}\label{lem:4}The angle estimation operation for the $l$-th AoA pair $(\psi_{l},\nu_{l})$ shown in (\ref{eq:rot_nl}) can be divided into two independent angle rotation operations with the $(\overline{n_{1l}},l)$-th element of $\widetilde{\mathbf{U}}_{1}^{\mathrm{H}}\widetilde{\mathbf{R}}_{1}(\Delta\psi_{l})\mathbf{A}_{N}$, and the $(\overline{n_{2l}},l)$-th element of $\widetilde{\mathbf{U}}_{2}^{\mathrm{H}}\widetilde{\mathbf{R}}_{2}(\Delta\nu_{l})\mathbf{A}_{N}$, where $\overline{n_{1l}}$ and $\overline{n_{2l}}$ denote the nonzero element of the $l$-th column of $\widetilde{\mathbf{U}}_{1}^{\mathrm{H}}\widetilde{\mathbf{R}}_{1}(\Delta\psi_{l})\mathbf{A}_{N}$ and $\widetilde{\mathbf{U}}_{2}^{\mathrm{H}}\widetilde{\mathbf{R}}_{2}(\Delta\nu_{l})\mathbf{A}_{N}$, respectively, and satisfy \vspace{-0.55cm} \begin{equation} \begin{split}\overline{n_{1l}}=(n_{1}(l)-1)N_{\mathrm{2}}+1,~\overline{n_{2l}}=n_{2}(l).\end{split} \label{eq:Rot_index} \end{equation} \end{proposition} \vspace{-0.2cm} \begin{IEEEproof} Please refer to Appendix C. \end{IEEEproof} Based on Proposition \ref{lem:4}, the optimal angle rotation parameters $(\Delta\widehat{\psi}_{l},\Delta\widehat{\nu}_{l})$ for $(\psi_{l},\nu_{l})$ can be found by solving the two separate 1-D search problems shown in (\ref{eq:opt_shift}), which significantly reduces the complexity to $\mathcal{O}(L(g_{1}+g_{2}))$:\vspace{-0.25cm} \begin{equation} \Delta\widehat{\psi}_{l}=\mathrm{arg}\max_{\Delta\psi_{l}\in[-\frac{\pi}{N_{1}},\frac{\pi}{N_{1}}]}||[\widetilde{\mathbf{U}}_{1}]_{:,\overline{n_{1l}}}^{\mathrm{H}}\widetilde{\mathbf{R}}_{1}(\Delta\psi_{l})\mathbf{Y}_{1}||^{2},\Delta\widehat{\nu}_{l}=\mathrm{arg}\max_{\Delta\nu_{l}\in[-\frac{\pi}{N_{2}},\frac{\pi}{N_{2}}]}||[\widetilde{\mathbf{U}}_{2}]_{:,\overline{n_{2l}}}^{\mathrm{H}}\widetilde{\mathbf{R}}_{2}(\Delta\nu_{l})\mathbf{Y}_{1}||^{2}.\label{eq:opt_shift} \end{equation} Denote the estimated angle rotations as $\{(\Delta\hat{\psi}_{l},\Delta\hat{\nu}_{l})\}{}_{l=1}^{L}$, then the estimated AoA spatial frequency pair of the $l$-th path is given by \vspace{-0.15cm} \begin{equation} \hat{\psi}_{l}=\begin{cases} \frac{n_{1}(l)-1}{N_{1}}-\frac{\Delta\hat{\psi}_{l}}{2\pi} & n_{1}(l)\leq N_{1}\frac{d_{\mathrm{BS}}}{\lambda_{c}}\\ \frac{n_{1}(l)-1}{N_{1}}-1-\frac{\Delta\hat{\psi}_{l}}{2\pi} & n_{1}(l)>N_{1}\frac{d_{\mathrm{BS}}}{\lambda_{c}} \end{cases},\hat{\nu}_{l}=\begin{cases} \frac{n_{2}(l)-1}{N_{2}}-\frac{\Delta\hat{\nu}_{l}}{2\pi} & n_{2}(l)\leq N_{2}\frac{d_{\mathrm{BS}}}{\lambda_{c}}\\ \frac{n_{2}(l)-1}{N_{2}}-1-\frac{\Delta\hat{\nu}_{l}}{2\pi} & n_{2}(l)>N_{2}\frac{d_{\mathrm{BS}}}{\lambda_{c}} \end{cases}.\label{opt_psi_nu} \end{equation} With the estimated spatial frequency pairs for the AoAs, $\{(\widehat{\psi}_{l},\widehat{\nu}_{l})\}_{l=1}^{\widehat{L}}$, we can obtain an estimate of the common AoA steering matrix $\mathbf{\widehat{A}}_{N}=[\mathbf{a}_{N}(\widehat{\psi}_{1},\widehat{\nu}_{1}),\ldots,\mathbf{a}_{N}(\widehat{\psi}_{\widehat{L}},\widehat{\nu}_{\widehat{L}})]\in\mathbb{C}^{N\times\widehat{L}}$. 
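Before summarizing the procedure as Algorithm \ref{algorithm-1}, we give a minimal single-path, single-snapshot sketch of the two steps just described: the coarse DFT peak with the index split (\ref{n_lsub}), followed by the two 1-D rotation searches of (\ref{eq:opt_shift}) and the mapping (\ref{opt_psi_nu}). It is illustrative only; the array sizes, the off-grid frequencies, the noise level and the search grids are arbitrary choices, and $d_{\mathrm{BS}}/\lambda_{c}=1/2$ is assumed.
\begin{verbatim}
import numpy as np

def ula(P, f):                           # elementary steering vector a_P(f)
    return np.exp(-1j * 2 * np.pi * f * np.arange(P))

rng = np.random.default_rng(0)
N1, N2 = 16, 12
psi, nu = 0.137, -0.221                  # off-grid spatial frequencies to recover
y = np.kron(ula(N1, psi), ula(N2, nu))   # single-path, single-snapshot observation
y = y + 0.05 * (rng.standard_normal(N1 * N2) + 1j * rng.standard_normal(N1 * N2))

U1 = np.fft.fft(np.eye(N1)) / np.sqrt(N1)
U2 = np.fft.fft(np.eye(N2)) / np.sqrt(N2)
D1 = np.diag([1.0] + [0.0] * (N1 - 1))
D2 = np.diag([1.0] + [0.0] * (N2 - 1))

# coarse stage: peak of the DFT-domain power and 1-based index split
n_peak = int(np.argmax(np.abs(np.kron(U1, U2).conj().T @ y)))
n1, n2 = n_peak // N2 + 1, n_peak % N2 + 1

# refinement stage: the two separate 1-D rotation searches
def rot(P, delta):                       # rotation matrix R(delta)
    return np.diag(np.exp(-1j * delta * np.arange(P)))

col1 = np.kron(U1, D2)[:, (n1 - 1) * N2]     # column \bar{n}_{1l} of U_tilde_1
col2 = np.kron(D1, U2)[:, n2 - 1]            # column \bar{n}_{2l} of U_tilde_2
grid1 = np.linspace(-np.pi / N1, np.pi / N1, 401)
grid2 = np.linspace(-np.pi / N2, np.pi / N2, 401)
dpsi = grid1[np.argmax([np.abs(col1.conj() @ (np.kron(rot(N1, d), D2) @ y)) for d in grid1])]
dnu  = grid2[np.argmax([np.abs(col2.conj() @ (np.kron(D1, rot(N2, d)) @ y)) for d in grid2])]

# back to spatial frequencies (d_BS / lambda_c = 1/2 assumed)
psi_hat = (n1 - 1) / N1 - dpsi / (2 * np.pi) - (1 if n1 > N1 / 2 else 0)
nu_hat  = (n2 - 1) / N2 - dnu  / (2 * np.pi) - (1 if n2 > N2 / 2 else 0)
print(round(psi_hat, 3), psi, round(nu_hat, 3), nu)   # ~0.137 vs 0.137, ~-0.221 vs -0.221
\end{verbatim}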
AoA estimation of the different paths at the BS is summarized in Algorithm \ref{algorithm-1}, where $\Gamma(\mathbf{z})$ represents the operation of searching the peak power of vector $\mathbf{z}$ and $\widehat{L}$ is the estimated number of propagation paths in step 3.\footnote{If the power of a row is larger than that of its neighboring rows, and far exceeds the minimum power of $\mathbf{z}(n)$ according to an adjustable predefined ratio threshold, we put this row index into the set $\Omega_{N}$. Alternatively, classical minimum description length (MDL) and novel signal subspace matching (SSM) schemes \cite{subspace_mathcing} can be adopted as a pre-processing operation before Algorithm \ref{algorithm-1} to determine $\widehat{L}$.} $\Omega_{N}$, $\Omega_{N_{1}}$, and $\Omega_{N_{2}}$ are sets with cardinality $\widehat{L}$, and denote the position indices of the nonzero rows for $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{A}_{N}$, $\mathbf{U}_{N_{\mathrm{1}}}^{\mathrm{H}}\mathbf{A}_{N_{\mathrm{1}}}$, and $\mathbf{U}_{N_{\mathrm{2}}}^{\mathrm{H}}\mathbf{A}_{N_{\mathrm{2}}}$, respectively. \begin{algorithm} \caption{Low-complexity Angle Rotation based AoA Estimation} \label{algorithm-1} \begin{algorithmic}[1] \REQUIRE $\mathbf{Y}_{1}$. \STATE Calculate linear transformation of $\mathbf{Y}_{1}$: $\widetilde{\mathbf{\mathbf{Y}}}_{1}=\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{Y}_{1}$;\label{Complexity_1_1} \STATE Calculate the sum power of each row: $\mathbf{z}(n)=||[\widetilde{\mathbf{\mathbf{Y}}}_{1}]_{n,:}||^{2},\forall n=1,2,\ldots,N$; \STATE Find the rows with the peak power: $(\Omega_{N},\widehat{L})=\Gamma(\mathbf{z})$, where $\Omega_{N}=\{n_{l},l=1,\cdots,\widehat{L}\}$; \STATE Construct two sets: $\Omega_{N_{1}}=\{n_{1}(l),l=1,\cdots,\widehat{L}\}$, $\Omega_{N_{2}}=\{n_{2}(l),l=1,\cdots,\widehat{L}\}$ via (\ref{n_lsub}); \FOR{$1\leq l\leq\widehat{L}$} \STATE Calculate $\overline{n_{1l}}$ and $\overline{n_{2l}}$ respectively via (\ref{eq:Rot_index}); \STATE Find the optimal angle rotation parameters, $\Delta\widehat{\psi}_{l}$ and $\Delta\widehat{\nu}_{l}$ via (\ref{eq:opt_shift});\label{Complexity_1_2} \STATE Estimate the AoA spatial frequencies $\widehat{\psi}_{l}$ and $\widehat{\nu}_{l}$ according to (\ref{opt_psi_nu}); \ENDFOR \ENSURE $\{(\widehat{\psi}_{l},\widehat{\nu}_{l})\}_{l=1}^{\widehat{L}}$ and $\mathbf{\widehat{A}}_{N}$. \end{algorithmic} \end{algorithm} \begin{remark} \label{AoA_remark} Since the common AoA steering matrix $\mathbf{A}_{N}$ is shared by all users in the MU scenario, the received signals from the $K$ users in Stage I and Stage II during the first coherence block can be utilized jointly to estimate $\mathbf{A}_{N}$. Accordingly, the input of Algorithm \ref{algorithm-1} is given by $\mathbf{Y}=[\mathbf{Y}_{1},\mathbf{Y}_{2},...,\mathbf{Y}_{K}]\in\mathbb{C}^{N\times(\sum_{k=1}^{K}\tau_{k})}$. 
In this case, the number of measurements used for the estimation of $\mathbf{A}_{N}$ increases, which enhances the estimation performance and alleviates the error propagation effect in the following stages.\end{remark} \subsubsection{Estimation of the Cascaded Spatial Frequencies and Gains} By substituting $\mathbf{A}_{N}=\mathbf{\widehat{A}}_{N}+\Delta\mathbf{A}_{N}$ and applying Lemma \ref{lem:1}, we take the linear transformation $\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}$ of the received signals to eliminate the effects of the common AoAs, i.e., \vspace{-0.15cm} \begin{align} \frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{1} & =\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{h}_{1}\mathrm{\}}\mathbf{E}_{1}+\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}(\mathbf{N}_{1}+\sqrt{p}\Delta\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{h}_{1}\mathrm{\}}\mathbf{E}_{1}).\label{eq:LT_Y1} \end{align} Here, $\Delta\mathbf{A}_{N}\triangleq\mathbf{A}_{N}-\mathbf{\widehat{A}}_{N}$ is treated as the estimation error between the common AoA and its estimate, and the third term $(\frac{1}{N}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\Delta\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{h}_{1}\mathrm{\}}\mathbf{E}_{1})$ represents the corresponding negative error propagation effect. Clearly, $\Delta\mathbf{A}_{N}$ can be reduced effectively via the MU joint estimation strategy discussed in Remark \ref{AoA_remark}. Now we define the transpose of $\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{1}$ as an equivalent measurement matrix $\overline{\mathbf{\mathbf{Y}}}_{1}\in\mathbb{C}^{\tau_{1}\times L}$ shown below \vspace{-0.15cm} \begin{equation} \begin{split}\overline{\mathbf{\mathbf{Y}}}_{1}\triangleq(\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{1})^{\mathrm{H}} & =\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{{\bf h}_{1}^{*}\}\mathbf{A}_{M}\boldsymbol{\Lambda}^{*}+\overline{\mathbf{N}}_{1}=\mathbf{E}_{1}^{\mathrm{H}}\mathbf{H}_{\mathrm{RIS}}+\overline{\mathbf{N}}_{1},\end{split} \label{eq:Y1_formula} \end{equation} where $\mathbf{H}_{\mathrm{RIS}}\triangleq$$\mathrm{Diag}\{{\bf h}_{1}^{*}\}\mathbf{A}_{M}\boldsymbol{\Lambda}^{*}$ and $\overline{\mathbf{N}}_{1}$ is the corresponding transpose of the second term in Eq. (\ref{eq:LT_Y1}), seen as the equivalent noise. By exploiting the structure of $\mathbf{H}_{\mathrm{RIS}}$, we have \vspace{-0.3cm} \begin{equation} \begin{split}\mathbf{H}_{\mathrm{RIS}}={\bf h}_{1}^{*}\bullet(\mathbf{A}_{M}\boldsymbol{\Lambda}^{*})=(\mathbf{A}_{M,1}\boldsymbol{\beta}_{1})^{*}\bullet(\mathbf{A}_{M}\boldsymbol{\Lambda}^{*})=(\mathbf{A}_{M,1}^{*}\bullet\mathbf{A}_{M})(\boldsymbol{\beta}_{1}^{*}\otimes\boldsymbol{\Lambda}^{*}),\end{split} \label{eq:H_ris} \end{equation} where $\mathbf{A}_{M,1}^{*}\bullet\mathbf{A}_{M}=[\mathbf{a}_{M}(\omega_{1}-\varphi_{1,1},\mu_{1}-\theta_{1,1}),\mathbf{a}_{M}(\omega_{2}-\varphi_{1,1},\mu_{2}-\theta_{1,1})...,\mathbf{a}_{M}(\omega_{L}-\varphi_{1,J_{1}},\mu_{L}-\theta_{1,J_{1}})]\in\mathbb{C}^{M\times J_{1}L}$, and the last equality uses the identity $(\mathbf{A}\bullet\mathbf{B})(\mathbf{C}\otimes\mathbf{D})=(\mathbf{AC})\bullet(\mathbf{BD})$ \cite{K-R_product}. 
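Since this identity is used repeatedly in what follows, a quick numerical check may be reassuring. The sketch below (random complex matrices of arbitrary sizes) verifies $(\mathbf{A}\bullet\mathbf{B})(\mathbf{C}\otimes\mathbf{D})=(\mathbf{AC})\bullet(\mathbf{BD})$, with the transposed (row-wise) Khatri-Rao product as defined in the notation footnote.
\begin{verbatim}
import numpy as np

def row_khatri_rao(A, B):
    # row-wise Kronecker product: row q of the result is kron(A[q, :], B[q, :])
    return np.einsum('qm,qn->qmn', A, B).reshape(A.shape[0], -1)

rng = np.random.default_rng(1)
Q, M, N, P, S = 5, 3, 4, 2, 6
A = rng.standard_normal((Q, M)) + 1j * rng.standard_normal((Q, M))
B = rng.standard_normal((Q, N)) + 1j * rng.standard_normal((Q, N))
C = rng.standard_normal((M, P)) + 1j * rng.standard_normal((M, P))
D = rng.standard_normal((N, S)) + 1j * rng.standard_normal((N, S))

lhs = row_khatri_rao(A, B) @ np.kron(C, D)
rhs = row_khatri_rao(A @ C, B @ D)
print(np.allclose(lhs, rhs))    # True
\end{verbatim}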
To extract the cascaded directional spatial frequency pairs $\{(\omega_{l}-\varphi_{1,j},\mu_{l}-\theta_{1,j})\}_{j=1,l=1}^{J_{1}L}$ and gains $(\boldsymbol{\beta}_{1}^{*}\otimes\boldsymbol{\Lambda}^{*})$ from $\overline{\mathbf{\mathbf{Y}}}_{1}$, (\ref{eq:Y1_formula}) could be approximated using the virtual angular domain (VAD) representation and converted into a $J_{1}L$-sparse recovery problem via vectorization \cite{ris-omp-1}, but this approach has high complexity and performance loss. Instead, another method is developed as follows. We first estimate $J_{1}$ cascaded spatial frequency pairs and gains from a typical column vector of $\overline{\mathbf{\mathbf{Y}}}_{1}$ using CS, and then estimate the remaining parameters by exploiting the correlation between the typical column and other columns. Specifically, denote $\overline{{\bf y}}_{r}$ as the $r$-th column of $\overline{\mathbf{\mathbf{Y}}}_{1}$, which is given by \vspace{-0.15cm} \begin{align} \overline{{\bf y}}_{r} & =\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{{\bf h}_{1}^{*}\}[\mathbf{A}_{M}\boldsymbol{\Lambda}^{*}]_{:\mathbf{,}r}+\overline{{\bf n}}_{r}\nonumber \\ & =\mathbf{E}_{1}^{\mathrm{H}}{\bf h}_{1}^{*}\bullet(\alpha_{r}^{*}\mathbf{a}_{M}(\omega_{r},\mu_{r}))+\overline{{\bf n}}_{r}=\mathbf{E}_{1}^{\mathrm{H}}(\mathbf{A}_{M,1}^{*}\bullet\mathbf{a}_{M}(\omega_{r},\mu_{r}))\alpha_{r}^{*}\boldsymbol{\beta}_{1}^{*}+\overline{{\bf n}}_{r},\label{eq:yl_sparse} \end{align} where $\mathbf{A}_{M,1}^{*}\bullet\mathbf{a}_{M}(\omega_{r},\mu_{r})=[\mathbf{a}_{M}(\omega_{r}-\varphi_{1,1},\mu_{r}-\theta_{1,1}),...,\mathbf{a}_{M}(\omega_{r}-\varphi_{1,J_{1}},\mu_{r}-\theta_{1,J_{1}})]\in\mathbb{C}^{M\times J_{1}}$ and $\overline{{\bf n}}_{r}$ is the $r$-th column of $\overline{\mathbf{N}}_{1}$. Note that $\mathrm{Diag}\{{\bf h}_{1}^{*}\}[\mathbf{A}_{M}\boldsymbol{\Lambda}^{*}]_{:\mathbf{,}r}$ is the $r$-th column of $\mathbf{H}_{\mathrm{RIS}}$, which we denote as $\mathbf{h}_{\mathrm{RIS,\mathit{r}}}$. 
Since $\{(\omega_{r}-\varphi_{1,j})\}_{j=1}^{J_{1}}$ and $\{(\mu_{r}-\theta_{1,j})\}_{j=1}^{J_{1}}$ lie in the interval $[-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}},2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}]$, we can formulate (\ref{eq:yl_sparse}) as a $J_{1}$-sparse signal recovery problem \vspace{-0.25cm} \begin{equation} \overline{{\bf y}}_{r}=\mathbf{E}_{1}^{\mathrm{H}}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})\mathbf{b}_{r}+\overline{{\bf n}}_{r},\label{eq:sparse-formula} \end{equation} where $\mathbf{A}_{1}\in\mathbb{C}^{M_{1}\times D_{1}}$ and $\mathbf{A}_{2}\in\mathbb{C}^{M_{2}\times D_{2}}$ are overcomplete dictionary matrices $(D_{1}\geq M_{1},D_{2}\geq M_{2})$ with resolutions $\frac{1}{D_{1}}$ and $\frac{1}{D_{2}}$, respectively, and the columns of $\mathbf{A}_{1}$ and $\mathbf{A}_{2}$ contain values for $\mathbf{a}_{M_{1}}(\omega_{r}-\varphi_{1,j})$ and $\mathbf{a}_{M_{2}}(\mu_{r}-\theta_{1,j})$ on the angle grid, i.e., $\mathbf{\mathbf{A}_{\mathrm{1}}}=[\mathbf{a}_{M_{1}}(-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}),\mathbf{a}_{M_{1}}((-2+\frac{4}{D_{1}})\frac{d_{\mathrm{RIS}}}{\lambda_{c}}),\ldots,\mathbf{a}_{M_{1}}((2-\frac{4}{D_{1}})\frac{d_{\mathrm{RIS}}}{\lambda_{c}})]$ and $\mathbf{\mathbf{A}_{\mathrm{2}}}=[\mathbf{a}_{M_{2}}(-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}),\mathbf{a}_{M_{2}}((-2+\frac{4}{D_{2}})\frac{d_{\mathrm{RIS}}}{\lambda_{c}}),\ldots,\mathbf{a}_{M_{2}}((2-\frac{4}{D_{2}})\frac{d_{\mathrm{RIS}}}{\lambda_{c}})].$ In addition, $\mathbf{b}_{r}\in\mathbb{C}^{D_{1}D_{2}\times1}$ in (\ref{eq:sparse-formula}) is a sparse vector with $J_{1}$ nonzero entries corresponding to the cascaded channel path gains $\{\alpha_{r}^{*}\beta_{1,j}^{*}\}_{j=1}^{J_{1}}$. To obtain the best possible CS performance, the RIS phase shift training matrix $\mathbf{E}_{1}$ should be designed to ensure that the columns of the equivalent dictionary $\mathbf{E}_{1}^{\mathrm{H}}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})$ are orthogonal. A detailed design of $\mathbf{E}_{1}$ that achieves this goal can be found in \cite{Zhou_ULA_TSP}. A simpler method is to choose $\mathbf{E}_{1}$ as the random Bernoulli matrix, i.e., randomly generate the elements of $\mathbf{E}_{1}$ from $\{-1,+1\}$ with equal probability \cite{ris-omp-3}. Later in Section \ref{sec:Simulation-Results}, we will show that this random method has near-optimal performance, and provides a nearly orthogonal equivalent dictionary.\footnote{Please note that the number of scatterers in the user $1$-RIS channel, i.e., the sparsity level for the sparse recovery problem associated with (\ref{eq:sparse-formula}), denoted as $J_{1}$, is estimated via the selected CS-based techniques. For example, in Section \ref{sec:Simulation-Results}, the proposed estimation protocol adopts OMP as the recovery algorithm. In this case, the stopping criteria for this algorithm is based on the power of the residual error, i.e., the algorithm is stopped when the residual energy is smaller than a predefined threshold. Thus the number of iterations is treated as the estimate of $J_{1}$.} Using CS, we obtain the cascaded AoD pair, i.e., $(\omega_{r}-\varphi_{1,j},\mu_{r}-\theta_{1,j})$. The corresponding cascaded AoD, i.e., $(\omega_{r}-\varphi_{1,j})$ and $(\mu_{r}-\theta_{1,j})$, can be obtained similarly using the properties of the Kronecker product. 
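As an illustration of this step (and not the exact routine used in our simulations), the sketch below generates an on-grid instance of (\ref{eq:sparse-formula}) with a random $\pm1$ training matrix and recovers the support with a simple greedy OMP whose stopping rule is the residual-power criterion mentioned in the footnote. All sizes, the grid resolution ($D_{1}=M_{1}$, $D_{2}=M_{2}$) and $d_{\mathrm{RIS}}/\lambda_{c}=1/4$ are arbitrary example choices.
\begin{verbatim}
import numpy as np

def ula(P, f):
    return np.exp(-1j * 2 * np.pi * f * np.arange(P))

rng = np.random.default_rng(2)
M1, M2, tau, J1 = 8, 8, 40, 3                  # RIS UPA, pilot length, sparsity
D1, D2 = M1, M2                                # dictionary resolution (D >= M allowed)
grid1 = -0.5 + np.arange(D1) / D1              # cascaded AoD grid, d_RIS/lambda_c = 1/4
grid2 = -0.5 + np.arange(D2) / D2
A1 = np.stack([ula(M1, g) for g in grid1], axis=1)
A2 = np.stack([ula(M2, g) for g in grid2], axis=1)
Adict = np.kron(A1, A2)                        # A_1 kron A_2, size M x D1 D2

E1 = rng.choice([-1.0, 1.0], size=(M1 * M2, tau))   # random Bernoulli RIS training
Phi = E1.conj().T @ Adict                           # equivalent dictionary

b_true = np.zeros(D1 * D2, dtype=complex)           # J1-sparse cascaded-gain vector
support = rng.choice(D1 * D2, size=J1, replace=False)
b_true[support] = rng.standard_normal(J1) + 1j * rng.standard_normal(J1)
y = Phi @ b_true + 0.01 * (rng.standard_normal(tau) + 1j * rng.standard_normal(tau))

# greedy OMP, stopped on residual power (the sparsity level is not assumed known)
res, sel = y.copy(), []
while np.linalg.norm(res) > 0.05 * np.linalg.norm(y) and len(sel) < 10:
    sel.append(int(np.argmax(np.abs(Phi.conj().T @ res))))
    coef, *_ = np.linalg.lstsq(Phi[:, sel], y, rcond=None)
    res = y - Phi[:, sel] @ coef
print(sorted(sel), sorted(support.tolist()))         # recovered vs. true support
\end{verbatim}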
Assume that the $m$-th element of sparse vector $\mathbf{b}_{r}$ is nonzero, then the $m$-th column of $(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})$ is the corresponding cascaded steering vector. The corresponding indices in $\mathbf{\mathbf{A}_{\mathrm{1}}}$ and $\mathbf{\mathbf{A}_{\mathrm{2}}}$, denoted as $m_{1}$ and $m_{2}$, can be derived as \vspace{-0.1cm} \begin{equation} m_{1}=\left\lceil \frac{m}{D_{\mathrm{2}}}\right\rceil ,~m_{2}=m-D_{\mathrm{2}}(m_{1}-1).\label{m_lsub} \end{equation} Finally, we obtain the estimate of the cascaded AoD, i.e., $\{(\widehat{\omega_{r}-\varphi_{1,j}})\}_{j=1}^{\hat{J_{1}}}$ and $\{(\widehat{\mu_{r}-\theta_{1,j}})\}_{j=1}^{\hat{J_{1}}}$. As a result, $\mathbf{\widehat{h}}_{\mathrm{RIS,\mathit{r}}}$ is obtained according to (\ref{eq:yl_sparse}). Estimates of the other columns of $\mathbf{H}_{\mathrm{RIS}}$, i.e., $\{{\bf h}_{\mathrm{RIS},l}\}_{l\neq r}^{L}$, can be obtained by exploiting the correlation among different columns. To illustrate the correlation relationship, a compensation matrix $\mathbf{\mathrm{\Delta}H_{\mathit{l}}}$ with respect to the reference index $r$ is defined as \vspace{-0.2cm} \begin{equation} \begin{split}\mathbf{\mathrm{\Delta}H_{\mathit{l}}} & =\frac{\alpha_{l}^{*}}{\alpha_{r}^{*}}\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{l}-\omega_{r},\mu_{l}-\mu_{r})\}=\gamma_{l}\mathrm{Diag}\{\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})\},\end{split} \label{eq:compensate_mtx} \end{equation} where $\Delta\omega_{l}$, $\Delta\mu_{l}$ are rotation factors and $\gamma_{l}$ is a gain scaling factor given by \vspace{-0.15cm} \begin{equation} \Delta\omega_{l}=\omega_{l}-\omega_{r},~\Delta\mu_{l}=\mu_{l}-\mu_{r},~\gamma_{l}=\frac{\alpha_{l}^{*}}{\alpha_{r}^{*}}.\label{rot_fac_omega_mu_scale} \end{equation} Clearly, $\Delta\omega_{l},\Delta\mu_{l}\in[-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}},2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}]$. Then, we have \vspace{-0.2cm} \[ \begin{split}\mathbf{\mathrm{\Delta}H_{\mathit{l}}}{\bf h}_{\mathrm{RIS},r}= & \mathbf{\mathrm{\Delta}H_{\mathit{l}}}\mathrm{Diag}\{{\bf h}_{1}^{*}\}(\alpha_{r}^{*}\mathbf{a}_{M}(\omega_{r},\mu_{r}))=\mathbf{\mathrm{\Delta}H_{\mathit{l}}}\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{r},\mu_{r})\}(\alpha_{r}^{*}{\bf h}_{1}^{*})\\ = & \gamma_{l}\mathrm{Diag}\{\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})\}\mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{r})\otimes\mathbf{a}_{M_{2}}(\mu_{r})\}(\alpha_{r}^{*}{\bf h}_{1}^{*})\\ = & \mathrm{(Diag}\{\mathbf{a}_{M_{1}}(\Delta\omega_{l})\}\otimes\mathrm{Diag}\{\mathbf{a}_{M_{2}}(\Delta\mu_{l})\})(\mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{r})\}\otimes\mathrm{Diag}\{\mathbf{a}_{M_{2}}(\mu_{r})\})(\alpha_{l}^{*}{\bf h}_{1}^{*})\\ = & \mathrm{(Diag}\{\mathbf{a}_{M_{1}}(\Delta\omega_{l})\}\mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{r})\})\otimes\mathrm{(Diag}\{\mathbf{a}_{M_{2}}(\Delta\mu_{l})\}\mathrm{Diag}\{\mathbf{a}_{M_{2}}(\mu_{r})\})(\alpha_{l}^{*}{\bf h}_{1}^{*})\\ = & \mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{l})\}\otimes\mathrm{Diag}\{\mathbf{a}_{M_{2}}(\mu_{l})\}(\alpha_{l}^{*}{\bf h}_{1}^{*})={\bf h}_{\mathrm{RIS},l}. \end{split} \] This equality shows that we can estimate the compensation matrix $\mathbf{\mathrm{\Delta}H_{\mathit{l}}}$ instead of directly estimating ${\bf h}_{\mathrm{RIS},l}$. 
Specifically, ${\bf h}_{\mathrm{RIS},r}$ is estimated by applying CS to (\ref{eq:sparse-formula}), and ${\bf h}_{\mathrm{RIS},l}$ can be rewritten as \vspace{-0.3cm} \begin{equation} \begin{split}{\bf h}_{\mathrm{RIS},l}= & \gamma_{l}\mathrm{Diag}\{\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})\}{\bf h}_{\mathrm{RIS},r}=\mathrm{Diag}\{{\bf h}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})\gamma_{l}.\end{split} \label{eq:hris-l} \end{equation} We define $\mathbf{c}_{l}(\Delta\omega_{l},\Delta\mu_{l})=\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{\widehat{{\bf h}}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})$. Then, by replacing ${\bf h}_{\mathrm{RIS},r}$ with $\widehat{{\bf h}}_{\mathrm{RIS},r}+\Delta{\bf h}_{\mathrm{RIS},r}$, the $l$-th column of $\overline{\mathbf{\mathbf{Y}}}_{1}$ in (\ref{eq:Y1_formula}) is given by \vspace{-0.3cm} \begin{align} \overline{{\bf y}}_{l} & =\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{{\bf h}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})\gamma_{l}+\overline{{\bf n}}_{l}\nonumber \\ & =\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{\widehat{{\bf h}}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})+\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{\Delta{\bf h}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})+\overline{{\bf n}}_{l}\nonumber \\ & =\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{\widehat{{\bf h}}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})+\mathbf{n}_{\mathrm{noise}},\label{eq:formula-y_l} \end{align} where $\mathbf{n}_{\mathrm{noise}}\triangleq\mathbf{E}_{1}^{\mathrm{H}}\mathrm{Diag}\{\Delta{\bf h}_{\mathrm{RIS},r}\}\mathbf{a}_{M}(\Delta\omega_{l},\Delta\mu_{l})+\overline{{\bf n}}_{l}$ represents the corresponding noise vector and $\Delta{\bf h}_{\mathrm{RIS},r}$ is the estimation error of ${\bf h}_{\mathrm{RIS},r}$.\footnote{To reduce the error propagation, the reference index $r$ can be chosen based on the maximum received power criterion, i.e., $r=\mathrm{arg}\max_{i\in[1,\widehat{L}]}||\overline{{\bf y}}_{i}||^{2}$.} To find the optimal rotation factors $(\Delta\omega_{l},\Delta\mu_{l})$, a simple 2-D search method can be used: \vspace{-0.4cm} \begin{equation} (\Delta\widehat{\omega}_{l},\Delta\widehat{\mu}_{l})=\mathrm{arg}\max_{\Delta\omega,\Delta\mu\in[-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}},2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}]}\left|\left\langle \overline{{\bf y}}_{l},\mathbf{c}_{l}(\Delta\omega,\Delta\mu)\right\rangle \right|.\label{eq:2-D_search_Corr} \end{equation} The gain scaling factor $\gamma_{l}$ can be determined as the solution to the least square (LS) problem \vspace{-0.2cm} \begin{equation} \widehat{\gamma}_{l}=\mathrm{arg}\min_{x}||\overline{{\bf y}}_{l}-\mathbf{c}_{l}(\Delta\widehat{\omega}_{l},\Delta\widehat{\mu}_{l})x||,\label{eq:LS-x_l} \end{equation} whose solution is $\widehat{\gamma}_{l}=(\mathbf{c}_{l}^{\mathrm{H}}(\Delta\widehat{\omega}_{l},\Delta\widehat{\mu}_{l})\mathbf{c}_{l}(\Delta\widehat{\omega}_{l},\Delta\widehat{\mu}_{l}))^{-1}\mathbf{c}_{l}^{\mathrm{H}}(\Delta\widehat{\omega}_{l},\Delta\widehat{\mu}_{l})\overline{{\bf y}}_{l}$. Substituting the solutions of (\ref{eq:2-D_search_Corr}) and (\ref{eq:LS-x_l}) into (\ref{eq:hris-l}), we can obtain $\widehat{{\bf h}}_{\mathrm{RIS},l}$, $(1\leq l\leq L,l\neq r)$. 
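The following sketch mimics these two operations for one column: the 2-D correlation search of (\ref{eq:2-D_search_Corr}) followed by the closed-form LS gain of (\ref{eq:LS-x_l}). It is illustrative only: the reference column is replaced by a random surrogate, the true rotation factors are placed on the (coarse) search grid, and $d_{\mathrm{RIS}}/\lambda_{c}=1/4$ is assumed.
\begin{verbatim}
import numpy as np

def upa(M1, M2, w, u):
    a = lambda P, f: np.exp(-1j * 2 * np.pi * f * np.arange(P))
    return np.kron(a(M1, w), a(M2, u))

rng = np.random.default_rng(3)
M1, M2, tau = 8, 8, 40
M = M1 * M2
E1 = rng.choice([-1.0, 1.0], size=(M, tau))          # random +/-1 RIS training

# surrogate for the estimated reference column h_RIS,r (random, for illustration)
h_ref = rng.standard_normal(M) + 1j * rng.standard_normal(M)
dw_true, du_true, gamma_true = 0.0625, -0.1125, 0.8 - 0.3j
y_l = gamma_true * (E1.conj().T @ (h_ref * upa(M1, M2, dw_true, du_true)))
y_l = y_l + 0.01 * (rng.standard_normal(tau) + 1j * rng.standard_normal(tau))

# 2-D correlation search over the rotation factors
grid = np.linspace(-0.5, 0.5, 81)                    # range [-2 d/lambda_c, 2 d/lambda_c]
best = (-1.0, 0.0, 0.0)
for w in grid:
    for u in grid:
        c = E1.conj().T @ (h_ref * upa(M1, M2, w, u))
        val = abs(c.conj() @ y_l)
        if val > best[0]:
            best = (val, w, u)
_, w_hat, u_hat = best

# closed-form LS estimate of the gain scaling factor
c = E1.conj().T @ (h_ref * upa(M1, M2, w_hat, u_hat))
gamma_hat = (c.conj() @ y_l) / (c.conj() @ c)
print(w_hat, u_hat, np.round(gamma_hat, 3))          # ~0.0625 ~-0.1125 ~(0.8-0.3j)
\end{verbatim}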
Finally, the estimated cascaded channel of user $1$ is given by \vspace{-0.2cm} \begin{equation} \widehat{\mathbf{G}}_{1}=\mathbf{\widehat{A}}_{N}\widehat{{\bf H}}_{\mathrm{R\mathrm{IS}}}^{\mathrm{H}},\label{eq:G1_estimate} \end{equation} where $\widehat{{\bf H}}_{\mathrm{RIS}}=[\widehat{{\bf h}}_{\mathrm{RIS},1},\cdots,\widehat{{\bf h}}_{\mathrm{RIS},L}]$. Furthermore, the cascaded AoD in ${\bf h}_{\mathrm{RIS},l}$ can be obtained as \vspace{-0.5cm} \begin{equation} \omega_{l}-\varphi_{1,j}=(\omega_{r}-\varphi_{1,j})+\Delta\omega_{l},~\mu_{l}-\theta_{1,j}=(\mu_{r}-\theta_{1,j})+\Delta\mu_{l},\label{cascaded1_omega_mu} \end{equation} where the estimate of $(\omega_{r}-\varphi_{1,j},\mu_{r}-\theta_{1,j})$ and $(\Delta\omega_{l},\Delta\mu_{l})$ can be readily obtained from (\ref{eq:yl_sparse}) and (\ref{eq:2-D_search_Corr}), respectively. The overall estimation of $\mathbf{G}_{1}$ is summarized in Algorithm \ref{algorithm-2}. \begin{algorithm} \caption{Estimation of Full CSI for Typical User in Stage I} \label{algorithm-2} \begin{algorithmic}[1] \REQUIRE $\mathbf{Y}_{1}$. \STATE Return the estimated number of propagation paths between BS and RIS $\widehat{L}$ and AoA steering matrix $\mathbf{\widehat{A}}_{N}$ from Algorithm \ref{algorithm-1}; \STATE Calculate equivalent measurement matrix $\overline{\mathbf{\mathbf{Y}}}_{1}=[\overline{{\bf y}}_{1},\ldots,\overline{{\bf y}}_{\widehat{L}}]$; \STATE Choose the typical reference index $r$ and estimate ${\bf h}_{\mathrm{RIS},r}$ by solving sparse recovery problem associated with (\ref{eq:sparse-formula}); \label{Complexity_2_1} \FOR{$1\leq l\leq\widehat{L},l\neq r$} \STATE Estimate $(\Delta\omega_{l},\Delta\mu_{l})$ according to (\ref{eq:2-D_search_Corr});\label{Complexity_2_2} \STATE Estimate $\gamma_{l}$ according to (\ref{eq:LS-x_l}); \STATE Estimate ${\bf h}_{\mathrm{RIS},l}$ according to (\ref{eq:hris-l}); \ENDFOR \ENSURE $\widehat{\mathbf{G}}_{1}=\mathbf{\widehat{A}}_{N}[\widehat{{\bf h}}_{\mathrm{RIS},1},\cdots,\widehat{{\bf h}}_{\mathrm{RIS},\widehat{L}}]^{\mathrm{H}}$. \end{algorithmic} \end{algorithm} \vspace{-0.3cm} \subsection{Stage II: Estimation of Full CSI for Other Users\label{subsec:Stage-II:-Estimation}} \vspace{-0.1cm} In this subsection, the property that all users share the common RIS-BS channel is invoked for reducing the pilot overhead of channel estimation. First, we re-exploit the structure of the cascaded channel $\mathbf{G}_{k}$, and then divide it into two parts, i.e., a common part and a unique part. Then, only re-estimating the unique part is necessary for obtaining the full CSI of the other users. \subsubsection{Re-express Cascaded Channel\label{subsec:Re-express-Cascaded-Channel}} In order to illustrate the necessity of re-expressing cascaded channel $\mathbf{G}_{k}$, let us recall its structure and see why the common RIS-BS channel $\mathbf{H}$ cannot be obtained in Stage I. According to (\ref{eq:G1}), all users share the common $\mathbf{H}$ consisting of three matrices, i.e., $\mathbf{A}_{N}$, $\boldsymbol{\Lambda}$ and $\mathbf{A}_{M}$. The first, $\mathbf{A}_{N}$, is estimated in Stage I. However, $\boldsymbol{\Lambda}$ and $\mathbf{A}_{M}$ cannot be extracted separately from $\mathbf{G}_{1}$ since we can only estimate the spatial frequencies of the cascaded AoDs, i.e., $(\omega_{l}-\varphi_{1,j})$, $(\mu_{l}-\theta_{1,j})$ and the cascaded gains, i.e., $\alpha_{l}\beta_{1,j}$ for any $l$ and $j$. 
If other users only utilize the obtained $\mathbf{\hat{A}}_{N}$, the estimation for these users is the same as that of the typical user, and thus the pilot overhead cannot be decreased further. Therefore, we aim to fully exploit the structure of $\mathbf{H}$ so as to also utilize the common-channel information contained in $\boldsymbol{\Lambda}$ and $\mathbf{A}_{M}$. Motivated by this, we decompose the cascaded channel $\mathbf{G}_{k}$ into two parts, i.e., a common part and a unique part, where the common part can be obtained from the estimation of $\mathbf{G}_{1}$ in Stage I. The constructed common part carries the full information of $\mathbf{A}_{N}$, together with the re-parameterized information of $\boldsymbol{\Lambda}$ and $\mathbf{A}_{M}$, so as to achieve the full exploitation of $\mathbf{H}$. Then, we only need to re-estimate the unique part of the cascaded channel for the other users. To this end, we denote the common part of $\mathbf{G}_{k}$ as $\mathbf{H}_{\mathrm{s}}\in\mathbb{C}^{N\times M}$, which can be regarded as a substitute for $\mathbf{H}$ constructed from the estimate of $\mathbf{G}_{1}$. Similarly, the unique part of $\mathbf{G}_{k}$ is denoted by $\mathbf{h}_{\mathrm{s},k}\in\mathbb{C}^{M\times1}$, which can be regarded as a substitute for $\mathbf{h}_{k}$. Then, $\mathbf{G}_{k}$ can be re-expressed as \vspace{-0.35cm} \begin{equation} \mathbf{G}_{k}=\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\}\in\mathbb{C}^{N\times M},\forall k\in\mathcal{K}.\label{paril-G} \end{equation} \vspace{-0.2cm} In the following, we first construct the common part $\mathbf{H}_{\mathrm{s}}$ with the knowledge obtained in Stage I. Then, we estimate each user's unique part $\mathbf{h}_{\mathrm{s},k}$. \subsubsection{Construction of Common Part} Define the average of user $1$'s complex gains $\boldsymbol{\beta}_{1}$ as $\overline{\beta}=\frac{1}{J_{1}}\mathbf{1}_{J_{1}}^{\mathrm{T}}\boldsymbol{\beta}_{1}$; then we have \vspace{-0.3cm} \begin{align} \boldsymbol{\Lambda} & =\mathrm{Diag}\{\alpha_{1},\alpha_{2},\ldots,\alpha_{L}\}=\alpha_{r}\mathrm{Diag}\{\gamma_{1}^{*},\gamma_{2}^{*},\ldots,\gamma_{L}^{*}\}\nonumber \\ & =\frac{1}{\overline{\beta}}\overline{\beta}\alpha_{r}\mathrm{Diag}\{\gamma_{1}^{*},\gamma_{2}^{*},\ldots,\gamma_{L}^{*}\}\triangleq\frac{1}{\overline{\beta}}\boldsymbol{\Lambda}_{\mathrm{s}}.\label{eq:gamma_common} \end{align} Here, $\boldsymbol{\Lambda}_{\mathrm{s}}=(\frac{1}{J_{1}}\mathbf{1}_{J_{1}}^{\mathrm{T}}\boldsymbol{\beta}_{1}\alpha_{r})\mathrm{Diag}\{\gamma_{1}^{*},\gamma_{2}^{*},\ldots,\gamma_{L}^{*}\}$. Obviously, $\boldsymbol{\beta}_{1}\alpha_{r}$ can be obtained by solving the sparse recovery problem corresponding to (\ref{eq:yl_sparse}) and $\gamma_{l}$ can be obtained according to (\ref{eq:LS-x_l}). Thus, the constructed matrix $\boldsymbol{\Lambda}_{\mathrm{s}}$ can be readily calculated. Similarly, the matrix $\mathbf{A}_{M}$ can be rewritten as \vspace{-0.45cm} \begin{equation} \begin{split}\mathbf{A}_{M}= & [\mathbf{a}_{M}(\omega_{1},\mu_{1}),\ldots,\mathbf{a}_{M}(\omega_{L},\mu_{L})]=\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{r},\mu_{r})\}\mathbf{A}_{\Delta M},\end{split} \label{eq:AoD_common_part} \end{equation} where $\mathbf{A}_{\Delta M}=[\mathbf{a}_{M}(\Delta\omega_{1},\Delta\mu_{1}),\ldots,\mathbf{a}_{M}(\Delta\omega_{L},\Delta\mu_{L})]$. Note that the rotation factors $\Delta\omega_{l}$, $\Delta\mu_{l}$ can be obtained by Algorithm \ref{algorithm-2}, but $(\omega_{r},\mu_{r})$ itself cannot be estimated directly, since Stage I only provides the cascaded AoDs.
Instead, we introduce two parameters, $\omega_{\mathrm{s}}=\frac{1}{J_{1}}\sum_{j=1}^{J_{1}}(\omega_{r}-\varphi_{1,j})$ and $\mu_{\mathrm{s}}=\frac{1}{J_{1}}\sum_{j=1}^{J_{1}}(\mu_{r}-\theta_{1,j})$, as substitutes for $\omega_{r}$ and $\mu_{r}$, which can be readily obtained since $(\omega_{r}-\varphi_{1,j})$ and $(\mu_{r}-\theta_{1,j})$ for $\forall j\in\{1,...,J_{1}\}$ have been estimated in Algorithm \ref{algorithm-2}. Then, define $\overline{\varphi_{1}}$ as $(-\frac{1}{J_{1}}\sum_{j=1}^{J_{1}}\varphi_{1,j})$ and $\overline{\theta_{1}}$ as $(-\frac{1}{J_{1}}\sum_{j=1}^{J_{1}}\theta_{1,j})$. The following relationship exists between $(\omega_{\mathrm{s}},\mu_{\mathrm{s}})$ and $(\omega_{r},\mu_{r})$: \vspace{-0.1cm} \begin{equation} \omega_{\mathrm{s}}=\frac{1}{J_{1}}\sum_{j=1}^{J_{1}}(\omega_{r}-\varphi_{1,j})=\omega_{r}+\overline{\varphi_{1}},~\mu_{\mathrm{s}}=\frac{1}{J_{1}}\sum_{j=1}^{J_{1}}(\mu_{r}-\theta_{1,j})=\mu_{r}+\overline{\theta_{1}}.\label{omega_mu_common} \end{equation} Based on the above definitions, $\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{r},\mu_{r})\}$ in (\ref{eq:AoD_common_part}) can be represented as \vspace{-0.25cm} \[ \begin{aligned}\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{r},\mu_{r})\} & =\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{\mathrm{s}}-\overline{\varphi_{1}},\mu_{\mathrm{s}}-\overline{\theta_{1}})\}=\mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{\mathrm{s}}-\overline{\varphi_{1}})\otimes\mathbf{a}_{M_{2}}(\mu_{\mathrm{s}}-\overline{\theta_{1}})\}\\ & =(\mathrm{Diag}\{\mathbf{a}_{M_{1}}(-\overline{\varphi_{1}})\}\mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{\mathrm{s}})\})\otimes(\mathrm{Diag}\{\mathbf{a}_{M_{2}}(-\overline{\theta_{1}})\}\mathrm{Diag}\{\mathbf{a}_{M_{2}}(\mu_{\mathrm{s}})\})\\ & =(\mathrm{Diag}\{\mathbf{a}_{M_{1}}(-\overline{\varphi_{1}})\}\otimes\mathrm{Diag}\{\mathbf{a}_{M_{2}}(-\overline{\theta_{1}})\})(\mathrm{Diag}\{\mathbf{a}_{M_{1}}(\omega_{\mathrm{s}})\}\otimes\mathrm{Diag}\{\mathbf{a}_{M_{2}}(\mu_{\mathrm{s}})\})\\ & =\mathrm{Diag}\{\mathbf{a}_{M}(-\overline{\varphi_{1}},-\overline{\theta_{1}})\}\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{\mathrm{s}},\mu_{\mathrm{s}})\}. \end{aligned} \] Then, combining this equality with (\ref{eq:AoD_common_part}), $\mathbf{A}_{M}$ can be rewritten as \vspace{-0.35cm} \begin{equation} \begin{aligned}\mathbf{A}_{M}=\mathrm{Diag}\{\mathbf{a}_{M}(-\overline{\varphi_{1}},-\overline{\theta_{1}})\}\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{\mathrm{s}},\mu_{\mathrm{s}})\}\mathbf{A}_{\Delta M}\triangleq\mathrm{Diag}\{\mathbf{a}_{M}(-\overline{\varphi_{1}},-\overline{\theta_{1}})\}\mathbf{A}_{\mathrm{s}},\end{aligned} \label{eq:AoD_common} \end{equation} where $\mathbf{A}_{\mathrm{s}}=\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{\mathrm{s}},\mu_{\mathrm{s}})\}\mathbf{A}_{\Delta M}$ can be readily estimated using Algorithm \ref{algorithm-2}.
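As a quick consistency check of (\ref{eq:AoD_common}), the following numpy sketch builds $\mathbf{A}_{\mathrm{s}}$ from randomly drawn angles and verifies the factorization numerically; the linear-phase steering convention (so that spatial frequencies add elementwise) and all sizes are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def a_vec(M1, M2, x, y):
    # assumed UPA response a_M(x, y) = a_M1(x) (x) a_M2(y)
    return np.kron(np.exp(1j * 2 * np.pi * x * np.arange(M1)),
                   np.exp(1j * 2 * np.pi * y * np.arange(M2)))

M1 = M2 = 4
L, J1, r = 3, 2, 0
omega, mu = rng.uniform(-0.5, 0.5, L), rng.uniform(-0.5, 0.5, L)       # RIS-BS AoDs
phi1, theta1 = rng.uniform(-0.5, 0.5, J1), rng.uniform(-0.5, 0.5, J1)  # user-1 angles
phi1_bar, theta1_bar = -phi1.mean(), -theta1.mean()                    # as defined above
omega_s, mu_s = omega[r] + phi1_bar, mu[r] + theta1_bar                # (omega_mu_common)

A_M  = np.stack([a_vec(M1, M2, omega[l], mu[l]) for l in range(L)], axis=1)
A_dM = np.stack([a_vec(M1, M2, omega[l] - omega[r], mu[l] - mu[r]) for l in range(L)], axis=1)
A_s  = np.diag(a_vec(M1, M2, omega_s, mu_s)) @ A_dM
# factorization (eq:AoD_common): A_M = Diag{a_M(-phi1_bar, -theta1_bar)} A_s
assert np.allclose(A_M, np.diag(a_vec(M1, M2, -phi1_bar, -theta1_bar)) @ A_s)
\end{verbatim}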
Based on (\ref{eq:gamma_common}) and (\ref{eq:AoD_common}), the common RIS-BS channel matrix $\mathbf{H}$ in (\ref{eq:H1-1}) is re-expressed as \vspace{-0.2cm} \begin{equation} \begin{split}\mathbf{H} & =\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}=\mathbf{A}_{N}\frac{1}{\overline{\beta}}\boldsymbol{\Lambda}_{\mathrm{s}}\mathbf{A}_{\mathrm{s}}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\}\triangleq\frac{1}{\overline{\beta}}\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\},\end{split} \label{eq:H_common} \end{equation} where $\mathbf{H}_{\mathrm{s}}=\mathbf{A}_{N}\boldsymbol{\Lambda}_{\mathrm{s}}\mathbf{A}_{\mathrm{s}}^{\mathrm{H}}$ is the common part of the cascaded channel that can be estimated using Algorithm \ref{algorithm-1} and Algorithm \ref{algorithm-2}. Then, combining (\ref{eq:H_common}) with (\ref{paril-G}), we have \vspace{-0.15cm} \begin{align} \mathbf{G}_{k} & =\mathbf{H}\mathrm{Diag}\{\mathbf{h}_{k}\}=\frac{1}{\overline{\beta}}\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\}\mathrm{Diag}\{\mathbf{h}_{k}\}\nonumber \\ & =\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\frac{1}{\overline{\beta}}\mathrm{Diag}\{\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\}\mathbf{h}_{k}\}=\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\},\label{eq:Gk_common} \end{align} where $\mathbf{h}_{\mathrm{s},k}=\frac{1}{\overline{\beta}}\mathrm{Diag}\{\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\}\mathbf{h}_{k}$ is the unique part of user $k$'s channel, that needs to be obtained. Next we will show how to estimate the unique part and present the channel estimation strategy for other users, leading to a significant reduction in the pilot overhead. \subsubsection{Estimation of Unique Part} Denote the estimate of $\mathbf{H}_{\mathrm{s}}$ as $\mathbf{\widehat{H}}_{\mathrm{s}}=\mathbf{\widehat{A}}_{N}\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}\widehat{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}}$ where $\mathbf{\widehat{A}}_{N}$, $\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}$, and $\widehat{\mathbf{A}}_{\mathrm{s}}$ are the estimates of $\mathbf{A}_{N}$, $\boldsymbol{\Lambda}_{\mathrm{s}}$, and $\mathbf{A}_{\mathrm{s}}$, respectively. 
By replacing $\mathbf{H}_{\mathrm{s}}$ with $\mathbf{\widehat{H}}_{\mathrm{s}}+\Delta\mathbf{H}_{\mathrm{s}}$ where $\Delta\mathbf{H}_{\mathrm{s}}$ represents the error between $\mathbf{H}_{\mathrm{s}}$ and its estimate, user $k$'s received data $\mathbf{Y}_{k}$ after eliminating the effects of the estimated common AoAs is expressed as \vspace{-0.2cm} \begin{align} \frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{k} & =\frac{1}{N}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\}\mathbf{E}_{k}+\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{N}_{k}\nonumber \\ & =\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}\widehat{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\}\mathbf{E}_{k}+\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{N}_{k}+\frac{1}{N}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\Delta\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\}\mathbf{E}_{k}.\label{eq:LT_Yk_common} \end{align} For the estimation of $\mathbf{h}_{\mathrm{s},k}$, define $\mathbf{w}_{k}=\mathrm{vec}(\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{k})\in\mathbb{C}^{L\tau_{k}\times1}$. Then, we have \vspace{-0.25cm} \begin{equation} \begin{split}\mathbf{w}_{k} & =\mathrm{vec}(\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}\widehat{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\}\mathbf{E}_{k})+\widetilde{\mathbf{n}}_{k}=(\mathbf{E}_{k}^{\mathrm{T}}\diamond\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}\widehat{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}})\mathbf{h}_{\mathrm{s},k}+\widetilde{\mathbf{n}}_{k}=\mathbf{W}_{k}\mathbf{h}_{\mathrm{s},k}+\widetilde{\mathbf{n}}_{k},\end{split} \label{eq:Wk_} \end{equation} where $\mathbf{W}_{k}\triangleq(\mathbf{E}_{k}^{\mathrm{T}}\diamond\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}\widehat{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}})$ and $\widetilde{\mathbf{n}}_{k}$ is the corresponding equivalent noise vector given by $\mathrm{vec}(\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{N}_{k}+\frac{1}{N}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\Delta\mathbf{H}_{\mathrm{s}}\mathrm{Diag}\{\mathbf{h}_{\mathrm{s},k}\}\mathbf{E}_{k})\in\mathbb{C}^{L\tau_{k}\times1}$. The second equality is obtained via $\mathrm{vec}(\mathbf{A}\mathrm{Diag}\{\mathbf{b}\}\mathbf{C})=(\mathbf{C}^{\mathrm{T}}\diamond\mathbf{A})\mathbf{b}$ \cite{Xinda2017}. Then, substituting $\mathbf{h}_{k}=\mathbf{A}_{M,k}\boldsymbol{\beta}_{k}$ in (\ref{eq:hk}) into $\mathbf{h}_{\mathrm{s},k}$, we have \vspace{-0.25cm} \begin{equation} \begin{split}\mathbf{h}_{\mathrm{s},k} & =\frac{1}{\overline{\beta}}\mathrm{Diag}\{\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\}\mathbf{A}_{M,k}\boldsymbol{\beta}_{k}=(\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\bullet\mathbf{A}_{M,k})\frac{1}{\overline{\beta}}\boldsymbol{\beta}_{k},\end{split} \label{eq:hc_k} \end{equation} where $\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\bullet\mathbf{A}_{M,k}=[\mathbf{a}_{M}(\varphi_{k,1}+\overline{\varphi_{1}},\theta_{k,1}+\overline{\theta_{1}}),\ldots,\mathbf{a}_{M}(\varphi_{k,J_{k}}+\overline{\varphi_{1}},\theta_{k,J_{k}}+\overline{\theta_{1}})]\in\mathbb{C}^{M\times J_{k}}$.
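The second equality in (\ref{eq:Wk_}) rests on the identity $\mathrm{vec}(\mathbf{A}\mathrm{Diag}\{\mathbf{b}\}\mathbf{C})=(\mathbf{C}^{\mathrm{T}}\diamond\mathbf{A})\mathbf{b}$, which can be checked numerically as in the sketch below; the sizes and the simple Khatri-Rao helper are purely illustrative.
\begin{verbatim}
import numpy as np

def khatri_rao(X, Y):
    # column-wise Kronecker product: result[:, m] = kron(X[:, m], Y[:, m])
    return np.einsum('im,jm->ijm', X, Y).reshape(X.shape[0] * Y.shape[0], X.shape[1])

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
C = rng.standard_normal((3, 5)) + 1j * rng.standard_normal((3, 5))
b = rng.standard_normal(3) + 1j * rng.standard_normal(3)

lhs = (A @ np.diag(b) @ C).reshape(-1, order='F')   # column-major vec(.)
rhs = khatri_rao(C.T, A) @ b                        # (C^T diamond A) b
assert np.allclose(lhs, rhs)
\end{verbatim}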
Since both $\varphi_{k,j}+\overline{\varphi_{1}}$ and $\theta_{k,j}+\overline{\theta_{1}}$ lie within $[-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}},2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}]$, we can formulate (\ref{eq:Wk_}) as a $J_{k}$-sparse signal recovery problem \vspace{-0.25cm} \begin{equation} \begin{split}\mathbf{w}_{k}= & \mathbf{W}_{k}\mathbf{h}_{\mathrm{s},k}+\widetilde{\mathbf{n}}_{k}=\mathbf{W}_{k}(\mathbf{a}_{M}(\overline{\varphi_{1}},\overline{\theta_{1}})\bullet\mathbf{A}_{M,k})\frac{1}{\overline{\beta}}\boldsymbol{\beta}_{k}+\widetilde{\mathbf{n}}_{k}=\mathbf{W}_{k}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})\mathbf{d}_{k}+\widetilde{\mathbf{n}}_{k}.\end{split} \label{eq:wk_sparse} \end{equation} Here $\mathbf{A}_{1}\in\mathbb{C}^{M_{1}\times D_{1}}$ and $\mathbf{A}_{2}\in\mathbb{C}^{M_{2}\times D_{2}}$ are overcomplete dictionary matrices similar to (\ref{eq:sparse-formula}) satisfying $D_{1}\geq M_{1}$ and $D_{2}\geq M_{2}$, and $\mathbf{d}_{k}\in\mathbb{C}^{D_{1}D_{2}\times1}$ is a sparse vector with $J_{k}$ nonzero entries corresponding to $\{\frac{1}{\overline{\beta}}\beta_{k,j}\}_{j=1}^{J_{k}}$. Hence, the angle estimation problem corresponding to (\ref{eq:wk_sparse}) can be solved using CS-based techniques. To improve the estimation performance, the alternating optimization (AO) method in \cite{Zhou_ULA_TSP} can be adopted to optimize the RIS phase shift training matrix $\mathbf{E}_{k}$ so as to ensure the near column-orthogonality of the equivalent dictionary $\mathbf{W}_{k}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})$. In addition, the number of scatterers in the user $k$-RIS channel, i.e., the sparsity level $J_{k}$ of the sparse recovery problem associated with (\ref{eq:wk_sparse}), is estimated by the selected CS-based technique, similarly to the estimation of $J_{1}$ discussed before. Note that we obtain the equivalent AoA pairs of user $k$'s user-RIS channel, i.e., $(\varphi_{k,j}+\overline{\varphi_{1}},\theta_{k,j}+\overline{\theta_{1}})$, by solving the angle estimation problem based on (\ref{eq:wk_sparse}). The corresponding equivalent AoAs, i.e., $(\varphi_{k,j}+\overline{\varphi_{1}})$ and $(\theta_{k,j}+\overline{\theta_{1}})$, can be obtained similarly to (\ref{m_lsub}). Assume that the $p$-th element of the sparse vector $\mathbf{d}_{k}$ is nonzero; then the corresponding indices in $\mathbf{A}_{\mathrm{1}}$ and $\mathbf{A}_{\mathrm{2}}$ in (\ref{eq:wk_sparse}), denoted by $p_{1}$ and $p_{2}$, are derived as \vspace{-0.2cm} \begin{equation} p_{1}=\left\lceil \frac{p}{D_{\mathrm{2}}}\right\rceil ,~p_{2}=p-D_{\mathrm{2}}(p_{1}-1).\label{pl_sub} \end{equation} Finally, we obtain an estimate of the equivalent AoA spatial frequencies for user $k$'s user-RIS channel, i.e., $\{(\widehat{\varphi_{k,j}+\overline{\varphi_{1}}})\}_{j=1}^{\hat{J_{k}}}$ and $\{(\widehat{\theta_{k,j}+\overline{\theta_{1}}})\}_{j=1}^{\hat{J_{k}}}$.
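For completeness, a minimal OMP routine of the kind referred to above might look as follows; the fixed-sparsity stopping rule and all names are simplifying assumptions, and any off-the-shelf CS solver could be substituted. Each recovered support index $p$ then maps back to a grid pair $(p_{1},p_{2})$ via (\ref{pl_sub}).
\begin{verbatim}
import numpy as np

# Hedged OMP sketch for the J_k-sparse recovery in (eq:wk_sparse); F stands for the
# equivalent sensing matrix W_k (A_1 (x) A_2) and w for the measurement vector w_k.
def omp(F, w, J):
    residual, support = w.copy(), []
    for _ in range(J):
        idx = int(np.argmax(np.abs(F.conj().T @ residual)))          # most correlated atom
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(F[:, support], w, rcond=None)   # LS on the active set
        residual = w - F[:, support] @ coeffs
    d_hat = np.zeros(F.shape[1], dtype=complex)
    d_hat[support] = coeffs
    return d_hat, support
\end{verbatim}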
Furthermore, user $k$'s cascaded AoDs, i.e., $(\omega_{l}-\varphi_{k,j})$ and $(\mu_{l}-\theta_{k,j})$, for $\forall l\in\{1,...,L\}$ and $\forall j\in\{1,...,J_{k}\}$, can be also obtained as follows:\vspace{-0.6cm} \begin{subequations} \label{cascaded_omega_mu} \begin{align} \omega_{l}-\varphi_{k,j} & =\omega_{r}-\varphi_{k,j}+\omega_{l}-\omega_{r}\nonumber \\ & =\omega_{r}+\overline{\varphi_{1}}-(\overline{\varphi_{1}}+\varphi_{k,j})+\omega_{l}-\omega_{r}=\omega_{\mathrm{s}}-(\overline{\varphi_{1}}+\varphi_{k,j})+\Delta\omega_{l},\label{eq:cascaded_omega}\\ \mu_{l}-\theta_{k,j} & =\mu_{r}-\theta_{k,j}+\mu_{l}-\mu_{r}\nonumber \\ & =\mu_{r}+\overline{\theta_{1}}-(\overline{\theta_{1}}+\theta_{k,j})+\mu_{l}-\mu_{r}=\mu_{\mathrm{s}}-(\overline{\theta_{1}}+\theta_{k,j})+\Delta\mu_{l}.\label{eq:cascaded_mu} \end{align} \end{subequations} \vspace{-0.25cm} Based on (\ref{rot_fac_omega_mu_scale}), (\ref{omega_mu_common}) and (\ref{eq:wk_sparse}), the parameters $\Delta\omega_{l}$, $\Delta\mu_{l}$, $\omega_{\mathrm{s}}$, $\mu_{\mathrm{s}}$, $(\overline{\varphi_{1}}+\varphi_{k,j})$ and $(\overline{\theta_{1}}+\theta_{k,j})$ for $\forall l\in\{1,2,...,L\}$ and $\forall j\in\{1,2,...,J_{k}\}$ can be readily estimated. Finally, the completed CS-based estimation of $\mathbf{G}_{k}$ for $2\leq k\leq K$ is summarized in Algorithm \ref{algorithm-3}. As shown in Algorithm \ref{algorithm-3}, the obtained common part of cascaded channel $\mathbf{H}_{\mathrm{s}}$ allows us to estimate the unique part $\mathbf{h}_{\mathrm{s},k}$ with reduced pilot overhead. \begin{algorithm} \caption{Estimation of Full CSI for Other Users in Stage II} \label{algorithm-3} \begin{algorithmic}[1] \REQUIRE $\mathbf{Y}_{k}$, $\mathbf{\widehat{A}}_{N}$. \STATE Obtain the estimate $\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}$ based on (\ref{eq:gamma_common}); \STATE Obtain the estimate $\widehat{\mathbf{A}}_{\mathrm{s}}$ based on (\ref{eq:AoD_common}); \STATE Obtain the estimate of the common part, i.e., $\mathbf{\widehat{H}}_{\mathrm{s}}=\mathbf{\widehat{A}}_{N}\widehat{\boldsymbol{\Lambda}}_{\mathrm{s}}\widehat{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}}$; \FOR{$2\leq k\leq K$} \STATE Calculate $\mathbf{w}_{k}=\mathrm{vec}(\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{k})$; \STATE Calculate equivalent dictionary $\mathbf{W}_{k}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})$ according to (\ref{eq:Wk_}); \STATE Estimate unique part $\mathbf{h}_{\mathrm{s},k}$ by solving sparse recovery problem associated with (\ref{eq:wk_sparse});\label{complexity_3_1} \STATE Obtain the estimate of cascaded channel, i.e., $\widehat{\mathbf{G}}_{k}=\widehat{\mathbf{H}}_{\mathrm{s}}\mathrm{Diag}\{\widehat{\mathbf{h}}_{\mathrm{s},k}\}$; \ENDFOR \ENSURE $\widehat{\mathbf{G}}_{k},2\leq k\leq K$. \end{algorithmic} \end{algorithm} \vspace{-0.35cm} \subsection{Pilot Overhead and Computational Complexity Analysis\label{subsec:complexity_analysis}} \vspace{-0.15cm} In this subsection, we first analyze the pilot overhead required for the full CSI estimation. Then, the corresponding computational complexity is evaluated. For simplicity, $J_{1}=J_{2}=\cdots=J_{K}=J$ is assumed. \subsubsection{Pilot Overhead Analysis \label{subsec:Pilot-Overhead}} Clearly, the number of pilot symbols directly affects the sparse recovery performance for equations (\ref{eq:sparse-formula}) and (\ref{eq:wk_sparse}). 
According to \cite{Sparse-Recovery_Measurement}, to find an $l$-sparse complex signal (vector) with dimension $n$, the number of measurements $m$ is required to be on the order of $\mathcal{O}(l\log(n))$, which is proportional to the sparsity level $l$. Based on this fact, we first analyze the number of pilots required for the typical user, i.e., user $1$. For the sparse recovery problem associated with (\ref{eq:sparse-formula}) in Stage I, the dimension of the equivalent sensing matrix $\mathbf{F}_{\mathrm{1}}\triangleq\mathbf{E}_{1}^{\mathrm{H}}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})$ is $\tau_{1}\times D_{1}D_{2}$ where $D_{1}\geq M_{1}$ and $D_{2}\geq M_{2}$, and the corresponding sparsity level is $J_{1}$; thus, the pilot overhead required for user $1$ should satisfy $\tau_{1}\geq\mathcal{O}(J_{1}\log(D_{1}D_{2}))\geq\mathcal{O}(J_{1}\log(M_{1}M_{2}))=\mathcal{O}(J_{1}\log(M))$. For the sparse recovery problem associated with (\ref{eq:wk_sparse}) in Stage II, the dimension of the equivalent sensing matrix $\mathbf{F}_{k}\triangleq\mathbf{W}_{k}(\mathbf{A}_{\mathrm{1}}\otimes\mathbf{A}_{\mathrm{2}})$ is $L\tau_{k}\times D_{1}D_{2}$ where $D_{1}\geq M_{1}$ and $D_{2}\geq M_{2}$, and the corresponding sparsity level is $J_{k}$; thus, user $k$ needs $\tau_{k}\geq\mathcal{O}(J_{k}\log(D_{1}D_{2})/L)\geq\mathcal{O}(J_{k}\log(M_{1}M_{2})/L)=\mathcal{O}(J_{k}\log(M)/L)$ pilot symbols. Therefore, the overall required pilot overhead in the first coherence block is $\mathcal{O}(J\log(M)+(K-1)J\log(M)/L)$. \vspace{-0.2cm} \subsubsection{Computational Complexity Analysis\label{subsec:Computational-Complex}} For the estimation of the typical user in Stage I shown in Algorithm \ref{algorithm-2}, the computational complexity mainly stems from Algorithm \ref{algorithm-1} in Step 1, the CS-based method for the estimation of ${\bf h}_{\mathrm{RIS},r}$ in Step \ref{Complexity_2_1}, and the correlation-based scheme in Step \ref{Complexity_2_2}. Specifically, the dominant complexity of Algorithm \ref{algorithm-1} stems from calculating the matrix multiplication in its Step \ref{Complexity_1_1}, with computational complexity $\mathcal{O}(N^{2}\tau_{1})$, and implementing the angle rotation in its Step \ref{Complexity_1_2}, with computational complexity $\mathcal{O}(N\tau_{1}L(g_{1}+g_{2}))$. We take OMP as the recovery algorithm, whose dominant complexity is $\mathcal{O}(mnl)$ \cite{Zhou_ULA_TSP}, where $m$ is the length of the measurement vector and $n$ is the length of the sparse signal with sparsity level $l$. Hence, the complexity for estimating ${\bf h}_{\mathrm{RIS},r}$ is $\mathcal{O}(\tau_{1}D_{1}D_{2}J_{1})$. Additionally, the computational complexity of the correlation-based scheme is given by $\mathcal{O}(M\tau_{1}(L-1)d_{1}d_{2})$, where $d_{1}$ and $d_{2}$ represent the numbers of search grid points for $\Delta\omega_{l}$ and $\Delta\mu_{l}$ within $[-2\frac{d_{\mathrm{RIS}}}{\lambda_{c}},2\frac{d_{\mathrm{RIS}}}{\lambda_{c}}]$, respectively. The overall computational complexity in Stage I is $\mathcal{O}(\tau_{1}D_{1}D_{2}J+N^{2}\tau_{1}+N\tau_{1}L(g_{1}+g_{2})+M\tau_{1}(L-1)d_{1}d_{2})$. Then, we analyze the computational complexity for the estimation of the other users in Stage II shown in Algorithm \ref{algorithm-3}, which mainly stems from the CS-based method for the estimation of $\mathbf{h}_{\mathrm{s},k}$ in Step \ref{complexity_3_1}.
Similarly, we choose OMP to solve the sparse recovery problem associated with (\ref{eq:wk_sparse}), and thus the corresponding computational complexity is $\mathcal{O}(\tau_{k}LD_{1}D_{2}J_{k})$. Considering the $(K-1)$ remaining users, the overall computational complexity in Stage II during the first coherence block is $\mathcal{O}((K-1)\tau_{k}LD_{1}D_{2}J)$.\vspace{-0.2cm} \section{Channel Estimation in Remaining Coherence Blocks\label{sec:Remaining_Coherence}} \vspace{-0.15cm} After the first coherence block, we adopt the LS estimator to re-estimate the cascaded gains, since the angles remain unchanged during the remaining coherence blocks. Later we will see that the required pilot overhead can be reduced further in this stage. Without loss of generality, we consider an arbitrary $k$ from $\mathcal{K}$ and show how to re-estimate user $k$'s channel gains. Similar to (\ref{eq:Y1_formula}), we first form user $k$'s equivalent measurement matrix $\overline{\mathbf{\mathbf{Y}}}_{k}$, i.e., $\overline{\mathbf{\mathbf{Y}}}_{k}=(\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{k})^{\mathrm{H}}\in\mathbb{C}^{\tau_{k}\times L}$, where $\mathbf{\widehat{A}}_{N}$ has been acquired in Stage I. Then, following the same derivations as for (\ref{eq:yl_sparse}), the $r$-th column of $\overline{\mathbf{\mathbf{Y}}}_{k}$, denoted as $\overline{{\bf y}}_{k,r}$, is given by \vspace{-0.35cm} \begin{equation} \begin{split}\overline{{\bf y}}_{k,r} & =\mathbf{E}_{k}^{\mathrm{H}}(\mathbf{A}_{M,k}^{*}\bullet\mathbf{a}_{M}(\omega_{r},\mu_{r}))\alpha_{r}^{*}\boldsymbol{\beta}_{k}^{*}+\overline{{\bf n}}_{k,r}\triangleq\mathbf{E}_{k}^{\mathrm{H}}\mathbf{V}_{k,r}\alpha_{r}^{*}\boldsymbol{\beta}_{k}^{*}+\overline{{\bf n}}_{k,r}.\end{split} \label{eq:Yk_r} \end{equation} Here, $\mathbf{V}_{k,r}\triangleq\mathbf{A}_{M,k}^{*}\bullet\mathbf{a}_{M}(\omega_{r},\mu_{r})=[\mathbf{a}_{M}(\omega_{r}-\varphi_{k,1},\mu_{r}-\theta_{k,1}),...,\mathbf{a}_{M}(\omega_{r}-\varphi_{k,J_{k}},\mu_{r}-\theta_{k,J_{k}})]\in\mathbb{C}^{M\times J_{k}}$ and $\overline{{\bf n}}_{k,r}$ is the $r$-th column of $[\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}(\mathbf{N}_{k}+\sqrt{p}\Delta\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{h}_{k}\}\mathbf{E}_{k})]^{\mathrm{H}}$. We have already obtained an estimate of $\mathbf{V}_{k,r}$, denoted by $\widehat{\mathbf{V}}_{k,r}$, in the first coherence block. Specifically, for the typical user, i.e., user $1$, $\{(\omega_{r}-\varphi_{1,j},\mu_{r}-\theta_{1,j})\}_{j=1}^{J_{1}}$ are estimated from (\ref{eq:yl_sparse}) and (\ref{cascaded1_omega_mu}) in Stage I, while for the other users, $\{(\omega_{r}-\varphi_{k,j},\mu_{r}-\theta_{k,j})\}_{j=1}^{J_{k}}$ are estimated from (\ref{cascaded_omega_mu}) in Stage II.
\vspace{-0.07cm} The updated cascaded channel gain $\boldsymbol{\beta}_{k}^{*}\alpha_{r}^{*}$ in (\ref{eq:Yk_r}) can be found using the LS estimator \vspace{-0.27cm} \begin{equation} \widehat{\boldsymbol{\beta}_{k}^{*}\alpha_{r}^{*}}=(\mathbf{\widehat{V}}_{k,r}^{\mathrm{H}}\mathbf{E}_{k}\mathbf{E}_{k}^{\mathrm{H}}\widehat{\mathbf{V}}_{k,r})^{-1}\mathbf{\widehat{V}}_{k,r}^{\mathrm{H}}\mathbf{E}_{k}\overline{{\bf y}}_{k,r}.\label{eq:cascaded_gain_update} \end{equation} Then, following the same operation shown in (\ref{eq:G1_estimate}), and substituting (\ref{eq:cascaded_gain_update}) into (\ref{eq:Yk_r}), the estimate of user $k$'s cascaded channel during the remaining coherence blocks is given by \vspace{-0.4cm} \begin{equation} \begin{split}\widehat{\mathbf{G}}_{k}= & \mathbf{\widehat{A}}_{N}\mathbf{\hat{H}}_{\mathrm{RIS},k}^{\mathrm{H}}=\mathbf{\widehat{A}}_{N}[{\bf \hat{h}}_{\mathrm{RIS}k,1},\ldots,{\bf \hat{h}}_{\mathrm{RIS}k,L}]^{\mathrm{H}}=\mathbf{\widehat{A}}_{N}[\mathbf{\widehat{V}}_{k,1}\widehat{\boldsymbol{\beta}_{k}^{*}\alpha_{1}^{*}},\ldots,\mathbf{\widehat{V}}_{k,L}\widehat{\boldsymbol{\beta}_{k}^{*}\alpha_{L}^{*}}]^{\mathrm{H}},\end{split} \label{eq:Gk_stage_3} \end{equation} where ${\bf h}_{\mathrm{RIS}k,r}$ represents the $r$-th column of $\mathbf{H}_{\mathrm{RIS},k}$. For the pilot overhead analysis, we assume $J_{1}=J_{2}=\cdots=J_{K}=J$ as before. For the LS problem in (\ref{eq:Yk_r}), $\tau_{k}\geq J_{k}$ should hold for user $k$. Thus, the minimum number of pilot symbols can be chosen as $\tau_{k}=J_{k}$, which is less than that required in Stage II. Given $K$ users in total, the overall minimum pilot overhead is $JK$. On the other hand, the dominant complexity of the LS problem in (\ref{eq:Yk_r}) is $\mathcal{O}(\tau_{k}J^{2})$. Since obtaining the entire cascaded channel, i.e., $\mathbf{G}_{k}$, requires solving the LS problem $L$ times, the total computational complexity for user $k$ is $\mathcal{O}(\tau_{k}J^{2}L)$. Thus, the overall computational complexity in each remaining coherence block is $\mathcal{O}(\tau_{k}J^{2}LK)$.\vspace{-0.35cm} \section{Extension to Multi-antenna User Case \label{sec:Applying-the-Protocol}} \vspace{-0.15cm} In this section, we extend the full CSI estimation method in the first coherence block to the multi-antenna user case.\footnote{The re-estimation of channel gains in the remaining coherence blocks can be extended to the multi-antenna-user case in a straightforward way, and thus will not be explicitly considered.} First, the system model and corresponding two-phase channel estimation strategy are described. Then, we adopt an OMP-based method to estimate the AoDs at the users in Phase I. The remaining parameters, including the common AoAs at the BS, the cascaded AoDs at the RIS, and the cascaded gains, are estimated in Phase II, similarly to the methods developed for the single-antenna user case in Section \ref{sec:First_coherence}. Lastly, the required pilot overhead and computational complexity of the proposed method are analyzed. \vspace{-0.45cm} \subsection{Multi-antenna Users Model and Channel Estimation Strategy} \vspace{-0.15cm} \subsubsection{System Model} We assume that $K$ users are present, with a $Q_{k}=Q_{k1}\times Q_{k2}$ UPA deployed at user $k$, while the other settings are the same as in the single-antenna user case.
Then, $\mathbf{h}_{k}$ in (\ref{eq:H_h}) and (\ref{eq:hk}) can be modified as \vspace{-0.25cm} \begin{equation} \mathbf{H_{\mathit{k}}}=\sum_{j=1}^{J_{k}}\beta_{k,j}\mathbf{a}_{M}(\varphi_{k,j},\theta_{k,j})\mathbf{a}_{Q_{k}}^{\mathrm{H}}(\eta_{k,j},\chi_{k,j})=\mathbf{A}_{M,k}\mathbf{B}_{k}\mathbf{A}_{Q,k}^{\mathrm{H}}\in\mathbb{C}^{M\times Q_{k}},\forall k\in\mathcal{K},\label{eq:hk_q} \end{equation} where $(\eta_{k,j},\chi_{k,j})$ represents the AoD of the $j$-th path in the user $k$-RIS channel, and $\mathbf{A}_{Q,k}=[\mathbf{a}_{Q_{k}}(\eta_{k,1},\chi_{k,1}),\ldots,\mathbf{a}_{Q_{k}}(\eta_{k,J_{k}},\chi_{k,J_{k}})]\in\mathbb{C}^{Q_{k}\times J_{k}}$ and $\mathbf{B}_{k}=\mathrm{Diag}\{\beta_{k,1},\ldots,\beta_{k,J_{k}}\}\in\mathbb{C}^{J_{k}\times J_{k}}$ are the AoD steering matrix and complex gain matrix of user $k$, respectively. Other parameters are as defined in Section \ref{sec:Model-protocol}. With $\mathbf{H_{\mathit{k}}}$, the transmission model in (\ref{transmission}) becomes \vspace{-0.3cm} \begin{equation} \mathbf{y}_{k}(t)=\mathbf{H}\mathrm{Diag}\{\mathbf{e}_{t}\}\mathbf{H_{\mathit{k}}}\sqrt{p}\mathbf{s}_{k}(t)+\mathbf{n}_{k}(t),\label{eq:Qk_transmitter} \end{equation} where $\mathbf{s}_{k}(t)\in\mathbb{C}^{Q_{k}\times1}$ is the pilot vector for user $k$ in time slot $t$. Vectorizing (\ref{eq:Qk_transmitter}), we have \vspace{-0.3cm} \begin{equation} \mathbf{y}_{k}(t)=\sqrt{p}(\mathbf{s}_{k}^{\mathrm{T}}(t)\otimes\mathbf{I}_{N})\mathrm{vec}(\mathbf{H}\mathrm{Diag}\{\mathbf{e}_{t}\}\mathbf{H}_{k})+\mathbf{n}_{k}(t)\triangleq\sqrt{p}(\mathbf{s}_{k}^{\mathrm{T}}(t)\otimes\mathbf{I}_{N})\mathbf{G}_{k}\mathbf{e}_{t}+\mathbf{n}_{k}(t),\label{eq:Q-antenna} \end{equation} where $\mathbf{I}_{N}$ represents the $N\times N$ identity matrix, and $\mathbf{G}_{k}=\mathbf{H}_{k}^{\mathrm{T}}\diamond\mathbf{H}$ is the cascaded user-RIS-BS channel of user $k$ that is to be estimated. The above equality is also obtained via $\mathrm{vec}(\mathbf{\mathbf{A}\mathrm{Diag}\{b\}C})=(\mathbf{\mathbf{C}^{\mathrm{T}}\diamond A})\mathbf{b}$. Combining (\ref{eq:hk_q}) with (\ref{eq:H1-1}), $\mathbf{G}_{k}$ can be rewritten as \vspace{-0.15cm} \begin{align} \mathbf{G}_{k} & =(\mathbf{A}_{M,k}\mathbf{B}_{k}\mathbf{A}_{Q,k}^{\mathrm{H}})^{\mathrm{T}}\diamond(\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}})=(\mathbf{A}_{Q,k}^{\mathrm{*}}\otimes\mathbf{A}_{N})(\mathbf{B}_{k}^{\mathrm{T}}\otimes\boldsymbol{\Lambda})(\mathbf{A}_{M,k}^{\mathrm{T}}\diamond\mathbf{A}_{M}^{\mathrm{H}})\nonumber \\ & =(\mathbf{A}_{Q,k}^{\mathrm{*}}\otimes\mathbf{A}_{N})(\mathbf{B}_{k}\otimes\boldsymbol{\Lambda})(\mathbf{A}_{M,k}\bullet\mathbf{A}_{M}^{\mathrm{*}})^{\mathrm{T}},\label{eq:G_k-Q} \end{align} where the above equalities are obtained using $(\mathbf{A}\otimes\mathbf{B})(\mathbf{C}\diamond\mathbf{D})=(\mathbf{AC})\diamond(\mathbf{BD})$ and $\mathbf{A}^{\mathrm{T}}\diamond\mathbf{B}^{\mathrm{T}}=(\mathbf{A}\bullet\mathbf{B})^{\mathrm{T}}$ \cite{K-R_product,Xinda2017}. The third term $(\mathbf{A}_{M,k}\bullet\mathbf{A}_{M}^{\mathrm{*}})$ accounts for the cascaded AoDs at the RIS, similar to the single-antenna user case. \subsubsection{Channel Estimation Strategy} For the full-CSI estimation of any user $k$, a two-phase estimation strategy is adopted, where the AoDs at the users, i.e., $\mathbf{A}_{Q,k}$, is estimated in Phase I, after which the remaining parameters in (\ref{eq:G_k-Q}) are estimated in Phase II. 
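Before detailing the two phases, the factorization in (\ref{eq:G_k-Q}) can be verified numerically with random small matrices, as in the sketch below; the sizes and helper functions are illustrative assumptions, with the Khatri-Rao product $\diamond$ implemented column-wise and the product $\bullet$ row-wise.
\begin{verbatim}
import numpy as np

def kr_cols(X, Y):  # column-wise Khatri-Rao:  result[:, m] = kron(X[:, m], Y[:, m])
    return np.einsum('im,jm->ijm', X, Y).reshape(X.shape[0] * Y.shape[0], X.shape[1])

def kr_rows(X, Y):  # row-wise (face-splitting) product:  result[m, :] = kron(X[m, :], Y[m, :])
    return np.einsum('mi,mj->mij', X, Y).reshape(X.shape[0], X.shape[1] * Y.shape[1])

rng = np.random.default_rng(1)
c = lambda *s: rng.standard_normal(s) + 1j * rng.standard_normal(s)
M, N, Q, L, J = 4, 3, 2, 3, 2
A_N, A_M, Lam  = c(N, L), c(M, L), np.diag(c(L))
A_Mk, A_Qk, Bk = c(M, J), c(Q, J), np.diag(c(J))

H   = A_N @ Lam @ A_M.conj().T          # common RIS-BS channel
H_k = A_Mk @ Bk @ A_Qk.conj().T         # user k - RIS channel as in (eq:hk_q)
G_k = kr_cols(H_k.T, H)                 # cascaded channel H_k^T diamond H

rhs = np.kron(A_Qk.conj(), A_N) @ np.kron(Bk, Lam) @ kr_rows(A_Mk, A_M.conj()).T
assert np.allclose(G_k, rhs)            # matches the right-hand side of (eq:G_k-Q)
\end{verbatim}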
Specifically, in this strategy, $\Upsilon_{k}$ blocks of time slots are used for the channel estimation of user $k$, and the $i$-th block has $V_{k}^{(i)}$ time slots. The RIS phase shift vector remains invariant for each time slot within a given block, and is denoted by $\mathbf{e}^{(i)}$ for $\forall i\in\{1,2,...,\Upsilon_{k}\}$. Later we will see that Phase I only occurs in the first block, and that $V_{k}^{(i)}$ can be different for different users and/or different blocks. \vspace{-0.5cm} \subsection{Estimation in Phase I: Angle Estimation at Users} \vspace{-0.1cm} In this subsection, we describe the estimation of the AoDs at the users. During the first block, user $k$ transmits the pilot sequence $\mathbf{S}_{k}^{(1)}=\left[\mathbf{s}_{1}^{(1)},\ldots,\mathbf{s}_{V_{k}^{(1)}}^{(1)}\right]\in\mathbb{C}^{Q_{k}\times V_{k}^{(1)}}$, and the received signal matrix $\mathbf{Y}_{k}^{(1)}=\left[\mathbf{y}_{k}^{(1)}(1),\ldots,\mathbf{y}_{k}^{(1)}(V_{k}^{(1)})\right]\in\mathbb{C}^{N\times V_{k}^{(1)}}$ at the BS is given by \vspace{-0.25cm} \begin{equation} \mathbf{Y}_{k}^{(1)}=\sqrt{p}\mathbf{H}\mathrm{Diag}\{\mathbf{e}^{(1)}\}\mathbf{H}_{k}\mathbf{S}_{k}^{(1)}+\mathbf{N}_{k}^{(1)}=\sqrt{p}\mathbf{A}_{N}\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{e}^{(1)}\}\mathbf{A}_{M,k}\mathbf{B}_{k}\mathbf{A}_{Q,k}^{\mathrm{H}}\mathbf{S}_{k}^{(1)}+\mathbf{N}_{k}^{(1)}.\label{Y_matrix} \end{equation} $\mathbf{A}_{Q,k}$ can be directly obtained from (\ref{Y_matrix}). Specifically, for the estimation of $\mathbf{A}_{Q,k}$, an OMP-based method can be adopted, which takes the conjugate transpose of (\ref{Y_matrix}) and formulates it as a simultaneously sparse approximation problem \cite{CE_MIMO_RIS,Hybrid_beamforming}\vspace{-0.25cm} \begin{equation} (\mathbf{Y}_{k}^{(1)})^{\mathrm{H}}=(\mathbf{S}_{k}^{(1)})^{\mathrm{H}}\mathbf{A}_{Q,k}\mathbf{\Gamma}_{k}+(\mathbf{N}_{k}^{(1)})^{\mathrm{H}}\in\mathbb{C}^{V_{k}^{(1)}\times N},\label{AoD_UE} \end{equation} where $\mathbf{\Gamma}_{k}$ represents the remaining terms according to (\ref{Y_matrix}). Similar to equations (\ref{eq:sparse-formula}) and (\ref{eq:wk_sparse}), by using the VAD representation, (\ref{AoD_UE}) can be approximated as\vspace{-0.2cm} \begin{equation} (\mathbf{Y}_{k}^{(1)})^{\mathrm{H}}=(\mathbf{S}_{k}^{(1)})^{\mathrm{H}}(\mathbf{A}_{Q,1}\otimes\mathbf{A}_{Q,2})\widetilde{\boldsymbol{\Gamma}}_{k}+(\mathbf{N}_{k}^{(1)})^{\mathrm{H}},\label{AoD_VAD} \end{equation} where $\mathbf{A}_{Q,1}\in\mathbb{C}^{Q_{k1}\times D_{1}}$ and $\mathbf{A}_{Q,2}\in\mathbb{C}^{Q_{k2}\times D_{2}}$ are overcomplete dictionary matrices $(D_{1}\geq Q_{k1},D_{2}\geq Q_{k2})$ similar to (\ref{eq:sparse-formula}), and contain candidate values for $\mathbf{a}_{Q_{k1}}(\eta_{k,j})$ and $\mathbf{a}_{Q_{k2}}(\chi_{k,j})$. $\widetilde{\boldsymbol{\Gamma}}_{k}\in\mathbb{C}^{D_{1}D_{2}\times N}$ is a row-sparse matrix with $J_{k}$ non-zero rows. Similar to the single-antenna user case in Section \ref{sec:First_coherence}, the sparsity level $J_{k}$ of the sparse recovery problem associated with (\ref{AoD_VAD}) is obtained by OMP. Therefore, the AoDs at user $k$, i.e., $\{\eta_{k,j}\}_{j=1}^{J_{k}}$ and $\{\chi_{k,j}\}_{j=1}^{J_{k}}$, can be obtained similarly to (\ref{m_lsub}).
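One standard way to solve the row-sparse problem (\ref{AoD_VAD}) is simultaneous OMP (SOMP), which selects atoms by aggregating correlations over all $N$ measurement vectors; the sketch below is a hedged illustration of this idea rather than the exact routine of \cite{CE_MIMO_RIS}, and the inputs (the sensing matrix $(\mathbf{S}_{k}^{(1)})^{\mathrm{H}}(\mathbf{A}_{Q,1}\otimes\mathbf{A}_{Q,2})$, the data $(\mathbf{Y}_{k}^{(1)})^{\mathrm{H}}$, and the sparsity $J_{k}$) are assumed to be given.
\begin{verbatim}
import numpy as np

def somp(Psi, Yh, J):
    # Psi: V x D sensing matrix, Yh: V x N data matrix, J: number of nonzero rows
    residual, support = Yh.copy(), []
    for _ in range(J):
        corr = Psi.conj().T @ residual                        # D x N correlations
        idx = int(np.argmax(np.linalg.norm(corr, axis=1)))    # strongest row overall
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(Psi[:, support], Yh, rcond=None)
        residual = Yh - Psi[:, support] @ coeffs
    Gamma = np.zeros((Psi.shape[1], Yh.shape[1]), dtype=complex)
    Gamma[support, :] = coeffs
    return Gamma, support
\end{verbatim}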
Assume the $q$-th row of the sparse matrix $\widetilde{\mathbf{\boldsymbol{\mathbf{\Gamma}}}}_{k}$ is nonzero, then the corresponding indices in $\mathbf{A}_{Q,1}$ and $\mathbf{A}_{Q,2}$ in (\ref{AoD_VAD}), denoted by $q_{1}$ and $q_{2}$, are derived as\vspace{-0.1cm} \begin{equation} q_{1}=\left\lceil \frac{q}{D_{\mathrm{2}}}\right\rceil ,~q_{2}=q-D_{\mathrm{2}}(q_{1}-1).\label{ql_sub} \end{equation} \vspace{-0.45cm} \subsection{Estimation in Phase II: Estimation of Remaining Parameters} \vspace{-0.1cm} In this subsection, we estimate the remaining parameters in (\ref{eq:G_k-Q}) by converting the estimation problems into several equivalent problems as in the single-antenna user case, which can be solved using the methods in Section \ref{sec:First_coherence}. First, denote the typical user as user $1$ and stack the total $(\sum_{i=1}^{\Upsilon_{1}}V_{1}^{(i)})$ slots, the received signal matrix is obtained as $\mathbf{Y}_{1}=\left[\mathbf{Y}_{1}^{(1)},\ldots,\mathbf{Y}_{1}^{(\Upsilon_{1})}\right]\in\mathbb{C}^{N\times(\sum_{i=1}^{\Upsilon_{1}}V_{1}^{(i)})}$. Then, the common AoAs in (\ref{eq:G_k-Q}), i.e., $\mathbf{A}_{N}$, can be readily estimated via DFT-based method by calculating $\widetilde{\mathbf{U}}_{N}^{\mathrm{H}}\mathbf{Y}_{1}$ since Lemma \ref{lem:3} holds. With $\mathbf{\widehat{A}}_{N}$ and $\mathbf{\widehat{A}}_{Q,k}$ obtained in Phase I, considering the $i$-th time block and replacing $\mathbf{A}_{N}$ and $\mathbf{A}_{Q,k}$ with $\mathbf{\widehat{A}}_{N}+\Delta\mathbf{A}_{N}$ and $\mathbf{\widehat{A}}_{Q,k}+\Delta\mathbf{A}_{Q,k}$, respectively, $\mathbf{Y}_{k}^{(i)}\in\mathbb{C}^{N\times V_{k}^{(i)}}$ can be processed as \vspace{-0.2cm} \begin{align} \mathbf{\check{Y}}_{k}^{(i)} & \triangleq\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\mathbf{Y}_{k}^{(i)}(\mathbf{\widehat{A}}_{Q,k}^{\mathrm{H}}\mathbf{S}_{k}^{(i)})^{\dagger}\nonumber \\ & =\frac{1}{N\sqrt{p}}\mathbf{\widehat{A}}_{N}^{\mathrm{H}}\{(\mathbf{\widehat{A}}_{N}+\Delta\mathbf{A}_{N})\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{e}^{(i)}\}\mathbf{A}_{M,k}\mathbf{B}_{k}(\mathbf{\widehat{A}}_{Q,k}+\Delta\mathbf{A}_{Q,k})^{\mathrm{H}}\mathbf{S}_{k}^{(i)}+\mathbf{N}_{k}^{(i)}\}(\mathbf{\widehat{A}}_{Q,k}^{\mathrm{H}}\mathbf{S}_{k}^{(i)})^{\dagger}\nonumber \\ & =\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{e}^{(i)}\}\mathbf{A}_{M,k}\mathbf{B}_{k}+\mathbf{\check{N}}_{k}^{(i)}\in\mathbb{C}^{L\times J_{k}},\label{received_mtx_Q} \end{align} where $\Delta\mathbf{A}_{N}$ and $\Delta\mathbf{A}_{Q,k}$ stand for the estimation errors of $\mathbf{A}_{N}$ and $\mathbf{A}_{Q,k}$, respectively. $\mathbf{\check{N}}_{k}^{(i)}$ represents the remaining terms of the second equality. As discussed in Remark \ref{AoA_remark}, all users are allowed to estimate the common $\mathbf{A}_{N}$ jointly so as to acquire the MU diversity gains to alleviate the error propagation effects caused by $\Delta\mathbf{A}_{N}$. Accordingly, the input of Algorithm \ref{algorithm-1} is given by $\mathbf{Y}=[\mathbf{Y}_{1},\mathbf{Y}_{2},...,\mathbf{Y}_{K}]\in\mathbb{C}^{N\times(\sum_{k=1}^{K}\sum_{i=1}^{\Upsilon_{k}}V_{k}^{(i)})}$. In the following, we decompose the estimation of a multi-antenna user, i.e., user $k$, with a channel composed of $J_{k}$ scatterers, into the estimation of $J_{k}$ channels with a single path for a virtual single-antenna user, i.e., user $(k,j)$ for $j\in\{1,\ldots,J_{k}\}$. 
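The per-block pre-processing in (\ref{received_mtx_Q}) can be summarized in a couple of lines; the sketch below simply mirrors the algebra of (\ref{received_mtx_Q}) (which relies on the asymptotic property of Lemma \ref{lem:3}) and treats all inputs as already-available estimates and data, with illustrative names.
\begin{verbatim}
import numpy as np

def preprocess_block(Y_i, A_N_hat, A_Q_hat, S_i, p):
    # matched filter at the BS side, then a right pseudo-inverse that strips the
    # user-side pilots and AoD responses: (1/(N sqrt(p))) A_N_hat^H Y (A_Q_hat^H S)^dagger
    N = A_N_hat.shape[0]
    right = np.linalg.pinv(A_Q_hat.conj().T @ S_i)
    return (A_N_hat.conj().T @ Y_i @ right) / (N * np.sqrt(p))
\end{verbatim}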
The $j$-th column of $\mathbf{\check{Y}}_{k}^{(i)}$ is given by \vspace{-0.2cm} \begin{equation} [\mathbf{\check{Y}}_{k}^{(i)}]_{(:,j)}=\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{e}^{(i)}\}[\mathbf{A}_{M,k}]_{(:,j)}\beta_{k,j}+[\mathbf{\check{N}}_{k}^{(i)}]_{(:,j)}=\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{[\mathbf{A}_{M,k}]_{(:,j)}\beta_{k,j}\}\mathbf{e}^{(i)}+[\mathbf{\check{N}}_{k}^{(i)}]_{(:,j)}.\label{Yk_Q_block} \end{equation} Stacking $\Upsilon_{k}$ blocks of (\ref{Yk_Q_block}), we have \vspace{-0.15cm} \begin{align} \left[[\mathbf{\check{Y}}_{k}^{(1)}]_{(:,j)},...,[\mathbf{\check{Y}}_{k}^{(\Upsilon_{k})}]_{(:,j)}\right]= & \boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{[\mathbf{A}_{M,k}]_{(:,j)}\beta_{k,j}\}\mathbf{\widetilde{E}}_{k}+\left[[\mathbf{\check{N}}_{k}^{(1)}]_{(:,j)},...,[\mathbf{\check{N}}_{k}^{(\Upsilon_{k})}]_{(:,j)}\right]\nonumber \\ = & \boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{\widetilde{h}}_{\{k,j\}}\}\mathbf{\widetilde{E}}_{k}+\left[[\mathbf{\check{N}}_{k}^{(1)}]_{(:,j)},...,[\mathbf{\check{N}}_{k}^{(\Upsilon_{k})}]_{(:,j)}\right],\label{Q_collect} \end{align} where $\mathbf{\widetilde{E}}_{k}=\left[\mathbf{e}^{(1)},\ldots,\mathbf{e}^{(\Upsilon_{k})}\right]\in\mathbb{C}^{M\times\Upsilon_{k}}$. The term $\mathbf{\widetilde{h}}_{\{k,j\}}\triangleq[\mathbf{A}_{M,k}]_{(:,j)}\beta_{k,j}\in\mathbb{C}^{M\times1}$ is treated as the channel between the RIS and the virtual single-antenna user $(k,j)$, which only contains one scatterer. \subsubsection{Estimation for Typical User} This part is the extension of Section \ref{subsec:Stage-I:-Estimation} for the typical user, i.e., user $1$. Denote the transpose of (\ref{Q_collect}) for user $1$ as $\mathbf{\widetilde{Y}}_{\{1,j\}}\in\mathbb{C}^{\Upsilon_{1}\times L}$, which is given by\vspace{-0.15cm} \begin{equation} \mathbf{\widetilde{Y}}_{\{1,j\}}=\mathbf{\widetilde{E}}_{1}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{\widetilde{h}}_{\{1,j\}}^{*}\}\mathbf{A}_{M}\boldsymbol{\Lambda}^{*}+\mathbf{\widetilde{N}}_{\{1,j\}}.\label{Y1_formula_Q} \end{equation} We note that the channel estimation problem for (\ref{Y1_formula_Q}) has a form similar to that for (\ref{eq:Y1_formula}), and can be solved following the steps developed in Section \ref{subsec:Stage-I:-Estimation}. Thus the virtual single-antenna cascaded AoDs for user $(1,j)$, i.e., $\{(\omega_{l}-\varphi_{1,j})\}_{l=1}^{L}$ and $\{(\mu_{l}-\theta_{1,j})\}_{l=1}^{L}$, and the cascaded gains $\{\alpha_{l}\beta_{1,j}\}_{l=1}^{L}$ can be estimated. It is unnecessary for us to repeat the steps shown in Section \ref{subsec:Stage-I:-Estimation} $J_{1}$ times to solve the angle estimation problem connected with (\ref{Y1_formula_Q}). That is because we have obtained the rotation factors $(\Delta\omega_{l},\Delta\mu_{l})$ and gain scaling factor $\gamma_{l}$ defined in (\ref{rot_fac_omega_mu_scale}) after the estimation procedure for the first virtual single-antenna user, user $(1,1)$. This allows us to solve the sparse recovery problem corresponding to (\ref{eq:yl_sparse}) without performing additional operations for the channel estimation of the other virtual single-antenna users $(1,j)$ for $j\neq1$.\footnote{The virtual single-antenna users $(1,j)$ for $j\neq1$ can be treated as other users and the corresponding parameters can be estimated by the method shown later. 
However, the pilot overhead for virtual users $(1,j)$ for any $j$ should be the same, depending on the number of time blocks $\Upsilon_{1}$. So we still solve the problem corresponding to (\ref{eq:yl_sparse}).} In particular, for user $(1,j)$, the quantities $(\omega_{r}-\varphi_{1,j})$, $(\mu_{r}-\theta_{1,j})$ and $\alpha_{r}^{*}\beta_{1,j}^{*}$ can be obtained via the solution to (\ref{eq:yl_sparse}). Then, $\{(\omega_{l}-\varphi_{1,j})\}_{l\neq r}$, $\{(\mu_{l}-\theta_{1,j})\}_{l\neq r}$ and $\{\alpha_{l}^{*}\beta_{1,j}^{*}\}_{l\neq r}$ can be directly obtained with the known $(\Delta\omega_{l},\Delta\mu_{l})$ and $\gamma_{l}$ obtained in the estimation for user $(1,1)$. Based on this, the estimates of user $1$'s cascaded gains and cascaded AoDs at the RIS, i.e., $\alpha_{l}\beta_{1,j}$, $(\omega_{l}-\varphi_{1,j})$ and $(\mu_{l}-\theta_{1,j})$, for $\forall l\in\{1,...,L\}$ and $\forall j\in\{1,...,J_{1}\}$, are obtained, which allows us to determine $\mathbf{G}_{1}$ in (\ref{eq:G_k-Q}). \subsubsection{Estimation for Other Users} Following the idea of the virtual single-antenna user, we convert the channel estimation for the other multi-antenna users into the estimation of $\sum_{k=2}^{K}J_{k}$ single-scatterer channels of the corresponding virtual single-antenna users. The idea of constructing the common part as in Section \ref{subsec:Stage-II:-Estimation} still applies, using the common RIS-BS channel to reduce the pilot overhead. Specifically, after eliminating the effects of the common AoAs at the BS and the unique AoDs at the users estimated in Phase I, and following (\ref{eq:LT_Yk_common}), $[\mathbf{\check{Y}}_{k}^{(i)}]_{(:,j)}$ in (\ref{Yk_Q_block}) can be reformulated as \begin{equation} [\mathbf{\check{Y}}_{k}^{(i)}]_{(:,j)}=\boldsymbol{\Lambda}\mathbf{A}_{M}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{\widetilde{h}}_{\{k,j\}}\}\mathbf{e}^{(i)}+[\mathbf{\check{N}}_{k}^{(i)}]_{(:,j)}=\widetilde{\boldsymbol{\Lambda}}_{\mathrm{s}}\widetilde{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{\widetilde{h}}_{\mathrm{s},\{k,j\}}\}\mathbf{e}^{(i)}+[\mathbf{\check{N}}_{k}^{(i)}]_{(:,j)},\label{LT_Yk_Q} \end{equation} where $\widetilde{\boldsymbol{\Lambda}}_{\mathrm{s}}=\alpha_{r}\beta_{1,1}\mathrm{Diag}\{\gamma_{1}^{*},\gamma_{2}^{*},\ldots,\gamma_{L}^{*}\}$ and $\widetilde{\mathbf{A}}_{\mathrm{s}}=\mathrm{Diag}\{\mathbf{a}_{M}(\omega_{r}-\varphi_{1,1},\mu_{r}-\theta_{1,1})\}\mathbf{A}_{\Delta M}$ can be constructed using the estimated parameters of the virtual single-antenna user $(1,1)$.\footnote{When user $(1,1)$ is the typical user, it can be verified that $\omega_{\mathrm{s}}$ and $\mu_{\mathrm{s}}$ defined in (\ref{omega_mu_common}) are $(\omega_{r}-\varphi_{1,1})$ and $(\mu_{r}-\theta_{1,1})$, respectively, and $\frac{1}{J_{1}}\mathbf{1}_{J_{1}}^{\mathrm{T}}\boldsymbol{\beta}_{1}\alpha_{r}$ in (\ref{eq:gamma_common}) is $\alpha_{r}\beta_{1,1}$.} The matrix $\mathbf{A}_{\Delta M}$ can be determined by (\ref{eq:AoD_common_part}). Accordingly, $\mathbf{\widetilde{h}}_{\mathrm{s},\{k,j\}}=\frac{1}{\beta_{1,1}}\mathrm{Diag}\{\mathbf{a}_{M}(-\varphi_{1,1},-\theta_{1,1})\}\mathbf{\widetilde{h}}_{\{k,j\}}$ is the unique part of the cascaded channel for virtual single-antenna user $(k,j)$ that is to be estimated.
Stacking $\Upsilon_{k}$ time blocks of (\ref{LT_Yk_Q}) and vectorizing, we have \vspace{-0.15cm} \begin{equation} \begin{split}\mathbf{\widetilde{w}}_{\{k,j\}}\triangleq\mathrm{vec}(\left[[\mathbf{\check{Y}}_{k}^{(1)}]_{(:,j)},...,[\mathbf{\check{Y}}_{k}^{(\Upsilon_{k})}]_{(:,j)}\right])=(\mathbf{\widetilde{E}}_{k}^{\mathrm{T}}\diamond\widetilde{\boldsymbol{\Lambda}}_{\mathrm{s}}\widetilde{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}})\mathbf{\widetilde{h}}_{\mathrm{s},\{k,j\}}+\widetilde{\mathbf{n}}_{\{k,j\}}\in\mathbb{C}^{L\Upsilon_{k}\times1},\end{split} \label{eq:Wk_-Q} \end{equation} where $\widetilde{\mathbf{n}}_{\{k,j\}}$ is the corresponding equivalent noise vector for virtual user $(k,j)$. The last equality is obtained via $\mathrm{vec}(\widetilde{\boldsymbol{\Lambda}}_{\mathrm{s}}\widetilde{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}}\mathrm{Diag}\{\mathbf{\widetilde{h}}_{\mathrm{s},\{k,j\}}\}\mathbf{\widetilde{E}}_{k})=(\mathbf{\widetilde{E}}_{k}^{\mathrm{T}}\diamond\widetilde{\boldsymbol{\Lambda}}_{\mathrm{s}}\widetilde{\mathbf{A}}_{\mathrm{s}}^{\mathrm{H}})\mathbf{\widetilde{h}}_{\mathrm{s},\{k,j\}}$. Since the form of (\ref{eq:Wk_-Q}) is similar to (\ref{eq:Wk_}), $\mathbf{\widetilde{h}}_{\mathrm{s},\{k,j\}}$ can be estimated similarly to what was done for (\ref{eq:wk_sparse}).\vspace{-0.05cm} With the estimates of the multi-antenna user $k$'s cascaded gains and cascaded AoDs, i.e., $\alpha_{l}\beta_{k,j}$, $(\omega_{l}-\varphi_{k,j})$ and $(\mu_{l}-\theta_{k,j})$, for $\forall l\in\{1,...,L\}$, $\forall j\in\{1,...,J_{k}\}$, obtained by solving the problem connected with (\ref{eq:Wk_-Q}) $J_{k}$ times, $\mathbf{G}_{k}$ in (\ref{eq:G_k-Q}) can be determined for $\forall k\in\{2,3,...,K\}$. \vspace{-0.5cm} \subsection{Pilot Overhead Analysis} \vspace{-0.25cm} In this subsection, we analyze the pilot overhead of the full CSI estimation algorithm for the multi-antenna user case, assuming $J_{1}=J_{2}=\cdots=J_{K}=J$ and $Q_{1}=Q_{2}=\cdots=Q_{K}=Q$. Similar to the analysis in Section \ref{subsec:Pilot-Overhead}, for user $1$, the number of pilots in Phase I should satisfy $V_{1}^{(1)}\geqslant\mathcal{O}(J_{1}\log(D_{1}D_{2}))\geqslant\mathcal{O}(J_{1}\log(Q_{11}Q_{12}))=\mathcal{O}(J_{1}\log(Q_{1}))$ so as to ensure reliable recovery for the $J_{1}$-sparse problem associated with (\ref{AoD_VAD}). In Phase II, the number of slots within each block, $V_{1}^{(i)}$, should satisfy $V_{1}^{(i)}\geqslant J_{1}$; otherwise the right inverse $(\mathbf{A}_{Q,1}^{\mathrm{H}}\mathbf{S}_{1}^{(i)})^{\dagger}$ does not exist. On the other hand, the number of blocks, $\Upsilon_{1}$, is determined by the sparse recovery applied to (\ref{Y1_formula_Q}). The angle estimation associated with (\ref{Y1_formula_Q}) can be formulated as a $1$-sparse recovery problem, and thus we have $\Upsilon_{1}\geq\mathcal{O}(\log(M))$. As shown before, the $J_{1}$ virtual single-antenna users share the same blocks and can be processed simultaneously. In addition, the first block is also used for Phase II. Hence, the total number of pilot symbols required for user $1$ should satisfy $\tau_{1}=\sum_{i=1}^{\Upsilon_{1}}V_{1}^{(i)}=V_{1}^{(1)}+\sum_{i=2}^{\Upsilon_{1}}V_{1}^{(i)}\geq\mathcal{O}(J_{1}\log(Q_{1}))+(\mathcal{O}(\log(M))-1)J_{1}$. For the other users $2\leq k\leq K$, we have the inequalities $V_{k}^{(1)}\geqslant\mathcal{O}(J_{k}\log(Q_{k}))$ and $V_{k}^{(i)}\geqslant J_{k}$, for the same reasons as for user $1$.
As before, the angle estimation problem connected with (\ref{eq:Wk_-Q}) can be treated as a $1$-sparse recovery problem, and $J_{k}$ virtual single-antenna users simultaneously share the same blocks, where the number of time blocks for user $k$ satisfies $\Upsilon_{k}\geq\mathcal{O}(\log(M)/L)$. Therefore, the total number of pilot symbols required for user $k$ should satisfy{} $\tau_{k}=V_{k}^{(1)}+\sum_{i=2}^{\Upsilon_{k}}V_{k}^{(i)}\geq\mathcal{O}(J_{k}\log(Q_{k}))+(\mathcal{O}(\log(M)/L)-1)J_{k}$. Finally, the overall pilot overhead for the multi-antenna users is given by $\mathcal{O}(JK\log(Q)+J\log(M)+(K-1)J\log(M)/L)-JK$. Table \ref{Pilot_Table} summarizes the total number of pilots of the proposed method and other existing algorithms for full-CSI estimation. It is observed that the proposed method achieves a significant reduction in the pilot overhead for both the single-antenna and multi-antenna user cases, owing to the exploitation of the correlation among different users. \begin{table} \caption{Total Number of Pilots of Various Methods} \begin{centering} \begin{tabular}{lll} \toprule {\footnotesize{}Case } & {\footnotesize{}Methods } & {\footnotesize{}Pilot Overhead}\tabularnewline \midrule {\footnotesize{}Single-antenna User } & {\footnotesize{}Proposed Full-CSI Estimation} & {\footnotesize{}$\mathcal{O}(J\log(M)+(K-1)J\log(M)/L)$}\tabularnewline \midrule {\footnotesize{}Single-antenna User } & {\footnotesize{}Direct-OMP \cite{ris-omp-1}} & {\footnotesize{}$\mathcal{O}(JLK\log(MN)/N)$}\tabularnewline \midrule {\footnotesize{}Single-antenna User} & {\footnotesize{}DS-OMP \cite{ris-omp-3} } & {\footnotesize{}$\mathcal{O}(JK\log(M))$}\tabularnewline \midrule {\footnotesize{}Single-antenna User } & {\footnotesize{}Row-Structure OMP\cite{ris-omp-2} } & {\footnotesize{}$\mathcal{O}(JK\log(M))$}\tabularnewline \midrule {\footnotesize{}Multi-antenna User } & {\footnotesize{}Extension of the proposed method } & {\footnotesize{}$\mathcal{O}(JK\log(Q)+J\log(M)+(K-1)J\log(M)/L)-JK$}\tabularnewline \midrule {\footnotesize{}Multi-antenna User } & {\footnotesize{}CS-EST OMP \cite{CE_MIMO_RIS} } & {\footnotesize{}$\mathcal{O}(JK\log(Q)+JLK\log(MJL)/N)$}\tabularnewline \bottomrule \end{tabular} \par\end{centering} \label{Pilot_Table} \end{table} \vspace{-0.55cm} \section{Simulation Results\label{sec:Simulation-Results}} \vspace{-0.15cm} In this section, simulation results are provided to evaluate the performance of the proposed three-stage channel estimation protocol for both the single-antenna user case and multi-antenna user case. We assume that channel gains $\alpha_{l}$ and $\beta_{k,j}$ follow a complex Gaussian distribution with zero mean and variance of $10^{-3}d_{\mathrm{BR}}^{-2.2}$ and $10^{-3}d_{\mathrm{RU}}^{-2.8}$, respectively. Here, $d_{\mathrm{BR}}$ is defined as the distance between the BS and the RIS, while, $d_{\mathrm{RU}}$ is defined as the distance between the RIS and the users. The antenna spacing at the BS and the element spacing at the RIS are assumed to satisfy $d_{\mathrm{BS}}=d_{\mathrm{RIS}}=\frac{\lambda_{c}}{2}$. The initial RIS phase shift training matrix $\mathbf{E}$ is chosen as the random Bernoulli matrix, i.e., the elements are selected from $\{-1,+1\}$ with equal probability \cite{ris-omp-3}. The transmitted power is set to $p=1$ W. It is assumed that the propagation angles change every ten channel coherence blocks, while the gains change for each coherence block. 
Unless otherwise specified, for the single-antenna user case, the dimensions of the UPAs deployed at the BS and the RIS are $N_{1}=N_{2}=10$ and $M_{1}=M_{2}=10$, respectively. $d_{\mathrm{BR}}$ and $d_{\mathrm{RU}}$ are set to $10$ m and $100$ m \cite{ris-omp-3}, respectively. The number of users is set to $K=4$. The number of scatterers between the BS and the RIS, and that between the RIS and the users, are set to $L=5$ and $J_{1}=\cdots=J_{K}=4$, respectively. For the multi-antenna user case, the corresponding parameter settings are $N_{1}=N_{2}=8$, $M_{1}=M_{2}=8$, $d_{\mathrm{BR}}=80$ m, $d_{\mathrm{RU}}=40$ m, $L=3$ and $J_{1}=\cdots=J_{K}=2$. In addition, we set the number of users to $K=6$ and all the users adopt $36$-antenna UPAs with $6$ rows and $6$ columns, i.e., $Q_{k1}=Q_{k2}=6$ for $\forall k\in\mathcal{K}$. The antenna spacing at the user equipment still satisfies $d_{\mathrm{UE}}=\frac{\lambda_{c}}{2}$. The normalized mean square error (NMSE) is chosen as the main metric for evaluating the estimation performance, which is defined by $\mathrm{NMSE}=\mathbb{E}\{(\sum_{k=1}^{K}||\widehat{\mathbf{G}}_{k}-\mathbf{G}_{k}||_{F}^{2})/(\sum_{k=1}^{K}||\mathbf{G}_{k}||_{F}^{2})\}.$ We compare the proposed three-stage channel estimation protocol with the following channel estimation methods, in which Direct-OMP \cite{ris-omp-1} and DS-OMP \cite{ris-omp-3} were developed for the single-antenna user case, while CS-EST OMP \cite{CE_MIMO_RIS} was developed for the multi-antenna user case.\vspace{-0.1cm} \begin{itemize} \item Direct-OMP \cite{ris-omp-1}: By directly formulating the VAD representation of the cascaded channel as a sparse recovery problem using the vectorization operation, the authors in \cite{ris-omp-1} used OMP to reconstruct the channels. We extend this method to a UPA-type BS in our simulation. \item DS-OMP \cite{ris-omp-3}: By exploiting the common row-block sparsity and common column-block sparsity of the cascaded channel to formulate a sparse recovery problem, the authors in \cite{ris-omp-3} adopted OMP to reconstruct the channels. \item CS-EST OMP \cite{CE_MIMO_RIS}: The authors of \cite{CE_MIMO_RIS} proposed an OMP-based three-stage channel estimation method for the ULA-type MIMO case, which estimates the AoDs at the users in Stage I, the AoAs at the BS in Stage II, and the cascaded channel gains in Stage III. We extend the method in \cite{CE_MIMO_RIS} to the UPA-type MIMO case and regard it as the benchmark. \item Proposed full-CSI: During the first coherence block, full CSI for all users is estimated using Algorithm \ref{algorithm-2} in Stage I and Algorithm \ref{algorithm-3} in Stage II, assuming a UPA-type RIS and a UPA-type BS. OMP is adopted to solve the sparse recovery problems in these two stages. \item Oracle full-CSI: This method is treated as the performance upper bound of the Proposed full-CSI method, assuming that perfect angle information is known at the BS, i.e., perfect knowledge of the support of the sparse recovery problems. In this case, the channels are estimated using the LS estimator in Stage I and Stage II. \item Proposed gains-only: During the remaining coherence blocks, only the gains are updated using the LS method shown in Section \ref{sec:Remaining_Coherence} for Stage III. Here, the angle information is taken from the estimates obtained by the proposed full-CSI method with an average pilot overhead of $T=15$ (the number of pilots for the typical user, i.e., $\tau_{1}$, is set to $36$ in Stage I, while that for the other users, i.e., $\tau_{k}$ for $2\leq k\leq K$, is set to $8$ in Stage II).
\item Oracle gains-only: This method is regarded as the performance upper bound of the Proposed gains-only method during the remaining coherence blocks, and assumes that the BS perfectly knows the angle information when using the LS estimator. \end{itemize} \subsection{Single-antenna User Case} In this subsection, the following four figures compare the performance of the different estimation methods for the single-antenna user case. In particular, due to the different numbers of pilots allocated to the typical user and the other users in the first coherence block for the Proposed full-CSI method, we consider the users' average pilot overhead as the measure of pilots, denoted as $T$. To reduce the error propagation,\footnote{As shown in Section \ref{sec:First_coherence}, the estimation error of the typical user in Stage I leads to unavoidable error propagation for the estimation of the other users in Stage II.} we allocate more pilots to the typical user and fewer pilots to the other users. Specifically, in Fig. \ref{bs_antenna}, Fig. \ref{pathL}, and Fig. \ref{opt_train}, $36$ pilots and $8$ pilots are allocated to the typical user and the other users, respectively, and thus the average number of pilots for the proposed method is given by $T=15$. Fig. \ref{pilot} illustrates the relationship between the NMSE performance and the pilot overhead of the various methods, where the SNR is set to $0$ dB. Recall that the pilot overhead of the typical user is increased mainly to limit error propagation. It can be clearly seen that an increase in the number of pilots improves the performance of all algorithms. In order to achieve the same estimation performance, e.g., $\mathrm{NMSE}=10^{-2}$, the required average pilot overhead of the Proposed full-CSI method is much lower than that of the methods in \cite{ris-omp-1,ris-omp-3} during the first coherence block. On the other hand, during the remaining coherence blocks, we note that the Proposed gains-only method only needs $T=12$ pilots to achieve the same performance as the Direct-OMP and DS-OMP methods with $T=26$. Additionally, it is observed that the Proposed gains-only method performs essentially the same as its upper bound, i.e., the Oracle gains-only method, which implies that the Proposed full-CSI method with the average pilot overhead $T=15$ during the first coherence block can provide accurate angle estimation information for the Proposed gains-only method to estimate the updated channel gains during the remaining coherence blocks. \vspace{-0cm} \begin{figure} \begin{minipage}[t]{0.5\textwidth} \begin{center} \includegraphics[width=1\columnwidth]{PILOT_NEW8_10} \par\end{center} \caption{NMSEs vs. Average pilot overhead $T$ of each user with SNR = $0$ dB.} \label{pilot} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \begin{center} \includegraphics[width=1\columnwidth]{antenna_NEW8_11} \par\end{center} \caption{NMSEs vs. Number of antennas at the BS side: $N=N_{1}\times N_{2}$, $N_{1}=N_{2}$.} \label{bs_antenna} \end{minipage} \end{figure} \vspace{-0cm} Fig. \ref{bs_antenna} depicts the NMSE performance as a function of the number of antennas at the BS, where we set the SNR to $0$ dB and assume $N_{1}=N_{2}$. It can be observed that as the number of antennas at the BS increases, the estimation accuracy of the Proposed full-CSI method with fewer average pilots, $T=15$ ($36$ pilots allocated to the typical user and $8$ pilots allocated to the other users), improves significantly and achieves nearly the same performance as the Oracle full-CSI method when $N$ is larger than $144$ $(12\times12)$.
This is because the Proposed full-CSI method must first estimate the number of scatterers in the RIS-BS link from the received signal. The estimation accuracy of this step is determined by the asymptotic property shown in Lemma \ref{lem:3} and the resolution of the rotation matrices defined in (\ref{rot_mat}). The asymptotic property in Lemma \ref{lem:3} requires that both $N_{1}$ and $N_{2}$ be sufficiently large. In addition, we observe that the gap between the Proposed gains-only method and the Oracle gains-only method is large when $N=36$ $(6\times6)$. This behavior illustrates that with a small-scale antenna array, the Proposed full-CSI method provides inaccurate angle estimation information for the estimation of the gains during the remaining coherence blocks, which further degrades the estimation accuracy of the Proposed gains-only method. Fortunately, as the number of antennas increases, the gap becomes marginal, which means that with a large-scale antenna array the angle information is estimated accurately in the first coherence block. \begin{figure} \begin{minipage}[t]{0.5\textwidth} \begin{center} \includegraphics[width=1\columnwidth]{pathL_NEW8_2} \par\end{center} \caption{NMSEs vs. Number of scatterers in the RIS-BS link.} \label{pathL} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \begin{center} \includegraphics[width=1\columnwidth]{Opt_new_8_2} \par\end{center} \caption{Performance of Optimized vs. Non-Optimized RIS phase shift training matrices.} \label{opt_train} \end{minipage} \end{figure} \vspace{-0cm} Fig. \ref{pathL} illustrates the NMSE performance of the algorithms with different pilot overheads versus the number of scatterers in the RIS-BS link, where the SNR is set to $0$ dB. As shown in Fig. \ref{pathL}, the estimation accuracy decreases as the number of scatterers increases. The reasons for this behavior can be summarized as follows. First, the number of unknown parameters (angles and gains) to be estimated increases, and thus the OMP-based estimation performs worse for the same pilot overhead. Second, since the number of scatterers is unknown in our proposed UPA-type full-CSI method, the estimation accuracy of the Proposed full-CSI method is relatively more sensitive to an increase in the number of scatterers than the other methods, which further deteriorates the performance of the Proposed gains-only method in the remaining coherence blocks. By contrast, the NMSEs of the DS-OMP method and the Direct-OMP method with $T=26$ pilots increase only moderately with the number of scatterers, since the parameters, including the numbers of scatterers in the RIS-BS link and the user-RIS link, are known by the BS for these two methods. Fig. \ref{opt_train} illustrates whether the optimization of the RIS phase shift training matrix $\mathbf{E}$ provides a significant benefit for the estimation performance. ``Type I RIS Pattern'' refers to choosing the training matrix as the random Bernoulli matrix, i.e., generating the initial training matrix with elements from $\{-1,+1\}$ with equal probability \cite{ris-omp-3}. ``Type II RIS Pattern'' refers to generating the initial training matrix with elements $[\mathbf{e}_{t}]_{m}=\exp\left(\mathrm{i}\angle(a+\mathrm{i}b)\right)$, where $a$ and $b$ are independent and identically distributed according to the uniform distribution $\mathcal{U}(0,1)$. It is observed that the performance of the Type I training matrix is essentially the same as that of the optimized training matrix, and far outperforms that of the Type II training matrix.
This behavior can be explained by examining the mutual coherence of the equivalent sensing matrices for the problems associated with (\ref{eq:sparse-formula}) and (\ref{eq:wk_sparse}). For a given matrix $\mathbf{D}$, the maximal coherence of $\mathbf{D}$, denoted as $\mu(\mathbf{D})$, is defined as \begin{equation} \mu(\mathbf{D})=\max_{i\neq j}\frac{|\mathbf{D}_{(:,i)}^{\mathrm{H}}\mathbf{D}_{(:,j)}|}{||\mathbf{D}_{(:,i)}||||\mathbf{D}_{(:,j)}||}, \end{equation} which is the largest normalized absolute inner product between any two distinct columns of $\mathbf{D}$. According to compressive sensing theory \cite{CS-Overview_Yonina}, a sensing matrix with smaller $\mu(\mathbf{D})$ provides better recovery performance for sparse vectors. The random Bernoulli matrix, a typical sensing matrix with low correlation among its columns that also satisfies the constant modulus constraint, is chosen as the Type I training matrix. Furthermore, numerical results validate that the maximal coherence of the sensing matrices generated by the Type I training matrix is significantly lower than that of the matrices generated by the Type II training matrix, and nearly the same as that of the matrices generated by the optimized training matrix. Since optimization of the training matrix requires extra computational complexity, this result suggests that \textquotedbl Type I RIS Pattern\textquotedbl{} be chosen for the RIS phase shift training matrix; a short numerical sketch of this coherence comparison is given below. \subsection{Multi-antenna User Case} In this subsection, the NMSE and the weighted sum rate (WSR) for the multi-antenna user case are shown in Fig. \ref{multi-user SNR} and Fig. \ref{rate}, respectively, for the different estimation methods. The users' average pilot overhead is considered for the proposed method in the multi-antenna user case, similar to the single-antenna user case. Specifically, for estimating the AoDs at the users, we allocate $10$ slots to all the users, including the typical user and the other users, in Phase I, i.e., $V_{k}^{(1)}=10$ for all $k\in\mathcal{K}$. In Phase II, an additional $3$ blocks of time slots are allocated to the typical user, and each block has $4$ slots, i.e., $V_{1}^{(2)}=V_{1}^{(3)}=V_{1}^{(4)}=4$. Therefore, the pilot overhead allocated to the typical user and the other users is $22$ and $10$, respectively. The average pilot overhead for the proposed method is given by $T=12$. In addition, for fairness, CS-EST OMP consumes the same number of slots for the estimation of the AoDs at the users. Fig. \ref{multi-user SNR} displays the NMSE performance of the different methods versus the SNR. It is observed that the gap between the Proposed full-CSI method and its upper bound, i.e., the Oracle full-CSI method, becomes smaller as the SNR increases. In particular, when the SNR is larger than $5$ dB, the NMSE of the proposed method with $T=12$ is lower than that of the CS-EST OMP method with $T=28$, and follows the same trend as that of the Oracle full-CSI method, i.e., the NMSEs decrease linearly with the SNR. This behavior implies that the angle information can be obtained accurately by the Proposed full-CSI method in the high SNR region. In this case, the NMSE differences between the proposed method and its upper bound mainly result from the estimation errors of the channel gain information. By contrast, the NMSE of the CS-EST OMP method still exhibits a performance bottleneck in the high SNR region, even with up to $28$ pilots per user.
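Returning briefly to the coherence comparison of the two RIS training patterns discussed in the previous subsection, the following minimal Python sketch illustrates the effect numerically. It is only an illustration under simplifying assumptions: it compares $\mu(\cdot)$ of the raw training matrices themselves rather than of the full equivalent sensing matrices of (\ref{eq:sparse-formula}) and (\ref{eq:wk_sparse}) (which also involve the angular dictionaries), the matrix dimensions are arbitrary, and the function and variable names are ours.
\begin{verbatim}
import numpy as np

def mutual_coherence(D: np.ndarray) -> float:
    """Largest normalized |inner product| between distinct columns of D."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)  # unit-norm columns
    G = np.abs(Dn.conj().T @ Dn)                       # |Gram matrix| entries
    np.fill_diagonal(G, 0.0)                           # drop the i = j terms
    return float(G.max())

rng = np.random.default_rng(0)
T, M = 64, 256                                         # illustrative sizes only

# "Type I": random Bernoulli +/-1 entries (constant modulus)
E1 = rng.choice([-1.0, 1.0], size=(T, M))
# "Type II": unit-modulus entries exp(i*angle(a+ib)), a, b ~ U(0,1)
a, b = rng.uniform(size=(T, M)), rng.uniform(size=(T, M))
E2 = np.exp(1j * np.angle(a + 1j * b))

print("Type I :", mutual_coherence(E1))
print("Type II:", mutual_coherence(E2))
\end{verbatim}
In such runs the Bernoulli (Type I) matrix typically yields a markedly smaller coherence than the Type II construction, whose phases are confined to $[0,\pi/2]$ and therefore produce strongly correlated columns; this is consistent with the NMSE gap observed in Fig. \ref{opt_train}.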
\begin{figure} \begin{minipage}[t]{0.5\textwidth} \begin{center} \includegraphics[width=1\columnwidth]{SNR_smallarray_MIMO} \par\end{center} \caption{NMSEs vs. SNR.} \label{multi-user SNR} \end{minipage} \begin{minipage}[t]{0.5\textwidth} \begin{center} \includegraphics[width=1\columnwidth]{RATE_SNR_New} \par\end{center} \caption{WSR vs. SNR.} \label{rate} \end{minipage} \end{figure} \vspace{-0cm} Fig. \ref{rate} shows the WSR performance of the MU MIMO system based on the channels estimated using the different algorithms. The weighting factors, the maximum BS power, and the number of data streams are set to $\varpi_{k}=1$ for all $k\in\mathcal{K}$, $P_{\max}=1$ W, and $d=16$, respectively. The details of the WSR calculation are given in Appendix D. In Fig. \ref{rate}, the case with perfect CSI is adopted as the upper bound of the Proposed full-CSI and CS-EST OMP methods. As can be observed, the WSR achieved by the proposed method with $T=12$ pilots is always larger than that achieved by the CS-EST OMP method with the same pilot overhead of $T=12$. When SNR = $5$ dB, the proposed method outperforms all three CS-EST OMP baselines. To achieve the same WSR, the pilot overhead required by the proposed method is less than half that of the CS-EST OMP method. As the SNR increases further, the gap between the proposed method and the upper bound gradually becomes smaller, which implies that the extension of the proposed full-CSI method to the multi-antenna user case can achieve high estimation accuracy. \section{Conclusions\label{sec:Conclusions}} \vspace{-0.15cm} In this paper, we adopted a novel three-stage uplink channel estimation protocol that leads to a significant reduction in the number of pilots for a UPA-type RIS-aided mmWave system with a UPA-type BS. The proposed estimation methods were developed starting from the single-antenna user case, and were shown to fully exploit the correlation among the channels of different users. To reduce the power leakage problem during the common AoA estimation in Stage I, a low-complexity 1-D search method was developed. Then we extended the protocol to the UPA-type multi-antenna user case. An OMP-based method was proposed for the estimation of the AoDs at the users. Numerical results showed that choosing the RIS training matrix as the random Bernoulli matrix yields near-optimal performance. Simulation results validated that the proposed methods outperform other existing algorithms in terms of pilot overhead. In addition, the proposed algorithms approach the genie-aided upper bound in the high SNR regime.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction} The 17 crystal groups of the Euclidean plane $\mathbb{E}^2$ have long been known (as an intuitive discovery of medieval Islamic art, e.g.\ in the artistic mosaics of the Alhambra in Granada, Spain). B.~N. Delone (Delaunay) described the 46 types of their fundamental domains only in 1959, see \cite{Delone}. H. Poincar\'{e} had already attempted in 1882 to describe the analogous plane groups of the Bolyai--Lobachevsky hyperbolic plane $\mathbb{H}^2$. A significant result of A.~M. Macbeath was the description of the algebraic-combinatorial classification of non-Euclidean plane crystallographic groups with compact quotient space by their signature, see \cite{Macbeath} and \cite{molnar2019}. \\ In this paper, we would like to determine the best circle inscribed in the fundamental domain of a given discontinuous group in the hyperbolic plane $\mathbb{H}^{2}$. This problem was raised by Prof.\ Emil Moln\'{a}r in \cite{molnar2019} on the basis of \cite{luvcic1991}. Fundamental domains of planar discontinuous groups and uniform tilings were studied by Lu\v{c}i\'{c} and Moln\'{a}r in \cite{luvcic1991, luvcic1990combinatorial}. The algorithm for the classification of fundamental polygons for a given discontinuous group was also presented by Lu\v{c}i\'{c}, Moln\'{a}r and Vasiljevi\'{c} in \cite{luvcic2018}. We are interested in the following theorem, see \cite{luvcic1991}. \begin{theorem}[Lu\v{c}i\'{c}--Moln\'{a}r]\label{ExistenceInball} Among all convex polygons in $\mathbb{E}^2$, $\mathbb{S}^2$, and $\mathbb{H}^2$ with given angles $\alpha_1, \alpha_2, \cdots, \alpha_m$, $m \geq 3$, there exists, up to a similarity (for $\mathbb{E}^2$) and up to an isometry (for $\mathbb{S}^2$ and $\mathbb{H}^2$), respecting the order of the angles, exactly one circumscribing a circle. \end{theorem} \noindent This theorem guarantees the existence of the inscribed circle in a fundamental domain with given angles. We shall determine the best circle, that is, the inscribed circle with the largest radius, for the fundamental domains determined by a discontinuous group in $\mathbb{H}^2$.\\ In this first section we study a typical case, the hyperbolic plane group $G=[3,3,3,3]$ with 4 rotation centers of order 3 in $\mathbb{H}^2$, its fundamental domains and their representation by tree graphs, see Fig.\ref{TreeGraph}. Basically, these tree graphs are topological images of the fundamental domain under the canonical projection mapping $\kappa:\mathbb{H}^2 \longrightarrow \mathbb{H}^2 / G$,~~ $X \longmapsto \bar{X} := X^{G}$; simply put, $\kappa$ identifies all points which form the same orbit under this group. Conversely, to obtain the topological fundamental domain, we imagine cutting the sphere along such a tree graph with a scissors and opening it up (unfolding it) along the cut to construct the pre-image fundamental domain. In section \ref{sectionLagrange} we consider the constrained optimum problem and apply the Lagrange multiplier method to find the solution. We present sufficient conditions for the local maximum points through a second-derivative test, the so-called bordered Hessian criterion. In section \ref{Other_Possibility} we use the optimality condition of section \ref{sectionLagrange} to determine the optimum incircle radius for the other fundamental domain types of $G$. Moreover, we also describe the optimum condition geometrically in section \ref{Conclussion_G4}.\\ In section \ref{Generalization}, we extend the method to the more general group $G=[3,3,3,\cdots,3]$ with $l$ rotational centers, where $l \geq 4$.
All types of fundamental domains are characterized combinatorially by a Diophantine equation system. Based on these constructions, we will show the global optimum of the inscribed circle radius over all fundamental domain types of $G$. We also provide an important fact on the area of the fundamental domain for all types. Now, as a motivation, we begin by recalling the proof of Theorem \ref{ExistenceInball} in the hyperbolic plane $\mathbb{H}^2$. \\ \noindent \textbf{\textit{Proof of Theorem \ref{ExistenceInball} (for the hyperbolic case)}}\\ Let $p$ be a polygon with given angles $\alpha_1, \alpha_2, \cdots, \alpha_m \in (0, \frac{\pi}{2})$ at the vertices $A_1,A_2, \cdots, A_m$, circumscribed about a circle $k(X,x)$. Let $B_1, B_2, \cdots, B_m$ be the points of tangency of $p$ and $k$, such that the angles $B_m X B_1$, $B_1XB_2$, $\cdots$, $B_{m-1}XB_m$ are equal to $\beta_1, \beta_2, \cdots, \beta_m$. Then $\beta_1 + \beta_2 + \cdots + \beta_m=2\pi$.\\ By applying trigonometry to the right-angled central triangles $XA_i B_i$ we obtain the formula (in $\mathbb{H}^2$) \begin{equation} \label{key1} \cos{\left( \frac{\alpha_i}{2}\right)}=\cosh{x} \sin{\left( \frac{\beta_i}{2} \right)}. \end{equation} Therefore \begin{equation*} \frac{\cos{\left( \frac{\alpha_1}{2}\right)}}{\sin{\left( \frac{\beta_1}{2} \right)}}= \frac{\cos{\left( \frac{\alpha_2}{2}\right)}}{\sin{\left( \frac{\beta_2}{2} \right)}} =\cdots= \frac{\cos{\left( \frac{\alpha_m}{2}\right)}}{\sin{\left( \frac{\beta_m}{2} \right)}}=\cosh{x}, \end{equation*} where a factor $\cosh{x}>1$ is necessary for $p$ in $\mathbb{H}^2$.\\ The existence of $x$, and of $\beta_i$ such that $\sum_{i}\beta_i=2\pi$, can be shown as follows.\\ Consider $\displaystyle{\cos{\left( \frac{\alpha_1}{2}\right)}, \cos{\left( \frac{\alpha_2}{2}\right)}, \cdots, \cos{\left( \frac{\alpha_m}{2}\right)}}$.\\ From (\ref{key1}), we have $\beta_i=2 \sin^{-1}\left(\frac{\cos{\left( \frac{\alpha_i}{2} \right)}}{\cosh{x}} \right)$.\\ Now, consider the following continuous function \begin{equation} S(x)=\left(\sum_{i=1}^{m} 2 \sin^{-1}\left(\frac{\cos{\left( \frac{\alpha_i}{2} \right)}}{\cosh{x}} \right) \right)-2\pi, ~~~~x \in (0, \infty). \end{equation} \begin{align*} S(0)&=\left(\sum_{i=1}^{m} 2 \sin^{-1}\left(\cos{\left( \frac{\alpha_i}{2} \right)} \right) \right)-2\pi =(m-2)\pi-(\alpha_1 + \alpha_2 + \cdots + \alpha_m)>0 \\ &(\text{since}~ \alpha_1 + \cdots + \alpha_m < (m-2) \pi ~\text{on}~ \mathbb{H}^2). \end{align*} We choose $x_0$ such that $\cosh{x_0} > \frac{1}{\sin{\left( \frac{\pi}{m}\right)}}$. \begin{align*} S(x_0)&=\left(\sum_{i=1}^{m} 2 \sin^{-1}\left(\frac{\cos{\left( \frac{\alpha_i}{2} \right)}}{\cosh{x_0}} \right) \right)-2\pi \\ &<\left(\sum_{i=1}^{m} 2 \sin^{-1}\left(\cos{\left( \frac{\alpha_i}{2} \right)} \sin{\left( \frac{\pi}{m}\right)} \right) \right)-2\pi\\ &<\left(\sum_{i=1}^{m} 2 \sin^{-1}\left( \sin{\left( \frac{\pi}{m}\right)} \right) \right)-2\pi = 0. \end{align*} We see that the function $S$ changes sign on $[0,x_0]$. Since $S$ is continuous, by the intermediate value theorem there is a value $r \in [0,x_0]$ such that $S(r)=0$. In other words, $\left(\sum_{i=1}^{m} 2 \sin^{-1}\left(\frac{\cos{\left( \frac{\alpha_i}{2} \right)}}{\cosh{r}} \right) \right)=2\pi$.
Hence, the inscribed circle radius is $x=r$ with the corresponding central angles $\beta_i$ satisfying $\beta_1 + \beta_2 + \cdots + \beta_m=2\pi$ $\square$ \subsection{The Hyperbolic Plane Group $G=[3,3,3,3]$} As a typical example the group $G=[3,3,3,3]$ contains exactly 4 rotational centers each of order 3 on a topological sphere. The tree surface graphs from $G=[3,3,3,3]$ are presented in Fig.\ref{TreeGraph}. There are 5 types of graphs that represent the fundamental domains of $G$. We could construct fundamental domains based on these tree graphs. The complete corresponding fundamental domains are sketchily given in Fig.\ref{Fundamentals}. \subsubsection{Type-5 fundamental domain} We would like to find the best-inscribed circle into the fundamental domain of the above hyperbolic plane group G. We are first focused on the type-5 fundamental domain. Since this type has the most edges, we guess that the largest circle radius would be attained in this type. \begin{figure}[h!] \centering \includegraphics[scale=0.40]{TreeGraph.png}\\~\\~\\ \caption{All together: 5 types of tree surface graphs of fundamental domains for $G=[3,3,3,3]$ on a sphere} \label{TreeGraph} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.45]{Fundamentals.png}\\~\\~\\ \caption{All together: 5 types of sketchy fundamental domains for $G=[3,3,3,3]$} \label{Fundamentals} \end{figure} This tree graph on the sphere is the surface diagram of the conjectured optimal fundamental domain of G=[3,3,3,3] given by its Conway-Macbeath signature. This diagram is a tree graph on a topological elastic sphere with the given 3-centers as $4=m$ vertices, each of valence (degree) 1 and $2=y$ additional vertices, each of valence 3. Then imagine a scissor we take, and cut the sphere along this tree graph to obtain a topological domain with the later metrical properties. Then the number of vertices is $6=v$, and the number of edges is $5=e$. The criterion of a tree: $v=e+1$ is fulfilled. We get a fundamental polygon of $m*1+y*3=10$ vertices (and sides), as in Fig.\ref{Treegraphtype2}. To give more details, see Fig.\ref{Treegraphtype2}, we dissect the tree graph of type-5 through directions: $\bar{P_1} \rightarrow \bar{R_1} \rightarrow \bar{P_1} \rightarrow \bar{P_2} \rightarrow \bar{R_2} \rightarrow \bar{P_2} \rightarrow \bar{R_3} \rightarrow \bar{P_2} \rightarrow \bar{P_1} \rightarrow \bar{R_4} \rightarrow \bar{P_1}$. Then we denote the future angles $\alpha_1, \alpha_2, \alpha_6$ at vertex $\bar{P_1}$ and $\alpha_3, \alpha_4, \alpha_5$ at vertex $\bar{P_2}$, Fig. \ref{Treegraphtype2}. We construct the fundamental domain by opening up the dissected elastic tree surface graph. As a result, we obtain type-5 fundamental domain as shown in Fig.\ref{Fundamentals}-\ref{Fundamentaldomain5}. \begin{figure}[h!] \centering \includegraphics[scale=0.35]{treeGraph2.png} \caption{Type-5 tree surface graph of $G=[3,3,3,3]$ is dissected by a scissor with orientation $\bar{P_1} \rightarrow \bar{R_1} \rightarrow \bar{P_1} \rightarrow \bar{P_2} \rightarrow \bar{R_2} \rightarrow \bar{P_2} \rightarrow \bar{R_3} \rightarrow \bar{P_2} \rightarrow \bar{P_1} \rightarrow \bar{R_4} \rightarrow \bar{P_1} $ } \label{Treegraphtype2} \end{figure} \begin{figure}[h!] \centering \includegraphics[scale=0.50]{Disecting.png} \caption{Type-5 fundamental domain of $G=[3,3,3,3]$. 
Imagine the analogous construction later on for $G=[3,3,3,\cdots,3]$ ($l$ rotational centers of order 3).} \label{Fundamentaldomain5} \end{figure} We have some metrical properties as presented in the equation system $(\ref{cons_g1}), \cdots, (\ref{cons_h2})$: \begin{align} \cos{\left(\frac{{\alpha}_1}{2}\right)}=\cosh{x} ~\sin{\left(\frac{{\beta}_1}{2}\right)} \label{cons_g1}\\ \cos{\left(\frac{{\alpha}_2}{2}\right)}=\cosh{x} ~\sin{\left(\frac{{\beta}_2}{2}\right)}\label{cons_g2}\\ \cos{\left(\frac{{\alpha}_3}{2}\right)}=\cosh{x} ~\sin{\left(\frac{{\beta}_3}{2}\right)}\label{cons_g3} \\ \cos{\left(\frac{{\alpha}_4}{2}\right)}=\cosh{x} ~\sin{\left(\frac{{\beta}_4}{2}\right)}\label{cons_g4}\\ \cos{\left(\frac{{\alpha}_5}{2}\right)}=\cosh{x} ~\sin{\left(\frac{{\beta}_5}{2}\right)}\label{cons_g5}\\ \cos{\left(\frac{{\alpha}_6}{2}\right)}=\cosh{x} ~\sin{\left(\frac{{\beta}_6}{2}\right)}\label{cons_g6}\\ \cos{\left(\frac{ \pi}{3}\right)}=\cosh{x} ~\sin{\left(\frac{\theta}{2}\right)}\label{cons_g7} \end{align} where \begin{align} \sum_{i=1}^{6} \beta_{i}~+~ 4 \theta =2 \pi \label{cons_h}\\ \alpha_{1}+\alpha_{2}+\alpha_{6}=2 \pi \label{cons_h1} \\ \alpha_{3}+\alpha_{4}+\alpha_{5}=2 \pi \label{cons_h2} \end{align} Among these 10 equations, we treat equation (\ref{cons_g7}) as the one providing the objective function $f$. That is, we form $f= \cosh(x)=\frac{\cos{\left( \frac{\pi}{3} \right)}}{\sin{\left( \frac{\theta}{2} \right)}}$ and we want to find the best value of the radius $x$, i.e.\ to make $f$ maximal. However, the conditions, i.e.\ equations $(\ref{cons_g1}),\cdots, (\ref{cons_g6}), (\ref{cons_h}), (\ref{cons_h1})$, and $(\ref{cons_h2})$, must be satisfied. Therefore, we face a constrained extremum problem. We shall describe the so-called Lagrange multiplier method to attack this problem in Sect.\ref{sectionLagrange}. Now we motivate our approach. We want to find the best value of the radius $x$, that is, the maximum value of $x$ under the constraints above. We consider equation (\ref{cons_g7}) as the candidate for our objective function: \begin{equation*} x=f(\theta)=\cosh^{-1}\left(\frac{1}{2 \sin(\frac{\theta}{2})}\right) \end{equation*} One could formally reduce the conditions above by substituting all of the constraints into $f$. Then we have $f=f(\alpha_1, \cdots, \alpha_6, \beta_1, \cdots, \beta_6, \theta)$, where the natural domain of $f$ (a subset of $\mathbb{R}^{13}$) is determined by the remaining constraints. We first study a very specific case (the regular case) where all vertices have the same interior angle $ 2 \pi / 3 $. This case provides our conjectured optimum. \subsection{Very Specific Case (Regular Case)} We consider a specific case, the so-called regular case, by setting $\alpha_1=\alpha_2=\cdots=\alpha_6$. The constraints (\ref{cons_h1}) and (\ref{cons_h2}) then force each vertex angle to equal $\frac{2 \pi}{3}$. By (\ref{cons_g1})--(\ref{cons_g7}), the central angles $\beta_i$ are then all equal to $\theta$, which satisfies the following inequality (the triangle condition in the hyperbolic plane): \begin{align*} \frac{\theta}{2}+\frac{1}{2}\frac{2 \pi}{3}+\frac{\pi}{2} < \pi,~\text{hence}~ \theta < \frac{\pi}{3}. \end{align*} We are left with one equation for the radius $x$, namely $\cos{\left(\frac{\pi}{3}\right)}=\cosh{(x)} \sin{\left(\frac{\theta}{2}\right)}$. Therefore $x=\cosh^{-1}{\left(\frac{1}{2 \sin{(\frac{\theta}{2})}}\right)}$.
The value $x$ depends only on the central angle $\theta$.\\ Since the sum of all central angles should be $2 \pi$, it follows that $10 \theta = 2 \pi$, i.e.\ $\theta = \frac{\pi}{5}$. Now we can compute the exact value of $x$ directly in this specific case: \begin{equation*} x \approx 1.061275061 \end{equation*} The area $A$ of a circular disc with radius $x$ in the hyperbolic plane is given by \begin{equation*} A=4 \pi \sinh^{2}\left( \frac{x}{2} \right),~\text{in our case}~A \approx 3.883222071. \end{equation*}\\ \noindent Furthermore, the density $d$ is defined as the ratio of the area of the circle to that of the fundamental polygon. One can compute directly that the area of the polygon is $\frac{4}{3} \pi$, a characteristic invariant of the group $G=[3,3,3,3]$. In our calculation, we found that $d \approx 0.9270509814$. Our conjecture is that this regular case gives the best, i.e.\ the largest, circle inscribed into the fundamental polygon of the group $G=[3,3,3,3]$. We investigate this conjecture by studying the possible situations. We obtain a conditional extremum problem. First, we use a tool of multivariate calculus, the so-called Lagrange multiplier method. \section{The Lagrange Multiplier Method}\label{sectionLagrange} Based on the system of equations in the previous section, Eqs.\ $(\ref{cons_g1}), \cdots, (\ref{cons_h2})$, we formulate the following conditional extremum problem. From these 10 equations we obtain the function $f$ from the constraint equation (\ref{cons_g7}) and the constraints $g_i, h, h_j$ from the 9 remaining equations. We would like to find the maximum of the radius $x$. From equation (\ref{cons_g7}), we have $\cosh(x)=\frac{\cos{\left(\frac{ \pi}{3}\right)}}{\sin{\left(\frac{\theta}{2}\right)}}$. Since $\cosh$ is a monotonically increasing function for $x>0$, to maximize $x$ we just maximize $\cosh(x)$. We take $f(\alpha_1, \cdots, \alpha_6,\beta_1, \cdots, \beta_6, \theta)=\frac{\cos{\left(\frac{ \pi}{3}\right)}}{\sin{\left(\frac{\theta}{2}\right)}}$ as the objective function. We formulate the constraints by subtracting the expressions in equations $(\ref{cons_g1}), \cdots, (\ref{cons_g6})$ from the expression in equation (\ref{cons_g7}), i.e.\ $\frac{\cos{\left(\frac{ \pi}{3}\right)}}{\sin{\left(\frac{\theta}{2}\right)}}-\frac{\cos{\left(\frac{{\alpha}_i}{2}\right)}}{\sin{\left(\frac{{\beta}_i}{2}\right)}}=0$, for $i=1,\ldots,6$. Since $\alpha_i, \beta_i, \theta$ are the variables, half of them representing angles of right triangles, we can restrict their values to $[0,\pi ]$.\\ Therefore we treat our problem in the region $[0, \pi]^{13} \subset \mathbb{R}^{13}$. For convenience, we also write the tuple $( \alpha_1, \alpha_2, \alpha_3, \alpha_4, \alpha_5, \alpha_6, \beta_1, \beta_2, \beta_3, \beta_4, \beta_5, \beta_6, \theta)=\boldsymbol{X}$, as an element of $[0, \pi]^{13} \subset \mathbb{R}^{13}$. The set of constraints is described by the following system.
\begin{align*} &g_i=\cos{\left(\frac{\pi}{3}\right)}\sin{\left(\frac{\beta_i}{2}\right)}-\cos{\left(\frac{\alpha_i}{2}\right)}\sin{\left(\frac{\theta}{2}\right)}=0,~~\text{where}~\cos{\frac{\pi}{3}}=\frac{1}{2},\\&~\text{and}~i=1,2,\cdots,6\\ &h=\sum_{i=1}^{6} \beta_i + 4 \theta-2 \pi =0,~~ h_1=\alpha_1+\alpha_2+\alpha_6-2 \pi=0,\\ &h_2=\alpha_3+\alpha_4+\alpha_5-2 \pi=0 \end{align*} The complete formulation of our constrained extremum problem is as follows: \begin{align} \text{Maximize}~f(\boldsymbol{X})=\frac{1}{2 \sin{\left( \frac{\theta}{2} \right)}},~ \text{subject to the above constraints}. \end{align} \subsection{The Compactness of the Constrained Region} We consider the constrained region $S$. The compactness of $S$ guarantees the existence of a maximum (and a minimum) of $f$ on $S$. The functions $g_i(\boldsymbol{X})=\frac{1}{2} \sin{\left( \frac{\beta_i}{2}\right)}-\sin{\left( \frac{\theta}{2}\right)} \cos{\left( \frac{\alpha_i}{2}\right)}$, $i=1,\cdots, 6$, are bounded and continuous. Therefore the sets $g_i^{-1}(0)$, being inverse images of the closed set $\{0\}$ under continuous functions, are closed in $\mathbb{R}^{13}$. Similarly, $h^{-1}(0)$, $h_{1}^{-1}(0)$, $h_{2}^{-1}(0)$ are closed in $\mathbb{R}^{13}$. Hence $S$, as a closed subset of the compact cube $[0,\pi]^{13}$, is compact. Note that $\theta$ cannot vanish on $S$ (otherwise the $g_i$ would force all $\beta_i=0$, contradicting $h=0$), so $f$ is continuous and bounded on $S$, and thus attains its maximum and minimum on $S$. We have obtained that our conjectured point $X_0$ satisfies the necessary condition for a local maximum of the constrained extremum problem. We need to check further whether this point is really a local maximum point. We apply the second derivative test, called the bordered determinant criterion, of \cite{Magnus2019, Trench2012}. \section{Other Fundamental Domain Types: Finding Global Maximum}\label{Other_Possibility} Based on our analysis of the Type-5 fundamental domain for $G=[3, 3, 3, 3]$, we obtained the largest radius $x\approx1.061275061$. We need to compare it with the largest radii reached on the other fundamental domains, of types 1, 2, 3, 4 (Fig. \ref{TreeGraph}--\ref{Fundamentals}). The analogous methods, Lagrange multipliers and the bordered determinant, are applied to the cases of types 3 and 4, since they have independent parameters arising from the additional points, while the fundamental domains of types 1 and 2 have only fixed vertex angles. The resulting equation systems can be solved immediately by some appropriate substitutions. \begin{enumerate} \item \textbf{Type-1} \\ The fundamental domain constructed for this type has no additional point. It contains two rotational centers $R_1, R_4$ and two rotational centers $R_2, R_3$ that appear twice, see Fig.\ref{TreeGraph} and \ref{Fundamentals}. The angles at the rotational vertices $R_1$ and $R_4$ are equal to $\frac{2\pi}{3}$, while the angles at the twice-appearing vertices $R_2^1$, $R_2^2$, $R_3^1$, $R_3^2$ are half of the original, i.e.\ $\frac{2\pi}{6}$. We derive the following system of equations \begin{align*} \cos{\left( \frac{1}{2} \cdot \frac{2\pi}{3}\right)}=\cosh{x}\sin{\left( \frac{\theta_1}{2}\right)},&~~ \cos{\left(\frac{1}{2} \cdot \frac{2\pi}{6}\right)}=\cosh{x}\sin{\left( \frac{\theta_2}{2}\right)}\\ 2 \theta_1 &+ 4 \theta_2 = 2 \pi \end{align*} This equation system has only fixed parameters.
Using some appropriate substitutions, we conclude that the value of the radius $x$ is given by \begin{equation*} x=\cosh^{-1}{\left( \frac{3}{2} \right)} \approx 0.962423 \end{equation*} \item \textbf{Type-2}\\ In this type, the rotational center $R_1$ appears three times on the fundamental domain. We derive a single equation similarly to type-1 and obtain the numerical value $x \approx 0.927539$. \item \textbf{Type-3}\\ This type has a single additional point on the tree graph, Fig.\ref{TreeGraph}. The corresponding fundamental domain is given in Fig.\ref{Fundamentals}. On that fundamental domain the additional point $P$ appears four times, namely $P^1$, $P^2$, $P^3$, and $P^4$. We denote the angles near $P^1$, $P^2$, $P^3$, $P^4$ by $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$ and their corresponding central angles by $\beta_1$, $\beta_2$, $\beta_3$, $\beta_4$, respectively. Interestingly, the value of $x$ depends on $\alpha_1$, $\alpha_2$, $\alpha_3$, $\alpha_4$. Inspired by the exploration of type-5 in Section 2, we can formulate the set of constraints and find the maximum value of $\cosh{x}=\frac{1}{2 \sin{\left( \frac{\theta}{2}\right)}}$. Finally, this equation system can be solved for $x$, namely $x\approx 1.031718$. \item \textbf{Type-4} In this type, the vertex $R_3$ appears twice on the fundamental domain, see Fig.\ref{Fundamentals}, while the additional point $P$ is copied three times, namely $P^1$, $P^2$, $P^3$. We denote the angles near $P^1$, $P^2$, $P^3$ by $\alpha_1, \alpha_2, \alpha_3$ and their corresponding central angles by $\beta_1, \beta_2, \beta_3$, respectively. We obtain the equation system of the constraints, and the corresponding approximate value of $x$ is about $1.011595$.\\ \end{enumerate} The summary of the largest possible inscribed circle radii for each type of fundamental domain is presented in the following table. \begin{table}[h!] \centering \begin{tabular}{||c c||} \hline Type & Largest Radius \\[0.5 ex] \hline \hline 1 & 0.962423 \\ \hline 2 & 0.927539 \\ \hline 3 & 1.031718 \\ \hline 4 & 1.011595 \\ \hline 5 & 1.061275 \\ \hline \end{tabular} \caption{The largest inscribed circle radius comparison} \label{comparison} \end{table} According to the comparison of the largest radii in Table \ref{comparison}, the largest radius among all types is attained for type-5, namely $x \approx 1.061275$. Based on the exploration of the constrained optimum problem, it can be conjectured that the optimum occurs whenever the corresponding parameters are equal. This intuition is confirmed in the next section. \section{Geometric Argument: Conclusion to [3,3,3,3]}\label{Conclussion_G4} According to the Lagrange multiplier approach and the bordered determinant criterion, it can be concluded that the maximum possible inscribed circle radius is attained whenever the corresponding independent vertex angles are equal. Theorem \ref{Theorem2} below states this more intuitively and in a geometrically simpler way: \textbf{whenever we equalize the corresponding angles at $G$-equivalent vertices, the radius will increase}. This can be further developed into a proof of necessary conditions for the general problem for an arbitrary cocompact plane group $G$ (as conjectured by the authors of \cite{luvcic2018}).
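Before turning to this geometric argument, the values in Table \ref{comparison} can be cross-checked numerically. The following minimal Python sketch (assuming NumPy and SciPy are available; the function names, the bracketing interval and the angle lists are ours) solves the central-angle condition $\sum_i\beta_i=2\pi$ of Theorem \ref{ExistenceInball} for $\cosh x$ in the fixed-parameter configurations, namely type-1, type-2 and the regular type-5 case; a centre of order 3 appearing $i$ times carries the angle $\frac{2\pi}{3i}$ at each copy. Types 3 and 4 would additionally require maximizing over their free vertex angles and are omitted here.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def central_angle_defect(c, half_angles):
    # sum of central angles beta_i = 2*arcsin(cos(alpha_i/2)/cosh(x)), minus 2*pi
    return sum(2.0 * np.arcsin(np.cos(a) / c) for a in half_angles) - 2.0 * np.pi

def inradius(half_angles):
    # solve the central-angle condition for c = cosh(x), then return x
    c = brentq(central_angle_defect, 1.0 + 1e-9, 50.0, args=(half_angles,))
    return float(np.arccosh(c))

# Half vertex angles alpha_i/2 of the dissected fundamental polygons:
type1 = [np.pi / 3] * 2 + [np.pi / 6] * 4   # R1, R4 whole; R2, R3 halved
type2 = [np.pi / 3] * 3 + [np.pi / 9] * 3   # three whole centers; one split in three
type5 = [np.pi / 3] * 10                    # regular case: ten angles 2*pi/3

for name, cfg in [("type-1", type1), ("type-2", type2), ("type-5", type5)]:
    print(name, round(inradius(cfg), 6))    # ~0.962423, ~0.927539, ~1.061275
\end{verbatim}
The printed values agree with the first, second and fifth rows of Table \ref{comparison}, and the type-1 result reproduces $x=\cosh^{-1}(3/2)$ up to numerical precision.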
\begin{theorem}\label{Theorem2} If, in Theorem \ref{ExistenceInball}, we exchange two angles, say $\alpha_1$ and $\alpha_2$, both to $\frac{\alpha_1+\alpha_2}{2}$ in a given configuration with fixed radius $x$, hence fixed $\cosh{x}$, then the arithmetic mean $\frac{\beta_1 + \beta_2}{2}$ of the corresponding central angles $\beta_1$ and $\beta_2$ increases. So, changing only $\alpha_1$, $\alpha_2$ to $\frac{1}{2}(\alpha_1 + \alpha_2)$, the inscribed circle will have a bigger radius after the procedure. \end{theorem} \noindent Before proving this theorem, we first discuss a Jensen-type inequality of Lu\v{c}i\'{c}--Moln\'{a}r \cite{luvcic1991} for $\mathbb{H}^2$, described in the following lemma. \begin{lemma} The function $\beta : (0, \frac{\pi}{2}) \ni \alpha \mapsto \beta(\alpha) \in (0, \frac{\pi}{2})$, as above, given by $\sin{(\beta(\alpha))} = \frac{\cos{\alpha}}{\cosh{x}}$, with fixed $x$ and $\cosh{x}$, is concave (from below). \end{lemma} \noindent \textbf{\textit{Proof}} [by communication with Prof.\ Emil Moln\'{a}r]\\ Looking at the formulas in Theorem \ref{ExistenceInball}, $\displaystyle \cos{\left(\frac{\alpha}{2}\right)}=\cosh{x} \sin{\left(\frac{\beta}{2} \right)}$ is the crucial relation (with fixed $\cosh{x}>1$) defining the central angles $\beta_i(\alpha_i)$ of $\alpha_i$, $(i=1,\cdots,m \geq 3)$, as a function $\beta(\alpha)$ of $\alpha$. Let us start with \begin{equation} \sin{(\beta(\alpha))}=\frac{\cos{\alpha}}{\cosh{x}},~~~ (0<\alpha<\frac{\pi}{2}). \label{awal} \end{equation} By differentiating both sides by $\alpha$, we obtain \begin{align*} \frac{d}{d\alpha} (\sin{(\beta(\alpha))})= \frac{d}{d\alpha}\left( \frac{\cos{\alpha}}{\cosh{x}} \right),~\text{which leads to}~ \cos(\beta(\alpha))\, \frac{d\beta(\alpha)}{d\alpha}=-\frac{\sin{\alpha}}{\cosh{x}}, \end{align*} \begin{align} \frac{d\beta(\alpha)}{d\alpha}=\frac{1}{\cosh{x}}\left( -\frac{\sin{\alpha}}{\cos{(\beta{(\alpha)})}}\right).\label{beta'} \end{align} We differentiate $\frac{d\beta(\alpha)}{d\alpha}$ by $\alpha$ again: \begin{align*} \frac{d^2}{d\alpha^2}(\beta(\alpha))&=\frac{d}{d\alpha}\left(\frac{d\beta(\alpha)}{d\alpha}\right) =-\frac{1}{\cosh{x}}\left( \frac{\cos{(\beta{(\alpha)})}\cos{\alpha}+\sin{(\beta{(\alpha)})}\sin{\alpha}\,\frac{d\beta(\alpha)}{d\alpha}}{\cos^{2}(\beta(\alpha))} \right). \end{align*} Using the fact $\displaystyle{\frac{1}{\cos^{2}\beta{(\alpha)}}=1+\tan^2{\beta{(\alpha)}}}$ and substituting Eqs.\ (\ref{awal}) and (\ref{beta'}), we obtain \begin{align*} \frac{d^2}{d\alpha^2}(\beta(\alpha))&=-\frac{1}{\cosh^{2}x}\left( \frac{\cosh{x}\,\cos{(\beta{(\alpha)})}\cos{\alpha}-\tan{(\beta{(\alpha)})} \sin^2{\alpha}}{\cos^2{\beta{(\alpha)}}}\right)\\ &=-\frac{1}{\cosh^{2}x} \left(\frac{1+\tan^2{(\beta{(\alpha)})}}{\tan{(\beta{(\alpha)})}} \right)\left( \cos^2{\alpha}-\tan^2{(\beta{(\alpha)})} \sin^2{\alpha}\right) < 0, \end{align*} since, by (\ref{awal}), $\cos^2{\alpha}-\tan^2{(\beta{(\alpha)})} \sin^2{\alpha}=\cos^2{\alpha}\,\frac{\cosh^2{x}-1}{\cosh^2{x}-\cos^2{\alpha}}>0$. Thus $\beta{(\alpha)}$ is a concave (from below) function. $\square$\\~\\ \noindent \textbf{\textit{Proof of Theorem \ref{Theorem2}}}.\\ By Lemma 1, and because sine is a monotone increasing and concave function on $(0,\pi)$, \begin{equation*} \sin{\left( \beta{\left( \frac{\alpha_1 + \alpha_2}{2} \right)} \right)}> \frac{\sin{(\beta{(\alpha_1)})} + \sin{(\beta{(\alpha_2)})}}{2} = \left( \frac{\sin{\beta_1}+\sin{\beta_2}}{2} \right) \end{equation*} holds as a Jensen-type inequality (the graph of the function $\beta$ lies above the segment joining $(\alpha_1; \beta_1)$ and $(\alpha_2; \beta_2)$ at the midpoint $\frac{\alpha_1 + \alpha_2}{2}$, and hence at every point of
the segment). Then the sum would be $\sum_{i}\beta_i>2\pi$. \textbf{To equalize it again}, by the procedure in Theorem \ref{ExistenceInball} with the previous angles and two copies of $\frac{1}{2}(\alpha_1+\alpha_2)$ instead of $\alpha_1$ and $\alpha_2$, $\cosh{x}$ and $x$ have to be chosen bigger. In our local optimal cases, where every possible equality has been reached, such an increase of $x$ $(\cosh{x})$ by choosing $\left( \frac{\alpha_1 + \alpha_2}{2} \right)$ is not possible, so $x$ cannot increase in such a way. Comparison of these local optima then yields the optimum, since existence has already been guaranteed by the compactness of the domain of variables. $\square$ \section{Generalization to $G=[3,3,3, \cdots, 3]$ \\ of $l \geq 4$ rotational centers each of order 3}\label{Generalization} Finally, we consider the group $G=[3,3,3, \dots,3]$ with $l$ rotational centers of order 3, $l \geq 4$. The largest inscribed circle radius is attained by equalizing the angles corresponding to the additional vertices as far as possible. We use the following proposition from \cite{luvcic1990combinatorial, molnar2019} to study this general construction. \begin{proposition}\label{Proposition_Emil} For the number $w$ of additional points of an orbifold tree, \begin{equation*} w \leq 2 \alpha g + l -q-2 \end{equation*} holds. If $n$ is the number of edges (and vertices) of a fundamental domain of a plane group $G$, then (with some exceptions if the domain is unique) $\displaystyle{n_{\mathrm{min}} \leq n \leq n_{\mathrm{max}}}$ holds, where \begin{equation} n_{\mathrm{min}}=2 \alpha g ~ \text{if} ~l=q=0,~~ \text{or}~~ n_{\mathrm{min}}=q_0 + \left( \sum_{k=1}^{q} l_k \right)+2\alpha g + 2 l + 2 q -2~ \text{otherwise} \end{equation} and\\ \begin{equation} n_{\mathrm{max}}=\left( \sum_{k=1}^{q} l_k \right) + 6 \alpha g + 4l+5q-6, \end{equation} where $\alpha=2$ if the orbifold is orientable and $\alpha=1$ otherwise, and $q_0$ is the number of boundary components containing no dihedral corner. Moreover, for a given $G$ there exist fundamental domains with $n_{\mathrm{min}}$ and $n_{\mathrm{max}}$ edges. \end{proposition} In our case $G=[3,3,3, \cdots, 3]$ above, the $l$ rotational centers are embedded into a topological sphere, i.e.\ $g=0$. Since this surface is orientable, $\alpha=2$. Moreover, there is no boundary component, so $q=q_0=0$. Applying these conditions to the proposition, we obtain Lemma \ref{lemma_addpoint} as follows. \begin{lemma}\label{lemma_addpoint} For $G=[3,3,3, \cdots, 3]$ with $l$ rotational centers of order 3, $l \geq 4 $, the possible number of additional points $w$ is bounded as follows: \begin{equation}\label{additional} 0 \leq w \leq l-2. \end{equation} Furthermore, the possible number $n$ of sides (and vertices) of the fundamental polygon is given by \begin{equation} 2l-2 \leq n \leq 4l-6. \end{equation} \end{lemma} Finally, we give the last theorem of this paper, namely on the maximum radius of the circle inscribed into the fundamental domain of $G=[3,3,3,\cdots,3]$. \begin{theorem} Let $G=[3,3,3,\cdots,3]$ be a group with $l$ rotational centers of order 3, $l \geq 4$. The largest inscribed circle radius in its fundamental domain is realized when $l-2$ additional points are given, and their corresponding vertex angles are equalized.
Furthermore, the inscribed circle radius $x$ is given by formula \begin{equation} x=\cosh^{-1}{\left(\frac{1}{2 \sin{(\frac{\pi}{4l-6}})}\right)},~~~\text{for all}~~l=4, 5, 6, \cdots \end{equation} \end{theorem} We need some preparations to prove this theorem. We divided our discussion into 3 following subsections also with additional information. \subsection{On combinatorial structure to the tree graph \\ of $G=[3,3,3,\cdots,3]$} Firstly, the tree graph on the topological sphere for corresponding fundamental domain can be obtained completely through the algorithm in \cite{luvcic2018}, indicated previously. Particularly, in this case, $G=[3,3,3,\cdots,3]$, the tree graphs can be represented by the set of solutions for a "linear Diophantine equation system". \\ Let $\mathrm{A}_i$ be the number of rotational centers that have degree $i$ in the tree surface graph, i.e they have $i$ edges connected. Hence, the total number of all $A_i$ should be $l$, $\sum_{i=1}^{l-1} \mathrm{A}_i = l$. Note that the maximum possible degree of a rotational center is $l-1$.\\ Again, let $\mathrm{B}_j$ be the number of additional points whose degree is $j$ in the tree graph. The minimum degree of an additional point is $3$. While the maximum possible degree is $l$, e.g it happens in a star graph. Therefore, by adding all $B_j$, we get $w$, the total number of additional points, i.e $\sum_{j=3}^{l} \mathrm{B}_j = w$. Furthermore, in our tree surface graph, the vertices can be either rotational centers or additional points. Note that the sum of all degrees of vertices in a graph is equal to 2 times the number of its edges. Since in a tree graph with $v$ vertices the number of edges is $v-1$, we can state the following equation \begin{equation*} \sum_{i=1}^{l-1} i \cdot \mathrm{A}_i + \sum_{j=3}^{l} j \cdot \mathrm{B}_j = 2 (l+w-1)=n, \end{equation*} where $n$ is the number of vertices (sides of the fundamental polygon to the tree graph of vertices $v=l+w$ edges $v-1=l+w-1$).\\ Therefore, all of possible tree graphs for $G$ have to satisfy the solutions $\{ A_i, B_j \}$, $i=1 \cdots l-1$, $j=3 \cdots l$ of the following "linear Diophantine equation system" \begin{align}\label{Diophantine1} \sum_{i=1}^{l-1} i \cdot \mathrm{A}_i + \sum_{j=3}^{l} j \cdot \mathrm{B}_j &= 2 (l+w-1)=n\\ \sum_{i=1}^{l-1} \mathrm{A}_i &= l \\ \sum_{j=3}^{l} \mathrm{B}_j &= w \\ \label{Diophantine4} \mathrm{A}_i, \mathrm{B}_j, &\in \mathbb{N} \cup \{0\},~ \text{where}(0 \leq w \leq l-2) \end{align} Example as before: Let $G=[3,3,3,3]$, i.e $l=4$. The possible additional points are $w=0,1,2$. The corresponding linear Diophantine equations system is given by \begin{align*} \mathrm{A}_1 + 2 \mathrm{A}_2 + 3 \mathrm{A}_3 + 3 \mathrm{B}_3 + 4 \mathrm{B}_4 = 2 ( 4 + w-1)\\ \mathrm{A}_1 + \mathrm{A}_2 + \mathrm{A}_3 = 4,~ \mathrm{B}_3 + \mathrm{B}_4 = w,~ \text{where}~w=0,1,2 \end{align*} The complete 5 solutions of the system above and their corresponding tree surface graphs, see Fig.\ref{TreeGraph}, are presented in the following table \begin{table}[h!] 
\centering \begin{tabular}{||c c c c c c c||} \hline Additional points & $\mathrm{A}_1$ & $\mathrm{A}_2$ & $\mathrm{A}_3$ & $\mathrm{B}_3$ & $\mathrm{B}_4$ & Tree surface graph \\[0.5 ex] \hline \hline 0 & 2 & 2 & 0 & 0 & 0 & Type-1 \\ \hline 0 & 3 & 0 & 1 & 0 & 0 & Type-2 \\ \hline 1 & 4 & 0 & 0 & 0 & 1 & Type-3 \\ \hline 1 & 3 & 1 & 0 & 1 & 0 & Type-4 \\ \hline 2 & 4 & 0 & 0 & 2 & 0 & Type-5 \\ \hline \end{tabular} \caption{The Diophantine equations system solution and its tree surface graph representations for $G=[3,3,3,3]$} \label{} \end{table} for $l=4$ rotational centers, there are maximum $l-2$ additional points, (\ref{additional}). Consider $w=l-2$ maximum additional points added, then the corresponding solution of (\ref{Diophantine1})-(\ref{Diophantine4}), is $\mathrm{A}_1=l$, $\mathrm{A}_i=0$ for $i\neq 1$, and $\mathrm{B}_3=l-2$, $\mathrm{B}_j=0$ for $j\neq 3$. The corresponding inscribed circle radius of each linear Diophantine solution (tree surface graph types) could be described in the next two subsections. \subsection{The constrained optimum problem in a single equation} Consider a tree surface graph and its fundamental domain of $G$. Let $R_i$ be a rotational center with $i$ adjacent edges ($i \in \{1, 2, 3, \cdots, l-1\}$). The scissor dissecting in this tree surface graph yields the fundamental domain, particularly the rotational center with $i$ edges are dissected into $i$ identical angles, i.e $\frac{1}{i} \frac{2 \pi}{3}$. Furthermore, the corresponding trigonometric relation formed by right triangle in Fig.\ref{Trigono1} can be written as follows \begin{align*} \cos{\left(\frac{\alpha_i}{2}\right)}&=\cosh{x}\sin{\left(\frac{\beta_i}{2}\right)},~\text{it leads to}~ \cos{\left(\frac{1}{i} \frac{ \pi}{3} \right)}=\cosh{x} \cdot \sin{\left(\frac{\beta_i}{2}\right)} \end{align*} \text{then}~$\beta_i =2\sin^{-1}{\left( \frac{\cos{\left(\frac{1}{i}\frac{\pi}{3}\right)}}{\cosh{x}} \right)}$, for $i=1, 2,\cdots, l-1$. Particularly, if $i=1$, i.e the rotational center appears as a "leaf" in the tree surface graph, we have \begin{align}\label{cosh} \cosh{x}=\frac{1}{2 \sin{\left( \frac{\beta_1}{2} \right)}} \end{align} Remark: The conditions $\cosh{x}>1$ in (\ref{cosh}) affect to the boundness of $\beta_1$, i.e we can define the interval for $\beta_1$, that is $\beta_1 \in (0,\frac{\pi}{3})$.\\ \begin{figure}[h!] \begin{center} \begin{minipage}[b]{0.45\textwidth} \includegraphics[scale=0.35]{Rotational} \caption{Right triangle with rotational center. For larger $\alpha_i$ we get smaller $\beta_i$.} \label{Trigono1} \end{minipage} \begin{minipage}[b]{0.45\textwidth} \includegraphics[scale=0.35]{Additional.PNG} \caption{Right triangle with additional point} \label{Trigono2} \end{minipage} \end{center} \end{figure} By substituting the expression $\cosh{x}$ (\ref{cosh}) into $\beta_i$'s we obtain \begin{equation}\label{beta_i} \beta_i=2\sin^{-1}\left( 2 \cos{\left( \frac{1}{i} \frac{\pi}{3} \right)} \sin{\left( \frac{\beta_1}{2} \right)} \right) \end{equation} Remark: the argument of $\sin^{-1}$ in (\ref{beta_i}) need to be naturally on the interval $[-1,1]$ (in this situation $[0,1]$). That means, \begin{align*} 0 \leq 2\cos{\left( \frac{1}{i} \frac{\pi}{3}\right)} \sin{\left( \frac{\beta_1}{2}\right)} \leq 1, ~ \text{for every}~ i=1,\cdots,l-1 \end{align*} it means $\beta_1$, is bounded i.e. \begin{align}\label{beta_11} 0 \leq \beta_1 \leq 2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{i} \frac{\pi}{3} \right)}} \right)},~\text{for every}~i=1,\cdots,l-1. 
\end{align} It means $\beta_1$ bounded by the least upper bound, i.e. $0 \leq \beta_1 \leq 2\sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{l-1} \frac{\pi}{3} \right)}} \right)}$, for fixed $l \geq 4$. A similar argumentation is applied in the right triangle with the additional point as vertex see Fig.\ref{Trigono2}. Unlike the rotational center case, in this case, we have $\alpha_j=\frac{2\pi}{j}$ Then the trigonometric relationship in the triangle related to additional points is given by \begin{equation}\label{beta_j} \beta_j=2\sin^{-1}\left( 2 \cos{\left( \frac{\pi}{j} \right)} \sin{\left( \frac{\beta_1}{2} \right)} \right),~\text{for}~j=3, 4,\cdots, l. \end{equation} Again, since the argument of $\sin^{-1}$ should be on $[-1,1]$ (in our case $[0,1]$), by the analogous consideration as in (\ref{beta_11}), we have \begin{align}\label{beta_12} 0 \leq \beta_1 \leq 2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{\pi}{j} \right)}} \right)},~\text{for every}~j=3,\cdots,l. \end{align} It means we have $0 \leq \beta_1 \leq 2\sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{\pi}{l} \right)}} \right)}$, for fixed $l \geq 4$. The sum of all central angles of the inscribed circle i.e. $\beta_i$'s and $\beta_j$'s should be equal to $2\pi$, once complete rotation. That is, the following conditions should be fulfilled for every $\{\mathrm{A}_i;\mathrm{B}_j \}$ solutions of (\ref{Diophantine1})-(\ref{Diophantine4}) \begin{equation}\label{central_angles} \sum_{i=1}^{l-1}i\mathrm{A}_i \beta_i + \sum_{j=3}^{l}j\mathrm{B}_j \beta_j = 2\pi \end{equation} By substituting $\beta$'s from (\ref{beta_i}) and (\ref{beta_j}) we have a nice relation as follows \begin{align} \sum_{i=1}^{l-1} &i\mathrm{A}_i 2\sin^{-1}\left( 2 \cos{\left( \frac{1}{i} \frac{\pi}{3} \right)} \sin{\left( \frac{\beta_1}{2} \right)} \right) \nonumber \\ &+ \sum_{j=3}^{l}j\mathrm{B}_j 2\sin^{-1}\left( 2 \cos{\left( \frac{\pi}{j} \right)} \sin{\left( \frac{\beta_1}{2} \right)} \right) = 2\pi. \end{align} Note that based on (\ref{beta_11}) and (\ref{beta_12}), $\beta_1$ is defined on \begin{equation*} 0 \leq \beta_1 \leq 2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{l-1} \frac{\pi}{3} \right)}} \right)} \end{equation*} In this last equation, we need to find $\beta_1 $ only to determine the corresponding inradius $x$ in each Diophantine solution. For convenience, we write $\beta_1$ as $\beta$, and the upper bound $2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{l-1} \frac{\pi}{3} \right)}} \right)}=:K_l$, for fixed $l \geq 4$. Finally, we formulate our problem concretely as follows: \begin{lemma}\label{Essential_Lemma} In each tree surface graphs of $G=[3,3,3,\cdots,3]$ of $l \geq 4$ rotational centers of order 3 there is a Diophantine system (\ref{Diophantine1})-(\ref{Diophantine4}), its solution $\{ \mathrm{A}_i; \mathrm{B}_j \}$ $i=1,\cdots l-1$, $j=3, \cdots l$; and the radius of inscribed circle $x$ is obtained by \begin{equation} \cosh{x}=\frac{1}{2 \sin{\left( \frac{\beta}{2} \right)}} \end{equation} where $\beta$ is the root of Equation \begin{align}\label{h_function} \sum_{i=1}^{l-1}&i\mathrm{A}_i 2\sin^{-1}\left( 2 \cos{\left( \frac{1}{i} \frac{\pi}{3} \right)} \sin{\left( \frac{\beta}{2} \right)} \right) \nonumber \\ &+ \sum_{j=3}^{l}j\mathrm{B}_j 2\sin^{-1}\left( 2 \cos{\left( \frac{\pi}{j} \right)} \sin{\left( \frac{\beta}{2} \right)} \right) - 2\pi=0, \end{align} in the interval $[0, K_l]$, where $K_l=2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{l-1} \frac{\pi}{3} \right)}} \right)}$. 
One could observe that the smaller root $\beta$ obtained, the larger inradius $x$ determined. $\square$ \end{lemma} \subsection{Proof of theorem 3} By observations in Sect.3, we have seen that the theorem holds in $G=[3,3,3,3]$, $l=4$. Hence, it is sufficient to prove the remaining cases i.e $l\geq 5$.\\ Firstly, we denote the previous function $h$ in (\ref{h_function}), in Lemma \ref{Essential_Lemma} as follows: For every fixed solution $\{ \mathrm{A}_i, \mathrm{B}_j \}, i=1 \cdots l-1, j=3, \cdots l$ of Diophantine system (\ref{Diophantine1})-(\ref{Diophantine4}), we define a function, extended at the endpoint of its interval \begin{align*} h:\left[0, K_l \right] \longrightarrow \mathbb{R},~\text{where}~ K_l=2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{l-1} \frac{\pi}{3} \right)}} \right)}, \text{and}~ h~ \text{is defined by} \end{align*} \begin{align} h(\beta)=\sum_{i=1}^{l-1}&i\mathrm{A}_i 2\sin^{-1}\left( 2 \cos{\left( \frac{1}{i} \frac{\pi}{3} \right)} \sin{\left( \frac{\beta}{2} \right)} \right) \nonumber \\ &+ \sum_{j=3}^{l}j\mathrm{B}_j 2\sin^{-1}\left( 2 \cos{\left( \frac{\pi}{j} \right)} \sin{\left( \frac{\beta}{2} \right)} \right) - 2\pi. \end{align} Observe that, $ K_l=2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{1}{l-1} \frac{\pi}{3} \right)}} \right)} < 2 \sin^{-1}{\left( \frac{1}{2\cos{\left( \frac{\pi}{3} \right)}} \right)}=\pi$. Hence, $[0,K_l] \subset [0,\pi]$, in particular $\frac{\beta}{2} \in [0,\frac{K_l}{2}] \subset [0, \frac{\pi}{2}]$. Note that $h$ is a strictly increasing function in $[0, K_l]$, since $h$ appears as a linear combination of composition terms of $\sin^{-1}$ and $\sin$, where $\sin{(\frac{\beta}{2})}$ increasing on $[0, K_l] \subset [0,\frac{\pi}{2}]$, and also $\sin^{-1}$ is increasing on $[0,1]$. As Lemma 3 stated, once we solve $h(\beta)=0$ for $\beta$, then the inradius $x$ can be simply computed. In this setting, we want to minimize the root $\beta$. Remark: The existence of the root $\beta$ in $[0, K_l]$ is guaranteed by the continuity of $h$. In fact, $h(0)=-2\pi<0$, and also, \begin{align*} h(K_l)&=\sum_{i=1}^{l-1}i\mathrm{A}_i 2\sin^{-1}\left( 2 \cos{\left( \frac{1}{i} \frac{\pi}{3} \right)} \sin{\left( \frac{K_l}{2} \right)} \right)\\ & + \sum_{j=3}^{l}j\mathrm{B}_j 2\sin^{-1}\left( 2 \cos{\left( \frac{\pi}{j} \right)} \sin{\left( \frac{K_l}{2} \right)} \right) - 2\pi\\ &\geq \sum_{i=1}^{l-1}i\mathrm{A}_i 2\sin^{-1}\left( {\cos{(\frac{\pi}{3})}} \right) + \sum_{j=3}^{l}j\mathrm{B}_j 2\sin^{-1}\left( {\cos{(\frac{\pi}{3})}} \right) - 2\pi\\ &=\frac{\pi}{3}\left(\sum_{i=1}^{l-1}i\mathrm{A}_i+\sum_{j=3}^{l}j\mathrm{B}_j \right)-2\pi=\frac{\pi}{3} \left( 2(l-1+w)\right)-2\pi \geq 0 \end{align*} then by the intermediate value theorem, we may validate this claim. Moreover, since $h$ is strictly increasing in $[0,K_l]$, the root is unique in that interval. Our claim is that the maximum inscribed circle radius $r$ is realized by the solution to Diophantine system whose number of additional points are maximum $w=l-2$. In this situation, the Diophantine system has exactly a unique solution i.e $\mathrm{A}_1=l$, $A_i=0$ for $i=2, \cdots, l-1$ and $B_3=l-2$, $B_j=0$ for $j=4, \cdots, l$. The corresponding function $h$ is $ h_{l-2}(\beta)=(l+3(l-2))\cdot2 \sin^{-1}(\sin{(\frac{\beta}{2})})-2\pi$ $=(4l-6)\beta-2\pi$. 
Clearly, the root of $h_{l-2}(\beta)=0$ is $\beta_{l-2}=\frac{2\pi}{4l-6}$ that gives the corresponding radius $r_{l-2}=\cosh^{-1}\left( \frac{1}{2 \sin{(\frac{\beta_{l-2}}{2})}}\right)=\cosh^{-1}\left( \frac{1}{2 \sin{(\frac{\pi}{4l-6})}}\right)$, as expressed in the theorem.\\~\\ \textbf{Suppose indirectly} that there exists a solution to Diophantine system with less additional points $w$, $0\leq w < l-2$, say $\{ \mathrm{A}_i^{*},\mathrm{B}_j^{*} \}$ and the corresponding equation $h^*(\beta)=0$, such that it has a root $\beta^{*}$ whose resulting radius $r^{*}$ is larger than $r_{l-2}$, $r^* > r_{l-2}$. It is equivalent to $\beta^{*} < \beta_{l-2}$. Since $h^*$ is strictly increasing, $0=h^*(\beta^*)<h^*(\beta_{l-2})$, i.e. $h^*(\beta_{l-2})>0$. Meanwhile, we have $h_{l-2}(\beta_{l-2})=0$ already. It would lead to the following inequality \begin{align*} h^*(\beta_{l-2}) > 0, ~~\text{or explicitely} \end{align*} \begin{align}\label{48'''} \sum_{i=1}^{l-1}&i \mathrm{A}_i^* 2\sin^{-1}\left( 2 \cos{\left( \frac{1}{i} \frac{\pi}{3} \right) } \sin{\left( \frac{\beta_{l-2}}{2} \right)}\right)\\&+\sum_{j=3}^{l}j \mathrm{B}_j^* 2\sin^{-1}\left( 2 \cos{\left( \frac{\pi}{j} \right) } \sin{\left( \frac{\beta_{l-2}}{2} \right)}\right) \nonumber -2\pi >0, \end{align} as indirect assumption.\\~\\ Substitute $\beta_{l-2}=\frac{2\pi}{4l-6}$ and apply from Appendix the Jensen-type inequalities: \begin{equation} \sin^{-1}\left[ 2 \sin{\left( \frac{\pi}{4l-6} \right)} \cos{\left( \frac{\pi}{3i}\right)}\right] < \frac{3}{\pi}\sin^{-1}\left[ 2 \sin{\left(\frac{\pi}{4l-6} \right)}\right]\left( \frac{\pi}{2}-\frac{\pi}{3i} \right), \end{equation} and \begin{equation} \sin^{-1}\left[ 2 \sin{\left( \frac{\pi}{4l-6} \right)} \cos{\left( \frac{\pi}{j}\right)}\right] < \frac{3}{\pi}\sin^{-1}\left[ 2 \sin{\left(\frac{\pi}{4l-6} \right)}\right]\left( \frac{\pi}{2}-\frac{\pi}{j} \right), \end{equation} for $i=1,\cdots,l-1$ and $j=3,\cdots,l$. Therefore, in (\ref{48'''}) the sums will be much simpler, we can refer to the equations (\ref{Diophantine1}-\ref{Diophantine4}) in Diophantine system for $(\mathrm{A}_i^*, \mathrm{B}_j^*)$ and we obtain the following \begin{align*} &\frac{3}{\pi} \sin^{-1} \left[ 2 \sin{\left( \frac{\pi}{4l-6}\right)} \right] \left\{ \frac{\pi}{2} \left[ 2(l+w-1) \right]-\frac{\pi}{3}l-\pi w \right\} -2\pi >0,\\ &\text{i.e.}~ \sin^{-1}\left[ 2 \sin{\left( \frac{\pi}{4l-6} \right)} \right] \left( 2l-3\right)>2\pi,~\text{then}~2\sin{\left( \frac{\pi}{4l-6} \right)}>\sin{\left(4\cdot \frac{\pi}{4l-6} \right)}. \end{align*} Since $l \geq 5$, thus $4l-6 \geq 14$, we get a contradiction in interval $(0, \frac{\pi}{14}]$ by the easy analysis of sine function. $\square$ \subsubsection{Remark on the area of fundamental domain $\mathcal{F}_G$} We have just found the optimum radius of the inscribed circle of $G$. This optimal radius provides the optimum density of the circle into its Fundamental domain $\mathcal{F}_G$ immediately. Since the following observation shows that the area of $\mathcal{F}_G$ is constant for every Diophantine solution. The area of the fundamental domain $\mathcal{F}_G$ is proportional to the angle defect $\bigtriangleup$. In fact, $\mathcal{F}_G$ can be dissected into a number of right triangles, as illustrated in Fig \ref{Fundamentaldomain5}, where the area of each right triangle could be simply computed through its defect angle, see Fig.\ref{Trigono1}, \ref{Trigono2}. Let $\bigtriangleup_i$ be the defect angle of the right triangle about rotational center $R_i$, (Fig. 
\ref{Trigono1}). Similarly, let $\bigtriangleup_j$ be the defect angle of the right triangle about the rotational point (Fig. \ref{Trigono2}). The dissection of $\mathcal{F}_G$ gives the result \begin{equation} \text{Area}~\mathcal{F}_G=\sum_{i=1}^{l-1} i~\mathrm{A}_i~2~\bigtriangleup_i + \sum_{j=3}^{l} j~\mathrm{B}_j~2~\bigtriangleup_j, \end{equation} where $\{\mathrm{A}_i ; \mathrm{B}_j \}$ is a Diophantine solution of (\ref{Diophantine1})-(\ref{Diophantine4}).\\ Now, by considering Fig. \ref{Trigono1}-\ref{Trigono2}, the defect angles are exactly $\bigtriangleup_i=\pi-\frac{\pi}{2}-\frac{\alpha_i}{2}-\frac{\beta_i}{2}$ and $\bigtriangleup_j=\pi-\frac{\pi}{2}-\frac{\alpha_j}{2}-\frac{\beta_j}{2}$. Also, by the Diophantine conditions in (\ref{Diophantine1})-(\ref{Diophantine4}) together with the central angle condition in (\ref{central_angles}), we can conclude \begin{align*} \text{Area}~\mathcal{F}_G&=\sum_{i=1}^{l-1} i~\mathrm{A}_i~2~\bigtriangleup_i + \sum_{j=3}^{l} j~\mathrm{B}_j~2~\bigtriangleup_j\\ & =2\sum_{i=1}^{l-1} i~\mathrm{A}_i~\left( \frac{\pi}{2}-\frac{\alpha_i}{2}-\frac{\beta_i}{2} \right)+2\sum_{j=3}^{l} j~\mathrm{B}_j~\left( \frac{\pi}{2}-\frac{\alpha_j}{2}-\frac{\beta_j}{2} \right)\\ &=\pi (2(l+w-1))-\frac{2}{3}\pi l-2 \pi w -2 \pi=\left( \frac{4}{3}l-4 \right)\pi.~~ \square \end{align*} \section{Appendix} \begin{lemma} The upper bounds of ~$\sin^{-1}\left(2\sin{\left( \frac{\pi}{4l-6} \right)} \cos{\left( \frac{\pi}{3i} \right)} \right)$ \\and~ $\sin^{-1}\left(2\sin{\left( \frac{\pi}{4l-6} \right)} \cos{\left( \frac{\pi}{j} \right)} \right)$ are given by \begin{align*} &\sin^{-1}\left( 2 \sin{\left( \frac{\pi}{4l-6} \right)} \cos{\left( \frac{\pi}{3i}\right)}\right) < \frac{3}{\pi}\sin^{-1}\left( 2 \sin{\left(\frac{\pi}{4l-6} \right)}\right)\left( \frac{\pi}{2}-\frac{\pi}{3i} \right),\\ &\text{and}\\ &\sin^{-1}\left( 2 \sin{\left( \frac{\pi}{4l-6} \right)} \cos{\left( \frac{\pi}{j}\right)}\right) < \frac{3}{\pi}\sin^{-1}\left( 2 \sin{\left(\frac{\pi}{4l-6} \right)}\right)\left( \frac{\pi}{2}-\frac{\pi}{j} \right), \end{align*} for all $i=1, \cdots, l-1$, $j=3, \cdots, l$ and $l \geq 5$. \end{lemma} \begin{proof} We will provide the proof for the first inequality; the second inequality can be proven in a similar way. Consider \begin{align*} 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)} < 2\sin{\left(\frac{\pi}{4l-6} \right)} < \cos{\left( \frac{\pi}{3i} \right)},~ \text{for all}~ i=1,\cdots, l-1,~ l \geq 5. \end{align*} Since $\sin^{-1}$ is increasing on $(0,1]$, we have \begin{equation*} \sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right) < \sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right) < \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right). \end{equation*} Since $\sin^{-1}$ is concave up, the slope of its secant line through the origin $(0,0)$ and $(x, \sin^{-1}(x))$ is increasing, that is, $\frac{\sin^{-1}(x_1)}{x_1} < \frac{\sin^{-1}(x_2)}{x_2}~\text{if}~x_1<x_2$. Therefore, \begin{equation*} \frac{\sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right)}{2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}} < \frac{\sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right) }{2\sin{\left(\frac{\pi}{4l-6} \right)}} < \frac{\sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right)}{\cos{\left( \frac{\pi}{3i} \right)}}. 
\end{equation*} Now, we multiply all (positive) sides of the inequality by the positive quantity $2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}$ to obtain \begin{align*} \sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right) &<\sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right) \cdot \cos{\left( \frac{\pi}{3i} \right)}\\ &< 2\sin{\left(\frac{\pi}{4l-6} \right)} \cdot \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right) . \end{align*} Hence, we have \begin{align*} \frac{\sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right)}{2\sin{\left(\frac{\pi}{4l-6} \right)} \cdot \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right)} < \frac{\sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right) \cdot \cos{\left( \frac{\pi}{3i} \right)}}{2\sin{\left(\frac{\pi}{4l-6} \right)} \cdot \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right)}. \end{align*} Note that $\cos{\left( \frac{\pi}{3i} \right)} \geq \cos{\left( \frac{\pi}{3} \right)}$ for all $i=1, \cdots, l-1$. Since $\sin^{-1}$ is concave up, we have $\frac{\sin^{-1}\left( \cos{\left( \frac{\pi}{3i}\right)} \right)}{\cos{\left( \frac{\pi}{3i}\right)}} \geq \frac{\sin^{-1}\left( \cos{\left( \frac{\pi}{3} \right)} \right)}{\cos{\left( \frac{\pi}{3} \right)}}=\frac{\pi}{3}$. Therefore, $\frac{\cos{\left( \frac{\pi}{3i}\right)}}{\sin^{-1}\left( \cos{\left( \frac{\pi}{3i}\right)} \right)} \leq \frac{3}{\pi}$. Then we have \begin{align*} &\frac{\sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right)}{2\sin{\left(\frac{\pi}{4l-6} \right)} \cdot \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right)} < \frac{3}{\pi} \frac{\sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right)}{2\sin{\left(\frac{\pi}{4l-6} \right)}}.\\ &\text{Simplifying, we get}\\ &\sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right) < \frac{3}{\pi} \sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right) \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right),\\ &\text{and since}~ \sin^{-1}\left( \cos{\left( \frac{\pi}{3i} \right)} \right)=\left( \frac{\pi}{2}-\frac{\pi}{3i} \right), \text{we conclude}\\ &\sin^{-1}\left( 2\sin{\left(\frac{\pi}{4l-6} \right)}\cos{\left( \frac{\pi}{3i} \right)}\right) <\frac{3}{\pi} \sin^{-1}\left(2\sin{\left(\frac{\pi}{4l-6} \right)} \right) \left( \frac{\pi}{2}-\frac{\pi}{3i} \right), \end{align*} as claimed. \end{proof} \section*{Acknowledgement} I am very grateful to Prof. Emil Molnar for many meaningful mathematical discussions. I am also thankful to Dr. Jen\H{o} Szirmai, who guided my doctoral studies in Budapest.
\section{Introduction} The excitation spectrum of the nucleon in the energy range up to 2.5 GeV is presently investigated with electromagnetically induced reactions at JLab, ELSA and Mainz. Major progress is expected from the experimental determination of polarization observables and exclusive measurements. New theoretical methods appear to be necessary for the analysis of the incoming data, as traditional partial wave analyses have to deal with a large number of parameters. For instance, the methods developed at EBAC at JLab are based on theories of non-resonant meson dynamics which, taken together with the resonant contributions, describe a variety of reactions. However, at large energies, two-body reactions show regular features which are very likely not due to individual resonances. For energies above 2~GeV, the angular distributions are strongly forward peaked and show a smooth energy dependence. Moreover, at backward angles there is another regularity. In that angular region many reactions show a rise of the cross sections, with a magnitude that depends smoothly on the energy. Most of our knowledge about resonances with large spin has been obtained from the study of pion scattering near 180$^0$. A summary of models for high energy meson-nucleon backward scattering is given in refs.~\cite{Barger67,Barger68,Berger71,Gregorich71,Storrow75}. These models are based on Regge phenomenology using parameters determined by a systematic analysis of the backward differential cross sections available at different energies before 1972. None of these models can simultaneously describe the data on differential cross sections and polarizations available at different energies. Experimental results at certain fixed energies were reproduced by introducing non-Regge terms \cite{Birsa77} in addition to the standard Regge amplitudes. There are data for $\pi N$ backward scattering published after 1974 \cite{Baglin75,Jacholkowski77,Ghidini82,Baker83,Armstrong87}, but, unfortunately, most of these data have never been analyzed in the framework of the previous Regge models. A major goal of this work is to analyze differential cross sections and polarization data of $\pi N$ backward scattering for invariant collision energy $\sqrt{s} \geq $ 3 GeV. We include experimental results available for the reactions $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$. Regge phenomenology is applied in order to fix the reaction amplitude given by the contribution from four exchange trajectories, namely $N_\alpha$, $N_\gamma$, $\Delta_\delta$ and $\Delta_\beta$. These trajectories are parameterized by real linear functions of the squared four-momentum transfer $u$. Using $505$ data points, we obtain a $\chi^2$ per data point of $1.84$. Let us briefly recall the status of the published Regge phenomenology for backward pion-nucleon scattering. The differential cross section data for the reactions $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$, and $\pi^-p \to \pi^0n$ can be described by three baryon trajectories, called $N_\alpha$, $N_\gamma$ and $\Delta_\delta$, which start with the nucleon, the $D_{13}(1520)$, and the $\Delta_{33}(1232)$, respectively. The $N_\alpha$ and $\Delta_\delta$ trajectories are the leading baryon trajectories for the $u$-channel isospin $I_u=1/2$ and $I_u=3/2$ reactions. The dip structure at $u \approx -$0.15~GeV$^2$ in the backward $\pi^+p \to \pi^+p$ differential cross section is due to a zero in the $N_\alpha$ trajectory. 
A third trajectory $N_\gamma$ is needed because of the differences in the dip structure of the $\pi^+p \to \pi^+p$ and $\pi^-p \to \pi^0n$ differential cross sections. The first polarization data were obtained after 1971 \cite{Aoi71,Dick72,Dick73,Birsa76}. None of the Regge models available at that time could describe those data. Furthermore, for the $\pi^-p \to \pi^-p$ reaction the $N_\alpha$ and $N_\gamma$ exchanges do not contribute. Thus, within the framework of pole-exchange Regge models there is no relative phase between the spin flip and spin non-flip amplitudes from the $\Delta_\delta$ trajectory alone. As a consequence, the available models predicted zero polarization for the $\pi^-p \to \pi^-p$ reaction, in contradiction to the polarization data. Several attempts were made in order to resolve the conflict between polarization data and the models. First, let us mention the Regge model of ref. \cite{Gregorich71} which originally included only the $N_\alpha$ and $\Delta_\delta$ trajectories. The polarization predicted by this model for the $\pi^-p \to \pi^-p$ reaction was not zero because the trajectories were considered as complex functions parameterized in a quite sophisticated way. Later on it was found that the predictions of this model disagreed with the polarization data and the model was substantially modified \cite{Park74} through additional inclusion of the $N_\gamma$ trajectory and by using sophisticated parametrizations of the vertex functions. After that, the differential cross section and polarization at two beam momenta, $5.91$ and 6~GeV, were well fitted. It was unclear whether this modified model is still able to reproduce data on differential cross sections available at other energies that were analyzed originally \cite{Gregorich71}, because the systematic analysis was not repeated. In the Regge analysis of ref. \cite{Mir81} the $N_\alpha$ and $N_\gamma$ trajectories were taken as real functions. But, following refs. \cite{Storrow75,Storrow73}, the $\Delta_\delta$ contribution was parameterized differently with regard to the real and imaginary part of the amplitude. That allows one to obtain relative phases between the spin flip and non-flip amplitudes as required by the non-zero polarization for the $\pi^-p \to \pi^-p$ process. It was shown that this model reproduces the data on differential cross sections at pion momenta above 23 GeV as well as the $\pi^+p \to \pi^+p$ polarization \cite{Aoi71}. However, these calculations could not describe the polarization data \cite{Dick73} in the $\pi^-p \to \pi^-p$ reaction. Although a reasonable description of the data was achieved, except for the $\pi^-p$ polarization, one should note that the modification of the $\Delta_\delta$ amplitude employed is not conventional in Regge phenomenology. Furthermore, it was not shown whether this model can reproduce the data at momenta below 23~GeV. Another effort to describe the data on differential cross sections and polarizations was presented in ref. \cite{Birsa77}. Only data at the pion momentum of 3.5~GeV \cite{Birsa76,Banaigs68,Banaigs69,Bradamante73,DeMarzo75} and 6~GeV \cite{Dick72,Dick73,Owen69,Boright70} were included in the analysis performed with two different Regge models. The first one takes into account the $N_\alpha$, $N_\gamma$ and the modified \cite{Storrow75,Storrow73} $\Delta_\delta$ contributions discussed above. 
The second model \cite{Donnachie74} accounts for amplitudes given by the standard $N_\alpha$, $N_\gamma$ and $\Delta_\delta$ Regge pole terms, but includes in addition a coherent background amplitude which is ascribed to quark rearrangement processes \cite{Gunion72}. It was shown that both models reproduce the data well when fitted separately to the experimental results at momenta of 3.5 GeV and 6 GeV. But it was not possible to obtain a reasonable description of the experiments within a simultaneous fit of the 3.5 GeV and 6 GeV data. Therefore, it was speculated that the polarization data for the $\pi^-p \to \pi^-p$ reaction might indicate that some of the amplitudes have an energy dependence that differs from the power law in invariant collision energy, which is typical for Regge phenomenology. However, any definite conclusion requires more systematic and comprehensive theoretical studies of backward pion scattering, which had not been carried out. The present study shows that the addition of a second Delta trajectory leads to an economical description of backward pion-nucleon scattering for large energies in terms of a simple Regge phenomenology. The amplitudes obtained at high energies allow an extrapolation to lower energies. We inspect how rapidly the Regge phenomenology starts to deviate from the data at invariant collision energies $2.4 \le \sqrt{s} \le 3$ GeV. The analysis concentrates on available polarization data that are expected to be sensitive to possible contributions of high mass resonances. The indicated energy range is chosen for two principal reasons. First, the energy dependence of the differential cross section for pion-nucleon scattering at 180$^0$ shows some structures at these energies. Second, many earlier analyses \cite{Hoehler83,Hendry78,Cutkosky79,Koch80} found evidence for excited baryons with masses within this energy range. We study in detail the data available for $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ scattering at 180$^0$ and calculate the confidence level for the discrepancy between the data and our results. That allows us to estimate whether the oscillations of the differential cross section around the continuation of the Regge result are of systematic or rather of statistical nature. Finally, we investigate the relation between the baryon trajectories fixed by our analysis in the scattering region and the baryon spectrum. Indeed the ordering of the hadronic states according to the Regge trajectories in the Chew-Frautschi plot is one of the most remarkable features of the Regge phenomenology. However, the Regge classification of baryons in many studies \cite{Storrow72,Mir83,Glozman02,Goity07} was done by using known or predicted states and not scattering data. Here we address the question of whether the trajectories obtained in the analysis of backward pion scattering are the same as those given by the known baryon spectrum. The paper is organized as follows. In sect. 2, we introduce our formalism. Sect. 3 provides a comparison of the results of our calculations with data on differential cross sections and polarizations at invariant collision energies above 3 GeV. In sect. 4 the amplitude is extrapolated to lower energies. The paper ends with a Summary. An appendix summarizes the available world data set on backward pion-nucleon scattering. 
\vfill \pagebreak \section{Formalism} The differential cross section for backward pion-nucleon scattering reads \begin{eqnarray} \frac{d\sigma}{du}=\frac{|{\cal M}^{++}|^2+|{\cal M}^{+-}|^2}{64\pi sq^2}. \end{eqnarray} Here, the $s$-channel helicity non-flip and flip amplitudes are called ${\cal M}^{++}$ and ${\cal M}^{+-}$, respectively, while $q$ denotes the pion momentum in the $s$-channel center-of-mass (CM) system; $s$, $t$ and $u$ denote the Mandelstam variables. The polarization asymmetry is given by \begin{eqnarray} P=\frac{2\,{\rm Im}\!\left[{\cal M}^{++}{{\cal M}^{+-}}^\ast\right]}{{|{\cal M}^{++}|}^2 + {|{\cal M}^{+-}|}^2}. \end{eqnarray} Taking the amplitudes with specified $u$-channel isospin, we can write the $\pi N$ backward scattering amplitudes as \begin{eqnarray} {\cal M}^{\pi^+p\,\to\, \pi^+ p}&=&\frac{2}{3}{\cal M}^N+\frac{1}{3}{\cal M}^\Delta, \\ {\cal M}^{\pi^-p\,\to\, \pi^- p}&=&{\cal M}^\Delta, \\ {\cal M}^{\pi^-p\,\to\, \pi^0 n}&=&\frac{\sqrt{2}}{3}{\cal M}^N-\frac{\sqrt{2}}{3}{\cal M}^\Delta, \end{eqnarray} where the $N$ and $\Delta$ superscripts denote $u$-channel isospin $1/2$ and $3/2$ contributions, respectively. The $s$-channel helicity amplitudes are expressed in terms of the invariant amplitudes $A$ and $B$ as \begin{eqnarray} {\cal M}^{++}&=&2\left[m_NA+\left(E_N\sqrt{s}-m_N^2\right)B\right]{\rm cos}(\theta/2), \\ {\cal M}^{+-}&=&2\left[E_NA+m_N\left(\sqrt{s}-E_N\right)B\right]{\rm sin}(\theta/2), \label{flip} \end{eqnarray} where $E_N$ refers to the energy of the nucleon in the $s$-channel CM system, and $\theta$ is the $s$-channel scattering angle. We parameterize the invariant amplitudes $A$ and $B$ \cite{Hoehler83} by \begin{eqnarray} A &=& \sum_i \beta_i^A(u) \frac{ \zeta_i(u)}{\Gamma\!\left(\alpha_i-1/2\right)} \left(\frac{s}{s_0}\right)^{\alpha_i-1/2}, \\ B &=& \sum_i \beta_i^B(u) \frac{ \zeta_i(u)}{\Gamma\!\left(\alpha_i-1/2\right)} \left(\frac{s}{s_0}\right)^{\alpha_i-1/2}, \label{amplitude} \end{eqnarray} where $s_0 = $ 1 GeV$^2$ is a scaling factor and $\zeta_i(u)$ is the Regge propagator, \begin{eqnarray} \zeta_i(u)= \frac{1+{\cal S}_i \, {\rm exp}\!\left[-i\pi\!\left(\alpha_i(u)-1/2\right)\right]}{{\rm sin} [\pi\!\left(\alpha_i(u)-1/2\right)]}, \label{signature} \end{eqnarray} with ${\cal S}_i$ denoting the signature of the trajectory. The $i$-th baryon Regge trajectory, $\alpha_i$, is parameterized as a linear function of $u$, \begin{eqnarray} \alpha_i(u) = {\alpha_0}_i + \alpha^\prime \,u~, \end{eqnarray} where $i$ labels the trajectories $N_\alpha$, $N_\gamma$, $\Delta_\delta$ and $\Delta_\beta$. The slope $\alpha^\prime$ and the intercept $\alpha_0$ are determined by a fit to the data. As will be discussed later, we take the same slope parameter for all four trajectories. \begin{table}[t] \begin{center} \caption{\label{notation} The leading baryon trajectories. The last column shows the parity partners that have the same signature but opposite parity.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|c|c|c|} \hline Trajectory & Isospin & Parity & Signature & Partner \\ \hline $N_\alpha$ & $1/2$ & $+$ & $+$ & $N_\beta$ \\ $N_\gamma$ & $1/2$ & $-$ & $-$ & $N_\delta$ \\ $\Delta_\delta$ & $3/2$ & $+$ & $-$ & $\Delta_\gamma$ \\ $\Delta_\beta$ & $3/2$ & $-$ & $+$ & $\Delta_\alpha$ \\ \hline \end{tabular} \end{center} \end{table} The baryon trajectories used in the present work are given in table \ref{notation}. The signature of a trajectory is defined as ${\cal S} = (-1)^{J-1/2}$, where $J$ is the baryon spin. 
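For concreteness, the parameterization above can be assembled numerically as follows. The sketch below (in Python) is only our schematic implementation of the amplitudes and of standard CM kinematics, not the fitting code of this work; it anticipates the linear residue functions $\beta^A(u)=a+b\,u$, $\beta^B(u)=c+d\,u$ introduced just below, and the example parameter set quotes the $\Delta_\delta$ entry of table~\ref{para-u}.
\begin{verbatim}
import math, cmath

M_N, M_PI, S0 = 0.938, 0.140, 1.0          # GeV; s0 = 1 GeV^2 as in the text

def alpha_traj(u, alpha0, slope):           # linear trajectory alpha(u) = alpha0 + alpha' u
    return alpha0 + slope * u

def zeta(alpha, signature):                 # Regge propagator with signature S = +/-1
    x = alpha - 0.5
    return (1.0 + signature * cmath.exp(-1j * math.pi * x)) / math.sin(math.pi * x)

def invariant_amplitudes(s, u, trajectories):
    # A(s,u), B(s,u) summed over trajectories; each entry carries
    # alpha0, slope, S and the linear residues beta_A = a + b*u, beta_B = c + d*u
    A = B = 0.0 + 0.0j
    for t in trajectories:
        al = alpha_traj(u, t["alpha0"], t["slope"])
        common = zeta(al, t["S"]) / math.gamma(al - 0.5) * (s / S0) ** (al - 0.5)
        A += (t["a"] + t["b"] * u) * common
        B += (t["c"] + t["d"] * u) * common
    return A, B

def observables(s, u, trajectories):
    # dsigma/du and polarization P from the helicity amplitudes M++ and M+-
    sqs = math.sqrt(s)
    E_N = (s + M_N**2 - M_PI**2) / (2.0 * sqs)
    E_pi = (s + M_PI**2 - M_N**2) / (2.0 * sqs)
    q2 = E_N**2 - M_N**2                                    # CM momentum squared
    cos_th = ((E_pi - E_N)**2 - u) / (2.0 * q2) - 1.0       # u <-> scattering angle
    c2, s2 = math.sqrt(0.5 * (1.0 + cos_th)), math.sqrt(0.5 * (1.0 - cos_th))
    A, B = invariant_amplitudes(s, u, trajectories)
    Mpp = 2.0 * (M_N * A + (E_N * sqs - M_N**2) * B) * c2
    Mpm = 2.0 * (E_N * A + M_N * (sqs - E_N) * B) * s2
    norm = abs(Mpp)**2 + abs(Mpm)**2
    dsdu = norm / (64.0 * math.pi * s * q2)
    P = 2.0 * (Mpp * Mpm.conjugate()).imag / norm
    return dsdu, P

# Example: pi-p -> pi-p receives only the Delta trajectories; with a single
# trajectory A and B share one overall phase, so P = 0, as noted in the text.
# Parameters quote the Delta_delta row of the fit table given later.
delta_delta = dict(alpha0=0.03, slope=0.908, S=-1.0,
                   a=-75.15, b=-138.75, c=64.16, d=86.77)
print(observables(12.2, -0.3, [delta_delta]))
\end{verbatim}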
The classification of the Regge trajectories is given in terms of the signature ${\cal S} = {\pm}$ 1 and the parity $P = {\pm}$ 1. Thus one should consider four trajectories for the nucleon as well as for the Delta-isobar states. Furthermore, the parity partners of the discussed trajectories are defined \cite{MacDowell,Gribov} in the Regge formalism as trajectories with the same signature but opposite parity. They are also indicated in table \ref{notation}. The residue functions $\beta^A(u)$ and $\beta^B(u)$ for each trajectory are parameterized by \begin{eqnarray} \beta^A(u)&=&a+b\,u~, \\ \beta^B(u)&=&c+d\,u~. \end{eqnarray} \section{Results for high energies} \begin{figure}[b] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_pi+p_c.eps}} \caption{ \label{dsdu_pi+p_c} The differential cross section for $\pi^+p \to \pi^+p$ backward scattering as a function of the $u$-channel four-momentum transfer squared shown for different invariant collision energies, $\sqrt{s}$, indicated in the legend. The references to the data are given in tab. \ref{pi+p}. The solid lines are the results of our model calculation. Both data and calculations were scaled by the indicated factors.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_pi+p_a.eps}} \caption{ \label{dsdu_pi+p_a} The differential cross section for $\pi^+p \to \pi^+p$ backward scattering as a function of the $u$-channel four-momentum transfer squared shown for different invariant collision energies, $\sqrt{s}$, indicated in the legend. The references to the data are given in tab. \ref{pi+p}. The solid lines are the results of our model calculation. Both data and calculations were scaled by the indicated factors.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_pi+p_b.eps}} \caption{ \label{dsdu_pi+p_b} The differential cross section for $\pi^+p \to \pi^+p$ backward scattering as a function of the $u$-channel four-momentum transfer squared shown for different invariant collision energies indicated in the legend. The references to the data are given in tab. \ref{pi+p}. The solid lines show the results of our model calculation. Both data and calculations were scaled by the indicated factors.} \end{figure} \begin{figure}[t] \centering \resizebox{0.39\textwidth}{!}{\includegraphics{dsdu_pi-p_a.eps}} \caption{ \label{dsdu_pi-p_a} The differential cross section for $\pi^-p \to \pi^-p$ backward scattering as a function of the $u$-channel four-momentum transfer squared shown for different invariant collision energies indicated in the legend. The references to the data are given in tab. \ref{pi-p}. The solid lines are the results of our model calculation. Both data and calculations were scaled by the indicated factors.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_pi-p_b.eps}} \caption{ \label{dsdu_pi-p_b} The differential cross section for $\pi^-p \to \pi^-p$ backward scattering as a function of the $u$-channel four-momentum transfer squared shown for different invariant collision energies indicated in the legend. The references to the data are given in tab. \ref{pi-p}. The solid lines are the results of our model calculation. 
Both data and calculations were scaled by the indicated factors.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_cex.eps}} \caption{ \label{dsdu_cex} The differential cross section for $\pi^-p \to \pi^0 n$ backward scattering as a function of the $u$-channel four-momentum transfer squared shown for different invariant collision energies indicated in the legend. The references to the data are given in tab. \ref{cex}. The solid lines are the results of our model calculation. Both data and calculations were scaled by the indicated factors.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{pol.eps}} \caption{ \label{pol_pi+p_pi-p} The polarization for $\pi^+p \to \pi^+p$ and $\pi^-p \to \pi^-p$ backward scattering at the pion beam momentum of 6 GeV ($\sqrt{s}=3.49$ GeV) as a function of the $u$-channel four-momentum transfer squared. The references to the data are specified in table~\ref{pol}. The solid lines are the results of our model calculation. } \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_180.eps} } \caption{ \label{dsdu180_pi+p_pi-p_cex} Differential cross section for $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ reaction at the scattering angle $\theta = 180^\circ$ as a function of the invariant collision energy. The references to the data are shown in table \ref{dsdu180}. The solid lines are the results of our model calculation. } \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_u=0.eps}} \caption{\label{dsduu0_pi+p_pi-p_cex} Differential cross section for $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n $ scattering at the four-momentum transfer squared $u = 0$ as a function of the invariant collision energy. The references to the data are shown in table \ref{dsduu0}. The solid lines show our results. } \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{dsdu_180_1.eps}} \caption{\label{differ}The difference $D$ between the experimental differential cross sections and the Regge calculation at the scattering angle $\theta$=180$^0$ as defined by eq. (\ref{difru}), shown as a function of the invariant collision energy for different reaction channels.} \end{figure} We use almost all data available for the differential cross sections and polarization asymmetries for backward $\pi N$ scattering with invariant collision energy $\sqrt{s} \ge $ 3~GeV, see tables \ref{pi-p}-\ref{dsduu0} in the appendix for a short overview. For the $\pi^-p \to \pi^0n$ reaction (the charge-exchange channel, abbreviated as CEX), the data by Boright {\it et al.} \cite{Boright70} and Schneider {\it et al.} \cite{Schneider69} are known to be inconsistent with the experimental results from DeMarzo {\it et al.} \cite{DeMarzo75} and Chase {\it et al.} \cite{Chase70}. The Regge model analyses \cite{Berger71,Mir81} done previously included the data from refs.~\cite{Boright70,Schneider69}. However, in our study we include the experimental results from refs.~\cite{DeMarzo75,Chase70} since these data are more recent and furthermore for these data the appropriate radiative corrections have been applied \cite{DeMarzo75,Hoehler83}. 
For $\pi^+p \to \pi^+p$ and $\pi^-p \to \pi^-p$ backward scattering, the data from refs.~\cite{Brody66,Bashian74,Ashmore67,Babaev72,Buran76,Babaev77} are not included in our fitting since they are not consistent with the much more recent data indicated in the tables. The problem of discrepancies in the absolute normalization of the differential cross sections measured in different experiments is discussed in details in refs.~\cite{Berger71,Mir81}. In the present analysis we apply the procedure proposed in refs.~\cite{Berger71,Mir81} in order to account for the systematic uncertainties due to the absolute normalization. Note that in table~\ref{pol} we indicate references to the data available at invariant collision energies from 2.35 to 3.49 GeV. However, only experimental polarization data at energies above $\sqrt{s}=$ 3~GeV were used in the global fit. The other data at low energies are compared with the Regge calculation in order to clarify how much they deviate from the expectation based on the reaction amplitudes constructed at high energies. \begin{table}[b] \begin{center} \caption{\label{chi2} Summary of the $\chi^2$ for the differential cross sections and polarization data for $\pi N$ backward scattering. Here ND denotes the number of data points.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|l|c|c|} \hline Observable & ND & $\chi^2\!/$ND \\ \hline $d\sigma/du$ ($\pi^+p$) & 227 & 2.32 \\ $d\sigma/du$ ($\pi^-p$) & 187 & 1.43 \\ $d\sigma/du$ (CEX) & ~59 & 1.94 \\ $P$ ($\pi^+p$) & ~20 & 0.71 \\ $P$ ($\pi^-p$) & ~12 & 0.65 \\ \hline Total & 505 & 1.84 \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[t] \begin{center} \caption{\label{para-u} Parameters of the $N_\alpha$, $N_\gamma$, $\Delta_\delta$ and $\Delta_\beta$ amplitudes obtained in the global fit. Note that the slope $\alpha^\prime$ was taken to be the same for the different amplitudes.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|r|r|r|r|r|} \hline Parameters & $N_\alpha$ & $N_\gamma$ & $\Delta_\delta$ & $\Delta_{\beta}$ \\ \hline $a$ [GeV$^{-1}$] & $-60.68$ & $ 47.22$ & $ -75.15$ & $ 1419.99$ \\ $b$ [GeV$^{-3}$] & $326.52$ & $-215.84$ & $-138.75$ & $ 3052.84$ \\ $c$ [GeV$^{-2}$] & $546.40$ & $-101.11$ & $ 64.16$ & $-192.64$ \\ $d$ [GeV$^{-4}$] & $307.42$ & $-128.04$ & $ 86.77$ & $-695.81$ \\ $\alpha_0$ & $-0.36$ & $ -0.62$ & $ 0.03$ & $ -2.65$ \\ \hline $\alpha'$ [GeV$^{-2}$] & \multicolumn{4}{c|}{0.908} \\ \hline \end{tabular} \end{center} \end{table} Finally, the number of data points (ND) included in the global fit and the obtained $\chi^2$ for various observables are listed in table \ref{chi2}. Here we also indicate the $\chi^2/$ND for different reaction channels. The small $\chi^2$ obtained for the fit to the polarization data originates from large uncertainties in the experimental results. By fitting $505$ data points we get a total $\chi^2=$~1.84 per data point. The 21 free parameters of the model are listed in tab.~\ref{para-u}. Note that the slope $\alpha^\prime$ was taken to be the same for the different baryon trajectories. That ensures that the baryon trajectories are parallel in the plane given by the spin and mass of baryons. Figs.~\ref{dsdu_pi+p_c}-\ref{dsdu_pi+p_b} show experimental results on $\pi^+ p \to \pi^+p$ differential cross sections together with our calculations. Note that the data indicate a dip near $u \simeq -$~0.15~GeV$^2$ where the $N_\alpha$ amplitude passes through zero. 
Indeed, taking into account the parameters listed in table~\ref{para-u} it is clear that at $u \simeq -$~0.15~GeV$^2$ the $N_\alpha$ trajectory becomes $\alpha(u) \simeq -$~1/2 and, therefore, the signature factor given by eq.~(\ref{signature}) is $\zeta_\alpha(u) =$~0. However, the dip in the $\pi^+ p \to \pi^+ p$ differential cross sections is partially filled due to the contributions from other trajectories that have zeros in the signature factors at different values of the four-momentum transfer squared $u$. Also note that at small values of $|u| < $~0.1~GeV$^2$ the data indicate an exponential dependence. At $|u| > $~0.4~GeV$^2$ the differential cross sections show a smooth, almost constant behavior. The dip observed in the $\pi^+ p \to \pi^+ p$ differential cross sections allows us to conclude that the $N_\alpha$-trajectory dominates this reaction channel. In general there is reasonable agreement between the $\pi^+p$ backward scattering data and our Regge calculation. There is, however, a disagreement between our results and the data of ref. \cite{Baker71} at $\sqrt{s}=$~3.265 GeV in the vicinity of the dip, which signals that the Regge approximation starts to break down at low energies. Figs.~\ref{dsdu_pi-p_a}-\ref{dsdu_pi-p_b} illustrate our calculation together with experimental results on $\pi^- p \to \pi^-p$ differential cross sections. Now the data do not have a dip structure but rather show a smooth $u$-dependence. There is no $N_\alpha$-trajectory contribution to this reaction channel. The reaction is entirely governed by the $\Delta$-trajectories. Taking the parameters from table~\ref{para-u} one might expect that in the scattering region the first zero of the $\Delta_\delta$-trajectory is located around $u \simeq -$~1.68~GeV$^2$, while the first zero of the $\Delta_\beta$-trajectory is around $u \simeq -$~2~GeV$^2$. There are no data available at these four-momentum transfers to clarify the situation. Moreover, at these large values of $|u|$ one should expect additional contributions from the $t$ and $s$ channel exchanges too. It is interesting that the data shown in fig.~\ref{dsdu_pi-p_a} indicate some increase of the differential cross section at $|u| >$~0.8 GeV$^2$, although the accuracy of the experimental results is limited. The model calculation does not produce such a trend but rather suggests that the differential cross sections slightly decrease. The differential cross section for the $\pi^- p \to \pi^0n$ charge exchange reaction is shown in fig.~\ref{dsdu_cex}. The data indicate a dip around $u \simeq -$~0.15~GeV$^2$ which originates from the $N_\alpha$-trajectory. However, the structure of the dip observed in the charge exchange reaction differs from the one seen in the $\pi^+p \to \pi^+p$ differential cross section. Historically, this difference motivated the inclusion of an additional $N_\gamma$-trajectory in the Regge analysis of pion-nucleon backward scattering. Our Regge model describes the differential cross sections available for the $\pi^-p \to \pi^0n$ reaction fairly well. 
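For completeness, the quoted dip position is just the solution of $\alpha_{N_\alpha}(u)=-1/2$ with the fitted values of table~\ref{para-u} (a short arithmetic check added here, not part of the original fit):
\begin{equation*}
u_{\rm dip}=\frac{-\tfrac{1}{2}-\alpha_0}{\alpha^\prime}
 =\frac{-0.50-(-0.36)}{0.908}~{\rm GeV}^2\simeq -0.15~{\rm GeV}^2 .
\end{equation*}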
Concerning the differential cross sections we find that our Regge calculation reproduces the data available for $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ backward scattering reasonably well and thus corroborates the finding of the previous analyses \cite{Barger67,Barger68,Berger71,Gregorich71,Storrow75} that, in principle, three trajectories, namely $N_\alpha$, $N_\gamma$ and $\Delta_\delta$, play a significant role in describing the data on differential cross sections. The most striking feature of the data is the dip observed for the $\pi^+p \to \pi^+p$ and $\pi^-p \to \pi^0n$ reactions. This dip allows one to determine the intercept $\alpha_0$ of the $N_\alpha$-trajectory. At the same time there is no dip in the $\pi^-p \to \pi^-p$ backward scattering since there is no contribution from the $N_\alpha$-trajectory in that reaction channel. Fig.~\ref{pol_pi+p_pi-p} shows the polarization for $\pi^+ p \to \pi^+p$ and $\pi^-p \to \pi^-p$ backward scattering. These data were collected \cite{Dick72,Dick73} at the pion beam momentum of 6~GeV corresponding to an invariant collision energy of $\sqrt{s}=$~3.49~GeV. For both reactions the polarization depends substantially on the four-momentum transfer squared $u$. Note that only the $\Delta$-trajectories contribute to the $\pi^-p \to \pi^-p$ backward scattering. Therefore, within models that only take into account the $\Delta_\delta$-trajectory, it is impossible to reproduce the polarization for the $\pi^-p \to \pi^-p$ reaction. Previous analyses \cite{Barger67,Barger68,Berger71,Gregorich71,Storrow75} made many ad hoc assumptions, but did not manage to achieve any consistent fits. The present work includes the $\Delta_\beta$-trajectory, which allows us to reproduce the polarization data. \section{Extrapolation below 3 GeV} Fig.~\ref{dsdu180_pi+p_pi-p_cex} shows the energy dependence of the differential cross section for the reactions $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ measured at the pion scattering angle of 180$^0$. The data indicate considerable structures for center-of-mass energies up to $\simeq$~3~GeV or even higher. There were intense discussions \cite{Hoehler83,Lennox75,Ma75,Hendry81} about whether these structures originate from the excitation of high mass baryons. Indeed, these data on pion scattering at 180$^0$ seem to be up to now the only direct evidence of the existence of excited baryons with masses above 2.4~GeV. The lines in fig.~\ref{dsdu180_pi+p_pi-p_cex} show the results of our calculation extrapolated to low energies. Note that the data below 3~GeV were not included in our fit. We observe that the data oscillate around the non-resonant continuum given by the Regge amplitude. Above $\simeq$ 3 GeV our calculations approach the experimental data. Although the differential cross section of the $\pi^-p \to \pi^-p$ reaction indicates some fluctuations above $\simeq$~3~GeV, we consider these to be due to statistical uncertainties. This point will be illustrated below. Fig.~\ref{dsduu0_pi+p_pi-p_cex} displays the data available for the $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ reactions at the four-momentum transfer squared $u =$~0~GeV$^2$ as a function of energy. One sees that the Regge approach agrees with the data above $\sqrt{s} \simeq $~3~GeV. Next we investigate whether the fluctuations of the data with respect to our Regge calculation shown in fig.~\ref{dsdu180_pi+p_pi-p_cex} for the energies $2 \le \sqrt{s} \le 3.5$~GeV are of systematic or of statistical nature. 
For that purpose, we evaluate the difference $D$ between the experimental differential cross sections and those predicted by the Regge model at the scattering angle $\theta=$~180$^0$ at each $\sqrt{s}$ and for each reaction channel, i.e. \begin{eqnarray} D =\frac{d\sigma^{\rm Exp.}}{du}-\frac{d\sigma^{\rm Regge}}{du}~, \label{difru} \end{eqnarray} and present the results in fig.~\ref{differ} using a linear scale. At energies $\sqrt{s} >$~2.8 GeV the data available for $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ scattering at $\theta=$~180$^0$ are consistent with the predictions of our Regge calculation. Below that energy the data indicate some room for additional contributions. At least the magnitude of the variations seems to be larger than the statistical fluctuations of the experimental results. Furthermore, the different $\pi N$ reaction channels do not indicate the same pattern of differences between the data and the Regge results. The largest difference occurs for the reaction $\pi^+p \to \pi^+p$ at 180$^0$, while the values of $D$ obtained for $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ are almost identical. Note that in the $s$-channel only the excitation of $\Delta$-isobars is possible for the $\pi^+p \to \pi^+p$ reaction. The two other reactions allow for the excitation of nucleon as well as $\Delta$-resonances in the $s$-channel. It is interesting to illustrate the arguments of ref.~\cite{Lennox75} where similar differences between the Regge predictions and data for $\pi^+p$ elastic scattering at 180$^0$ were evaluated in terms of an additional resonance contribution. Note that at 180$^0$ only the spin non-flip amplitude contributes, see eq.~(\ref{flip}). The amplitude due to the excitation of baryon resonances in the $s$-channel is given by \begin{eqnarray} {\cal M}^{++} = \sum_n \frac{C_n \, X_n \, (J+1/2)}{\epsilon -i}(-1)^{l}~, \end{eqnarray} where the summation is done over the resonances, $C_n$ is a Clebsch-Gordan coefficient, $X_n$ denotes the resonance elasticity, $J$ the spin of the resonance, and $l$ the orbital angular momentum between pion and nucleon. Here \begin{eqnarray} \epsilon = \frac{M_R^2-s}{M_R\Gamma_R}, \end{eqnarray} with $M_R$ and $\Gamma_R$ being the resonance mass and width, respectively. The width was taken as energy-dependent. Furthermore, recalling $P_l(-1) = (-1)^l$, one sees explicitly that the resonance amplitudes interfere with the non-resonant amplitude either constructively or destructively according to the parity of the resonance. In the case of $\pi^+p \to \pi^+p$ elastic scattering the $s$-channel contributions from the $D_{35}(2350)$ and $H_{3,11}(2420)$ with $l =$ 2 and $l =$ 5 are the major candidates responsible for the change of the sign of $D$ seen in fig.~\ref{differ}. \begin{figure}[t] \centering \resizebox{0.45\textwidth}{!}{\includegraphics{Pol_pi+p.eps}} \caption{ \label{pol_pi+p} The polarization asymmetry for $\pi^+p \to \pi^+p$ backward scattering at different invariant collision energies as indicated. The references to the data are specified in tab.~\ref{pol}. The solid lines show the results of our Regge calculation with the model parameters listed in table~\ref{para-u}.} \end{figure} \begin{figure}[t] \centering \resizebox{0.45\textwidth}{!}{\includegraphics{Pol_pi-p.eps}} \caption{ \label{pol_pi-p} The polarization asymmetry for $\pi^-p \to \pi^-p$ backward scattering at different invariant collision energies as indicated. The references to the data are specified in table~\ref{pol}. 
The solid lines show the results of our Regge calculation with the model parameters listed in table~\ref{para-u}.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{Jalpha.eps}} \caption{\label{nalpha} Chew-Frautschi plot for the $N_\alpha$ trajectory for baryons with parity $P=+$ 1 and signature ${\cal S}=+$ 1. The line shows the Regge trajectory according to table~\ref{para-u}. The symbols indicate the results from other approaches.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{Jgamma.eps}} \caption{\label{ngamma} $N_\gamma$ trajectory for baryons with parity $P=-$ 1 and signature ${\cal S}=-$ 1. The line shows the Regge trajectory according to table~\ref{para-u}. The symbols indicate the results from other approaches.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{Jdelta.eps}} \caption{ \label{ddelta} $\Delta_\delta$ trajectory for $\Delta$-isobars with parity $P=+$ 1 and signature ${\cal S}=-$ 1. The line shows the Regge trajectory, while the symbols indicate the results from other approaches.} \end{figure} \begin{figure}[t] \centering \resizebox{0.4\textwidth}{!}{\includegraphics{Jbeta.eps}} \caption{\label{dbeta} $\Delta_\beta$ trajectory for $\Delta$-isobars with parity $P=-$ 1 and signature ${\cal S}=+$ 1. The line shows the Regge trajectory, while the symbols indicate the results from other approaches.} \end{figure} As is illustrated by fig.~\ref{pol_pi+p_pi-p} the Regge model reproduces the polarization data for $\pi^+p \to \pi^+p$ and $\pi^-p \to \pi^-p$ backward scattering at $\sqrt{s}=$ 3.49 GeV. In fig.~\ref{pol_pi+p}, we show the polarization data for backward scattering in the reaction $\pi^+p \to \pi^+p$ available in the energy range $2.4 < \sqrt{s} < 3$ GeV \cite{Bradamante73,Sherden70}. For energies above $\sqrt{s} \simeq $~2.73 GeV the Regge results reproduce the polarization data reasonably well, but within the range $2.46 \le \sqrt{s} \le 2.64$~GeV, there is room for additional contributions at $|u| > $~0.1~GeV$^2$. This finding is consistent with the data on differential cross section presented in fig.~\ref{dsdu180_pi+p_pi-p_cex}. The situation is quite different for the polarization observed in $\pi^-p \to \pi^-p$ backward scattering \cite{Fukushima80,Birsa76}. The data \cite{Fukushima80} at $\sqrt{s} \simeq$~2.45~GeV show a $u$-dependence similar to the one given by the Regge model, but with a systematic shift to negative values. The data at $\sqrt{s}=$~2.65~GeV and $\sqrt{s}=$~2.73~GeV indicate an almost zero polarization within the experimental uncertainties. This is in line with the $\pi^-p \to \pi^-p$ data on the differential cross section at 180$^0$ displayed in fig.~\ref{dsdu180_pi+p_pi-p_cex}. Let us finally come to the baryon trajectories. Fig. \ref{nalpha} shows the Chew-Frautschi plot for the $N_\alpha$ trajectory. The poles of the amplitude of eq.~(\ref{amplitude}) correspond to baryons with spin~$J$, as is indicated by the dashed lines. The results of partial wave analyses (PWA) from the Karlsruhe-Helsinki (KH) \cite{Hoehler83,Koch80} and the George Washington University (GWU) \cite{Arndt04} groups are included in the figure, too. We also indicate the averaged values given by the Particle Data Group \cite{PDG} and the most recent systematic analysis by Klempt and Richard \cite{Klempt09}. 
The analysis by Hendry~\cite{Hendry78,Hendry81} is based on an impact parameter approach and differs significantly from our result for the $K_{1,13}(2700)$. Fig.~\ref{ngamma} shows the Chew-Frautschi plot for the $N_\gamma$ trajectory. There are two resonances with a four-star rating by the PDG on this trajectory, and even the $I_{1,11}(2600)$ is classified with three stars. The $\Delta_\delta$- and $\Delta_\beta$-trajectories are shown in figs.~\ref{ddelta} and \ref{dbeta}, respectively. The mass of the $H_{3,11}(2420)$ isobar obtained by the GWU PWA differs significantly from other results. Our analysis suggests a $G_{39}$ resonance with a mass of 2.83~GeV as a member of the $\Delta_{\beta}$ trajectory. \section{Summary} We have performed a systematic analysis of the data on differential cross sections and polarizations available for $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$ and $\pi^-p \to \pi^0n$ scattering at backward angles. We started out from a Regge model including the $N_\alpha$, $N_\gamma$, $\Delta_\delta$ and $\Delta_\beta$ trajectories and determined the reaction amplitude at energies $\sqrt{s} >3$~GeV. In contrast to previous analyses, the present study reproduces the polarization asymmetries for both $\pi^+p$ and $\pi^-p$ backward elastic scattering within standard Regge phenomenology. We found that it is not necessary to resort to non-Regge terms \cite{Gregorich71,Park74,Mir81,Storrow75,Storrow73} to describe the polarization, but rather that it is important to include the $\Delta_\beta$-trajectory, which was neglected in previous analyses. After the reaction amplitude was fixed at high energies, we inspected the data on differential cross sections for scattering at $\theta = 180^0$ and the polarization asymmetry at energies $2 \le \sqrt{s} \le 3$~GeV. The data available at $\theta = 180^0$ are of special interest because the $\pi^+p \to \pi^+p$, $\pi^-p \to \pi^-p$, and $\pi^-p \to \pi^0n$ differential cross sections indicate considerable structures for center-of-mass energies up to $\simeq 3$~GeV. The data fluctuate around the cross sections given by the Regge calculation. This can be considered as direct evidence of the excitation of baryons with masses up to approximately 2.8~GeV. This point of view is further supported by the data on the polarization asymmetry available at these energies. \begin{acknowledgement} This work is partially supported by the Helmholtz Association through funds provided to the virtual institute ``Spin and strong QCD'' (VH-VI-231), by the EU Integrated Infrastructure Initiative HadronPhysics2 Project (WP4 QCDnet) and by DFG (SFB/TR 16, ``Subnuclear Structure of Matter''). This work was also supported in part by U.S. DOE Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC, operates Jefferson Lab. F.H. is grateful for the support from the Alexander von Humboldt Foundation during his stay in J\"{u}lich, where the main part of this paper was completed, and for the support by COSY FFE grant No. 41445282 (COSY-058). A.S. acknowledges support by the JLab grant SURA-06-C0452 and the COSY FFE grant No. 41760632 (COSY-085). \end{acknowledgement} \vfill \pagebreak \section*{Appendix: Data collection} The appendix summarizes the world data on backward pion-nucleon scattering. 
\begin{table}[htb] \begin{center} \caption{\label{pi+p} References to data on $\pi^+p \to \pi^+p$ differential cross sections for backward scattering.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|r|r|r|r|l|c|} \hline $\sqrt{s}$~~ & $p_\pi$~ & $u_{min}$ & $u_{max}$ & Experiment & Ref. \\ GeV & \hspace{-2mm} GeV \hspace{-2mm} & GeV$^2$ & GeV$^2$ & & \\ \hline 3.02 & 4.4 & $-$0.04 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.06 & 4.5 & $-$0.05 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.09 & 4.6 & $-$0.05 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.12 & 4.7 & 0.01 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.15 & 4.8 & 0.00 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.18 & 4.9 & 0.00 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.21 & 5.0 & 0.00 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.24 & 5.1 & $-$0.00 & 0.07 & Lennox 75 & \cite{Lennox75} \\ 3.26 & 5.2 & $-$0.00 & 0.06 & Lennox 75 & \cite{Lennox75} \\ 3.26 & 5.2 & $-$0.82 & 0.01 & Baker 71 & \cite{Baker71} \\ 3.29 & 5.3 & $-$0.01 & 0.06 & Lennox 75 & \cite{Lennox75} \\ 3.32 & 5.4 & $-$0.01 & 0.06 & Lennox 75 & \cite{Lennox75} \\ 3.38 & 5.6 & $-$0.01 & 0.06 & Lennox 75 & \cite{Lennox75} \\ 3.43 & 5.8 & $-$0.01 & 0.06 & Lennox 75 & \cite{Lennox75} \\ 3.46 & 5.9 & $-$0.87 & 0.06 & Owen 69 & \cite{Owen69} \\ 3.49 & 6.0 & $-$0.02 & 0.06 & Lennox 75 & \cite{Lennox75} \\ 3.75 & 7.0 & $-$1.15 & 0.01 & Baker 71 & \cite{Baker71} \\ 3.99 & 8.0 & $-$0.06 & 0.04 & Frisken 65 & \cite{Frisken65} \\ 3.99 & 8.0 & $-$0.06 & 0.04 & Orear 66 & \cite{Orear66} \\ 4.40 & 9.8 & $-$2.29 & 0.03 & Owen 69 & \cite{Owen69} \\ 4.43 & 10.0 & $-$17.45 & $-$0.62 & Baglin 75 & \cite{Baglin75} \\ 5.16 & 13.7 & $-$2.82 & 0.01 & Owen 69 & \cite{Owen69} \\ 5.74 & 17.1 & $-$0.01 & 0.01 & Owen 69 & \cite{Owen69} \\ 7.56 & 30.0 & $-$0.55 & 0.00 & Baker 83 & \cite{Baker83} \\ 9.73 & 50.0 & $-$0.50 & 0.00 & Baker 83 & \cite{Baker83} \\ 11.50 & 70.0 & $-$0.05 & $-$0.01 & Baker 83 & \cite{Baker83} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htb] \begin{center} \caption{\label{cex} References to data on $\pi^-p \to \pi^0n$ differential cross sections for backward scattering.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|r|r|r|r|l|c|} \hline $\sqrt{s}$~~ & $p_\pi$~ & $u_{min}$ & $u_{max}$ & Experiment & Ref. \\ GeV & GeV & GeV$^2$ & GeV$^2$ & & \\ \hline 3.21 & 5.0 & $-$1.91 & 0.05 & Chase 70 & \cite{Chase70} \\ 3.49 & 6.0 & $-$2.12 & 0.03 & Chase 70 & \cite{Chase70} \\ 3.49 & 6.0 & $-$1.29 & 0.05 & DeMarzo 75 & \cite{DeMarzo75} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[ht] \begin{center} \caption{\label{pi-p} References to data on $\pi^-p \to \pi^-p$ differential cross sections for backward scattering.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|r|r|r|r|l|c|} \hline $\sqrt{s}$~~ & $p_\pi$~ & $u_{min}$ & $u_{max}$ & Experiment & Ref. 
\\ GeV & \hspace{-2mm} GeV \hspace{-2mm} & GeV$^2$ & GeV$^2$ & & \\ \hline 3.46 & 5.9 & $-$0.89 & 0.06 & Owen 69 & \cite{Owen69} \\ 3.75 & 7.0 & $-$0.60 & 0.02 & Baker 71 & \cite{Baker71} \\ 3.99 & 8.0 & $-$0.39 & 0.01 & Anderson 68 & \cite{Anderson68} \\ 3.99 & 8.0 & $-$0.06 & 0.04 & Frisken 65 & \cite{Frisken65} \\ 3.99 & 8.0 & $-$0.06 & 0.05 & Orear 66 & \cite{Orear66} \\ 4.22 & 9.0 & $-$1.40 & $-$0.02 & Jacholkowski 77 & \cite{Jacholkowski77} \\ 4.40 & 9.8 & $-$2.39 & 0.03 & Owen 69 & \cite{Owen69} \\ 4.44 & 10.0 & $-$1.15 & $-$0.07 & Ghidini 82 & \cite{Ghidini82} \\ 4.84 & 12.0 & $-$1.46 & $-$0.03 & Jacholkowski 77 & \cite{Jacholkowski77} \\ 5.16 & 13.7 & $-$0.18 & 0.02 & Owen 69 & \cite{Owen69} \\ 5.56 & 16.0 & $-$0.73 & $-$0.09 & Anderson 68 & \cite{Anderson68} \\ 5.60 & 16.3 & $-$0.17 & 0.00 & Owen 69 & \cite{Owen69} \\ 7.56 & 30.0 & $-$0.38 & 0.00 & Baker 83 & \cite{Baker83} \\ 9.73 & 50.0 & $-$0.44 & $-$0.00 & Baker 83 & \cite{Baker83} \\ 11.50 & 70.0 & $-$0.25 & $-$0.01 & Baker 83 & \cite{Baker83} \\ 13.00 & 90.0 & $-$0.25 & $-$0.01 & Baker 83 & \cite{Baker83} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htb] \begin{center} \caption{\label{pol} References to polarization asymmetry $P$ data for $\pi N$ backward scattering. Note that only data for $\sqrt{s} > $ 3 GeV were included in our fit.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|c|r|r|r|l|c|} \hline & $\sqrt{s}$ & $p_\pi$ & $u_{min}$ & $u_{max}$ & Experiment & Ref. \\ & GeV & \hspace{-2mm} GeV \hspace{-2mm} & GeV$^2$ & GeV$^2$ & & \\ \hline $\pi^+p$ & 2.40 & 2.59 & $-$1.40 & 0.03 & Martin 75 & \cite{Martin75} \\ $\pi^+p$ & 2.43 & 2.65 & $-$1.31 & 0.03 & Martin 75 & \cite{Martin75} \\ $\pi^-p$ & 2.45 & 2.71 & $-$1.00 & 0.05 & Fukushi. 80 & \cite{Fukushima80} \\ $\pi^+p$ & 2.46 & 2.75 & $-$0.41 & 0.11 & Sherden 70 & \cite{Sherden70} \\ $\pi^+p$ & 2.53 & 2.93 & $-$0.44 & 0.10 & Sherden 70 & \cite{Sherden70} \\ $\pi^-p$ & 2.53 & 2.93 & $-$0.75 & 0.06 & Sherden 70 & \cite{Sherden70} \\ $\pi^+p$ & 2.65 & 3.25 & $-$0.45 & 0.09 & Sherden 70 & \cite{Sherden70} \\ $\pi^-p$ & 2.65 & 3.25 & $-$0.77 & 0.06 & Sherden 70 & \cite{Sherden70} \\ $\pi^+p$ & 2.73 & 3.50 & $-$0.59 & 0.06 & Bradam. 73 & \cite{Bradamante73} \\ $\pi^-p$ & 2.73 & 3.50 & $-$0.55 & $-$0.05 & Fukushi. 80 & \cite{Fukushima80} \\ $\pi^-p$ & 2.73 & 3.50 & $-$0.89 & $-$0.23 & Birsa 76 & \cite{Birsa76} \\ $\pi^+p$ & 2.82 & 3.75 & $-$0.50 & 0.08 & Sherden 70 & \cite{Sherden70} \\ $\pi^+p$ & 2.90 & 4.00 & $-$0.75 & $-$0.01 & Bradam. 73 & \cite{Bradamante73} \\ $\pi^+p$ & 3.49 & 6.00 & $-$0.86 & 0.03 & Dick 72 & \cite{Dick72} \\ $\pi^-p$ & 3.49 & 6.00 & $-$0.93 & $-$0.03 & Dick 73 & \cite{Dick73} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htb] \begin{center} \caption{\label{dsdu180} References to differential cross section data for $\pi N$ scattering at $\theta=180^\circ$. CEX denotes the $\pi^-p \to \pi^0n$ charge exchange reaction.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|r|r|l|c|} \hline Reaction & ${\sqrt{s}}_{min}$ & ${\sqrt{s}}_{max}$ & Experiment & Ref. 
\\ & GeV & GeV & & \\ \hline $\pi^+p$ & 3.03 & 3.49 & Lennox 75 & \cite{Lennox75} \\ $\pi^-p$ & 3.03 & 3.29 & Kormanyos 66 & \cite{Kormanyos66} \\ CEX & 3.02 & 3.49 & Kistiakowsky 72 & \cite{Kistiakowsky72} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htb] \begin{center} \caption{\label{dsduu0} References to differential cross section data for $\pi N$ scattering at $u =$ 0 GeV$^2$.} \renewcommand{\arraystretch}{1.2} \begin{tabular}{|c|r|r|l|c|} \hline Reaction & ${\sqrt{s}}_{min}$ & ${\sqrt{s}}_{max}$ & Experiment & Ref. \\ & GeV & GeV & & \\ \hline $\pi^+p$ & 3.46 & 5.74 & Owen 69 & \cite{Owen69} \\ $\pi^+p$ & 7.56 & 11.50 & Baker 83 & \cite{Baker83} \\ $\pi^-p$ & 3.75 & 3.75 & Baker 71 & \cite{Baker71} \\ $\pi^-p$ & 3.46 & 5.60 & Owen 69 & \cite{Owen69} \\ $\pi^-p$ & 7.56 & 13.03 & Baker 83 & \cite{Baker83} \\ $\pi^-p$ & 3.99 & 5.56 & Anderson 68 & \cite{Anderson68} \\ $\pi^-p$ & 4.22 & 4.84 & Jacholkowski 77 & \cite{Jacholkowski77} \\ $\pi^-p$ & 4.44 & 4.43 & Ghidini 82 & \cite{Ghidini82} \\ $\pi^-p$ & 3.99 & 4.84 & Armstrong 87 & \cite{Armstrong87} \\ CEX & 3.49 & 3.99 & DeMarzo 75 & \cite{DeMarzo75} \\ \hline \end{tabular} \end{center} \end{table} \newpage \newpage \newpage
\section{Introduction} The study of the characteristic changes in spectral and variability properties of X-ray binaries has proved to be a valuable source of information on the physics governing the accretion processes and on the fundamental parameters of black holes (BHs). The simultaneous study of the spectral and timing evolution of a BH source during a state transition has been the subject of many investigations [see references in a review by \cite{rm}]. \cite{fb04}, hereafter FB04, introduced a classification of the spectral states in GRS 1915+105 and studied the spectral state evolution. Using X-ray colors (hardness ratios) they introduced three spectral states. State A: a strong blackbody-like (BB) component of color temperature $\gax 1$ keV dominates the overall spectrum and little time variability is detected. State B: similar to state A, but substantial red-noise variability occurs on scales $>1$ s. State C: the spectra are harder than those in states A and B. Photon indices of the power-law components vary from 1.8 to 2.5. White-red noise (WRN) variability on scales $>1$ s takes place in this state. Furthermore, FB04 discussed the connection of states A, B, C observed in GRS 1915+105 with the three ``canonical'' states in black hole candidates (BHCs), also identified by their timing and spectral properties. In the low luminosity state the energy spectrum is dominated by a hard Comptonization component combined (convolved) with a weak thermal component. The spectrum of this low (luminosity) hard state (LHS) is presumably a result of Comptonization (upscattering) of soft photons, originating in a relatively weak accretion disk, off electrons of the hot ambient plasma [see e.g. \cite{st80}]. Variability in the LHS is high (the fractional root-mean-square variability is up to 40\%) and is characterized by a flat-top broken power-law (white-red noise) shape, accompanied by quasi-periodic oscillations (QPOs) in the range of 0.01-30 Hz, observed as narrow peaks in the power density spectrum (PDS). In the high soft state (HSS) the photon spectrum is characterized by a prominent thermal component, which is probably a signature of strong emission coming from a geometrically thin accretion disk. A weak power-law component is also present at the level of not more than 20\% of the total source flux. In the HSS the flat-top variability ceases, QPOs disappear and the PDS acquires a pure power-law shape. The total variability in the HSS is usually about 5\% fractional rms. The intermediate state (IS) is a transitional state between the LHS and the HSS. Note that, in addition to the LHS, IS and HSS, sometimes a very soft state (VSS) is observed, in which the BB component is dominant and the power-law component is either very weak or entirely absent. The bolometric luminosity in the VSS is a factor of 2-3 lower than that in the HSS. FB04 concluded that probably all three states A, B, C of GRS 1915+105 are instances of something similar to the HSS/IS observed in other BHC systems, associated with the high accretion rate of this source, although during the hardest intervals the LHS might sometimes be reached. We come to similar conclusions by analyzing spectral and timing data from GRS 1915+105 obtained with {\it RXTE} (see below). 
Close correlations of nearly periodic variability [quasi-periodic oscillations (QPO)] observed during low-hard and intermediate states with the photon index of the Comptonization spectral component have been reported in multiple state transitions observed from accreting BHs [see \cite{vig}, \cite{ST06}, (2007), (2009), hereafter V03, ST06, ST07 ST09 respectively]. The ubiquitous nature of these correlations suggests that the underlying physical processes which lead to the observed variability properties are closely tied to the corona; furthermore, they vary in a well defined manner as the source makes a transition between spectral states. The fact that the same correlations are seen in many sources, which vary widely in both luminosity (presumably with mass accretion rate) and state, suggests that the physical conditions controlling the index and the low frequency QPOs are characteristics of these sources. Moreover, they may be an universal property of all accreting compact systems, including neutron sources too [see \cite{tf04}, hereafter TF04, and \cite{ts05}]. When a BH is in LHS, radio emission is also detected and a jet is either seen or inferred \citep{f01}. Several models are successful in reproducing the energy spectrum from the radio domain to the hard X-rays [see e.g. \cite{MFF01}, \cite{vrc01}, \cite{cf02}, \cite{M03} and \cite{g05}]. The multiplicity of models that can fit well the time average spectrum of galactic BHs indicates that this alone is not enough to distinguish the most realistic one among them. X-ray timing features can be the key features to finally finding the common physical connection between the corona, the accretion disk and the jet radio emission in BHs. There is a big debate in the literature on the origin of quasiperiodic oscillation (QPO) frequencies [see e.g. \cite{rm}] and its connection with the radio emission. \cite{mfk05} reported on correlation between radio luminosity and X-ray timing features in X-ray binaries containing a number of low magnetic field neutron stars and one black hole GX 339-4. They showed that in the low-hard state (LHS) radio luminosity is correlated with the low frequency QPO (LFQPO). Note ST09 demonstrate that in LHS of Galactic BHs LFQPO changes by order of magnitude, from 0.2 to 2 Hz whereas the photon index has almost the same value of 1.5. Below we show that in GRS 1915+105 the photon index monotonically increases with LFQPO and disk mass accretion rate, although the radio luminosity does not correlate with LFQPO and X-ray luminosity in the whole range of spectral states, from low-hard to high-soft through intermediate states. Recently \cite{kyl08} suggested a model which explains how the QPO phenomenon is related to an appearance of radio flares (jets). Below (see \S 3) we present details of our observational study of the QPO connection with the X-ray and radio flaring activity in GRS 1915+105. In LHS and IS which we consider in our study, only a small part of the disk emission component is seen directly. The energy spectrum is dominated by a Comptonization component presented by a power law. To calculate the total normalization of the ``seed'' disk blackbody component we model the spectrum with a Generic Comptonization model [BMC XSPEC model, see details in \cite{bmc}] which consistently convolves a disk blackbody with a Green function of the Compton Corona to produce the Comptonization component. 
We argue that the disk emission normalization calculated using this approach produces a more accurate correlation with respect to the correlation with the direct disk component which was obtained using the additive model, multicolor disk plus power law [see e.g. \cite{mr}]. This Paper is a continuation of the study of index-QPO and index-seed photon normalization correlations in BH sources started in ST07 and ST09. Particularly here we present a study of the index-seed photon normalization (disk flux) correlation observed from GRS 1915+105 when it evolves from LHS to HSS. The description of \textit{RXTE} data-set used is given in \S \ref{data}. We have analyzed a broader sample of state transitions from GRS 1915+105 and we found a diverse phenomenology for index evolution through a transition. In \S \ref{obs_transition} we provide a detailed description of state transitions analyzed in this study. In \S 4 we discuss and interpret the results of our observational study. Specifically in \S 4 we consider the effect of the bulk motion Comptonization in the inner part of the accretion flow on the index evolution during a state transition. Also we show that the index saturation effect is a direct consequence of the existence of this inner bulk motion region and, therefore, can be considered as an observational signature of the converging flow (black hole). Furthermore in \S 4 we discuss the TF04 model and the Monte Carlo simulations by \cite{lt09} (in preparation) in which the observable index evolution with $\dot m$ has been already predicted. Conclusions follow in \S \ref{summary}. \section{Observations and Data Reduction \label{data}} In the present Paper, we have used publicly available data of the \textit{RXTE} observatory obtained from January 1997 to April 2006. In total, our study includes 107 observations made at different BH spectral states (LHS, IS, HSS) of the system. Data sets were selected to represent a complete rise-middle-decay track of bright X-ray activity episodes behavior along bright radio flaring events ($S_{15GHz}\ge$250 mJy). Therefore we have chosen powerful ($\ge$250 ASM counts/s) flaring episodes of GRS~1915+105 with a good coverage of simultaneous radio/X-ray observation. In the past some of these data of spectral transitions in GRS 1915+105 were analyzed by \cite{trud99}, \cite{trud01}, \cite{muno99}, \cite{reig00}, ST07 and \cite{rodr08} for the 1997 -- 1998 and 2005 -- 2006 transitions respectively. Standard tasks of the LHEASOFT/FTOOLS 5.3 software package were utilized for data processing using methods recommended by RXTE Guest observer Facility according to ``The RXTE Cook Book'' (http://heasarc.gsfc.nasa.gov/docs/xte/recipes/cook\_book.html). For spectral analysis we used PCA {\it Standard 2} mode data, collected in 3 -- 20~keV energy range. The standard dead time correction procedure has been applied to the data. To construct broad-band spectra, data from HEXTE detectors have also been used. We subtracted background corrected in off-source observations. Only HEXTE data in 20 -- 150~keV energy range were used for the spectral analysis in order to exclude the channels with largest uncertainties. The HEXTE data have been re-normalized based on the PCA. The data are available through the GSFC public archive (http://heasarc.gsfc.nasa.gov). In Tables 1-6 we list groups of observations covering the complete dynamical range LHS-(IS)-HSS-(IS)-LHS of the source evolution during flaring events. 
We present here period ranges MJD=50462 -- 51081 and MJD=53382 -- 53852, as different types (samples) of bright X-ray activity, with transitions between hard and soft states. Two selected data sets have different patterns of radio/X-ray behavior and of light curve shapes. We also use public GRS~1915+105 data from the All-Sky Monitor (ASM) on-board \textit{RXTE} \citep{sw99}. The ASM light curves (2-12 keV energy range ) were retrieved from the public \textit{RXTE}/ASM archive at HEASARC (http://xte.mit.edu/ASM\_lc.html). The monitoring {\it Ryle Radio Telescope} (15~GHz) data in the 1997 -- 2006 period were kindly provided by Dr. Guy Pooley. The technical details of the radio telescope are described by~\cite{pf97}. \subsection{Spectral analysis} \subsubsection{BMC and iron line components of the model spectrum} The broad-band source spectra were modeled in XSPEC with an additive model consisting of {\it two BMC}: a BMC with high energy cut-off ({\it BMC1} component) and {\it BMC2} component: $wabs*(bmc+bmc*highecut$). We also use a multiplicative {\it wabs} model to take into account of an absorption by neutral material. The {\it wabs} model parameter is an equivalent hydrogen column $N_H$. Systematic error of 1\% has been applied to the analyzed X-ray spectra. The {\it BMC} model describes the outgoing spectrum as a convolution of the input ``seed'' blackbody-like spectrum, whose normalization is $N_{bmc}$ and color temperature is $kT$, with the Comptonization Green's function. Similarly to the ordinary {\it bbody} XSPEC model, the normalization $N_{bmc}$ is a ratio of the source (disk) luminosity to the square of the distance \begin{equation} N_{bmc}=\biggl(\frac{L}{10^{39}\mathrm{erg/s}}\biggr)\biggl(\frac{10\,\mathrm{kpc}}{d}\biggr)^2. \label{bmc_norm} \end{equation} The resulting spectrum is characterized by the parameter $\log(A)$ related to the Comptonized fraction $f$ as $f=A/(1+A)$ and a spectral index $\alpha=\Gamma-1$. There are several advantages of using the BMC model with respect to other common approaches used in studies of X-ray spectra of accreting compact objects, i.e. sum of blackbody/multi-color-disk and power-law/thermal Comptonization. First, the BMC, by the nature of the model, is applicable to the general case where there is an energy gain through not only thermal Comptonization but also via dynamic (bulk) motion Comptonization \citep[see][for details]{ST06}. Second, with respect to the phenomenological $powerlaw$ model, the BMC spectral shape has an appropriate low energy curvature, which is essential for a correct representation of the lower energy part of spectrum. Our experience with $powerlaw$ components shows that the model fit with this component is often inconsistent with the $N_H$ column values and produces an unphysical component ``conspiracy'' with the $highecut$ part. Specifically, when a multiplicative component $highecut$ is combined with BMC, the cutoff energies $E_{cut}$ are in the expected range of 20$\sim$30 keV, while in a combination with $powerlaw$, $E_{cut}$ often goes below 10 keV, resulting in unreasonably low values for photon index. As a result, the implementation of the phenomenological $powerlaw$ model makes much harder, or even impossible to correctly identify the spectral state of the source, which is an imminent task for our study. Third, and even a more important property of the BMC model, it calculates consistently the normalization of the original ``seed'' component, which is expected to be a correct mass accretion rate indicator. 
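As an aside, the conversion implied by the normalization definition of Eq. (\ref{bmc_norm}) is straightforward to script. The following minimal Python sketch simply inverts that relation for the luminosity; the distance value used in the example is an arbitrary illustrative placeholder, not a measurement adopted in this Paper.
\begin{verbatim}
# Minimal sketch: invert the BMC normalization definition,
# N_bmc = (L / 1e39 erg/s) * (10 kpc / d)^2, for the luminosity L.
# The distance below is an illustrative placeholder only.

def bmc_luminosity(n_bmc, d_kpc):
    """Return L in erg/s for a given BMC normalization and distance."""
    return n_bmc * 1e39 * (d_kpc / 10.0) ** 2

print(bmc_luminosity(n_bmc=0.5, d_kpc=11.0))   # ~6e38 erg/s
\end{verbatim}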
Note the Comptonized fraction is properly evaluated by the BMC model. We consider a scenario related to our spectral model (see Fig. \ref{geometry}) where the Compton cloud along with converging flow are located in the innermost part of the source and the Keplerian disk is extended from the Compton cloud (CC) to the optical companion (see e.g. TF04). An iron K$_{\alpha}$-line ($laor$) component \citep{laor} was included in our model spectrum. To summarize the spectral model parameters are the equivalent hydrogen absorption column density {\bf $N_H$}; spectral indices $\alpha_1$, $\alpha_2$ (photon index $\Gamma=\alpha+1$); color temperatures of the blackbody-like photon spectra $kT_1$, $kT_2$; $\log(A_1)$, $\log(A_2)$ related to the Comptonized fractions $f_1$, $f_2$ [$f=A/(1+A)$]; normalizations of the blackbody-like components {\bf $N_{bmc1}$}, {\bf $N_{bmc2}$} for the {\it BMC1} and {\it BMC2} components of the resulting spectrum, respectively. We find that color temperatures $kT_1$ and $kT_2$ are about 1 keV for all available data and thus we fix values of $kT_1$ and $kT_2$ at 1 keV. An equivalent hydrogen absorption column density was fixed at the level of $N_H=5\times 10^{22}$~cm$^{-2}$ \citep{trud01}. { When the parameter $\log(A)\gg1$ we fix $\log(A)=2$ (see Tables ~1-6), because the Comptonized fraction $f=A/(1+A)\to$1 and variation of $A$ does not improve the fit quality any more. During LHS the BMC2 component is often very low or barely detectable [see Table 1 (MJD=50462-50561), Table 3 (MJD=51067-51081), Table 6 (MJD=53829-53852) and Fig.~\ref{spec_evol_4Sm} and panel ``1S'']. This observational fact is in agreement with scenario of BH spectral state transition (TF04). During LHS the spectrum is characterized by a strong hard power-law component. In other words the energy spectrum is dominated by a Comptonized component seen as a power-law hard emission in energy range from $\sim 10$ to $\sim 70$ keV while the disk emission remains weak (LHS, IS), because only a small fraction of the disk emission component $(1-f)$ is directly seen (Fig.~\ref{spec_evol_4Sm}, panel ``1S''). Although during IS (Fig.~\ref{spec_evol_4Sm}, panel ``2S'') the contributions of two BMC components to the overall spectrum are of the same order sometimes we can barely identify one of these components [see these cases in Table 2, (MJD=50737, 50743), Table 3 (MJD=50908, 50909, 51003), Table 5 (MJD=53718) when N$_{bmc2}\ll$N$_{bmc1}$]. On the other hand the model with two BMC components are really needed in the most cases of the intermediate state (IS) and high-soft state (HSS). In Figure \ref{two_BMC} we demonstrate that the fit qualities are unacceptable when the only one BMC component is included in the spectrum. Specifically, for IS-HSS observation 91701-01-11-00 on 18 May 2005 the values of $\chi^2_{red}$=12.3 for 75 d.o.f. (bottom left panel) for $wabs*bmc*highecut$ model. However $\chi^2$ is significantly improved when the second BMC component is included in the model. For $wabs*(bmc+bmc*highecut)$ $\chi^2_{red}$=3.28 for 76 d.o.f. (see the related count spectrum along with the model and the related residual in the central bottom panel of Fig. \ref{two_BMC}). Ultimately we achieve a remarkable agreement with the data using $wabs*(bmc+bmc*highecut+laor)$ model for which $\chi^2_{red}$=1.04 for 73 d.o.f. 
In Figure \ref{two_BMC} (top and right bottom panels) we show the data along with the best-fit spectra and their components for our two-component BMC model (see Table 4 for the best-fit parameter values). In the HSS and very soft state (VSS) (MJD=53515-53640, Table 4) the soft luminosity is high and the spectrum is dominated by a thermal component ($\Gamma_2>$3.7). Note that the hard power-law component is barely seen in the VSS (see Fig.~\ref{spec_evol_4Sm}, panels ``4S''). We find a broad emission line between 6 and 7 keV in the IS and HSS spectra. To date the iron K$_{\alpha}$ emission line in the IS and HSS of GRS~1915+105 has been detected with ASCA, Chandra, XMM-Newton and BeppoSAX by \cite{kot2000}, \cite{Mart02}, \cite{Mil05}. However the determination of the iron line shape with {\it RXTE} is a problem because of the low energy resolution. As a first trial we added a Gaussian component to fit the spectrum, varying the width and normalization of the line. Fits using the Gaussian always produce residuals around 7 keV, while fits with the XSPEC Laor model do not have such a problem. Thus throughout the Paper we incorporate the Laor-line profile to fit the line component. The line feature has a statistical significance of (3-10) $\sigma$ depending on the spectral state. This line feature is variable, with a time-averaged intensity of 2$\times$10$^{-11}$ erg/s/cm$^2$, and exhibits an equivalent width (EW) in the range of 50-600 eV across the data. We found that adding the laor-line component significantly improves the fit quality of the IS and HSS spectra. Fitting an IS spectrum (e.g. for the 90105-05-03-05 observation) without the line component leads to $\chi^2_{red}$=1.24. When the line component (Laor) is included the fit quality becomes much better, $\chi^2_{red}$=1.01. The fit of the HSS spectrum (901050-08-02-00) without the iron-line component is unacceptable, $\chi^2_{red}$=2.01, and $\chi^2_{red}$=1.24 when the line component is included. The best-fit parameters of the source spectrum and values of $\chi^2_{red}$, including d.o.f., are presented in Tables 1-6. \subsubsection{Observational evidence of a ``blackbody-like'' component peaked at $\sim 20$ keV in eight IS spectra} The adopted spectral model shows a very good performance for 99 cases among the 107 spectra used in our analysis. Namely, the value of the reduced $\chi^2$-statistic $\chi^2_{red}=\chi^2/N_{dof}$, where $N_{dof}$ is the number of degrees of freedom of a fit, is less than or around 1.0 for most observations. However, for 8 observations of the intermediate state the fit of the data with the model {\it wabs*(bmc*highecut+bmc+laor)} is not so good: $\chi^2_{red}$ reaches 1.5 and even higher. We found that in the residuals of data vs model there is a characteristic bump around 20 keV (see left bottom panel of Fig. \ref{sp_bbody}) which can be fitted by a blackbody-like shape of color temperature about 4.5 keV (see right bottom and top panels of Fig. \ref{sp_bbody} and Table 7 for values of the best-fit parameters). This ``high-temperature BB'' component is strong in each of the eight observations and its EW varies from 400 to 700 eV. We discuss a possible origin of this ``BB'' component in \S 4. \subsection{Timing analysis} The \textit{RXTE} light curves were analyzed using the {\it powspec} task. For the timing analysis in the 2 -- 30 keV energy range, we use the \textit{RXTE}/PCA data in the {\it binned} and {\it event} modes, containing X-ray events below 13/15 keV and above 13/15 keV for the 1997/2005 data sets, respectively.
Specifically, depending on the {\it RXTE} epoch, the channel ranges (0-35) for the binned and (36-255) for the event modes relate to the energy bands 1.94-12.99 keV (binned) and 13.36-103.19 keV (event) for epoch 3 (1997-1998 data set), and to 2.06-14.76 keV (binned) and 15.18-117.86 keV (event) for epoch 5 (2005-2006 data set). The time resolutions for the event and binned modes are $1.52\times10^{-5}$ s and $8\times10^{-3}$ s, respectively. The observational exposure times vary from 1.5 to 10 ks. Thus for all of these observations we can obtain power spectra in a wide frequency range (roughly from 0.001 Hz up to 100/10000 Hz for the binned and event modes, respectively). These frequency ranges allow us to produce power spectra for all studied cases in the 0.1-100 Hz frequency range. We subtracted the contribution due to Poissonian statistics and the Very Large Event Window for all power density spectra (PDS). The data analysis of the PDSs was performed using a simplified version of the diffusion model [see \cite{tsa07}, hereafter TSA07, and \cite{ts08}] in which the PDS continuum shape at frequencies below the driving frequency can be approximated by the empirical model $P_X\sim (1.0+(x/x_{*})^2)^{-in}$ (the $KING$ model in QDP/PLT). Following TSA07, the break frequency found in the PDS is related to the diffusion time of perturbation propagation, while the low QPO frequency is an eigenfrequency of the volume (magnetoacoustic) oscillation of the medium (in our case a Compton cloud). Note that TSA07 demonstrated that the diffusion model, as a linear superposition of Lorentzians related to the eigenvalues of the diffusion problem, can also be presented by a continuous shape which is flat below the break frequency and power-law at frequencies above the break. Given these asymptotic forms of the PDS in the low and high frequency limits, they named their diffusion model a white-red noise (WRN) model. For a quasi-uniform perturbation source distribution the slope of the PDS power-law part depends on the law of viscosity in the corona or in the disk (see details in TSA07). The parameters of this WRN diffusion model are the break (diffusion) frequency and the index of the power-law distribution of the viscosity over the radius. To fit the QPO features, we use a Lorentzian shape. We quote the Lorentzian centroid as the QPO frequency. \section{Observational results \label{obs_transition}} \subsection{Evolution of spectral properties during state transitions \label{transitions}} Observations of Galactic BH X-ray binaries reveal diverse spectral and dynamic phenomenologies. The evolution of a BH binary is usually described in terms of spectral states. There are several flavors of BH state definitions in the literature, which slightly differ in the state definitions and terminology \citep[see, for example][]{rm,bell00,bell05,kw08}. To distinguish different states, the properties observed in the energy spectrum and Fourier power density spectrum (PDS) are utilized. As we have already emphasized in the introduction, we use in our study the general state classification for four major BH states: {\it low-hard} (LHS), {\it intermediate} (IS), {\it high-soft} (HSS) and {\it very soft} state (VSS). The general picture of the LHS-IS-HSS transition is illustrated in Figure \ref{spec_evol_4S} where we bring together spectra of the LHS, IS, HSS and VSS to demonstrate the source spectral evolution from the low-hard to the soft states. We should emphasize the different shapes of the spectra for the different spectral states.
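Returning briefly to the PDS parameterization used in the timing analysis above, a minimal Python sketch of that empirical model (flat-top continuum plus a Lorentzian for the QPO) is given below. The functional form follows the description in the text, but all parameter values are arbitrary illustrative assumptions rather than fitted results.
\begin{verbatim}
import numpy as np

# Illustrative sketch of the empirical PDS parameterization described in
# the timing-analysis subsection: a flat-top continuum
# P(x) ~ [1 + (x/x_*)^2]^(-index) plus a Lorentzian for the QPO.
# All parameter values below are arbitrary examples, not fitted numbers.

def pds_model(f, norm, f_break, index, qpo_norm, qpo_f0, qpo_fwhm):
    continuum = norm * (1.0 + (f / f_break) ** 2) ** (-index)
    qpo = qpo_norm * (qpo_fwhm / (2.0 * np.pi)) / \
          ((f - qpo_f0) ** 2 + (qpo_fwhm / 2.0) ** 2)
    return continuum + qpo

freq = np.logspace(-1, 2, 300)            # 0.1-100 Hz, as analyzed above
power = pds_model(freq, norm=1.0, f_break=2.0, index=1.5,
                  qpo_norm=0.1, qpo_f0=1.8, qpo_fwhm=0.3)
\end{verbatim}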
In the LHS spectrum the Comptonization component is dominant and the blackbody (BB) component is barely seen in the 3-150 keV energy range. The IS and HSS spectra are characterized by a strong soft BB component and a power law extended up to 150 keV. In the VSS the soft BB component is dominant and the power-law component is relatively weak with respect to that in the IS and HSS. In the RXTE data of the GRS 1915+105 observations there are long periods when the photon index $\Gamma_1$ and normalization $N_{bmc1}$ of the hard BMC component monotonically increase (or decrease) with time. We call these periods long transition periods. The days when the source X-ray flux starts to increase, while it is still in the LHS, can be considered as the beginning of the rise transition. At these times the energy spectrum is characterized by low index values $\Gamma_1\sim$1.7 and the thermal component is at a low level or undetectable. In Figure \ref{outburst_97_rise}, from top to bottom, we show the evolution of the flux density $S_{15GHz}$ at 15 GHz (Ryle Telescope), the \textit{RXTE}/ASM count rate, the BMC normalization and the photon index $\Gamma$ during the 1997 rise transition of GRS~1915+105 (MJD 50500 -- 50700). Red/black points ({\it for the two lower panels}) correspond to the hard/soft components with $\Gamma_1$ and $\Gamma_2$, respectively. In the bottom panels we plot the photon index $\Gamma$ versus the BMC normalization ({\it left}) and the Comptonized fraction ({\it right}) for this transition. Here red triangles/black circles correspond to the hard/soft components with $\Gamma_1$ and $\Gamma_2$, respectively. One can see that in the beginning of this transition the resulting spectrum consists of one Comptonization component whose photon index $\Gamma_1$ steadily increases {\it from 1.7} towards the softer states and finally saturates at the value of 3. The Comptonization fraction $f=A/(1+A)$ of the hard component, related to index $\Gamma_1$, shows a sign of decreasing towards the softer state. When the \textit{RXTE}/ASM count rate exceeds 50 counts/s the soft Comptonized component appears with a weight which is comparable to that of the hard component. The photon index of the soft component $\Gamma_2$ saturates at the level of 4.2 when the BMC normalization (disk flux) increases. The Comptonization fraction $f$ of the soft component is about 0.5 and higher. As seen from Fig. \ref{outburst_97_rise}, the start of this rise transition coincides with the active phase of the X-ray and radio emissions, which exceed the 10 ASM counts/s and 50 mJy levels, respectively, in the 1997 rise transition. Around MJD 50580 the source reaches the HSS (when $\Gamma_1\sim 3$). Then a long HSS period from MJD 50580 to 50700, when $\Gamma_1$ stays almost the same, is followed by the state transition to the IS during which $\Gamma_1$ decreases to 2.5 (see Fig. \ref{outburst_97_middle}-\ref{outburst_97-98_decay}). We see a similar behavior of the X-ray and radio fluxes and photon indices during the 2005 bright X-ray episode. The only difference from the 1997 episode was that the 2005 rise started in the intermediate state and went very quickly to the HSS (see Fig. \ref{outburst_05_IS}-\ref{outburst_05-06_decay}). After MJD 53800 the source came back to the IS-LHS when $\Gamma_1\lax 2$ (see Fig. \ref{outburst_05-06_decay} and Table 6). Note that typical X-ray and radio fluxes during the IS are about 40-60 ASM counts/s and $\le$50 mJy, respectively.
\subsection{Observational (correlated and non-correlated) characteristics of X-ray and radio emissions} In fact, we do not find any correlation of the X-ray and radio fluxes when the source changes its spectral states. Also we do not find a correlation of the radio activity with the X-ray photon index (see Fig. \ref{outburst_index_radio_X-ray}). However we find a strong correlation of the iron line EW with the radio flux density $S_{15G{\rm Hz}}$ at 15 GHz (see Fig. \ref{outburst_EW_radio}). In Figure \ref{outburst_EW_radio} we also include points which have been recently reported by \cite{nl09}, who analyzed archival HETGS (High Energy Transmission Grating Spectrometer) observations of GRS 1915+105 from the Chandra X-ray Observatory. The prominent HSS events were observed during the 2005 -- 2006 observations around MJD 53490 and MJD 53690 (see Figs. \ref{outburst_05_IS}-\ref{outburst_05-06_decay}). The 2005 -- 2006 observations confirm the index evolution vs BMC normalization (disk flux) found in 1997 -- 1998. The indices of the hard and soft components indeed increase and then saturate at values of $\lax 3$ and 4.2, respectively (see Fig. \ref{outburst_05-06_decay}). Index $\Gamma_1$ started its saturation at lower values of the BMC1 normalization (presumably proportional to the disk mass accretion rate) than those in 1997. It is also worth noting a so-called ``pivoting'' effect, i.e. when the inequality $N_{bmc1} >N_{bmc2}$ switches to $N_{bmc2} >N_{bmc1}$ or vice versa, which is seen in the 1997-1998 and 2005 - 2006 observations. One can see this composite pivoting picture by combining Figs. \ref{outburst_97_rise}-\ref{outburst_05-06_decay}. In fact, these pivoting points correspond to the spectral transitions between adjacent states, LHS-IS to HSS and vice versa. In Figure \ref{outburst_index_norm} we collect all data points for the index-normalization correlation for the rise and decay stages. We do not find much difference in the correlation patterns related to the rise and decay transitions (compare left and right panels), in contrast to what ST09 found in other BHs. We also find that the photon index of the X-ray spectrum is tightly correlated with the quasi-periodic oscillation (QPO) frequency (see Fig. \ref{outburst_index_qpo}), which can be considered as a strong argument that the QPOs and the X-ray Comptonization spectrum emerge from the same geometrical configuration (Compton cloud). However, the flux density $S_{15GHz}$ and the QPO frequency are not correlated with each other when the source changes its spectral states. In Figure \ref{radio_QPO_independence} we show the evolution of the flux density $S_{15GHz}$ at 15 GHz (Ryle Telescope), the \textit{RXTE}/ASM count rate and $\nu_{QPO}$ during the 1997 (left column) and 2005 (right column) rise transitions. The left column demonstrates the presence of QPOs during low radio activity ($<$30 mJy). The right column shows an example of the presence of QPOs when the radio flux is high ($\sim$100 -- 200 mJy). Given that the quasi-periodic oscillations (QPOs) of the X-ray flux are present independently of the radio flux level, we can conclude that the radio appearances and the QPO phenomenon are not closely related and probably the radio and X-ray (oscillating) emission areas have different origins in the source. \subsection{Evolution of energy and power spectra during minor X-ray/radio flares \label{minor_flare}} In Figure \ref{radio_appearances} we show the details of a typical evolution of the X-ray timing and spectral characteristics for minor X-ray/radio flares.
In the top panels of Figure \ref{radio_appearances} we show the flux density $S_{15GHz}$ at 15 GHz (Ryle Telescope) and the \textit{RXTE}/ASM count rate during the 2005 rise transition stage (see also Fig. \ref{outburst_05_IS}). Red points A, B and C on the panel of the \textit{RXTE}/ASM count rate vs time correspond to moments at MJD=53416, 53422 and 53442 (before, during and after the minor X-ray/radio flare), respectively. Points A and C were chosen as the nearest possible points to point B (taking into account the time-table of the archive data). Point B corresponds to the maximum of the radio flux of 300 mJy and an EW of 600 eV. Note that the QPO centroid frequency before the flare (at point A) is at 1.8 Hz and shifts to 0.9 Hz (point C) after the flare. PDSs (left bottom column) are plotted versus the energy spectrum (right bottom column) for the three points A (top), B (middle) and C (bottom) of the X-ray light curve. There are QPOs at points A and C (panels A1, C1) but none at point B (panel B1), at the X-ray flare peak. For the photon spectra (right bottom column) red points stand for the observational data, while the model is shown by its components, with a blue line for {\it BMC1}, a black line for {\it BMC2} and a dashed purple line for the {\it laor} line component. Note that the spectral characteristics undergo noticeable changes during the X-ray/radio flare. Specifically, at the flare peak (point B) the total flux increases at least by a factor of 1.5 with respect to that before the flare, although the photon index of the {\it BMC1} component $\Gamma_1$ changes only from 2.9 (points A and C) to 3.0 (point B). We also studied the energy dependence of the PDS shape and the integrated power variability as a function of the photon energy. In Fig. \ref{radio_appearances} ({\it left bottom column panel}) we show two power spectra for the two energy bands 2-15 keV (red) and 15-30 keV (blue). One can see that the PDSs weakly depend on the energy band. In particular, the value of the low frequency QPO $\nu_{QPO}$ is the same for the low energy and high energy PDSs. \section{Interpretation and discussion of observational results \label{theory}} Before proceeding with the interpretation of the observations, let us briefly summarize them as follows: i. The spectral data of GRS 1915+105 are well fit by two (soft and hard) BMC components for most of the analyzed IS and HSS spectra (see Fig. \ref{two_BMC}), while LHS spectra essentially require only one BMC component; the soft BMC component is very weak (see Tables 1, 3-4, 6 and panel S1 in Fig. \ref{spec_evol_4Sm}). ii. In addition to the two BMC components, 8 IS-HSS spectra require an extra component which can be fitted by a ``high temperature BB-like'' profile (see Fig. \ref{sp_bbody} and Table 7). iii. The Green's function indices of each of these components rise and saturate with an increase of the BMC normalization (disk flux). The photon index saturation levels of the soft and hard components are about 4.2 and 3, respectively (see Fig. \ref{outburst_index_norm}). iv. We also find a tight positive correlation of the QPO frequencies with the index (see Fig. \ref{outburst_index_qpo}) and consequently with the disk flux. v. The iron line EW correlates with the radio flux (see Fig. \ref{outburst_EW_radio}). vi. QPO appearances and their frequency values are not correlated with the radio flux when the source undergoes the spectral changes from the IS to the HSS (see Fig. \ref{radio_QPO_independence}). vii.
We also do not find any correlation between X-ray and radio fluxes and X-ray power-law index (see Fig. \ref{outburst_index_radio_X-ray}). viii. Although we find changes of power and energy spectra during a minor X-ray/radio flares when QPO features disappear in PDS and energy spectrum becomes softer than that was before and after the flare (see Fig. \ref{radio_appearances}). \subsection{Index-QPO and index-$\dot m$ correlations. Index saturation} We confirm the index-QPO correlation in GRS 1915+105 previously found by V03 and ST07. This correlation was indeed predicted by \cite{tlm98}, hereafter TLM98, who argued that the transition layer [Compton cloud (CC)] formed between the Keplerian disk and the central object (NS or BH), contracts and becomes cooler when the disk mass accretion rate $\dot m$ increases. The observational effect of the CC contraction were later demonstrated by ST06, TSA07, TS08 and \cite{mtf09} in Cyg X-1 and XTE J1650-500 respectively. As a result of the transition layer (TL) contraction the QPO low frequency $\nu_L$, which is presumably the TL's normal mode oscillation frequency, increases with $\dot m$ given that $\nu_L$ is inversely proportional to the TL size. On the other hand the index monotonically increases when the TL (CC) cools down. TF04 provided the details of the index-QPO correlation model where they pointed out that this correlation is a natural consequence of the spectral state transition. In this Paper we have firmly established the index correlation with $\nu_L$ along with the index saturation vs the BMC normalization $N_{bmc}$ (Eq. \ref{bmc_norm}) for the soft and hard Comptonized components of the X-ray spectra of GRS 1915+105 (see Fig. \ref{outburst_index_norm}). Below we show that $N_{bmc}$ is actually proportional to mass accretion rate in the disk. Namely the disk flux $L$ (as a source of soft photons for Comptonization, see e.g. Fig. \ref{geometry} for the geometry of soft photon illumination of Comptonized region) can be represented as \begin{equation} L=\frac{GM_{bh} \dot M}{R_*}=\eta(r_*) \dot m_d L_{\rm Ed}. \label{lumin} \end{equation} Here $R_{*}=r_{*}R_{\rm S}$ is an effective radius where the main energy release takes place in the disk, $R_{\rm S}=2GM/c^2$ is the Schwarzschild radius, $\eta=1/(2r_*)$, $\dot m_d=\dot M_d/\dot M_{crit}$ is dimensionless mass accretion rate in units of the critical mass accretion rate $\dot M_{crit}=L_{\rm Ed}/c^2$ and $L_{\rm Ed}$ is the Eddington luminosity. On the other hand \begin{equation} L_{\rm Ed}=\frac{4\pi GMm_pc}{\sigma_{\rm T}} \label{ed_lumin} \end{equation} i.e. $L_{Ed}\propto M$ and thus using Eqs. (\ref{lumin}-\ref{ed_lumin}) we obtain that \begin{equation} L\propto\eta(r_*) \dot m_d m. \label{lumin_m} \end{equation} In HSS when the inner disk radius $R_{*}$ reaches its lowest value $R_{*}\gax3R_{\rm S}$, the efficiency of the gravitational energy release $\eta(r_*)$ reaches its highest value and thus the disk flux increases only when the disk mass accretion rate increases (see Eq. \ref{lumin_m}). Given that BMC normalization $N_{bmc}$ is proportional to $\dot m_d$ in HSS the observational effect of the index saturation with $N_{bmc}$ is translated to the index saturation with $\dot m_d$. First we interpret the index saturation related to the hard Comptonization component ({\it BMC1}). We suggest that this BMC1 component of the emergent spectrum is presumably originated in the converging flow onto a compact object, in our case to the BH (see Fig. \ref{geometry}). 
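For orientation, the relations between the disk luminosity, the Eddington luminosity and the dimensionless accretion rate quoted above (Eqs. \ref{lumin}-\ref{lumin_m}) are illustrated numerically in the minimal sketch below; the black hole mass and accretion rate used there are assumed, illustrative values rather than parameters adopted for GRS 1915+105.
\begin{verbatim}
import math

# Worked example of L_Ed = 4 pi G M m_p c / sigma_T and
# L = eta(r_*) * mdot_d * L_Ed with eta = 1/(2 r_*)  (cgs units).
# The BH mass and mdot_d below are assumed, illustrative values.

G, c    = 6.674e-8, 2.998e10       # cm^3 g^-1 s^-2, cm s^-1
m_p     = 1.673e-24                # g
sigma_T = 6.652e-25                # cm^2
M_sun   = 1.989e33                 # g

def eddington_luminosity(m_solar):
    return 4.0 * math.pi * G * (m_solar * M_sun) * m_p * c / sigma_T

def disk_luminosity(m_solar, mdot_d, r_star=3.0):
    return mdot_d * eddington_luminosity(m_solar) / (2.0 * r_star)

print(eddington_luminosity(10.0))         # ~1.3e39 erg/s
print(disk_luminosity(10.0, mdot_d=1.0))  # ~2.1e38 erg/s for r_* = 3
\end{verbatim}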
In fact, in HSS the plasma temperature of the accretion flow is comparable with the color temperature of the disk photons (see TF04). Thus, in order to explain the high energy tail observed in HSS of BH sources, one should assume either an unknown source of high energy non-thermal electrons [see e.g. \cite{cop99}] or consider effects of energy transfer from the converging flow electrons to the photons {emitted} from the innermost part of the accretion flow. Optical depth of the converging flow $\tau$ is proportional to $\dot m_d$ if one assumes that disk accretion flow continuously goes to the converging flow and there are no other components in the accretion flow [see e.g. a model of two component accretion flow by \cite{CT95}, hereafter CT95]. This effect of the index saturation vs optical depth of the bulk flow (BM) $\tau$ was first predicted by \cite{tz98} and then it was subsequently reproduced in Monte-Carlo simulations by \cite{lt99}, hereafter LT99. It is worth noting that the index saturation effect is an intrinsic property of the bulk motion onto a BH given that the spectral index $\alpha=\Gamma-1$ is a reciprocal of the Comptonization parameter $Y$ [see this proof in ST09 and \cite{BTK07}] which saturates when the BM optical depth, or $\dot M$, increases. In fact, the Y-parameter is a product of the average photon energy exchange per scattering $\eta$ and the mean number of photon scattering $N_{sc}$, i.e. $Y=\eta N_{sc}$. For the thermal Comptonization case, $Y\sim (4kT/m_ec^2)\tau^2$ given that in this case $\eta=4kT/m_ec^2$ and $N_{sc}\sim \tau^2$ for $\tau\gg 1$ \citep[see e.g.][]{rl79} and, thus, the thermal Comptonization spectral index is \begin{equation} \alpha\sim [(4kT/m_ec^2)\tau^2]^{-1}. \label{alpha_plmm} \end{equation} In the case of converging flow, the preferable direction for upscattered photons is the direction of bulk motion onto the BH, i.e along the radius. Note that the fractional photon energy change is $$ \Delta E/E=(1-\mu_1 V_{R}/c)/(1-\mu_2 V_{R}/c). $$ where $\mu_1$ and $\mu_2$ are the cosines of the angles between the direction of the electron velocity ${\bf n}={\bf V}_R/V_R$ and direction of incoming and outcoming (scattered) photons respectively. The number of scatterings of the up-Comptonized photons $N_{sc}$ can be estimated as a ratio of the radial characteristic size of the converging flow $L$ and the free path $l$ in the direction of motion, namely $N_{sc}\propto L/l=\tau$ given that $\Delta E/E$ has a maximum at $\mu_2=1$ for given $\mu_1$ and $V_R$. On the other hand the efficiency per scattering for bulk motion flow $\eta\propto 1/\tau$ when $\tau\gg 1$ [\cite{lt07}, hereafter LT07] {hence for bulk motion Comptonization, the Y-parameter does not depend on $\tau$} for high values of $\tau$ or dimensionless mass accretion rate $\dot m$. Thus one can conclude that {\it the Comptonization parameter $Y=\eta N_{sc}$ and hence the energy index $\alpha=Y^{-1}$} {\it saturate to a constant value when optical depth (or mass accretion rate) of the BM flow increases}. However the index saturation value is determined by the plasma temperature during a transition [see LT99]. The plasma temperature strongly depends on the mass accretion rate in the bulk motion region $\dot M_{bm}$ and its illumination by the disk photons $L$ (see TLM98 and TF04). For higher $\dot M_{bm}$ and $L$ the plasma temperature is lower. The level of the index saturation decreases when the plasma temperature in the bulk motion increases (TF04). 
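The contrast between the two regimes discussed above can be made explicit with a small numerical sketch: for thermal Comptonization the index of Eq. (\ref{alpha_plmm}) keeps decreasing as $\tau$ grows, whereas with $\eta\propto 1/\tau$ and $N_{sc}\propto\tau$ the bulk-motion Comptonization parameter, and hence the index, levels off. The proportionality constant used for the bulk-motion case below is an arbitrary illustrative choice, intended only to show the trend.
\begin{verbatim}
# Trend illustration only: thermal index alpha ~ [(4kT/m_e c^2) tau^2]^(-1)
# falls with tau, while for bulk motion eta ~ eta0/tau and N_sc ~ tau give
# Y ~ eta0, i.e. a tau-independent (saturated) index.  eta0 is arbitrary.

ME_C2_KEV = 511.0

def alpha_thermal(kT_keV, tau):
    return 1.0 / ((4.0 * kT_keV / ME_C2_KEV) * tau ** 2)

def alpha_bulk(tau, eta0=0.25):
    n_sc = tau                 # number of scatterings ~ tau
    eta  = eta0 / tau          # energy gain per scattering ~ 1/tau
    return 1.0 / (eta * n_sc)  # Y = eta * N_sc is constant

for tau in (2, 5, 10, 20):
    print(tau, round(alpha_thermal(30.0, tau), 2), alpha_bulk(tau))
\end{verbatim}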
Thus the index saturation levels can be different from source to source depending on the strength of the disk. Looking at Figure \ref{outburst_index_norm} one can also notice that the index $\Gamma_1$ starts its saturation at different values of the BMC normalization ($\propto\dot m_d$) for different types of active episodes. In fact, the index should saturate with the mass accretion rate in the converging flow $\dot m_{bm}$, which is a sum of the disk mass accretion rate $\dot m_{d}$ and that in the sub-Keplerian flow (CT95). Hence one can argue that this lower value of $\dot m_d$ at which the index saturates can be a sign of the presence of an extra (sub-Keplerian) component in the accretion flow onto the BH in GRS 1915+105. \cite{lt09}, hereafter LT09, study the index-$\dot m$ correlation and also a modification of the disk blackbody spectrum due to Comptonization in the optically thick CC, which is formed due to accumulation of accreting matter in the TL. They indeed explain the saturations of the indices of the soft and hard components of the resulting spectrum. Specifically, LT09 show that the gravitational energy of the accretion flow is released in the optically thick and relatively cold TL when the mass accretion rate $\dot m_d$ is higher than 1. The level of the index saturation depends on the radial velocity in the transition layer (TL). LT09 also show that the observable saturation index of the soft BMC component, $\Gamma_2\sim 4.2$, can be reproduced in their Monte Carlo simulations for values of the TL radial velocity $\gax 0.05$ c. \subsection{Physical origin of the ``high temperature BB-like'' component?} In 8 of the IS-HSS spectra we find observational evidence of a bump around 20 keV which can be fitted by a ``$\sim 4.5$ keV BB-like'' profile (see Fig. \ref{sp_bbody} and Table 7). One can argue that this observable bump at 20 keV is a signature of the Compton reflection bump [see e.g. \cite{bst74}, ST80, CT95, and \cite{mz}]. But this interpretation encounters difficulties given that the hard power-law tails of these spectra are too steep to form the Compton bump. Indeed, ST80 and later LT07 demonstrated that the Compton bump, as a result of photon accumulation due to downscattering of hard photons in a cold medium (for example the disk), cannot be produced if the photon index of the incident hard photon spectrum is $\Gamma>2$. In fact, as one can see from Table 7, in all spectra where we detect this $\sim 20$ keV feature the index of the hard BMC component is $\Gamma_1>2$ (the indices vary between 2.5 and 3). In principle, this bump may also be a result of photoelectric absorption of the photons below 10 keV in the cold medium (disk) even if the incident spectrum is very steep. The photoelectric absorption cross-section is $\sigma_{ph}\sim (7.8~\rm keV/E)^3\sigma_{\rm T}$, where E is the photon energy and $\sigma_{\rm T}$ is the Thomson cross-section (e.g. CT95). However, \cite{lat04} and \cite{rm08} show that the ionization of such a disk by the intensive X-ray radiation during the IS-HSS invalidates the basic assumption of the presence of cold material in the innermost part of the source. Note that the hard tail of the X-ray spectrum in the IS-HSS is formed in the converging flow (CF), i.e. in the innermost part of the accretion flow, because we see the CF signature (the index saturation) when the source goes to the IS-HSS. On the other hand \cite{t02} argued that the specific spectral and timing features of X-ray radiation could be seen in BH sources only.
Particularly, he stated that the photon-photon interaction of the effectively upscattered photons results in powerful pair production near the BH horizon. Indeed, a large fraction of the upscattered photons going inward are deflected by the relativistic free-falling electrons in the outward direction [the aberration effect of light, see e.g. \cite{rl79} and Appendix A]. These diverted upscattered photons of energy $E_{up}$ interact with the flux of incoming photons of energy $E_{in}$, and ultimately this interaction leads to pair creation if the condition $E_{up}E_{in}\gax (m_ec^2)^2$ is satisfied. Note that the free-fall bulk motion with Lorentz factor $\gamma\gg1$ should be very close to the horizon, i.e. \begin{equation} \frac{\Delta R}{R_{\rm S}}\approx \frac{1}{\gamma^2} \label{distance_to_horizon} \end{equation} where $\Delta R=R-R_{\rm S}$ is the radial distance to the horizon. Thus the created positrons extensively interact with the accreting electrons there and therefore the annihilation line photons are created and distributed over a relatively narrow shell near the BH horizon. Specifically \begin{equation} \Delta R \lax3\times10^{4}\left(\frac{10}{\gamma}\right)^2\frac{m}{10}~{\rm cm}. \label{distance_to_horizon_m} \end{equation} The proper energy (in the comoving flow frame) of the annihilation line photons, $E_{511}$, should be seen by the Earth observer (in the zero frame) at the redshifted energy \begin{equation} E_0=(1-R_{\rm S}/R)^{1/2}E_{511}\approx\frac{E_{511}}{\gamma} \label{redshift} \end{equation} where $(1-R_{\rm S}/R)^{1/2}\approx(\Delta R/R_{\rm S})^{1/2}=1/\gamma$ (see Eq. \ref{distance_to_horizon}). In other words, the line energy displacement due to gravity as viewed by a far away observer in free space is $z+1=1/(1-R_{\rm S}/R)^{1/2}=\gamma$. A significant fraction of these strongly gravitationally redshifted annihilation line photons can be directly seen by the Earth observer as a bump located at $\sim 20$ keV, presumably related to the representative value of $\gamma\sim 20$. Laurent \& Titarchuk (2009, in preparation) made extensive Monte Carlo simulations of the X-ray spectral formation in the converging flow taking into account photon-electron, photon-photon and pair-electron interactions. These simulations confirm our expectation that in some cases the emergent spectra of the IS and HSS consist of the redshifted annihilation line located at $\sim 20$ keV, which can be fitted by a ``high temperature BB-like'' profile, and also that the simulated spectra extend to energies of the order of a few MeV [see Fig. \ref{sp_bbody} and \cite{grove98} for details of the IS-HSS spectral components]. \subsection{Radio$-$X-ray connection} \cite{mfk05}, hereafter MFK05, reported on correlations between the radio luminosity and X-ray timing (QPO) features in X-ray binary systems containing low magnetic field neutron stars and black holes. The MFK05 conclusions on the radio-QPO correlation are based on observations of seven neutron star sources and one black hole, GX~339-4. For GX~339-4 they used data only in the low-hard state before and after the outburst. \cite{ts05}, based on the analysis of {\it RXTE} data from the NS 4U1728-34, confirmed a correlation of X-ray and radio emissions with LF QPOs through all spectral states for this particular NS source. However, we do not find a real low-hard state in GRS 1915+105, for which the photon index $\Gamma_1$ should be about 1.5, as in GX 339-4 (see ST09), and therefore we cannot confirm or refute a radio-QPO correlation in the GRS 1915+105 LHS similar to that found by MFK05 in GX 339-4.
On the other hand we find that LFQPOs do not correlate with the radio flux while they correlate with the hard component photon index $\Gamma_1$ through IS and HSS (see Fig. \ref{radio_QPO_independence} and Fig. \ref{outburst_index_qpo} respectively). Absence of correlation between radio luminosity and QPO can be explained by the different origins of these quantities. While the QPO phenomenon is probably related to the transition layer oscillations (see \S4.1), it is confirmed by the index-QPO correlation, the radio emission is presumably originated in the wind or wide open jet. Furthermore, because the radio flux and iron line EW are strongly correlated (see Fig. \ref{outburst_EW_radio}) one can conclude that the iron line is also formed in the wind [see more on the line formation origin in LT07, \cite{stl09} and \cite{tls09}]. It is also worth noting that the X-ray flux, and photon index do not correlate with the radio flux (see Fig. \ref{outburst_index_radio_X-ray}). It can be explained by the different mechanisms of the energy releases in X-ray and radio. X-ray radiation is presumably formed in the innermost part of the disk and in the transition layer (or Compton cloud) while the radio emission is presumably formed in the jet or winds which are probably launched at the outskirts of the disk [see e.g. \cite{mm99}, \cite{m00}, \cite{mm03} and TSA07]. Using the aforementioned correlations of EW with radio flux we can suggest that some fraction of accretion flow may go to the outflow. Probably the powerful outflow is launched at outer parts of the accretion disk and it is not by chance that we can see this strong correlation of the iron line EW with the radio flux. Thus there are two ways for the matter to proceed: i. in the outflow if the local mass accretion rate in the disk exceeds the critical value (which is proportional to radius, see TSA07), ii. in the disk where the matter proceeds and it ultimately converges onto BH. This final stage of the accretion we observe as a saturation of index with mass accretion flow (converging flow signature). Whereas we do not find any correlation between radio and X-ray fluxes during the spectral transition, although, probably, we see an indication of X-ray-radio connection during a minor flare event. In Figure \ref{radio_appearances} we show the spectral and timing properties of X-ray emission of a typical minor X-ray/radio flare As one can see from this Figure (see low panels there) the PDS and energy spectra are different at the peak of the flare from those before and after the flare. Specifically the soft component is more pronounced but QPO features are not seen at the peak of the flare while they are present before and after the flare. Moreover the flat part (white noise) of the peak PDS is extended to higher frequencies (break frequency $\nu_b\sim10$ Hz at the peak vs $\nu_b\sim 2$ Hz before and after the flare). In \S \ref{minor_flare} we reported the results of the study the energy dependence of the PDS shape and integrated power variability as a function of the photon energy. In Figure \ref{radio_appearances} we show two power spectra for two energy bands 2-15 keV (red) and 15-30 keV (blue). One can see that PDSs weakly depend on the energy band. We can suggest that the sizes of the photon emission areas $L_{CC}$ related to these two energy bands are also the same. 
In fact, \cite{tbw01} argue that $\nu_{QPO}$ is proportional to the ratio of magneto-acoustic (plasma) velocity $V_{MA}$ and Compton cloud size $L_{CC}$ and hence one can conclude that the emission areas are the same because $\nu_{QPO}$ are the same for these two energy bands. \cite{le74}, hereafter LE74, suggest that the thin disk is always unstable in the inner region when radiation pressure dominates gas pressure. Using numerical simulations \cite{l74} found that the innermost region of a disk around a BH is secular unstable against clumping of the gas into rings which observational appearances can be seen by the Earth observer as X-ray-radio flares. We can speculate that an increase of the soft and hard components in the observable X-ray spectrum at the flare peak (compare panels B2 and A2, C2 of Fig. \ref{radio_appearances}) can be a sign that the radiation pressure gets to dominate in the inner disk region. On the other hand the sign of the destruction of some part of the innermost part of the disk, as an effect of the high pressure instability, should be seen in the PDS. TSA07 argue that the break frequency $\nu_b$ in PDS is proportional to the diffusion frequency $\nu_d=1/t_{visc}\sim {\hat \nu}/R^2$ where $\hat \nu$ is a viscosity, $t_{visc}$ is a viscous timescale and $R$ is a radial size of the innermost part of the disk. Given that $\nu_b$ increases at the peak with respect of that before and after the flare (compare panels B1 and A1, C1 of Fig. \ref{radio_appearances}) it can imply that the size $R$ decreases when $\nu_b$ increases, i.e. some part of the innermost part of the disk is probably destroyed. In terms of the diffusion theory, this disk instability arises in the inner region where the viscous stress $W$ is a decreasing function of the surface density $\Sigma$ and thus an effective diffusion coefficient of the nonlinear equation for $\Sigma$ becomes {\it negative} there [see Eqs. 4-5 in LE74]. On the other hand Makeev \& Titarchuk (2009, in preparation), hereafter MT09, obtain this disk instability as a solution the linear diffusion equation for $\Sigma$ and they do not specify any (ad hoc) assumption about the nature of the disk viscosity (cf. LE74). They just assume the power-law viscosity distribution over the disk and they use the TLM98 angular velocity distribution in the transition layer (TL). MT09 study the TSA07 model of the PDS formation and they find that this model predicts the existence of the two distinct zones within the TL of the black hole or neutron star, with oppositely different types of diffusion of perturbation taking place in each zone. A simple fact of the change of sign of the angular velocity derivative in the linear diffusion equation for $\Sigma$ at the critical radius $R_{max}$ results in a {\it negative} diffusion coefficient of the equation in the interval $R_{in}<r<R_{max}$. Moreover the change of sign of the diffusion coefficient at $R_{max}$ leads to a turnover of the angular momentum transfer, changing it towards the central body of the accreting system, instead of being pushed outwards as in the Keplerian disk. One of the considered scenarios implies an unstable diffusion of the perturbations in the TL inner zone which might be an indication of the development of a X-ray flare followed by a radio flare. The absence of any quasiperiodic oscillations in the TL and raising $\nu_b$ are possible indications of this instability. 
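As a rough consistency check of the scalings invoked above ($\nu_{QPO}\propto V_{MA}/L_{CC}$ and $\nu_b\propto \hat\nu/R^2$), the observed frequency changes around the minor flare can be translated into size ratios, under the assumption that $V_{MA}$ and the viscosity $\hat\nu$ do not change. The short sketch below is only a back-of-envelope illustration, not part of the modeling in this Paper.
\begin{verbatim}
# Back-of-envelope use of nu_QPO ~ V_MA / L_CC and nu_b ~ nu_hat / R^2,
# assuming V_MA and nu_hat are unchanged across the minor flare.
# Frequencies are those quoted in the text (QPO: 1.8 Hz before vs 0.9 Hz
# after the flare; break: ~2 Hz before/after vs ~10 Hz at the peak).

def cloud_size_ratio(nu_qpo_before, nu_qpo_after):
    # L_CC ~ V_MA / nu_QPO
    return nu_qpo_before / nu_qpo_after

def inner_radius_ratio(nu_b_before, nu_b_peak):
    # nu_b ~ nu_hat / R^2
    return (nu_b_before / nu_b_peak) ** 0.5

print(cloud_size_ratio(1.8, 0.9))     # ~2: Compton cloud size roughly doubles
print(inner_radius_ratio(2.0, 10.0))  # ~0.45: inner disk region shrinks at peak
\end{verbatim}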
\section{Conclusions \label{summary}} We concentrate our efforts on the study of the correlation between the spectral index and the accretion disk luminosity. We argue that the shape of the correlation pattern can contain a direct BH signature. Namely, we show both observationally and theoretically that the index saturates with mass accretion rate, which is a signature of a converging flow. This index saturation effect can exist only in BH sources. Also, this correlation pattern carries the most direct information on the BH mass and the distance to the source (see ST09). We compiled the state transition data from GRS 1915+105 collected with the \textit{RXTE} mission. We examined the correlation between the photon index of the Comptonized spectral component, its normalization and the QPO frequency (see Figs.~\ref{outburst_97_rise}-\ref{outburst_05-06_decay}, \ref{outburst_index_norm}-\ref{outburst_index_qpo}). The spectral data of GRS 1915+105 are well fitted by two (soft and hard) BMC components for most of the analyzed IS and HSS spectra (see Fig. \ref{two_BMC}), while LHS spectra essentially require only one BMC component. In addition to the two BMC components, 8 IS-HSS spectra require an extra component which can be fitted by a ``high temperature BB-like'' profile. We suggest this ``BB'' component is probably a signature of the redshifted annihilation line formed in a very narrow shell near the BH horizon due to the high photon compactness taking place during the intermediate and high-soft states (see Fig. \ref{sp_bbody} and Table 7). A remarkable result of our study is that the index--normalization (mass accretion rate) correlation seen in GRS 1915+105 is predicted by the theory of the converging flow. We demonstrate that the strong index saturation vs disk flux seen in the index-disk flux correlation (see Fig.~\ref{outburst_index_norm}) is an observational signature of the presence of the converging flow, which should only exist in BH sources. In other words, {\it this index saturation effect provides robust observational evidence for the presence of a black hole in GRS~1915+105}. We also find a tight positive correlation of the QPO frequencies with the index (see Fig. \ref{outburst_index_qpo}) and consequently with the disk flux. Our comprehensive analysis of the X-ray and radio emissions in GRS 1915+105 shows that QPOs are seen independently of the radio activity of the source during the spectral transition from the low-hard to the high-soft state. Specifically, these QPO features have been detected at any level of the radio flux and even when the radio emission is at the noise level (see the left and right panels in Fig.~\ref{radio_QPO_independence}, respectively). We also do not find any correlation between the X-ray and radio fluxes and the X-ray power-law index (see Fig. \ref{outburst_index_radio_X-ray}). However, we establish a strong correlation between the equivalent width of the iron line and the radio flux in GRS~1915+105 (see Fig. \ref{outburst_EW_radio}). We are grateful to the referee whose constructive suggestions helped us to improve the quality of the paper. We acknowledge productive discussions with Nikolai Shaposhnikov and Ada Paizis, and we would also like to thank Guy Pooley, who kindly provided us with the {\it Ryle Radio Telescope} data.
\section{\label{sec:Intro} INTRODUCTION} Using strain in thin films to achieve expanded or contracted lattices of novel materials has proven fruitful in producing new phases or functionalities for known bulk compounds. An interesting avenue of investigation is the growth of anisotropically strained films of perovskite oxides. The heavily hole-doped perovskite manganites that order in the C-type antiferromagnetic (AF) arrangement,\cite{Kajimoto,Tobe} e.g. Nd$_{1-x}$Sr$_{x}$MnO$_3$ with $0.75\leq x\leq 0.9$, are of interest in this regard. Single crystals of compounds with $x\gtrsim 0.8$ have not been successfully grown to our knowledge. The C-type AF state consists of a one-dimensional ordering of $d_{3z^2-r^2}$ orbitals of the Mn$^{3+}$ sites oriented along the elongated $c$ axis of the tetragonal structure, facilitating ferromagnetic (FM) coupling of Mn$^{3+}$ and Mn$^{4+}$ spins along $c$ and an AF spin alignment along the $a$ axes.\cite{CAFExpt} $a$-axis oriented films under anisotropic strain should allow for the manipulation of the FM double-exchange and AF superexchange interactions responsible for this ordering.\cite{CAFTheory,CAFStability} They should make accessible the study of intrinsic electrical anisotropy, a motivation for which is the large static dielectric constant ($\epsilon\sim 100$) observed\cite{CohnEps} in polycrystalline Ca$_{0.2}$La$_{0.8}$MnO$_3$, and hypothesized to arise from an enhanced polarizability along the one-dimensional $c$-axis in the AF phase. Relevant to the successful growth of $a$-axis films of these compounds are thermal expansion coefficients along the $a$ and $c$ axes that are opposite in sign in a broad temperature range from the cubic-tetragonal transition to well below room temperature (a consequence of the orbital ordering).\cite{Tobe} This differential thermal expansion induces highly anisotropic and temperature-dependent in-plane strain. We find that thicker films under anisotropic tensile strain within the substrate plane develop uniaxial crack arrays with regular crack spacing on the order of 1 $\mu$m. These crack arrays yield in-plane electrical resistivities that are highly anisotropic with an anisotropy ratio that varies approximately exponentially with film thickness, reaching values $\gtrsim 10^3$. The N\'eel temperatures of films under tensile strain are enhanced by 25~K, independent of thickness, over those of the bulk target and compressively strained films, consistent with an increased stability of the orbital and spin order that is controlled principally by the $c$-axis length. \section{\label{sec:Expt} EXPERIMENT} The polycrystalline target of nominal composition Nd$_{0.2}$Sr$_{0.8}$MnO$_3$ (NSMO) was prepared by conventional solid state reaction as described elsewhere.~\cite{Terashita} Thin films were grown by pulsed laser deposition on pseudocubic (110)-oriented (LaAlO$_3$)$_{0.3}$-(Sr$_2$AlTaO$_6$)$_{0.7}$ (LSAT; $a=0.3868$~nm) and LaAlO$_3$ (LAO; $a=0.379$~nm), and orthorhombic (100)-oriented NdGaO$_3$ (NGO; $a=0.543$~nm, $b=0.550$~nm, $c=0.771$~nm) substrates. An excimer laser (KrF: $\lambda=248$~nm) with a repetition rate of 10~Hz and an energy density of $\sim$1-2~J/cm$^{2}$ was employed. The substrate temperature was $750^{\circ}$C and the oxygen partial pressure was 260~mTorr. Following deposition, films were cooled in $\sim 760$ Torr O$_2$ at $1^{\circ}$C/min. to $500^{\circ}$C and held for 1 hour before cooling to room temperature.
The crystallographic orientation, film thickness~($t$) and lattice constants were evaluated using a Philips X'Pert x-ray diffractometer (Cu K$_{\alpha}$ radiation). Surface morphology was studied with scanning electron microscopy (SEM). Four-probe, in-plane dc resistivity (with silver epoxy contacts) was measured on specimens with typical dimensions $6\times 1.3 \times t$~mm$^3$. The room-temperature thermopower was measured for all specimens with a steady-state method using gold leads and a chromel-constantan thermocouple. The target lattice constants, $a=0.5390(5)$~nm and $c=0.766(1)$~nm, are in reasonable agreement with those reported by Kajimoto {\it et al.}\cite{Kajimoto} for crushed, melt-grown crystals at lower doping. The value of the N\'eel temperature, $T_N\simeq 242$~K, was inferred from the peak in $d\log\rho/d(1/T)$ (further discussed below). \section{\label{sec:Disc} RESULTS AND DISCUSSION} \subsection{\label{Structure} Lattice Constants and Morphology} All of the films are orthorhombic with their longer $c$ axes in the film plane. Taking the $a$ axis in the growth direction, the (100) orientation of the films is indicated in x-ray diffraction $2\theta$-$\omega$ scans, shown for 130-nm-thick films grown on LSAT (110) and LAO (110) in Fig.~\ref{XRD} (a) and (b), respectively, by the presence of only $(2h,0,0)$ film reflections near the $(h,h,0)$ substrate reflections. Phi scans of asymmetric film and substrate reflections, shown in Fig.~\ref{PhiScans} for the 130-nm film grown on LSAT, confirm the cube-on-cube orientation with NSMO [010]$\parallel$LSAT [1$\overline{1}$0] and NSMO [001]$\parallel$LSAT [001]; the same orientation relationship was found for films grown on LAO. X-ray results for the films on NGO indicate NSMO [010]$\parallel$NGO [010] and NSMO [001]$\parallel$NGO [001]. The lattice constants for all films were determined from reciprocal space maps in the vicinity of the film (600), (440), (620), and (404) reflections, with nearby substrate reflections serving as internal references. The corresponding NGO reflections have the same indices as those of the films. For (110)-oriented LAO and LSAT the corresponding substrate reflections are (330), (400), (420), and (222), respectively. Figure~\ref{LatticeConstants} shows lattice constants as a function of film thickness for films on the three substrates. For all substrates and thicknesses, the film $c$ axes are fully strained to those of the substrate. The $a$ and $b$ axis lengths are clearly relaxed in response to the compressive (LAO) and tensile (LSAT, NGO) in-plane strain along the film [010] directions, with the most substantial effect occurring for NGO. For LAO films the result is an expansion along [010] and contraction along the film normal ([100]), whereas for NGO films the $b$ axes contract and the $a$ axes expand. For both substrates the thickest film is tetragonal. The films on LSAT exhibit more modest contractions along both [010] and [100] with increasing thickness, maintaining tetragonality. In spite of these differences in behavior for the NGO and LSAT films under tensile strain, their unit cell volumes (Fig.~\ref{CellVolume}) show very similar decreases with increasing thickness. Scanning electron micrographs (Fig.~\ref{SEM}) demonstrate that the relaxation of tensile strain along [010] for the films on NGO and LSAT is accommodated by the formation of unidirectional crack arrays running along the film [001] direction.
The spacing of these cracks, determined from analyses of larger-area images from films on each of the substrates, is approximately described by a log-normal distribution (shown in Fig.~\ref{SEM}~(c) for the 65-nm NGO film), with median values of $\sim 1-2\mu{\rm m}$ and $\sim 3-4\mu{\rm m}$ for LSAT and NGO films, respectively. These distributions did not change appreciably with thickness for either substrate. The width of the cracks themselves varies within a given film (particularly evident in Fig.~\ref{SEM} (a) for the NGO film), and the mean crack width is greater in the thicker films. Though cross-sectional microscopy was not pursued, the transport data, as we discuss further below, imply that the cracks do not penetrate through to the substrate. Similar crack arrays were observed previously\cite{Olsson} for [110]-oriented YBa$_2$Cu$_3$O$_{7-\delta}$ and PrBa$_2$Cu$_3$O$_{7-\delta}$ films grown on [110] SrTiO$_3$, where they were attributed to anisotropic thermal expansion mismatch between substrate and film upon cooling from the growth temperature. The same mechanism appears applicable to the present oxide film-substrate systems since, as noted above, NSMO has thermal expansion coefficients of opposite sign: positive along [010] and negative along [001]. The lattice mismatch, $(a_{sub}-a_{NSMO})/a_{sub}$ ($a_{sub}$ and $a_{NSMO}$ are the substrate and bulk target lattice constants, respectively), is shown as a function of temperature in Fig.~\ref{Misfit} along the [010] and [001] film directions for each of the three substrates. These curves were computed using published thermal expansion data for the substrates.\cite{LAOLSATExp,NGOExpan} The target lattice constants were measured up to 200$^{\circ}$C and their temperature dependencies found to match well those of Tobe {\it et al.} (Ref.~\onlinecite{Tobe}) measured over a broader temperature range for compounds with a slightly different stoichiometry; the target data were then extended to higher temperature using the suitably scaled expansion data. At the growth temperature (750$^{\circ}$C) the tensile mismatch for LSAT and NGO is greatest along the film [001] direction. Upon cooling, the mismatch along [001] decreases since the $c$ axis expands, while that along [010] increases. For the films on NGO the [010] mismatch approaches 2\% at room temperature, the same amount by which the $b$-axis lattice parameter decreases abruptly with increasing thickness. Although the calculated compressive mismatch along [010] for films on LAO is only 0.5\% at room temperature, the compressed lattice is only stable at low thicknesses. Evidently there is a comparable critical thickness for NSMO above which both compressed and expanded lattices are relaxed. In spite of the linear crack arrays that develop in the thicker films under tensile strain, all of the films remain smooth, as indicated by well-defined Kiessig oscillations seen in x-ray reflectivity measurements over an extended angular range (Fig.~\ref{XRR}). Reflectivity simulations\cite{Spirkle} imply a film surface roughness of $\sigma\sim 0.4$~nm (rms) for the thinnest films, comparable to the perovskite unit cell dimension, with a modest increase for thicker films. \subsection{\label{Transport} Transport Properties} The dc electrical resistivity was measured for each film along the [010] and [001] directions as a function of temperature for $T\leq 325$~K. Film resistances exceeding 1 G$\Omega$ prevented measurements below $\sim 50$~K.
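The N\'eel temperatures quoted below are extracted from the maximum of $d\ln\rho/d(1/T)$, as noted in Sec.~\ref{sec:Expt}. A minimal numerical sketch of this peak-finding step is included here for illustration only; the temperature and resistivity arrays are hypothetical placeholders, not measured data from this work.
\begin{verbatim}
import numpy as np

def neel_temperature(T, rho):
    """Estimate T_N as the location of the peak of d(ln rho)/d(1/T).

    T   : 1D array of temperatures (K), monotonically increasing
    rho : 1D array of resistivities measured at those temperatures
    """
    inv_T = 1.0 / T
    d_lnrho_d_invT = np.gradient(np.log(rho), inv_T)
    return T[np.argmax(d_lnrho_d_invT)]

# Hypothetical activated-like data with a kink near 250 K (illustration only)
T = np.linspace(100.0, 325.0, 200)
rho = np.exp(1500.0 / T) * (1.0 + 0.5 * np.tanh((250.0 - T) / 10.0))
print(neel_temperature(T, rho))
\end{verbatim}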
The room-temperature resistivity anisotropy, $\rho_b/\rho_c$, is found to increase approximately exponentially with increasing film thickness for the LSAT and NGO films (Fig.~\ref{RvsThickness}), reflecting principally an increase along [010] due to cracking. For the thickest films $\rho_b/\rho_c\simeq 1000$. The corresponding ratio for the LAO films also increases with thickness, reaching $\rho_b/\rho_c\simeq 2.4$ for the 150-nm film. As this thickest film is tetragonal, with unit cell volume and $T_N$ closest to those of the bulk target (Fig.~\ref{LatticeConstants}), we take the latter value to be representative of the intrinsic anisotropy of the NSMO compound. To investigate the possibility that variations in oxygen content with thickness might contribute to the changes in resistivity, the room-temperature thermopower (TEP) was measured for all films. The TEP is a sensitive measure of Mn valence that is largely independent of the cation in this region of the manganite phase diagram\cite{Raveau,CohnCLMO}. The value measured for the target was $-80\mu{\rm V/K}$. The thermopowers for the films did not vary by more than $\sim 10\%$ for all thicknesses, typically falling in the range $-(80-100)\mu{\rm V/K}$, as shown for the NGO films in the inset of Fig.~\ref{RvsThickness}. Thus significant and systematic variations in the oxygen content of the films with thickness are ruled out. Fig.~\ref{NSMOrhovsT} shows the $T$ dependence of the in-plane resistivities for the 67-nm film grown on (110) LSAT along with that of the target. Interestingly, in spite of the substantial resistive anisotropy ($\rho_b/\rho_c\simeq 35$ for this film), the $T$ dependencies are essentially identical along the [010] and [001] directions (inset, Fig.~\ref{NSMOrhovsT}). This implies that the cracks do not penetrate through to the substrate, since a stronger temperature dependence would otherwise be expected, associated with thermally activated tunnel conduction as observed in other cracked manganite films.\cite{Fisher} The antiferromagnetic transition temperatures, $T_N$, determined from maxima in $d\ln\rho(T)/d(1/T)$ {\it vs.} $T$ (inset, Fig.~\ref{NSMOrhovsT}), are $265\pm 3$~K for LSAT and NGO films, and $T_N=240\pm 3$~K, the same as the bulk target, for films grown on LAO. In G-type AFs such as Ca$_{1-y}$Sr$_y$MnO$_3$, expansion of the lattice due to Sr substitution\cite{Chmaissem} enhances AF superexchange interactions (and $T_N$) due to an increase in the Mn-O-Mn bond angle. Stability of the C-type AF ordered state is increased by structural modifications tending to isolate the FM one-dimensional chains.\cite{CAFStability} In the present films it seems likely that the significant enhancement of $T_N$ for films with expanded unit cells is principally due to the expansion along [001]. Though enhanced superexchange, e.g. due to an increase in the Mn-O-Mn bond angles along [100] and [010], is possible, the observation that $T_N$ for these films is independent of thickness, in spite of the very different $b$-axis lengths due to cracking in the thicker films, argues against its prominent role in determining the increase in $T_N$. It is more likely that the expansion along [001] enhances the double-exchange coupling along [001] in this compound, thereby increasing the stability of the orbital and C-type N\'eel ordering.\cite{CAFStability} We note that the $c$-axis length in the bulk target is $\simeq 1.2$~\% smaller than that of an $x=0.75$ compound\cite{Kajimoto} with $T_N\simeq 300$~K.
Thus the observed increase in $T_N$ for the films under tension is consistent with existing data on the structure and phase behavior in this regime of composition, where $T_N$ is plausibly controlled principally by the $c$-axis length. \section{\label{sec:CONCL} CONCLUSION} Epitaxial $a$-axis-oriented films of Nd$_{0.2}$Sr$_{0.8}$MnO$_3$ have been grown under both compressive and tensile strain. The intrinsic resistivity anisotropy of the material, inferred from transport measurements on a 150-nm thick, compressively strained film, is $\rho_b/\rho_c=2.4$. Uniaxial crack arrays oriented along the film [001] direction develop in films under tensile strain with thickness $\gtrsim 65~{\rm nm}$ due to anisotropic thermal expansion mismatch upon cooling from the growth temperature. Typical crack spacings are a few $\mu{\rm m}$. The resistivity anisotropy measured for the cracked films exceeds $10^3$. The identical temperature dependencies of the resistivity along and transverse to the cracks imply that the cracks do not all penetrate through to the substrate, and thus the large anisotropy is attributed to a thin and meandering conduction path for transport along [010]. The increased N\'eel temperature for films under tensile strain, from 240~K for the bulk target to 265~K independent of thickness, suggests that enhanced FM exchange stabilizes the orbital and spin order of the C-type AF state, and that their stability in this region of the phase diagram is largely controlled by the $c$-axis length. \bigskip \textbf{ACKNOWLEDGMENTS} \bigskip We thank Mr. Alsayegh Husain for technical assistance with SEM imaging. This material is based upon work supported by the National Science Foundation under grants DMR-0072276 (Univ. Miami) and DMR-0504769 (Montana State Univ.), the Research Corporation (Univ. Miami), and the U.S. DOE Office of Basic Energy Sciences (Montana State Univ., Grant No. DE-FG-06ER46269). \noindent $^{\dagger}$ present address: 201-1650 Pembina Hwy., Winnipeg, MB R3T2G3 CA \newpage
\section{Introduction} The biological and physical properties of proteins are compelling for many reasons. While only a small fraction of the hundreds of thousands of protein sequences known today has been experimentally characterized, the variety of their functions is overwhelming. Though the structure has been resolved only for a subset of these sequences, the number of stable folds that are expressed in nature is seemingly small compared to the number of sequences. The relationship between fold and function is far from obvious, and examples such as intrinsically unstructured proteins and multi-functional folds resist simple schemes for classification. The question of what really makes a protein functional hence needs to be addressed in the context of its specific biological environment. From a physical point of view, an attempt to find some unifying concepts for the interpretation of dynamics and thermodynamics is the description of proteins in terms of energy landscapes\cite{frauenfelder}, in which the evolution of the system is related to the dynamics on a high-dimensional rugged energy surface. The existence of local minima, connected by saddles of different barrier height and rank, leads to a distribution of timescales that is reflected in the dynamics of the proteins. Although the energy landscape provides a reduced description, the complex interactions in proteins and their interaction with the environment, which involve multi-body interactions and subtle effects of charges, make its complete characterization infeasible, both experimentally and theoretically. This situation is somewhat reminiscent of other complex systems such as glasses which, though more homogeneous systems, share the property of displaying a high-dimensional landscape leading to complex dynamics. The energy landscape picture is useful for a qualitative analysis of protein properties, but for quantitative studies, an exhaustive sampling and full characterization of this high-dimensional representation are practically infeasible. Therefore, in order to obtain comprehensive quantitative predictions on generic protein properties from the information on the landscape, the picture needs to be simplified. A recent work\cite{nakagawa1} has shown that a reduced description of the energy landscape, originally devised for the analysis of super-cooled liquids by Stillinger and Weber\cite{stillinger}, can successfully capture the essential thermodynamic aspects of folding in the context of a simplified protein model. In particular, it was shown that the density of states constructed from the local minima of the energy landscape, called inherent structures, can be used to compute the most important thermodynamic observables. This finding is important because it provides a general scheme for theoretical studies of protein thermodynamics, showing how the relevant information can be quantitatively accessed from its imprint on the potential energy surface. The approach has consequently been used in the context of studies of the folding properties of a $\beta$-barrel forming protein\cite{kim}, the construction of the free energy landscape by mechanical unfolding\cite{imparato}, and the network of native contacts\cite{wall}. Other earlier works that include inherent structure analysis, but do not necessarily seek to characterize the full thermodynamics of folding, can be found in \cite{guo,shea}.
However the validity of an analysis based on the inherent structure landscape (ISL) must be critically examined because the method involves a fundamental assumption which could be questioned: the vibrational free energy within the basin of attraction of an inherent structure is assumed to be independent of the basin. A recent study \cite{wall} tried to go beyond this approximation by assuming that the vibrational free energy can depend on the energy of the inherent structures. Still, the question is subtle: as we show in the present work, even when the vibrational free energy depends on the inherent structure energy, the derivation of thermodynamic quantities such as the specific heat from the ISL can be validly carried out without any change in the procedure. Therefore an understanding of the limits of the ISL approach requires a deeper analysis. This is the aim of the present work. We proceed in two steps. In a first step, after briefly summarizing the ISL formalism and introducing the protein model in Section \ref{section2}, we test the validity of the ISL approach in Section \ref{section3} by comparing its results to the data obtained from equilibrium molecular dynamics for a set of structures. We selected four previously unstudied two-state folding proteins of varying size and secondary structure elements. In a second step (Section~\ref{section4}), we critically revisit the major hypotheses of the ISL approach, as well as its practical limitations, such as the sampling of the phase space, and suggest routes for improvements. Finally, we summarize our findings in Section \ref{section5} and give an outlook on possible future studies that stem from our results. \section{Methods} \label{section2} \subsection{Inherent structure analysis} \label{ihstheory} In this subsection, we briefly review the major results of \cite{nakagawa1,nakagawa2} on obtaining reduced thermodynamics from an analysis of the inherent structures. The method is general and not bound to a specific protein model, provided that the phase space of the protein can be explored by molecular dynamics and the energies of the visited states can be calculated. From simulations at fixed temperature close to the folding temperature $T_f$, which ensures that the system evolves in a large part of the configuration space, the local potential minima, labelled by $\alpha_i$, are determined by conjugate gradient minimizations performed at fixed frequency along a molecular dynamics trajectory. The global minimum $\alpha_0$ is defined as the reference ground state with zero energy. Let $\lbrace x_i \rbrace,\ i=1,2,...,3N$, denote the $3N$ Cartesian coordinates of the $N$-particle system, and $V(\lbrace x_i\rbrace)$ its potential energy function. The probability to find a particular minimum $\alpha_i$ with potential energy $e_{\alpha_i}$ can be written as \begin{eqnarray} p(\alpha_i,T)&=&\frac{1}{Z(T)}\int_{B(\alpha_i)}d^{3N}x\ e^{-\beta V(\lbrace x_i \rbrace)}=\frac{1}{Z(T)}e^{-\beta e_{\alpha_i}} \int_{B(\alpha_i)} d^{3N}x\ e^{-\beta \Delta V_{\alpha_i} (\lbrace x_i \rbrace ) } \ \ \ , \label{eq1} \end{eqnarray} where $\Delta V_{\alpha_i} = V - e_{\alpha_i}$, $Z$ denotes the configurational part of the partition function and $B(\alpha_i)$ is the basin of attraction of the minimum $\alpha_i$.
With the definition \begin{eqnarray} e^{-\beta F_v(\alpha_i,T)}&:=&\int_{B(\alpha_i)} d^{3N}x\ e^{-\beta \Delta V_{\alpha_i}(\lbrace x_i \rbrace)} \label{fdefine} \ \ \ , \end{eqnarray} the unknown integral over the complex landscape of the basin of attraction $B(\alpha_i)$ is subsumed into a free-energy-like function $F_v(\alpha_i,T)$, which in principle depends both on the nature of the basin and on temperature. Notice that although we use the index $v$ as in ``vibrational'', $F_v(\alpha_i,T)$ is obtained from the full nonlinear integral over $B(\alpha_i)$, and not from its harmonic approximation.\\The inherent structure landscape approach makes two key assumptions\cite{nakagawa1} which make it possible to considerably reduce the amount of information needed about the landscape while keeping its most important features. \begin{itemize} \item{(A1) The function $F_v(\alpha,T)$ for two minima $\alpha_1,\alpha_2$ that are distinct but close in energy, $e_{\alpha_1} \approx e_{\alpha_2}$, is the same for both minima: $F_v(\alpha_1,T)\approx F_v(\alpha_2,T)$. Consequently, $F_v(\alpha,T)\approx F_v(e_{\alpha},T)$.} \item{(A2) The function $F_v(e_{\alpha},T)$ does not vary significantly for different minima, i.e. $F_v(e_{\alpha},T)\approx F_v(T)$.} \end{itemize} Both assumptions were discussed in \cite{nakagawa2}. In section \ref{section4}, we show that assumption (A2) can actually be relaxed to the weaker form $\beta F_v(e_{\alpha},T)\approx f_{v}(e_{\alpha})+ \beta F_{v}(0,T)$ while most calculations remain feasible and some of the thermodynamic variables remain unchanged. With these assumptions, the contribution from the function $F_v$ factorizes in the numerator and denominator of (\ref{eq1}) so that it can be eliminated to give \begin{eqnarray} \label{eq:pz} p(\alpha_i,T)&=&\frac{1}{Z_{IS}(T)}e^{-\beta e_{\alpha_i}} \ \ \ \ , \ \ \ \ Z_{IS}=\sum_{\alpha=\alpha_0}^{\alpha_{max}}e^{-\beta e_{\alpha}}\ \ \ , \end{eqnarray} where the sum in the partition function includes all inherent structures found from the global minimum $\alpha_0$ to the minimum $\alpha_{max}$ having the highest energy. Here, the energy scale is shifted such that the energy of the global minimum $\alpha_0$ is zero. Introducing an energy density function for the inherent structures $\Omega_{IS}(e)=\sum_{\alpha=\alpha_0}^{\alpha_{max}}\delta(e-e_{\alpha})$, the probability to find a minimum in the interval $[e_{\alpha},e_{\alpha}+de_{\alpha}]$ at temperature $T$ is \begin{eqnarray} \label{eq:piszis} P_{IS}(e_{\alpha},T)de_{\alpha}&=&\frac{1}{Z_{IS}}\Omega_{IS}(e_{\alpha})e^{-\beta e_{\alpha}}\ de_{\alpha} \ \ \ \ , \ \ \ \ Z_{IS}=\int_{e_{\alpha_0}}^{e_{\alpha_{max}}} de_{\alpha}\ \Omega_{IS}(e_{\alpha}) e^{-\beta e_{\alpha} } \ \ \ . \end{eqnarray} For the model used in this work, the low energy minima are in practice widely separated in energy. As the ground state is isolated, one obtains $Z_{IS}(T)=1/p_0(T)$, with $p_0(T)$ the probability of the ground state, so that the inherent structure density of states can be estimated from the probability to be in the basin of attraction of a minimum in a fixed temperature simulation at temperature $T_{MD}$ as \begin{eqnarray} \Omega_{IS}(e_{\alpha})&=&\frac{e^{\beta_{MD} e_{\alpha}}}{p_0(T_{MD})}P_{IS}(e_{\alpha},T_{MD})\ \ \ .
\label{ihs} \end{eqnarray} Though we have chosen a continuous notation above to simplify the equations, it should be noted that, for the present model, the density built from an estimate of the probability density function of the sampled minima always comprises discrete and continuous parts, which can be integrated separately. Once the inherent structure density of states is known, one can compute the inherent structure partition function $Z_{IS}$ from (\ref{eq:piszis}), from which all thermodynamic functions can be derived, including the free energy $F_{IS}$ and the internal energy $U_{IS}$. Given that most states of the system can be sampled close to the folding temperature, it is sufficient to simulate the system at a single temperature $T_{MD}$ to construct the inherent structure landscape, in contrast to the full thermodynamics where one needs to sample different ranges of temperatures. Therefore, the ISL approach can be computationally very efficient. In the following, we will restrict ourselves to the computation of the specific heat $C_{V,IS}$, a quantity of fundamental importance in a physical system, as it is sensitive to fluctuations and, for instance, shows a clear signature of phase transitions. It can be deduced from numerical derivatives of the partition function $Z_{IS}$ through \begin{eqnarray} C_{V}=T\left(\frac{\partial S}{\partial T}\right)_V \ \ \ , \end{eqnarray} and hence \begin{eqnarray} C_{V,IS}=T\left(\frac{\partial^2 \left(\beta^{-1}\log Z_{IS}\right)}{\partial T^2} \right)_V \ \ \ \ \ . \end{eqnarray} \subsection{Model and selected proteins} Since our goal is to analyze the validity of the ISL approach and not to derive quantitative data for a particular protein, we decided to choose a simplified model, which allows the sampling of phase space at a reasonable computational cost. However the model must be rich enough to properly describe the complex features of its physics, and should be able to distinguish between proteins which differ, for instance, in their secondary structure. We use frustrated off-lattice G\=o-models identical to the ones introduced in \cite{nakagawa1} because they provide a good compromise between all-atom simulations and simplified models that do not fully describe the geometry of a protein. These models provide a representation with a single particle per residue centered at the location of each $C_{\alpha}$-atom. For details on the model and the parameters, we refer to \cite{nakagawa1,nakagawa2} and to a brief review in Appendix \ref{ap1}. Although the validity of such models to provide a faithful representation of protein folding is a recurrent subject of debate, off-lattice G\=o-models have been successfully used to study folding kinetics \cite{karanicolas} and the mechanical resistance of proteins \cite{paci}. From a physical point of view, despite a strong bias towards the ground state, these models have a complex energy landscape with a large number of local minima, well suited for the analysis in terms of inherent structures.\\As the results of the ISL approach depend on the density of states of the inherent structures, for a reliable test of the method it is important to examine examples which could differ in their properties, i.e. to investigate proteins of different size and structure.
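In practice, the reduced-thermodynamics construction of Sec.~\ref{ihstheory} amounts to a short post-processing of the sampled minima: histogram the inherent structure energies, reweight them according to (\ref{ihs}), and evaluate $C_{V,IS}(T)$ from the energy fluctuations under $Z_{IS}$. A minimal sketch of this pipeline is given below for illustration; the input energies, the ground-state probability and all parameter values are hypothetical placeholders and not output of our simulations.
\begin{verbatim}
import numpy as np

kB = 1.0  # dimensionless units, as in the model

def omega_is(e_alpha, T_md, p0, bins=1000):
    """Estimate the inherent structure density of states, Eq. (ihs).

    e_alpha : energies of the sampled minima (ground state shifted to 0)
    T_md    : temperature of the sampling MD run
    p0      : probability of the ground-state basin in that run
    """
    hist, edges = np.histogram(e_alpha, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    beta_md = 1.0 / (kB * T_md)
    return centers, np.exp(beta_md * centers) * hist / p0

def cv_is(T, centers, omega):
    """C_{V,IS}(T) = (<e^2> - <e>^2) / (kB T^2) under Z_IS."""
    beta = 1.0 / (kB * T)
    w = omega * np.exp(-beta * centers)
    w /= w.sum()
    e1 = np.sum(w * centers)
    e2 = np.sum(w * centers ** 2)
    return (e2 - e1 ** 2) / (kB * T ** 2)

# Hypothetical sampled minima (exponential-like density, illustration only)
rng = np.random.default_rng(0)
e_alpha = rng.exponential(scale=5.0, size=50000)
centers, omega = omega_is(e_alpha, T_md=1.0, p0=0.05)
print([round(cv_is(T, centers, omega), 2) for T in (0.8, 1.0, 1.2)])
\end{verbatim}
In the calculations reported below, poorly sampled bins are discarded and the discrete low-energy part of the spectrum is treated separately, as detailed in the next section.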
To test the inherent structure approach beyond the previously employed immunoglobulin (IG) binding domain of protein G (2GB1), we selected four two-state folding proteins of varying size and folds from the PDB database\cite{pdb}: the trp-cage mini-protein construct (1L2Y, 20 residues, $\alpha$-helical), the ww domain FBP28 (1E0L, 37 residues, $\beta$-sheets), the src-SH3 domain (1SRL, 56 residues, $\beta$-sheets) and ubiquitin (1UBQ, 76 residues, $\alpha-\beta$-fold). The motivation for these choices is discussed in Section \ref{section3} for each protein (see insets of figures \ref{fig1}-\ref{fig4} for structural representations of these four proteins drawn with PyMOL\cite{pyMol}). The positions of the $C_{\alpha}$ atoms in the PDB files are chosen as the reference for the construction of the G\=o-model. In the case of NMR-resolved structures, the first structure is selected as the reference. The native contacts of the model were established according to the distances between atoms belonging to different residues. A native contact is formed if the shortest distance between atoms belonging to two different residues is smaller than 5.5~\AA. The numbers of native contacts according to this criterion are: $N_{nat}=91$ for the ww domain, $N_{nat}=225$ for ubiquitin, $N_{nat}=216$ for src-SH3 and $N_{nat}=36$ for trp-cage. This definition is simple but includes some arbitrariness. There exist other methods for probing contacts between side-chains, e.g. by invoking the van der Waals radii of residue atoms and solvent molecules\cite{sobolev}. Though using the latter method preserves the main structure of the contact map, it leads to quantitative differences in the number and location of contacts along the sequence. Consequently, one can expect that the topology of the energy surface and key thermodynamic properties such as the folding temperature are also altered when the definition of the contact map is varied. For the purpose of the present study, which does not attempt to give a quantitative description of side-chain contacts and focuses on global properties of the landscape rather than on its detailed relation to the network of contacts, the cutoff-based approach is acceptable.\\Molecular dynamics simulations were performed using the Brooks-Br\"unger-Karplus algorithm \cite{bbk} with a time-step of $dt=0.1$ and a friction constant of $\gamma=0.01$ or $0.025$ (all units in this section are dimensionless, see \cite{nakagawa1} for details). To ensure equilibration, the system was thermalized starting from the native state (PDB coordinates) for $t=2\cdot 10^5$. The simulation time for a single temperature point and a single initial condition was $t=2\cdot 10^7$, and the data obtained for both inherent structure sampling (fixed temperature) and thermodynamic sampling (variable temperature) were averaged over various initial sets of velocities. Minimization was performed using the conjugate gradient method with the Polak-Ribi\`ere algorithm. To estimate the vibrational free energy at the minimum, mass-weighted normal mode analysis was performed using LAPACK diagonalization routines. The second-order derivatives of the potential energy function at the minimum were calculated by numerical differentiation of the analytical first-order derivatives.
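As an illustration of the contact criterion described above (shortest inter-residue atomic distance below 5.5~\AA), a minimal sketch is given here; the per-residue coordinates in the example are made up, and the minimal sequence separation between contacting residues is an assumption, since conventions for excluding near neighbours vary.
\begin{verbatim}
import numpy as np

def native_contacts(residue_atoms, cutoff=5.5, min_seq_sep=2):
    """Return native contact pairs (i, j) with i < j.

    residue_atoms : list of (n_atoms, 3) coordinate arrays, one per residue
    cutoff        : contact distance threshold in Angstrom (5.5 in the text)
    min_seq_sep   : minimal sequence separation |i - j| (convention-dependent)
    """
    contacts = []
    n = len(residue_atoms)
    for i in range(n):
        for j in range(i + min_seq_sep, n):
            # shortest distance between any atom of residue i and of residue j
            d = np.linalg.norm(residue_atoms[i][:, None, :]
                               - residue_atoms[j][None, :, :], axis=-1)
            if d.min() < cutoff:
                contacts.append((i, j))
    return contacts

# Toy three-residue example with made-up coordinates
res = [np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]]),
       np.array([[3.0, 0.5, 0.0]]),
       np.array([[4.5, 0.0, 0.3], [5.0, 1.0, 0.0]])]
print(native_contacts(res))   # -> [(0, 2)]
\end{verbatim}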
\section{Reduced and full thermodynamics of a set of model proteins} \label{section3} In this section, the validity of the ISL approach is tested by comparing the equilibrium thermodynamics deduced from molecular dynamics simulations to the reduced thermodynamics from inherent structure sampling. As discussed in Section \ref{section2}, we evaluate the specific heat $C_V$ as a function of temperature as a representative example of the thermodynamic observables. \subsection{src-SH3} \begin{figure} \centering \includegraphics[width=6.6cm]{f1a} \includegraphics[width=9.2cm]{f1b} \caption{Results for src-SH3. \textit{Left:} Inherent structure density of states $\Omega_{IS}(e_{\alpha})$. The inset shows a close-up of the low-energy range. The size of the energy bins for the density estimate is $\Delta E=0.2\,k_BT_f$. \textit{Right:} Comparison of the specific heat from equilibrium trajectories $C_V(T)$ (points), from which the specific heat of a harmonic oscillator in $3N$ dimensions has been subtracted, and $C_{V,IS}(T)$ from inherent structure analysis (solid red line); see text for further explanations.} \label{fig1} \end{figure} The src-SH3 domain was chosen since it has the same number of residues as the IG binding domain of protein G studied in \cite{nakagawa1,nakagawa2}, but contrasts with the latter in terms of structure. The src-SH3 domain is mostly composed of $\beta$-sheets and does not contain an $\alpha$-helical secondary structure element (only five residues form a small right-handed helix segment). The inherent structure density of states, shown on the left-hand side of figure \ref{fig1}, was obtained from various simulations close to the folding temperature ($T_{MD}\approx T_f$) and built from $\approx 72000$ minima according to (\ref{ihs}). After computing an energy histogram using $1000$ bins to yield an estimate of the inherent structure probability density, energy bins with only a single count have been discarded from the analysis to avoid a bias that could be introduced by insufficiently sampled isolated minima. The right-hand side of figure \ref{fig1} shows a comparison between the temperature dependence of the specific heat calculated from inherent structures, $C_{V,IS}(T)$, and the temperature dependence of the specific heat calculated from equilibrium molecular dynamics simulations at variable temperature, $C_V(T)$. The equilibrium thermodynamics has been determined by averaging the results of 10 initial conditions per temperature step, except for points close to the transition where the results of 20 initial conditions have been used. Despite this averaging, the variance, indicated by error bars on the y-axis, is large in the vicinity of the folding transition as the waiting time for a transition to occur becomes comparable to the simulation time. For a harmonic system, $C_V(T)=N_{dof}k_B/2$ where $N_{dof}$ is the number of degrees of freedom. At low temperatures $T\ll T_f$, harmonic contributions are dominating, and the difference between $C_V(T)$ and $C_{V,IS}(T)$ is approximately $3Nk_B$, which is subtracted from $C_V(T)$ in figure \ref{fig1}. Figure \ref{fig1} shows that the ISL approach is able to capture the main features of the thermodynamics of the G\=o-model of the src-SH3 domain. The value of the folding temperature is correctly determined by the ISL approach, but $C_{V,IS}$ underestimates the maximum by more than $20\%$ if the highest point of $C_V(T)$ is selected as a reference.
This discrepancy at the maximum had previously also been observed for the inherent structure analysis of the IG binding domain of protein G \cite{nakagawa1}. On the other hand, towards higher temperatures, $C_{V,IS}(T)$ decays more slowly than $C_{V}(T)$. \subsection{ubiquitin} \begin{figure} \centering \includegraphics[width=6.6cm]{f2a} \includegraphics[width=9.2cm]{f2b} \caption{Results for ubiquitin; see caption of figure \ref{fig1} for annotation. The size of the energy bins for the density estimate is $\Delta E=0.24\,k_BT_f$. \hspace{4in}} \label{fig2} \end{figure} Ubiquitin, with its 76 amino-acids, is a protein with a fairly rich secondary structure since it contains an $\alpha$ helix and five $\beta$ sheets. Similarly to the src-SH3 domain, the ubiquitin G\=o-model presents a sharp folding transition associated with a large peak in the specific heat (see right-hand side of figure \ref{fig2}). The specific heat $C_V(T)$ was estimated from averages over 8 initial conditions per temperature step, and $\Omega_{IS}(e_{\alpha})$ was obtained using $\approx 79000$ minima from several independent trajectories close to the folding temperature using a histogram of $2000$ bins. The agreement between the full thermodynamics and the ISL approach is better than for src-SH3, though similar trends of discrepancies can be observed. \subsection{ww domain} \begin{figure} \centering \includegraphics[width=6.4cm]{f3a} \includegraphics[width=9.4cm]{f3b} \caption{Results for the ww domain; see caption of figure \ref{fig1} for annotation. The size of the energy bins for the density estimate is $\Delta E=0.05\,k_BT_f$.\hspace{4in}} \label{fig3} \end{figure} To contrast with the sharp two-state transitions of protein G (56 residues), src-SH3 (56 residues) and ubiquitin (76 residues), we selected smaller structures, the ww domain (37 residues) and the trp-cage (20 residues), to examine the performance of the inherent structure approach for less structured proteins, which show a broader transition. For such small protein domains, the validity of the G\=o-model can be questioned as the model is built from the geometrical structure of the folded state. For small molecules the discrimination between folded and unfolded states becomes subtle due to fluctuations covering a large part of the accessible configurational space in a broad range of temperatures. The point in selecting these structures is not to assess the validity of the G\=o-model itself, but to test the ISL approach in very stringent cases to highlight possible limitations. The density of states $\Omega_{IS}(e_{\alpha})$ was obtained from $\approx 79000$ minima from several independent trajectories slightly above the folding transition (see figure \ref{fig3}) using a histogram of $2000$ bins. In contrast to the two previous cases, the histogram of minima does not show a clear separation of basins of local minima associated with the folded and unfolded states. We observe a difference in the apparent shape of the density of states (left-hand side of figure \ref{fig3}), which is globally concave in contrast to the convex densities obtained for src-SH3 and ubiquitin. The same shape was also found for protein G in \cite{nakagawa1}, for which the relation between the concave shape and the two-hump structure of $P_{IS}(e_{\alpha},T)$ was discussed in the vicinity of the folding temperature. Moreover, by comparing the insets of the left-hand side of figs.
\ref{fig1}-\ref{fig3}, one observes that the low energy range of $\Omega_{IS}(e_{\alpha})$ is more discrete, with states that are less densely packed.\\ The temperature dependence of $C_V(T)$ was obtained by averaging over 12 initial conditions. An interesting feature of the curve is the shoulder in the low temperature range, which indicates a partially unfolded structure associated with the breaking of a small number of contacts. Comparing the results of $C_V(T)$ and $C_{V,IS}(T)$, it is apparent that though the specific heat reconstructed from inherent structure thermodynamics correctly captures the global shape of $C_V(T)$, including the existence of the shoulder, important deviations can be observed. Similarly to the cases analyzed above, $C_{V,IS}(T)$ underestimates $C_{V}(T)$ at lower temperatures while giving an overestimation at high temperatures. In contrast to the results for larger proteins, we also observe a significant shift of the transition temperature. \subsection{trp-cage} \begin{figure} \centering \includegraphics[width=6.4cm]{f4a} \includegraphics[width=9.4cm]{f4b} \caption{Results for trp-cage; see caption of figure \ref{fig1} for annotation. The size of the energy bins for the density estimate is $\Delta E=0.02\,k_BT_f$. \hspace{4in}} \label{fig4} \end{figure} With only 20 residues, the last protein fragment studied in this series is also the smallest, and mainly consists of a single $\alpha$-helix. Its inherent structure density of states $\Omega_{IS}(e_{\alpha})$ was estimated from $\approx 90000$ minima sampled from several independent trajectories close to the folding temperature using a histogram of $2000$ bins. For such a small system, the low lying energy states are widely separated from each other, and the resulting density of states presents large gaps in a relatively broad range of energies. The continuum representation assumed for the inherent structure landscape is certainly questionable in such a case. The equilibrium thermodynamics were constructed from averages over 10 runs with different initial conditions. It is interesting to note that the value of the specific heat computed with the reduced thermodynamics is still fairly close to the actual specific heat, although, as in the previous cases, the peak is underestimated and a high-temperature tail is observed. In contrast to the results for the ww domain, the folding temperature of the trp-cage protein domain is correctly found by the analysis of inherent structures. \subsection{Discussion} Our studies of four proteins, combined with the earlier results on protein G \cite{nakagawa1,nakagawa2}, allow us to describe some trends in the inherent structure analysis of G\=o-model proteins.\\ For the density of states given by (\ref{ihs}), a general exponential dependence, $\Omega_{IS}(e_{\alpha}) \propto \exp(e_{\alpha}/k_B T_0)$, is observed for all proteins, with slightly different slopes for the low energy states, corresponding to states occupied in the folded configuration, and for the high energy states, occupied in the unfolded configuration. The value of $T_0$ associated with the low energy range is a good estimate of the folding temperature, as previously reported for the case of protein G \cite{nakagawa1}. Figure \ref{newfig} (left-hand panel), which compares the inherent structure densities of states for the four proteins, shows that, when presented in reduced units as a function of $e_{\alpha}/k_B T_f$, the functional forms of these densities are very similar.
For the large proteins that we studied, src-SH3 and ubiquitin (as well as protein G), the slope is slightly larger in the high energy range than in the low energy range. The converse is true for the small protein domains ww and trp-cage. A formal calculation of the reduced specific heat $C_{V,IS}$ from a bi-exponential density of inherent structure energies shows that this property is related to the sharpness of the folding transition. A density of states that is curved downwards for the energies associated with unfolded configurations leads to the broad folding transition expected for small protein domains. \begin{figure} \centering \includegraphics[width=7.8cm]{f5a} \includegraphics[width=8.0cm]{f5b} \caption{\textit{Left:} Comparison between the different inherent structure densities of states $\Omega_{IS}(e_{\alpha})$, including the data for the IG binding domain of protein G from \cite{nakagawa1}; the color coding is: protein G (black), src-SH3 (green), ww domain (red), ubiquitin (blue), trp-cage (magenta). \textit{Right:} Deviation between the specific heat obtained from full thermodynamics and inherent structure analysis. } \label{newfig} \end{figure} The calculation of the specific heat $C_{V,IS}$ shows that the ISL approach is able to determine the specific heat of a protein with reasonable accuracy, including the overall shape of the folding transition. To obtain such an agreement the ground state probability $p_0(T_{MD})$ must be sufficiently well sampled to ensure that the density of inherent states is correctly normalized. However, our studies of several proteins show that limitations exist, since systematic deviations from the full thermodynamics are apparent. At low temperatures, and up to the transition temperature, $C_{V,IS}(T)$ underestimates the specific heat. The peak of $C_{V,IS}(T)$ is less pronounced than expected from the equilibrium trajectories, and tends to broaden towards higher temperatures ($T>T_f$), where $C_{V,IS}$ is larger than $C_V(T)$. These deviations are shown in a comparative illustration in the right-hand panel of figure \ref{newfig} for the four proteins. Our data also reveal that the G\=o model, with its strong bias towards the native state, does not yield qualitative differences depending on the secondary structure of the protein under consideration. \\Owing to the results found for various proteins, which show systematic deviations from the results of equilibrium thermodynamics, it is important to examine the assumptions made in constructing the inherent structure density of states, which we do in the following section. \section{The limitations of the ISL approach} \label{section4} \subsection{Local normal mode analysis} \label{NMA} In Section \ref{section2} we introduced the major assumptions (A1) and (A2) of the ISL approach. The derivation of thermodynamic quantities such as the specific heat is carried out as if the free energy contribution within a basin of attraction, defined by (\ref{fdefine}), did not depend on the particular inherent structure $\alpha_i$. It is difficult to test this assumption as it would in principle require the determination of the complete basin on the energy landscape, including the calculation of all the saddle points that determine the frontier of the basin as well as the shape of the basin within this frontier. Still, one can at least compute $F_v(\alpha)$ {\em in the harmonic approximation,} as done also in \cite{wall}.
Assuming that the contribution to the integral (\ref{fdefine}) can be approximated by local normal modes in the vicinity of the energy minimum, the effective free energy can be written as \begin{eqnarray} \beta F_{NMA}({\alpha},T)&=& \sum_{q=1}^{3N-6}\log\left(\frac{\hbar \omega_q(\alpha)}{k_BT} \right) = \sum_{q=1}^{3N-6}\log(\omega_q(\alpha)/\omega_q(0)) - \sum_{q=1}^{3N-6}\log(k_BT/\hbar \omega_q(0))\nonumber\\& =& f_{NMA}({\alpha})+ \beta F_{NMA}(0,T)\; \label{harmapprox} \ \ \ , \end{eqnarray} where $\omega_q(0)$ are the normal mode frequencies at the ground state $\alpha=0$. This expression allows us to calculate a harmonic approximation to $F_v(\alpha,T)$ by calculating the normal modes for each minimum that we sample, and subsequently summing their different contributions according to (\ref{harmapprox}). Figure \ref{fig5} shows the ${\alpha}$-dependent part $f_{NMA}({\alpha})$ as a function of the energy of the minimum for the two examples of the src-SH3 domain (left) and the ww domain (right). The minima were obtained along a single trajectory close to the folding temperature. The distribution of the minima along the energy axis reflects the character of the probability distribution function for the two proteins, one being divided into two basins (src-SH3 domain), the other being a single distribution (ww domain). \begin{figure} \centering \includegraphics[width=8.0cm]{f6a} \includegraphics[width=7.8cm]{f6b} \caption{The frequency dependent component $\sum_{q=1}^{3N-6} \log(\omega_q)$ of the vibrational free energy $\beta F_{NMA}$ in the harmonic approximation, calculated from a sample of 1000 minima. \textit{Left:} Src-SH3 domain. \textit{Right:} ww domain. An arbitrary offset was added to shift the origin of the ordinate. } \label{fig5} \end{figure} As a first important observation, we note that the variation of $f_{NMA}({\alpha})$ with $e_{\alpha}$ is not negligible, contrary to assumption (A2), according to which $F_{NMA}$ should be approximately constant in $e_{\alpha}$. Both proteins show the same trend toward a decreasing effective free energy with increasing $e_{\alpha}$. For the src-SH3 domain, a nonlinear dependence can be observed in the high energy range, in agreement with previously reported results on protein G \cite{wall}.\\A second point to be noticed is that, for a given $e_{\alpha}$, a distribution of values of $f_{NMA}(\alpha)$ is found (spread along the $y$-axis in figure \ref{fig5}). While such a spread could be attributed to the limited numerical accuracy of the normal modes at high energies, it is also present in the low energy part, for which the numerical scheme provides accurate results, as can be checked from the six lowest eigenmodes, which vanish as expected. Consequently, it appears that the first approximation (A1), i.e. $F_v(\alpha,T)\approx F_v(e_{\alpha},T)$, does not hold strictly, leaving open the possibility that the deviations observed in section \ref{section3} are caused by this simplification. \subsection{Free energy correction} The results of the previous subsection seem to challenge the validity of the ISL approach performed under the assumptions (A1) and (A2). At first glance, the main problem seems to arise from the approximation (A2) that $F_{v}(\alpha,T)$ does not depend on the particular basin considered, which is obviously untrue.
On the other hand, although there is clearly a spread of $F_{v}$ values within the same energy range of $e_{\alpha}$, in disagreement with approximation (A1), one can observe a general evolution of $F_{v}(\alpha,T)$ with $e_{\alpha}$, which suggests that approximating $F_{v}(\alpha,T)$ by $F_{v}(e_{\alpha},T)$ may be acceptable. However, we show below that, if the free energy within a basin can be written as a sum \begin{eqnarray} \beta F_v(\alpha,T) &=&f_{v}(e_{\alpha})+\beta F_{v}(0,T) \; , \label{separation} \end{eqnarray} with $ f_{v}(e_{\alpha}=0):=0$ at the ground state, the calculation of $C_{V,IS}$ {\em can be carried out without any change,} so that approximation (A2), which appears particularly bad at first glance, may not be the decisive one. It should be noticed that, as discussed in the previous subsection, the property (\ref{separation}) is verified if the motion in each basin of attraction can be described by a combination of harmonic vibrations.\\ Starting again from (\ref{eq1}), and proceeding as in section \ref{ihstheory}, we can eliminate the part of $F_v(\alpha,T)$ that depends only on temperature from the expressions for the partition function and the probability distribution function, keeping only the $\alpha$-dependent part. One gets \begin{eqnarray} Z(T) &=&e^{- \beta F_{v}(0,T)} \int\Omega_{IS}(e_{\alpha})e^{-\beta e_{\alpha}}e^{- f_{v}(e_{\alpha})}\ d e_{\alpha}\ \ \ \ \ , \label{newz}\\ Z_{IS}(T) &=& \int\Omega_{IS}(e_{\alpha})e^{-\beta e_{\alpha}}e^{- f_{v}(e_{\alpha})} d e_{\alpha}\ \ \ . \label{newzis} \end{eqnarray} The principle of the calculation is to compute $Z_{IS}(T)$ from a \textit{measurement} of the probability density function $P_{IS}(e_{\alpha},T)$ estimated from MD simulations at a given temperature $T_{MD}$. This can be achieved through the intermediate calculation of a density of states of inherent structures $\Omega_{IS}(e_{\alpha})$, which is temperature independent and from which $Z_{IS}(T)$ can be obtained at all temperatures. In equation (\ref{newzis}), we notice that the inclusion of the term $e^{-f_{v}(e_{\alpha})}$ amounts to defining an ``effective'' density of states $\Omega_{IS}(e_{\alpha}) e^{-f_{v}(e_{\alpha})}$, from which the classical thermodynamic expressions can be derived as shown below. In the calculation of section \ref{ihstheory}, the density of states is given by equation (\ref{eq:piszis}). This density and all other observables calculated in section \ref{ihstheory} will henceforth be denoted with an index $(0)$, e.g. $\Omega_{IS}^{(0)}$. In the new scheme including the $\alpha$-dependent part of the free energy in the harmonic approximation, the probability density $P_{IS}(e_{\alpha},T)$ becomes \begin{eqnarray} P_{IS}(e_{\alpha},T) &=& \frac{1}{Z_{IS}(T)}\Omega_{IS}(e_{\alpha})e^{-\beta e_{\alpha}}e^{- f_{v}(e_{\alpha})}\ \ \ , \end{eqnarray} yielding the inherent structure density \begin{eqnarray} \Omega_{IS}(e_{\alpha}) &=& \frac{P_{IS}(e_{\alpha},T_{MD})}{p_{0}(T_{MD})} e^{\beta_{MD} e_{\alpha}}e^{ f_{v}(e_{\alpha})}\ \ \ . \end{eqnarray} The latter expression shows that if the variation of $F_v(\alpha,T)$ with $e_{\alpha}$ cannot be ignored, the previously derived density of states $\Omega_{IS}^{(0)}(e_{\alpha})$ is not the correct one. The two are related by \begin{eqnarray} \Omega_{IS}(e_{\alpha})&=&\Omega_{IS}^{(0)}(e_{\alpha})e^{ f_{v}(e_{\alpha})}\label{relatedens} \ \ \ \ .
\end{eqnarray} A similar result was also reported in \cite{wall}, which considered the particular case of a piecewise linear dependence on $e_{\alpha}$. Though the densities differ, substituting (\ref{relatedens}) into equations (\ref{newzis}) or (\ref{eq:piszis}), we immediately have $Z_{IS}(T)=Z_{IS}^{(0)}(T)$, and the inherent structure observables such as $U_{IS}=\langle e_{\alpha} \rangle$ and $C_{V,IS}=\left(\langle e_{\alpha}^2\rangle -\langle e_{\alpha}\rangle^2\right)/k_BT^2$ are unchanged, i.e. $U_{IS}=U_{IS}^{(0)}$ and $C_{V,IS}=C_{V,IS}^{(0)}$ even when $f_{v} \neq 0$.\\As a consequence, using equation (\ref{newz}), the full free energy $F(T)$ of the protein can be written as \begin{eqnarray} F(T)&=&-k_BT\log(Z(T))=-k_BT\log(Z_{IS}(T))+ F_{v}(0,T)\nonumber \\ &=&-k_BT\log(Z_{IS}^{(0)}(T))+ F_{v}(0,T)=F_{IS}^{(0)} + F_{v}(0,T)\ \ \ \ . \end{eqnarray} Therefore, in the ISL formalism, taking into account the variation of $F_v(\alpha,T)$ as in equation (\ref{separation}) does not alter the free energy and cannot be expected to be at the origin of the quantitative differences between $C_V$ derived from the ISL formalism and the full numerical results presented in section \ref{section3}. We conclude that the origin of these discrepancies is likely to be found in the non-separability of the $\alpha$- and $T$-dependence within the basins. Such a non-separability can be expected as soon as the anharmonicity of the different basins is taken into account. This is certainly relevant for proteins, in particular as the denaturation involves frequent transitions between basins of different shape and volume, associated with the semi-rigid folded and the highly flexible unfolded states.\\When $F_v(\alpha,T)$ cannot be separated into $\alpha$- and $T$-dependent contributions to simplify the calculation of the partition function, the remaining possible approximation that can tackle the computational difficulty would be the saddle point approximation of the free energy \cite{stillinger}. This approximation is acceptable in the thermodynamic limit for large systems, but cannot be justified in the present problem as the number of particles involved is still small and the interactions between particles are heterogeneous. While the harmonic approximation seems to be invalid for the present problem, which involves large conformational changes due to the denaturation transition, results on super-cooled liquids \cite{sciortino} indicate that the correction due to the heterogeneity of the basins at low temperatures is small, and that the decoupling of vibrational and inherent structure contributions appears to be possible at least in this temperature regime. \subsection{Effect of limited sampling efficiency} The main practical difficulty of the ISL method comes from the need to properly sample all the inherent structures in order to get a meaningful density of states $\Omega_{IS}(e_{\alpha})$ in all energy ranges. In the present scheme of inherent structure sampling, a single temperature is selected to simulate the dynamics of the protein for a finite period of time. The choice of a temperature close to the folding temperature is natural as the protein samples both the folded and the unfolded configurations at this temperature. There are however two questions that have to be answered: \textit{i)} How long should we follow a protein MD trajectory to get a sufficient sampling?
\textit{ii)} Is it possible to combine data from simulations at a few different temperatures instead of keeping $T$ fixed?\\Let us first analyze the effect of the sampling time. It should be chosen long enough to cover the slowest intrinsic timescale of the system, and various trajectories with different initial conditions or realizations of the thermostat should be used to ensure that the order of events does not alter the shape of the distribution. An estimate of the time range that the sampling must cover is provided by the folding/unfolding time of the protein, to guarantee that the protein explores both configurational subspaces. A possible check of the choice of the simulation time is to compute the specific heat with an increasing number of samples, and to stop when the improvement brought by additional samples is negligible. Still, as computer time is limited, it cannot be ensured that all relevant states are sufficiently well sampled to yield a converged probability density. In particular, the high energy minima are sampled only with low probability, such that the high energy cut-off in the density of states is likely to be underestimated in finite-time sampling. In this subsection, we analyze the impact of this cutoff on a model density to see how the inherent structure specific heat is possibly affected. To analyze the effect of the sampling independently of a particular case, let us assume a ``model'' inherent structure density of states taken as a single exponential \begin{equation*} \Omega_{IS}^{(0)}(e_{\alpha})= \begin{cases} e^{e_{\alpha}/a} & 0\le e_{\alpha} \le e_{max} \\ 0 & e_{max} < e_{\alpha} \end{cases} \ \ \ , \end{equation*} similar to the shape of the densities that can be found in limited ranges of energies for the numerical results in section \ref{section3}. The partition function can then be readily calculated as \begin{eqnarray} Z_{IS}^{(0)}&=&\int_{0}^{e_{max}} de_{\alpha}\ \Omega_{IS}^{(0)}(e_{\alpha})\ e^{-\beta e_{\alpha}}=\frac{e^{e_{max}(a^{-1}-\beta)}-1}{a^{-1}-\beta}\ \ \ . \end{eqnarray} Likewise, we can calculate the first two moments $\langle e_{\alpha}\rangle$ and $\langle e_{\alpha}^2\rangle$ to find the specific heat as a function of the temperature, the parameter $a$, and the energy cutoff $e_{max}$, \begin{eqnarray} C_{V,IS}^{(0)}(T;a,e_{max})&=&\frac{\langle e_{\alpha}^2\rangle-\langle e_{\alpha}\rangle^2}{k_B T^2} \ \ \ , \end{eqnarray} and analyze the result graphically as a function of the energy cutoff $e_{max}$ in figure \ref{fig6}. \begin{figure} \centering \includegraphics[width=8.0cm]{f7a} \includegraphics[width=8.0cm]{f7b} \caption{\textit{Left:} The specific heat $C_{V,IS}$ as a function of temperature and the cutoff parameter $e_{max}$. \textit{Right:} Location of the maxima of $C_{V,IS}$ for different values of the cutoff $e_{max}$.} \label{fig6} \end{figure} As can be inferred from the left-hand side of figure \ref{fig6}, $\max_{T}\left[C_{V,IS}(T;a,e_{max})\right]$ increases with higher cut-off $e_{max}$. Using symbolic computation \cite{Mathematica}, we can further inspect the result to find the maxima of the specific heat for fixed cutoff $e_{max}$. On the right-hand side of figure \ref{fig6}, we observe that a lower cutoff in the density of states shifts the maximum of the specific heat towards higher temperatures. In addition to the shift, the curve becomes broader and the value at the maximum decreases. This situation is similar to the physical scenario in which protein folding is altered by confinement (see e.g. \cite{depablo}).
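A quick numerical check of the dependence of the specific-heat maximum on the cutoff, under the single-exponential model density defined above, can be performed as follows; the parameter values and the temperature grid are arbitrary, purely illustrative choices.
\begin{verbatim}
import numpy as np

kB = 1.0

def cv_is_model(T, a, e_max, n_grid=20000):
    """C_{V,IS} for Omega(e) = exp(e/a) on [0, e_max] and zero above."""
    e = np.linspace(0.0, e_max, n_grid)
    w = np.exp(e / a - e / (kB * T))
    w /= w.sum()                 # uniform grid: spacing cancels in averages
    e1 = np.sum(w * e)
    e2 = np.sum(w * e ** 2)
    return (e2 - e1 ** 2) / (kB * T ** 2)

a = 1.0
T = np.linspace(0.5, 3.0, 500)
for e_max in (5.0, 10.0, 20.0):
    cv = np.array([cv_is_model(t, a, e_max) for t in T])
    print(f"e_max = {e_max:4.1f}:  max_T C_V,IS = {cv.max():.2f}")
\end{verbatim}
Consistent with the left-hand side of figure \ref{fig6}, such a check shows that the maximum of the specific heat grows as the cutoff is increased.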
In that case, the high energy states disappear from the density of states because external constraints prevent the system from exploring them. For the present purpose of the inherent structure analysis, the results, although derived for a highly idealized model density, indicate that insufficient sampling at high energies can significantly alter the global shape of the transition. We have checked that these conclusions remain unchanged for a piecewise constant density of states. For instance, for the ww domain, one finds $e_{max}/a\approx 60$, which is higher than the range of values for which a shift of the maximum can be expected from figure \ref{fig6}. Consequently, the origin of this shift cannot be attributed to inefficient sampling in the high energy range.\\Because the high energy minima are less frequently visited, it is tempting to try to sample the minima from a high temperature molecular dynamics trajectory. On the other hand, as the method relies on the probability to occupy the ground state, which determines $1/Z_{IS}(T)$, it is also necessary to properly sample the ground state, i.e. to select a simulation temperature which is below $T_f$. To reconcile these two conflicting requirements, one solution is to combine results sampled at two different temperatures to calculate $\Omega_{IS}(e_{\alpha})$, which should be temperature independent. This is possible because, according to (\ref{eq:pz}), \begin{equation} \label{eq:zratio} \dfrac{P_{IS}(e_{\alpha},T_1)}{P_{IS}(e_{\alpha},T_2)} = \dfrac{Z_{IS}(T_2)}{Z_{IS}(T_1)} e^{-(\beta_1-\beta_2)e_{\alpha}} \; \ \ \ , \end{equation} with $\beta_{1,2} = 1/(k_B T_{1,2})$, so that the ratio ${Z_{IS}(T_2)}/{Z_{IS}(T_1)}$ can be calculated from the probabilities to occupy a given basin at temperatures $T_1$ and $T_2$. A molecular dynamics trajectory obtained at a temperature $T_1 < T_f$ can be used to determine $Z_{IS}(T_1)$ from the probability to occupy the ground state, and subsequently a second simulation at a higher temperature $T_2$ can sample high energy basins more efficiently. For all basins which are properly sampled in both molecular dynamics runs, the ratio ${Z_{IS}(T_2)}/{Z_{IS}(T_1)}$ can be evaluated with (\ref{eq:zratio}). Although it should not depend on the particular basin that was used for its calculation, this ratio actually fluctuates around a mean value, which can be used to determine ${Z_{IS}(T_2)}$ from ${Z_{IS}(T_1)}$. Then (\ref{eq:pz}), applied at the higher temperature $T_2$, can be used to compute $\Omega_{IS}(e_{\alpha})$ in the high energy range. Moreover, in the intermediate energy range, the basins are properly sampled by the two trajectories at temperatures $T_1$ and $T_2$, which gives two ways to evaluate $\Omega_{IS}(e_{\alpha})$ for those basins, and thus provides a way to check the consistency of the method. \begin{figure}[H] \centering \includegraphics[width=7.8cm]{f8a} \includegraphics[width=7.05cm]{f8b} \caption{Results from the sampling of inherent structures of the ww domain at two different temperatures, $T_1=1.03\ T_f$ ($\approx 79000$ minima, red) and $T_2=1.41\ T_f$ ($\approx 18000$ minima, blue). \textit{Left:} Probability of occupation of the inherent structures versus $e_{\alpha}$.
\textit{Right: } Density of inherent state energies (in logarithmic scale), calculated from the simulation at $T_1$ (red) and calculated at $T_2$ using the ratio ${Z_{IS}(T_2)}/{Z_{IS}(T_1)}$.} \label{fig:2T} \end{figure} Figure \ref{fig:2T} shows the results for the ww domain from two sets of molecular dynamics simulations, performed respectively at $T_1/T_f=1.03$ and $T_2/T_f=1.41$. For the simulations at $T_2$, the ground state is not sampled within the finite simulation time, but $\Omega_{IS}$ can nevertheless be obtained through the evaluation of $Z_{IS}(T_2)$ deduced from $Z_{IS}(T_1) = 1/p_0(T_1)$ for all values of $e_{\alpha}$ for which the histograms of the left-hand side of figure \ref{fig:2T} overlap. The right-hand side of figure \ref{fig:2T} shows that the values of the density of inherent state energies computed from data at $T_1$ and $T_2$ are in rather good agreement in the whole energy range where the basins are sampled at the two temperatures. There is however a discrepancy between the two results, with a systematic deviation towards lower values of $\Omega_{IS}(e_{\alpha})$ for high inherent state energies when the density of states is calculated with data sampled at high temperatures. This is counterintuitive, as one might expect that sampling the basins at low temperatures would, on the contrary, underestimate the density of basins at high energy. This systematic deviation, which has been observed in all our calculations, could point to a limitation of the ISL method as presently applied, i.e.\ with a calculation which is only valid if $\beta F_v(\alpha,T)$ is the sum of a term that depends on $e_{\alpha}$ and a term that depends on temperature (see (\ref{separation})). When the specific heat is calculated with the value of $\Omega_{IS}(e_{\alpha})$ extended to the high energy range with this method, it remains very close to the result obtained with the data at temperature $T_1$ alone, and the agreement with the numerical value of $C_V(T)$ is not improved. This shows that the discrepancies between the exact results and those deduced from the ISL approach are not due to technical difficulties such as insufficient sampling in some energy range, but are rather inherent to the method itself and its assumptions. \section{Discussion} \label{section5} We applied the inherent structure landscape (ISL) approach to four different proteins of varying size and secondary structure elements using a coarse-grained off-lattice protein model, and calculated their inherent structure density of states. Using these densities, we derived the specific heat from the reduced inherent structure thermodynamics, and compared it to the value obtained from equilibrium molecular dynamics as a function of temperature. Our results show that the ISL approach can correctly capture the shape of the temperature dependence of the specific heat, including some characteristic features such as the hump observed at $T \approx 0.4 T_f$ for the ww domain. This is remarkable since the result is deduced from molecular dynamics simulations at a single temperature, close to $T_f$, and nevertheless predicts the main features of the specific heat in a large temperature range, including low temperatures for which very long simulations would be necessary to reach exhaustive sampling of the phase space.
This shows that {\em many features of protein thermodynamics are encoded in the inherent structure landscape.} Still, the approach is not perfect, as we observed quantitative differences between $C_{V,IS}(T)$ and $C_V(T)$ that are particularly significant for small protein domains. The deviations show a systematic trend, the specific heat being underestimated below $T_f$ and overestimated above. This led us to reexamine the approximations that enter the construction of the reduced thermodynamics from inherent structures for the model that we considered. The first approximation assumes that the correction to the density of states due to the structure of the basin associated to a minimum, $f_{v}(\alpha)$, depends on the energy level of the minimum only, and not on the individual minimum. An evaluation of $f_{v}(\alpha)$ using a harmonic approximation based on local normal modes (section \ref{NMA}) shows that this is only approximately correct. For a given $e_{\alpha}$, the values of $f_{v}(\alpha)$ are actually distributed around an average value, with fluctuations that grow for higher values of $e_{\alpha}$ and that appear to be larger for small proteins. This could explain some of the discrepancies between the ISL results and the equilibrium data. Moreover, the calculation of $f_{v}(\alpha)$ indicates that the second assumption, namely that the correction can be considered to be $\alpha$-independent, is certainly not valid. However, we showed that if $\beta F_v(\alpha)$ splits into a temperature-dependent and an $\alpha$-dependent part, which is the case in the harmonic approximation, most of the thermodynamic results deduced from a direct application of the ISL approach are not affected. This is true in particular for the specific heat. In view of the persisting quantitative differences between reduced inherent structure and equilibrium thermodynamics, we therefore conclude that the correction of the free energy in terms of a harmonic approximation is not sufficient. It is likely that the nonlinear terms in the free energy associated to a basin cannot be ignored, and play a significant role. This is not surprising because, especially in the high temperature range, the protein fluctuates by exploring many basins, and consequently cannot be assumed to be well described by a harmonic approximation.\\ In future studies, it would be useful to analyze the role of the structure of the full basin on the thermodynamic results beyond the approximation by local normal modes around the minima. This is a true challenge owing to the complexity of the energy landscape. A starting point for such a study might be the examination of the distribution of first-rank saddle points associated to the different minima on the potential energy surface. A second aspect which is suggested by the present work is to apply the ISL approach to protein folding in the context of more complex energy landscapes that arise from more realistic potential energy functions. The results on the small proteins analyzed in this study show that the global separation of the probability density into two basins associated to folded and unfolded states is not a necessary requirement to construct the reduced thermodynamics. It appears therefore likely that the formalism remains useful in cases where the energy landscape is less biased towards the ground state than in the G\=o-model. An application of the ISL method to other protein models therefore appears to be desirable and promising.
\bigskip \textbf{Acknowledgements:\ }We thank the anonymous referee for valuable suggestions. J-G.H. acknowledges financial support from the Coll\`ege Doctoral Franco-Japonais and the R\'egion Rh\^one-Alpes. The simulations were partially performed at the P\^ole Scientifique de Mod\'elisation Num\'erique (PSMN) in Lyon.
\section{Introductory Remarks and Calculation Setup} We will present the main features of the method and preliminary results of the bag parameter calculation for K-meson oscillations at three values of the lattice spacing using the $N_f=2$ dynamical quark configurations produced by the ETM collaboration. ETMC dynamical configurations have been produced with the tree-level Symanzik improved action in the gauge sector, while the dynamical quarks have been regularized by employing the twisted mass (tm) formalism \cite{tmQCD1}. It has been demonstrated that, under the condition of {\it maximal twist}, this formalism provides automatically $O(a)$-improved physical quantities \cite{Frezz-Rossi1}. In the so-called physical basis, the lattice fermion action for the sea sector reads \begin{equation} \label{sea-action} S_{sea} = a^4 \sum_{x} \bar\psi(x) (\gamma \tilde{\nabla} -i \gamma_5 ~\tau_3~ W_{cr} + \mu_{sea} ) \psi(x) \,\,\, , \end{equation} with $ W_{cr} = -\frac{a}{2} \sum_{\mu} \nabla_{\mu}^{*} \nabla_{\mu} + M_{cr}(r=1)$; $\psi = (u ~~d)^{T}$ is a doublet of degenerate light sea quarks while $\mu_{sea}={\rm diag}(\mu_u ~~\mu_d)$. We should also note that the tm formalism offers a simpler renormalisation pattern in comparison with the standard Wilson regularization. This is true for some important physical quantities calculated on the lattice, such as the pseudoscalar decay constant and the chiral condensate. It has been shown that the use of the tm regularization can simplify the renormalization pattern properties of the four-fermion operators which enter the calculation of certain phenomenologically important weak matrix elements such as $B_K$ \cite{tmQCD1, AlphaBK, PenSinVla}. In order to achieve both $O(a)$ improvement and a continuum-like renormalization pattern in the evaluation of $B_K$ we introduce the valence quarks with an Osterwalder-Seiler lattice action and allow for replicas of the down ($d$, $d'$) and strange ($s$, $s'$) flavours~\cite{Frezz-Rossi2}, viz.\ \begin{equation} \label{action} S_{val} = a^4 \sum_{x} \sum_{f=d,d',s,s'} \bar{q}_f(x) \, \Big( \gamma \tilde{\nabla} -i \gamma_5 ~r_f~ W_{cr} + \mu_f \Big) \, q_f(x) \; , \qquad -r_s = r_d = r_{d'} =r_{s'} = 1 \, . \end{equation} The valence sector action above is written in the so-called physical quark basis, with the field $q_f$ representing just one individual flavour (unlike eq.~(\ref{sea-action}), where $\psi$ is a flavour doublet). While the four fermion operator relevant for $B_K$ (see eq.~(\ref{operator})) is chosen to contain all the four valence flavours in eq.~(\ref{action}), the interpolating fields for the external (anti)Kaon states are made up of a tm-quark pair ($\bar{d}\gamma_5 s$, with $-r_s = r_d$) and an OS-quark pair ($\bar{d}'\gamma_5 s'$, with $r_{d'} =r_{s'}$). This mixed action setup with maximally twisted Wilson-like quarks has been studied in detail in Ref.~\cite{Frezz-Rossi2}; it allows for an easy matching of sea and valence quark masses and leads to unitarity violations that vanish as $a^2$ as the continuum limit is approached. In the present case, however, the quark mass matching is incomplete because we are neglecting the sea strange quark (i.e.\ a partially quenched computation), thereby inducing some (possibly small) O($a^0$) systematic error. We notice that the proposed method for obtaining automatically $O(a)$-improved results has already been tested successfully in the calculation of $B_K$ with fully quenched quarks \cite{ALPHA-BK-2009}.
In Table~\ref{simuldetails} we give the simulation details concerning the mass values of the sea and the valence quarks for each value of the gauge coupling used in the calculation presented in this work. The smallest sea quark mass corresponds to a pion of about 270 MeV for the case of $\beta=3.90$. For $\beta=4.05$ the lightest pion has a mass of about 300 MeV, while for $\beta=3.80$ the lowest pion mass is around 400 MeV. The highest sea quark mass for the three values of the lattice spacing is about half the strange quark mass. For the inversions in the valence sector we have made use of the stochastic method (one--end trick of ref.~\cite{Michael}) in order to increase the statistical information. Propagator sources have been located at randomly chosen timeslices. For more details on the dynamical configurations and the stochastic method application see Refs.~\cite{etmc-light, etmc-long}. \begin{table}[!h] \begin{center} \begin{tabular}{cccccccccc} \hline \hline $\beta$ && $a^{-4}(L^3 \times T)$ && $a\mu_{\ell}~=~a\mu_{sea}$ && $a\mu_{h}$ && \\ \hline 3.80 && $24^3 \times 48$&& 0.0080, 0.0110 && 0.0200, 0.0250 && \\ ($a\sim0.1~\mbox{fm}$) && && && 0.0300, 0.0360 && \\ \hline 3.90 && $24^3 \times 48$&& 0.0040, 0.0064 && 0.0150, 0.0220 && \\ && && 0.0085, 0.0100 && 0.0270, 0.0320 && \\ 3.90 && $32^3 \times 64$&& 0.0030, 0.0040 && 0.0220, 0.0270 && \\ ($a\sim0.085~\mbox{fm}$) && && && && \\ \hline 4.05 && $32^3 \times 64$&& 0.0030, 0.0060 && 0.0150, 0.0180 && \\ ($a\sim0.065~\mbox{fm}$) && && \hspace*{-1.3cm} 0.0080 && 0.0220, 0.0260 && \\ \hline \hline \end{tabular} \end{center} \caption{Simulation details} \label{simuldetails} \end{table} \section{The K-meson bag parameter} We recall that in our mixed action setup all the physical quantities are evaluated with no O($a$) discretization effects (see Ref.~\cite{Frezz-Rossi2}) and moreover the four fermion operator relevant for $B_K$, which reads \begin{equation} \Big{[} V_\mu V_\mu + A_\mu A_\mu \,\,\Big{]}_{\mbox{bare}}^{\mbox{phys-basis}} = [(\bar{q}_{s}\gamma_\mu q_d) (\bar{q}_{s'}\gamma_\mu q_{d'}) + (\bar{q}_{s}\gamma_\mu\gamma_5 q_d) (\bar{q}_{s'}\gamma_\mu \gamma_5 q_{d'})] + [ d \leftrightarrow d' ] \, , \label{operator} \end{equation} is multiplicatively renormalizable. This can be easily understood by noting that in the (unphysical) tm basis, where the Wilson term enters the valence action in the standard way (with no $i\gamma_5$-twist) and the operator renormalization properties are the same as for the standard Wilson fermion action, the operator~(\ref{operator}) takes the form \begin{equation} \Big{[} V_\mu A_\mu + A_\mu V_\mu \,\,\Big{]}_{\mbox{bare}}^{\mbox{tm-basis}} = [(\bar{\chi}_{s}\gamma_\mu \chi_{d}) (\bar{\chi}_{s'}\gamma_\mu \gamma_5 \chi_{d'}) + (\bar{\chi}_{s}\gamma_\mu\gamma_5 \chi_{d}) (\bar{\chi}_{s'}\gamma_\mu \chi_{d'})] + [ d \leftrightarrow d' ] \, . \label{operator_tmba} \end{equation} Here $\chi_f = e^{-i\gamma_5 \pi/4} q_f$ and $\bar\chi_f = \bar{q}_f \, e^{-i\gamma_5 \pi/4}$, with $f=d,d',s,s'$, are the tm basis valence quark fields. The operator~(\ref{operator_tmba}) is known to be protected from mixing under renormalisation due to $CPS$ symmetry~\cite{Bernard}.
In summary we have (``R'' stands for ``renormalized'') \begin{equation} \Big{[} V_\mu V_\mu + A_\mu A_\mu \,\,\Big{]}_{\mbox{R}}^{\mbox{phys-basis}} = \,\, Z_{VA+AV} \,\, \Big{[} V_\mu V_\mu + A_\mu A_\mu \,\,\Big{]}_{\mbox{bare}}^{\mbox{phys-basis}} = \,\, Z_{VA+AV} \,\, \Big{[} V_\mu A_\mu + A_\mu V_\mu \,\,\Big{]}_{\mbox{bare}}^{\mbox{tm-basis}} \, , \end{equation} where the name of the renormalization constant is chosen so as to be consistent with the notation used in the standard Wilson fermion literature. In order to estimate the $B_{\rm K}$-parameter we calculate a three-point correlation function where a four-fermion operator is free to move in lattice time $t$ while two ``K-meson walls'' consisting of noisy sources are imposed at fixed time separation $t_R-t_L = T/2$. The $t_L$ value changes randomly from configuration to configuration. In our simulations we also consider the time-reversed case and average the two appropriately. The plateau signal is taken for $t_L \ll t \ll t_R$. We extract $B_{\rm K}$ from the ratio: \begin{equation} R_{{\rm B_K}} = \frac{ C^{(3)}_{\bar{K}OK}(t-t_L,t-t_R) }{ C^{(2)}_{\bar{K}}(t-t_L) C^{(2)}_{K}(t-t_R) } \stackrel{t_L \ll t \ll t_R}\longrightarrow B_{{\rm K}} \ . \end{equation} In our analysis all correlation functions satisfy the condition $a\mu_l = a\mu_{\rm sea}$ while the valence strange-like quark mass values are given in Table~\ref{simuldetails}. An important remark is in order: the mixed regularization set-up that we have used leads at finite lattice spacing to different values for the decay constant and the pseudoscalar masses of the two K-mesons employed in the calculation. We find that the discretisation effects are negligible for the decay constant, while they turn out to be significant for the pseudoscalar mass. For this reason we normalize the four fermion matrix element by dividing by $(8/3) m_K^{OS}~ m_K^{tm}~f_K^{OS}~f_K^{tm}$. Moreover, as expected, the cutoff effects diminish drastically towards the continuum limit, so this kind of systematic error is well under control. The fits to the light quark mass behaviour are performed using the $SU(2)$ Partially Quenched Chiral Perturbation Theory formula of Refs.~\cite{SharpeZhang,Alltonetal}. In our case the fit ansatz is: \begin{equation} B(\mu_h) \,\, = \,\, B_\chi(\mu_h) \, \Big [ 1 \, + \, b(\mu_h) \, \frac{2B_0}{f^2} \, \mu_l \, - \, \frac{2B_0}{32\pi^2 f^2} \mu_l \, \ln\big (\frac{2B_0\mu_l}{\Lambda_\chi^2} \big ) \Big ] + D(\mu_h) a^2 \label{eq:pqchipt} \end{equation} where $\mu_h$ denotes the quark mass values around the strange quark (see Table~\ref{simuldetails}). Thus, the fit procedure consists of a combined chiral and continuum extrapolation. We find that the cutoff effects on our data are well described by a $\mu_l$-independent (but $\mu_h$-dependent) $O(a^2)$ term. Two methods of analysis have been followed. The first method relies on using the physical mass values of the up/down and strange quarks in the continuum limit, as estimated in a recent ETMC computation \cite{ETMC-prep1}. Note that the implementation of this method requires the knowledge of the quark mass renormalization constant \cite{REN}. The second method consists of employing the pseudoscalar masses instead of the quark masses. In this case we choose a set of three values of reference pseudoscalar masses made out of two strange-like quarks, $M_{hh}$; keeping each of them fixed, we perform the chiral fits in terms of the light pseudoscalar mass.
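To illustrate the structure of such a combined chiral and continuum fit, the following schematic Python sketch implements the ansatz of eq.~(\ref{eq:pqchipt}). The low-energy constants, data points and errors are purely illustrative placeholders (the synthetic data are generated from the ansatz itself), so the sketch only shows the mechanics of the fit, not our actual analysis.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (placeholder) low-energy inputs; a real analysis must use
# consistent units and the values determined by the collaboration.
TWO_B0 = 5.0      # 2*B0
F2     = 0.0165   # f^2
LCHI2  = 1.0      # Lambda_chi^2

def ansatz(x, B_chi, b, D):
    """Eq. (pqchipt): SU(2) PQChPT chiral log in mu_l plus an O(a^2) term."""
    mu_l, a2 = x
    bracket = (1.0 + b * (TWO_B0 / F2) * mu_l
               - (TWO_B0 / (32.0 * np.pi**2 * F2)) * mu_l
               * np.log(TWO_B0 * mu_l / LCHI2))
    return B_chi * bracket + D * a2

# placeholder light quark masses and squared lattice spacings, with
# synthetic "measurements" drawn from the ansatz itself
mu_l = np.array([0.0030, 0.0040, 0.0060, 0.0064, 0.0080, 0.0085, 0.0100])
a2   = np.array([0.065, 0.085, 0.065, 0.085, 0.100, 0.085, 0.085]) ** 2
rng  = np.random.default_rng(1)
B_dat = ansatz((mu_l, a2), 0.52, 0.6, 0.4) + rng.normal(0.0, 0.004, mu_l.size)
B_err = np.full_like(B_dat, 0.004)

popt, pcov = curve_fit(ansatz, (mu_l, a2), B_dat, sigma=B_err,
                       absolute_sigma=True, p0=[0.5, 0.5, 0.0])
print("B_chi = %.3f +/- %.3f  (combined chiral and continuum limit)"
      % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}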
At the end of the procedure we estimate $B_{\rm K}$ via an interpolation at the physical point defined by the formula $M_{ss}^2 = 2 M_K^2 - M_{\pi}^2 $. Both methods give compatible final results within less than one standard deviation. In Figure \ref{fig:BK_Z}(a) the quality of the plateau is shown for $\beta=3.90$, for three values of the light quark mass and for one typical value of $\mu_h$; in Figure \ref{fig:BK_Z}(b) we present an example of a combined chiral plus continuum fit (three values of the lattice spacing) for $B_{\rm K}^{{\rm RGI}}(l,h)$ versus the light pseudoscalar mass squared in units of $r_0$; the value of $M_{hh}$ is in the vicinity of the physical one. \begin{figure}[!h] \begin{center} \subfigure[]{\includegraphics[scale=0.64,angle=-0]{BK_plateau_b390.ps}} \subfigure[]{\includegraphics[scale=0.58, angle=-0]{BK_vs_Mll2.ps}} \caption[]{(a) The quality of the plateau for three values of the light quark mass for $\beta=3.90$; (b) Combined chiral and continuum fit for $B_{\rm K}^{{\rm RGI}}(l,h)$ versus $(r_0 M_{ll})^2$. The empty black circle gives the value at the continuum limit for the case of $(r_0 M_{hh})=1.50$.} \label{fig:BK_Z} \end{center} \end{figure} The renormalisation constants of the axial and vector currents have been calculated using the RI-MOM method \cite{rimom2}. We recall that the physical axial current made up of OS quarks is normalized by $Z_A$ while the one consisting of tm quarks is normalized by $Z_V$ \cite{REN}. The RI-MOM method has also been employed for the calculation of the renormalisation constant of the four-fermion operator \cite{rimom4}. In Figure \ref{fig:Zs}(a) we show the behaviour of the renormalisation constant as a function of the momentum squared in lattice units $(ap)^2$ for $\beta=3.90$ in the valence chiral limit and for $a\mu_{sea}=0.0040$. Discretization effects of $O(a^2)$ have been evaluated at one loop \cite{Cyp} and subtracted from the relevant correlation functions. Thus, the leading discretization effects on our RI-MOM determination of the renormalization constant are of $O(g^4 a^2,g^2 a^4)$. We show three types of results; two of them correspond to two estimates of the subtracted perturbative contributions. The amount of the subtraction depends on the choice of the value for the gauge coupling. We have considered two cases for the gauge coupling, the naive ($g_0$) and the boosted one ($g_b$). We also show the result for $Z_{VA+AV}({\rm RGI})$ without considering any perturbative subtractions (indicated as ``uncorrected'' in the figure). In the right panel of the same figure we illustrate the absence of mixing with ``wrong chirality'' operators; in fact the mixing coefficients are vanishing. \begin{figure}[!h] \begin{center} \subfigure[]{\includegraphics[scale=0.24,angle=-90]{Z4.ps}} \subfigure[]{\includegraphics[scale=0.24,angle=-90]{z4Delta.ps}} \caption[]{(a) RI/MOM computation of the multiplicative renormalization factor $Z_{VA+AV}({\rm RGI})$ at $\beta=3.90$; (b) Mixing coefficients $\Delta_k$ ($k=1,\cdots ,4$) with other four-fermion operators with ``wrong chirality''.} \label{fig:Zs} \end{center} \end{figure} Our preliminary result for $B_{{\rm K}}$ in the RGI scheme in the continuum limit is $$B_{{\rm K}}^{{\rm RGI}}=0.73(3)(3)\ .$$ The first error includes the uncertainty coming from the correlators and from the fit procedure (chiral plus continuum), while the second one is due to the uncertainties in the calculation of the renormalisation constants.
We are currently attempting to reduce the latter uncertainty. \section{The K-meson bag parameter beyond the SM} Interactions beyond the SM, such as supersymmetric ones, furnish new diagrams in the calculation of the $\Delta S=2$ process. Their effect, expressed in the OPE, is to enrich the set of local operators to be considered in the low energy regime, see e.g. \cite{Ciuchini-etal98}. Therefore one has to calculate on the lattice the matrix elements of five parity-even four-fermion operators, namely $O_1=O_{VV+AA}, O_2=O_{SS+PP}, O_3=O_{\tilde{T}T}, O_4=O_{SS-PP}$ and $O_5=O_{VV-AA}$ \cite{Allton_etal98, Babich_etal06, Nakamura_etal06}. It is well known that the renormalisation pattern of the parity-even four-fermion operators becomes complicated because of mixings as soon as the regularization breaks the chiral symmetry; this is certainly the case for Wilson fermions. However, using the proposal of Ref.~\cite{Frezz-Rossi2}, this problem is bypassed: owing to the axial rotation that maps the parity-even operators onto parity-odd ones in the tm basis, the renormalisation pattern becomes continuum-like \cite{rimom4}. It is worth mentioning that, as in the case of the SM four-fermion operator, the lattice estimates of the matrix elements of $O_2, \ldots, O_5$ are automatically $O(a)$-improved. First results regarding the signal quality for the case of $\beta=3.90$ are given in Figure \ref{R_SSM}. We depict the plateaux for the $B_3$ bag parameter (left panel) and for the quantity $ R_3 \sim \frac{\langle \bar{K}| O_3|K \rangle}{\langle \bar{K}|O_1|K \rangle} $ (right panel). Both figures refer to the same value of the light quark mass for three different choices of the strange-like quark mass. The computation at the other two values of the lattice spacing, as well as a full determination of the renormalisation constant matrix, is still in progress. \begin{figure}[!h] \begin{center} \subfigure[]{\includegraphics[scale=0.29,angle=-90]{B3_plateaux.ps}} \subfigure[]{\includegraphics[scale=0.29,angle=-90]{R3_plateaux.ps}} \caption[]{(a) and (b): the quality of the signal for the quantities $B_3$ and $R_3$ respectively at three values of the strange-like quark mass using $a\mu_l=0.0040$ at $\beta=3.90$. } \label{R_SSM} \end{center} \end{figure} \section*{Acknowledgements} We thank our ETM collaborators for their help and encouragement. This work has been supported in part by the EU ITN contract MRTN-CT-2006-035482, ``FLAVIAnet''. F.M. acknowledges the support by CUR Generalitat de Catalunya under project 2009SGR502 and by the Consolider-Ingenio 2010 Program CPAN (CSD2007-00042), {\it UB-ECM-PF 09/26, ICCUB-09-230}. V.G. and D.P. thank MICINN (Spain) for partial financial support under grant FPA2008-03373.
\section{Introduction} Weak gravitational lensing is emerging as one of the most powerful probes of dark matter, dark energy \citep{DETF} and the nature of gravity at cosmological scales \citep{Jain08}. In less than a decade after first detections \citep{Bacon00,Kaiser00,VanWaerbeke00,Wittman00}, the lensing measurement accuracy and dynamical range have been improved dramatically (e.g. \citealt{Fu08}). Future weak lensing surveys have the potential to measure the lensing power spectrum with sub-$1\%$ statistical accuracy for many multipole $\ell$ bins. However, whether we can fully utilize this astonishing capability is up to the control over various systematic errors. They could arise from uncertainties in theoretical modeling, including the non-linear evolution of the universe \citep{Heitmann05,Coyote1,Coyote2} and the influence of baryons \citep{White04,Zhan04,Jing06,Rudd08}. They could also arise from uncertainties in the lensing measurement. An incomplete list includes the galaxy intrinsic alignment \citep{Hirata04,Mandelbaum06,Hirata07,Okumura09a,Okumura09b}, influence of the telescope PSF \citep{STEP1,STEP2}, photometric redshift (photo-z) calibration errors \citep{MaHuHuterer, Bernstein09b}, etc. Precision lensing cosmology puts stringent requirements on calibrating these errors \citep{Huterer06}. Weak lensing surveys are rich in physics and contain information beyond the cosmic shear power spectrum \citep{Bernstein09,Zhang08}. This bonus allows for self-calibration of weak lensing systematic errors, such as the galaxy intrinsic alignment \citep{Zhang08,Joachimi09}. In the present paper, we utilize the galaxy density-shear cross-correlation and density-density correlations in photometric survey data to self-calibrate the photo-z scatters between redshift bins. Namely, for a given photo-z bin, we want to figure out (reconstruct) the fraction of galaxies which are actually located in a distinct redshift bin. These scatters quantitatively describe the photo-z outliers or catastrophic errors. The effect of photo-z outliers on cosmological inference from the shear-shear power spectrum is discussed by \citet{Bernstein09b}, which also quantifies the task of calibrating these outliers by direct spectroscopic sampling of the galaxy population. Since spectroscopic sampling of faint galaxies at $>99\%$ completeness is an expensive or infeasible task for current ground-based capabilities, \citet{Newman08} proposes a technique based on cross-correlation between the photo-z sample and an incomplete spectro-z sample. Here we ask a complementary question: how well can the outlier rate be determined using purely photometric data from the original lensing survey? As pointed out by \citet{Schneider06}, the spurious cross correlation between the galaxy density in two photo-z bins can be explored to calibrate photo-z errors. They found that, for a photo-z bin of the size $\Delta z=0.5$ in a LSST-like survey, $1\%$ level scatters can be identified and the mean redshift can be calibrated within the accuracy of $\sim 0.01$. This result is impressive. Unfortunately, a factor of $10$ improvement is still required to meet the statistical accuracy of those ambitious ``stage IV'' projects ($\sim 10^{-3}$, \citealt{Huterer06}). In combination with baryon acoustic oscillation and weak lensing measurements, the constraints can be significantly improved \citep{Zhan06b}. However, these procedures adopt a number of priors/parameterizations, which may bias the calibration. 
A successful self-calibration should not correlate cosmological uncertainties with astrophysical uncertainties (in our case, photo-z errors). To meet this requirement, the self-calibration should adopt as few cosmological priors as possible, preferably none. On the other hand, it should not result in loss of cosmological information. In this paper, we propose to combine the galaxy-galaxy clustering measurement and shear-galaxy cross correlation measurement to perform the photo-z self-calibration, strictly reserving the shear-shear measurement for cosmology. Finally, it must be able to reach sufficiently high statistical accuracy and have controllable systematics, if any. As we will show, this self-calibration meets all the three requirements and is able to detect scatters as low as $0.01\%$. There are a number of differences between our self-calibration method and the method proposed by \citet{Schneider06}. (1) The inclusion of shear-galaxy cross correlation measurement breaks a severe degeneracy in the previous method and thus significantly improves the calibration accuracy. (2) We do not adopt any parameterizations on the photo-z probability distribution function (PDF) and are thus free of possible bias induced by improper parameterizations. (3) This method is a true self-calibration, in the sense that the photo-z scatters are reconstructed solely from the given weak lensing surveys, no external measurements nor priors on cosmology and galaxy bias, are needed.\footnote{Of course, the calibration accuracy depends on the fiducial photo-z PDF or the actual photo-z PDF in the given survey.} (4) We argue that the shot noise is the only relevant noise term for the likelihood analysis. Sample variance and non-Gaussianity, do not affect the reconstruction. This allows us to go deeply into the nonlinear regime, gain many more independent modes for the reconstruction, and significantly improve the reconstruction accuracy. Like most of these predecessor papers, we analyze an idealized survey: apparent density fluctuations induced by gravitational lensing are ignored; galaxy biasing is assumed to be common to all galaxies at a given redshift; and shear measurement errors are ignored. Incorporation of these and other effects can substantially degrade the correlation-based photo-z calibration methods \citep{Bernstein09b}. A framework for comprehensive analysis of photometric$+$spectroscopic lensing survey data is presented in \citet{Bernstein09}, and is necessary to make a final judgment on the efficacy of photo-z self-calibration. However this complicated analysis has not yet been applied to the problem of photo-z outlier calibration. The simpler analysis presented here will demonstrate that the galaxy-shear and galaxy-galaxy correlations contain sufficient information to measure the outlier rate to useful precision, and we will then examine the possibility of degradation by the non-ideal effects. This paper is organized in the following way. In \S \ref{sec:calibration}, we describe our self-calibration technique and target a fiducial ``Stage IV'' lensing survey for the error forecast. We discuss in \S\ref{sec:statistical} possible systematic errors which can be incorporated into our technique and will not bias the photo-z PDF reconstruction. Other systematic errors cannot be self-calibrated without strong priors or external information. For these, we quantify the induced bias in \S \ref{sec:bias}. 
We further discuss uncertainties in the error forecast due to uncertainties in the fiducial model and the robustness of our self-calibration technique (\S \ref{sec:fiducial}). We discuss possibilities to improve the calibration accuracy (\S \ref{sec:discussion}). We also include two appendices (\S \ref{sec:appendixA} \& \ref{sec:appendixB}) for technical details of the Fisher matrix analysis and bias estimation. \section{Photo-z self-calibration} \label{sec:calibration} We first define several key notations used throughout the paper. \begin{itemize} \item The superscript ``P'' denotes the property in the photo-z bin. \item The superscript ``R'' denotes the corresponding property in the true-z bin. \item The capital ``G'' denotes gravitational lensing, or, more specifically, the lensing convergence converted from the more directly observable cosmic shear. \item The lower-case ``g'' denotes galaxy number density (or over-density). \end{itemize} We split galaxies into $N_z$ photo-z bins. The $j$-th photo-z bin has the range $[z_j-\Delta z_j/2,z_j+\Delta z_j/2)$. Our convention is that a larger bin index means a higher photo-z. $N_j$, the total number of galaxies in the $j$-th photo-z bin, is an observable. We also have $N_z$ true-z bins, with the choice of redshift range identical to that of the photo-z bins. We denote by $N_{i\rightarrow j}$ the total number of galaxies in the $j$-th photo-z bin which also belong to the $i$-th true-z bin, namely, whose true redshift falls in $[z_i-\Delta z_i/2,z_i+\Delta z_i/2)$. The process $i\rightarrow j$ is similar to scatters (transitions) between different quantum states, so we often refer to this process as the photo-z scatter. The scatter rate is related to the photo-z probability distribution function, $p(z|z^P)$, by the following relation \begin{equation} N_{i\rightarrow j}=\int_{z_j-\frac{\Delta z_j}{2}}^{z_j+\frac{\Delta z_j}{2}} n(z^P)dz^P\left[\int_{z_i-\frac{\Delta z_i}{2}}^{z_i+\frac{\Delta z_i}{2}} p(z|z^P) dz\right]\ . \end{equation} Here, $n(z^P)dz^P$ is the number distribution of galaxies in the photo-z space. We define $p_{i\rightarrow j}\equiv N_{i\rightarrow j}/N_j$, which represents the averaged $p(z|z^P)$ over the relevant redshift ranges, or the binned photo-z PDF. In the limit that $\Delta z_{i,j}\rightarrow 0$, $p_{i\rightarrow j}\rightarrow p(z_i|z^P_j)$. Since $\sum_i p_{i\rightarrow j}=1$, we have $N_z(N_z-1)$ independent $p_{i\rightarrow j}$.\footnote{In general, $p_{i\rightarrow j}$ and $p_{j\rightarrow i}$ are independent and $p_{i\rightarrow j}\neq p_{j\rightarrow i}$.} These $p_{i\rightarrow j}$ together represent a non-parametric description of the photo-z PDF $p(z|z^P)$ and completely describe the scattering probabilities between redshift bins, namely the rate of photo-z outliers, or leakage rate. In \citet{Bernstein09b}, they are called ``contamination'' coefficients. Scatters in photo-z, especially catastrophic photo-z errors, cause a number of spurious correlations between photo-z bins. Our proposal is to reconstruct all $p_{i\rightarrow j}$ from these cross correlations. For convenience, we will work with the corresponding cross power spectra throughout the paper, instead of the cross correlation functions. We use fiducial power spectra, fiducial leakage $p_{i\rightarrow j}$ and the survey specification to perform the error forecast. To generate the fiducial power spectra, we adopt a flat $\Lambda$CDM cosmology with $\Omega_m=0.27$, $\Omega_{\Lambda}=1-\Omega_m$, $\Omega_b=0.044$, $\sigma_8=0.84$ and $h=0.71$.
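As a practical illustration of these definitions, the binned scattering matrix $p_{i\rightarrow j}=N_{i\rightarrow j}/N_j$ can be tabulated from any catalogue of true and photometric redshift pairs by a simple two-dimensional histogram. The Python sketch below does this for a purely hypothetical mock catalogue (a Gaussian photo-z scatter plus a small outlier fraction; it is not the simulated data used for our forecast); by construction, each column satisfies $\sum_i p_{i\rightarrow j}=1$.
\begin{verbatim}
import numpy as np

def scatter_matrix(z_true, z_phot, edges):
    """p[i, j] = N_{i->j} / N_j: fraction of galaxies in photo-z bin j
    whose true redshift falls in true-z bin i (same edges for both)."""
    N_ij, _, _ = np.histogram2d(z_true, z_phot, bins=[edges, edges])
    N_j = N_ij.sum(axis=0)                     # galaxies per photo-z bin
    return N_ij / np.where(N_j > 0, N_j, 1.0)

# hypothetical mock catalogue, for illustration only
rng = np.random.default_rng(42)
z_true = rng.uniform(0.0, 4.0, 200000)
z_phot = z_true + rng.normal(0.0, 0.05 * (1.0 + z_true))
outlier = rng.random(z_true.size) < 0.02       # 2% catastrophic outliers
z_phot[outlier] = rng.uniform(0.0, 4.0, outlier.sum())

edges = np.arange(0.0, 4.5, 0.5)               # the eight coarse bins
p = scatter_matrix(z_true, z_phot, edges)
print(p.sum(axis=0))                           # each column sums to 1
print("p_{3->2} =", p[2, 1])                   # leakage of true bin 3 into photo-z bin 2
\end{verbatim}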
The transfer function is obtained using CMBFAST \citep{CMBFAST}. The nonlinear matter power spectrum is calculated using the fitting formula of \citet{Smith03}. Unless specified, we adopt a fiducial galaxy bias $b_g=1$. We are then able to calculate the galaxy power spectra and lensing-galaxy power spectra. The calculation of these power spectra is by no means precise, due to the simplification of $b_g=1$. However, this simple bias model suffices for the purpose of this paper, namely, to demonstrate the feasibility of our self-calibration technique. We will further investigate the impact of scale dependent bias in \S \ref{sec:fiducial}, where we find that our self-calibration technique is also applicable. \begin{figure} \centering \includegraphics{zszp.eps} \caption{The fiducial $z^P$-$z^S$ scatters adopted for the forecast. The simulated data \citep{Bernstein09b} has $177210$ galaxies. To control the size of the figure, we only show $2\%$ of them, randomly chosen. Although there are indeed $0.7\%$ galaxies at $z^P>4$, in the analysis we have taken the freedom to disregard these galaxies. To make the analysis closed, we shall adopt the approximation that no galaxies with $z^P<4.0$ come from $z^S>4.0$. We check that galaxies with $z^S>4.0$ only account for a tiny fraction of the whole $z^P<4$ sample ($0.06\%$) and an even smaller fraction for low photo-z bins (and thus they do not show up in this figure, since we only plot $2\%$ of all galaxies). So this approximation is sufficiently accurate for the purpose of this paper. \label{fig:zszp}} \end{figure} We also need the galaxy distribution $n(z^P)$ and the input of fiducial $p_{i\rightarrow j}$. Due to uncertainties in survey specifications, we will not focus on any specific survey. Instead, we will target a fiducial lensing survey with some characteristics of ``Stage IV'' lensing surveys like LSST\footnote{Large Synoptic Survey Telescope, http://www.lsst.org/lsst}, Euclid\footnote{http://www.astro.ljmu.ac.uk/~airs2008/docs/euclid\_astronet\%202.pdf} and JDEM. The analysis can be redone straightforwardly to incorporate changes in the survey specifications. We follow \citet{Huterer06,Zhan06} and adopt the photo-z distribution $n(z^P)dz^P=x^2\exp(-x)dx/2$, where $x\equiv z^P/z_0$. For such a distribution, the median redshift is $z_m=2.675z_0$. We adopt $z_0=0.45$, the mean galaxy surface density $\bar{n}_g=40$ per arcmin$^2$, and the fractional sky coverage $f_{\rm sky}=0.5$. The rms dispersion in the shear measurement induced by the galaxy intrinsic ellipticities is adopted as $\gamma_{\rm rms}=0.2$. The fiducial $p(z|z^P)$ and $p_{i\rightarrow j}$ are calculated from the simulated data of \citet{Bernstein09b}, which were produced using the method described in \citet{Jouvel09}. The $z^P$-$z^S$ distribution in this data is shown in Fig. \ref{fig:zszp}, where $z^S$ is the spectroscopic redshift, which can be approximated as the true redshift. This simulated data is for SNAP-like space weak lensing surveys, conducted in 8 broad bands spanning $0.38$--$1.7\,\mu$m wavelength. These surveys have near-IR bands and hence likely smaller photo-z errors than LSST. Since we find that the statistical accuracy of the photo-z reconstruction is not very sensitive to the choice of fiducial photo-z PDF and since photo-z calibration has room to improve, the adopted photo-z PDF should be a good representative case to illustrate our method. Furthermore, the accuracy of the self-calibration is sensitive to the choice of redshift bins.
Unless otherwise specified, we will adopt eight redshift bins, $[0.0,0.5)$, $[0.5,1.0)$, $[1.0,1.5)$, $[1.5,2.0)$, $[2.0,2.5)$, $[2.5,3.0)$, $[3.0,3.5)$ and $[3.5,4.0)$. For comparison, we also investigate the case of much finer redshift bins, $[0.0,0.1)$, $[0.1,0.2)$, $\ldots$, $[1.7,1.9)$, $[1.9,2.2)$, $[2.2,2.6)$, $[2.6,3.0)$, $[3.0,4.0)$. We call these coarse bins and fine bins, respectively. We have utilized the freedom in the data analysis and disregarded the galaxies with $z^P>4.0$. These galaxies only account for $0.7\%$ of the total galaxies. Neglecting them does not result in significant loss of information. On the other hand, these $z^P>4$ galaxies may have large photo-z catastrophic errors (Fig. \ref{fig:zszp}). Neglecting them helps to reduce systematic errors. To close the scattering process, we need to assume that no galaxies with $z^P<4.0$ come from $z^S>4.0$. Under this assumption, what our self-calibration technique actually does is to assign those $z^S>4$ galaxies to the $[3.5,4.0)$ true-z bin, instead of randomly assigning them elsewhere.\footnote{These galaxies do not lens other galaxies so they have to be put in the highest redshift bin. More details on how the self-calibration technique ranks galaxies are given in \S \ref{subsec:fullcalibration}.} We argue that this approximation is sufficiently accurate, for two reasons. First, in the simulated $z^S-z^P$ data we used, the total fraction of galaxies with $z^P<4.0$ and $z^S>4.0$ is only $0.06\%$. Second, about $90\%$ of them leak into the $[3.5,4.0)$ photo-z bin. Although the contamination rate in this redshift bin is high ($4.8\%$), the induced error in lensing modelling is small, since the lensing weighting kernel varies slowly at $z\sim 4$. Contaminations to other photo-z bins are much smaller. These $z^S>4$ galaxies account for $0.01\%$ of total galaxies in the $[0.0,0.5)$ photo-z bin, $0.02\%$ in the $[0.5,1.0)$ and $[3.0,3.5)$ photo-z bins. In the adopted simulated data, we detect no such galaxies in photo-z bins $\in [1.0,3.0)$. Due to these tiny fractions, mis-assignment of these galaxies is not a limiting factor of our self-calibration technique. \subsection{Galaxy-galaxy clustering} Photo-z errors, especially catastrophic errors, induce non-zero galaxy-galaxy correlations between different redshift bins, which would otherwise vanish at sufficiently small scales, where the Limber approximation holds. This set of correlations has been exploited to perform the photo-z self-calibration \citep{Schneider06}. In the presence of photo-z scatters, the measured galaxy surface density in a given photo-z bin is a combination of the corresponding ones in the true redshift bins, \begin{equation} \label{eqn:g} \delta_i^{\Sigma,P}=\sum_k p_{k\rightarrow i} \delta^{\Sigma,R}_k\ . \end{equation} The galaxy power spectrum between the $i$-th and $j$-th photo-z bins is \begin{eqnarray} \label{eqn:gg} C^{gg,P}_{ij}=\sum_{km} p_{k\rightarrow i}p_{m\rightarrow j}C_{km}^{gg,R}\simeq \sum_k p_{k\rightarrow i}p_{k\rightarrow j}C_{kk}^{gg,R}\ . \end{eqnarray} The last equality uses $C_{k\neq m}^{gg,R}=0$, which holds under the Limber approximation. At sufficiently large angular scales, the Limber approximation fails and the intrinsic cross correlation $C_{k\neq m}^{gg,R}\neq 0$. For this reason, we exclude the modes with $\ell<100$. We will further discuss this issue later in the paper (\S \ref{sec:Limber}). \begin{figure} \centering \includegraphics{cl.eps} \caption{Diagnostics of photo-z errors.
Scatters between redshift bins caused by photo-z errors induce non-zero correlations $C^{gg,P}_{i\neq j}$ and $C^{Gg,P}_{i<j}$, which otherwise vanish. Here $i,j=1,2\ldots$ denote different redshift bins, the larger the $i,j$, the higher the redshift. These spurious cross correlations serve as diagnostics of photo-z scatters. We show the resulting $C^{gg,P}_{24}$ and $C^{Gg,P}_{23}$. The errorbars shown are from the shot noise. As explained in the text, this is the only relevant source of error. \label{fig:cl}} \end{figure} If the relevant photo-z scatters are sufficiently large, $C^{gg,P}_{i\neq j}$ will dominate the associated shot noise. In this case, the relevant photo-z scatters become detectable. For the adjacent bins ($|i-j|=1$), the dominant contribution to $C^{gg,P}_{ij}$ obviously comes from $p_{i\rightarrow j}$ and $p_{j\rightarrow i}$, unless the corresponding $p_{i\rightarrow j}$ and $p_{j\rightarrow i}$ are tiny, which is unlikely. For correlations between non-adjacent photo-z bins, this may not be true. In Fig. \ref{fig:cl}, we show the result of $C^{gg,P}_{24}$. In this case, the dominant contribution comes from $C^{gg,R}_{33}$, through the scatters $3\rightarrow 2$ and $3\rightarrow 4$. Due to the huge sky coverage of the fiducial stage IV lensing survey, even though the signal $C^{gg,P}_{24}$ is suppressed by a huge factor ($\simeq p_{3\rightarrow 2}\times p_{3\rightarrow 4}$) relative to $C^{gg,R}_{33}$, it still overwhelms the shot noise and becomes observable. However, an intrinsic degeneracy encoded in the galaxy-galaxy clustering significantly limits the accuracy of this approach. From Eq. \ref{eqn:gg}, it is clear that the correlation $C^{gg,P}_{i\neq j}$ can be induced by a nonzero $p_{i\rightarrow j}$ or by a non-zero $p_{j\rightarrow i}$. In other words, $C^{gg,P}_{ij}(\ell)$ ($i\neq j$) only measures the combination $p_{j\rightarrow i}p_{j\rightarrow j}C^{gg,R}_{jj}(\ell)+p_{i\rightarrow i}p_{i\rightarrow j}C^{gg,R}_{ii}(\ell)$. Thus with measurement at a single multipole $\ell$ bin, the galaxy clustering measurement alone can not break this degeneracy between up and down scatters. Adding more $\ell$ bins can break this degeneracy---if the ratio $C^{gg,R}_{jj} / C^{gg,R}_{ii}$ varies with $\ell$. If for example the 3D galaxy power spectrum is a strict power-law over the relevant scales at the relevant redshift range, whose power index does not vary with redshift,\footnote{This condition is necessary. If the power index of the 3D power spectrum varies with redshift, the shape of $C^{gg}_{ii}(\ell)$ will vary with redshift. } then all $C^{gg}_{ii}(\ell)$ are self-similar and the solution remains degenerate. Observationally, departures from a power law have been found in the galaxy correlation function \citep{Zehavi04}, so galaxy clustering measurements at many $\ell$ bins can break the above degeneracy. Nevertheless, galaxy correlation functions are observed to be close to power laws. This degrades the reconstruction accuracy. Furthermore, the slope depends weakly on galaxy type. If one uses the slight deviations from a power law, small changes in the slope of the leaking population could lead to large systematic reconstruction errors. This is an example of the galaxy distribution bias that we will scrutinize in \S \ref{sec:galaxybias}. We will further investigate this issue in \S \ref{sec:fiducial}. 
In the next section, we will show that, due to the unique geometry dependence of gravitational lensing, the degeneracy between up and down scatters is broken naturally by adding the lensing-galaxy correlation, resulting in significant improvement in the reconstruction. \subsection{Lensing-galaxy correlations} Galaxy-galaxy lensing brings $N_z^2$ new observables for each angular scale, which could be used to break the previous degeneracy. The scatter can render the otherwise vanishing foreground lensing-background galaxy cross correlation non-zero ($C^{Gg,P}_{i<j}\neq 0$). More importantly, lensing, due to its geometry dependence, distinguishes up-scatters from down-scatters. This new piece of information is the key to significantly improve the photo-z self-calibration. Without loss of generality, we will work on the lensing convergence $\kappa$, instead of the more direct observable cosmic shear $\gamma$, which are locally equivalent in Fourier space.\footnote{Cosmic shear measurement directly measures the reduced shear $\gamma/(1-\kappa)$. So the above statement only holds at first order approximation. However, this complexity does not affect our self-calibration, in which we do not rely on a theory to predict $\gamma$ or $\kappa$.} With the presence of photo-z scatters, the measured lensing convergence in a given photo-z bin is some linear combination of the ones in true-z bins weighted by the scatter probability, \begin{equation} \kappa_i^{P}=\sum_k p_{k\rightarrow i} \kappa^R_k\ . \end{equation} The cross correlation power spectrum between the lensing convergence in the $i$-th photo-z bin and the galaxy number density in the $j$-th photo-z bin is given by \begin{eqnarray} \label{eqn:Gg} C^{Gg,P}_{ij}&=&\sum_{k\geq m} p_{k\rightarrow i}p_{m\rightarrow j}C_{km}^{Gg,R} \ . \end{eqnarray} In the absence of lensing magnification bias, $C_{km}^{Gg,R}\neq 0$ only when the source redshifts are higher than the galaxy redshifts (namely, $k\geq m$). Discussion of magnification bias and other errors will be postponed to \S\ref{sec:statistical} and \S\ref{sec:bias}. We show in Fig. \ref{fig:cl} the case of $C^{Gg,P}_{23}$ as an example. It is mainly contributed by $C^{Gg,R}_{32}$ through the scatters $3\rightarrow 2$ and $2\rightarrow 3$, by $C^{Gg,R}_{22}$ from the scatters $2\rightarrow 3$ and by $C^{Gg,R}_{33}$ from the scattering $3\rightarrow 2$. Given the strength of $p_{3\rightarrow 2}$ and $p_{2\rightarrow 3}$, the resulting $C^{Gg,P}_{23}$ is sufficiently large to overwhelm the shot noise at $\ell<10^4$. We notice that, depending on the values of the relevant $p$ and the size of the redshift bin, the dominant contribution can come from the $C^{Gg,R}_{i=j}$ term, despite the heavy suppression in its amplitude due to the low amplitude of the lensing kernel over the relevant redshift range. In most of the cases, $C^{Gg,P}$ can not be measured with comparable accuracy to that of $C^{gg,P}$ (however, refer to Fig. \ref{fig:cl} for one exception). However, we point out that it is valuable to include this piece of information in the photo-z self-calibration. It turns out to be the key to breaking the strong degeneracy between up and down scatters. Look at the configuration $i\geq k>j$. The scatter $i\rightarrow j$ contributes to $C^{Gg,P}_{jk}$, while the scatter $j\rightarrow i$ does not. For this reason, it can break the degeneracy between $i\rightarrow j$ and $j\rightarrow i$, encountered in the self-calibration based on galaxy clustering alone. 
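Eqs. (\ref{eqn:gg}) \& (\ref{eqn:Gg}) constitute the forward model that the self-calibration inverts. The short NumPy sketch below evaluates both sets of observable spectra for a given scattering matrix; the input true-z spectra are toy power laws chosen purely for illustration (they are not the fiducial spectra of this section), and a single leakage $p_{3\rightarrow 2}$ is put in by hand to show how it switches on the otherwise vanishing $C^{gg,P}_{23}$ and $C^{Gg,P}_{23}$.
\begin{verbatim}
import numpy as np

def observed_spectra(p, C_gg_true, C_Gg_true):
    """Forward model of Eqs. (gg) and (Gg).
    p[k, i]           : p_{k->i}
    C_gg_true[k, l]   : C^{gg,R}_{kk}(ell)  (auto spectra only, Limber)
    C_Gg_true[k, m, l]: C^{Gg,R}_{km}(ell), zero for k < m (no magnification bias)
    """
    C_gg_P = np.einsum('ki,kj,kl->ijl', p, p, C_gg_true)
    C_Gg_P = np.einsum('ki,mj,kml->ijl', p, p, C_Gg_true)
    return C_gg_P, C_Gg_P

Nz, ell = 4, np.arange(100, 1000, 100)
# toy true-z spectra with mildly redshift-dependent slopes (illustrative only)
C_gg = np.array([1e-4 * (ell / 100.0) ** (-0.9 - 0.05 * k) for k in range(Nz)])
C_Gg = np.zeros((Nz, Nz, ell.size))
for k in range(Nz):
    for m in range(k + 1):               # sources (k) behind or at the lens bin (m)
        C_Gg[k, m] = 1e-6 * (k - m + 1) * (ell / 100.0) ** -0.8

p = np.eye(Nz)
p[2, 1], p[1, 1] = 0.03, 0.97            # 3% leakage of true bin 3 into photo-z bin 2
C_gg_P, C_Gg_P = observed_spectra(p, C_gg, C_Gg)
print(C_gg_P[1, 2, 0], C_Gg_P[1, 2, 0])  # spurious C^{gg,P}_{23}, C^{Gg,P}_{23} at ell=100
\end{verbatim}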
The discriminating power relies on the intrinsic asymmetry between up and down scatters in generating the lensing effect. So it remains efficient, even in the case of self-similar $C^{gg}_{ii}(\ell)$ , where the self-calibration based on galaxy clustering alone blows up (\S \ref{sec:fiducial}). \subsection{The photo-z self-calibration combining the galaxy-galaxy and lensing-galaxy measurements} \label{subsec:fullcalibration} Counting degrees of freedom suggests that we should be able to perform a rather model-independent self-calibration without any priors. Mathematically, we need to solve Eq. \ref{eqn:gg} \& \ref{eqn:Gg} for all $p_{j\rightarrow i}$, $C^{gg,R}_{i=j}$ and $C^{Gg,R}_{i\geq j}$ simultaneously. At the beginning of this process, $C^{gg,P}$ and $C^{Gg,P}$ should be replaced with the corresponding measurements. The reconstructed power spectra $C^{gg,R}$ and $C^{Gg,R}$ contain valuable information on cosmology and can be further explored. The reconstructed $p_{i\rightarrow j}$ can then be applied to the shear-shear correlation measurement to correct for bias induced by photo-z scatters and infer the correct cosmology. In the present paper, we will focus on only $p_{i\rightarrow j}$. Thus we treat $C^{gg,P}$ and $C^{Gg,P}$ as nuisance parameters to be marginalized over. The unknown parameters to be determined simultaneously by our self-calibration technique are $\lambda=(p_{\mu\rightarrow \nu},C^{Gg,R}_{ij}(\ell_1), C^{gg,R}_{kk}(\ell_1)\ldots)$, with $\mu\neq \nu$, $i\geq j$ and $\mu,\nu,i,j,k=1,\ldots,N_z$. For $N_{\ell}$ multipole $\ell$ bins and $N_z$ redshift bins, we have $N_z(N_z+3)N_l/2+N_z(N_z-1)$ quantities to solve ($N_{\ell}N_z$ for $C^{gg,R}_{kk}$, $N_{\ell}N_z(N_z+1)/2$ from $C^{Gg,R}_{km}$ ($k\geq m$) and $N_z(N_z-1)$ for $p_{k\rightarrow m}$). On the other hand, we have $N_{\ell}N_z(3N_z+1)/2$ independent measurements of correlations. $N_{\ell}N_z^2$ of them come from $C^{Gg,P}_{ij}$ and $N_{\ell}N_z(N_z+1)/2$ from $C^{gg,P}_{i\leq j}$.\footnote{The measurement $C^{gg,P}_{ij}$ is identical to the measurement $C^{gg,P}_{ji}$.} The equations to solve are quadratic in $p_{i\rightarrow j}$ and linear in $C_{ij}$ (Eq. \ref{eqn:gg} \& \ref{eqn:Gg}). For this reason, to guarantee a unique solution for $p_{i\rightarrow j}$, the number of measurements should be at least $N_z(N_z-1)$ larger than the number of unknowns. This condition is satisfied when $N_{\ell}\geq 2$. If all the equations ( Eq. \ref{eqn:gg} \& \ref{eqn:Gg}) are independent then $N_{\ell}=2$ is the minimum requirement. If some of the equations are linear combinations of the others, we will need $N_{\ell}>2$. For example, if the galaxy power spectra are strict power laws in $\ell$, then even perfect measurements of $C^{gg}(\ell)$ at all $\ell$ and redshift bins cannot break the degeneracies in $p$ and the self-calibration fails. In reality, none of the galaxy power spectra and shear-galaxy power spectra is strictly power law and we expect a valid self-calibration barring another unforeseen degeneracy. Furthermore, the baryon oscillations leave features in $C^{gg}$ and $C^{Gg}$, which help to improve the reconstruction accuracy \citep{Zhan06a}. To better understand the self-calibration process, we recast it as a mathematical problem to assign galaxies with photo-z labels into correct subsets (true-z bins), based on the cross correlation measurements. The condition $C^{gg,R}_{i\neq j}=0$ tells us that these subsets do not overlap with each other in true redshift. 
The condition $C^{Gg,R}_{i<j}=0$ sets the correct order of these true-z bins, ranking from low to high redshifts, since only galaxies behind a lens can be lensed. Furthermore, the measured power spectra have different dependences on $C^R$ and $p_{i\rightarrow j}$ (linear in $C^{R}$ and quadratic in $p_{i\rightarrow j}$). This implies the possibility of separating $p$ from $C^R$. However, careful readers may have already noticed a puzzling behavior. Eq. \ref{eqn:gg} \& \ref{eqn:Gg} are invariant under the scaling \begin{eqnarray} \label{eqn:scaling} p_{k\rightarrow i}&\rightarrow& f_k^{-1}p_{k\rightarrow i}\nonumber \ ,\\ C^{gg,R}_{kk} &\rightarrow& f_k^2 C^{gg,R}_{kk}\ , \\ C^{Gg,R}_{km} &\rightarrow& f_kf_m C^{Gg,R}_{km} \ , \nonumber \end{eqnarray} where $f_k$ are some arbitrary constants. This seems to imply that we are not able to determine $p_{i\rightarrow j}$ without knowing $C^{gg,R}$ and $C^{Gg,R}$. {\it However, this is not a real degeneracy}. The reason is that we have the conditions $\sum_{k} p_{k\rightarrow i}=1$. These $N_z$ constraints uniquely fix the $N_z$ degrees of freedom $f_k$. In the Fisher matrix analysis carried out in the next section, we enforce the conditions $\sum_{k} p_{k\rightarrow i}=1$ by explicitly setting $p_{i\rightarrow i}=1-\sum_{k\neq i}p_{k\rightarrow i}$. The above solution to this puzzling problem also leads to the solution of another question. Can we choose more true-z bins than photo-z bins in the self-calibration technique? The answer is no. In such a case, the number of unknown constants $f_k$ is larger than the number of constraints $\sum_{k} p_{k\rightarrow i}=1$. We are then not able to uniquely fix $f_k$, and thus $p_{k\rightarrow i}$. The absence of degeneracy in the self-calibration is, in the end, confirmed by the stability of our Fisher matrix inversion (below) for all the configurations that we have checked, including both the coarse bins and fine bins, various $n(z^P)$ and $p(z|z^P)$. \footnote{To be more rigorous, the stability of the Fisher matrix inversion means that the solution is a local maximum in the likelihood space, since the Fisher matrix is based on the Taylor expansion around the given solution. For the solution to be unique, in principle we have to go through the whole likelihood space and prove that it is the global maximum. Such a study is beyond the scope of this paper. } \begin{figure} \centering \includegraphics{p8.eps} \caption{The forecasted error in $p_{i\rightarrow j}$ for the fiducial stage IV lensing survey. The labels in the plots are the photo-z bins. The horizontal axis is the true redshift. The filled circles are the results including both galaxy-galaxy correlations and shear-galaxy correlations. The open squares only use information in galaxy-galaxy correlations. We only show the cases with $i\neq j$, since $p_{i\rightarrow i}$ is not independent. The horizontal coordinate is $z_i$, the middle point of the $i$-th photo-z bin, instead of the averaged true redshift $\langle z_i\rangle$. \label{fig:p8}} \end{figure} We thus believe that our photo-z self-calibration does work. It does not rely on any assumption about the underlying photo-z PDF. Furthermore, it does not rely on cosmological priors, since all cosmology-dependent quantities (e.g. the galaxy-galaxy and lensing-galaxy power spectra) are self-calibrated simultaneously. So the reconstructed $p$ is independent of uncertainties in cosmology. In the next section, we will quantify the reconstruction error and show that the proposed self-calibration is indeed powerful.
For these reasons, it can be and should be applied to ongoing and proposed weak lensing surveys such as CFHTLS\footnote{http://www.cfht.hawaii.edu/Science/CFHLS/}, DES\footnote{https://www.darkenergysurvey.org/}, LSST, JDEM and Euclid. Before quantifying the reconstruction accuracy, we want to address a fundamental limitation of this self-calibration technique. It is designed to diagnose scatters {\em between} redshift bins. It is thus completely blind to photo-z errors which do not cause such scatters. Any one-to-one mapping between photo-z and true-z preserves $C^{gg,P}_{i\neq j}=0$ and cannot be discriminated using galaxy-galaxy correlations. Some such photo-z errors can cause $C^{Gg,P}_{i<j}\neq 0$, so we still have some discriminating power left. If, however, the mapping between photo-z and true-z is monotonically increasing, then we have $C^{gg,P}_{i\neq j}=0$ and $C^{Gg,P}_{i<j}=0$, and our self-calibration technique completely lacks the capability to detect such a photo-z error. One simple example is $z^P=(1+\epsilon)z$, where $\epsilon$ is a constant. We must rely on spectroscopic redshift measurements to diagnose such errors. In other words, our method can determine only the scattering matrix of photo-z's, and is insensitive to any recalibration of the mean photo-z's. In this paper, we assume no such mean photo-z error exists. This is certainly a crucial point for further investigation. \begin{figure} \centering \includegraphics{zmean.eps} \caption{The error in the true mean redshift of each photo-z bin for the fiducial stage IV lensing survey. The filled circles are the results including both galaxy-galaxy correlations and shear-galaxy correlations. The open squares use only the information in the galaxy-galaxy correlations; the comparison shows the significant improvement brought by adding the shear-galaxy measurements. The results are plotted against the middle-point of the corresponding photo-z bin. We caution that the galaxy number density weighted true-z, $\langle z_i\rangle$, can differ significantly from the middle-point of the corresponding photo-z bin, mainly due to significant fractions of scatter into distant redshift bins. For example, for the photo-z bin $[3.5,4.0)$, $\langle z_8\rangle=2.97$, for $[3.0,3.5)$, $\langle z_7\rangle=2.7$ and for $[0.0,0.5)$, $\langle z_1\rangle=0.426$. \label{fig:zmean}} \end{figure} \subsection{Error estimation} \label{subsec:error} We derive the likelihood function and adopt the Fisher matrix formalism to estimate the capability of our self-calibration technique. The details are presented in appendices \ref{sec:appendixA} \& \ref{sec:appendixB}. We want to highlight that the error estimation here is distinctly different from that in routine exercises of cosmological parameter constraints and that in \citet{Schneider06}. In these cases, the theory predicts the ensemble average power spectra, which are then compared to the data. An inevitable consequence of any power-spectrum determination is uncertainty due to cosmic (sample) variance. But in our self-calibration, the cosmic variance does not work in this way, because the $C^R$ are fitted parameters, not theoretical predictions. What enters into the key equations \ref{eqn:gg} and \ref{eqn:Gg} is not the ensemble average of the power spectra, but the actual values in the observed cosmic volume, {\it i.e.} the sums of their ensemble averages and their cosmic variance within the observed volume.
Galaxies in the same true-z bin (but different photo-z bins) share the same cosmic volume, thus their power spectra share the same sample variance (however, see \S \ref{sec:galaxybias} for complexities), as do their cross power spectra with the matter. Such coherence has been pointed out by \citet{Pen04} and has been applied to improve the weak lensing measurement \citep{Pen04} and the primordial non-Gaussianity measurement through two-point galaxy clustering \citep{Seljak09}. Furthermore, we do not rely on a cosmological theory to predict these power spectra. Instead, our self-calibration reconstructs the actual power spectra in the observed survey volume. For this reason, the only relevant source of noise in writing down the likelihood function is the shot noise.\footnote{We also want to stress that this does not mean that the influence of cosmic variance vanishes magically in the Fisher matrix error forecast. In fact, {\it it enters the fiducial power spectra}, since the fiducial ones should be those measured in a given cosmic volume instead of the ensemble average. To carry out the Fisher matrix analysis more robustly, we would need to generate many realizations of the fiducial power spectra, do the Fisher matrix analysis for each, and weigh the error forecasts according to the probability of each realization of the fiducial power spectra to find the final answer. The good thing is that the cosmic variance of each power spectrum is usually much smaller than the ensemble average ($\ell\,\Delta\ell\, f_{\rm sky}\gg 1$), given the large sky coverage of the fiducial stage IV lensing survey. The reconstruction error in our self-calibration is not sensitive to such small fluctuations in the fiducial power spectra. Thus, we are safe to skip the full process and just use the ensemble average as the fiducial power spectra. For forecasts of surveys with much smaller sky coverage, such as CFHTLS, we may need to go through the full process. Furthermore, if we want to infer cosmology from the reconstructed power spectra, cosmic variance definitely enters. } \begin{figure} \includegraphics{zmean22.eps} \caption{The dependence of the reconstruction error on the size of the redshift bins. The filled circles are the results shown in the previous figure, adopting coarse redshift bins, while the open circles adopt finer redshift bins. Changes in redshift bin size not only change the level of noise and the strength of signal, but also the number of parameters to be fitted. Along with the complicated error propagation, we see a quite irregular dependence on the bin size. \label{fig:zmean22}} \end{figure} This point is of crucial importance for our error analysis. (1) It allows us to derive the likelihood function robustly over essentially the whole $\ell$ range. It is simply Gaussian, thanks to the central limit theorem and the stochasticity of shot noise. This is even true for the high-$\ell$ regime, where the underlying density fields are highly nonlinear and non-Gaussian, but the shot noise remains Gaussian over many independent $\ell$ modes. (2) Since we do not rely upon any theoretical model for the power spectra, we do not need a theory capable of predictions at small scales where non-linear and baryonic physics are important. (3) For these reasons, we do not need to disregard those $\ell$ measurements in the highly nonlinear regime, as \citet{Schneider06} did. The inclusion of these measurements significantly improves the $p$ reconstruction, especially for low redshift bins.
This explains much of the difference between the reconstruction errors of this paper and those of \citet{Schneider06}, using the galaxy clustering measurement alone. The high-$\ell$ limit of this analysis will ultimately depend upon other limitations such as the applicability of the weak-lensing approximation at small scales, which can in principle render the adopted fiducial power spectra unrealistic and thus the error forecast unreliable. However, in our exercise, we expect and have numerically confirmed that the contribution of high $\ell$ is highly suppressed by the shot noise (as can be seen from Fig. \ref{fig:cl}), so the reconstruction accuracy is not sensitive to the high-$\ell$ limit. Throughout this paper, the results shown are based on the choice of $100<\ell<10^{5}$. (4) It significantly simplifies the matrix inversion and improves the numerical accuracy. For $N_z\sim 10$ and $N_\ell\sim 100$, the Fisher matrix to invert is several thousand by several thousand. However, a dominant portion of this Fisher matrix is block diagonal, due to the shot-noise nature of the error sources. This allows us to significantly reduce the work on matrix inversion. The details are explained in the appendix. \begin{figure} \includegraphics{r.ps} \caption{The cross correlation coefficient between errors in $p$. To demonstrate the fine structure, we choose the fine redshift bins ($N_z=22$ redshift bins in total). Both the horizontal and vertical axes are $p_{i\rightarrow j}$ ($i\neq j$), which run in the order $(i,j)=(2,1),(3,1),\ldots,(N_z,1),(1,2),(3,2)\ldots, (N_z,2),\ldots$. There are 462 $p$s in total. Errors in $p_{i\rightarrow j}$ and $p_{k\rightarrow j}$ are correlated, since they could cause some $C^{Gg,P}\neq 0$. Errors in $p_{i\rightarrow j}$ and $p_{i\rightarrow k}$ are correlated too, since they cause $C^{gg,P}_{jk}\neq 0$. More fine structure can be found by zooming in on this figure. \label{fig:r}} \end{figure} We show the error forecast in Figs. \ref{fig:p8}, \ref{fig:zmean}, \ref{fig:zmean22} \& \ref{fig:r}. For most bins at $z<2$, $p$ can be reconstructed to an accuracy $<0.01$, for either coarse or fine bins. The improvement from adding the shear-galaxy measurement is often better than $30\%$, up to a factor of a few. We can also compress the errors in $p_{i\rightarrow j}$ into a single number $\sigma_{\langle z\rangle}$, the statistical error in the mean true redshift of each photo-z bin. $\sigma_{\langle z\rangle}$ by no means captures all the information about the reconstruction error, but it is a convenient reference. The result for $\sigma_{\langle z\rangle}$ for coarse bins is shown in Fig. \ref{fig:zmean} and that for fine bins is shown in Fig. \ref{fig:zmean22}. For the coarse bins, it can reach $O(10^{-3})$ for $z<2$. The improvement from adding the lensing-galaxy measurement is a factor of $\sim 10$ at $z<2$, mainly through the improvement in constraining scatters from high redshift bins. $\sigma_{\langle z\rangle}$ for fine bins is larger (Fig. \ref{fig:zmean22}). However, these errors are tightly anti-correlated, as can be inferred from Figs. \ref{fig:zmean22} and \ref{fig:r}. This is the reason we see a big improvement when choosing a bigger bin size. We present an order-of-magnitude estimate to understand these reconstruction errors. For example, many $p_{i\rightarrow j}$ can be determined to an accuracy of $O(10^{-4})$. We take ${p_{2\rightarrow 1}}$ as an example. The scatter $2\rightarrow 1$ causes $C^{gg,P}_{12}\neq 0$.
Ignoring other scatters, the threshold of $p_{2\rightarrow 1}$ is roughly set when the accumulated signal-to-noise of the $C^{gg,P}_{12}$ measurement is 1, \begin{equation} \left(\frac{S}{N}\right)_{12}^2\sim \sum_{\ell} \left(\frac{C^{gg,P}_{12}}{\sigma^{gg}_{12}}\right)^2=p_{2\rightarrow 1}^2\sum_{\ell} \left(\frac{C^{gg,R}_{22}}{\sigma^{gg}_{12}}\right)^2=1\ . \end{equation} Here, $\sigma$ is the associated shot noise power spectrum. We find that the threshold $p_{2\rightarrow 1}$ inferred from the above equation is even smaller than $\sigma_{p_{2\rightarrow 1}}$, the statistical error in $p_{2\rightarrow 1}$ (Fig. \ref{fig:p8}). This is indeed expected. Due to simplifications made in the derivation, mainly the neglect of error propagation from other $p_{i\rightarrow j}$ and $C_{ij}$, the threshold of $p_{2\rightarrow 1}$ obtained from the above approximation is certainly a lower limit on $\sigma_{p_{2\rightarrow 1}}$. From similar arguments, we also expect that the statistical errors of $p_{i\rightarrow j}$ are large for the high redshift bins (e.g. $i=7,8$), because the number of galaxies in these high redshift bins is small and thus the shot noise is large. However, as explained earlier, $C^{gg,P}$ is not the only source of information for the $p$ reconstruction. $C^{Gg,P}$ can also play an important role. For example, the reconstruction of $p_{1\rightarrow 8}$ can reach an accuracy of $10^{-4}$. The reason is that the scatter $1\rightarrow 8$ causes $C^{Gg,P}_{1j}\neq 0$. The combined G-g measurements have $S/N>1$ even for $p_{1\rightarrow 8}=10^{-4}$. The $C^{gg,P}_{18}$ measurement also contributes, but since the number of galaxies in the $8$-th bin is only 1\% of the total, the associated shot noise is large. So its contribution is overwhelmed by that from $C^{Gg,P}_{1j}$ ($j<8$). This explains the factor of 10 improvement in the $p_{1\rightarrow 8}$ reconstruction when adding the G-g measurements. Finally, we caution that, although we are able to qualitatively explain some results of Fig. \ref{fig:p8}, the error propagation is complicated and the above estimate only serves as a convenient tool to understand Fig. \ref{fig:p8}. $\delta p_{i\rightarrow j}$, the errors in $p_{i\rightarrow j}$, are correlated, and the correlations show rich structures. To better demonstrate these features, we adopt the fine redshift bins, $N_z=22$ in total. We define the cross correlation coefficient between $\delta p$ as $r_{i_1j_1;i_2j_2}\equiv \langle \delta p_{i_1\rightarrow j_1}\delta p_{i_2\rightarrow j_2}\rangle/\sqrt{\langle \delta p_{i_1\rightarrow j_1}^2\rangle\langle \delta p_{i_2\rightarrow j_2}^2\rangle}$. The resulting $r$ is shown in Fig. \ref{fig:r}. Strong positive and negative correlations exist between the errors of many scatters into the same photo-z bins ($p_{i\rightarrow j}$ and $p_{k\rightarrow j}$, regions around the diagonal of Fig. \ref{fig:r}). These scatters are coupled since both induce $C^{Gg}_{j<m}\neq 0$ ($i>m$ and $k>m$) or $C^{Gg}_{m<j}\neq 0$ ($m>i$ and $m>k$). Scatters $i\rightarrow j$ and $i\rightarrow k$ are coupled, too, since they contribute to $C^{gg,P}_{jk}\neq 0$. This explains some strong (both positive and negative) correlations of the off-diagonal elements.
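Returning briefly to the threshold estimate above, the following sketch evaluates the accumulated signal-to-noise for a toy configuration. The power-law fiducial spectrum, the shot-noise levels, the sky fraction and the shot-noise-dominated Gaussian error formula are all assumptions made for illustration; they are not the inputs of our actual forecast.
\begin{verbatim}
import numpy as np

fsky = 0.5                        # sky fraction (assumed)
nbar1, nbar2 = 3e8, 3e8           # galaxies per steradian in photo-z bins 1, 2 (assumed)
ell_edges = np.logspace(2, 5, 31) # 30 logarithmic bands over 100 < l < 1e5
ell = 0.5 * (ell_edges[1:] + ell_edges[:-1])
dell = np.diff(ell_edges)

Cgg22 = 1e-5 * (ell / 100.0) ** -1.0   # toy C^{gg,R}_{22}(l)
N1, N2 = 1.0 / nbar1, 1.0 / nbar2      # shot-noise power spectra

# Shot-noise-dominated Gaussian error on C^{gg,P}_{12} per l band (assumed form)
sigma12 = np.sqrt(N1 * N2 / ((2 * ell + 1) * dell * fsky))

# (S/N)^2 = p^2 * sum_l (C22 / sigma12)^2 = 1  ->  threshold value of p_{2->1}
p_threshold = 1.0 / np.sqrt(np.sum((Cgg22 / sigma12) ** 2))
print("threshold p_{2->1} ~ %.1e" % p_threshold)
\end{verbatim}
As stated above, such a threshold is only a lower limit on the actual statistical error, since it neglects the propagation of uncertainties from the other fitted parameters.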
\section{Extra sources of statistical errors} \label{sec:statistical} Our self-calibration technique does not rely on priors on cosmology or on the photo-z distribution. In this sense, it is robust. However, there are still several sources of systematic error. Some of them, if handled properly, can be rendered into statistical errors, without resorting to external information, and will not bias our reconstruction of $p_{i\rightarrow j}$. We will discuss them in this section. The remainder cannot be incorporated into the self-calibration without strong priors and will be discussed in \S \ref{sec:bias}. We find that the galaxy intrinsic alignment (\S \ref{sec:IA}), the magnification and size bias (\S \ref{sec:magnification}) and the intrinsic cross correlation between different galaxy bins (\S \ref{sec:Limber}) can in principle be incorporated into our self-calibration technique and thus do not bias the $p$ reconstruction. Furthermore, we argue that the inclusion of these complexities is unlikely to significantly degrade the accuracy of our self-calibration technique. \subsection{The intrinsic alignment} \label{sec:IA} Surprisingly, galaxy intrinsic alignments do not bias the $p_{i\rightarrow j}$ reconstructed through our self-calibration technique, although they definitely bias the inferred $C^{Gg}$ values. In the presence of the intrinsic alignment $I$, Eq. \ref{eqn:Gg} becomes \begin{eqnarray} C^{Gg,P}_{ij}&=&\sum_{k>m}p_{k\rightarrow i}p_{m\rightarrow j}C^{Gg,R}_{km}\nonumber \\ &+&\sum_k p_{k\rightarrow i}p_{k\rightarrow j}\left[C^{Ig,R}_{kk}+C^{Gg,R}_{kk}\right]\ . \end{eqnarray} Since we do not make any assumption on $C^{Gg,R}_{kk}$, our self-calibration technique automatically takes the intrinsic alignment into account and measures the sum of $C^{Gg,R}_{kk}$ and $C^{Ig,R}_{kk}$. Clearly, it does not bias the reconstruction of $p$. On the other hand, it certainly affects the statistical accuracy of the reconstruction of $p$. Unless $|C^{Ig,R}_{ii}|\gg C^{Gg,R}_{ii}$, its existence does not affect the error forecast significantly. If the intrinsic alignment $C^{Ig,R}_{kk}$ depends upon galaxy properties, it is possible that this term differs between outlier galaxies and those correctly assigned to photo-z bin $k$. In this case a bias in $p_{k\rightarrow i}$ may result. This behavior is similar to the systematics from variation of $b_g$ that are discussed in more detail in \S\ref{sec:galaxybias}. \subsection{Magnification and size bias} \label{sec:magnification} In reality, the measured galaxy distribution is the one lensed by the foreground matter distribution. The measured galaxy over-density then has an extra contribution from lensing. Besides the well known magnification bias due to the lensing magnification of the galaxy flux, there is also a size bias due to the lensing magnification of the galaxy size \citep{Jain02,Schmidt09}. Both can be incorporated into a function $g(F,A)$, determined by the flux $F$ and size $A$ distribution of galaxies in the given redshift bin. The lensed galaxy over-density then takes the form $\delta_g\rightarrow \delta_g+g(F,A)\kappa$. The existence of this extra term induces non-vanishing $C^{gg,R}_{i\neq j}$ and $C^{Gg,R}_{i<j}$. If not taken into account, it will certainly bias the $p_{i\rightarrow j}$ reconstruction. The good thing is that, at least in principle, the same weak lensing surveys contain the right information to correct for this effect. Given a lensing survey, we are able to split galaxies into bins of flux and size. Since the prefactor $g$ is determined by the flux and size distribution and is a measurable quantity, we are able to separate its effect from others.
Alternatively, we can design an estimator $W(F,A)$ such that $\langle gW\rangle=0$, averaged over all flux and size bins. The price to pay is the statistical accuracy of the correlation measurement. For galaxy clustering, the shot noise increases by a factor $\langle W^2\rangle\langle b_g\rangle^2/\langle Wb_g\rangle^2$ with respect to the clustering signal. Here, $b_g(F,A)$ is the galaxy bias. Robust modeling of this factor requires information on galaxies to high redshifts and faint luminosities. Furthermore, we need to evaluate the effect of measurement errors on the galaxy flux and size. None of these exercises are trivial, so we postpone such studies to future work. However, we argue qualitatively that the degradation in statistical accuracy is unlikely to be dramatic. Since $b_g$ is always positive and $g$ changes sign from the bright end down to the faint end of the galaxy luminosity function, we do not expect a large loss of statistical accuracy from such weighting. However, the success of the self-calibration will depend upon the accuracy of methods to estimate $g(F,A)$ (see also \citealt{Bernstein09b}). \subsection{The intrinsic galaxy cross correlation between non-overlapping redshift bins} \label{sec:Limber} Under the Limber approximation, the galaxy cross correlation between non-overlapping redshift bins vanishes. However, the Limber approximation is not $100\%$ accurate. In reality, there is indeed a non-vanishing intrinsic galaxy cross correlation $C^{gg,R}_{i\neq j}\neq 0$, especially at large scales. As correctly pointed out by \citet{Schneider06}, this intrinsic cross correlation biases the reconstruction of $p$. Eq. \ref{eqn:gg} should now be replaced by \begin{eqnarray} \label{eqn:Limber} C^{gg,P}_{ij}&=&\sum_{k}p_{k\rightarrow i}p_{k\rightarrow j} C^{gg,R}_{kk}+\sum_{k\neq m}p_{k\rightarrow i}p_{m\rightarrow j} C^{gg,R}_{km}\\ &\simeq & \sum_{k}p_{k\rightarrow i}p_{k\rightarrow j} C^{gg,R}_{kk}+\sum_{k-m=\pm 1}p_{k\rightarrow i}p_{m\rightarrow j} C^{gg,R}_{km} \ . \nonumber \end{eqnarray} In the last expression, we only keep the correlation between two adjacent redshift bins and neglect the correlations between non-adjacent redshift bins ($C^{gg,R}_{|k-m|>1}$). This approximation should be sufficiently accurate in practical applications. Thus, even if no photo-z error is present, there is still an intrinsic (real) correlation between two different (especially adjacent) redshift bins. If not accounted for, these non-zero $C^{gg,R}_{k\neq m}$ will be mis-interpreted as a photo-z error and thus bias the reconstruction of $p$. We first attempt to quantify $C^{gg,R}_{k\neq m}$. Since we only need to evaluate $C^{gg}_{k\neq m}$ at large scales where it is relevant, we can adopt linear theory and calculate it through the following well-known formula \begin{equation} C^{gg,R}_{ij}=\int_0^{\infty} \Delta^2_m(k,z=0)\frac{dk}{k}Q_i(k,l)Q_j(k,l)\ , \end{equation} where \begin{eqnarray} Q_i(k,l)=\frac{\int_{z_i-\Delta z_i/2}^{z_i+\Delta z_i/2} D(z)b_g(z)j_l(k\chi)n(z)dz}{\int_{z_i-\Delta z_i/2}^{z_i+\Delta z_i/2} n(z)dz}\ . \end{eqnarray} Unfortunately, since $Q_iQ_j$ ($i\neq j$) oscillates around zero and the positive and negative contributions to the integral largely cancel, the numerical integration is very sensitive to numerical errors and is thus highly unstable.
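To make the source of this instability explicit, the following sketch evaluates the oscillatory factor $Q_iQ_j$ for two disjoint top-hat shells in comoving distance, with $D(z)b_g(z)n(z)$ set to a constant; the bin distances are assumed toy values chosen purely for illustration.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

ell = 100
chi_1 = np.linspace(1000.0, 1800.0, 200)   # comoving distances of bin 1 [Mpc/h] (assumed)
chi_2 = np.linspace(1900.0, 2700.0, 200)   # comoving distances of bin 2 [Mpc/h] (assumed)
k = np.logspace(-3.0, 0.0, 400)            # h/Mpc

# Window-averaged Bessel kernels Q_i(k, l), with D(z) b_g(z) n(z) taken constant
Q1 = np.array([spherical_jn(ell, kk * chi_1).mean() for kk in k])
Q2 = np.array([spherical_jn(ell, kk * chi_2).mean() for kk in k])

integrand = Q1 * Q2   # the oscillatory factor of the dk/k integral
print("sum of |Q1*Q2|:", np.abs(integrand).sum())
print("net sum       :", integrand.sum())   # much smaller: the contributions nearly cancel
\end{verbatim}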
Nevertheless, through Monte Carlo numerical integration, we believe that, at $\ell\geq 100$, the cross power spectrum between the redshift bins $[0.0,0.5)$ and $[0.5,1.0)$ falls below $\sim 1\%$ of the geometric mean of the corresponding two auto correlation power spectra, confirming the findings of \citet{Schneider06}. A more accurate evaluation of the intrinsic cross correlation may be performed in real space, where we can avoid the highly oscillating integrand encountered in multipole space. This issue will be further investigated. The bias induced is roughly $\delta p_{k\rightarrow m}\sim C^{gg,R}_{km}/C^{gg,R}_{kk}$ or $\delta p_{m\rightarrow k}\sim C^{gg,R}_{km}/C^{gg,R}_{mm}$. Depending on the low-$\ell$ cut, this bias may become comparable to the statistical accuracy of the self-calibration method. However, there are several possible ways to eliminate or reduce this bias. One way is to start with the last approximation of Eq. \ref{eqn:Limber}, treat $C^{gg,R}_{k-m=\pm 1}$ as free parameters and fit them simultaneously with the other parameters. This can eliminate virtually all the associated bias, at the expense of larger statistical errors. We can easily figure out that there are still many more measurements than unknowns, so this remedy is doable. Furthermore, the degeneracy between $C^{gg,R}_{k-m=\pm 1}$ and $p$ is weak. For example, $C^{gg,P}_{i\neq j}$ induced by the scattering $p_{i\rightarrow j}$ has the property $ C^{gg,P}_{i\neq j}/C^{gg,R}_{ii}\propto \ell^0$. On the other hand, the intrinsic cross correlation induced by the deviation from the Limber approximation decreases quickly with $\ell$ and thus $C^{gg,R}_{i\neq j}/C^{gg,R}_{ii}$ decreases quickly with $\ell$. These distinctive behaviors help to distinguish the intrinsic cross correlation from the one induced by photo-z errors. The characteristic behavior of $C^{gg,R}_{i\neq j}/C^{gg,R}_{ii}$ allows us to adopt priors which are weak, yet still helpful to discriminate between $C^{gg,R}_{i\neq j}/C^{gg,R}_{ii}$ and $p$. For example, we can set $C^{gg,R}_{i\neq j}=0$ when $\ell>\ell_{\rm Limber}$ and thus reduce the number of extra unknowns. Alternatively, we can model $C^{gg,R}_{i\neq j}/C^{gg,R}_{ii}$ as a power law decreasing with $\ell$. The inclusion of the lensing-galaxy cross correlation measurement also helps to break the degeneracy between $C^{gg,R}_{i\neq j}\neq 0$ and $p$. The photo-z scatters induce both non-zero $C^{gg,P}_{k\neq m}$ and $C^{Gg}_{i<j}$. On the other hand, the failure of the Limber approximation does not cause $C^{Gg,R}_{i<j}\neq 0$, since the lensing kernel vanishes. We then conclude that the intrinsic galaxy cross correlation between non-overlapping redshift bins may be non-negligible for stage IV lensing surveys. However, our self-calibration technique has the capability to take this complexity into account. Further investigation is required to quantify its influence on the self-calibration. \section{Possible systematics} \label{sec:bias} There are some error sources which cannot be incorporated into our self-calibration technique without strong priors or external information. Thus they will bias the reconstructed photo-z scatters. We discuss the influence of the {\it galaxy distribution bias} in \S \ref{sec:galaxybias} and the {\it multiplicative error bias} in cosmic shear measurement in \S \ref{sec:f}. \begin{figure} \includegraphics{b8.eps} \caption{The systematic error induced in $p$ by the galaxy distribution bias $\tilde{b}$. Circles: statistical errors.
Triangles: systematic errors (the absolute values). These are for LSST. For DES and CFHTLS, the systematic errors induced by the relative galaxy bias are sub-dominant and negligible. The toy model adopted for $\tilde{b}$ takes the form $\tilde{b}_{i\rightarrow j}=1+s(z_i-z_j)$ and we adopt $|s|=0.1$ for the result shown here. The induced bias scales as $s$. \label{fig:bias}} \end{figure} \subsection{The galaxy distribution bias} \label{sec:galaxybias} A crucial assumption in the existing self-calibration technique is that those galaxies scattering out of the true redshift bin have the same spatial distribution as those remaining in the true redshift bin. This implicit assumption can be inferred from Eq. \ref{eqn:g}. By straightforward math, we can find that $\delta_{k}^{\Sigma, R}$ in this equation is actually \begin{equation} \delta_{k,i}^{\Sigma, R}=\frac{\int_k n_i(z) \delta_{g,i}(z)dz}{\int_k n_i(z)dz}\ , \end{equation} where the integral is over the $k$-th redshift bin, $n_i(z)=\int_i p(z|z^P)n(z^P)dz^P$ is the true redshift distribution of the $i$-th photo-z bin, and $\delta_{g,i}$ is the overdensity of galaxies in this photo-z bin. In Eq. \ref{eqn:g}, there is an implicit approximation $\delta_{k,i}^{\Sigma, R}=\delta_{k,k}^{\Sigma, R}=\delta_{k}^{\Sigma, R}$. This assumption is likely problematic. Those galaxies scattering from true redshift bin $k$ to photo-z bin $i$ could have either a different redshift distribution ($n(z)$) or different clustering ($b_g$), or both, compared to galaxies correctly identified in photo-z bin $k$. Furthermore, the difference in $n(z)$ means that these subcategories of galaxies do not sample the cosmic volume with identical weighting (despite sharing the same true-z bin), so they do not share exactly the same cosmic variance and thus do not have an identical clustering pattern. We call all these complexities the {\it galaxy distribution bias}. \begin{figure} \includegraphics{zbias.eps} \caption{The systematic error induced in the mean redshift by the galaxy relative bias $\tilde{b}$. Circles: statistical errors. Triangles: systematic errors. As a reminder, we adopt $\tilde{b}_{i\rightarrow j}=1+s(z_i-z_j)$ and $|s|=0.1$. \label{fig:zmeanbias}} \end{figure} This galaxy distribution bias certainly biases the $p_{i\rightarrow j}$ reconstruction. For example, if a subcategory of galaxies that did not cluster at all were to scatter, the galaxy-galaxy cross correlation would not detect them, resulting in an incorrect leakage reconstruction. Interestingly, even for this extreme case of galaxy distribution bias, the galaxy-lensing correlation brings hope. For this subcategory of galaxies that scattered but did not cluster at all, other galaxies {\it apparently} behind them may still be able to lens them and cause a detectable spurious foreground shear-background galaxy correlation. This example further demonstrates the gain from adding the galaxy-lensing cross correlation measurements in the self-calibration. Unfortunately, even with the aid of the galaxy-lensing correlation measurement, the self-calibration still fails if no priors are adopted. The galaxy distribution bias has not only a deterministic component, but also a stochastic component, i.e. the noise induced by the different cosmic (sample) variance realizations for different photo-z bins. We show that, even if we can neglect the stochasticity, the degrees of freedom in the galaxy distribution bias kill the self-calibration.
In this limit, the galaxy distribution bias can be completely described by the relative bias parameter $\tilde{b}_{i\rightarrow j}$, namely the ratio of the bias of those scattered into the $j$-th redshift bin to that of those remaining in the $i$-th redshift bin, weighted by the difference in $n(z)$. In the presence of the deterministic galaxy distribution bias, Eq. \ref{eqn:gg} and Eq. \ref{eqn:Gg} become \begin{equation} \label{eqn:ggb} C^{gg,P}_{ij}\simeq \sum_{k}p_{k\rightarrow i}p_{k\rightarrow j} \tilde{b}_{k\rightarrow i} \tilde{b}_{k\rightarrow j}C^{gg}_{kk}\ , \end{equation} \begin{equation} \label{eqn:Ggb} C^{Gg,P}_{ij}=\sum_{k\geq m} p_{k\rightarrow i}p_{m\rightarrow j} \tilde{b}_{m\rightarrow j}C^{Gg}_{km}\ . \end{equation} First of all, we notice a degeneracy in Eq. \ref{eqn:ggb}, of the form $\tilde{b}_{i\rightarrow j}\times p_{i\rightarrow j}$ ($i\neq j$). The argument that helps to break the scaling invariance of Eq. \ref{eqn:scaling} does not apply here, simply because many more free parameters are involved. The galaxy-lensing correlation measurements do help, since the scaling invariance of Eq. \ref{eqn:ggb}, $p_{i\rightarrow j}\rightarrow \tilde{b}_{i\rightarrow j}\times p_{i\rightarrow j}$ ($i\neq j$), does not hold in Eq. \ref{eqn:Ggb}. Unfortunately, in general, the $\tilde{b}(\ell)$ are scale dependent and there are $N_{\ell}N_z(N_z-1)$ of them, which nearly triples the number of unknown parameters, making the number of unknowns larger than the number of independent measurements and thus ruining the self-calibration. In reality, from the origins of the galaxy distribution bias, we expect that it is scale dependent and unlikely to be deterministic. Thus, we are not able to render it into a statistical error. Instead, we will live with it and quantify the induced bias in the reconstructed $p$. \begin{figure} \includegraphics{zbias22.eps} \caption{The dependence of the systematic error induced in the mean redshift by the galaxy relative bias $\tilde{b}$ on the size of the redshift bins. Filled circles represent statistical errors and open triangles denote systematic errors. As a reminder, we adopt $\tilde{b}_{i\rightarrow j}=1+s(z_i-z_j)$ and $|s|=0.1$. \label{fig:zmeanbias22}} \end{figure} If we were to neglect this $\tilde{b}$ (namely by assuming $\tilde{b}=1$), the reconstructed $p$ would have a bias $\delta p\sim (\tilde{b}-1)p$. To robustly quantify the induced bias in $p$, we need a robust measurement or modeling of $\tilde{b}$, which we lack. To proceed, we adopt a toy model, $\tilde{b}_{i\rightarrow j}=1+s(z_i-z_j)$. The details of this calculation are shown in the appendix. The resulting bias in $p$ scales as $s$. For the case of $|s|=0.1$, the results are shown in Figs. \ref{fig:bias}, \ref{fig:zmeanbias} \& \ref{fig:zmeanbias22}. We find that the most significant biases in $p$ indeed satisfy the relation $\delta p\simeq (\tilde{b}-1)p$. For those $p$ whose value is small, the dominant bias is induced by the propagation from other parameters and thus does not follow this relation. Depending on the actual amplitude of this galaxy distribution bias, this may be the dominant systematic error. It may also be non-negligible, or even dominant, compared to the statistical errors in the reconstruction (Figs. \ref{fig:bias}, \ref{fig:zmeanbias} \& \ref{fig:zmeanbias22}).
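The leading-order effect of this toy model can be sketched in a few lines; the bin centers and scatter fractions below are assumed values used purely for illustration, while the exact numbers in the figures come from the full error propagation described in the appendix.
\begin{verbatim}
import numpy as np

# Toy relative galaxy distribution bias b~_{i->j} = 1 + s (z_i - z_j) and the
# leading induced bias delta p ~ (b~ - 1) p when b~ is (wrongly) assumed to be 1.
s = 0.1
z_mid = np.array([0.25, 0.75, 1.25, 1.75])                # photo-z bin centers (assumed)
p_example = {(2, 1): 0.05, (3, 1): 0.01, (4, 2): 0.02}    # assumed scatter fractions

for (i, j), p in p_example.items():
    b_tilde = 1.0 + s * (z_mid[i - 1] - z_mid[j - 1])
    delta_p = (b_tilde - 1.0) * p
    print(f"p_{i}->{j} = {p:.3f}: b_tilde = {b_tilde:.3f}, leading bias ~ {delta_p:+.4f}")
\end{verbatim}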
There are possible ways to reduce it. By choosing a finer bin size, we can reduce the galaxy distribution bias caused by the difference in $n(z)$, at the expense of more $p_{i\rightarrow j}$ and $\tilde b_{i \rightarrow j}$ parameters to constrain. With high-quality imaging or photometry we could further split galaxies into sub-samples of morphological or spectral types to reduce the difference in clustering strength. If eventually we can reach $|s|<0.1$, the galaxy distribution bias will not be catastrophic, but still significant (Fig. \ref{fig:bias}). We caution that this galaxy distribution bias also exists in the calibration technique based on cross correlations between photo-z and spec-z samples \citep{Newman08,Bernstein09b}. In principle, direct spec-z sampling of the photo-z galaxies \citep{Bernstein09b} allows for a direct measurement of the galaxy distribution bias, including its stochasticity. Since the galaxy distribution bias has its own sample variance, the spec-z sampling must be sufficiently wide in sky coverage, deep in redshift, and reach high completeness. These are crucial issues for further investigation. \subsection{The multiplicative error bias} \label{sec:f} Due to incomplete PSF correction, shear measurements can have multiplicative errors and additive errors. The additive errors do not bias the self-calibration results, since they do not correlate with galaxies. However, the multiplicative errors, which turn $\gamma$ into $(1+f)\gamma$, can. If $f$ is the same for those galaxies whose photo-z remains in the true-z bin and those galaxies that scatter out of the true-z bin, it does not induce a bias in the $p$ reconstruction. However, in principle, these galaxies could have different multiplicative errors. The multiplicative error could, for example, depend on the size of galaxies. If the photo-z error depends on some intrinsic properties of galaxies which correlate with the galaxy size, then the multiplicative error would vary across different photo-z samples with the same true redshift. In this case, Eqs. \ref{eqn:Gg} and \ref{eqn:Ggb} no longer hold. Eq. \ref{eqn:Ggb} should be replaced by \begin{equation} C^{Gg,P}_{ij}=\sum_{k\geq m} p_{k\rightarrow i}p_{m\rightarrow j} (1+\Delta f_{k\rightarrow i}) \tilde{b}_{m\rightarrow j}\tilde{C}^{Gg,R}_{km}\ . \end{equation} Here, the parameter $\Delta f_{k\rightarrow i}\equiv (1+f_{k\rightarrow i})/(1+f_{k\rightarrow k})-1\simeq f_{k\rightarrow i}-f_{k\rightarrow k}$ describes the relative difference in the multiplicative errors of the two galaxy samples in the $k$-th true redshift bin (one scatters to the $i$-th photo-z bin and the other remains in the $k$-th photo-z bin), and $\tilde{C}^{Gg,R}_{km}=(1+f_{k\rightarrow k})C^{Gg,R}_{km}$. Clearly, if $\Delta f=0$, it does not bias the $p$ reconstruction, since we just need to redefine $C^{Gg,R}$. If $\Delta f\neq 0$, the bias induced in $p$ is $\delta p \sim \Delta f\, p$. Current shape measurement algorithms control $f$ to $\approx 1\%$ levels, with potentially larger redshift or size dependence \citep{STEP2}; but cosmic-shear analysis of future large surveys will require $|f| \lesssim 0.1\%$ if induced systematics are to be subdominant to statistical errors \citep{Huterer06, Amara08}. Anticipating future progress in shape measurement errors, we can expect an induced error $\lesssim 0.001\,p$, which is sub-dominant to the one induced by the galaxy distribution bias and is thus likely negligible.
However, if the shape measurement errors fail to reach the required accuracy, the bias induced by the relative multiplicative error must be taken into account carefully. \section{Dependence on the fiducial model} \label{sec:fiducial} The error forecast depends on the fiducial model we adopt, including the galaxy properties and the survey specifications. A thorough analysis of all uncertainties in the fiducial model is beyond the scope of the current paper. Instead, we will present brief discussions of several key issues. \subsection{Dependence on the galaxy clustering properties} In the above analysis, we have made a number of simplifications. (1) We have adopted a scale-independent and redshift-independent galaxy bias for the error forecast. In reality, the galaxy bias is both redshift and scale dependent. As we have seen, the reconstruction does not require assumptions on the actual cosmology or clustering properties, so a variable bias is much like changing the background cosmology, which in principle is independent of the inferred scattering. (2) All the power spectra in Eqs. \ref{eqn:gg}, \ref{eqn:Gg}, etc. are the ones in the observed cosmic volume. Due to cosmic variance, they can differ from the ensemble averages that we adopt for the fiducial power spectra. As explained before, these uncertainties also affect the error forecast. Eventually we will apply this self-calibration technique to real data and thus will completely avoid the ambiguity in the fiducial model. Here we will address the impact of a scale-dependent bias. As we have mentioned in \S \ref{sec:calibration}, the self-calibration with only galaxy-galaxy clustering relies heavily on the shape differences between the different $C^{gg}_{ii}(\ell)$. If the 3D galaxy clustering is close to a power law over a large range of scales, with a redshift-independent power index, the resulting $C^{gg}_{ii}(\ell)$ are close to self-similar and thus the self-calibration with only galaxy-galaxy clustering will degrade significantly. The question is, can the {\it full} self-calibration, with the aid of the galaxy-lensing cross correlation measurement, avoid this potential degradation? \begin{figure} \includegraphics{powerlaw.eps} \caption{The dependence of the self-calibration performance on the shape of the galaxy power spectrum. We adopt a toy model of galaxy clustering, characterized as a power law with a break at $k=k_c$. The shown value of $k_c$ is in units of $h/$Mpc. The ad-hoc galaxy power spectrum is closer to a power law for larger $k_c$. As expected, when the galaxy power spectrum is closer to a power law (with redshift-independent power index), the self-calibration based on galaxy clustering alone degrades and eventually blows up, due to a more severe degeneracy between up and down photo-z scatters. On the other hand, the full self-calibration virtually avoids this problem, since it heavily relies on the intrinsic lensing geometry dependence to break this degeneracy. \label{fig:powerlaw}} \end{figure} To investigate this issue, we adopt an ad-hoc model for galaxy clustering, in which the galaxy bias is scale dependent such that the 3D galaxy power spectrum (variance) takes the form \begin{equation} \Delta_g^2(k,z)=10^{1.25}k^{1.85}(1+z)^{-2}\quad {\rm when}\quad k<k_c\ , \end{equation} and $\Delta_g^2(k,z)=\Delta_g^2(k_c,z)$ when $k\geq k_c$.
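A minimal sketch of this toy spectrum (our own illustration; the break scale $k_c$ is a free input) is:
\begin{verbatim}
import numpy as np

def delta2_g(k, z, k_c):
    """Toy 3D galaxy power spectrum (variance): a power law with a break at k = k_c.
    Delta^2_g(k, z) = 10^{1.25} k^{1.85} (1+z)^{-2} for k < k_c, constant above k_c.
    k and k_c are in h/Mpc."""
    amp = 10.0 ** 1.25 * (1.0 + z) ** -2
    return amp * np.minimum(np.asarray(k, dtype=float), k_c) ** 1.85

# The spectrum flattens above the break; as k_c increases the model approaches
# a strict power law (k_c -> infinity recovers it exactly).
k = np.logspace(-2, 1, 7)
print(delta2_g(k, z=0.0, k_c=1.0))
print(delta2_g(k, z=0.0, k_c=6.5))
\end{verbatim}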
If $k_c>1\,h/$Mpc, at $z=0$ and $0.1\,h/$Mpc $\lesssim k<1\,h/$Mpc, it is close to the matter power spectrum (variance) $\Delta_m^2(k,z=0)$ and thus close to the galaxy clustering with $b_g=1$. But it shows significant deviations from the matter power spectrum at other $k$ and other redshifts. In the limit $k_c\rightarrow \infty$, this galaxy power spectrum becomes a strict power law and we expect that the self-calibration based on galaxy clustering alone fails. Numerically, we find that when $k_c\gtrsim 7\,h/$Mpc, the Fisher matrix inversion based on galaxy clustering alone blows up, indicating its failure. Fig. \ref{fig:powerlaw} shows the degradation when increasing $k_c$. Despite the better galaxy clustering measurement due to the stronger clustering strength, the self-calibration accuracy degrades, since the galaxy power spectrum is closer to a strict power law and the degeneracy between up and down photo-z scatters becomes more severe. We expect the full self-calibration to be basically free of this problem, since it mainly relies on the lensing geometry dependence to break the degeneracy between up and down scatters---namely, a lens can only lens a galaxy behind it. This is indeed what we find numerically. To demonstrate this point, we assume the galaxy bias with respect to the matter density to be deterministic, so that the 3D matter-galaxy cross correlation power spectrum (variance) is given by \begin{equation} \Delta^2_{mg}(k,z)=\sqrt{\Delta^2_g(k,z)\times \Delta^2_m(k,z)}\ . \end{equation} This quantity determines the galaxy-galaxy lensing power spectrum. The accuracy of the full self-calibration is shown in Fig. \ref{fig:powerlaw}. The reconstruction accuracy only degrades slightly even when the galaxy power spectrum is very close to a strict power law (e.g. $k_c=6.5\,h/$Mpc). This extreme example confirms our expectation that the scale dependence of the galaxy bias is unlikely to alter the major conclusions of this paper, namely the feasibility of our self-calibration technique. \subsection{Dependence on other specifications} The scaling of the $p_{i\rightarrow j}$ reconstruction accuracy with the fiducial quantities can be roughly understood as follows. The observed density-density correlations scale as $C\propto p^2 b_g^2 \Delta^2_m$ and the shear-density correlations as $C\propto p^2 b_g \Delta^2_m$, where $\Delta^2_m$ is the matter power spectrum (variance). These are the signals. For the noise in the correlation measurements, we have shown that the shot noises in the shear and galaxy number density measurements are the only relevant ones, which scale as $\gamma_{\rm rms}(f_{\rm sky}\bar n_g^2)^{-1/2}$ and $(f_{\rm sky}\bar n_g^2)^{-1/2}$ respectively. Then, if relying on the galaxy-galaxy lensing measurement alone, the reconstruction error $\sigma_p\propto \gamma_{\rm rms}^{1}$. In combination with the galaxy-galaxy clustering measurement, the dependence becomes weaker and we expect that $\sigma_p\propto \gamma_{\rm rms}^{e}$, where $e\in (0,1)$. If we adopt a fiducial value $\gamma_{\rm rms}=0.24$ instead of $0.2$, the reconstruction accuracy will degrade by a factor of $<20\%$. For a fixed galaxy distribution, matter power spectrum $\Delta^2_m$, and $p_{i\rightarrow j}$, following a similar argument to the one above, we find that \begin{equation} \sigma_p\propto f_{\rm sky}^{-1/2}\bar{n}_g^{-1}b_g^{-d}\gamma_{\rm rms}^{e}\ . \end{equation} Here, the bias dependence $d\in (1,2)$ and, as a reminder, $e\in (0,1)$. It is interesting to quantify the performance of the self-calibration technique for surveys like CFHTLS and DES. Based on the above scalings, we are able to do an order-of-magnitude estimation.
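The following sketch turns this scaling relation into a quick rescaling of the fiducial forecast; the survey numbers and the exponents $d$ and $e$ chosen below are assumed values within the quoted ranges, used purely for illustration.
\begin{verbatim}
# sigma_p scales as f_sky^{-1/2} * nbar_g^{-1} * b_g^{-d} * gamma_rms^{e}
def sigma_p_ratio(survey, fiducial, d=1.5, e=0.5):
    """Degradation factor of sigma_p relative to the fiducial survey.
    d in (1, 2) and e in (0, 1); the mid-range values here are assumptions."""
    return ((survey["f_sky"] / fiducial["f_sky"]) ** -0.5
            * (survey["nbar_g"] / fiducial["nbar_g"]) ** -1.0
            * (survey["b_g"] / fiducial["b_g"]) ** -d
            * (survey["gamma_rms"] / fiducial["gamma_rms"]) ** e)

stage_iv = {"f_sky": 0.5, "nbar_g": 40.0, "b_g": 1.0, "gamma_rms": 0.2}    # assumed fiducial
cfhtls   = {"f_sky": 0.005, "nbar_g": 20.0, "b_g": 1.0, "gamma_rms": 0.2}  # illustrative
des      = {"f_sky": 0.12, "nbar_g": 20.0, "b_g": 1.0, "gamma_rms": 0.2}   # illustrative

for name, survey in [("CFHTLS", cfhtls), ("DES", des)]:
    print(name, "sigma_p degradation ~ %.0f x" % sigma_p_ratio(survey, stage_iv))
\end{verbatim}
With these illustrative inputs, the combined sky coverage and number density factors reproduce the order-of-magnitude estimates given below.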
(1) One of the major differences between these surveys and the fiducial stage IV survey is the sky coverage. From the $f_{\rm sky}$ dependence alone, for CFHTLS we expect a factor of 10 larger reconstruction errors, since the sky coverage is a factor of 100 smaller. For DES, we expect the degradation to be a factor of 2. Since the statistical accuracy of these surveys scales in exactly the same way with respect to the sky coverage $f_{\rm sky}$, the self-calibration technique works equally well for these surveys, from this viewpoint. (2) Another difference is that the number density of source galaxies in CFHTLS and DES is likely a factor of 2 smaller. This results in another factor of 2 degradation in the reconstruction accuracy. Since the lensing measurement at $\ell<2000$ is not completely shot noise dominated, the statistical accuracy has a weaker dependence on $\bar{n}_g$. From this viewpoint, the self-calibration technique works better for surveys with higher galaxy number density. We caution that the above estimation neglects many complexities. For example, CFHTLS, DES and Pan-STARRS are shallower than the fiducial survey and thus the galaxy number densities at high redshift in these surveys are likely much smaller, while the galaxy number densities at low redshift are comparable. This implies that the $p_{i\rightarrow j}$ reconstruction in these surveys at high-z is more affected than at low-z. Furthermore, the above estimation neglects the difference in the photo-z error, which is likely considerably worse for CFHTLS and DES. It will definitely affect the reconstruction. However, as we discussed in \S\ref{subsec:error}, the detection threshold of the contamination rate ($p_{i\rightarrow j}$) is mainly determined by the ratio of noise and signal of the corresponding true-z bins. In this sense, the self-calibration technique works better for worse photo-z estimation. For a robust forecast of the performance of our self-calibration technique in each specific survey, we need a detailed fiducial model of the galaxy distribution, clustering and photo-z error distribution. Although this task is beyond the scope of this paper, the above estimates imply that the self-calibration technique will be applicable. \section{Discussions} \label{sec:discussion} It is possible to further improve the statistical accuracy of the self-calibration technique. For example, so far we have treated $C^{Gg}_{i>j}$ and $C^{Gg}_{k>j}$ as independent quantities. An improvement can be made by utilizing their internal connection. Both of them are determined by the same mass-galaxy cross correlation over the same redshift range. For this reason, they are connected by a simple scaling relation (in the absence of intrinsic alignments), as pointed out by \citet{Jain03,Zhang05,Bernstein06}. In weak lensing cosmology, one often disregards the lensing power spectrum measurement at $\ell>2000$-$3000$, because the theoretical prediction at such scales is largely uncertain. However, the shear-shear measurement at such scales contains useful information to improve the photo-z reconstruction, as well as shear-ratio information that is useful for cosmology. For example, these measurements allow for a better handle on the magnification and size bias induced correlations, mainly in the background density-foreground shear correlation measurement. We could use this information, usually disregarded in cosmological applications, to improve the photo-z calibration. So far we have only focused on the reconstructed $p_{i\rightarrow j}$. The reconstructed $C^{gg,R}_{ii}$ and $C^{Gg,R}_{i\geq j}$ contain valuable information on cosmology and can be further explored.
Since these reconstructed power spectra do not suffer from the problem of the photo-z scatters, they can also be utilized for purposes beyond cosmology. For example, the self-calibration of the galaxy intrinsic alignment proposed by \citet{Zhang08} relies on the measurement of $C^{Gg,P}_{ii}$ to infer the GI correlation contaminating the weak lensing measurement. This quantity is contaminated by the photo-z scatters. Since we are now able to quantify the photo-z scatters and $C^{Gg,R}_{ij}$ simultaneously, we are able to quantify and correct for the effect of photo-z scatters in the self-calibration of the galaxy intrinsic alignment. One key issue missing in this paper is to propagate the errors or biases in the reconstructed $p_{i\rightarrow j}$ into errors on the cosmology inferred from shear-shear data. This will tell us whether the errors in the self-calibration are small enough to avoid significant biases or inflated errors in a shear-shear cosmology measurement. Although many of the $p$'s are uncertain by more than the $\sim 0.1\%$ that is needed to make the biases negligible \citep{Bernstein09b}, there are many correlations between these errors which complicate the estimation. This issue definitely deserves further investigation. We emphasize that our purely photometric self-calibration technique is complementary to those based on cross correlations between photo-z and spec-z samples \citep{Newman08,Bernstein09}, which we call cross-calibration. An advantage of cross-calibration is that it can identify a special type of photo-z error, namely when the mean photo-z is a monotonically increasing function of true-z (other than the correct identity function). As explained earlier, the self-calibration technique fails completely for this type of photo-z error. The cross-calibration technique will also likely be more able to infer the galaxy distribution bias caused when photo-z outliers have a different $n(z)$ than other galaxies in the same true-z bin. On the other hand, the self-calibration has a number of advantages over the cross-calibration. Since the total number of photo-z galaxies is much larger than that of the spec-z sample, it can reach higher statistical accuracy. Since what it measures is the photo-z scatters in the whole survey volume, it avoids possible cosmic variance in the photo-z scatters, which could bias the cross-calibration. And since the spec-z targets may be a very different population from the photo-z galaxies, the cross-correlation method will be more susceptible to biases from varying $b_g$ among subpopulations. The photo-z scatter self-calibration method described here has many attractive aspects: it does not depend on any cosmological priors or on models of the power spectrum; it is unaffected by intrinsic alignments; its errors are determined by shot noise, not sample variance, so that higher source densities are exploited if observed; it remains Gaussian and tractable to high $\ell$; and it is not significantly affected by shear measurement errors. And of course it can be conducted with the same imaging data used for the shear-shear correlation measurement, without degrading the shear-shear information content. The Fisher analysis suggests that photo-z outlier rates can be determined with statistical errors of 0.01--1\% for bins at $z\le2$. It will be necessary to correct the data for lensing magnification bias, and in principle this can be done with little statistical penalty, but the magnification bias factor $g$ must be determined to sufficient accuracy.
The biggest issue is ``galaxy distribution bias,'' whereby photo-z outlier galaxies might have a different $n(z)$ or bias $b_g$ from the non-outlying galaxies in the same true-z bin. We find that the effective bias of galaxies must vary by $<O(0.1)$ in order to avoid systematic errors in the scattering rates that exceed the expected statistical errors. This is an area deserving of more detailed attention. In this paper, we have presented a concept study of the proposed self-calibration. For an idealized survey with stage IV survey specifications, we have shown that it can in principle reconstruct the photo-z error distribution to useful precision. A more robust forecast should take into account all extra sources of error, as listed in \S \ref{sec:statistical} and \S \ref{sec:bias}, a more realistic fiducial model (\S \ref{sec:fiducial}), and possibly further uncertainties. \section{Acknowledgment} We thank Hu Zhan, Sarah Bridle and Jun Pan for many useful discussions and the anonymous referee for many useful suggestions. PJZ thanks the hospitality of the UPenn physics and astronomy department and the Aspen center for physics, where part of the work was done. PJZ acknowledges the support of the one-hundred talents program of the Chinese academy of science, the national science foundation of China (grant No. 10533030, 10543004, 10821302 \& 10973027), the CAS grant KJCX3-SYW-N2 and the 973 program grant No. 2007CB815401. GMB acknowledges support from grant AST-0607667 from the National Science Foundation and Department of Energy grant DOE-DE-FG02-95ER40893.
\section{Introduction}\label{sec:introduction} A \textit{surface group} (\textit{hyperbolic surface group}, respectively) means a group isomorphic to the fundamental group of a closed surface with non-positive (negative, respectively) Euler characteristic. Throughout this paper, we let $F$ be a finitely generated non-abelian free group, and $H$ be a handlebody so that $\pi_1(H)=F$. Unless specified otherwise, we fix ${\script S}=\{a_1,\ldots,a_n\}$ as a generating set for $F$. Each element $w\in F$ can be written as a \textit{word} in ${\script S}\cup{\script S}^{-1}$; that is, $w=v_1 v_2\cdots v_l$ for some $v_i\in {\script S}\cup{\script S}^{-1}$. Each $v_i$ is called a \textit{letter} of $w$. $w$ is \textit{cyclically reduced} if $v_i\ne v_{i+1}^{-1}$, where the indices are taken modulo $l$. The length of a cyclically reduced word $w$ is denoted by $|w|$. The Cayley graph of $F$ with respect to the generating set ${\script S}$ is an infinite $2n$--valent tree, denoted as $\ensuremath{\mathrm{Cay}}_{\script S}(F)$ or simply $\ensuremath{\mathrm{Cay}}(F)$. There is a natural action of $F$ on $\ensuremath{\mathrm{Cay}}(F)$, so that $\ensuremath{\mathrm{Cay}}(F)/F$ is a bouquet of $n$ circles. Each circle in $\ensuremath{\mathrm{Cay}}(F)/F$ inherits from $\ensuremath{\mathrm{Cay}}(F)$ an orientation and a label by an element in ${\script S}$. Motivated by $3$--manifold theory, Gromov conjectured that every one-ended word-hyperbolic group contains a surface group~\cite{Bestvina:2009p3862}. Note that a surface subgroup of a word-hyperbolic group is actually a hyperbolic surface group. We say that $U\subseteq F$ is \textit{diskbusting} if there does not exist a non-trivial free decomposition $F=G_1\ast G_2$ such that each element of $U$ is conjugate into one of the $G_i$'s \cite{Canary:1993p4322,Stong:1997p4326,Stallings:1999p3173}. By convention, we will only consider finite subsets of $F$. As a special case, a word $w\in F$ is diskbusting if $w$ does not belong to any proper free factor of $F$. $w\in F$ is \textit{root-free} if $w$ is not a proper power. A particularly simple case of Gromov's conjecture is when a word-hyperbolic group is given as the \textit{double} $D(w)= F\ast_{\langle w\rangle} F$. $D(w)$ is one-ended and word-hyperbolic if and only if $w$ is diskbusting and root-free~\cite{Bestvina:1992p456,Gordon:2009p360}. Gordon and Wilton drew attention to this case by providing several sufficient conditions for $D(w)$ to contain a surface group~\cite{Gordon:2009p360}. On the one hand, they formulated a homological condition by crucially using a result of Calegari on surface subgroups of word-hyperbolic graphs of free groups with cyclic edge groups~\cite{Calegari:2008p1810}. On the other hand, they considered a $3$--manifold theoretic condition as follows. Realize $U\subseteq F=\pi_1(H)$ as an embedded $1$--submanifold $A\subseteq H$. $U$ (or equivalently, $A$) is said to be \textit{virtually geometric} if there exists a finite cover $p:H'\rightarrow H$ such that $p^{-1}(A)$ is freely homotopic to a $1$--submanifold embedded in $\partial H'$. In particular, $U$ (or equivalently, $A$) is \textit{geometric} if $A$ is freely homotopic to a $1$--submanifold on $\partial H$. If $w\in F$ is virtually geometric and diskbusting, then $D(w)$ contains a surface group~\cite{Gordon:2009p360}. However, not all root-free diskbusting words are virtually geometric~\cite{Manning:2009p3177}.
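Since cyclically reduced representatives are used throughout what follows, we note in passing that cyclic reduction is easy to carry out concretely. The following minimal sketch (our own illustration, not taken from the literature; generators are encoded as lowercase letters and their inverses as uppercase) performs free reduction followed by cyclic reduction:
\begin{verbatim}
def inv(letter):
    """Inverse of a generator: 'a' <-> 'A', 'b' <-> 'B', etc."""
    return letter.swapcase()

def cyclically_reduce(word):
    """Freely reduce a word, then cancel inverse letters at the two ends."""
    stack = []
    for v in word:                      # free reduction
        if stack and stack[-1] == inv(v):
            stack.pop()
        else:
            stack.append(v)
    while len(stack) > 1 and stack[0] == inv(stack[-1]):
        stack = stack[1:-1]             # cyclic reduction
    return "".join(stack)

assert cyclically_reduce("abBA") == ""                  # a b b^{-1} a^{-1}
assert cyclically_reduce("Baab") == "aa"                # conjugate of a^2
assert cyclically_reduce("bbaaccabc") == "bbaaccabc"    # already cyclically reduced
\end{verbatim}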
Let $w\in F = \pi_1(\ensuremath{\mathrm{Cay}}(F)/F)$ be cyclically reduced. Following \cite{Stallings:1983p596}, a locally injective graph map is called an \textit{immersion}. There exists an immersed loop $\gamma\subseteq \ensuremath{\mathrm{Cay}}(F)/F$ such that $[\gamma]=w$; in this case, we say that $\gamma$ \textit{reads} $w$. Let $X(w)$ denote the $2$--dimensional CW-complex obtained by taking two copies of $\ensuremath{\mathrm{Cay}}(F)/F$ and gluing each boundary component of a cylinder along each copy of $\gamma\subseteq \ensuremath{\mathrm{Cay}}(F)/F$. In the case when the free basis ${\script S}$ needs to be explicitly indicated, we write $X_{\script S}(w)=X(w)$. It turns out that $X(w)$ is an Eilenberg--Mac Lane space for $D(w)$~\cite{Wise:2000p790,Gordon:2009p360}. In~\cite{Kim:2009p3867}, Wilton and the author defined a combinatorial condition for a cyclically reduced word $w$, called \textit{polygonality}; see Definition~\ref{defn:polygonal}. A key observation was that the polygonality of $w$ is equivalent to the existence of a homeomorphically and $\pi_1$--injectively embedded closed surface of non-positive Euler characteristic in some finite cover of $X(w)$. Hence, the polygonality of $w$ guarantees the existence of a surface group in $D(w)$. We will use the following terminology, which appears in~\cite{Kim:2009p3867}. A \textit{polygonal disk} $P$ is a $2$--dimensional closed disk with a CW--structure such that $\partial P = P^{(1)}$ is a polygon. Let $E(\partial P)$ denote the set of the edges in $\partial P$. A \textit{side-pairing} $\sim$ on a collection of polygonal disks $P_1,\ldots,P_m$ is a partition of $\coprod_i E(\partial P_i)$ into unordered pairs, along with a choice of a homeomorphism between the edges in each pair; here, we require that such a homeomorphism does not identify two consecutive edges of any $\partial P_i$ in a way that fixes a common vertex of the two edges. A side-pairing $\sim$ on polygonal disks $P_1,\ldots,P_m$ determines an identification of the edges of $\coprod_i P_i$; hence, we have a closed surface $S=\coprod_i P_i/\!\!\sim$. We denote by $m(S)$ the number of polygonal disks ($2$--cells) of which $S$ is made. If $\phi\colon\thinspace\Gamma\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ is a graph map, each $e\in E(\Gamma)$ carries an orientation and a label (by ${\script S}$) induced from the orientation and the label of $\phi(e)\in E(\ensuremath{\mathrm{Cay}}(F)/F)$. Conversely, for a graph $\Gamma$, a choice of an orientation and a label (by ${\script S}$) for each $e\in E(\Gamma)$ determines a graph map $\phi\colon\thinspace\Gamma\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$; if $e\in E(\Gamma)$ is labeled by $a_i$, we call $e$ an $a_i$-edge of $\Gamma$. That $\phi\colon\thinspace\Gamma\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ is an immersion is equivalent to the condition that for each $a_i\in{\script S}$ and each $v\in\Gamma^{(0)}$, there do not exist two incoming $a_i$-edges or two outgoing $a_i$-edges at $v$. We say that $U\subseteq F$ is \textit{independent} if for any two distinct $w_1,w_2\in U$ and for any $t_1,t_2\in\bbb{Z}\setminus\{0\}$, $w_1^{t_1}$ is not conjugate to $w_2^{t_2}$. The definition of polygonality generalized to an independent set of cyclically reduced words is as follows; see \cite{Kim:2009p3867} for the case when $|U|=1$. \begin{definition}\label{defn:polygonal} Let $U\subseteq F$ be an independent set of cyclically reduced words.
\begin{enumerate} \item Suppose there exists a side-pairing $\sim$ on polygonal disks $P_1,\ldots,P_m$ and an immersion $\phi\colon\thinspace S^{(1)}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ where $S=\coprod_i P_i/\!\!\sim$, such that the composition $\partial P_i\rightarrow S^{(1)}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ reads a non-trivial power of some word in $U$. Then the closed surface $S$ is called a \textit{$U$--polygonal surface}. \item $U$ is \textit{polygonal} if either $U$ contains a proper power or there exists a $U$--polygonal surface $S$ such that $\chi(S) < m(S)$. \end{enumerate} \end{definition} \begin{rem}\label{rem:defn:polygonal} Let $U\subseteq F$ be a set of cyclically reduced words. The condition $\chi(S)<m(S)$ is necessary for the polygonality to be a non-trivial property, for one can always construct a $U$--polygonal surface $S$ with $\chi(S)=m(S)$ as follows. Choose any non-trivial word $w\in U$ and take two polygonal disks $P$ and $P'$ each of whose boundary components reads $w$. For $e\in E(\partial P)$ and $e'\in E(\partial P')$, define $e\sim e'$ if and only if $e$ and $e'$ read the same letter of $w$. Then for $S=P\coprod P'/\!\!\sim$, $\chi(S)=m(S)=2$. One can also construct a $U$--polygonal surface $S= P/\!\!\sim$ with $\chi(S)=m(S)=1$ by taking a polygonal disk $P$ whose boundary reads $w^2$ and by defining a side-pairing $\sim$ to identify the edges of $\partial P$ with respect to the $\pi$--rotation. \end{rem} The polygonality of $U\subseteq F$ depends on the choice of a free basis ${\script S}$ for $F$ in which $U$ is written. For $g,h\in F$, we let $g^h$ denote $h^{-1}g h$. Choose a finite-index subgroup $F'$ of $F$ and a free basis ${\script S}'$ for $F'$. For $w'\in F'$, let us denote the $F'$--conjugacy class of $w'$ by $[w']$. Fix a word $w\in F$. For $g\in F$, let $n_g$ be the smallest positive integer such that $(w^{n_g})^g\in F'$. Following~\cite{Gordon:2009p360,Manning:2009p3177}, define \[\ensuremath{\underline{w}}_{F'} = \{ [(w^{n_g})^g] |\thinspace gF'\in F/F'\}.\] Note that $|\ensuremath{\underline{w}}_{F'}|\le |F/F'|$. We define a \textit{transversal $\ensuremath{\hat{w}}_{F'}$ for $\ensuremath{\underline{w}}_{F'}$} to be a subset of $F'$ obtained by choosing exactly one element from each conjugacy class in $\ensuremath{\underline{w}}_{F'}$; here, we will regard $\ensuremath{\hat{w}}_{F'}$ as a set of cyclically reduced words written in ${\script S}'$, by taking cyclic conjugations if necessary. Suppose for some $g$ and $h$ in $F$, $(w^{n_g})^g$ and $(w^{n_h})^h$ have non-trivial powers which are conjugate to each other in $F'$. Since $F$ does not have any non-trivial Baumslag--Solitar relation, we can write $(w^M)^g = (w^{M})^{hf'}$ for some $M>0$ and $f'\in F'$. We have $hf'g^{-1}\in C(w^{M})=C(w)$ and so, $w^{hf'} = w^g$. In particular, $n_g=n_h$ and $[(w^{n_g})^g] = [(w^{n_h})^h]$. This shows that $\ensuremath{\hat{w}}_{F'}$ is independent as a set of words in $F'$. \begin{definition}\label{defn:vp} A word $w\in F$ is \textit{virtually polygonal} if a transversal $\ensuremath{\hat{w}}_{F'}$ for $\ensuremath{\underline{w}}_{F'}$ is polygonal as an independent set of cyclically reduced words written in ${\script S}'$, for some finite-index subgroup $F'$ of $F$ and for some free basis ${\script S}'$ of $F'$. \end{definition} After formulating polygonality in terms of Whitehead graphs, we will prove a relation between virtual geometricity and virtual polygonality. 
\begin{theoremnonumber}[Theorem~\ref{thm:vg implies vp}] A diskbusting and virtually geometric word in $F$ is virtually polygonal. \end{theoremnonumber} While Theorem~\ref{thm:vg implies vp} is interesting in its own right, it also has a corollary related to Gromov's conjecture on $D(w)$. \begin{theoremnonumber}[Theorem~\ref{thm:vp}] If a root-free word $w\in F$ is virtually polygonal, then $D(w)$ contains a hyperbolic surface group. \end{theoremnonumber} \begin{corollarynonumber}[Corollary~\ref{cor:vp}; first proved by Gordon--Wilton~\cite{Gordon:2009p360}] If $w\in F$ is root-free, virtually geometric and diskbusting, then $D(w)$ contains a hyperbolic surface group. \end{corollarynonumber} In~\cite{Manning:2009p3177}, Manning proved that the word $w_1=bbaaccabc\in F_3=\langle a,b,c\rangle$ is not virtually geometric. The same argument shows that $w_2 =aabbacbccadbdcdd \in F_4=\langle a,b,c,d \rangle$ is not virtually geometric. We will prove that $w_1$ and $w_2$ are both polygonal (Proposition~\ref{prop:w1w2}). As a consequence, $D(w_1)$ and $D(w_2)$ contain surface groups. \begin{theoremnonumber}[Theorem \ref{thm:p but not vg}] There exist polygonal words which are not virtually geometric. \end{theoremnonumber} \section{Preliminaries}\label{sec:preliminary} We recall basic facts on polygonality and Whitehead graphs. The material in Section~\ref{subsec:polygonality} is largely drawn from~\cite{Kim:2009p3867}. For the reader's convenience, we include a complete proof of Theorem~\ref{thm:polygonal} which is similar to the argument in~\cite[Corollary 2.11]{Kim:2009p3867}. \subsection{Polygonality}\label{subsec:polygonality} We say that $U,U'\subseteq F$ are \textit{equivalent} if $U'=\phi(U)$ for some $\phi\in \smallcaps{Aut}(F)$. The definition of polygonality of $U\subseteq F$ involves the representation of $U$ as a set of words written in ${\script S}=\{a_1,\ldots,a_n\}$~(Definition~\ref{defn:polygonal}). In particular, $U\subseteq F$ is equivalent to a polygonal set of words if and only if $U\subseteq F$ is polygonal with respect to some choice of a free basis for $F$. We list some of the known polygonal words as follows. \begin{exmp}\label{exmp:polygonal} (1) Consider a cyclically reduced word $w\in F = \langle a_1,\ldots,a_n\rangle$ such that for each $i$, exactly two letters of $w$ belong to $\{a_i,a_i^{-1}\}$. In particular, $|w|=2n$. We claim that $w$ is polygonal. We may assume $w$ is root-free, as any proper power is polygonal by definition. Let $P$ be a polygonal disk with each edge oriented and labeled by ${\script S}$ such that $\partial P$ reads $w$. Let $\sim$ be the side-pairing which identifies the two $a_i$-edges of $\partial P$ for each $i$. Put $S=P/\!\!\sim$. Since $S^{(1)}$ has exactly one $a_i$-edge for each $i$, $S^{(1)}$ immerses into $\ensuremath{\mathrm{Cay}}(F)/F$. In order to prove $w$ is polygonal, we have only to show that $\chi(S)<m(S)=1$. For each $v\in S^{(0)}$, let $d_v$ denote the valence (of the $1$--complex $S^{(1)}$) at $v$. As $w$ is cyclically reduced, $d_v\ge2$ for each $v$. Moreover, $\chi(S)=|S^{(0)}|-|S^{(1)}|+1 = \sum_{v\in S^{(0)}}1 - \sum_{v\in S^{(0)}} d_v/2 + 1$. Hence, $\chi(S)<1$ fails only when $d_v=2$ for each $v\in S^{(0)}$. If $d_v=2$ for each $v$, then one can see that $S\approx\bbb{R} P^2$ and $\sim$ identifies the edges of $\partial P$ by the $\pi$--rotation; this means that $\sim$ identifies $v_i$ with $v_{(|w|/2)+i}$ for each $i$. It follows that $w=u^2$ for some $u\in F$, and we have a contradiction.
(2) Let $F_2$ denote the free group generated by $a$ and $b$. If $|p_i|,|q_i|>1$ for each $i$, then the words $\prod_i a^{p_i} b^{q_i}, \prod_i a^{p_i} (a^b)^{q_i}\in F_2$ are polygonal~\cite{Kim:2009p3867}. (3) The word $w= \prod_i a^{p_i} (a^b)^{q_i}$ is called a \textit{positive height--1 word} of $F_2$ if $p_i,q_i>0$. Put $p=\sum_i p_i, q=\sum_i q_i, p'=\sum_{p_i=1} 1$ and $q'=\sum_{q_i=1} 1$. If $pp'\le q^2$ and $qq'\le p^2$, then $w$ is polygonal~\cite{Kim:2009p3867}. With a suitable notion of probability on $F_2$, this implies that a positive height--1 word is ``almost surely'' polygonal. \end{exmp} Let $p:(X',x')\rightarrow (X,x)$ be a finite covering map for some based spaces $(X',x')$ and $(X,x)$. For a based loop $\gamma\colon\thinspace (S^1,v)\rightarrow (X,x)$, let $q\colon\thinspace (S^1,v')\rightarrow (S^1,v)$ be the smallest covering such that $\gamma \circ q$ lifts to a based loop $\tilde{\gamma}\colon\thinspace (S^1, v')\rightarrow (X',x')$ as shown in the commutative diagram below. Following~\cite{Wise:2000p790}, we say that $\tilde{\gamma}$ is the \textit{elevation of $\gamma$ at $x'$ with respect to $p$}. \[ \xymatrix{ & (S^1,v') \ar[rr]^{\tilde{\gamma}}\ar[d]^q && (X',x')\ar[d]^p\\ & (S^1,v) \ar[rr]^{{\gamma}} && (X,x)\\ } \] Let $U=\{w_1,\ldots,w_r\}\subseteq F$ be an independent set of root-free, cyclically reduced words, and $\gamma_i$ be the (based) loop in $\ensuremath{\mathrm{Cay}}(F)/F$ reading $w_i$. Take two copies $\Gamma_1,\Gamma_2$ of $\ensuremath{\mathrm{Cay}}(F)/F$ and glue each boundary component of a cylinder $C_i\approx S^1\times[-1,1]$ along each copy of $\gamma_i$ in $\Gamma_1$ and in $\Gamma_2$, for $i=1,2,\ldots,r$. We let $X(U)$ denote the $2$--dimensional CW-complex thus obtained. Define $D(U)=\pi_1(X(U))$. Take $F^{(1)},F^{(2)}\cong F$, and let $w_i^{(j)}$ denote the image of $w_i$ in $F^{(j)}$ for $j=1,2$. Note that \[D(U) \cong \langle F^{(1)},F^{(2)},t_2,t_3,\ldots,t_r |\thinspace w_1^{(1)}=w_1^{(2)}, (w_i^{(1)})^{t_i}=(w_i^{(2)})\mbox{ for }i=2,\ldots,r\rangle.\] For a finite-index subgroup $F'\le F$, $\ensuremath{\mathrm{Cay}}(F)/F'$ is the finite covering space of $\ensuremath{\mathrm{Cay}}(F)/F$ corresponding to $F'$. For each choice of the basepoint $\tilde{x}$ of $\ensuremath{\mathrm{Cay}}(F)/F'$, there is an elevation of $\gamma_i$ at $\tilde{x}$ with respect to the covering $\ensuremath{\mathrm{Cay}}(F)/F'\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$. We identify two elevations $\tilde{\gamma},\tilde{\gamma}'$ (at two distinct basepoints $\tilde{x},\tilde{x}'\in\ensuremath{\mathrm{Cay}}(F)/F'$) of the based loop $\gamma_i\colon\thinspace S^1\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ if $\tilde{\gamma}$ and $\tilde{\gamma}'$ are the same as loops without basepoints (\textit{i.e.}, when the basepoints are forgotten); this means that the lift of some power of $\gamma_i$ at $\tilde{x}$ to $\ensuremath{\mathrm{Cay}}(F)/F'$ terminates at $\tilde{x}'$. See~\cite[Lemma 2.7]{Wilton:2008p7013} and~\cite[Lemma 2.4]{Kim:2009p3867} for an algebraic description of this identification. We let $\{ \tilde{\gamma}_1,\tilde{\gamma}_2,\ldots,\tilde{\gamma}_s\}$ denote the set of all the elevations of $\gamma_1,\gamma_2,\ldots,\gamma_r$ at the vertices of $\ensuremath{\mathrm{Cay}}(F)/F'$, after this identification.
Take two copies $\Gamma_1',\Gamma_2'$ of $\ensuremath{\mathrm{Cay}}(F)/F'$ and glue each boundary component of a cylinder $C_j'$ along each copy of $\tilde{\gamma}_j$ in $\Gamma_1'$ and in $\Gamma_2'$, for $j=1,2,\ldots,s$; in this way, one obtains a finite cover $Y(U,F')$ of $X(U)$. The image of each cylinder in $X(U)$ or in $Y(U,F')$ will still be called a \textit{cylinder}. Lemma~\ref{lem:cylinder} is a special case of \cite[Lemma 2.2]{Kim:2009p3867}. We omit the proof here. \begin{lem}[\cite{Kim:2009p3867}]\label{lem:cylinder} Suppose $U\subseteq F$ is an independent set of root-free, cyclically reduced words and $[F:F']<\infty$. Let $S$ be a closed connected surface homeomorphically embedded in $Y(U,F')$. Then $S$ is the union of some cylinders, $\chi(S)\le0$ and $\pi_1(S)$ embeds into $D(U)$.\qed \end{lem} The following generalizes \cite[Corollary 2.11]{Kim:2009p3867}. \begin{theorem}\label{thm:polygonal} Let $U\subseteq F$ be an independent set of root-free, cyclically reduced words. If $U$ is polygonal, then $D(U)$ contains a hyperbolic surface group. \end{theorem} \begin{proof} Write $U = \{w_1,\ldots,w_r\}$, and let $\gamma_j\subseteq \ensuremath{\mathrm{Cay}}(F)/F$ realize $w_j$. Let $\sim$ be a side-pairing on polygonal disks $P_1,\ldots,P_m$ such that $S=\coprod_i P_i/\!\!\sim$ is a closed $U$--polygonal surface satisfying $\chi(S)<m$. This implies that for some immersion $\phi\colon\thinspace S^{(1)}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$, each composition $\partial P_i\rightarrow S^{(1)}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ reads a non-trivial power of an element in $U$. Since $F$ is subgroup separable, one can lift the immersion $\phi$ to an embedding $\phi'\colon\thinspace S^{(1)}\rightarrow \ensuremath{\mathrm{Cay}}(F)/F'$ for some $[F:F']<\infty$~\cite{Scott:1978p142}. Choose an open disk $B_i$ in the interior of each $P_i$ and let $S'$ be the double of $S\setminus\cup_i B_i$. Since $\phi'$ maps each $\partial P_i$ to an elevation of some $\gamma_j$, the definition of $Y(U,F')$ implies that $S'$ is homeomorphic to the union $S''$ of some cylinders in $Y(U,F')$. By Lemma~\ref{lem:cylinder}, $\pi_1(S')$ embeds into $D(U)$. Note that $\chi(S')=2 (\chi(S)-m) <0$. \end{proof} \begin{rem}\label{rem:polygonal} Let $U \subseteq F$ be as in the hypothesis of Theorem~\ref{thm:polygonal}. By an elementary argument on graphs of spaces, a finite cover of $X(U)$ contains a homeomorphically embedded closed surface if and only if so does $Y(U,F')$ for some $[F:F']<\infty$. Then Theorem~\ref{thm:polygonal} can actually be strengthened as follows: $U$ is polygonal if and only if a finite cover of $X(U)$ contains a homeomorphically embedded closed hyperbolic surface; see~\cite{Kim:2009p3867}. \end{rem} \subsection{Whitehead graph} A \textit{graph} means a $1$--dimensional CW-complex. For a graph $\Gamma$, let $V(\Gamma)$ and $E(\Gamma)$ denote the vertex set and the edge set, respectively. For a cyclically reduced word $w = v_1 v_2\ldots v_l\in F$ where $v_i\in {\script S}\cup{\script S}^{-1}$, we let $W(w)$ denote the \textit{Whitehead graph} of $w$. This means, $W(w)$ is a graph with the vertex set $ {\script S}\cup{\script S}^{-1}$ and the edge set $\{e_1,e_2,\ldots,e_l\}$ such that $e_i$ joins $v_i$ and $v_{i+1}^{-1}$, where the indices are taken modulo $l$. In the special case when $w=a_i$ or $w=a_i^{-1}$, $W(w)$ consists of a single edge joining $a_i$ and $a_i^{-1}$. 
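As a computational aside (not part of the original argument), the edge list of $W(w)$ can be generated mechanically from this definition; the following minimal sketch does so, with the letters of ${\script S}\cup{\script S}^{-1}$ encoded as \texttt{a}, \texttt{A} (for $a^{-1}$), and so on; this encoding is chosen only for illustration. Example~\ref{exmp:cnt} below computes $W(ab^{-2})$ by hand, and the sketch reproduces the same three edges.
\begin{verbatim}
# Whitehead graph of a cyclically reduced word w = v_1 ... v_l:
# the edge e_i joins the vertices v_i and v_{i+1}^{-1} (indices mod l).
def inverse(letter):
    return letter.lower() if letter.isupper() else letter.upper()

def whitehead_edges(word):
    """List of edge endpoints (v_i, v_{i+1}^{-1}) of W(word)."""
    l = len(word)
    return [(word[i], inverse(word[(i + 1) % l])) for i in range(l)]

# w = a b^{-2}, written as "aBB" in this encoding:
print(whitehead_edges("aBB"))   # [('a', 'b'), ('B', 'b'), ('B', 'A')]
\end{verbatim}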
We define the \textit{connecting map $\sigma_w$ associated with $W(w)$} as the map $\sigma_w\colon\thinspace\{(e,v)|\thinspace e\in E(W(w)), v\in \partial e\}\rightarrow E(W(w))$ such that $\sigma_w(e_i,v_i)=e_{i-1}$ and $\sigma_w(e_i,v_{i+1}^{-1})=e_{i+1}$. In particular, if $\sigma(e,v)=e'$ then $v\in\partial e$, $v^{-1}\in\partial e'$ and moreover, the length--$2$ subwords of $w$ corresponding to $e$ and $e'$ are consecutive and share the letter $v$ or $v^{-1}$; see Example~\ref{exmp:cnt}. Consider a non-zero integral polynomial $f(x_1,\ldots,x_r)=\sum_{1\le i\le r,1\le j\le s} c_{ij} x_i^j$ where $s>0$ and $c_{ij}\ge0$. For $U=\{w_1,\ldots,w_r\}\subseteq F$ consisting of cyclically reduced words, we define \[W(f(U))=W(f(w_1,\ldots,w_r))=\cup_{i=1}^r \cup_{j=1}^s \cup_{k=1}^{c_{ij}} W(w_i^j)\] where the union is taken so that each term $W(w_i^j)$ has the common vertex set ${\script S}\cup {\script S}^{-1}$, and any two terms do not have a common edge. For instance, $W(w+w)$ is obtained by doubling all the edges of $W(w)$, so that $V(W(w+w))={\script S}\cup{\script S}^{-1}$ and $|E(W(w+w))|=2|E(W(w))| = 2|w|$. Note that $W(f(U))$ is an abusive notation in that $U$ is considered as an \textit{ordered tuple} rather than just a set. We have $|E(W(f(U)))| = \sum_{i,j}c_{ij} |E(W(w_i^j))| = \sum_{i,j}c_{ij} j|w_i|$. The unique extension on $W(f(U))$ of the connecting map on each $W(w_i^j)$ is called the \textit{connecting map associated with $W(f(U))$}. \begin{exmp}\label{exmp:cnt}. Consider the word $w=ab^{-2}\in F_2=\langle a,b\rangle$. Write $w=v_1v_2v_3$ where $v_1=a$ and $v_2=v_3=b^{-1}$. $W(w)$ is shown in Fig.~\ref{fig:connecting map} (a); here, the edges $e_1,e_2$ and $e_3$ correspond to the (cyclic) subwords $v_1v_2=ab^{-1}, v_2v_3=b^{-2}$ and $v_3v_1=b^{-1}a$, respectively. Some of the values of the associated connecting map are given as $\sigma_w(e_1,a)=\sigma_w(e_1,v_1)=e_3$, $\sigma_w(e_3,a^{-1})=\sigma_w(e_3,v_1^{-1})=e_1$ and $\sigma_w(e_3,b^{-1})=\sigma_w(e_3,v_3)=e_2$. Figure~\ref{fig:connecting map} (b) shows $W(w+w)=W(2w)$. $e_i$ and $e_i'$ denote the edges corresponding to the length--$2$ cyclic subword $v_iv_{i+1}$ of $w$. The connecting map is given as $\sigma_{2w}(e_1,a)=e_3$, $\sigma_{2w}(e_3,a^{-1})=e_1$,$\sigma_{2w}(e_3,b^{-1})=e_2$, $\sigma_{2w}(e_1',a)=e_3'$ and so forth. $W(w^2)$ is drawn in Fig.~\ref{fig:connecting map} (c). $f_i$ denotes the edge corresponding to the $i$--th length--$2$ cyclic subword of $w^2=ab^{-2}ab^{-2}$. The connecting map can be computed as $\sigma_{w^2}(f_1,a)=f_6$, $\sigma_{w^2}(f_3,a^{-1})=f_4$, $\sigma_{w^2}(f_3,b^{-1})=f_2$, $\sigma_{w^2}(f_4,a)=f_3$ and so forth. Note that $W(2w)$ and $W(w^2)$ are the same as graphs, while $\sigma_{2w}$ and $\sigma_{w^2}$ are distinct. \end{exmp} \begin{figure}[htb!] \subfigure[$W(w)$.]{ \includegraphics[]{fig/figcnt.1} }\hfill \subfigure[$W(w+w)=W(2w)$.]{ \includegraphics[]{fig/figcnt.2} }\hfill \subfigure[$W(w^2)$.]{ \includegraphics[]{fig/figcnt.3} } \caption{$w=ab^{-2}$. \label{fig:connecting map}} \end{figure} \begin{rem}\label{rem:connecting map} (1) Let $\sigma$ be the connecting map associated with a Whitehead graph $\Gamma$. For $e\in E(\Gamma)$ and $v\in\partial e$, $v^{-1}$ is an endpoint of $\sigma(e,v)$ and $\sigma(\sigma(e,v),v^{-1})=e$. (2) Let $U=\{w_1,\ldots,w_r\}$ and $f(x_1,\ldots,x_r)=\sum_{i,j\ge1}c_{ij} x_i^j$. For each $i$ and $j$, glue $c_{ij}$ copies of a disk to $\ensuremath{\mathrm{Cay}}(F)/F$ along the loop reading $w_{i}^j$. 
Let $Z(f(U))$ be the $2$--dimensional CW-complex thus obtained. Then $W(f(U))$ is precisely the link of the (unique) vertex $v$ in $Z(f(U))$. The vertex $a_i$ ($a_i^{-1}$, respectively) of $W(f(U))$ corresponds to the incoming (outgoing, respectively) portion of the $a_i$-edge in $Z(f(U))^{(1)}$ at $v$. \end{rem} A collection $\script{D}=\{D_1,\ldots,D_n\}$ of disjoint, properly embedded disks in $H$ is called a \textit{disk structure on $H$} if $H\setminus\cup_i D_i$ is a $3$--cell~\cite{Stallings:1999p3173}. We will always equip each disk $D_i$ with a transverse orientation. Let $A$ be an embedded $1$--submanifold of $H$. Assume that $A$ intersects $\script{D}$ \textit{transversely and minimally}. This means that $A\cap(\cup_i D_i)$ consists of finitely many points and $|A\cap (\cup_i D_i)|\le |A'\cap (\cup_i D_i)|$ for any $1$--submanifold $A'$ freely homotopic to $A$. Given a disk structure $\script{D}$ on $H$, we denote by $\script{D}\times[-1,1]=\cup_i (D_i\times[-1,1])$ a closed regular neighborhood of $\script{D}$ in $H$ so that $D_i$ is identified with $D_i\times0$ and the transverse orientation of $D_i$ is from $D_i\times{-1}$ to $D_i\times1$. By transversality, we may assume that $A\cap (D_i\times[-1,1])$ has the product structure $(A\cap D_i)\times[-1,1]$ for each $i$. We define the \textit{Whitehead graph $W(A,\script{D})$ of $A$ with respect to $\script{D}$} as the graph such that the vertex set is ${\script S}\cup{\script S}^{-1}$ and the edge set consists of arcs in $A\setminus (\script{D}\times(-1,1))$, where $A\cap (D_i\times\pm1)$ are identified with the vertices $a_i^{\pm1}$ in $W(A,\script{D})$. In other words, $W(A,\script{D})$ is the graph obtained from $(A\cup(\script{D}\times[-1,1]))\setminus(\script{D}\times(-1,1))$ by collapsing each $D_i\times\{\pm1\}$ onto the vertices $a_i^{\pm1}$. Suppose each loop $\gamma_j$ in $A=\gamma_1\cup\ldots\cup\gamma_r$ is equipped with an orientation. Follow each loop $\gamma_j$, and whenever $\gamma_j$ intersects $D_i$, record $a_i$ or $a_i^{-1}$ according to whether the orientation of $\gamma_j$ coincides with the transverse orientation of $D_i$ or not. Let $w_j$ be the word thus obtained. In this case, we say that $U=\{w_1,\ldots,w_r\}$ is \textit{realized by $A$} with respect to $\script{D}$. As we are assuming that $A$ intersects $\script{D}$ minimally, $U$ consists of cyclically reduced words. Depending on the choice of the basepoint of $\gamma_j$, $w_j$ is determined up to cyclic conjugation. Observe that $W(A,\script{D})=W(U)$, and the notion of the connecting map is also defined for $W(A,\script{D})$ via that of $W(U)$. Moreover, for any other choice of disk structure $\script{D}'$ on $H$ which $A$ intersects transversely and minimally, $W(A,\script{D}')=W(\phi(U))$ for some $\phi\in\smallcaps{Aut}(F)$. For a simple loop $\gamma$ embedded in $H$ and $c,j>0$, we let $c\gamma^j$ denote the $1$--submanifold consisting of $c$ components, each of which is freely homotopic to $\gamma^j$. Consider a non-zero integral polynomial $f(x_1,\ldots,x_r)=\sum_{i,j\ge1} c_{ij} x_i^j$ where $c_{ij}\ge0$. Let $A$ be a $1$--submanifold of $H$ written as the union of disjoint loops $A=\gamma_1\cup\cdots\cup\gamma_r$. Then $f(A)=f(\gamma_1,\ldots,\gamma_r)$ denotes a $1$--submanifold in $H$ freely homotopic to $\coprod_{i,j\ge1} c_{ij}\gamma_i^j$. $f(A)$ is again an abusive notation in that $A$ is considered as an ordered tuple of loops, rather than just a set of loops.
If a $1$--submanifold $A\subseteq H$ realizes $U\subseteq F$ with respect to a disk structure $\script{D}$, then $W(f(U)) = W(f(A),\script{D})$. \begin{rem}\label{rem:minimal} When considering the Whitehead graph of $U\subseteq F$ or a $1$--submanifold $A\subseteq H$, we always require that $U$ consists of cyclically reduced words, and $A$ intersects $\script{D}$ transversely and minimally. If necessary, we achieve this by replacing words in $U$ by some cyclic conjugations or by freely homotoping loops in $A$. Note that certain additional conditions on $A$, such as $A\subseteq\partial H$, can possibly be lost by freely homotoping $A$. \end{rem} A properly embedded disk $D$ in $H$ is \textit{essential} if either $H\setminus D$ is connected or neither of the components of $H\setminus D$ is a $3$--cell. Let $A$ be a $1$--submanifold of $H$. Suppose that for any $1$--submanifold $A'$ freely homotopic to $A$, $A'$ intersects any essential disk $D\subseteq H$. Then $A$ is said to be \textit{diskbusting}. Note that a set $U$ of cyclically reduced words in $F$ is diskbusting if and only if a $1$--submanifold $A\subseteq H$ realizing $U$ is diskbusting. A vertex $v$ in a graph $\Gamma$ is a \textit{cut vertex} if $\Gamma\setminus\{v\}$ is not connected. We say that a Whitehead graph $W(A,\script{D})$ is \textit{minimal} \footnote{Minimality of $W(A,\script{D})$ should not be confused with the hypothesis that $A$ intersects $\script{D}$ transversely and minimally.} if $|E(W(A,\script{D}))|\le |E(W(A',\script{D}'))|$ for any $1$--submanifold $A'$ freely homotopic to $A$ and any disk structure $\script{D}'$ which $A'$ intersects transversely and minimally. $U\subseteq F$ is \textit{minimal} if $U$ is not equivalent to any $U'\subseteq F$ where the sum of the lengths of the words in $U'$ is smaller than that of $U$. Using a well-known finite-time algorithm to determine whether $U\subseteq F$ is diskbusting or not, one gets the following: \begin{theorem}[\cite{Whitehead:1936p4475,Stong:1997p4326,Stallings:1999p3173}] \begin{enumerate} \item If $U\subseteq F$ is minimal, independent and diskbusting, then $W(U)$ is connected and does not have a cut vertex. \item Suppose $A\subseteq H$ is a diskbusting $1$--submanifold which intersects a disk structure $\script{D}$ on $H$ transversely and minimally, such that $W(A,\script{D})$ is minimal. Then $W(A,\script{D})$ is connected and does not have a cut vertex.\qed \end{enumerate} \label{thm:stallings} \end{theorem} \section{Simple Surgery}\label{sec:ss} Zieschang proved a key fact on a minimal Whitehead graph of a geometric $1$--submanifold in $H$. \begin{theorem}[\cite{Zieschang:1965p4314}; see also~\cite{Berge:2009p3422}]\label{thm:geometric} Suppose $A$ is a geometric $1$--submanifold of $H$ which intersects a disk structure $\script{D}$ transversely and minimally. If $W(A,\script{D})$ is minimal, then $A$ is freely homotopic to some $1$--submanifold $A'\subseteq\partial H$ such that $A'$ intersects $\script{D}$ transversely and minimally. In particular, $W(A',\script{D})=W(A,\script{D})$. \qed \end{theorem} A \textit{pairing} $\sim$ on a finite set $X$ will mean an equivalence relation on $X$ such that each equivalence class consists of precisely two elements. From now on, we will use the notation $I_1=(0,1]$ and $I_{-1}=[-1,0)$. 
\begin{setting}[for Definition~\ref{defn:ss}]\label{setting:ss} Let $A\subseteq H$ be a $1$--submanifold and $\script{D}=\{D_1,\ldots,D_n\}$ be a disk structure on $H$ such that $A$ intersects $\script{D}$ transversely and minimally. Recall our convention that $\script{D}\times [-1,1]=\cup_i (D_i\times[-1,1])$ denotes a closed regular neighborhood of $\script{D}$ equipped with a product structure so that $D_i$ is identified with $D_i\times 0$. In particular, if $v\in D_i$, then $v\times 1\in D_i\times1\subseteq H$ and $v\times -1\in D_i\times-1\subseteq H$. We will also assume that $A$ is compatible with the product structure in the sense that $A\cap (D_i\times[-1,1]) = (A\cap D_i)\times[-1,1]$ for each $i$. For each $i$, suppose $|A\cap D_i|$ is even and $\sim_i$ is a pairing on $A\cap D_i$. For each pair $v\sim_i v'$, let $\alpha_{vv'}^{+}\subseteq D_i\times I_1$ be an arc joining $v\times1$ and $v'\times 1$, and $\alpha_{vv'}^-\subseteq D_i\times I_{-1}$ be an arc joining $v\times-1$ and $v'\times-1$; moreover, we assume that all the arcs in $\{\alpha_{vv'}^\pm|\thinspace\; v\sim_i v'\mbox{ for some }i\}$ are disjoint (see Fig.~\ref{fig:sseg1} and~\ref{fig:sseg2}). Let $A'\subseteq H\setminus\script{D}$ be the $1$--submanifold obtained from $A$ if one replaces $A\cap (D_i\times[-1,1])$ by the arcs $\cup_{v\sim_i v'}\alpha_{vv'}^\pm$ for all $i$. \end{setting} \begin{definition}\label{defn:ss} In Setting~\ref{setting:ss}, assume further the following: \renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate} \item for each $1\le i\le n$ and $\epsilon=\pm1$, the intersection between each component of $A'$ and $D_i\times I_\epsilon$ is either connected or empty; \item for some component $\gamma$ of $A'$, $\gamma\cap(\script{D}\times[-1,1])$ has more than two components. \end{enumerate} Then we say that $(\script{D},\{\sim_i\})$ \textit{determines a simple surgery on $A$.} When $\sim_i$ needs not be explicitly notated, we simply say \textit{$A$ admits a simple surgery (with respect to $\script{D}$)}. Also, $A'$ is said to be \textit{obtained by the simple surgery $(\script{D},\{\sim_i\})$ on $A$}. \renewcommand{\labelenumi}{(\arabic{enumi})} \end{definition} \begin{exmp}\label{exmp:ss1} Let $A\subseteq H$ be the $1$--submanifold denoted as the bold curve in Fig.~\ref{fig:sseg1} (a). Write $D_1\cap A = \{v,v'\}$ and $D_2\cap A=\{w,w'\}$. Define $v\sim_1 v'$ and $w\sim_2 w'$, and consider $\alpha_{vv'}^\pm$ and $\alpha_{ww'}^\pm$ as in Setting~\ref{setting:ss}. Put $\script{D}=\{D_1,D_2\}$. Figure~\ref{fig:sseg1} (b) shows $A'\subseteq H$ which is obtained by substituting $\alpha_{vv'}^\pm$ and $\alpha_{ww'}^\pm$ for $A\cap (\script{D}\times[-1,1])$. Note that $A'\cap (\script{D}\times[-1,1])$ has four components $\{\alpha^+_{vv'},\alpha^-_{vv'},\alpha^+_{ww'},\alpha^-_{ww'}\}$. It follows that $A$ admits the simple surgery $(\script{D},\{\sim_1,\sim_2\})$. \begin{figure}[htb!] \subfigure[$A\approx S^1$.]{ \includegraphics[]{fig/figsseg1.1} } \subfigure[$A'\approx S^1$.]{ \includegraphics[]{fig/figsseg1.2} } \caption{$A'$ is obtained by a simple surgery on $A$. \label{fig:sseg1}} \end{figure} \end{exmp} \begin{exmp}\label{exmp:ss2} The bold curves in Fig.~\ref{fig:sseg2} (a) denotes $A\subseteq H$. Put $D_1\cap A = \{u,u',v,v'\}$ and $D_2\cap A=\{w,w'\}$ as shown in the figure. Set $\script{D}=\{D_1,D_2\}$. Define $u\sim_1 u',v\sim_1 v'$ and $w\sim_2 w'$. 
Replace suitable segments on $A$ by $\alpha_{uu'}^\pm,\alpha_{vv'}^\pm$ and $\alpha_{ww'}^\pm$ as in Setting~\ref{setting:ss}, to obtain $A'=\gamma_1\coprod\gamma_2\coprod \gamma_3$; see Fig.~\ref{fig:sseg2} (b). Note that condition (ii) of Definition~\ref{defn:ss} is violated, since $\gamma_i\cap(\script{D}\times[-1,1])$ has exactly two components for each $i=1,2,3$. Consider a different pairing $u\sim_1 v',u'\sim_1 v$ and $w\sim_2 w'$. Following Setting~\ref{setting:ss}, replace segments in $A\cap (\script{D}\times[-1,1])$ accordingly to obtain $A''\approx S^1$ described in Fig.~\ref{fig:sseg2} (c). Condition (i) of Definition~\ref{defn:ss} then fails, since $A''\cap (D_1\times I_1)$ and $A''\cap (D_1\times I_{-1})$ are both disconnected. In this way, one can see that $A$ does not admit any simple surgery with respect to $\script{D}$. \begin{figure}[htb!] \subfigure[$A\approx S^1\coprod S^1\coprod S^1\coprod S^1$.]{ \includegraphics[]{fig/figsseg2.1} } \subfigure[$A'\approx S^1\coprod S^1\coprod S^1$.]{ \includegraphics[]{fig/figsseg2.2} } \subfigure[$A''\approx S^1$.]{ \includegraphics[]{fig/figsseg2.3} } \caption{$A$ does not admit a simple surgery with respect to $\script{D}=\{D_1,D_2\}$. \label{fig:sseg2}} \end{figure} \end{exmp} Whether $A\subseteq H$ admits a simple surgery or not can be detected by looking at a corresponding Whitehead graph. \begin{prop}\label{prop:ss} Let $A$ be a $1$--submanifold of $H$ which intersects a disk structure $\script{D}$ transversely and minimally. Let $\sigma$ denote the connecting map associated with $\Gamma=W(A,\script{D})$. Then $A$ admits a simple surgery with respect to $\script{D}$ if and only if $\Gamma=\cup_h C_h$ for some $C_h$'s satisfying the following: \renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate} \item each $C_h$ is a simple cycle, \item $C_h$ and $C_h'$ do not have a common edge whenever $h\ne h'$, \item if $e$ and $e'$ are edges of some $C_h$ intersecting at a vertex $v$, then $\sigma(e,v)$ and $\sigma(e',v)$ are edges of some $C_{h'}$, \item at least one $C_h$ is not a bigon. \end{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} \end{prop} In condition (iii), $h$ and $h'$ can possibly be the same. \begin{proof} Write $\script{D}=\{D_1,\ldots,D_n\}$. We let $\Phi\colon\thinspace (A\cup (\script{D}\times[-1,1]))\setminus(\script{D}\times(-1,1))\rightarrow\Gamma$ denote the map collapsing $D_i\times\pm1$ onto the vertices $a_i^{\pm1}$. ($\Rightarrow$) Let $A'$ be obtained from $A$ by the simple surgery $(\script{D},\{\sim_i\})$. $\Phi$ maps each loop in $A'$ onto a simple cycle in $\Gamma$, by the definition of a simple surgery. Hence, we have the condition (i). Let $\{C_1,C_2,\ldots, C_t\}$ denote the set of simple cycles in $\Gamma$ thus obtained. (ii) follows from the fact that $A'$ consists of disjoint loops. For (iii), let $e,e'\in E(C_h)$ intersect with, say, $a_i\in V(\Gamma)$. Then there exist two arcs $\alpha,\alpha'$ in $A\setminus (\script{D}\times(-1,1))$ and $v,v'\in A\cap D_i$ such that $v\sim_i v'$, $v\times1\in \alpha,v'\times 1\in \alpha'$ and, $e$ and $e'$ are the images of $\alpha$ and $\alpha'$ by $\Phi$. Let $f$ and $f'$ be the edges in $\Gamma$ which are the images of the arcs in $A\setminus (\script{D}\times(-1,1))$ intersecting the disk $D_i\times-1$ at $v\times-1$ and at $v'\times-1$, respectively. By the definition of $\sigma$, we have $\sigma(e,v)=f$ and $\sigma(e',v)=f'$. Since $v\sim_i v'$, $f$ and $f'$ belong to some $C_{h'}$, implying (iii). 
For (iv), one has only to consider a loop $\gamma$ in $A'$ that intersects more than two of $D_i\times I_{\pm1}$'s. ($\Leftarrow$) Choose any $D_i$ and $v,v'\in A\cap D_i$ such that $v\ne v'$. Let $\gamma$ and $\gamma'$ be (possibly the same) loops in $A$ intersecting $D_i$ at $v$ and $v'$, respectively. Let $e_+,e_-,e'_+$ and $e'_-$ denote the edges of $\Gamma$ corresponding to the components of $A\setminus(\script{D}\times(-1,1))$ containing $v\times1$, $v\times-1$, $v'\times 1$ and $v'\times-1$, respectively. Define $v\sim_i v'$ if $e_+$ and $e'_+$ happen to be consecutive edges of some $C_h$; by (iii), this occurs if and only if $e_-=\sigma(e_+,a_i)$ and $e_-'=\sigma(e'_+,a_i)$ are consecutive in some $C_{h'}$. By (ii), $\sim_i$ is a pairing on $A\cap D_i$. Let $A'$ be obtained from $A$ and from the pair $(\script{D},\{\sim_i\})$, by the process described in Setting~\ref{setting:ss}. Whenever $v\sim_i v'$ for some $i$, there is an arc in $A'$ joining $v\times1$ and $v'\times1$, and another arc in $A'$ joining $v\times-1$ and $v'\times-1$. $\Phi$ sends each loop in $A'$ to some $C_h$ by the definition of $\{\sim_i\}$. Each loop in $A'$ does not intersect $D_i\times I_\epsilon$ more than once for any $i$ and $\epsilon=\pm1$, by the condition (i). Condition (iv) implies that at least one loop in $A'$ intersects more than two of $D_i\times I_{\pm1}$'s. Hence, $(\script{D},\{\sim_i\})$ determines a simple surgery on $A$. \end{proof} For a planar graph $\Gamma\subseteq S^2$, a \textit{bigon neighborhood} of $\Gamma$ is $\cup_{e\in E(\Gamma)} N_e\subseteq S^2$, where \renewcommand{\labelenumi}{(\roman{enumi})} \begin{enumerate} \item each $N_e$ is a $2$--cell such that $e$ is properly embedded in $N_e$, and \item $N_e\cap N_{e'}=e\cap e'$ for $e\ne e'$. \end{enumerate} \renewcommand{\labelenumi}{(\arabic{enumi})} For example, see Fig.~\ref{fig:figbigon}. \begin{figure}[htb!] \subfigure[$\Gamma\subseteq S^2$.]{ \includegraphics[]{fig/figbigon.0} } \hspace{.9in} \subfigure[A bigon neighborhood of $\Gamma$.]{ \includegraphics[]{fig/figbigon.1} } \caption{An example of a bigon neighborhood. Note that $R_h'\subseteq R_h$. \label{fig:figbigon}} \end{figure} \begin{definition}\label{defn:wss} Consider a $1$--submanifold $A=\gamma_1\cup\cdots\cup\gamma_r\subseteq H$ where each $\gamma_i$ is a loop. If there exists a non-zero integral polynomial of the form $f(x_1,\ldots,x_r) = \sum_{i,j\ge1} c_{ij} x_i^j$ where $c_{ij}\ge0$ such that $f(A)=\coprod_{i,j\ge1}c_{ij}\gamma_i^j$ admits a simple surgery with respect to a disk structure $\script{D}$ on $H$, then we say that \textit{$A$ weakly admits a simple surgery (with respect to $\script{D}$)}. \end{definition} Proposition~\ref{prop:g implies ss} and~\ref{prop:p and ss} are the key steps for the proof of Theorem~\ref{thm:g implies p} and~\ref{thm:vg implies vp}. \begin{prop}\label{prop:g implies ss} Suppose $A$ is a geometric and diskbusting $1$--submanifold of $H$ which intersects a disk structure $\script{D}$ transversely and minimally, such that $W(A,\script{D})$ is minimal. Then $A$ weakly admits a simple surgery with respect to $\script{D}$. \end{prop} \begin{proof} By Theorem~\ref{thm:geometric}, we may assume that $A\subseteq \partial H$. Write $\script{D}=\{D_1,\ldots,D_n\}$ and $\Gamma=W(A,\script{D})$. Let $N$ be a closed regular neighborhood of $A$ in $\partial H$. $B=\partial N$ denotes the boundary of $N$ as a $2$--submanifold of $\partial H$. 
Note that $B = A+A$; that is, $B$ consists of two $1$--submanifolds each of which is freely homotopic to $A$. In particular, $B$ intersects $\script{D}$ transversely and minimally. Since $A$ is diskbusting, $\Gamma$ is connected (Theorem~\ref{thm:stallings}); in particular, $\Gamma$ does not have any isolated vertices. Hence, one can write $S^2\setminus \Gamma = \coprod_h R_h$ where each $R_h$ is an open disk. Choose a bigon neighborhood $N_1\subseteq S^2$ of $\Gamma\subseteq S^2$, so that $S^2\setminus N_1 = \coprod_h R_h'$ for some $R_h'\subseteq R_h$ (Fig.~\ref{fig:figbigon}). Write $\Gamma'=W(B,\script{D})$. Then we have an edge decomposition $\Gamma'=\cup_h \partial\overline{R_h'}\subseteq S^2$. Define a pairing $\sim_i$ on $B\cap D_i = B\cap \partial D_i$ by $v\sim_i v'$ if $v$ and $v'$ are the endpoints of some interval (denoted as $\alpha_{vv'}$) in $\partial D_i\setminus\mathrm{int}(N)$. We claim that $(\script{D},\{\sim_i\})$ determines a simple surgery on $B$. As in the proof of Proposition~\ref{prop:ss}, choose a product structure on $\script{D}\times[-1,1]$ such that $B\cap (D_i\times[-1,1])$ is identified with $(B\cap D_i)\times[-1,1]$ for each $i$. In particular, there exist homeomorphisms $D_i\times0\rightarrow D_i\times \pm1$ which are compatible with the product structure. Whenever $v\sim_i v'$, choose $\alpha_{vv'}^+\subseteq D_i\times1$ and $\alpha_{vv'}^-\subseteq D_i\times-1$ to be the image of $\alpha_{vv'}\subseteq D_i\times 0$. Let $B' = (B\setminus (\script{D}\times(-1,1)))\cup(\cup_{v\sim_i v'}\alpha_{vv'}^\pm)$. Then $\Gamma'$ is obtained from $B'$ by collapsing $\alpha_{vv'}^\pm$ onto the vertices $a_i^{\pm1}$ for each $v$ and $v'$ in $D_i$ such that $v\sim_i v'$. By this process, each loop in $B'$ collapses onto the cycle $C_h = \partial\overline{R_h'}$ for some $h$. Let us now verify the conditions (i) through (iv) of Proposition~\ref{prop:ss} with regard to the decomposition $\Gamma'=\cup_h C_h$. The condition (ii) of Proposition~\ref{prop:ss} is obvious, since $\partial\overline{R_h'}$ and $\partial\overline{R_{h'}'}$ intersect only at vertices of $\Gamma'$. Condition (iii) easily follows from the proof of ($\Rightarrow$) in Proposition~\ref{prop:ss}. By Theorem~\ref{thm:stallings}, $\Gamma$ is a connected graph without a cut vertex. By Lemma~\ref{lem:graph}, each $\partial\overline{R_h}$ is a simple cycle. This implies $C_h=\partial\overline{R_h'}\approx\partial\overline{R_h} $ is a simple cycle, and hence, we have (i). The condition (iv) fails only when each $\partial\overline{R_h'}$, and hence each $\partial\overline{R_h}$ also, is a bigon. Since each edge is shared by two regions and each region is a bigon, the number of edges in $\Gamma$ must then be the same as the number of regions in $S^2\setminus\Gamma$. Considering $S^2$ as a CW-complex having the connected graph $\Gamma$ as its $1$--skeleton, we would have $2=\chi(S^2)=|V(\Gamma)|$. This contradicts the fact that the genus of $H$ is larger than $1$. \end{proof} \begin{lem}\label{lem:graph} Let $\Gamma\subseteq S^2$ be a connected graph without a cut vertex. If $R$ is a component of $S^2\setminus\Gamma$, then $\partial\overline{R}$ is a simple closed curve. \end{lem} \begin{proof} Let $N(\Gamma)$ denote a closed regular neighborhood of $\Gamma$. Denote by $T$ the component of $S^2\setminus N(\Gamma)$ such that $T\subseteq R$. Since $\Gamma$ is connected, so is $N(\Gamma)$. Hence, $N(\Gamma)$ is a punctured sphere, and $T$ is an open disk.
By the deformation retract $N(\Gamma)\rightarrow \Gamma$, $\partial\overline{T}$ maps to $\partial\overline{R}$. So, $\partial\overline{R}$ is a closed curve and $R$ is an open disk. There exists a polygonal disk $\overline{Q}$ and a quotient map $q\colon\thinspace \overline{Q}\rightarrow\overline{R}$ such that $\mathrm{int}(q)\colon\thinspace Q\rightarrow R$ is a homeomorphism and $\partial q\colon\thinspace\partial\overline{Q}\rightarrow\partial\overline{R}$ is a graph map, where $\mathrm{int}(q)$ and $\partial q$ are restrictions of $q$. We have only to show that $\partial q$ is $1-1$. Suppose $q(x)=q(y)$ for some $x\ne y\in (\partial \overline{Q})^{(0)}$. Pick a properly embedded arc $\alpha\subseteq \overline{Q}$ joining $x$ and $y$, and write $Q\setminus\alpha = Q_1\cup Q_2$ (Fig.~\ref{fig:lem:graph} (a)). Since $q(\alpha)\approx S^1$, we can write $S^2\setminus q(\alpha)=A_1\cup A_2$ such that $A_i$ is an open disk and $q(Q_i)= R\cap A_i$. So, $q(\alpha)$ separates $q(\partial\overline{Q_1}\setminus\alpha)\setminus q(x)$ from $q(\partial \overline{Q_2}\setminus\alpha)\setminus q(x)$ (Fig.~\ref{fig:lem:graph} (b)). Since $q(\alpha)\cap \Gamma = q(\alpha)\cap\partial\overline{R} = q(x) = q(y)$, $q(x)$ separates $\Gamma$. This is a contradiction. \end{proof} \begin{figure}[htb!] \subfigure[Polygonal disk $\overline{Q}$.]{ \includegraphics[]{fig/figcutvertex.1} } \hfill \subfigure[$R\subseteq S^2$.]{ \includegraphics[]{fig/figcutvertex.2} } \caption{Proof of Lemma~\ref{lem:graph}. \label{fig:lem:graph}} \end{figure} \begin{prop}\label{prop:p and ss} Let $U\subseteq F$ be an independent, diskbusting set of root-free, cyclically reduced words. If a $1$--submanifold $A\subseteq H$ realizes $U$ with respect to a disk structure $\script{D}$, then the following are equivalent. \begin{enumerate} \item $U$ is polygonal. \item $A$ weakly admits a simple surgery with respect to $\script{D}$. \end{enumerate} \end{prop} \begin{proof} Write $U = \{w_1,\ldots,w_r\}$ and $A=\gamma_1\cup\ldots\cup\gamma_r$ so that $\gamma_i$ realizes $w_i$ with respect to $\script{D}$. Recall our convention that $A$ intersects $\script{D}$ transversely and minimally (Remark~\ref{rem:minimal}). \textbf{(1)$\Rightarrow$(2):} Suppose there exists a $U$--polygonal surface $S$ and an associated immersion $\phi\colon\thinspace S^{(1)}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ satisfying $\chi(S)<m(S)$. For $i,j\ge1$, let $c_{ij}$ denote the number of polygonal disks $P$ on $S$ such that the composition of immersions $\partial P\to S^{(1)}\stackrel{\phi}\to\ensuremath{\mathrm{Cay}}(F)/F$ reads $w_i^j$. In particular, $m(S) = \sum_{i,j} c_{ij}$. One can write $S = \coprod_i \coprod_j \coprod_{k=1}^{c_{ij}}P_{ijk}/\!\!\sim$ for some side-pairing $\sim$ such that each $P_{ijk}$ is a polygonal disk whose boundary reads $w_i^j$. Put $f(x_1,\ldots,x_r) = \sum_{i,j} c_{ij}x_i^j$ and $\Gamma = W(f(U))$. Let $\sigma$ denote the connecting map associated with $\Gamma$. By Remark~\ref{rem:connecting map} (2), there is a natural $1-1$ correspondence $\rho$ between the edges of $\Gamma$ and the corners (\textit{i.e.}, two adjacent edges) of $\coprod_{i,j,k}P_{ijk}$. For instance, a vertex $v\in\partial P_{ijk}$ at which an $a_1$-edge is incoming and an $a_2$-edge is outgoing corresponds to an edge of $\Gamma$ joining $a_1$ and $a_2^{-1}$ (Fig.~\ref{fig:prop:p and ss 1}). \begin{figure}[htb!] 
\subfigure[A corner $\rho(e)$ of $ P_{ijk}$.]{ \includegraphics[]{fig/figpolygonalandss.0} } \hspace{.45in} \subfigure[An edge $e$ of $\Gamma=W(f(A),\script{D})$.]{ \includegraphics[]{fig/figpolygonalandss.1} } \caption{The $1-1$ correspondence $\rho$. \label{fig:prop:p and ss 1}} \end{figure} Write $S^{(0)}= \{v_1,\ldots,v_t\}$. $\smallcaps{Link}(v_h)\approx S^1$ denotes the link of $v_h$ in $S$. $\rho^{-1}$ maps each $\smallcaps{Link}(v_h)$ to some cycle $C_h\subseteq\Gamma$. Since $\phi\colon\thinspace S^{(1)}\rightarrow \ensuremath{\mathrm{Cay}}(F)/F$ is locally injective, each $C_h$ is a simple cycle. This proves the condition (i) of Proposition~\ref{prop:ss}. The condition (ii) follows from the fact that $\rho$ is a $1-1$ correspondence. Suppose $e$ and $e'$ are consecutive edges in some $C_h$. $e$ and $e'$ correspond to an adjacent pair of edges (corners) in $\smallcaps{Link}(v_h)$. Without loss of generality, assume $a_q\in \partial e\cap\partial e'\subseteq V(\Gamma)$, and let $v_{h'}\in S^{(0)}$ be the other endpoint of the $a_q$-edge incoming at $v_h$ (Fig.~\ref{fig:prop:p and ss 2} (a)). $\rho(e)$ is the corner of some $P_{ijk}$ at $v_h$, and similarly, $\rho(e')$ is the corner of some $P_{i'j'k'}$ at $v_h$. From the definition of $\sigma$, one can see that the corners of $P_{ijk}$ and $P_{i'j'k'}$ at $v_{h'}$ correspond to $f=\sigma(e,a_q)$ and $f'=\sigma(e',a_q)$, respectively. $f$ and $f'$ belong to some $C_{h'}$ as consecutive edges (Fig.~\ref{fig:prop:p and ss 2} (b)). This proves the condition (iii) of Proposition~\ref{prop:ss}. \begin{figure}[htb!] \subfigure[$S=\coprod_{i,j,k} P_{ijk}/\!\!\sim$.]{ \includegraphics[]{fig/figpolygonalandsss.1} } \hspace{.45in} \subfigure[$\Gamma=W(f(A),D)$.]{ \includegraphics[]{fig/figpolygonalandsss.2} } \caption{Visualizing the connecting map $\sigma$. \label{fig:prop:p and ss 2}} \end{figure} Now we will use the assumption $\chi(S)<m(S)$ as follows. The number of vertices, edges and faces in $S=\coprod_{i,j,k} P_{ijk}/\!\!\sim$ are $t$, $\sum_{i,j} c_{ij}|w_i^j|/2$ and $\sum_{i,j}c_{ij}$, respectively. Hence, \begin{eqnarray*} \sum_{i,j}c_{ij} = m(S) &>& \chi(S) = t - \sum_{i,j} c_{ij} |w_i^j|/2 + \sum_{i,j}c_{ij},\\ 2t &<& \sum_{i,j} c_{ij} |w_i^j| = |E(\Gamma)|=\sum_{h=1}^t |E(C_h)|. \end{eqnarray*} It follows that $|E(C_h)|>2$ for some $h$; that is, the condition (iv) of Proposition~\ref{prop:ss} is satisfied. \textbf{(2)$\Rightarrow$(1):} Suppose $f(x_1,\ldots,x_r) = \sum_{i,j\ge1}c_{ij} x_i^j$ is a non-zero integral polynomial with $c_{ij}\ge0$ such that there is a simple surgery on $f(A)=f(\gamma_1,\ldots,\gamma_r)$. Write $\Gamma=W(f(A),\script{D})=\cup_{h=1}^t C_h$ so that the conditions in Proposition~\ref{prop:ss} are satisfied. For each $i,j$ and $1\le k\le c_{ij}$, take a polygonal disk $P_{ijk}$ equipped with an immersion $\partial P_{ijk}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$ reading $w_i^j$; so, each edge in $\coprod_{i,j,k} \partial P_{ijk}$ carries a label by ${\script S}=\{a_1,\ldots,a_n\}$ as well as an orientation induced from the orientations of the edges in $\ensuremath{\mathrm{Cay}}(F)/F$. There is a $1-1$ correspondence $\rho$ between the edges of $\Gamma$ and the corners of $\coprod_{i,j,k} P_{ijk}$ as in the proof of (1)$\Rightarrow$(2). Let $1\le q\le n$, and consider two $a_q$-edges $x\in E(\partial P_{ijk})$ and $x'\in \partial E(P_{i'j'k'})$. 
We declare that $x\sim x'$ if $\rho^{-1}$ sends the corners at the terminal vertices of $x$ and $x'$ to consecutive edges $e$ and $e'$ in some $C_h$ (Fig.~\ref{fig:prop:p and ss 2} (a)). By the condition (ii) of Proposition~\ref{prop:ss}, $e\in E(\Gamma)$ and $a_q\in \partial e$ uniquely determine $e'\in E(\Gamma)$ such that $e$ and $e'$ are consecutive at $a_q$ in some $C_h$. By the condition (iii), the other corners of $x$ and $x'$ correspond to consecutive edges $f=\sigma(e,a_q)$ and $f'=\sigma(e',a_q)$ in some $C_{h'}$. Hence, $\sim$ defines a side-pairing on $\coprod_{i,j,k} P_{ijk}$ that matches the labels and the orientations of the edges; moreover, $\rho$ determines a $1-1$ correspondence between the links of the vertices in the closed surface $S=\coprod_{i,j,k}P_{ijk}/\!\!\sim$ and the cycles $C_1,C_2,\ldots,C_t$. Since each $C_h$ is simple, there is an immersion $S^{(1)}\rightarrow \ensuremath{\mathrm{Cay}}(F)/F$ induced by the given immersion $\coprod_{i,j,k}\partial P_{ijk}\rightarrow\ensuremath{\mathrm{Cay}}(F)/F$. Note that $m(S)=\sum_{i,j}c_{ij}$. As in the proof of (1)$\Rightarrow$(2), \[\chi(S)= t - |E(\Gamma)|/2 + m(S) = t - \sum_{h=1}^t |E(C_h)|/2 + m(S).\] By the condition (iv) of Proposition~\ref{prop:ss}, $|E(C_h)|>2$ for some $h$. Therefore, $\chi(S)<m(S)$. \end{proof} \begin{theorem}\label{thm:g implies p} Suppose $U\subseteq F$ is an independent, diskbusting set of root-free, cyclically reduced words. If $U$ is geometric, then $U$ is equivalent to a polygonal set of words in $F$. \end{theorem} \begin{proof} We may assume that $U$ is minimal, by applying an automorphism of $F$ if necessary. Let $U=\{w_1,\ldots,w_r\}$ be realized by $A\subseteq H$ with respect to a disk structure $\script{D}$. By Theorem~\ref{thm:geometric}, we can choose $A$ so that $A\subseteq \partial H$ and $W(A,\script{D})$ is minimal. Proposition~\ref{prop:g implies ss} implies that $A$ weakly admits a simple surgery with respect to $\script{D}$. Hence, $U$ is polygonal by Proposition~\ref{prop:p and ss}. \end{proof} Now we consider virtually polygonal words (Definition~\ref{defn:vp}). Note that polygonal words are virtually polygonal. The converse is not true. Actually, there exists a non-polygonal word which becomes polygonal only after an automorphism of $F$ is applied. An example given in~\cite{Kim:2009p3867} is $w=abab^2ab^3\in F_2=\langle a,b\rangle$. While an elementary argument shows that $w$ is not polygonal, the automorphism $\phi$ defined by $\phi(a)=ab^{-2}$ and $\phi(b)=b$ maps $w$ to a Baumslag--Solitar relator $w'= a(a^2)^b$. Any Baumslag--Solitar relator $a^p(a^q)^b$ is polygonal for $pq\ne0$~\cite{Kim:2009p3867}. \begin{definition}\label{defn:virtual ss} A $1$--submanifold $A\subseteq H$ \textit{virtually admits a simple surgery} if there exists a finite cover $p:H'\rightarrow H$ such that $p^{-1}(A)\subseteq H'$ weakly admits a simple surgery. \end{definition} \begin{prop}\label{prop:virtual p and ss} Let $w\in F$ be a root-free and diskbusting word realized by a loop $\gamma\subseteq H$. Then $w$ is virtually polygonal if and only if $\gamma$ virtually admits a simple surgery. \end{prop} \begin{proof} Let $F'$ be a finite-index subgroup of $F$ and ${\script S}'$ be a free basis for $F'$. Denote by $\ensuremath{\hat{w}}_{F'}$ a transversal for $\ensuremath{\underline{w}}_{F'}$, so that $\ensuremath{\hat{w}}_{F'}$ is an independent set of cyclically reduced words written in ${\script S}'$.
$\ensuremath{\hat{w}}_{F'}$ is realized by $p^{-1}(\gamma)$ with respect to some disk structure $\script{D}'$ on $H'$, where $p\colon\thinspace H'\rightarrow H$ denotes the finite cover of $H$ corresponding to $F'$. By Proposition~\ref{prop:p and ss}, $\ensuremath{\hat{w}}_{F'}$ is polygonal with respect to ${\script S}'$ if and only if $p^{-1}(\gamma)$ weakly admits a simple surgery with respect to $\script{D}'$. \end{proof} \begin{theorem}\label{thm:vg implies vp} A diskbusting and virtually geometric word in $F$ is virtually polygonal. \end{theorem} \begin{proof} Let $w$ be diskbusting and virtually geometric. We may assume $w$ is root-free and cyclically reduced. Let $\gamma\subseteq H$ realize $w$ with respect to a given disk structure $\script{D}$. Suppose $p:H'\rightarrow H$ is a finite cover such that $p^{-1}(\gamma)$ is freely homotopic to $A\subseteq\partial H'$. It is elementary to see that $A$ is also diskbusting in $H'$. By Proposition~\ref{prop:g implies ss}, $A$ weakly admits a simple surgery with respect to some disk structure $\script{D}'$ on $H'$. By Proposition~\ref{prop:virtual p and ss}, $w$ is virtually polygonal. \end{proof} Theorem~\ref{thm:vp} underlines the importance of virtual polygonality. \begin{theorem}\label{thm:vp} If a root-free word $w\in F$ is virtually polygonal, then $D(w)$ contains a hyperbolic surface group. \end{theorem} \begin{proof} We may assume $w$ is cyclically reduced by applying an automorphism of $F=\langle {\script S}\rangle$ if necessary. Let $\gamma_w\subseteq\ensuremath{\mathrm{Cay}}(F)/F$ read $w$. Let $F'$ be a finite-index subgroup of $F$ such that a transversal $\ensuremath{\hat{w}}_{F'}$ for $\ensuremath{\underline{w}}_{F'}$ is polygonal with respect to some free basis ${\script S}'$ for $F'$. Recall that $X_{{\script S}'}(\ensuremath{\hat{w}}_{F'})$ denotes the 2--dimensional CW-complex obtained by taking two copies of $\ensuremath{\mathrm{Cay}}_{{\script S}'}(F')/F'$ and gluing cylinders along the loops reading the words in $\ensuremath{\hat{w}}_{F'}$, considered as words written in ${\script S}'$. By Theorem~\ref{thm:polygonal}, $D(\ensuremath{\hat{w}}_{F'})=\pi_1(X_{{\script S}'}(\ensuremath{\hat{w}}_{F'}))$ contains a hyperbolic surface group. As was introduced in Section~\ref{sec:preliminary}, $Y(w,F')$ denotes the finite cover of $X_{\script S}(w)$ obtained by taking two copies of $\ensuremath{\mathrm{Cay}}(F)/F'$ and gluing cylinders along the copies of the elevations of $\gamma_w$. Since the homotopy equivalence $\ensuremath{\mathrm{Cay}}(F)/F'\rightarrow \ensuremath{\mathrm{Cay}}_{{\script S}'}(F')/F'$ maps each elevation of $\gamma_w$ to a loop realizing an element in $\ensuremath{\hat{w}}_{F'}$, we have a homotopy equivalence $Y(w,F')\simeq X_{{\script S}'}(\ensuremath{\hat{w}}_{F'})$. Hence $D(w)\ge \pi_1(Y(w,F')) = D(\ensuremath{\hat{w}}_{F'})$ and so $D(w)$ contains a hyperbolic surface group. \end{proof} \begin{cor}[Gordon--Wilton~\cite{Gordon:2009p360}]\label{cor:vp} If $w\in F$ is root-free, virtually geometric and diskbusting, then $D(w)$ contains a hyperbolic surface group.\qed \end{cor} The converses of Theorems~\ref{thm:g implies p} and~\ref{thm:vg implies vp} do not hold. Actually, we will prove the following theorem in Section~\ref{sec:nvg}. \begin{theorem}\label{thm:p but not vg} There exist polygonal words which are not virtually geometric. \end{theorem} Virtual geometricity does not imply geometricity, in general. For example, $w=a^2b^{-1}ab\in F_2=\langle a,b\rangle$ is virtually geometric, but not geometric~\cite{Gordon:2009p360}.
It is not known whether virtual polygonality is strictly weaker than polygonality up to $\smallcaps{Aut}(F)$. \begin{que}\label{que:p and vp} Is a virtually polygonal word equivalent to a polygonal word? \end{que} \begin{tilingconjecture}[\cite{Kim:2009p3867}]\label{conj:tiling} A minimal diskbusting word in $F$ is polygonal. \end{tilingconjecture} While the Tiling Conjecture has not been resolved yet, we propose a weaker conjecture. \begin{vtilingconjecture}\label{conj:vtiling} A diskbusting word in $F$ is virtually polygonal. \end{vtilingconjecture} By Theorem~\ref{thm:vp}, the Virtual Tiling Conjecture would suffice to settle Gromov's conjecture for $D(w)$. \begin{rem}\label{rem:summary} Let $w\in F$ be a diskbusting and root-free word realized by $\gamma\subseteq H$. Consider the following hypotheses on $w$ and $\gamma$: \begin{itemize} \item * : No further hypothesis on $w$. \item (V)Geom : $w$ is (virtually) geometric. \item WSS : $\gamma$ weakly admits a simple surgery. \item VSS : $\gamma$ virtually admits a simple surgery. \item EPoly: $w$ is equivalent to a polygonal word. \item VPoly : $w$ is virtually polygonal. \item DSurf : $D(w)$ contains a hyperbolic surface group. \end{itemize} Note that none of the above hypotheses depends on the choice of a free basis for $F$, or equivalently, on the choice of a disk structure on $H$. We can summarize the content of this section as the following diagram: \[ \xymatrix{ \mathrm{Geom}\ar@{=>}@<1ex>[rr]\ar@{=>}@<-1ex>[d] &&\mathrm{WSS}\ar@{<=>}[r]\ar@{=>}@<-1ex>[d]\ar@{=>}@<1ex>[ll]|\times &\mathrm{EPoly}\ar@{=>}@<-1ex>[d] &&\mathrm{\ast}\ar@{==>}[ll]_{(B)}\ar@{==>}[d]^{(D)}\ar@{==>}[lld]_{(C)}\\ \mathrm{VGeom}\ar@{=>}@<1ex>[rr]\ar@{=>}@<-1ex>[u]|\times &&\mathrm{VSS}\ar@{<=>}[r]\ar@{==>}@<-1ex>[u]_{(A)}\ar@{=>}@<1ex>[ll]|\times &\mathrm{VPoly}\ar@{==>}@<-1ex>[u]_{(A)}\ar@{=>}@<-1ex>[rr] &&\mathrm{DSurf} } \] where each dashed arrow is a question or conjecture unanswered in this paper, and each arrow with $\times$ in the middle indicates a false implication. Question~\ref{que:p and vp}, the \hyperref[conj:tiling]{Tiling Conjecture} and the \hyperref[conj:vtiling]{Virtual Tiling Conjecture} are equivalent to (A), (B) and (C), respectively. (D) is equivalent to Gromov's conjecture for the groups of the form $D(w)$. \end{rem} \section{Proof of Theorem~\ref{thm:p but not vg}}\label{sec:nvg} Let us set $F_3=\langle a,b,c\rangle$ and $F_4=\langle a,b,c,d\rangle$. Throughout this section, put $w_1=bbaaccabc\in F_3$ and $w_2 = aabbacbccadbdcdd\in F_4$. A graph $\Gamma$ is \textit{$k$--valent} if the valence of each vertex is $k$. $\Gamma$ is \textit{$k$--edge-connected} if one cannot disconnect $\Gamma$ by removing the interior of $k-1$ or fewer edges. In~\cite{Manning:2009p3177}, Manning proved (a stronger version of) the following theorem. \begin{theorem}[\cite{Manning:2009p3177}]\label{thm:manning} Let $w\in F$ be a cyclically reduced word such that for some $k\ge 3$, $W(w)$ is a $k$--valent, $k$--edge-connected and non-planar graph. Then $w$ is not virtually geometric.\qed \end{theorem} An example of a graph satisfying the hypothesis in Theorem~\ref{thm:manning} is the complete bipartite graph $K_{k,k}$ for $k\ge3$. Manning noted that $W(w_1)$ is $K_{3,3}$, and hence, $w_1$ is not virtually geometric~\cite{Manning:2009p3177}. Since $W(w_2)$ is $K_{4,4}$, $w_2$ is not virtually geometric, either. On the other hand, \begin{prop} \begin{enumerate} \item $w_1=bbaaccabc$ is polygonal. \item $w_2 = aabbacbccadbdcdd$ is polygonal.
\end{enumerate} \label{prop:w1w2} \end{prop} \begin{proof} (1) Let $P_1$ be a polygonal disk whose boundary reads $w_1^2$. Name the edges of $\partial P_1$ as $1,2,\ldots, 18$ so that the edge named $i$ corresponds to the $i$--th letter of $w_1^2$. Define a side-pairing $\sim_1$ on $P_1$ as shown in Fig.~\ref{fig:w1} (a). That is to say, the edges of $\partial P_1$ are paired by $\sim_1$ as \[ \{1,2\}, \{3,7\}, \{4,16\}, \{5,15\}, \{6,18\}, \{8,11\}, \{9,14\}, \{10,17\}, \{12,13\}. \] A neighborhood of each vertex in $S_1=P_1/\!\!\sim_1$ is illustrated in Fig.~\ref{fig:w1} (b). One sees that no two incoming edges (or two outgoing edges) of the same label exist at any vertex. Therefore, $S_1$ is a $w_1$--polygonal surface. Since $\chi(S_1)=|S_1^{(0)}|-|S_1^{(1)}|+|S_1^{(2)}|= 4 - 18/2 + 1= -4 < 1$, we conclude that $w_1$ is polygonal. (2) The proof that $w_2$ is polygonal is almost identical to (1) by considering a side-pairing $\sim_2$ on $P_2$, where $P_2$ is a polygonal disk whose boundary reads $w_2$; see Fig.~\ref{fig:w2} (a). Specifically, $\sim_2$ identifies the edges of $\partial P_2$ as \[ \{1,2\}, \{3,12\}, \{4,7\}, \{5,10\}, \{6,14\}, \{8,9\}, \{11,16\}, \{13,15\}. \] One again sees that $S_2=P_2/\!\!\sim_2$ is a $w_2$--polygonal surface, from the description of the links in Fig.~\ref{fig:w2} (b). Since $\chi(S_2)=4 - 16/2 + 1= -3 < 1$, $w_2$ is polygonal. \end{proof} \begin{rem}\label{rem:prop:w1w2} (1) Let $\Gamma$ be a graph immersed in $\ensuremath{\mathrm{Cay}}(F)/F$. Then $\Gamma$ embeds into $\ensuremath{\mathrm{Cay}}(F)/F'$ for some $[F:F']<\infty$ such that $|(\ensuremath{\mathrm{Cay}}(F)/F')^{(0)}|=|\Gamma^{(0)}|$. In particular, the degree of the cover $\ensuremath{\mathrm{Cay}}(F)/F'\to\ensuremath{\mathrm{Cay}}(F)/F$ can be chosen to be $|\Gamma^{(0)}|$; see~\cite{Stallings:1983p596}. By the proof of Theorem~\ref{thm:polygonal}, we observe that if $w\in F$ has a closed $w$--polygonal surface $S$, then $X(w)$ has a finite cover of degree $|S^{(0)}|$ containing a closed surface $S'$ such that $\chi(S') =2 (\chi(S)-m(S))$. From the proof of Proposition~\ref{prop:w1w2} and Fig.~\ref{fig:w1}, we see that $X(w_1)$ has a finite cover of degree $4$ that contains a closed surface of Euler characteristic $2(\chi(S_1)-1)=-10$. Similarly, a degree--$4$ cover of $X(w_2)$ contains a closed surface of Euler characteristic $2(\chi(S_2)-1)=-8$. (2) Let $\Gamma_1=W(w_1^2)$ and $\Gamma_2=W(w_2)$. $S_1$ has one vertex of valence six and three vertices of valence four. By the proofs of Propositions~\ref{prop:ss} and~\ref{prop:p and ss}, $\Gamma_1$ can be written as the union of one simple cycle of length six and three simple cycles of length four. Also, $\Gamma_2$ is the union of four simple cycles of length four. \end{rem} \begin{figure}[htb!] \subfigure[A side-pairing $\sim_1$ on $P_1$.]{ \includegraphics[]{fig/figw1.0} } \subfigure[Vertices in $S_1=P_1/\!\!\sim_1$.]{ \includegraphics[]{fig/figw1.1} } \caption{Proposition~\ref{prop:w1w2} (1). Single, double and triple arrows denote $a$-, $b$- and $c$-edges, respectively. \label{fig:w1}} \end{figure} \begin{figure}[htb!] \subfigure[A side-pairing $\sim_2$ on $P_2$.]{ \includegraphics[]{fig/figw2.0} } \subfigure[Vertices in $S_2=P_2/\!\!\sim_2$.]{ \includegraphics[]{fig/figw2.1} } \caption{Proposition~\ref{prop:w1w2} (2). Single, double, triple, and white arrows denote $a$-, $b$-, $c$- and $d$-edges, respectively. \label{fig:w2}} \end{figure} \textbf{Acknowledgement.} The author thanks Cameron Gordon for inspirational conversations.
The author is grateful to Henry Wilton for helpful discussion on Section 2.1, and to Alan Reid for his guidance through this work. \bibliographystyle{plain}
\section{Introduction} Recently, many new resonances have been discovered experimentally. Many of them have the proper quantum numbers of the $q\bar{q}$ meson states. However, their mass values do not fit the conventional $q\bar{q}$ states in the quark model. Among them, $X(3872)$ was first observed in the $J/\psi\pi^+\pi^-$ channel by the Belle collaboration in 2003 \cite{Choi:2003ue}, and has been confirmed by the CDF \cite{Acosta:2003zx}, D0 \cite{Abazov:2004kp} and Babar \cite{Aubert:2004ns} collaborations. Its quantum numbers are probably $J^{PC}=1^{++}$. The corresponding charmonium candidates in the quark model are $2^3P_1(3990)$ and $3^3P_1(4290)$, which lie $50\sim 200$ MeV above $M_X=3872$ MeV. Many authors suggested that $X(3872)$ is mainly a $D\bar{D}^*$ molecular state \cite{Close:2003sg, Voloshin:2003nt, Swanson:2003tb, Wong:2003xk, Tornqvist:2004qy}. However, to bind the quarks and anti-quarks together in such a four quark state or other multi-quark states, we need to introduce a new interaction into the quark model. Swanson proposed that the $X(3872)$ is mainly a $D\bar{D}^*$ molecule bound by the meson-meson interaction derived from the one pion exchange and the quark exchange \cite{Swanson:2003tb}. In Wong's work \cite{Wong:2003xk}, the meson-meson interaction is derived from a QED-type effective interaction in terms of effective charges for quarks and antiquarks. In refs.~\cite{AlFiky:2005jd, Fleming:2007rp, Braaten:2007ct, Hanhart:2007yq, Voloshin:2007hh, Colangelo:2007ph}, further investigations based on the molecule assumption were carried out. Since the one pion exchange interaction does not seem strong enough to bind the $D\bar{D}^*$ molecular state, other authors argued that $X(3872)$ may be a dominant $c\bar{c}$ charmonium with some admixture of $D\bar{D}^*$ \cite{Suzuki:2005ha, Meng:2005er, Zhu:2007wz}. In ref.~\cite{Liu:2008fh}, after taking into account the sigma meson exchange potential, the interpretation of $X(3872)$ as a loosely bound molecular state was further disfavored. We should notice that the color structure of a multi-quark state is much richer than that of a conventional $q\bar{q}$ meson state. Unlike the conventional mesons or baryons, the $q\bar{q}$ and $qq$ pairs in a multi-quark state can be in the color $8_c$ and $6_c$ representations respectively. Some color interactions, which have no effect in the colorless $q\bar{q}$ or $q^3$ systems, may contribute significantly in a multi-quark system. So the complete interactions in the quark model can be quite different once we take these multi-quark states into account. The $s$-channel one gluon exchange interaction is an interaction between a quark and an anti-quark of the same flavor which annihilate into a virtual gluon. It has no effect on the conventional $q\bar{q}$ mesons but acts in hidden flavor multi-quark systems such as the charmonium-like molecular states. In this work, we will investigate the hidden flavor molecular states by considering the $s$-channel gluon exchange interaction in the quark model. In the next section, we will model the potential of the $s$-channel one gluon exchange interaction starting from the non-relativistic reduction. Then the $J^{PC}$ quantum numbers of the molecular states are selected by an analysis of the spin dependence of the interaction strength. In sec.~\ref{sect-3}, we will carry out the numerical calculation of $X(3872)$ and some other charmonium-like states as molecular states.
Also we will make predictions about similar bottomonium-like molecular states. Finally, we will give a brief summary.

\section{The Potential of the $S$-Channel One Gluon Exchange Interaction}
In our work, we use the Bhaduri quark model, which is a rather simple non-relativistic quark potential model. In the Bhaduri model, the Hamiltonian can be written as \cite{Bhaduri:1981pn}
\begin{equation} \label{eq:1} H=\sum_i (m_{i}+\frac{\bm{P}_i^2}{2m_i}) -\frac{3}{4}\sum_{i<j} \left(\bm{F}_i\cdot\bm{F}_jV_{ij}^{C} +\bm{F}_i\cdot\bm{F}_j\bm{S}_i\cdot\bm{S}_jV_{ij}^{SS}\right). \end{equation}
Here $m_i$ are the constituent quark masses and $\bm{F}_i^c=\frac12\bm{\lambda}_i^c \ (c=1,...,8)$, where $\bm{\lambda}_i^c$ are the well-known $SU_c(3)$ Gell-Mann matrices. Apart from a constant, the central potential here is the usual one gluon exchange Coulomb potential plus a linear confinement term:
\begin{equation} V_{ij}^{C}=-\frac{\kappa}{r_{ij}}+\frac{r_{ij}}{a_{0}^{2}}-M_0 . \end{equation}
$r_{ij}=|\bm{r}_i-\bm{r}_j|$ is the distance between quarks $i$ and $j$. The color-magnetic interaction reads
\begin{equation} V_{ij}^{SS}=\frac{4\kappa}{m_im_j}\frac{1}{r_0^2r_{ij}}e^{-r_{ij}/r_0} , \end{equation}
where the $\delta$-interaction has been smeared smoothly with the prescription
\begin{equation} \label{smear-ansatz} \delta^3(\bm{r}) \to \frac{1}{4\pi r_0^2r_{ij}}e^{-r_{ij}/r_0}. \end{equation}
The model parameter values are
\begin{align*} \kappa&=102.67\;\text{MeV\,fm},& a_0&=0.0326\,(\text{MeV}^{-1}\,\text{fm})^{\frac12}, \\ M_0&=913.5\;\text{MeV},& r_0&=0.4545\;\text{fm}, \\ m_u&=m_d=337\;\text{MeV},& m_s&=600\;\text{MeV},\\ m_c&=1870\;\text{MeV},& m_b&=5259\;\text{MeV} . \end{align*}
\begin{figure} \caption{\label{fig-1}%
The $s$-channel one gluon exchange.} \[ \includegraphics{ann_g.eps} \] \end{figure}

The $s$-channel one gluon exchange interaction (SOGE) takes place when a $q\bar{q}$ pair annihilates into a virtual gluon (Fig.~\ref{fig-1}). The non-relativistic reduction of the potential is:
\begin{equation} V^{\text{SOGE}} (\bm{r}_{ij}) = - \frac12 \left( \frac{4}{3} +\bm{F}_q \cdot \bm{F}_{\bar{q}} \right) \left( 1 + \frac{4}{3} \bm{S}_q \cdot \bm{S}_{\bar{q}} \right) G (4 m^2) \delta^3 (\bm{r}_{ij}) \delta(f_i,f_j), \end{equation}
where $G(4m^2)$ is the one gluon exchange amplitude. The factor $\delta(f_i,f_j)$ indicates that the quark and the anti-quark must have the same flavor. Clearly, the two factors in brackets mean that this interaction occurs only when the $q\bar{q}$ pair is in the color octet and has spin $S=1$. In our model calculation, the $\delta$-interaction should be smeared smoothly with the same prescription (\ref{smear-ansatz}) used for the color-magnetic interaction of the same quark model. However, the amplitude $G(4m^2)$ is evaluated in the timelike region, where the QCD behavior is still not well understood at present. In the spacelike region, this amplitude is well known from the gluon propagator in perturbative QCD and reads
\begin{equation} G^t_{\text{pert}}(q^2) = \frac{4\pi\alpha_s}{-q^2} \qquad \text{for } q^2 <0 . \end{equation}
However, in our study of the hidden flavor molecular states, in order to provide an attractive interaction favoring the formation of bound states, we need
\[ G(4m^2) >0, \]
which means the above formula from perturbative QCD should not be used directly in the timelike region. This change of sign was first suggested in ref.~\cite{Li:1994ys} in the study of $\pi\pi$ and $K\pi$ scattering.
Following ref.~\cite{Li:1994ys}, we assume that
\begin{equation} G(4m^2) = -f G^t_{\text{pert}}(4m^2) = f \frac{\pi\alpha_s^2}{m^2}, \end{equation}
where $f$ is an adjustable strength factor. In our model, after the $\delta$-function is smeared smoothly, the SOGE potential becomes
\begin{equation} V_{ij}^{\text{SOGE}}= - \frac{f}4 \left( 1 +\frac34 \bm{F}_q \cdot \bm{F}_{\bar{q}} \right) \left( \frac34 + \bm{S}_q \cdot \bm{S}_{\bar{q}} \right) \frac{4\kappa}{m_i^2}\frac{1}{r_0^2r_{ij}}e^{-r_{ij}/r_0} \delta(f_i,f_j). \end{equation}
First we observe that the potential $V^{\text{SOGE}}$ is proportional to $m_i^{-2}$, so the interaction mainly takes place between the light $q\bar{q}$ pairs with $q=u,d,s$ in any multi-quark system. Next, we analyse the spin dependence of $V^{\text{SOGE}}$ in each $J^{PC}$ channel of the molecular states. For the molecular states of interest here, let us assume that their flavor structure is $Q\bar{q}q\bar{Q}$, where $Q=c,b$ and $q=u,d,s$. Since we can neglect the $V^{\text{SOGE}}$ interaction between $Q\bar{Q}$, the interaction acts only when the $q\bar{q}$ pair is in the color octet with $S=1$, as mentioned before. More specifically, the interaction is switched on if the spin coupling of the four quarks is
\[ [(Q\bar{Q})_{J_1} (q\bar{q})_{J_2=1}]_J. \]
The $Q\bar{Q}$ and $q\bar{q}$ pairs must then obviously be in color octets. However, the color (re)coupling is irrelevant to our analysis here and will not be presented explicitly. Furthermore, we assume no spatial excitation of any quark, so all quarks have zero orbital angular momentum. Then the quantum numbers $J^{PC}$ can be determined easily for $J_1=0,1$. We have the following four $V^{\text{SOGE}}$ interaction channels:
\[ J_1=0: 1^{+-}; \quad J_1=1: 0^{++}, 1^{++}, 2^{++}. \]
Finally, we can recover the molecular states by angular momentum recoupling. We obtain
\begin{itemize}
\item $0^{++}$ \[ [(Q\bar{Q})_1(q\bar{q})_1]_0 = -\frac{\sqrt3}2 (Q\bar{q})_0(q\bar{Q})_0 -\frac12 [(Q\bar{q})_1(q\bar{Q})_1]_0, \]
\item $1^{++}$ \[ [(Q\bar{Q})_1(q\bar{q})_1]_1 =-\frac1{\sqrt2} (Q\bar{q})_1(q\bar{Q})_0 + \frac1{\sqrt2} (Q\bar{q})_0(q\bar{Q})_1, \]
\item $1^{+-}$ \[ (Q\bar{Q})_0(q\bar{q})_1 = \frac12 (Q\bar{q})_1(q\bar{Q})_0 + \frac12 (Q\bar{q})_0(q\bar{Q})_1 + \frac1{\sqrt2} [(Q\bar{q})_1(q\bar{Q})_1]_1, \]
\item $2^{++}$ \[ [(Q\bar{Q})_1(q\bar{q})_1]_2 = [(Q\bar{q})_1(q\bar{Q})_1]_2. \]
\end{itemize}
Then the interaction strength factors can be read off from these coefficients; we list them in table~\ref{table1}. We see that the $V^{\text{SOGE}}$ interaction favors the formation of molecular states with $J^{PC}=1^{++},2^{++}$.
\begin{table} \caption{\label{table1}%
Spin factors of the $V^{\text{SOGE}}$ interaction strength in a molecular state with quantum numbers $J^{PC}$ made of two meson states $J_1^{P_1}$ and $J_2^{P_2}$. Here $c.c.$ denotes charge conjugation.}
\begin{ruledtabular} \begin{tabular}{cc|c} $J_1^{P_1}J_2^{P_2}$ & $J^{PC}$ & Spin factor \\\hline $0^{-}0^{-}$ & $0^{++}$ & $\frac34$ \\ $1^{-}1^{-}$ & $0^{++}$ & $\frac14$ \\ $1^{-}0^{-} + c.c.$ & $1^{++}$ & $1$ \\ $1^{-}0^{-} - c.c.$ & $1^{+-}$ & $\frac12$ \\ $1^{-}1^{-}$ & $1^{+-}$ & $\frac12$ \\ $1^{-}1^{-}$ & $2^{++}$ & $1$ \\ \end{tabular} \end{ruledtabular} \end{table}

\section{Numerical Calculation} \label{sect-3}
To calculate the molecular states, we use the Rayleigh-Ritz variational principle. The test wave function is taken to be a series of Gaussian basis functions.
The Gaussian basis functions are often utilized in variational calculations of atomic and molecular problems. Recently the method has also been used for few-body systems in nuclear and particle physics \cite{Kameyama:1989zz, Varga:1996zz, Brink:1998as}. In our case of the $Q\bar{q}q\bar{Q}$ molecular state, the test wave function of a molecular state formed from two meson clusters is the series
\begin{equation} \label{eq-var-4} \psi_{1234}(r_{12},r_{34},r_{1234}) = \sum_{i} \alpha_{1234}^i \psi_{12}(r_{12})\psi_{34}(r_{34}) \exp(-\beta_{1234}^i r_{1234}^2), \end{equation}
where $\bm{r}_1$, $\bm{r}_2$, $\bm{r}_3$ and $\bm{r}_4$ are the coordinates of $Q$, $\bar{q}$, $q$ and $\bar{Q}$, respectively, and $\bm{r}_{ij} = \bm{r}_i -\bm{r}_j$. $r_{1234}$ is the distance between the two meson clusters,
\begin{equation} \bm{r}_{1234} = \frac{m_Q\bm{r}_1+m_q\bm{r}_2}{m_Q+m_q} -\frac{m_q\bm{r}_3+m_Q\bm{r}_4}{m_q+m_Q}. \end{equation}
$\psi_{ij}(r_{ij})$ is the meson wave function, which is also taken to be a Gaussian function series
\begin{equation} \label{eq-var-2} \psi_{ij}(r_{ij}) = \sum_k \alpha_{ij}^k \exp(-\beta_{ij}^k r_{ij}^2) . \end{equation}
The wave function of a molecular state is determined by the variational principle in two steps. We first determine the wave function (\ref{eq-var-2}) of each meson cluster. Then the meson cluster functions $\psi_{ij}$ are fixed in (\ref{eq-var-4}) to obtain the wave function of the molecular state and its mass. To reduce the amount of computation, the parameters $\beta^i$ and $\alpha^i$ in a Gaussian function series are determined in two steps by one-dimensional minimization. We first determine an average $\beta$ value using a single Gaussian function. Then a set $\{\beta^i\}$ of $2N+1$ elements is generated by scaling the $\beta$ value up and down with a scale factor $s$ \cite{Brink:1998as}:
\begin{equation} \beta^i = \beta s^{i-N} \end{equation}
where $i=0,1,...,2N$. The coefficients $\alpha^i$ are determined by diagonalizing the model Hamiltonian in the $(2N+1)$-dimensional space spanned by these Gaussian functions. The final values of $\beta^i$ and $\alpha^i$ and the mass of the tetraquark state are determined by scanning the scale factor $s$ for the minimum of the system energy (an illustrative numerical sketch of this scaled-Gaussian procedure is given after the tables below). In this way, we have calculated the possible $0^{++}$ (the combination of two $0^-$ mesons with the spin factor $\frac34$), $1^{++}$ and $2^{++}$ molecular states of table~\ref{table1}. We have calculated both the charmonium-like states $c\bar{q}q\bar{c}$, which will be compared with recent experimental results, and the bottomonium-like states $b\bar{q}q\bar{b}$, which can be searched for in future experiments. The results for the relevant heavy quark $Q\bar{q}$ mesons are listed in table~\ref{table-2}. We see that the mass values calculated from the Bhaduri quark model deviate by at most $30$ MeV from the experimental data. The difference between the calculation with the variational method using a Gaussian function series (cal.~II) and the exact numerical calculation (cal.~I) is less than $0.5$ MeV. So the variational method is an impressively good approximation for the numerical calculation of the conventional $Q\bar{q}$ mesons.
\begin{table} \caption{\label{table-2}%
Mass of $Q\bar{q}$ mesons. The experimental values are taken from the PDG \cite{Amsler:2008zzb}. In calculation I, the mass is obtained by solving the Schr\"odinger equation. In calculation II, the mass is obtained by the variational method with Gaussian test functions using $N=3$.
}
\begin{ruledtabular} \begin{tabular}{c|ddd} meson state & \multicolumn{1}{c}{exp. (MeV)} & \multicolumn{1}{c}{cal. I (MeV)} & \multicolumn{1}{c}{cal. II (MeV)}\\\hline
$D^{\pm}$ & 1869.62 & 1885.56 & 1886.14 \\
$D^0$ & 1864.84 & &\\
$D^*(2007)^0$ & 2006.97 & 2019.96 & 2020.07 \\
$D^*(2010)^\pm$ & 2010.27 & &\\\hline
$D_s^\pm$ & 1968.49 & 1995.78 & 1996.36 \\
$D_s^{*\pm}$ & 2112.3 & 2101.2 & 2101.4 \\\hline
$B^\pm$ & 5279.15 & 5300.84 & 5301.18\\
$B^0$ & 5279.53 & &\\
$B^*$ & 5325.1 & 5350.3 & 5350.5 \\\hline
$B_s^0$ & 5366.3 & 5371.9 & 5372.4 \\
$B_s^*$ & 5412.8 & 5413.3 & 5413.6
\end{tabular} \end{ruledtabular} \end{table}
The results for the molecular states are given in tables~\ref{table-3} and \ref{table-4}. Here a molecular state is characterized by its binding energy,
\begin{equation} E_b = M_1 + M_2 - M_X, \end{equation}
and its rms radius $\langle r^2 \rangle^{1/2}$, where $M_1$ and $M_2$ are the masses of the two constituent mesons and $M_X$ is the mass of the molecular state.
\begin{table} \caption{\label{table-3}%
Binding energy $E_b$ (in MeV) of molecular states versus the parameter $f$. }
\begin{ruledtabular} \begin{tabular}{c|ddddd} $f$ & -0.8 & -1.0 & -1.5 & -2.0 & -3.0 \\\hline
$(DD)0^{++}$ & - & - & 11.8 & 28.1 & 71.7 \\
$(DD^*)1^{++}$ & 1.8 & 6.2 & 24.2 & 48.7 & 109.3 \\
$(D^*D^*)2^{++}$ & - & 5.4 & 21.4 & 43.5 & 98.1 \\\hline
$(D_sD_s)0^{++}$ & - & - & - & - & 5.3 \\
$(D_sD^*_s)1^{++}$ & - & - & - & - & 15.4 \\
$(D^*_sD^*_s)2^{++}$ & - & - & - & - & 14.0 \\\hline
$(BB)0^{++}$ & 8.8 & 15.3 & 35.2 & 58.5 & 110.9 \\
$(BB^*)1^{++}$ & 16.8 & 26.8 & 55.8 & 88.5 & 160.4 \\
$(B^*B^*)2^{++}$ & 16.0 & 25.5 & 53.4 & 84.7 & 153.6 \\\hline
$(B_sB_s)0^{++}$ & - & - & 3.4 & 9.3 & 25.7 \\
$(B_sB^*_s)1^{++}$ & - & 1.9 & 8.9 & 18.9 & 44.4 \\
$(B^*_sB^*_s)2^{++}$ & - & 1.8 & 8.5 & 18.2 & 42.9
\end{tabular} \end{ruledtabular} \end{table}
\begin{table} \caption{\label{table-4}%
rms radius $\langle r^2 \rangle^{1/2}$ (in fm) of molecular states versus the parameter $f$. }
\begin{ruledtabular} \begin{tabular}{c|ddddd} $f$ & -0.8 & -1.0 & -1.5 & -2.0 & -3.0 \\\hline
$(DD)0^{++}$ & - & - & 1.33 & 1.00 & 0.76 \\
$(DD^*)1^{++}$ & 2.67 & 1.68 & 1.07 & 0.86 & 0.69 \\
$(D^*D^*)2^{++}$ & - & 1.79 & 1.12 & 0.91 & 0.72 \\\hline
$(D_sD_s)0^{++}$ & - & - & - & - & 1.63 \\
$(D_sD^*_s)1^{++}$ & - & - & - & - & 1.12 \\
$(D^*_sD^*_s)2^{++}$ & - & - & - & - & 1.17 \\\hline
$(BB)0^{++}$ & 1.11 & 0.94 & 0.74 & 0.64 & 0.54 \\
$(BB^*)1^{++}$ & 0.92 & 0.80 & 0.66 & 0.58 & 0.50 \\
$(B^*B^*)2^{++}$ & 0.94 & 0.82 & 0.67 & 0.59 & 0.51 \\\hline
$(B_sB_s)0^{++}$ & - & - & 1.42 & 1.01 & 0.74 \\
$(B_sB^*_s)1^{++}$ & - & 1.77 & 1.03 & 0.81 & 0.63 \\
$(B^*_sB^*_s)2^{++}$ & - & 1.81 & 1.05 & 0.83 & 0.64
\end{tabular} \end{ruledtabular} \end{table}
We can see from tables~\ref{table-3} and \ref{table-4} that the $1^{++}$ and $2^{++}$ molecular states are indeed preferentially bound, since $V^{\text{SOGE}}$ is strong in these two channels. The $1^{++}$ states are bound slightly more deeply and tightly than the $2^{++}$ ones. So, from our model calculation, the $1^{++}$ molecular states should be the easiest to observe in experiments.

First, let us look at the $D^{(*)}D^{(*)}$ sector. If the $X(3872)$ is a molecular state of $D$ and $D^*$, the binding energy can be estimated from the experimental data \cite{Amsler:2008zzb},
\[ E_b = M_D+M_{D^*} - M_X = 0.3 \text{ MeV}, \]
which is very small. Taking into account the uncertainty of the approximations in our model calculation, we consider the reasonable range of the $f$ value to be around $-0.8\sim -1.0$.
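The following minimal sketch (an editorial illustration, not part of the original calculation) makes the scaled-Gaussian variational procedure of this section concrete: a geometric set of Gaussians $\beta^i=\beta s^{i-N}$, diagonalization of the Hamiltonian in that nonorthogonal basis as a generalized eigenvalue problem, and a scan over the scale factor $s$. It is applied here to a simple attractive Coulomb two-body problem whose exact ground-state energy is known, so convergence is easy to check; the Bhaduri confinement and hyperfine terms are deliberately omitted, and all parameter values in the script are illustrative assumptions rather than values from the text. Only the standard analytic Gaussian matrix elements for the overlap, kinetic, and Coulomb terms are used.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

# Rayleigh-Ritz sketch with geometrically scaled Gaussians
# phi_i(r) = exp(-beta_i r^2), beta_i = beta0 * s**(i-N),
# applied to V(r) = -kappa/r (hbar = 1, reduced mass mu).
# Exact ground-state energy for comparison: E0 = -mu*kappa**2/2.
# Analytic S-wave matrix elements for unnormalized Gaussians (a = beta_i + beta_j):
#   overlap  <i|j>          = (pi/a)**1.5
#   kinetic  <i|p^2/2mu|j>  = 3*beta_i*beta_j/(mu*a) * <i|j>
#   Coulomb  <i|-kappa/r|j> = -2*pi*kappa/a

def ground_state_energy(beta0, s, N, mu=1.0, kappa=1.0):
    betas = beta0 * s ** (np.arange(2 * N + 1) - N)
    bi, bj = np.meshgrid(betas, betas, indexing="ij")
    a = bi + bj
    S = (np.pi / a) ** 1.5                  # overlap matrix
    H = 3.0 * bi * bj / (mu * a) * S        # kinetic term
    H += -2.0 * np.pi * kappa / a           # Coulomb term
    return eigh(H, S, eigvals_only=True)[0]    # generalized problem H c = E S c

if __name__ == "__main__":
    mu, kappa = 1.0, 1.0
    # scan the scale factor s for the minimum energy, as described in the text
    best = min(ground_state_energy(0.3, s, N=3, mu=mu, kappa=kappa)
               for s in np.linspace(1.5, 4.0, 26))
    print(f"variational E0 = {best:.6f},  exact = {-0.5 * mu * kappa**2:.6f}")
\end{verbatim}
With $N=3$ (seven Gaussians) the scan reproduces the exact energy to well within one percent, which mirrors the sub-MeV agreement between cal.~I and cal.~II in table~\ref{table-2}; the full calculation differs only in using the Coulomb-plus-linear Bhaduri potential, the smeared hyperfine term, and the two-cluster ansatz (\ref{eq-var-4}).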
If $f$ is small, $f\sim -0.8$, the $1^{++}$ state may be the only molecular state in the $D^{(*)}D^{(*)}$ sector. If $f\sim -1.0$, the $2^{++}$ state may also exist. From our numerical calculation, its mass should be about
\[ M=M_{D^*}+M_{D^*}-E_b \approx 4008.5 \text{ MeV}. \]
The Belle collaboration has reported a $2^{++}$ state $X(3930)$ at mass $M=3929$ MeV \cite{Uehara:2005qd}, which is a candidate for the $c\bar{c}$ charmonium $2^3P_2$ excited state $\chi_{c2}'$. However, its mass is about $40$ MeV below quark model calculations (in the Bhaduri model, the $\chi_{c2}'$ mass is $M=3963.53$ MeV). Several authors have discussed the mass shifts due to coupled-channel effects \cite{Kalashnikova:2005ui,Pennington:2007xr,Li:2009ad,Ortega:2009hj}. If the above $2^{++}$ molecular state exists, the final state interaction will be important.

Now we turn to the possible molecular states in the other sectors. We observe that in the $D^{(*)}_sD^{(*)}_s$ sector there are no such molecular states, because the larger mass $m_s$ of the light quark pair weakens the interaction. On the other hand, in the $B^{(*)}B^{(*)}$ sector the binding energy increases with the larger mass $m_b$ of the heavy quark pair. So bottomonium-like molecular states with $1^{++}$, $2^{++}$, and even $0^{++}$ may also exist. The $B^{(*)}_sB^{(*)}_s$ molecular states with $1^{++}$ and $2^{++}$ may also be observable if $f\sim -1.0$.

\section{Summary}
The quark model is extended by introducing the $s$-channel one gluon exchange interaction. The interaction has no effect on the inner quark structure of conventional $q\bar{q}$ mesons and $qqq$ baryons. Since the interaction is short ranged (a smeared $\delta$-interaction), its effect on the long ranged hadron-hadron interaction is expected to be very small. So the significant effect of this interaction appears only in the so-called hidden flavor multi-quark states. We have calculated the heavy quark molecular states of $qQ\bar{Q}\bar{q}$ with $Q=c,b$ and $q=u,d,s$. We find that the interaction can be strong enough to bind the $1^{++}$ and $2^{++}$ states, and possibly the $0^{++}$ states. Comparing with recent experiments, the $X(3872)$ is a candidate for the $DD^{*}$ molecular state, and the $X(3930)$ may be the $\chi_{c2}'$ state which couples to the $D^*D^*$ $2^{++}$ molecular state. The calculation shows that it is easier to bind the bottomonium-like molecular states. Thus we expect that similar bottomonium-like molecular states with $1^{++}$ and $2^{++}$ should also exist.

\begin{acknowledgments} We would like to thank Shi-Lin Zhu for useful discussions. This work was supported by the National Natural Science Foundation of China under Grant 10675008. \end{acknowledgments}
\section{Introduction}
Nanotechnology is cool. This truth has great allure for students and educators alike. As public attention to nanoscale science and engineering spotlights research and the potential of new discoveries, students are pulled toward careers in science, engineering, and related social sciences or businesses. Educators not only have a new field of endeavor and questions to explore, but also another hook to gain the attention and interest of students. Nanoscale science and engineering raises many important questions, especially at the intersection of technology and society. Government funding of the field, which includes funds specifically earmarked for environmental and societal impact studies,\cite{Roco-Broad,Roco-NNI} shows that policy officials are focused on addressing these societal concerns. The ability to create nanoscale materials and devices will generate new ways for people to understand and exploit nature. But who will have access to these new capabilities? How will they be applied? By whom? What are the consequences for our society? It is incumbent on science and engineering educators to partner with their counterparts in the social sciences and public policy to bring the discussion about the connections between technology and society to undergraduate students.

Before this course, a curricular gap existed in nanoscale science and engineering education at the University of Wisconsin-Madison (UW). Nanotechnology education has primarily focused on the field's technical aspects, with little emphasis on issues such as the social and ethical implications of design choices, public attitudes toward new technologies, and nanotechnology policy. A course on nanotechnology and its societal implications can serve multiple purposes. Recruitment, education, introduction to nanoscale science and engineering, and science and technology studies (STS) all fall within its scope. STS itself is an umbrella term for a number of related topics including the sociology of scientific knowledge, philosophy of science, and history of science and technology. Here we describe a nontechnical course for undergraduates that introduces a broad audience to nanoscale science and engineering and STS. The course is open to all majors and satisfies a humanities requirement for undergraduates. Although designated as a 200-level class (freshmen or sophomores), the course was open to all students. The course is discussion-based, requires active student involvement, and focuses on readings, group discussion sessions, role-playing exercises, essay assignments and exams, and a semester-long research project with a final presentation.

The course, Nanotechnology and Society, was offered in two sections in the spring of 2005. Two sections of an STS course, Where Science Meets Society, were designed and led by graduate students specifically trained in nanoscale science and engineering and STS during the previous semester. In prior versions of the latter course, STS topics were covered in the more general context of many technologies, without instruction in specific science concepts or facts. The course is regularly taught as a first-year seminar and satisfies either a humanities or social sciences requirement within the university's core liberal arts curriculum. It is well known by first-year advisors in the College of Letters and Science and the College of Engineering and has proven successful in drawing students from humanities, science, and engineering.
This year, two sections were separated and designated for the new course on Nanotechnology and Society. This paper discusses the section\cite{URL-Tahan-NanoSocietyCourse} taught by co-author Tahan, a physics graduate student; the other section was taught by co-author Leung, a sociology graduate student. Both courses were based on a similar core curriculum developed in the prior semester.\cite{PROC-Crone-NanoSoc} \section{Preparation} To develop an effective undergraduate course in nanotechnology and society, we first needed to educate the educators. To this end, a seminar was created for advanced graduate students in the sciences, engineering, humanities, and social sciences to explore questions about the connections between nanotechnology and societal issues and to reflect on the broader place of technology in modern societies. The instructors for this seminar (co-authors Zenner, Ellison, Crone, and Miller) came from backgrounds in engineering, public policy, and the humanities. In addition, a partnership was initiated through a National Science Foundation funded Nanotechnology Undergraduate Education grant between the Materials Research Science and Engineering Center and the Robert and Jean Holtz Center for Science and Technology Studies, a newly established center for research and teaching in the history, sociology, and philosophy of science, technology, and medicine at UW. The seminar was offered to graduate students for either one or three credits. Students who chose the one-credit option were expected to attend the seminar's first hour, read and discuss the class materials, and write a one-page response essay each week. This part of the seminar, attended by ten graduate students and post-doctoral associates in the Fall 2004 semester, focused on theories and approaches to understanding the social dimensions of technology applied to the case study of nanotechnology. More detailed course information is provided in Refs.~\onlinecite{PROC-Crone-NanoSoc} and \onlinecite{URL-MRSECNanoSocietyCourses}. The three credit option had an additional emphasis on the development of teaching skills and the creation of a teaching portfolio. Students who chose this option attended a second hour of the seminar and developed an annotated syllabus for an undergraduate seminar in nanotechnology and society. This portion of the course was designed for future educators who wished to teach nanotechnology and society topics, either as a stand-alone course or as part of another course. These students also led the discussion in the first hour on a rotating basis, giving them an opportunity to test various active learning techniques such as think-pair-share, jigsaw (where the class is divided in parts to solve a problem), town-meeting formats, group discussion, and blackboard exercises. This second part of the seminar introduced approaches, materials, and skills for teaching undergraduates how to think critically about the social aspects of technology. Four graduate students completed the three credit course, including the two who taught their own courses in the spring. One of these courses is described here. \section{Goals and Course Content} STS 201, Nanotechnology and Society, set broad goals in both its scope and content. As stated in the syllabus, the objectives of this course include the following: \begin{enumerate} \item Introduce the broad field of nanotechnology and the basic science and technology. 
\item Consider the societal implications of nanotechnology in the context of social, scientific, historical, political, environmental, philosophical, ethical, and cultural ideas from other fields and prior work.
\item Develop questioning, thinking, idea-producing, and communication skills, both written and verbal.
\end{enumerate}
Because STS 201 was primarily a humanities course, the focus was on understanding the implications of technology and its interactions with society, specifically applied to nanoscale science and engineering. From a deeper curriculum perspective, the goals include the following.
\begin{enumerate}
\item Introduce the various social theories of technology, such as technological determinism and the social construction of technology.
\item Explore the wider social, historical, and cultural contexts in which nanoscale science and engineering are embedded.
\item Examine the technical and social elements of nanotechnological systems.
\item Provide skills and resources for learning about the technological infrastructures of modern societies and the potential impacts of developments in nanotechnology.
\item Investigate why people sometimes fear new technologies, including studies of technological utopias and dystopias, accidents, risk, and concerns about loss of control.
\end{enumerate}
An obvious question is how much science was included. Students were required to learn some of the basic science of the nanotechnologies discussed in class. We illustrate the level by the example of nanocrystals or quantum dots. The students were expected to learn some rudimentary semiconductor physics to understand why nanoscale semiconductor crystals exhibit new properties, such as changes in color emission at certain size thresholds. The notion of a bandgap between core (valence) electron levels and free (conduction) levels was introduced with a discussion of light (photon) excitation. Students were expected to learn how the energy gap between the electron levels changes with decreasing size and the reason for it (quantum confinement effects). This understanding was then applied to the use of quantum dots for medical contrast imaging. Lectures, together with books written for a lay audience (for example, Refs.~\onlinecite{ART-un,ART-NanoShaping,ART-SwissRe,waser-nano}), provided the main teaching materials.%
\begin{table}
\begin{enumerate}
\item Introduction to Nanotechnology and Society (classes 1--3, essay 1). How is nanotechnology defined? \setlength{\baselineskip}{12pt}
\item Nanoscience/technology (classes 4, 5, 10, 12, 14, 37--44).
\begin{enumerate}
\item Policy reports and reviews.
\item Topics: new nanoscale effects; quantum vs.\ classical; nano-manufacturing; quantum dots and nanoparticles; carbon; medical applications.
\item Student research projects and presentations.
\end{enumerate}
\item Nanotech in Culture (classes 6, 8, 9, 22, 24, 46).
\begin{enumerate}
\item What real nanoproducts are on the market now and what's nanohyped?
\item How does science fiction bring science/technology to the public? See Refs.~\onlinecite{Flynn,NewBreed,LearningCurve}.
\item How has nano seeped into the media?
\end{enumerate}
\item Revolutions and the History of Science and Technology (classes 31, 46, essay 3). Is nanotech a new industrial revolution?
\item Technology and Society (classes 7, 9, 11, 13, 15, 16, 24, 32, 46, essay 2).
\begin{enumerate}
\item Do technological innovations necessarily contribute to progress?
\item How does technology affect the way we live?
\item How do the users shape the development of technology?
\item Is technology political?
\end{enumerate}
\item How Government Drives Technology (classes 23, 25, 46, essay 4).
\begin{enumerate}
\item How much money is being invested in nanotechnology and science?
\item What agencies handle nanotech funding?
\item How do the military's needs shape our world?
\end{enumerate}
\item Weighing the Risks (classes 33, 34, 35, 36, 46, essay 4).
\begin{enumerate}
\item How does society decide what kinds of risks are acceptable given the possible consequences of pursuing a certain technology or science?
\item Is nanoscale science and engineering more dangerous than micro?
\item What is a normal accident?
\end{enumerate}
\item Thinking About the Future (classes 30, 45, 47).
\begin{enumerate}
\item What do the minds of today (or at least those who get media attention) think about nanotech? (See, for example, Refs.~\onlinecite{Drexler} and \onlinecite{Mulhall}.)
\item More Science Fiction.
\item Reflections. What have we learned?
\end{enumerate}
\end{enumerate}
\vspace{-0.5cm} \caption{Course outline. The course materials can be found online.\cite{URL-Tahan-NanoSocietyCourse}\label{cap:CourseOutline}} \end{table}
The class outline given in Table~\ref{cap:CourseOutline} is mostly chronological, except that the nanoscience subtopics were distributed throughout the semester instead of being covered at a single time. We began by reading general introductory articles on nanotechnology, such as those found in popular science magazines and in think-tank and corporate reports, and then looked at the STS topics one by one, intermixing them with nanoscale science and engineering. In the last few weeks the students reported on their research on a specific topic in nanoscale science and engineering. The STS readings were introductory in nature (such as in Refs.~\onlinecite{WinnerBook,winner-congress,cross-szostak,GolemScience,GolemTech,handbook,social-shaping,Smith-Military-Noble,Perrow-Normal,Misunderstood,TechandFuture,FiftyYears}) and assumed an audience not familiar with the more complex analytical techniques and terms that are used in higher-level sociology or history of science courses. The readings for this section are available online.\cite{URL-Tahan-NanoSocietyCourse} The overall curriculum consisted of components that introduced a concept in STS and then used STS as a means to apply or interpret the concept.

\section{Requirements and Output}
Because the course was primarily discussion based, class participation (including homework) was highly valued and vital to exploring the issues fully. It counted for 25\% of the grade, including the expectation that students participate in or lead group discussions, present before the class, and participate in debates, mock hearings, or other cooperative activities. Reading was assigned for nearly every class, but homework was occasional and included small writing or research assignments to be shared with the class. An example was an assignment for which the students chose from a list of professors at the university doing nanoscale science and engineering research and reported to the class on the interests of a particular research group. Another assignment was to find a nanoscale science and engineering product in the news, learn about it, and teach what they learned to the class. To a large extent the course was about connecting disparate questions, concepts, facts, and ideas, and then raising new questions.
Writing is a vital process in this approach to thinking because it is a formal way of integrating ideas and communicating. There were four 2--3 page, double-spaced response or op-ed type essays, one for each of the main topics (see Table~\ref{cap:Essays}). The four graded essays counted for a total of 20\% of the grade.%
\begin{table}
\begin{enumerate}
\item You are interviewing for a job at McKinsey, a prestigious consulting firm. During your interview you mention that you have experience thinking about the societal implications of technology, specifically nanotechnology. The interviewer asks you to go home and write a two- to three-page executive summary defining nanotechnology (which she, a non-scientist, can understand) and suggesting specific areas where McKinsey may be able to do business in the future. You must really impress her to get the job.
\item Does nanotechnology have politics? Make your case, for or against, using the articles we have talked about in class (see, for example, Ref.~\onlinecite{WinnerBook}).
\item Is the field of nanotechnology a revolution or just evolution?
\item Write a brief testimony to be presented to the congressional subcommittee reviewing the National Nanotechnology Initiative and address the following questions. Should the government continue funding research in nanotechnology? In what specific areas? How? Should the public be brought into the nanotech development process? How? You will represent a specific political group, for example, the military or AAAS.
\end{enumerate}
\vspace{-0.5cm} \caption{Essay assignments (abbreviated).\label{cap:Essays}}
\end{table}
Two formal exams counted for another 25\% of the grade. The remaining 30\% of the course requirements was assessed from individual research projects and class presentations. A list of topics was developed by the instructor, and each student selected one and became the class {}``expert'' on it. These topics provided a means to explore in more depth some of the subfields of nanoscale science and engineering and allowed the students to teach each other instead of sitting through lectures by the instructor. The goal was to produce a pamphlet on key nanotechnologies circa 2005 that may have value to future iterations of the class and to the public. It also provided an opportunity for more advanced students to contribute their particular expertise that might be outside the realm of the instructor's specialty. Approximately two-thirds of each report (roughly five double-spaced pages) covered the science of the selected topic, with the remaining one-third on the societal implications. Each student also gave a 20-minute PowerPoint or blackboard presentation. Examples of the nanotopics include nano-nuclear batteries, nanotechnology and cancer, nanofiltration, and nanotechnology and agriculture. The student reports and presentations are also available.\cite{URL-Tahan-NanoSocietyCourse}

\section{Assessment}
In addition to the traditional evaluation of student work discussed in Sec.~IV, several surveys were given during the semester to gauge the students' perceptions of the course and to provide feedback on further improvements. A brief pre-assessment was given on the second day of class and two more detailed assessments were given in the last week of class, in addition to several unofficial feedback surveys during the semester. The assessments and surveys show that the students found the course valuable and that many of the goals in the syllabus were met. A typical student comment was {}``I really enjoyed the class.
Not only did I learn about what advances have been achieved (or will be soon), but also the social implications towards using/creating technology.'' The pre-assessment attempted to gauge the comfort and knowledge levels of the topics to be studied in the course as well as of nanoscale science and engineering in general. Figure~\ref{cap:PrePost} shows the results of the comfort level assessment before and after the bulk of the course.%
\begin{figure}
\begin{center}\includegraphics[scale=0.4]{prepost.eps}\end{center}
\caption{Pre- and post-assessment answers to the question: {}``Please rate your comfort level with the following topics.''\label{cap:PrePost}}
\end{figure}
Of note is the general increase in comfort level for all topics and the improvement in the area of nanotechnology and society. By the end of the course 95\% of the class claimed to be {}``comfortable'' or {}``very comfortable'' with the subject, a tremendous improvement.

In addition, the pre-assessment asked the students to define nanotechnology and list several nanotechnologies that they knew, as well as whether and where they had heard the term. About a quarter of the class said that this course was the first time they had heard the term. The others cited news, TV, or science fiction as their source of introduction. Initially, most students described nanotechnology as {}``tiny,'' {}``microscopic,'' or {}``advanced.'' The most common answers were variations on {}``the study of small particles or very small technology'' or circular definitions such as the {}``study/design/manufacturing of products/objects at the nanoscale.'' Only one student cited $1\times10^{-9}$ meters as a benchmark. Before the course students cited {}``advanced/really-fast computers'' as the most common example for nanotechnology, followed by {}``medical/medicine,'' and {}``stain free pants.''

The final exams and post-assessment asked these same questions again plus more in-depth questions about the students' knowledge of nanoscale science and engineering. When asked to define nanotechnology, almost all the students were able to give a working definition of nanoscale science and engineering on par with or surpassing the definitions found elsewhere. The students also could cite examples of new phenomena that occur at the nanoscale including increased reactivity, quantum confinement effects, and biological coincidences (such as the ability of nanoparticles to cross the blood-brain barrier), as well as more specific examples. All the students were able to give three examples of specific nanotechnologies. Moreover, the students were able to formulate three meaningful questions about the societal implications of nanoscale science and engineering, a question on the pre-assessment that was left mostly blank.

The post-assessment included additional questions to judge the impact of the course on the students. The students were asked to summarize the class in a sentence or two; the following comment is representative. {}``This class gave me a good overview of the science of nanotech and its societal implications. I now feel much better about current trends in the field.'' To fully interpret the post-assessment results, it is useful to revisit the students' backgrounds and motivations. Many of the students (14) took the class to fulfill a humanities requirement with about half also citing a general interest in nanotechnology.
Out of 22 total students, roughly two-thirds did not come from a humanities background but instead came from the engineering and natural sciences, business, and related fields. Out of five women and seventeen men, there were four freshmen, ten sophomores, three juniors, and five seniors. The largest contingent from any one major was from biochemistry (4), followed by computer science (3). Fourteen students would take the course again even if it didn't fulfill a requirement, although a quarter would not. Nearly all (17 yes, 3 maybes) would recommend the course to another student. All said their knowledge of the science of nanoscale science and engineering improved because of this course. One student commented: {}``I knew very little about nanotechnology and I was surprised by how much there is.'' Nearly all (17) said the course made them very or extremely well prepared to explain what nanoscale science and engineering is. For example, one comment stated that the course {}``provides a basic, layman's definition as well as an in-depth definition.'' Nearly all (18) considered {}``nanotechnology and society'' a valuable field of intellectual pursuit, which was somewhat surprising to us considering the newness and ambiguity of the field when we started.

Before the course, most students were planning on pursuing a career in science and engineering (3 were not, 2 maybe), and none were considering one in nanotechnology. Students were largely not encouraged to change to a more nano-related career (8 maybe, 10 no), but the course encouraged them to be aware of opportunities and relations to nanoscale science and engineering in their planned field (15 yes). The course did not encourage the students to pursue a career in STS or policy (5 maybe, 16 no). Three-quarters of the class said that their perspective on science, technology, and societal implications changed as a result of the class. A typical student comment was that {}``Before the course, I thought any/all technological improvements were good. Now I understand more of the social issues of new technology.'' Most of the students thought the class was sufficiently challenging, although a few expected more, and most thought the course either could not be improved significantly or only might be. About a quarter of the students would have liked to see more science, about a quarter thought there was too much, and 50\% thought it was a good mix. The students preferred in-class activities, debates, town-hall meetings, and generally doing the work themselves over traditional lectures. The research project presentations were universally thought to be a good idea, but the students would have preferred more specificity and direction from the instructor.

Finally, the essay assignments provided a means to apply and test higher-order analytical skills and concepts on present-day issues in nanotechnology and society. Although assessment cannot be quantitative in this regard, we found that the students did reasonably well (with some variation in skill level) in thinking creatively and knowledgeably on the issues in question. Not only did they show a growing understanding of how nanotechnology will affect society (with past technologies as test cases), but also of how society can determine the evolution and application of technology (see Table~\ref{cap:Essays}). A rewarding message from the post-assessment and in-class surveys was that the students overwhelmingly preferred discussion/group-oriented classes over lecture-oriented classes.
{}``Some of the more science based aspects are taught better in lecture format. This was done for the main part. But implications on society is better in discussion format.'' Another good point was {}``nanotech is changing so fast, it'd be bad to try and follow a pre-established lecture schedule.'' \section{Discussion and Reflection} A social science course that focuses on technology creates unique challenges and new opportunities for education. With over half the class composed of science or engineering majors, there was a bias against the more open-ended, subjective questions that can be posed in science and technology studies. Many students expected a class about nanotechnology. Clarity is the first step in good student engagement. The philosophy and content of the course must be clearly and repeatedly explained, focusing on why the subject is worthwhile and what will be gained from a significant time investment. The instructor's (CT) technical background helped somewhat in that it gave credibility and a starting point for a new direction of intellectual pursuit. In the end though, personal attention --- learning the students' names, majors, career plans, interests --- is necessary to enlist the class in learning, especially in the context of group work, class participation, and active learning activities. Not surprisingly, this attention requires much effort on the instructor's part. It is also tremendously rewarding. Teaching the course required a lot of leadership. We pushed and pulled in different directions as the course navigated through various paces and types of content. We bounced back and forth between STS and nanoscale science and engineering to keep student interest and integrate concepts and theories. Because the course was offered for the first time, extra preparation was needed for each class. The course schedule was also quite fluid as the order and depth of the course material was continually calibrated to match the students' learning pace and the instructors' growing experience. We had thought the students would be mostly in their first year. Instead, we attracted a much more diverse and older student body. Older students with science and engineering majors tend to be more resistant to active learning techniques and class participation. They are also more competent overall, be it in writing, reading, or analytical comprehension abilities, which can lead to boredom in mixed skill-level environments. We made this overqualification into an opportunity. The research projects and essay assignments provided a good way to challenge the students while keeping everyone engaged at their ability level. The nano research projects became continuing educational tools for both the researcher and the rest of the class in research and communication techniques as well as general knowledge. So how much work did it take? For the students, a balance had to be maintained between university requirements and their expectation and commitment level. The class decided collectively to meet as groups in-class but have individual homework and assignments outside of class. For important concepts or theories in STS, the class settled into a routine of working in groups on work sheets or quizzes provided by the instructor, then as a class reviewing their work. The nanoscience discussions tended to be more whole class oriented with individual students contributing their research or perspective. After the learning goals were set by the instructor, the class preferred to work in small groups. 
The amount of work required on the students' part was similar to other courses at the university. The instructor had more extensive duties. In addition to preparing, for the first time, a course with no standard text, the instructor had to give the research projects special attention. The students learned more about nanoscale science and engineering through the projects and applied their newfound societal analytical toolset to explore the implications of their nano-topic. The instructor's philosophy was to model the progress and requirements of the project on a real-world research group, where the students would need to meet milestones and share their progress with the rest of the class at group meetings. The formal class presentation was a step in this process of producing a readable report. The implementation of this approach was good but not perfect. Some of the students would have benefited from more hand-holding and specification. Although the instructor's time was not limitless, the assessments showed that the experience was found to be valuable by almost all of the students. In summary, from our perspective realistic time constraints were not a barrier to preparing and teaching an effective and interesting course.

Scientists and technologists, as well as science students, consider the societal ramifications of technology all the time. Well, at least they should. But thinking critically about such issues in a course involving science and technology studies, history of science, and public policy professionals is generally a new and very worthwhile experience. An exciting new field of study like nanotechnology can provide the basis for learning about the issues of technological change alongside technological developments in real time.

\begin{acknowledgments}
We are grateful to the National Science Foundation for support through the Materials Research Science and Engineering Center on Nanostructured Materials and Interfaces (DMR-0079983) and through the Nanotechnology Undergraduate Education (NUE) grant, \emph{An Integrated Approach to Teaching Nanotechnology and Society} (DMR-0407075). Both programs are at the University of Wisconsin-Madison. We would like to thank the EPD 690 students for their help in thinking about the complex issues surrounding nanotechnology and society. In particular, we would like to thank Anne Bentley and Adam Creuziger for participating in the second portion of the course and creating syllabi and other course materials. CT would like to thank Robert Joynt for allowing him to develop and teach this course while pursuing his Ph.D.\ in physics.
\end{acknowledgments}
\section{Introduction and Main Results}
In this paper we continue our analysis, begun in \cite{PPWZ05}, of a recent model describing the proliferation of prions. This model was introduced in Greer, Pujo-Menjouet and Webb \cite{GPW04}, based on the works of Masel, Jansen and Nowak \cite{MJN99}, Nowak, Krakauer, Klug and May \cite{NKKM98} and others. For comprehensive explanations and discussions of the model and the relevant biochemical literature we refer to \cite{GPW04}. Here we only give a very short description of the model.

Prions are proteins that are believed to be responsible for certain diseases like BSE and the Creutzfeldt-Jakob disease. There are two basic forms of prions of interest here, the {\em Prion Protein Cellular} $PrP^C$ and the {\em Prion Protein Scrapie} $PrP^{Sc}$. The single molecule proteins $PrP^C$, also called {\em monomers} in the sequel, are protease resistant proteins which have a cell protective function and are produced by the body regularly. On the other hand, the infectious prion $PrP^{Sc}$ is a string-like {\em polymer} formed of monomeric $PrP^C$. Above a critical chain length $x_0>0$ the polymers are more stable than the $PrP^C$, and they can grow to chains containing thousands of monomers. $PrP^{Sc}$ has the ability to replicate by splitting; we assume binary splitting here. So there are three main processes which govern the dynamics of prions in this model.
\begin{itemize}
\item growth in length by polymerization with rate $\tau>0$;
\item binary splitting with rate $\beta(x)>0$, a polymer of length $x>0$ splits into one of length $0<y<x$ and one of length $x-y$ with probability $\kappa(y,x)$;
\item natural degradation with rate $\gamma>0$ for the monomers and with rate $\mu(x)$ for the polymers with length $x$.
\end{itemize}
The model proposed in \cite{NKKM98} further assumes that polymers of length $0<x\leq x_0$ immediately decompose completely into monomers. This reflects the assumption that $PrP^{Sc}$ polymers are unbranched and form a simple $\alpha$-helix with $x_0$ monomer units per turn. An $\alpha$-helix of length less than $x_0$ is incomplete and thus is much less stable. Denoting the number of monomers at time $t$ by $V(t)$ and the density of polymers by $u(t,x)$, we obtain the following model equations.
\begin{eqnarray} \label{pde} &&\partial_tV(t) =\lambda -\gamma V(t) -\tau V(t)\int_{x_0}^\infty u(t,x)dx +2\int_0^{x_0}x\int_{x_0}^\infty\beta(y)\kappa(x,y)u(t,y)dydx\nonumber\\ &&\partial_t u(t,x)+\tau V(t)\partial_x u(t,x)+(\mu(x)+\beta(x))u(t,x)=2\int_x^\infty\beta(y)\kappa(x,y)u(t,y)dy\\ && V(0)=V_0\geq 0, \quad u(t,x_0)=0, \quad u(0,x)=u_0(x),\nonumber \end{eqnarray}
where $t\geq0$ and $x_0\leq x <\infty$. Here $\lambda>0$ is a constant background source of monomers. Observe that the splitting function $\kappa(y,x)$ should satisfy the following properties:
$$ \kappa(y,x)\geq0,\quad \kappa(y,x)=\kappa(x-y,x),\quad \int_0^x \kappa(y,x)dy =1,$$
for all $x\geq x_0$, $y\geq0$, and $\kappa(y,x)=0$ if $y>x$ or $x\leq x_0$. Note that these conditions imply
$$ 2\int_0^x y\kappa(y,x)dy =x,\quad x>0.$$
In fact,
\begin{eqnarray*} && 2\int_0^x y\kappa(y,x)dy = \int_0^x y\kappa(y,x)dy + \int_0^x y\kappa(x-y,x)dy\\ && =\int_0^x y\kappa(y,x)dy + \int_0^x (x-y)\kappa(y,x)dy= x\int_0^x \kappa(y,x)dy=x. \end{eqnarray*}
This implies that mass does not change via the splitting process, and by a simple computation we obtain the following relation for the total number of monomers in the system.
$$ \frac{d}{dt}[V(t)+ \int_{x_0}^\infty xu(t,x)dx] = \lambda-\gamma V(t) -\int_{x_0}^\infty x\mu(x) u(t,x)dx,\quad t\geq0.$$
In \cite{NKKM98} it is further assumed that splitting is equi-distributed (polymer chains are equally likely to split at all locations), and that the rate of splitting is proportional to length. This reflects again the hypothesis that polymers form $\alpha$-helices and are not folded in more complicated configurations, which would make certain segments of the chain less likely to split than others. Therefore, we make the further assumptions
$$ \kappa(y,x) = 1/x\; \mbox{ if } x>x_0 \; \mbox{ and } 0<y<x,\quad \kappa(y,x)=0 \; \mbox{ elsewhere },$$
$\beta(x)=\beta x$ linear, and $\mu(x)\equiv \mu$ constant. Then the model contains only 6 parameters, and can even be reduced to a system of 3 ordinary differential equations. In fact, introduce the new functions
$$ U(t)=\int_{x_0}^\infty u(t,y)dy \quad \mbox{ and } \quad P(t)=\int_{x_0}^\infty yu(t,y)dy,$$
representing the total number of polymers and the total number of monomers in polymers at time $t$, respectively. Integrating the equation for $u(t,x)$ over $[x_0,\infty)$ we get
\begin{eqnarray*} \frac{d}{dt} U(t)&=&-\tau V(t)u(t,x)|_{x_0}^\infty-\mu U(t)-\beta P(t)+2\beta \int_{x_0}^\infty\int_x^\infty u(t,y)dydx\\ &=& -\mu U(t)-\beta P(t) +2\beta\int_{x_0}^\infty u(t,y)(y-x_0)dy\\ &=& -\mu U(t)-\beta P(t)+2\beta P(t)-2\beta x_0 U(t), \end{eqnarray*}
hence
$$\dot{U}(t)= -(\mu+2\beta x_0) U(t)+\beta P(t).$$
Multiplying the equation for $u(t,x)$ by $x$ and integrating yields
\begin{eqnarray*} \frac{d}{dt} P(t)&=&-\tau V(t)(xu(t,x)|_{x_0}^\infty-\int_{x_0}^\infty u(t,y)dy)\\ &&-\mu P(t)-\beta \int_{x_0}^\infty u(t,x)x^2dx+2\beta \int_{x_0}^\infty x\int_x^\infty u(t,y)dydx\\ &=& \tau V(t)U(t)-\mu P(t)-\beta \int_{x_0}^\infty u(t,x) x^2dx +\beta\int_{x_0}^\infty u(t,y)(y^2-x_0^2)dy\\ &=& \tau V(t)U(t) -\mu P(t) -\beta x_0^2 U(t), \end{eqnarray*}
hence
$$\dot{P}(t)= \tau U(t)V(t)-\mu P(t)-\beta x_0^2U(t).$$
Thus we obtain the following closed model involving only ordinary differential equations,
\begin{eqnarray} \label{model} \dot{U}&=& \beta P -\mu U -2\beta x_0 U\nonumber\\ \dot{V} &=& \lambda -\gamma V -\tau UV +\beta x_0^2 U\\ \dot{P} &=& \tau UV -\mu P -\beta x_0^2 U\nonumber \end{eqnarray}
with initial conditions
$$ U(0)=U_0\geq 0,\quad V(0)=V_0\geq 0, \quad P(0)=P_0\geq x_0 U_0.$$
This way the partial differential equation for the density $u(t,x)$ decouples from the ordinary differential equations. Once the solutions of \eqref{model} are known, one has to solve only a linear partial integro-differential equation to obtain $u(t,x)$. The system (1.2) is identical to the ``basic virus dynamics model'' that is discussed at length in \cite{MaNo02}. Concerning the ODE system \eqref{model} we have the following result from Pr\"uss, Pujo-Menjouet, Webb and Zacher \cite{PPWZ05}.
\begin{theorem} Suppose $x_0,\beta,\gamma,\lambda,\mu,\tau>0$ are given constants. Then the system (\ref{model}) induces a global semiflow on the set $K=\{(U,V,P)\in{\mathbb R}^3:\; U,V,P-x_0U\geq0\}$. There is precisely one disease-free equilibrium $(0,\lambda/\gamma,0)$, which is globally exponentially stable if and only if $\mu+x_0\beta>\sqrt{\lambda\beta\tau/\gamma}$, and asymptotically stable in case of equality.
On the other hand, if $\mu+x_0\beta<\sqrt{\lambda\beta\tau/\gamma}$ there is the unique disease equilibrium
$$\Big(\frac{\lambda\beta\tau-\gamma(\mu+\beta x_0)^2}{\mu\tau(\mu+2\beta x_0)},\frac{(\mu+\beta x_0)^2}{\beta\tau}, \frac{\lambda\beta\tau-\gamma(\mu+\beta x_0)^2}{\beta\mu\tau}\Big)$$
which is globally exponentially stable in $K\setminus[\{0\}\times{\mathbb R}_+\times\{0\}]$. \end{theorem}
It is the purpose of this paper to study the full system \eqref{pde} under the assumptions of equi-distributed splitting, linear splitting rate, and constant rates of degradation. Since $V(t)+\int_{x_0}^\infty xu(t,x)dx$ is the total number of monomers in the system, which should be finite at any time, it seems reasonable to study \eqref{pde} in the standard cone $Z_+:= {\mathbb R}_+\times L_1^+((x_0,\infty);xdx)$ of the Banach space $Z:= {\mathbb R}\times L_1((x_0,\infty);xdx)$. The following theorem summarizes our results.
\begin{theorem} Assume equi-distributed splitting with linear splitting rate $\beta(x)=\beta x$ and constant degradation rates $\gamma$ and $\mu(x)\equiv \mu$. Suppose $\lambda,\tau,\beta,\gamma,\mu,x_0>0$. Then \eqref{pde} generates a global semiflow in the natural phase space $Z_+$. Furthermore,
\\(i) \, if $\lambda\beta\tau/\gamma\leq (\mu+\beta x_0)^2$, then the disease-free equilibrium $\bar{z}=(\lambda/\gamma,0)$ is globally asymptotically stable in $Z_+$, and even exponentially in the case of strict inequality;
\\(ii) \, if $\lambda\beta\tau/\gamma> (\mu+\beta x_0)^2$, then there is a unique disease equilibrium $z_*=(V_*,u_*)$ which is globally asymptotically stable in $Z_+\setminus({\mathbb R}_+\times\{0\})$. It is given by
$$ V_*= \frac{(\mu+\beta x_0)^2}{\beta\tau},\quad u_*(x)= \frac{2\beta}{\mu\tau} \frac{\lambda\beta\tau-\gamma(\mu+\beta x_0)^2}{(\mu+\beta x_0)(\mu+2\beta x_0)}\Phi\big(\frac{\beta(x-x_0)}{\mu+\beta x_0}\big),$$
where $\Phi(r)= (r+r^2/2)\exp(-(r+r^2/2))$. \end{theorem}
The remaining part of this paper deals with the proof of this result. Recall that the function $\omega(t):= \tau V(t)$ can be considered as known, by Theorem 1.1, and $\omega(t)\rightarrow \omega_\infty$ exponentially, where either $\omega_\infty = \lambda\tau/\gamma$ in the disease-free case or $\omega_\infty= (\mu+\beta x_0)^2/\beta$ in the disease case. Hence we have to solve a linear nonautonomous partial integro-differential equation of first order. For this we shall use standard techniques from the theory of $C_0$-semigroups, and we refer to the monograph by Arendt, Batty, Hieber and Neubrander \cite{ABHN01} as a general reference for the results employed below.

We proceed in four steps. First we study the autonomous case where $\omega\equiv \omega_\infty$. In Section 2 we show that there is a unique $C_0$-semigroup $T(t)=e^{-Lt}$ associated with the PDE part of \eqref{pde} in $X=L_1((x_0,\infty);xdx)$, which is positive and contractive, and even exponentially stable in the disease-free case. The resolvent of $L$ is shown to be compact in Section 3, hence $L$ has only point spectrum in the closed right half-plane. In the disease case, we further show that 0 is the only eigenvalue of $L$ on the imaginary axis; it is simple, and so the ergodic projection ${\mathcal P}$ onto the kernel $N(L)$ of $L$ along the range $R(L)$ of $L$ exists and is of rank one. We compute an element $e\in N(L)$ which is positive. A result of Arendt, Batty, Lubich and Phong \cite{ABHN01} then shows that $T(t)$ is strongly ergodic, i.e. $\lim_{t\rightarrow\infty}T(t)={\mathcal P}$ strongly in $X$.
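Before turning to these steps in detail, a quick numerical sanity check of Theorems 1.1 and 1.2 may be helpful; it is an editorial illustration and not part of the proof. The short script below uses illustrative parameter values satisfying $\lambda\beta\tau/\gamma>(\mu+\beta x_0)^2$ (the values themselves are assumptions, not taken from the text). It integrates the ODE system \eqref{model}, compares the outcome with the disease equilibrium of Theorem 1.1, and checks pointwise that $u_*$ from Theorem 1.2 satisfies the limiting stationary equation $\omega_\infty u_*'+(\mu+\beta x)u_*=2\beta\int_x^\infty u_*(y)\,dy$ with $\omega_\infty=\tau V_*$, as well as $\int_{x_0}^\infty u_*\,dx=U_*$.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp, quad

# Illustrative parameters with lambda*beta*tau/gamma > (mu + beta*x0)^2,
# so the disease equilibrium of Theorem 1.1 exists.
lam, gam, tau, beta, mu, x0 = 5.0, 1.0, 1.0, 1.0, 0.5, 1.0
mu0 = mu + beta * x0
X = lam * beta * tau - gam * mu0 ** 2
assert X > 0

# ODE system (1.2) for (U, V, P)
def rhs(t, y):
    U, V, P = y
    return [beta * P - mu * U - 2 * beta * x0 * U,
            lam - gam * V - tau * U * V + beta * x0 ** 2 * U,
            tau * U * V - mu * P - beta * x0 ** 2 * U]

U_star = X / (mu * tau * (mu + 2 * beta * x0))
V_star = mu0 ** 2 / (beta * tau)
P_star = X / (beta * mu * tau)

sol = solve_ivp(rhs, (0.0, 200.0), [0.1, lam / gam, 0.2], rtol=1e-10, atol=1e-12)
print("computed (U,V,P) at t=200 :", sol.y[:, -1])
print("disease equilibrium       :", (U_star, V_star, P_star))

# Stationary density of Theorem 1.2 and the limiting equation
# omega*u' + (mu + beta*x)*u = 2*beta*int_x^infty u(y) dy,  omega = tau*V_star.
def u_star(x):
    r = beta * (x - x0) / mu0
    g = r + r ** 2 / 2
    return (2 * beta / (mu * tau)) * X / (mu0 * (mu + 2 * beta * x0)) * g * np.exp(-g)

omega, h = tau * V_star, 1e-6
for x in (x0 + 0.5, x0 + 2.0, x0 + 5.0):
    lhs = omega * (u_star(x + h) - u_star(x - h)) / (2 * h) + (mu + beta * x) * u_star(x)
    rhs_int = 2 * beta * quad(u_star, x, np.inf)[0]
    print(f"x = {x:4.1f}:  lhs = {lhs:.8f}   rhs = {rhs_int:.8f}")

print("int u_* dx =", quad(u_star, x0, np.inf)[0], "  U_* =", U_star)
\end{verbatim}
For these parameter values the trajectory settles at $(U_*,V_*,P_*)$, the two sides of the stationary equation agree to quadrature accuracy, and $\int_{x_0}^\infty u_*\,dx$ reproduces $U_*$, in line with Theorems 1.1 and 1.2.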
Well-posedness of the nonautonomous problem is proved in Section 4 by means of monotone convergence; it is shown that the evolution operator exists and is bounded. Moreover, bounds for $\partial_x u(t,\cdot)$ in $X$ are derived. Finally, in Section 5 we put together these results to prove Theorem 1.2. While we assume throughout that $\beta(x) = \beta x, \, \mu(x) = \mu$ (constant), and $y\kappa(x,y) = 1$ for $x < y,\, y > x_0$, $\kappa(x,y) = 0$ elsewhere, our methods extend to versions of (1.1) where these assumptions do not hold. We do not carry out these generalizations since it is not clear which of them would be biologically reasonable. On the other hand, the equation discussed in this paper
$$ \partial_t u(t,x) = -\tau V(t)\partial_x u(t,x) -(\mu +\beta x)u(t,x) +2\beta\int_x^\infty u(t,y)dy $$
for $x > x_0, \, t> 0$, with initial and boundary data as in (1.1), can be solved with an integral transformation followed by the method of characteristics. Namely, define
$$v(t,x)= \int_x^\infty \int_y^\infty u(t,\xi) \, d\xi \, dy = \int_x^\infty (\xi - x)u(t,\xi) \, d\xi, \quad \partial_x^2 v(t,x) = u(t,x) \, . $$
Then a computation shows that $v$ solves the first order partial differential equation without integral term
$$ \partial_t v(t,x)=-\tau V(t) \partial_x v(t,x)-(\mu+\beta x)v(t,x) $$
for $x > x_0, \, t>0$, with initial data $v(0,x)$ obtained by integrating $u_0$ twice and boundary data $v(t,x_0) = P(t) - x_0 U(t)$. The equation for $v$ may be solved by the method of characteristics, and $u$ is recovered from $\partial_x^2 v(t,x) = u(t,x)$. The solution depends on the initial data in the region $\{(x,t) \, | \, x > x_0 + \tau \int_0^tV(s) ds \, \}$ and on the boundary data in the complement of this region. Since $V(t)$ always has a positive limit, it is evident that the contribution from the initial data is swept out towards large $x$-values and decays exponentially, in fact, at a rate like $e^{-\epsilon t^2}$ for some $\epsilon > 0$. If the disease-free state is stable, then $\left( P(t), U(t) \right) \to (0,0)$ as $t \to \infty$, which implies that the solution $u$ converges to zero also in the region where it depends on the boundary data. In the case of a positive disease equilibrium, $P(t) - x_0 U(t)$ has a positive limit as $t \to \infty$, which determines the limiting equilibrium distribution $u_*$ given in Theorem 1.2. This method breaks down if $\beta(\cdot), \, \mu(\cdot)$, or $\kappa(\cdot,\cdot)$ have more complicated forms, as the reader will readily confirm.
\section{The Linear Autonomous Problem}
\subsection{Functional Analytic Setting}
We consider the problem
\begin{eqnarray} \label{pde0} \partial_t u(t,x)+\omega \partial_x u(t,x)+(\mu+\beta x)u(t,x)= 2\beta\int_x^\infty u(t,y)dy,\\ u(0,x)= u_0(x),\quad u(t,x_0)=0, \quad t>0,\;x>x_0.\nonumber \end{eqnarray}
Set $w(t,x)=u(t,x+x_0)$, $x\geq0$. Then this problem becomes the following one on ${\mathbb R}_+$.
\begin{eqnarray} \label{pde1} \partial_t w(t,x)+\omega \partial_x w(t,x)+(\mu_0+\beta x)w(t,x)= 2\beta\int_x^\infty w(t,y)dy,\\ w(0,x)=g(x):=u_0(x+x_0),\quad w(t,0)=0, \quad t>0,\;x>0.\nonumber \end{eqnarray}
Here we have set $\mu_0=\mu+\beta x_0$. The constant $\omega$ plays the role of $\tau V$ at $\infty$, i.e.\
$$\omega=\tau V(\infty)=\lambda\tau/\gamma$$
in the disease-free case or
$$\omega=\tau V(\infty)=(\mu+\beta x_0)^2/\beta=\mu_0^2/\beta$$
in the disease case. We want to study (\ref{pde1}) in the basic space $X=L_1({\mathbb R}_+;(a+x)dx)$, where we choose as the norm
$$ ||w||=a|w|_1+|xw|_1,$$
with $a>0$ to be determined later.
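As a quick numerical illustration of these two limiting values of $\omega$ (a minimal sketch that is not part of the analysis; the parameter values below are purely illustrative assumptions), one may integrate the reduced ode-system \eqref{model} and watch $\tau V(t)$ approach $\lambda\tau/\gamma$ or $\mu_0^2/\beta$, depending on which case of Theorem 1.1 applies:
\begin{verbatim}
# Minimal sketch: integrate the ODE system (U,V,P) and check the limit of tau*V(t).
# All parameter values are illustrative assumptions, not taken from the paper.
import numpy as np
from scipy.integrate import solve_ivp

lam, gamma, tau, beta, mu, x0 = 1.0, 0.5, 2.0, 0.3, 0.2, 1.0
mu0 = mu + beta * x0

def rhs(t, y):
    U, V, P = y
    dU = beta * P - mu * U - 2 * beta * x0 * U
    dV = lam - gamma * V - tau * U * V + beta * x0**2 * U
    dP = tau * U * V - mu * P - beta * x0**2 * U
    return [dU, dV, dP]

# initial data with U0 >= 0, V0 >= 0, P0 >= x0*U0
sol = solve_ivp(rhs, (0.0, 400.0), [0.1, lam / gamma, 0.1 * x0],
                rtol=1e-8, atol=1e-10)
V_inf = sol.y[1, -1]
if lam * beta * tau / gamma > mu0**2:   # disease case of Theorem 1.1
    print(tau * V_inf, "vs mu_0^2/beta =", mu0**2 / beta)
else:                                   # disease-free case
    print(tau * V_inf, "vs lambda*tau/gamma =", lam * tau / gamma)
\end{verbatim}
With the values chosen above the disease case applies, so $\tau V(t)$ settles at $\mu_0^2/\beta$; decreasing $\lambda$ below $\gamma\mu_0^2/(\beta\tau)$ switches the sketch to the disease-free limit $\lambda\tau/\gamma$.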
We define two linear operators in $X$ by means of
$$ Au(x)= \omega u^\prime(x)+(\mu_0+\beta x)u(x),\quad x\in{\mathbb R}_+,$$
with domain
$$D(A)=\{ u\in W^1_1({\mathbb R}_+)\cap X: \; x^2u\in L_1({\mathbb R}_+), x u^\prime(x)\in L_1({\mathbb R}_+),\; u(0)=0\},$$
and
$$Bu(x)=2\beta\int_x^\infty u(y)dy,\quad D(B)=D(A).$$
Both operators are well-defined and linear; $B$ will be considered as a perturbation of $A$.
\subsection{$m$-Accretivity of $A$}
We have
\begin{eqnarray*} \int_0^\infty Au \operatorname{sgn}{u} dx &=& \omega\int_0^\infty |u|^\prime dx +\mu_0|u|_1 +\beta|xu|_1\\ &=& \mu_0|u|_1 +\beta|xu|_1, \end{eqnarray*}
and
\begin{eqnarray*} \int_0^\infty Au \operatorname{sgn}{u} xdx &=& \omega\int_0^\infty |u|^\prime xdx +\mu_0|xu|_1 +\beta|x^2u|_1\\ &=& -\omega|u|_1+\mu_0|xu|_1 +\beta|x^2u|_1. \end{eqnarray*}
Employing the bracket in $L_1$ this implies
$$[Au,u]_+\geq (a\mu_0-\omega)|u|_1+(a\beta +\mu_0)|xu|_1\geq \eta||u||,$$
for some $\eta>0$ provided $\mu_0>\omega/a$. Hence for such $a$, $A$ is strictly accretive, in particular closable. Next we compute the resolvent of $A$. The equation $(\lambda+A)u=f$ is equivalent to solving the ode
\begin{equation}\label{help} \lambda u(x)+ \omega u^\prime(x)+(\mu_0+\beta x)u(x)=f(x),\quad x>0, \end{equation}
with initial condition $u(0)=0$. Therefore we obtain
$$u=(\lambda+A)^{-1} f(x)=\frac{1}{\omega}\int_0^x \exp{-[(\lambda+\mu_0)(x-y)/\omega+\beta(x^2-y^2)/2\omega]} f(y)dy.$$
If $f\in L_1({\mathbb R}_+)$ then one easily obtains the estimate
$$ |u|_1\leq |f|_1/(\lambda+\mu_0).$$
If also $xf\in L_1({\mathbb R}_+)$ then
\begin{eqnarray*} |x^2u(x)|&\leq& \frac{1}{\omega}\int_0^x e^{-(\lambda+\mu_0)(x-y)/\omega}(x^2-y^2)e^{-\beta(x^2-y^2)/2\omega}|f(y)|dy\\ &+& \frac{1}{\omega}\int_0^x ye^{-\beta(x-y)2y/2\omega}y|f(y)|dy, \end{eqnarray*}
hence
$$|x^2u|_1\leq \frac{1}{\omega}\frac{\omega}{\lambda+\mu_0}\frac{2\omega}{\beta e}|f|_1 +\frac{1}{\omega}\frac{\omega}{\beta^2}|xf|_1.$$
This shows that $x^2u\in L_1({\mathbb R}_+)$, hence $xu\in L_1({\mathbb R}_+)$, and then by equation \eqref{help} also $u^\prime\in L_1({\mathbb R}_+)$ as well as $x u^\prime\in L_1({\mathbb R}_+)$, i.e.\ $u\in D(A)$. This shows that $A$ is $m$-accretive. As a consequence we note that $-A$ generates a $C_0$-semigroup in $X$ which is also positive and strictly contractive, hence exponentially stable.
\subsection{Accretivity of $A-B$}
We have
$$|\int_x^\infty u(y)dy|_1\leq |xu|_1,\quad |x\int_x^\infty u(y)dy|_1\leq \frac{1}{2}|x^2u|_1,$$
and therefore
$$\int_0^\infty (Au-Bu)\operatorname{sgn}(u)dx\geq \mu_0|u|_1+\beta|xu|_1-2\beta|xu|_1,$$
as well as
$$\int_0^\infty(Au-Bu)\operatorname{sgn}(u) xdx\geq -\omega|u|_1+\mu_0|xu|_1.$$
This yields
$$[(A-B)u,u]_+\geq (\mu_0a-\omega)|u|_1+(\mu_0-\beta a)|xu|_1\geq0,$$
for all $u\in D(A)$, provided $\mu_0a\geq\omega$ and $\mu_0\geq \beta a$. Such a choice of $a>0$ is possible if and only if the condition $\omega/\mu_0\leq \mu_0/\beta$ is met, i.e.\ if and only if
$$\omega\leq \mu_0^2/\beta$$
holds true. Now in the disease-free case we have $\omega=\lambda\tau/\gamma$, while in the disease case $\omega=\mu_0^2/\beta$; then $a=\mu_0/\beta$. Thus $A-B$ will be strictly accretive in the disease-free case while it will be accretive only in the disease case. In the first case, the decay rate can easily be estimated not to be smaller than $\mu_0-\sqrt{\lambda\beta\tau/\gamma}$.
\subsection{Density of the Range of $A-B$}
Let $f\in L_1({\mathbb R}_+;(a+x)dx)$ be given and assume $f\geq0$.
Set $u_1=(1+A)^{-1}f$ and define the sequence $u_n$ inductively by means of
$$ u_{n+1}=u_1+(1+A)^{-1} Bu_n.$$
Then $u_1\geq0$, and $u_{2}-u_1=(1+A)^{-1}Bu_1\geq0$, hence by induction $u_{n+1}\geq u_n$ pointwise, since $B$ is positive. This shows that the sequence of functions $u_n$ is nonnegative and increasing pointwise. Moreover,
$$\omega u^\prime_n +(1+\mu_0+\beta x)u_n= f+2\beta\int_x^\infty u_{n-1}(y)dy\leq f+2\beta\int_x^\infty u_n(y)dy,$$
which implies
$$(1+\mu_0)|u_n|_1+\beta|xu_n|_1\leq |f|_1+2\beta|xu_n|_1,$$
and
$$ -\omega|u_n|_1+(1+\mu_0)|xu_n|_1+\beta|x^2u_n|_1\leq |xf|_1+\beta|x^2u_n|_1.$$
Choosing $a$ as above this yields an a priori bound for the sequence $(u_n)$
$$||u_n||=a|u_n|_1+|xu_n|_1\leq C||f||,$$
and therefore we may conclude by the monotone convergence theorem that $u_n\rightarrow u_\infty$ in $X$ as $n\rightarrow\infty$. If in addition $x^2f\in L_1({\mathbb R}_+)$ then we obtain in a similar way boundedness of $x^2u_n$ in $X$. This implies $(1+A-B)u_n= f+B(u_{n-1}-u_n)\rightarrow f$ in $X$ as $n\rightarrow\infty$, hence $u_\infty\in D(\overline{A-B})$ and $u_\infty=(1+\overline{A-B})^{-1}f$. Since $L_1=L_1^+-L_1^+$ we may conclude $R(1+\overline{A-B})=X$, i.e.\ the closure of $A-B$ is $m$-accretive.
\bigskip
\begin{remark} The above proof shows that the resolvent of $\overline{A-B}$ is positive, hence the semigroup generated by this operator will be positive as well. \end{remark}
\subsection{Irreducibility}
Suppose $f\in X$ is nonnegative and $u$ solves
$$\omega u^\prime +(\lambda +\mu_0+\beta x)u =f+2\beta\int_x^\infty u(y)dy,\quad x\geq0,$$
with initial value $u(0)=0$. If $f\not\equiv0$ then let $x_1:=\inf\operatorname{supp} f$. We have
$$u(x)=\frac{1}{\omega}\int_0^x \exp{-[(\lambda+\mu_0)(x-y)/\omega+\beta(x^2-y^2)/2\omega]} [f(y)+Bu(y)]dy.$$
Since we already know $u(x)\geq0$, this formula implies $u(x)>0$ for all $x>x_1$. But then $\int_x^\infty u(y)dy>0$ for all $x\geq0$, and so $u(x)>0$ for all $x>0$. This proves the irreducibility of the semigroup generated by $\overline{A-B}$.
\subsection{$A-B$ is not Closed}
Unfortunately, the sum $A-B $ is not closed. We show this by the following example.
\begin{example} Set $u=\chi/x^3$ where $\chi$ denotes a cut-off function which is $0$ on $[0,1]$ and $1$ on $[2,\infty)$. Then $u,u^\prime, xu\in L_1({\mathbb R}_+)$, but $x^2u\not\in L_1({\mathbb R}_+)$, and $u(0)=0$. On the other hand,
\begin{eqnarray*} f(x)&:=& \omega u^\prime(x)+(\lambda +\mu_0+\beta x)u(x)-2\beta\int_x^\infty u(y)dy\\ &=& \omega\chi^\prime/x^3-3\omega\chi/x^4+(\lambda+\mu_0)\chi/x^3 +\beta \chi/x^2- 2\beta \int_x^\infty\chi(y)dy/y^3. \end{eqnarray*}
Since
\begin{eqnarray*} &&\chi(x)/x^2-2\int_x^\infty\chi(y)dy/y^3 = \chi(x)/x^2+\chi(y)/y^2|_x^\infty-\int^\infty_x\chi^\prime(y)dy/y^2\\ &=&-\int_x^\infty \chi^\prime(y)dy/y^2, \end{eqnarray*}
we obtain
$$f=\omega \chi^\prime(x)/x^3-3\omega\chi(x)/x^4+(\lambda+\mu_0)\chi(x)/x^3-\beta \int_x^\infty \chi^\prime(y)dy/y^2.$$
Obviously, $f$ as well as $xf$ belong to $L_1({\mathbb R}_+)$, so $A-B$ with domain $D(A)$ is not closed. \end{example}
\subsection{Summary}
Let us summarize what we have shown so far.
\begin{theorem} Suppose $\beta\omega\leq \mu_0^2$. Then problem (\ref{pde1}) is well-posed in $X=L_1({\mathbb R}_+;(a+x)dx)$ and admits an associated $C_0$-semigroup $T(t)=e^{-Lt}$ which is positive. If $a$ is chosen from the interval $a\in[\omega/\mu_0, \mu_0/\beta]$ then $T(t)$ is nonexpansive.
In the strictly disease free case $\omega=\lambda\tau/\gamma<\mu_0^2/\beta$, the semigroup $T(t)$ is exponentially stable with type $\omega_0(T)\leq -\mu_0+\sqrt{\lambda\beta\tau/\gamma}<0$. \end{theorem} \section{Asymptotic Behavior of the Autonomous Problem} \subsection{Compactness} Set $L=\overline{A-B}$. Since $L$ is m-accretive in $X=L_1({\mathbb R}_+;(a+x)dx)$, the spectrum $\sigma(L)$ is contained in the closed right halfplane. We want to show that the resolvent of $L$ is compact. For this purpose we derive another representation of $(\lambda+L)^{-1}$ for $\lambda>0$. Let $f\in X$ and set $u=(\lambda+L)^{-1}f$. Then we obtain $$u=(\lambda+A)^{-1}f + (\lambda+A)^{-1} B u,$$ and \begin{eqnarray*} (\lambda+A)^{-1}Bu&=& 2\beta(\lambda+A)^{-1}[\int_x^\infty u(y)dy]\\ &=& \frac{2\beta}{\omega} \int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}[\int_y^\infty u(r)dr]dy\\ &=& \frac{2\beta}{\omega} \int_x^\infty u(r)[\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy]dr\\ &+& \frac{2\beta}{\omega} \int_0^x u(r) [\int_0^r e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy]dr\\ &=& k_\lambda(x) \int_x^\infty u(r)dr + \omega(\lambda+A)^{-1} [k_\lambda u], \end{eqnarray*} where $$k_\lambda(x)= \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy.$$ Note that $$0\leq k_\lambda(x)\leq \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}dy\leq \frac{2\beta}{\lambda+\mu_0},$$ i.e.\ $k_\lambda\in L_\infty({\mathbb R}_+)$. We thus have the identity $$u(x)-k_\lambda(x)\int_x^\infty u(y)dy = (\lambda+A)^{-1}f(x)+ \omega(\lambda+A)^{-1}[k_\lambda u]=: g(x),$$ and $u(0)=0$. We may solve this equation for $u$ to the result $$ u(x)= g(x)-k_\lambda(x)\int_0^x \exp\left(-\int_y^x k_\lambda(r)dr\right) g(y)dy + k_\lambda(x) \exp \left(-\int_0^x k_\lambda(s)ds\right)<q_\lambda| f>,$$ where $$<q_\lambda,f> := \frac{1}{(\lambda+\mu_0)^2-\omega\beta}((\lambda+\mu_0)\int_0^\infty f(s)ds+\beta \int_0^\infty sf(s)ds)\,.$$ This way we have the representation \begin{equation} \label{repres} (\lambda+L)^{-1}f = (1-R_\lambda)(\lambda+A)^{-1}[ 1+\omega k_\lambda (\lambda+L)^{-1}]f +k_\lambda(x) \exp\left(-\int_0^x k_\lambda(s)ds \right)<q_\lambda| f>, \end{equation} with $$(R_\lambda g)(x) = k_\lambda(x)\int_0^x \exp \left(-\int_y^x k_\lambda(r)dr\right) g(y)dy.$$ Next $D(A)$ embeds compactly into $X$, hence $(\lambda+A)^{-1}$ is compact. From boundedness of $k_\lambda$ we may then conclude that $(\lambda+L)^{-1}$ is compact, as soon as we know that the Volterra operator $R_\lambda$ is bounded in $X$. To prove the latter we estimate as follows \begin{eqnarray*} ||R_\lambda g||&=&\int_0^\infty (a+x) k_\lambda(x)|\int_0^x \exp\left(-\int_y^x k_\lambda(r)dr\right) g(y)dy|dx\\ &\leq& \int_0^\infty|g(y)|[\int_y^\infty (a+x)k_\lambda(x)\exp\left(-\int_y^x k_\lambda(r)dr\right)dx]dy\\ &=& \int_0^\infty |g(y)| [ (a+y) + \int_y^\infty \exp\left(-\int_y^x k_\lambda(r)dr\right)dx]dy\\ &\leq& C_\lambda\int_0^\infty |g(y)|(a+y)dy=C_\lambda||g||, \end{eqnarray*} as we show now. 
\begin{eqnarray*} k_\lambda(x)&=& \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)(x-y)/\omega}e^{-\beta(x^2-y^2)/2\omega}dy\\ &\geq & \frac{2\beta}{\omega}\int_0^x e^{-(\lambda +\mu_0)y/\omega}e^{-\beta xy/\omega}dy\\ &=& \frac{2\beta}{\lambda+\mu_0+\beta x} (1- e^{-(\lambda+\mu_0+\beta x)x/\omega})\\ &\geq & \frac{2\beta}{\lambda+\mu_0+\beta x}\cdot \frac{(\lambda+\mu_0+\beta x)x/\omega}{1+(\lambda+\mu_0+\beta x)x/\omega}\\ &=& \frac{2\beta x}{\omega +(\lambda+\mu_0+\beta x)x}, \end{eqnarray*}
by the elementary inequality $1-e^{-x} \geq x/(1+x)$. This implies
\begin{eqnarray*} \int_y^x k_\lambda(r)dr &\geq& 2\beta\int_y^x rdr/(\omega +(\lambda+\mu_0+\beta r)r)\\ &=& \int_y^x \frac{2\beta r +\lambda+\mu_0}{\omega+(\lambda+\mu_0)r +\beta r^2}dr -(\lambda+\mu_0)\int_y^x \frac{dr} {\omega+(\lambda+\mu_0)r +\beta r^2}\\ &\geq& \log \frac{\omega +(\lambda+\mu_0)x + \beta x^2}{\omega+(\lambda+\mu_0)y +\beta y^2} - c_\lambda, \end{eqnarray*}
since the second integral is bounded. This estimate finally yields
$$\int_y^\infty \exp\left(-\int_y^x k_\lambda(r)dr\right)dx\leq e^{c_\lambda}\int_y^\infty\frac{\omega +(\lambda+\mu_0)y + \beta y^2} {\omega+(\lambda+\mu_0)x +\beta x^2}dx\leq C_\lambda(a+y). $$
This completes the proof of compactness of the resolvent of $L$.
\subsection{Ergodicity}
Since the resolvent of $L$ is compact we know that the spectrum of $L$ consists only of eigenvalues of finite multiplicity; these are poles of the resolvent of $L$. By accretivity of $L$ we have the inequality $|(\lambda+L)^{-1}|_{{\cal B}(X)}\leq 1/{\rm Re}\lambda$, ${\rm Re} \lambda>0$, hence the resolvent can only have poles of first order on the imaginary axis. This shows that all eigenvalues on the imaginary axis are semisimple. Compactness of the resolvent implies also that the range of $\lambda+L$ is closed, for each $\lambda\in{\mathbb C}$. In particular, we have the direct sum decomposition $X= N(L)\oplus R(L)$, i.e.\ ergodicity in the sense of Abel. Now we concentrate on the disease equilibrium which means $a=\mu_0/\beta$ and $\omega=\mu_0^2/\beta$. A function $e(x)$ belongs to the kernel of $L$ if
$$ \omega e^\prime (x)+(\mu_0+\beta x) e(x)- 2\beta \int_x^\infty e(y)dy =0, \quad x>0,\; e(0)=0,$$
or equivalently
$$ e^{\prime\prime}(x)+\frac{\beta}{\mu_0}(1+\frac{\beta}{\mu_0}x) e^\prime(x)+ 3 \frac{\beta^2}{\mu_0^2} e(x)=0, \quad x>0,\; e(0)=0.$$
The scaling $e(x)= v(\beta x/\mu_0)$ reduces this problem to
$$ v^{\prime\prime}(z) +(1+z)v^\prime(z)+3 v(z)=0,\quad z>0,\; v(0)=0.$$
In view of the initial condition $v(0)=0$, the kernel of $L$ can be at most one-dimensional, and a simple computation yields that
$$v(z)= (z+z^2/2) e^{-(z+z^2/2)}, \quad z>0,$$
is a solution. Therefore $N(L)={\rm span}\{ e\}$, with $e(x)= (\beta/\mu_0)^2v(\beta x/\mu_0)$, and another simple computation yields
$$ \int_0^\infty (a+x) e(x)dx =1.$$
Since $L$ is Fredholm with index zero, the kernel $N(L^*)$ of the dual of $L$ is also one-dimensional; it consists of the constant functions. The ergodic projection ${\mathcal P}$ onto the kernel of $L$ along the range of $L$ is then given by
\begin{equation}\label{projection} {\mathcal P} u(x)= [\int_0^\infty(a+y) u(y)dy] e(x)= <u|e^*>e(x), \quad x>0. \end{equation}
Suppose there are no other eigenvalues of $L$ on the imaginary axis.
Then $L^*$ also has no other eigenvalues on the imaginary axis, and then by the theorem of Arendt, Batty, Lubich and Phong we may conclude that
$$ e^{-Lt} u \rightarrow {\mathcal P} u \quad \mbox{ as } t\rightarrow\infty, \mbox{ for each } u\in X,$$
i.e.\ the semigroup generated by $-L$ is strongly ergodic. We show now that there are in fact no eigenvalues other than 0 on the imaginary axis. Suppose on the contrary that
$$ i\rho u(x) +\omega u^\prime(x) +(\mu_0+\beta x)u(x)= 2\beta\int_x^\infty u(y)dy, \quad x>0,\; u(0)=0,$$
$u\neq0$. Multiplying this equation with $\bar{u}/|u|$, taking real parts, and integrating over ${\mathbb R}_+$ we obtain
\begin{equation}\label{i} \mu_0|u|_1 +\beta|xu|_1 = 2\beta {\rm Re}\int_0^\infty u(x)\int_0^x \bar{u}(y)/|u(y)|dydx\leq 2\beta |xu|_1, \end{equation}
and similarly, multiplying with $x\bar{u}(x)/|u(x)|$ we get
\begin{equation}\label{ii} -\omega|u|_1+ \mu_0|xu|_1+\beta |x^2u|_1= 2\beta {\rm Re} \int_0^\infty u(x)\int_0^x y\bar{u}(y)/|u(y)|dydx\leq \beta |x^2u|_1. \end{equation}
Multiplying the first inequality with $a=\mu_0/\beta$ and adding the second we arrive at a contradiction if at least one of the inequalities (\ref{i}), (\ref{ii}) is strict. Hence we must have
$$ {\rm Re}\int_0^\infty u(x)\int_0^x \bar{u}(y)/|u(y)|dydx= |xu|_1,$$
which implies with $\arg u(x)=\theta(x)$
$$x\equiv{\rm Re}\int_0^x e^{i(\theta(x)-\theta(y))}dy= \frac{1}{2}\frac{d}{dx} |\int_0^x e^{i\theta(y)}dy|^2,$$
or equivalently
$$ |\int_0^x e^{i\theta(y)}dy|^2=x^2, \quad x>0.$$
But this is only possible if $\theta(y)$ is constant; w.l.o.g.\ we may assume $\theta =0$, i.e.\ $u(x)$ is nonnegative, which in turn yields $\rho=0$ since $u\neq0$ by assumption.
\bigskip
\subsection{Summary}
Let us summarize what we have shown in this section.
\begin{theorem} Assume the disease case $\omega=\mu_0^2/\beta$, $a=\mu_0/\beta$. Then the semigroup $T(t)=e^{-Lt}$ is strongly ergodic; it converges strongly to the projection ${\mathcal P}$ onto the kernel $N(L)$ of $L$ along its range $R(L)$. The kernel is one-dimensional and spanned by $e(x)= (\beta/\mu_0)^2\Phi(\beta x/\mu_0)$, where $\Phi(z)=(z+z^2/2)e^{-(z+z^2/2)}$, and the projection ${\mathcal P}$ is given by
$$ {\mathcal P} u(x)=[\int_0^\infty(a+y)u(y)dy]e(x)= <e^*|u>e(x), \quad x>0,\; u\in X.$$
\end{theorem}
\noindent {\em Remark.} We do not know whether the ergodicity is exponential since it is not clear that the type of the semigroup $e^{-Lt}$ restricted to $R(L)$ is negative.
\section{Well-posedness of the Non-Autonomous Evolution}
\subsection{The Trivial Evolution}
Let $\omega\in C({\mathbb R}_+)$ be positive, such that $0<\omega_\infty=\lim_{t\rightarrow\infty} \omega(t)$ exists, and assume $\omega(\cdot)-\omega_\infty\in L_1({\mathbb R}_+)$. Let
$$\omega_+=\max_{s\geq0} \omega(s)\quad \mbox{ and }\quad \omega_-=\min_{s\geq0} \omega(s),$$
and note that $\omega_+\geq \omega_->0$. We are particularly interested in the cases $\omega_\infty=\lambda\tau/\gamma$, the disease-free case, and $\omega_\infty= \mu_0^2/\beta$, the disease case. We want to show that the nonautonomous problem is well-posed in $X=L_1({\mathbb R}_+;(a+x)dx)$. We begin with the problem
\begin{eqnarray} \label{trivial} \partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=0,\quad x>0, t>s\geq0\\ u(s,x)=g(x),\quad u(t,0)=0,\quad t>s\geq0, x>0.\nonumber \end{eqnarray}
The method of characteristics yields easily the evolution operator $U_0(t,s)$ for this problem.
It is given by
\begin{eqnarray} \label{evop} [U_0(t,s)g](x)=u(t,x)= g(x-\int_s^t\omega(\tau)d\tau)e^{-\phi(t,s,x)},\\ \phi(t,s,x)=\mu_0(t-s)+ \beta(t-s)(x-\int_s^t\omega(\tau)d\tau)+\beta\int_s^t(t-\tau)\omega(\tau)d\tau,\nonumber \end{eqnarray}
if we extend $g$ trivially to ${\mathbb R}$. We obviously have the estimate $|U_0(t,s)|_{{\cal B}(X)}\leq e^{-\mu_0(t-s)}$, and $u(t,x)$ is a strong solution in $X$ if the initial function $g$ belongs to $D$ defined by
$$ D:=\{g\in L_1({\mathbb R}_+):\;x^2g,g^\prime,xg^\prime\in L_1({\mathbb R}_+), g(0)=0\}.$$
We also need the solution of
\begin{eqnarray} \label{trivial1} \partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=0,\quad x>0, t>s\geq0\nonumber\\ u(s,x)=0,\quad u(t,0)=h(t),\quad t>s\geq0, x>0. \end{eqnarray}
Again the method of characteristics applies and yields with $K(t,x) = \int_{\rho(t,x)}^t(r-\rho(t,x))\omega(r)dr$ the formula
$$ [V_0(t,s)h](x)=u(t,x)=h(\rho(t,x))e^{-[\mu_0(t-\rho(t,x))+\beta x(t-\rho(t,x))-\beta K(t,x)]}, $$
for $x<\int_s^t\omega(r)dr$, and zero elsewhere, where the function $\rho(t,x)$ is defined by the equation
\begin{equation} \label{kappa} x=\int_\rho^t \omega(r)dr; \end{equation}
note that this equation has a unique solution $\rho(t,x)\in (s,t)$, since $\omega(r)\geq \omega_- >0$ for all $r\geq0$, by assumption, and $x<\int_s^t\omega(r)dr$. Observe that with $K_0(t,s) =\int_s^t\omega(r)dr$ we have
\begin{eqnarray*} &&\int_0^\infty (a+x)[V_0(t,s)h](x)dx\leq |h|_\infty\int_0^{K_0(t,s)}(a+x)e^{-\mu_0(t-\rho(t,x))}dx\\ && \leq |h|_\infty \int_s^t (a+\int_\sigma^t\omega(r)dr)e^{-\mu_0(t-\sigma)} \omega(\sigma) d\sigma\\ &&\leq |h|_\infty \omega_+\int_0^{t-s}(a+\omega_+\sigma)e^{-\mu_0 \sigma}d\sigma\leq C|h|_\infty , \end{eqnarray*}
by the variable transformation $\sigma=\rho(t,x)$. Thus the part coming from a nontrivial bounded boundary value $h$ is bounded in $X$.
\subsection{Well-posedness for the Full Problem}
Let us now consider the full problem, i.e.
\begin{eqnarray} \label{full} \partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=2\beta\int_x^\infty u(t,y)dy,\\ u(s,x)=g(x),\quad u(t,0)=0,\quad t>s\geq0, \; x>0.\nonumber \end{eqnarray}
Since the standard cone in $X$ is reproducing, i.e. $L_1=L_1^+-L_1^+$, we may restrict attention to nonnegative initial functions $g$. We define the sequence $u_n$ inductively by
$$u_1(t):=U_0(t,s)g, \quad u_{n+1}(t)=u_1(t)+\int_s^t U_0(t,r)Bu_n(r)dr, \quad t\geq s\geq 0.$$
Since $U_0(t,s)$ is positive the functions $u_n$ are as well, and $u_2(t)\geq u_1(t)$ since $B$ is positive. Inductively we obtain with
$$u_{n+1}(t)-u_n(t)=\int_s^t U_0(t,r)B(u_n(r)-u_{n-1}(r))dr, \quad t\geq s\geq0,$$
that the functions $u_n$ are pointwise increasing w.r.t.\ $n\in{\mathbb N}$. Suppose that $g\in D$. Then $u_n$ is a strong solution of
\begin{eqnarray*} &&\partial_t u_n(t,x)+\omega(t)\partial_x u_n(t,x) +(\mu_0+\beta x) u_n(t,x)=2\beta\int_x^\infty u_{n-1}(t,y)dy\\ &&\qquad\qquad\leq 2\beta\int_x^\infty u_n(t,y)dy,\quad x>0, t>s\geq0\\ &&u_n(s,x)=g(x),\quad u_n(t,0)=0,\quad t>s\geq0, x>0, \end{eqnarray*}
i.e.\ $u_n$ is a strong lower solution of (\ref{full}).
Multiplying the equation with $x^i$ and integrating over ${\mathbb R}_+$, this yields with $z_i(t)=|x^i u_n(t)|_1$
$$\partial_t z_0(t) +\mu_0 z_0(t)+\beta z_1(t)\leq 2\beta z_1(t),$$
for $i=0$, and for $i=1$
$$\partial_tz_1(t)-\omega(t) z_0(t)+\mu_0z_1(t)+\beta z_2(t)\leq \beta z_2(t).$$
Setting $z(t)=(z_0(t),z_1(t))^T$, $b(t)=(0,(\omega(t)-\omega_\infty)z_0(t))^T$, and defining $G$ as the $2\times2$-matrix with entries $-\mu_0,\beta,\omega_\infty,-\mu_0$, this inequality becomes
$$\partial_tz(t)\leq Gz(t)+b(t), \quad t\geq s\geq0.$$
The eigenvalues of $G$ are given by $\lambda_\pm= -\mu_0 \pm \sqrt{\beta\omega_\infty}$ which are both nonpositive if $\beta\omega_\infty\leq \mu_0^2$, which is true in both the disease-free and the disease case. Since $e^{Gt}$ is positive we may conclude
$$z(t)\leq e^{G(t-s)}z(s)+\int_s^t e^{G(t-r)}b(r)dr.$$
Boundedness of $e^{Gt}$ then implies an inequality of the form
$$|z(t)|\leq C + C\int_s^t |\omega(r)-\omega_\infty||z(r)|dr,\quad t\geq s\geq0,$$
which implies boundedness of $z(t)$ on $[s,\infty)$ since $(\omega(\cdot)-\omega_\infty)\in L_1({\mathbb R}_+)$ by assumption. Note that the constant $C$ depends only on the parameters $\mu_0,\beta,\omega_\infty$ and on $||g||$. Therefore the functions $u_n(t)$ are bounded in $X$ uniformly in $t$ and $n$. By monotone convergence we may conclude $u_n(t)\rightarrow u(t)$ in $X$ for each $t\geq s$. Since $B$ is positive, $Bu_n\rightarrow Bu$ in $L_1({\mathbb R}_+)$ as well, and then also
\begin{equation} \label{mild} u(t)=U_0(t,s)g+\int_s^t U_0(t,r)Bu(r)dr,\quad t\geq s\geq0, \end{equation}
at least in $L_1({\mathbb R}_+)$. A density argument finally shows that this conclusion is valid for all initial data $g\in X$.
\bigskip
\noindent {\em Remark.} It is not clear that solutions of (\ref{mild}) are unique. The reason for this is that $B$ is unbounded. Therefore we need another definition of mild solution.
\bigskip
\noindent {\bf Definition.}\,{\em Let $f\in L_{1,loc}({\mathbb R}_+;X)$. \\ (i)\, We call a function $u\in C({\mathbb R}_+;X)$ a strong solution of
\begin{eqnarray} \label{full1} \partial_t u(t,x)+\omega(t)\partial_x u(t,x) +(\mu_0+\beta x) u(t,x)=2\beta\int_x^\infty u(t,y)dy+f(t,x),\nonumber\\ u(s,x)=g(x),\quad u(t,0)=0,\quad t>s\geq0, x>0. \end{eqnarray}
if $u\in C^1({\mathbb R}_+;X)\cap C({\mathbb R}_+;D)$ and (\ref{full1}) is valid pointwise.\\ (ii)\, We call a function $u\in C({\mathbb R}_+;X)$ a mild solution of (\ref{full1}) if there are $f_n\in L_{1,loc}({\mathbb R}_+;X)$ and strong solutions $u_n$ of (\ref{full1}), with $f$ replaced by $f_n$, such that $u_n\rightarrow u$ and $f_n\rightarrow f$ as $n\rightarrow\infty$, in $X$, uniformly on compact intervals. }
\bigskip
\noindent Suppose that $g\in D$ has compact support. Then each iteration $u_n(t)$ also has compact support, namely
$$\operatorname{supp} u_n(t)\subset \operatorname{supp} g + \omega_+[0,t],$$
for each $n\in{\mathbb N}$. Therefore each function $u_n(t)$ is a strong solution of (\ref{full1}) with inhomogeneity $f_n(t)=B(u_{n-1}(t)-u_n(t))$. This proves that the limit $u(t)$ is a mild solution. Approximation then shows that (\ref{full}) has at least one mild solution, for each initial value $g\in X$. Uniqueness of mild solutions can be obtained as follows.
If $u$ is a strong solution of (\ref{full1}) then the equation yields as above the inequality
$$ \partial_t ||u(t)||\leq \omega_+ ||u(t)||+ ||f(t)||,\quad t>0,$$
hence
$$ ||u(t)||\leq e^{\omega_+(t-s)}||g||+\int_s^t e^{\omega_+(t-r)}||f(r)||dr.$$
By approximation this inequality is also valid for mild solutions, hence $u\equiv 0$ in case $f\equiv g=0$. Thus mild solutions are unique and of course they satisfy the integral equation (\ref{mild}).
\subsection{Summary}
We have proved the following result about well-posedness of \eqref{full}.
\begin{theorem} Suppose $\omega\in C({\mathbb R}_+)$ is a given strictly positive function, such that $\omega_\infty=\lim_{t\rightarrow\infty} \omega(t)>0$ exists and $\omega(\cdot)-\omega_\infty\in L_1({\mathbb R}_+)$. Then \eqref{full} is well-posed in the sense of the definition given above. There exists a unique evolution operator $U(t,s)$ in $X$ generated by \eqref{full}, which is bounded in $X$, uniformly in $0\leq s\leq t<\infty$, and positive. Moreover, \eqref{full} has finite speed of propagation with maximum speed less than $ \omega_+= \sup_{t\geq0} \omega(t)$. \end{theorem}
\subsection{Higher Order Bounds}
Consider an initial function $g\in C_0^\infty(0,\infty)$. Then $u_1$ is smooth as well and has compact support for each $t\geq s$; the same holds true for $u_2$, hence by induction for all $u_n$. Setting $v_n=\partial_x u_n$ we have the following problem for $v_n$.
\begin{eqnarray} \label{derivative} &&\partial_t v_n+\omega(t)\partial_x v_n +(\mu_0+\beta x) v_n= -\beta [u_n+2u_{n-1}],\\ &&v_n(s,x)=g^\prime(x),\quad v_n(t,0)=\psi_n(t),\quad t>s\geq0, x>0\nonumber \end{eqnarray}
where $\psi_n(t)= \frac{2\beta}{\omega(t)}|u_{n-1}(t)|_1$. This implies
$$\partial_x u_n(t)=v_n(t) = U_0(t,s)g^\prime -\beta\int_s^t U_0(t,r)[u_n(r)+2u_{n-1}(r)]dr+w_n(t),\quad t\geq s\geq0,$$
with
$$w_n(t)=2\beta V_0(t,s)[|u_{n-1}(\cdot)|_1/\omega(\cdot)].$$
Uniform boundedness of $u_n$ in $X$ and exponential stability of the evolution operator $U_0(t,s)$ in $X$ then imply boundedness of $\partial_x u_n$ in $X$. Passing to the limit we get
$$\partial_x u(t) = U_0(t,s)g^\prime -3\beta\int_s^t U_0(t,r)u(r)dr+ w(t),\quad t\geq s\geq0,$$
where
$$w(t,x)=2\beta V_0(t,s)[|u(\cdot)|_1/\omega(\cdot)]. $$
This yields $\partial_xu\in C_b([s,\infty);X)$. The last identity was proven for $g\in C_0^\infty(0,\infty)$, but via density can be extended to $g\in D$.
\section{Convergence}
We are now ready to prove the main result on convergence. Let us first look at the disease-free case. Then with $A(t)$, $B$, defined as in Section 2, and $L(t)=\overline{A(t)-B}$, we know that $L(t)$ is strictly accretive for large times $t$ if the parameter $a$ is chosen in $a\in(\lambda\tau/(\gamma\mu_0),\mu_0/\beta)$. This proves exponential stability of the trivial solution in the disease-free case, with decay rate at least $\mu_0-\sqrt{\lambda\beta\tau/\gamma}$. Suppose we have a solution $u$ of the nonautonomous problem in the disease case such that $\partial_xu(t)$ is bounded in $X$. Then we may write
\begin{eqnarray} \label{full2} &&\partial_t u+\omega_\infty\partial_x u +(\mu_0+\beta x) u-2\beta\int_x^\infty u(t,y)dy =(\omega_\infty-\omega(t))\partial_xu,\\ &&u(0,x)=g(x),\quad u(t,0)=0,\quad t>0, x>0.\nonumber \end{eqnarray}
Therefore we obtain the identity
$$u(t)=e^{-Lt}g +\int_0^t e^{-L(t-r)}(\omega_\infty-\omega(r))\partial_xu(r)dr,\quad t\geq0.$$
We know from Section 3 that $e^{-Lt}$ converges strongly in $X$ to the ergodic projection ${\mathcal P}$.
On the other hand, the scalar function $\omega(\cdot)-\omega_\infty$ belongs to $L_1({\mathbb R}_+)$ by assumption. This then implies $$u(t)\rightarrow u_\infty\in R({\mathcal P}).$$ Thus we have convergence in $X$ to a unique element for all nonnegative solutions with initial values in $D$. Since the evolution operator associated with (\ref{full}) is bounded in $X$, this convergence extends to all initial values $u_0\in X$. Returning now to the system \eqref{pde}, we may compute the limit $u_\infty$. For this purpose recall that $U(t)=\int_{x_0}^\infty u(t,x)dx\rightarrow U_\infty$ and $P(t)=\int_{x_0}^\infty u(t,x)xdx\rightarrow P_\infty$. This implies $$ u_\infty = \lim_{t\rightarrow\infty} {\mathcal P} u(t)= \lim_{t\rightarrow\infty} [aU(t)+P(t)-x_0U(t)]e= [\mu U_\infty/\beta+P_\infty]e.$$ Note that $u_\infty$ is independent of the initial values $V_0$ and $u_0$. This completes the proof of Theorem 1.2. \bibliographystyle{amsnum}
\section{Introduction} \label{intro} During the last two decades, several quantum key distribution (QKD) protocols have been proposed, which use two-level quantum systems (qubits) as information carriers \cite{BB84,qubitPRL,qubitPRA}. The security of these protocols against all kinds of attacks has been analyzed extensively and various unconditional security proofs have been presented \cite{LC,M,SPTKI,Lo,GLLPIM,GPTLC,GL-C}. From the experimental point of view, a number of prototypes based on qubits have been developed \cite{RMP}, while QKD has been successfully performed outside the laboratory at distances up to about $67$km using telecom fibers \cite{Setal,EPR}, and up to $23.4$km \cite{Open} through open air. In contrast to qubits, the use of high-dimensional quantum systems in quantum cryptography has attracted considerable attention only recently. Currently, qudits ($d$-di\-me\-nsio\-nal quantum systems) can be realized experimentally in several ways (including multiport beam splitters, bi-photons, higher-order parametric down-conversion and energy-time entanglement) \cite{QuditExp,TAZG,RMSTZG}. As far as QKD protocols are concerned, qudits can carry more information than qubits increasing thus the flux of information between the two legitimate users (Alice and Bob). For a prime power $d$ it has been demonstrated that there exist $(d+1)$ mutually unbiased bases. Hence, the natural extensions of the standard BB84 and six-state qubit-based QKD protocols to higher dimensions involve $2d$ and $d(d+1)$ states, respectively \cite{WF,BBRV}. These latter qudit-based QKD schemes are able to tolerate higher error rates than their qubit-based counterparts \cite{chau,PABM,PT,DKCK,BKBGC,AGS}. The maximal error rate that can be tolerated by a particular QKD protocol (also referred to as {\em threshold disturbance}) quantifies the robustness of the protocol against a specific eavesdropping strategy, and depends on the algorithm that Alice and Bob are using for post-processing their raw key. In practice, nowadays secret keys can be distilled efficiently by means of one- or even two-way classical post-processing \cite{CK-BBCM,Cascade}, while advantage distillation protocols using two-way classical communication seem to be still rather inefficient \cite{Mau}. In principle, however, quantum distillation protocols involving two-way communication between Alice and Bob [also referred to as two-way entanglement purification protocols (EPPs)] can tolerate substantially higher error rates than their classical counterparts and can be applied whenever the quantum state shared between the two honest parties is freely entangled, i.e. distillable \cite{DEJ,BDSW,ADGJ,HH}. For $2\otimes 2$ quantum systems, non-distillability is equivalent to separability \cite{Letal,P-H} and thus there seems to exist a complete equivalence between entanglement distillation and secrecy. In particular, for qubit-based QKD protocols and under the assumption of individual attacks, it was proven recently that the extraction of a secret key from a quantum state is possible if and only if entanglement distillation is possible \cite{AMG}. For higher dimensions, however, the complete equivalence between entanglement distillation and secrecy, has been put into question by Horodecki {\em et al.} \cite{HHHO}, who showed that a secret key can, in principle, be extracted even from bound entangled states \cite{HHH}. 
Nevertheless, for arbitrary dimensions, {\em provable quantum entanglement is always a necessary precondition for secure QKD} \cite{CLL-AG}. Therefore, the natural question arises whether qudit-based QKD protocols can indeed go beyond entanglement distillation. In other words, what is the maximal error rate that can, in principle, be tolerated by a qudit-based QKD under the assumption of general coherent attacks? In this paper, we address this question by focusing on qudit-based QKD protocols that use two mutually unbiased bases. To date, all investigations related to the security of such protocols have concentrated mainly on individual attacks (e.g. quantum cloning machines) and/or one-way post-processing of the raw key \cite{PABM,PT,DKCK,BKBGC,AGS}. Here, under the assumption of general coherent (joint) attacks, we show that for estimated disturbances below $(d-1)/2d$ Alice and Bob can be confident that they share distillable entanglement with high probability. On the other hand, an estimated disturbance above $(d-1)/2d$ does not enable Alice and Bob to infer that their quantum state is entangled ({\em no provable quantum entanglement}). Hence, in view of the necessary precondition for secure key distribution \cite{CLL-AG}, our result demonstrates that $(d-1)/2d$ is also the ultimate threshold disturbance for the prepare-and-measure schemes of the protocols. Furthermore, our result implies that, for the post-processing we consider throughout this work, the extraction of a secret key beyond entanglement distillation is impossible in the framework of qudit-based QKD protocols using two bases. This paper is organized as follows: For the sake of completeness, in Sec. \ref{basics} we summarize basic facts which are necessary for the subsequent discussion. In Sec. \ref{2bases} we briefly describe the prepare-and-measure and the enta\-ngle\-ment-based versions of the $2$-bases QKD protocols using qudits. Subsequently, Sec. \ref{D-sec} focuses on the key quantity of this work, namely the estimated error rate (disturbance) and its symmetries. Finally, the threshold disturbance for $2$-bases qudit-based QKD protocols is derived in Sec. \ref{distil} and various examples are presented.
\section{Qudits and the generalized Pauli group}
\label{basics}
Throughout this work we consider QKD protocols with qudit systems as information carriers. Each qudit corresponds to a $d-$dimensional Hilbert space $\mathbb{C}^d$ where $d=p^r$ is a prime power, i.e. $p$ is a prime and $r$ is a positive integer \cite{note_prime}. From now on all arithmetic is performed in the finite (Galois) field $\field{d}$ \cite{ECC-book}. Theoretical investigations of $d$-level quantum systems are performed conveniently with the help of the generalized Pauli group. For this purpose let us define the unitary operators
\begin{eqnarray} {\cal X} &=& \sum_{\alpha\in\field{d}}\ket{\alpha + 1}\bra{\alpha},\\ {\cal Z} &=& \sum_{\alpha\in\field{d}}\omega^{{\rm tr}(\alpha)} \ket{\alpha}\bra{\alpha}, \label{xz} \end{eqnarray}
where $\omega=\exp(i2\pi/p)$ is a primitive $p$-th root of unity and
\begin{eqnarray} {\rm tr}(\alpha) = \sum_{j=0}^{r-1}\alpha^{p^j} \label{trc} \end{eqnarray}
is the absolute trace of $\alpha\in\field{d}$. The states $\{\ket{\alpha};\alpha\in\field{d}\}$ constitute an orthonormal computational basis on the Hilbert space of a qudit $\mathbb{C}^d$.
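The following minimal numerical sketch (an illustration only, restricted to a prime dimension $d$, so that ${\rm tr}(\alpha)=\alpha$ and the field arithmetic reduces to arithmetic modulo $d$) builds the operators ${\cal X}$ and ${\cal Z}$ just defined and checks their unitarity together with the Weyl-type commutation relation ${\cal Z}^n{\cal X}^m=\omega^{mn}{\cal X}^m{\cal Z}^n$:
\begin{verbatim}
# Sketch for a prime dimension d: construct X (shift) and Z (clock) and verify
# unitarity and the commutation relation Z^n X^m = w^{mn} X^m Z^n.
import numpy as np

d = 5                                    # any prime; tr(alpha) = alpha in this case
w = np.exp(2j * np.pi / d)

X = np.zeros((d, d), dtype=complex)      # X|a> = |a+1 mod d>
Z = np.zeros((d, d), dtype=complex)      # Z|a> = w^a |a>
for a in range(d):
    X[(a + 1) % d, a] = 1.0
    Z[a, a] = w**a

I = np.eye(d)
assert np.allclose(X @ X.conj().T, I) and np.allclose(Z @ Z.conj().T, I)

for m in range(d):
    for n in range(d):
        lhs = np.linalg.matrix_power(Z, n) @ np.linalg.matrix_power(X, m)
        rhs = w**(m * n) * np.linalg.matrix_power(X, m) @ np.linalg.matrix_power(Z, n)
        assert np.allclose(lhs, rhs)
print("checks passed for d =", d)
\end{verbatim}
For a genuine prime-power dimension $d=p^r$ with $r>1$ the construction is analogous, but the phases must be computed with the absolute trace over $\field{d}$ rather than with integer products modulo $d$.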
The unitary operators ${\cal X}$ and ${\cal Z}$ generate the generalized Pauli group with unitary elements \begin{eqnarray} {\cal E}_{mn}= \{{\cal X}^m {\cal Z}^n: m,n\in\field{d}\}. \label{err} \end{eqnarray} These $d^2$ unitary operators form an error group on $\mathbb{C}^d$ \cite{ErrorGroup}, and are the generalizations of the Pauli operators for qubits. In fact the indices $m$ and $n$ refer to shift and phase errors in the computational basis, respectively. Thus the generalized Pauli operators can be represented in the form \begin{eqnarray} {\cal E}_{mn} = \sum_{k\in\field{d}} \omega^{{\rm tr}(k\cdot n)} \ket{k+m}\bra{k}, \label{err2} \end{eqnarray} with \begin{eqnarray} {\cal Z}^n{\cal X}^m&=&\omega^{{\rm tr} (m\cdot n)}{\cal X}^m {\cal Z}^n. \label{prop2} \end{eqnarray} Consider now a bipartite system of two qudits $A$ and $B$. It is not hard to show that the operators ${\cal X}_A\otimes{\cal X}_B^*$ and ${\cal Z}_A\otimes{\cal Z}_B^*$ constitute a {\em complete set of commuting operators} in the Hilbert space of two distinguishable qudits $\mathbb{C}_A^d\otimes\mathbb{C}_B^d$, while their simultaneous eigenstates are the $d^2$ maximally entangled states \begin{eqnarray} \ket{\Psi_{mn}}=\frac{1}{\sqrt{d}} \sum_{k\in\field{d}}\openone_A\ket{k_A}\otimes{\cal E}_{mn;B}\ket{k_B}, \label{bell-like} \end{eqnarray} with $m,n\in\field{d}$. These states are the generalization of the Bell states to higher dimensions and they form an orthonormal basis in $\mathbb{C}_A^d\otimes\mathbb{C}_B^d$. The singlet state $\ket{\Psi_{00}}$ is of particular interest because it remains invariant under any unitary transformation of the form ${\cal U}_A\otimes{\cal U}_B^*$. In fact $\ket{\Psi_{00}}$ is one of the key elements of the entanglement-based version of the qudit cryptographic protocols described in the following section. \section{Two-bases QKD protocols} \label{2bases} \subsection{Mutually unbiased bases} \label{2bases-1} Of central importance in the context of QKD is the notion of mutually unbiased (maximally conjugated) bases. It has been demonstrated that for a prime power $d$, there exist $d+1$ such bases, i.e. the eigenbases of the operators ${\cal Z},\,{\cal X},{\cal XZ},\,{\cal XZ}^2,\ldots,{\cal XZ}^{d-1}$ \cite{WF,BBRV}. In a qudit-based $2$-bases QKD protocol (to be referred to hereafter as $2d$-state protocol), Alice and Bob use for their purposes only two mutually unbiased bases ${\cal B}_1$ and ${\cal B}_2$ with $d$ basis-states each. Following \cite{BKBGC,AGS}, from now on the eigenbasis $\{\ket{k} : k\in\field{d}\}$ of the operator ${\cal Z}$ is chosen as the standard (computational) basis ${\cal B}_1$, while the second basis ${\cal B}_2\equiv\{\ket{\bar{l}} : l\in\field{d}\}$ is chosen as the Fourier-dual of the computational basis, i.e. $\ket{\bar{l}}\equiv\sum_k{\cal H}_{lk}\ket{k}$, with \begin{eqnarray} {\cal H} = \frac{1}{\sqrt{d}}\sum_{i,j\in\field{d}}\omega^{{\rm tr}(i\cdot j)} \ket{i}\bra{j} \label{fourier} \end{eqnarray} denoting the discrete Fourier transformation. One can verify easily that ${\cal H}$ is symmetric and thus unitary, i.e. ${\cal H}^\dag={\cal H}^{-1}={\cal H}^*$. This property will be used extensively in the following sections. Besides, errors in the two maximally conjugated bases are related via the discrete Fourier transform, i.e. \begin{eqnarray} {\cal H}^\dag {\cal E}_{mn} {\cal H} = \omega^{-{\rm tr}(m\cdot n)}{\cal E}_{nm}^*. 
\label{errorsMC} \end{eqnarray} In other words, shift errors in the computational basis become phase errors in the complementary basis and vice-versa. \subsection{Prepare-and-measure QKD scheme} \label{2bases-2} In a typical $2d$-state prepare-and-measure scheme Alice sends to Bob a sequence of qudits each of which is randomly prepared in one of the $2d$ non-orthogonal basis-states $\{\ket{k}\}$ or $\{\ket{\bar{l}}\}$. Bob measures each received particle randomly in ${\cal B}_1$ or ${\cal B}_2$. After the distribution stage, Alice and Bob agree on a random permutation of their data and publicly discuss the bases chosen, discarding all the dits where they have selected different bases (sifting procedure). Subsequently, they randomly select a sufficient number of dits \cite{half} from the remaining random sifted key and determine their error probability. If, as a result of a noisy quantum channel or of an eavesdropper, the estimated disturbance is too high the protocol is aborted. Otherwise, Alice and Bob perform error correction and privacy amplification with one- or two-way classical communication, in order to obtain a smaller number of secret and perfectly correlated random dits \cite{GL-C,chau,BKBGC,AGS,CK-BBCM,Cascade,Mau}. \subsection{Entanglement-based QKD scheme} \label{2bases-3} From the point of view of an arbitrarily powerful eavesdropper the above prepare-and-measure scheme is equivalent to an entanglement-based QKD protocol \cite{chau,AGS,BBM}. In this latter form of the protocol Alice prepares each of $2N$ entangled-qudit pairs in the maximally entangled state \begin{eqnarray} \ket{\Psi_{00}}=\frac{1}{\sqrt{d}}\sum_{k\in\field{d}} \ket{k_A}\otimes\ket{k_B}, \end{eqnarray} where the subscripts $A,B$ refer to Alice and Bob, respectively. Alice uses for her purposes the set of bases $\{{\cal B}_1, {\cal B}_2\}$ whereas Bob uses the set $\{{\cal B}_1, {\cal B}_2^*\}$, where ${\cal B}_2^*\equiv\{{\cal H}^*\ket{k} : k\in\field{d}\}$ \cite{chau,AGS}. More precisely, Alice keeps half of each pair and submits the other half to Bob after having applied a random unitary transformation chosen from the set $\{\openone, {\cal H}\}$. As soon as Bob acknowledges the reception of all the particles, Alice reveals the sequence of operations she performed on the transmitted qudits and Bob undoes all of them, i.e. he applies $\openone$ or ${\cal H}^{-1}$ on each qudit separately. Thus, at this point, in an ideal system Alice and Bob would share $2N$ qudit-pairs in the state $\ket{\Psi_{00}}^{\otimes 2N}$. However, in real systems, due to noise and/or eavesdropping all the $2N$ entangled-qudit pairs will be corrupted. In order to ensure secret key distribution Alice and Bob {\em permute randomly} all the pairs before doing any other operations \cite{GL-C}. In this way, any influence of the eavesdropper (from now on we assume that all the noise in the channel is due to eavesdropping) is equally distributed among all the pairs. The next step of the protocol now involves a verification test which will determine whether the protocol should be aborted or not. More precisely, Alice and Bob randomly select a number of pairs (say $N_{\rm c}$) \cite{half} as check pairs and measure each one of them {\em separately} along the standard (computational) basis. They compare their results publicly thus estimating the average error rate during the transmission. After the verification test all the check pairs are dismissed and, if the estimated error rate is too high the protocol is aborted. 
Otherwise, Alice and Bob apply an appropriate EPP with classical one- or two-way communication \cite{chau,DEJ,BDSW,ADGJ,HH} on the remaining $2N-N_{\rm c}$ pairs, in order to distill a smaller number of almost pure entangled-qudit pairs. Finally, measuring these almost perfectly entangled qudit pairs in a common basis, Alice and Bob obtain a secret random key, about which an adversary has negligible information. In our subsequent treatment we focus on the entanglement-based version of the $2d$-state QKD protocol. \section{Estimated disturbance and symmetries} \label{D-sec} The verification test performed by Alice and Bob immediately after the transmission stage is perhaps the most crucial stage of the $2$-bases QKD protocol and its success relies on the ``commuting-observables'' idea \cite{LC}. More precisely, the fact that all the operations performed in a typical EPP commute with a Bell measurement allows one to reduce any quantum eavesdropping attack to a classical probabilistic cheating strategy \cite{Lo,LC,chau,GL-C}. During the verification test Alice and Bob focus on the parity of their outcomes. Moreover, note that for the check pairs where Alice and Bob have performed ${\cal H}$ and ${\cal H}^{-1}$ respectively, the measurements are effectively performed in the complementary ${\cal B}_2$ basis rather than the standard basis ${\cal B}_1$ \cite{SPTKI}. Thus, given the unitarity of ${\cal H}$ and the invariance of $\ket{\Psi_{00}}$ under any unitary transformation of the form ${\cal U}_A\otimes{\cal U}_B^*$, the average estimated disturbance (error rate) is given by \begin{widetext} \begin{eqnarray} D = \frac{1}{2 N_{\rm c}}\sum_{b=0,1}\sum_{j_i=1}^{N_{\rm c}} {\rm Tr}_{A,B}\left \{ \left [\left ({\cal H}_A^{b\dag}\otimes {\cal H}_B^b\right ) {\cal P}\left ( {\cal H}_A^b \otimes{\cal H}_B^{b\dag}\right ) \right ]_{j_i}\rho_{AB} \right \}, \label{QBER2-1} \end{eqnarray} \end{widetext} where $\rho_{AB}$ denotes the reduced density operator of Alice and Bob for all $2N$ pairs. The index $j_i$ indicates that the corresponding physical observable refers to the $j_i$-th randomly selected qudit-pair. In particular, the projection operator entering Eq. (\ref{QBER2-1}) is given by \begin{eqnarray} {\cal P}_{j_i} \equiv \sum_{l\in\field{d}} \sum_{k\in\field{d}^*}\ket{l_A,(l+k)_B} \bra{l_A,(l+k)_B}, \label{Proj1} \end{eqnarray} where $\field{d}^*$ denotes the set of all nonzero elements in the field $\field{d}$ \cite{notation}. In other words, the inner summation in (\ref{Proj1}) is performed over all the nonzero elements of the finite field $\field{d}$, such that $(l+k)_B\neq l_A$. Moreover, the powers of the discrete Fourier transformation ${\cal H}^b$, with $b\in\{0,1\}$, in Eq. (\ref{QBER2-1}) reflect the fact that the errors in the sifted key originate from measurements in both complementary bases which have been selected randomly by Alice and Bob with equal probabilities. One can easily verify that all the measurements performed during the verification test are equivalent to Bell measurements. Indeed, using the definition of the Bell states (\ref{bell-like}) the projector ${\cal P}_{j_i}$ can be written in the form \begin{eqnarray} {\cal P}_{j_i} = \sum_{m,n\in\field{d}}(1-\delta_{m,0})\ket{\Psi_{mn}} \bra{\Psi_{mn}}, \label{Proj2} \end{eqnarray} where $\delta_{m,0}$ is the Kronecker delta \cite{notation}. 
This last relation indicates that the verification test performed by Alice and Bob is nothing else than a quality-check test of the fidelity of the $2N$ pairs with respect to the ideal state $\ket{\Psi_{00}}^{\otimes 2N}$. Hence, classical sampling theory can be applied for the estimation of the average error rate and the establishment of confidence levels \cite{Lo,LC,chau,GL-C}. We can simplify further our discussion by taking into account the symmetry of the QKD protocol under any permutation of the pairs. As we discussed earlier, a random permutation of all the pairs at the beginning of the entanglement-based protocols ensures a homogeneous distribution of the errors introduced by a potential eavesdropper (Eve) over all the qudit pairs \cite{GL-C}. This is equivalent to saying that the eavesdropping attack is symmetric on all the pairs, and such a symmetrization argument is one of the key elements of various unconditional security proofs \cite{SPTKI,Lo,GL-C,chau}. Indeed, Eve does not know in advance which of the qudit-pairs will be used for quality checks and which qudit-pairs will contribute to the final key. Hence, she is not able to treat them differently and the check pairs constitute a classical random sample of all the pairs. Invariance of the eavesdropping attack under any permutation of the pairs implies that all the reduced density operators describing the state of each pair shared between Alice and Bob are equal, i.e. \begin{eqnarray} \rho_{AB}^{(1)} &=& \rho_{AB}^{(2)} =\cdots = \rho_{AB}^{(2N)}, \label{homogen} \end{eqnarray} where the reduced density operator of Alice's and Bob' s $k$-th pair is denoted by $\rho_{AB}^{(k)} = {\rm Tr}_{AB}^{(\not k)}(\rho_{AB})$, with ${\rm Tr}_{AB}^{(\not k)}$ indicating the tracing (averaging) procedure over all the qudit-pairs except the $k$-th one. It should be stressed that Eq. (\ref{homogen}) does not at all imply that the overall reduced density operator $\rho_{AB}$ of the $2N$ pairs itself, is a product state of all the reduced pair states $\rho_{AB}^{(k)}$. On the contrary, $\rho_{AB}$ is expected to have a complicated structure as it includes all the effects arising from a general coherent (joint) attack of a possible eavesdropper. In view of Eq. (\ref{homogen}), the average disturbance defined in Eqs. (\ref{QBER2-1}) is determined by the average error probability of an arbitrary qudit pair, say the pair $j_1$, i.e. \begin{widetext} \begin{eqnarray} D = \frac{1}{2}\sum_{b=0,1} {\rm Tr}_{A,B}^{(j_1)}\left \{ \left [ \left ({\cal H}_A^{b\dag} \otimes{\cal H}_B^b\right ) {\cal P}\left ({\cal H}_A^b\otimes {\cal H}_B^{b\dag}\right ) \right]_{j_1}\rho_{AB}^{(j_1)} \right \}, \label{QBER2-2} \end{eqnarray} \end{widetext} where ${\rm Tr}_{A,B}^{(j_1)}$ denotes the tracing procedure over the $j_1$-th qudit pair of Alice and Bob. In other words, the reduced single-pair state $\rho_{AB}^{(j_1)}$ contains all the information about the noisy quantum channel and a possible general coherent attack by an eavesdropper, which is relevant for the evaluation of the error rate. In particular, this implies that an arbitrary joint eavesdropping attack which gives rise to a particular state $\rho_{AB}$ obeying Eq. (\ref{homogen}) is indistinguishable, from the point of view of the estimated disturbance, from a corresponding collective attack which addresses each qudit individually and results in the $2N$-pair state of the form $\bigotimes_{j=1}^{2N}\rho_{AB}^{(j)}$, for example. According to Eqs. 
(\ref{Proj1}) and (\ref{QBER2-2}) the average estimated disturbance is invariant under the transformations \begin{subequations} \label{SymmG1} \begin{eqnarray} (l,b) &\to& (l + m,b),\\ (l,b) &\to& (l,b \oplus 1), \end{eqnarray} \end{subequations} with $ m\in\field{d}$, while $\oplus$ denotes addition modulo $2$. This invariance implies that there are various reduced density operators of the $j_1$-th qudit pair, which all give rise to the same observed value of the average disturbance. This can be seen from Eq. (\ref{errorsMC}) which implies elementary relations of the form \begin{widetext} \begin{eqnarray} {\cal E}_{mn}{\cal H}^b \ket{j}\bra{j}({\cal H}^b)^\dag {\cal E}_{mn}^\dag = {\cal H}^b \ket{j+bn+(1-b)m}\bra{j+bn+(1-b)m} {\cal H}^{b\dag}. \label{simpleG} \end{eqnarray} \end{widetext} Together with the invariance of $D$ under the transformations (\ref{SymmG1}), these elementary relations imply that the reduced operators $\rho_{AB}^{(j_1)}$ and the symmetrized state \begin{eqnarray} \tilde{\rho}_{AB}^{(j_1)} &=&\frac{1}{4d^2} \sum_{g\in{\cal G}_1, h\in {\cal G}_2} U(h)U(g)\rho_{AB}^{(j_1)}U(g)^{\dagger}U(h)^{\dagger} \label{rhotildeG2} \end{eqnarray} give rise to the same value of $D$. Thereby, the unitary operators \begin{eqnarray} U(g_{mn}) &=& {\cal E}_{mn;A}\otimes {\cal E}_{mn;B}^*, \label{EU} \end{eqnarray} \begin{eqnarray} U(h_1) &=& \openone_A\otimes \openone_B,\quad\,\, U(h_3) = ({\cal H}_A\otimes {\cal H}_B^*)^2,\nonumber\\ U(h_2) &=& {\cal H}_A\otimes {\cal H}_B^*,\quad U(h_4) = ({\cal H}_A\otimes {\cal H}_B^*)^3, \label{HU} \end{eqnarray} have been introduced, which form unitary representations of two discrete Abelian groups ${\cal G}_1 =\{g_{00},g_{01},\ldots\}$ and ${\cal G}_2 =\{h_1,h_2,h_3,h_4\}$. The key point is now that, invariance of $\tilde{\rho}_{AB}^{(j_1)}$ under both of these groups is induced by the symmetry transformations (\ref{SymmG1}) which leave $D$ invariant. \section{Entanglement distillation and secret key} \label{distil} Having exploited the symmetries underlying the estimated disturbance, in this section we estimate the threshold disturbance that can, in principle, be tolerated by any $2d$-state QKD protocol, under the assumption of arbitrary coherent (joint) attacks. To this end, we make use of the {\em necessary precondition} for secret key distillation that is, the correlations established between Alice and Bob during the state distribution cannot be explained by a separable state \cite{CLL-AG}. Throughout this work, we consider that Alice and Bob focus on the sifted key during the post processing (i.e., they discard immediately all the polarization data for which they have used different bases) and that they treat each pair independently. Thus, according to the aforementioned precondition, given a particular value of the estimated disturbance $D$, the task of Alice and Bob is to infer whether their correlations may have originated from a separable state or not. So, {\em our aim is to estimate the threshold disturbance $D_{\rm th}$ such that for any $D<D_{th}$ Alice and Bob share provable entanglement with certainty}. To this end, we proceed as follows : Firstly, we estimate the regime of disturbances for which Alice and Bob share distillable entanglement. Secondly, we demonstrate that for the remaining regime of disturbances the correlations shared between Alice and Bob can always be described by a separable state. \subsection{Threshold disturbance} \label{distil-1} Adopting the entanglement-based version of the protocol defined in Sec. 
\ref{2bases-3}, let us estimate the regime of disturbances for which Alice and Bob share free entanglement. From the symmetries underlying the observed average error rate and in particular from Eq. (\ref{rhotildeG2}) we have that the density operator $\rho_{AB}^{(j_1)}$ is freely entangled if $\tilde{\rho}_{AB}^{(j_1)}$ is freely entangled, as both states are related by local unitary operations and convex summation. Hence, to determine the values of the disturbance for which the real state $\rho_{AB}^{(j_1)}$ is distillable, it suffices to determine the disturbances for which the most general two-qudit state $\tilde{\rho}_{AB}^{(j_1)}$ (which is invariant under the discrete Abelian groups ${\cal G}_1 $ and ${\cal G}_2$) is distillable. We already know that the operators $U(g_{10})\equiv{\cal X}_A \otimes{\cal X}_B^*$ and $U(g_{01})\equiv{\cal Z}_A\otimes{\cal Z}_B^*$ of the group ${\cal G}_1$ constitute a {\em complete set of commuting operators} in $\mathbb{C}_A^d\otimes\mathbb{C}_B^d$, while their simultaneous eigenstates are the $d^2$ maximally entangled states defined in Eq. (\ref{bell-like}). Thus, the most general two-qudit state which is invariant under the Abelian group ${\cal G}_1$ is given by a convex sum of all $\ket{\Psi_{mn}}$, i.e.
\begin{eqnarray} {\tilde \rho}_{AB}^{(j_1)}= \sum_{m,n\in\field{d}} \lambda_{mn}\ket{\Psi_{mn}}\bra{\Psi_{mn}}, \label{rhoAB-Bell-G} \end{eqnarray}
where the non-negative parameters $\lambda_{m n}$ have to fulfill the normalization condition
\begin{eqnarray} \sum_{m,n\in\field{d}}\lambda_{mn} = 1. \label{normG} \end{eqnarray}
Moreover, the operations ${\cal H}_A\otimes {\cal H}_B^*$, $({\cal H}_A\otimes {\cal H}_B^*)^2$ and $({\cal H}_A\otimes {\cal H}_B^*)^3$ transform Bell states into other Bell states. Thus, additional invariance of the quantum state (\ref{rhoAB-Bell-G}) under the discrete group ${\cal G}_2$ implies that
\begin{eqnarray} \lambda_{m,n}=\lambda_{n,d-m}=\lambda_{d-m,d-n} =\lambda_{d-n,m}. \label{lambdas-2} \end{eqnarray}
As a consequence of Eq. (\ref{lambdas-2}) there are different sets of identical parameters $\lambda_{mn}$. Each set $j$ contains four members $\eta_j$ unless the chain (\ref{lambdas-2}) is truncated. The latter case occurs for $d-m=m$ and $d-n=n$, i.e. for $m,n\in\{0,d/2\}$. More precisely, the sets $j$ with $m=n\in\{0,d/2\}$ contain one eigenvalue $\xi_j$ each, whereas the set with $m\neq n\in\{0,d/2\}$ has two equal eigenvalues denoted by $\zeta$. From now on we distinguish between even and odd dimensions $d$. All the sets for both cases as well as their notation are summarized in Table~\ref{tab:table1}.
\begin{table} \caption{ The notation of the sets and the number of eigenvalues per set for even and odd dimensions.} \label{tab:table1} \begin{ruledtabular} \begin{tabular}{cccc} members per set &\multicolumn{2}{c}{number of sets} & notation\\ & Even $d$ & Odd $d$ & \\ \hline 1 & 2 & 1 & $\xi_j$\\ 2 & 1 & 0 &$\zeta$ \\ 4 & $(d^2-4)/4$ & $(d^2-1)/4$ & $\eta_j$ \\ \end{tabular} \end{ruledtabular} \end{table}
Given the various sets of eigenvalues, the normalization condition (\ref{normG}) now reads
\begin{subequations} \begin{eqnarray} {\rm Odd }\,\, d&:&\quad \xi_0+4\sum_{j=1}^{\eta_{odd}}\eta_j=1,\nonumber\\ {\rm Even }\,\, d&:&\quad \xi_0+\xi_1+2\zeta+4\sum_{j=1}^{\eta_{even}}\eta_j=1, \nonumber \end{eqnarray} \end{subequations}
where in both cases the index $j$ runs over all the possible 4-member sets, i.e., $\eta_{odd}\equiv(d^2-1)/4$ and $\eta_{even}\equiv(d^2-4)/4$ (see Table~\ref{tab:table1}).
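The bookkeeping behind Table~\ref{tab:table1} can be checked by brute force. The following small sketch (an illustration only; the index negation is taken modulo $d$, in line with the truncation condition $m,n\in\{0,d/2\}$ used above) enumerates the orbits of the pairs $(m,n)$ under the cyclic map $(m,n)\to(n,d-m)$ generated by Eq. (\ref{lambdas-2}) and reports how many orbits of each size occur:
\begin{verbatim}
# Count the orbits of (m,n) under (m,n) -> (n, d-m) mod d and compare with Table I.
def orbit_sizes(d):
    seen, sizes = set(), []
    for m in range(d):
        for n in range(d):
            if (m, n) in seen:
                continue
            orbit, p = set(), (m, n)
            while p not in orbit:
                orbit.add(p)
                p = (p[1], (-p[0]) % d)      # the map (m,n) -> (n, d-m)
            seen |= orbit
            sizes.append(len(orbit))
    return sizes

for d in (3, 4, 5, 8):
    s = orbit_sizes(d)
    print(d, {k: s.count(k) for k in sorted(set(s))})
# Expected (Table I):  odd d : one orbit of size 1 and (d^2-1)/4 orbits of size 4;
#                      even d: two of size 1, one of size 2, and (d^2-4)/4 of size 4.
\end{verbatim}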
Similarly, using Eqs. (\ref{Proj2}), (\ref{QBER2-2}) and (\ref{rhoAB-Bell-G}), the estimated average disturbance can be expressed in the form \begin{subequations} \begin{eqnarray} {\rm Odd}\,\, d &:& \quad D=2\sum_{j=1}^{\lfloor d/2\rfloor}\eta_j+ 4\sum_{j=\lfloor d/2\rfloor+1}^{\eta_{odd}}\eta_j,\nonumber\\ {\rm Even}\,\, d &:& \quad D=\xi_1+\zeta+2\sum_{j=1}^{d/2-1}\eta_j+ 4\sum_{j=d/2}^{\eta_{even}}\eta_j,\nonumber \label{D-general} \end{eqnarray} \end{subequations} with $\lfloor x\rfloor$ denoting the largest integer not greater than $x$, while all the parameters (disturbance and eigenvalues) are real-valued and non-negative, i.e. \begin{eqnarray} 0\leq D,\xi_j,\zeta,\eta_j\leq 1.\nonumber \end{eqnarray} Let us now evaluate the disturbances for which the state ${\tilde \rho_{AB}}^{(j_1)}$ is distillable. According to the reduction criterion \cite{HH}, if ${\tilde \rho_{AB}}^{(j_1)}$ is separable, then \begin{eqnarray} {\tilde \rho}_A^{(j_1)}\otimes\openone_B- {\tilde \rho_{AB}}^{(j_1)}\geq 0 \label{red} \end{eqnarray} (and also $\openone_A\otimes{\tilde \rho}_B^{(j_1)}- {\tilde \rho}_{AB}^{(j_1)}\geq 0$), with ${\tilde \rho}_A^{(j_1)}\equiv {\rm Tr}_B [{\tilde \rho}_{AB}^{(j_1)}]$. Using the explicit form of ${\tilde \rho_{AB}}^{(j_1)}$ given by Eq. (\ref{rhoAB-Bell-G}) we have ${\tilde \rho}_A^{(j_1)}={\tilde \rho}_B^{(j_1)}=\openone_d/d$, where $\openone_d$ denotes the unit operator in $\mathbb{C}_{A(B)}^d$. Thus inequality (\ref{red}) reads \begin{eqnarray} \sum_{m,n\in\field{d}} \left (\frac{1}{d}-\lambda_{mn}\right )\ket{\Psi_{mn}}\bra{\Psi_{mn}} \geq0. \label{dstlCnst} \end{eqnarray} Violation of the above inequality (\ref{dstlCnst}) for any of the eigenvalues $\lambda_{mn}$, i.e. \begin{eqnarray} \lambda_{mn}> \frac{1}{d}, \label{distilConst1} \end{eqnarray} is {\em sufficient} for distillability of the entanglement of ${\tilde \rho}_{AB}^{(j_1)}$ and implies violation of the Peres criterion (i.e., a non-positive partial transpose) for this state \cite{HH,Letal}. In particular, as long as the fidelity $f$ of ${\tilde \rho}_{AB}^{(j_1)}$ with respect to $\ket{\Psi_{00}}$ satisfies \begin{eqnarray} f\equiv\bra{\Psi_{00}}{\tilde \rho}_{AB}^{(j_1)}\ket{\Psi_{00}}> \frac{1}{d}, \label{fidel} \end{eqnarray} the state can be distilled with the help of unitary twirling operations ${\cal U}_A\otimes{\cal U}_B^*$ which leave $f$ invariant \cite{HH}. In our case, using Eqs. (\ref{rhoAB-Bell-G}) and (\ref{lambdas-2}) the distillability condition (\ref{fidel}) reads $\xi_0 > 1/d$ or equivalently \begin{subequations} \begin{eqnarray} {\rm Odd}\quad d&:&\quad D < D_0 +2\sum_{j=\lfloor d/2\rfloor+1}^{\eta_{odd}}\eta_j,\nonumber \\ {\rm Even}\quad d&:&\quad D< D_0+\frac{1}{2}\xi_1+ 2\sum_{j=d/2}^{\eta_{even}}\eta_j,\nonumber \end{eqnarray} \end{subequations} where \begin{eqnarray} D_0\equiv \frac{d-1}{2d}. \label{d0-def} \end{eqnarray} According to these last inequalities, and given the fact that $\xi_j,\zeta,\eta_j\geq 0$, the threshold disturbance $D_{\rm th}$ for entanglement distillation at any dimension satisfies the inequality \begin{eqnarray} D_{\rm th}\geq D_0, \label{distilConst3} \end{eqnarray} with $D_0$ given by (\ref{d0-def}). For any $D<D_{\rm th}$, the symmetrized state ${\tilde \rho}_{AB}^{(j_1)}$ is always distillable (i.e., freely entangled). Given that $\rho_{AB}^{(j_1)}$ and ${\tilde \rho}_{AB}^{(j_1)}$ are related via local operations and convex summation, the original state $\rho_{AB}^{(j_1)}$ must also be distillable in the same regime of disturbances.
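For Bell-diagonal states the reduction criterion takes a particularly transparent form: since ${\tilde \rho}_A^{(j_1)}=\openone_d/d$, the operator in (\ref{red}) is itself diagonal in the Bell basis with eigenvalues $1/d-\lambda_{mn}$, which is precisely how (\ref{dstlCnst}) and (\ref{distilConst1}) arise. The short numerical sketch below confirms this for a randomly chosen Bell-diagonal state; it is an illustration only and uses the standard Weyl--Heisenberg construction ${\cal E}_{mn}={\cal X}^m{\cal Z}^n$ with $\ket{\Psi_{mn}}=({\cal E}_{mn}\otimes\openone)\ket{\Psi_{00}}$, which may differ from the convention of Eq.~(\ref{bell-like}) by phases or index relabelings.
\begin{verbatim}
import numpy as np

d = 3                                     # illustrative qudit dimension
w = np.exp(2j * np.pi / d)
X = np.roll(np.eye(d), 1, axis=0)         # shift operator
Z = np.diag(w ** np.arange(d))            # clock operator
psi00 = np.eye(d).reshape(d * d) / np.sqrt(d)
bell = [np.kron(np.linalg.matrix_power(X, m) @ np.linalg.matrix_power(Z, n),
                np.eye(d)) @ psi00 for m in range(d) for n in range(d)]

lam = np.random.rand(d * d)               # random Bell-diagonal spectrum
lam /= lam.sum()
rho = sum(l * np.outer(v, v.conj()) for l, v in zip(lam, bell))

rho_A = np.trace(rho.reshape(d, d, d, d), axis1=1, axis2=3)  # partial trace over B
print(np.allclose(rho_A, np.eye(d) / d))                     # True: maximally mixed

red = np.kron(rho_A, np.eye(d)) - rho                        # operator of Eq. (red)
print(np.allclose(np.linalg.eigvalsh(red), np.sort(1 / d - lam)))
# True: the eigenvalues are exactly 1/d - lambda_mn
\end{verbatim}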
Nevertheless, the fact that inequality (\ref{fidel}) is not satisfied for $D\geq D_{\rm th}$ does not necessarily imply that the state ${\tilde \rho}_{AB}^{(j_1)}$ is not distillable at all for $D\geq D_{\rm th}$. For instance, there might exist another eigenvalue $\lambda_{mn}$ (and not $\xi_0$) which satisfies inequality (\ref{distilConst1}) [i.e., it violates inequality (\ref{red})] and this fact, according to the reduction criterion, is also sufficient for distillability of ${\tilde \rho}_{AB}^{(j_1)}$ \cite{HH}. Hence, we must now evaluate the precise value of the threshold disturbance $D_{\rm th}$. One way to prove that strict equality holds in Eq. (\ref{distilConst3}) for any $2d$-state QKD protocol is to demonstrate that for $D \geq D_0$, there always exist separable states which can describe Alice's and Bob's correlations and simultaneously are indistinguishable from the real bipartite state $\rho_{AB}^{(j_1)}$. To this end, let us focus on bipartite Bell-diagonal states, i.e., states which can be written in the form (\ref{rhoAB-Bell-G}), and consider the following particularly simple family of such separable states \begin{widetext} \begin{eqnarray} \sigma_{AB}(D)&=&y\openone_{d^2}+d|x-y|\sum_{k\in\field{d}} \frac{(\ket{k_A}\bra{k_A})\otimes (\ket{k_B}\bra{k_B})}{d}+d|x-y|\sum_{i\in\field{d}} \left ({\tilde \sigma}_{A}^{(i)} \otimes {\tilde \sigma}_{B}^{(i)}\right). \label{sep} \end{eqnarray} \end{widetext} Thereby \begin{eqnarray} x &=& \frac{1+d(d-2)(1-D)}{d^2(d-1)},\nonumber \\ y &=& \frac{1+d(-1+2D)}{d^2(d-1)}, \nonumber \end{eqnarray} and \begin{eqnarray} {\tilde \sigma}_{C}^{(i)} &=& \frac{1}{d}\sum_{k\in\field{d}}\ket{k_{C}}\bra{(k+i)_{C}}, \nonumber \end{eqnarray} while $\openone_{d^2}$ denotes the unit operator in $\mathbb{C}_A^d\otimes\mathbb{C}_B^d$. This family is parametrized by the estimated average disturbance $D$ detected by Alice and Bob and is valid for \begin{eqnarray} \frac{d-1}{2d}=D_0\leq D\leq \frac{2d-1}{2d}. \label{interval} \end{eqnarray} Moreover, any separable state which belongs to this family is indistinguishable, from the point of view of the estimated error rate, from the real state shared between Alice and Bob. In other words, whenever the detected disturbance $D$ is within the interval (\ref{interval}), the correlations shared between Alice and Bob can be very well described in the framework of the family of separable states $\sigma_{AB}(D)$. In such a case, the necessary precondition for secret key distillation is not met for disturbances within this regime, so that the protocol must be aborted. So, we have proved that strict equality holds in (\ref{distilConst3}) and thus, from Alice's and Bob's point of view, the threshold disturbance for entanglement distillation in the context of entanglement-based $2d$-state QKD protocols is \begin{eqnarray} D_{\rm th}=\frac{d-1}{2d}. \label{distilConst4} \end{eqnarray} In particular, if the detected average disturbance is below this threshold, the two legitimate users can be assured that they share freely entangled qudit pairs with high probability. In other words, under the assumption of general coherent attacks, for $D<D_{\rm th}$ Alice and Bob are always able to extract a secret key by application of a two-way EPP which purifies towards the maximally entangled state $\ket{\Psi_{00}}$. On the other hand, an estimated disturbance above $D_{\rm th}=(d-1)/2d$ does not allow Alice and Bob to infer whether the state they share is entangled or not.
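For orientation, the threshold (\ref{distilConst4}) and the window (\ref{interval}) in which the separable family $\sigma_{AB}(D)$ is available are easily tabulated; the few Python lines below (purely illustrative) print both for some small dimensions.
\begin{verbatim}
for d in (2, 3, 4, 5, 10):
    D_th = (d - 1) / (2 * d)        # threshold disturbance, Eq. (distilConst4)
    D_up = (2 * d - 1) / (2 * d)    # upper end of the interval (interval)
    print(f"d = {d:2d}:  separable family available for "
          f"{D_th:.3f} <= D <= {D_up:.3f}")
# d = 2: 0.250 <= D <= 0.750;  d = 3: 0.333 <= D <= 0.833; ...
\end{verbatim}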
In particular, we have seen that there is at least one simple family of separable states which can describe Alice's and Bob's correlations up to high error rates of magnitude $(2d-1)/2d$. Finally, note that for $d=2$ we reveal the threshold disturbance for the standard BB84 QKD protocol, that is $D_{\rm th}=1/4$ \cite{NA}. Moreover, $D_{\rm th}\to 1/2$ for $d\to\infty$ reflecting the possible advantage of using higher-dimensional quantum systems as information carriers in quantum cryptography. \begin{figure}[t] \centerline{\includegraphics[width=8.0cm]{fig1.eps}} \caption{(Color online) $2d$-state QKD protocols : The threshold disturbance for entanglement distillation as a function of dimension. The triangles refer to Eq. (\ref{distilConst4}) and arbitrary coherent attacks whereas the circles correspond to Eq. (\ref{distilConstOC}) and optimal cloning machines.} \label{Dth:fig} \end{figure} In view of the {\em necessary precondition} for secret key distillation \cite{CLL-AG}, our results imply that $D_{\rm th}$ is also the ultimate upper security bound of any $2d$-state prepare-and-measure QKD protocol. Nevertheless, the details of a particular prepare-and-measure scheme (that is the error correction and privacy amplification protocols required) which will be capable of meeting this upper security bound remain an open question. In fact one has to specify a classical distillation (post-processing) protocol which has the same bounds of tolerable noise as quantum distillation protocols. It is worth mentioning, however, that the security bound (\ref{distilConst4}) relies on certain conditions. In particular, it relies on the complete omission of any polarization data from the raw key that involve different bases for Alice and Bob, as well as on the individual manipulation of each pair during the post-processing. If some of these conditions are changed, also the threshold disturbance may change. Recently, under the same conditions, Acin {\em et al.} \cite{AGS} derived another bound for entanglement distillation, namely \begin{eqnarray} D_{\rm th}^{\rm (CM)}=1-\frac{1}{\sqrt{d}}. \label{distilConstOC} \end{eqnarray} As depicted in Fig.~\ref{Dth:fig}, $D_{\rm th}^{\rm (CM)}$ is well above the threshold we have derived in this work for any dimension of the information carriers. The reason is basically that $D_{\rm th}^{\rm (CM)}$ has been obtained under the additional assumption that Eve is restricted to so-called ``optimal incoherent attacks''. These attacks rely on cloning machines and maximize Eve's information gain. One can easily verify, for example, that the class of separable states (\ref{sep}) is not optimal (in the sense of \cite{AGS}). In our work we allow for arbitrary eavesdropping attacks and thus we have demonstrated that the distillation of a secret key for disturbances above $D_{\rm th}$ is impossible. So, although the incoherent attacks considered in \cite{AGS} are optimal with respect to the information gain of an eavesdropper, they are not able to disentangle Alice and Bob at the lowest possible disturbance. The cost of information loss that Eve has to accept by employing an attack that disentangles Alice and Bob at each particular disturbance above $D_{\rm th}$ remains an open question. Clearly, to this end one has to consider in detail the eavesdropping attack and this is beyond the purpose of this work. A further issue ought to be brought up here in connection with the existence of bound entanglement. 
For $2\otimes 2$ systems (i.e., for the BB84 QKD protocol) non-distillability is equivalent to separability \cite{P-H} and this fact seems to lead to a complete equivalence between entanglement distillation and secrecy \cite{AMG}. However, for higher dimensions the situation is more involved due to the existence of bound entangled states with positive or non-positive partial transpose \cite{Letal,HHH}. Moreover, in a recent work \cite{HHHO} Horodecki {\em et al.} showed that a secret key can be distilled even from bound entangled states. As a consequence, a qudit-based (with $d>2$) QKD scheme could, in principle, go beyond entanglement distillation. However, this does not seem to be the case for the post processing and the protocols we consider throughout this work. Indeed, for $D<D_{\rm th}$ we have seen that the state shared between Alice and Bob is always distillable, i.e. it is freely entangled. Bound entangled states are expected to exist for $D\geq D_{\rm th}$ and this is precisely the regime of parameters where the ideas presented in \cite{HHHO} can be used for the extraction of a secret key beyond entanglement distillation. We have demonstrated, however, that an eavesdropper is always able to break any entanglement between Alice and Bob for $D\geq D_{\rm th}$ without being detected, by preparing, for example, a separable state from the family $\sigma_{AB}(D)$. As a consequence, according to \cite{CLL-AG}, the protocol must be aborted at $D=D_{\rm th}$. Under these circumstances, the extraction of a secret key beyond entanglement distillation seems to be practically impossible. The reason is basically that, based on the estimated error rate, Alice and Bob are incapable of verifying whether they share a separable state or not for disturbances $D\geq D_{\rm th}$. Alice and Bob can improve their situation only if they do not restrict themselves to the sifted data. In particular, constructing appropriate entanglement witnesses from their raw data \cite{CLL-AG}, Alice and Bob can verify whether they share a separable state or not, even for $D\geq D_{\rm th}$. Closing this section, let us briefly compare the performance of two different realizations of a six-state QKD protocol, namely a $3$-basis scheme using qubits and a qutrit-based scheme using $2$ out of $4$ mutually unbiased bases. In principle, both protocols can tolerate precisely the same error rate, that is $1/3$. Nevertheless, the qutrit-based protocol offers a higher yield since $1/2$ of the transmissions pass the sifting procedure (compared to $1/3$ for the qubit-based protocol). Thus, although both six-state protocols appear to be equally secure, the qutrit-based scheme seems to be more efficient. \subsection{Examples} \label{distil-3} So far, our discussion involved arbitrary dimensions and general coherent attacks. For the sake of illustration, in this subsection we briefly discuss low dimensions (i.e., $d=2,3$) as well as symmetric (isotropic) channels \cite{BKBGC,AGS} and arbitrary dimensions. In particular, we present evidence of the fact that for $d=3$ any eavesdropping strategy is equivalent to a symmetric one. However, for $d>3$ this equivalence does not seem to exist anymore. Moreover, we present numerical results for $d=3,4$ and $5$, verifying the security bounds derived in the previous subsection. \subsubsection{Qubits} As a consequence of Eq. (\ref{lambdas-2}), for $d=2$ there are three different eigenvalues entering Eq. (\ref{rhoAB-Bell-G}).
So, in a matrix form we may write \begin{eqnarray} \lambda_{mn}=\left ( \begin{array}{ccc} u\quad & x \\ x\quad & y\\ \end{array} \right ), \label{lambdas2} \end{eqnarray} with the eigenvalues $u,x$ and $y$ satisfying the normalization condition $u+2x+y=1$. In this notation, the estimated disturbance can be expressed in the form $D = x + y$. One can easily verify that the state $\tilde{\rho}_{AB}^{(j_1)}$ is entangled for $D<1/4$ or $D>3/4$ \cite{NA}. Moreover, for $1/4\leq D\leq 3/4$ there always exists a separable state $\tilde{\rho}_{AB}^{(j_1)}$ which is indistinguishable (as far as the estimated disturbance is concerned) from the real state $\rho_{AB}^{(j_1)}$ shared between Alice and Bob. \subsubsection{Qutrits} In analogy to qubits, applying Eq. (\ref{lambdas-2}) for $d=3$ and without any additional assumptions one finds that there are three different eigenvalues entering Eq. (\ref{rhoAB-Bell-G}). In particular, the matrix of eigenvalues reads \begin{eqnarray} \lambda_{mn}=\left ( \begin{array}{ccc} u\quad & x\quad & x \\ x\quad & y\quad & y \\ x\quad & y\quad & y \end{array} \right ), \label{lambdas3} \end{eqnarray} while the average estimated disturbance is of the form $D = 2x + 4y$. Hence, taking into account the normalization condition $u+4(x+y)=1$, we have two real-valued and non-negative independent parameters in the problem. Moreover, the partial transpose of $\tilde{\rho}_{AB}^{(j_1)}$ is block-diagonal with all three blocks being identical and equal to \begin{eqnarray} M_3=\frac{1}{3}\left ( \begin{array}{ccc} u+2x & x-y & x-y\\ x-y & x+2y & u-x \\ x-y & u-x & x+2y \end{array} \right ). \nonumber \end{eqnarray} Hence, the following two eigenvalues \begin{subequations} \begin{eqnarray} \nu_1&=&\frac{1}{3}\left(-u+2x+2y\right),\nonumber\\ \nu_2&=&\frac{1}{3}\left(u+x+y-\sqrt{3}(x-y)\right),\nonumber \end{eqnarray} \end{subequations} determine the sign of the partial transpose of $\tilde{\rho}_{AB}^{(j_1)}$. Related numerical results will be presented below. \subsubsection{Isotropic quantum channels} For $d> 3$ the number of independent parameters in the problem increases enormously with $d$, e.g. for $d=4$ we have \begin{eqnarray} \lambda_{mn}=\left ( \begin{array}{cccc} \xi_0\quad & \eta_1\quad & \zeta\quad & \eta_1\\ \eta_1\quad & \eta_2\quad & \eta_3\quad & \eta_2 \\ \zeta\quad & \eta_3\quad & \xi_1\quad & \eta_3\\ \eta_1\quad & \eta_2\quad & \eta_3\quad & \eta_2 \end{array} \right ). \label{lambdas4} \end{eqnarray} However, the situation becomes tractable in the case of isotropic channels (e.g., open-space QKD) where disturbances involving different errors \cite{noteD} are equal, thus leading to an eigenvalue matrix of the form \cite{PT,BKBGC,AGS} \begin{eqnarray} \lambda_{mn}=\left ( \begin{array}{cccc} u\quad & x\quad & \ldots\quad & x\\ x\quad & y\quad & \ldots\quad & y \\ \vdots\quad & \vdots\quad & \ddots\quad & \vdots\\ x\quad & y\quad & \ldots\quad & y \end{array} \right ), \label{lambdas-iso} \end{eqnarray} for any dimension $d$. In the case of qubits, such an isotropy argument does not seem to be a restriction. Thus, any eavesdropping strategy is equivalent to a symmetric (i.e., isotropic) one \cite{CG-FGNP}. This might be due to the fact that such a symmetry arises automatically as an inherent property of the qubit-based QKD protocols [the matrix (\ref{lambdas2}) is of the form (\ref{lambdas-iso})].
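As a consistency check of the qutrit expressions above, and as a preview of the numerical procedure described in the next subsection, the following Python sketch (an illustration only; the sample points are arbitrary) solves $D=2x+4y$ and $u+4(x+y)=1$ for given $(D,x-y)$, verifies the quoted eigenvalues of the partial-transpose block $M_3$ against a direct diagonalization, and classifies the resulting states into the regimes (a) NPPT and distillable, (b) PPT, and (c) NPPT with the reduction criterion satisfied, which are the regimes appearing in the ``distillability maps'' below.
\begin{verbatim}
import numpy as np

def qutrit_params(D, delta):
    """Solve D = 2x + 4y, x - y = delta, u + 4(x + y) = 1 for (u, x, y)."""
    y = (D - 2 * delta) / 6.0
    x = y + delta
    u = 1.0 - 4.0 * (x + y)
    return u, x, y

def classify(u, x, y):
    # eigenvalues of M_3: nu_1 and nu_2 as quoted above, plus the third one,
    # (u + x + y + sqrt(3)|x - y|)/3, which is always non-negative
    nu = [(-u + 2 * x + 2 * y) / 3.0,
          (u + x + y - np.sqrt(3) * abs(x - y)) / 3.0,
          (u + x + y + np.sqrt(3) * abs(x - y)) / 3.0]
    M3 = np.array([[u + 2 * x, x - y, x - y],
                   [x - y, x + 2 * y, u - x],
                   [x - y, u - x, x + 2 * y]]) / 3.0
    assert np.allclose(sorted(nu), np.linalg.eigvalsh(M3))   # cross-check
    if max(u, x, y) > 1.0 / 3.0:   # condition (distilConst1) for d = 3;
        # violation of the reduction criterion also implies NPPT
        return "(a) NPPT and distillable"
    return "(b) PPT" if min(nu) >= -1e-12 else "(c) NPPT, reduction criterion satisfied"

for D, delta in [(0.20, 0.05), (0.55, 0.00), (0.46, 0.23)]:
    u, x, y = qutrit_params(D, delta)
    if min(u, x, y) < 0:
        print(f"D = {D:.2f}, x - y = {delta:+.2f}: outside the physical region")
    else:
        print(f"D = {D:.2f}, x - y = {delta:+.2f}: {classify(u, x, y)}")
\end{verbatim}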
As a consequence of this inherent symmetry, for qubits one is always able to substitute any eavesdropping attack with a symmetric one which yields the same results for all the properties that are defined as averages over all the possible messages sent by Alice to Bob (e.g., the estimated disturbance) \cite{CG-FGNP}. Besides, here we see that the symmetry (isotropy) arises automatically for qutrits [the matrix (\ref{lambdas3}) is of the form (\ref{lambdas-iso})] and thus similar arguments must hold for $d=3$ as well. Nevertheless, we have found that for $d > 3$ this symmetry does not exist [see for instance Eq. (\ref{lambdas4}) for $d=4$] and one has to impose it explicitly. Hence, unless the quantum channel itself is isotropic, a restriction to symmetric eavesdropping strategies for $d>3$ seems unreasonable and might, in general, underestimate Eve's power. However, in our case such a restriction does not seem to affect the threshold disturbance, while simultaneously enabling us to present numerical results regarding $2d$-state QKD protocols with $d>3$. So, using the matrix (\ref{lambdas-iso}), the normalization condition (\ref{normG}) reads \begin{eqnarray} u+2(d-1)x+(d-1)^2y=1,\nonumber \end{eqnarray} and only two of the three parameters $(u,x,y)$ are independent. Moreover, combining Eqs. (\ref{Proj2}), (\ref{QBER2-2}) and (\ref{rhoAB-Bell-G}) we have that the average estimated disturbance is given by \begin{eqnarray} D = (d-1)x + (d-1)^2y.\nonumber \end{eqnarray} Finally, in analogy to the case of qutrits, the partial transpose of ${\tilde \rho}_{AB}^{(j_1)}$ is block-diagonal with each block being a $d\times d$ matrix. For odd dimensions all the blocks are identical whereas for even dimensions two different blocks appear. \subsubsection{Numerical results and discussion} We have been able to test the results of Sec.~\ref{distil-1} numerically for qutrits, while for higher dimensions we had to resort to the assumption of isotropic quantum channels in order to reduce the number of independent parameters in our simulations. More precisely, fixing two independent parameters, say $D$ and $x-y$, we evaluated all the remaining parameters $u$, $x$, $y$ which are consistent with all the constraints. Subsequently, for the parameters at hand we checked whether the distillability condition (\ref{distilConst1}) is fulfilled and whether the two-qudit state ${\tilde \rho}_{AB}^{(j_1)}$ has a non-positive partial transpose (NPPT). The corresponding ``distillability maps'' for $d=3,4,5$ are presented in Figs.~\ref{D-x-y:fig}. \begin{figure} \centerline{\includegraphics[width=8.0cm]{fig2.eps}} \caption{(Color online) $2d$-state QKD protocols: The regions of the independent parameters $D$ and $x-y$ for which the qudit-pair state $\tilde{\rho}_{AB}^{(j_1)}$ is (a) NPPT and distillable; (b) PPT; (c) NPPT but the reduction criterion is satisfied. From top to bottom, the ``distillability maps'' correspond to $d=5,4\,{\rm and}\,3$, respectively. The non-negativity of $x,y\,{\rm and}\,u$ (straight lines) defines the region of parameters where the protocols operate, while the distillability condition (\ref{fidel}) separates distillable from non-distillable states. The threshold disturbances for entanglement distillation are indicated by black dots. The triangles correspond to the separable state (\ref{sep}).
Note the different scales of the horizontal axis.} \label{D-x-y:fig} \end{figure} Our simulations confirm the validity of \begin{eqnarray} D_{\rm th} = \frac{d-1}{2d} \nonumber \end{eqnarray} as the ultimate robustness bound for $2d$-state QKD protocols. More precisely, for $D < D_{\rm th}$ Alice and Bob share always freely entangled qudit pairs [regime (a) in Figs.~\ref{D-x-y:fig}]. On the contrary, for $D \geq D_{\rm th}$ we can identify two different regimes of parameters. The dominant regime (b) involves parameters which yield a ${\tilde \rho_{AB}}^{(j_1)}$ with positive partial transpose (PPT). These states can not be distilled and are either separable or bound entangled \cite{HHH,Letal}. Besides, we have the regime of parameters (c), for which ${\tilde \rho_{AB}}^{(j_1)}$ has a NPPT but the reduction criterion is not violated. These states probably belong to the hypothetical set of bound entangled states with NPPT \cite{HH,Letal}. At this point, it could be argued that $D\geq D_{\rm th}$ is the regime of parameters where the ideas of Horodecki {\em et al.} might be applicable for the distillation of a secret key from bound entangled states \cite{HHHO}. To this end, however, Alice and Bob have to confirm whether the state they share is indeed bound entangled. Such an identification is only possible with the help of appropriate additional entanglement witnesses constructed from the polarization data of the raw key \cite{CLL-AG}. \section{Conclusions} \label{conclusions} We have discussed the robustness of qudit-based QKD protocols that use two mutually unbiased bases, under the assumption of general coherent (joint) attacks. For $d=3$ (i.e., for qutrits), we have presented evidence of the fact that any eavesdropping strategy is equivalent to a symmetric one, while for higher dimensions this equivalence is no longer valid. The lowest possible disentanglement bound that an eavesdropper can saturate in the context of these cryptographic protocols scales with dimension as $(d-1)/2d$. Whenever Alice and Bob detect disturbances above $(d-1)/2d$, they are not able to infer whether their correlations originate from an entangled state or not, and the protocol must be aborted. On the contrary, if the detected disturbance is below $(d-1)/2d$, the two honest parties can be confident that they share free entanglement with high probability and the extraction of a secret key is, in principle, possible. In particular, for the entanglement-based version of the protocols such a secure key can be obtained after applying an appropriate EPP which purifies the qudit pairs shared between Alice and Bob towards $\ket{\Psi_{00}}$ \cite{DEJ,BDSW,ADGJ,HH}. Moreover, in view of the fundamental role of entanglement in secret key distribution \cite{CLL-AG}, the development of qudit-based prepare-and-measure schemes that can tolerate bit error rates up to $(d-1)/2d$ is also possible. For this purpose, however, the construction of new appropriate two-way EPPs, which are consistent with the associated prepare-and-measure schemes seems to be of vital importance. Our results generalize the results of \cite{AGS} to arbitrary coherent attacks and simultaneously answer (to some extent) many of the open issues raised in the concluding remarks of that paper. Finally, it should be stressed that the disturbance thresholds we have obtained depend on the post-processing of the QKD protocol. In particular, they rely on the complete omission of those qudits of the raw key for which Alice and Bob measured in different bases. 
Furthermore, they also rely on the fact that Alice and Bob manipulate each qudit pair separately. Under these conditions, we have demonstrated that the extraction of a secret-key from bound entangled states is impossible in the framework of qudit-based QKD protocols that use two mutually unbiased bases. \section{Acknowledgments} Stimulating discussions with Markus Grassl and Antonio Acin are gratefully acknowledged. This work is supported by the EU within the IP SECOQC.
\setcounter{equation}{0}\Section{\setcounter{equation}{0}\Section} \newcommand{\mathcal{H}}{\mathcal{H}} \newcommand{\varepsilon}{\varepsilon} \def\thesection.\arabic{equation}{\thesection.\arabic{equation}} \def\mathbb{R} {\mathbb{R} } \def\mathbb{N}{\mathbb{N}} \def\mathbb{E}{\mathbb{E}} \def{\rm Tr}{{\rm Tr}} \def{\underline h}{B^H } \def{\bf Problem\ \ \ }{{\bf Problem\ \ \ }} \def\widetilde{\widetilde} \def{\bf a}{{\bf a}} \def{\bf b}{{\bf b}} \def{\bf c}{{\bf c}} \def{\bf d}{{\bf d}} \def\bfe{{\bf e}} \def{\bf f}{{\bf f}} \def{\bf g}{{\bf g}} \def{\bf h}{{\bf h}} \def\bfi{{\bf i}} \def{\bf j}{{\bf j}} \def{\bf k}{{\bf k}} \def{\bf l}{{\bf l}} \def\bfm{{\bf m}} \def{\bf n}{{\bf n}} \def{\bf o}{{\bf o}} \def{\bf p}{{\bf p}} \def\bfq{{\bf q}} \def{\bf r}{{\bf r}} \def{\bf s}{{\bf s}} \def{\bf t}{{\bf t}} \def\bfu{{\bf u}} \def{\bf v}{{\bf v}} \def{\bf w}{{\bf w}} \def{\bf x}{{\bf x}} \def{\bf y}{{\bf y}} \def{\bf z}{{\bf z}} \def{\cal A}{{\cal A}} \def{\cal B}{{\cal B}} \def{\cal D}{{\cal D}} \def{\cal C}{{\cal C}} \def{\cal E}{{\cal E}} \def{\cal F}{{\cal F}} \def{\cal G}{{\cal G}} \def{\cal H}{{\cal H}} \def{\hfill $\Box$}{{\hfill $\Box$}} \def{\int_0^t\!\!\!\int_0^t }{{\int_0^t\!\!\!\int_0^t }} \def{\delta}{{\delta}} \def{\lambda}{{\lambda}} \def{\sigma}{{\sigma}} \def{\bb E}{{\mathbf{B} E}} \def{\cal F}{{\cal F}} \def{\Delta}{{\Delta}} \def{\eta}{{\eta}} \def{\cal L}{{\cal L}} \def{\Omega}{{\Omega}} \def{\Xi}{{\Xi}} \def{\Gamma}{{\Gamma}} \def{\Delta}{{\Delta}} \def{\hbox{Exp}}{{\hbox{Exp}}} \def{\Sigma}{{\Sigma}} \def{ \hbox{ Tr} }{{ \hbox{ Tr} }} \def{ \hbox{ ess\ sup} } {{ \hbox{ ess\ sup} }} \def{\Lambda}{{\Lambda}} \def{\varepsilon}{{\varepsilon}} \def \eref#1{\hbox{(\ref{#1})}} \def{\underline t}{{\underline t}} \def{\underline f}{{\underline f}} \def{\underline g}{{\underline g}} \def{\underline h}{{\underline h}} \def{\theta}{{\theta}} \def{\Theta}{{\Theta}} \def{\Omega}{{\Omega}} \def{\cal S}{{\cal S}} \DeclareMathOperator{\Lip}{\mathit{L}} \DeclareMathOperator{\LIP}{Lip} \DeclareMathOperator{\lip}{\mathit{l}} \DeclareMathOperator{\Vip}{\overline{\varsigma}} \DeclareMathOperator{\vip}{\underline{\varsigma}} \DeclareMathOperator{\vv}{\varsigma} \DeclareMathOperator{\BC}{BC} \DeclareMathOperator{\CH}{CD} \DeclareMathOperator{\sd}{\beta} \colorlet{tableheadcolor}{gray!10} \newcommand{\headcol}{\rowcolor{tableheadcolor}} % \colorlet{tablerowcolor}{gray!10} \newcommand{\rowcol}{\rowcolor{tablerowcolor}} % \newcommand{\topline}{\arrayrulecolor{black}\specialrule{0.1em}{\abovetopsep}{0pt}% \arrayrulecolor{tableheadcolor}\specialrule{\belowrulesep}{0pt}{0pt}% \arrayrulecolor{black}} \newcommand{\bottomlinec}{\arrayrulecolor{tablerowcolor}\specialrule{\aboverulesep}{0pt}{0pt}% \arrayrulecolor{black}\specialrule{\heavyrulewidth}{0pt}{\belowbottomsep}}% \colorlet{blcolor}{gray!80} \newcommand{\arrayrulecolor{tableheadcolor}{\arrayrulecolor{tableheadcolor} \specialrule{\aboverulesep}{0pt}{0pt}% \arrayrulecolor{black}\specialrule{\lightrulewidth}{0pt}{0pt}% \arrayrulecolor{tablerowcolor}\specialrule{\belowrulesep}{0pt}{0pt}% \arrayrulecolor{black}} \newcommand{\rowmidlineG}{\arrayrulecolor{tablerowcolor}% \specialrule{\aboverulesep}{0pt}{0pt}% \arrayrulecolor{blcolor}\specialrule{\lightrulewidth}{0pt}{0pt}% \arrayrulecolor{tablerowcolor}\specialrule{\belowrulesep}{0pt}{0pt}% \arrayrulecolor{black}} \newcommand{\myRef}[2]{#1} \newcommand{\myEqRef}[2]{(#1)} \makeatletter \newcommand{\dotfillstretch}[1]{% \leavevmode \cleaders\hb@xt@.44em{\hss.\hss}\hskip\z@\@plus #1fill \kern\z@ } 
\makeatother \usepackage{natbib} \bibliographystyle{plainnat} \usepackage{filecontents} \begin{filecontents}{\jobname.bib} @book{karlin1981second, AUTHOR = {Karlin, Samuel and Taylor, Howard M.}, TITLE = {A second course in stochastic processes}, PUBLISHER = {Academic Press, Inc. [Harcourt Brace Jovanovich, Publishers], New York-London}, YEAR = {1981}, PAGES = {xviii+542}, MRCLASS = {60-01 (60J60)}, MRNUMBER = {611513}, MRREVIEWER = {D. A. Darling}, } @article{hu2015parameter, AUTHOR = {Hu, Yaozhong and Lee, Chihoon and Lee, Myung Hee and Song, Jian}, TITLE = {Parameter estimation for reflected {O}rnstein-{U}hlenbeck processes with discrete observations}, JOURNAL = {Stat. Inference Stoch. Process.}, FJOURNAL = {Statistical Inference for Stochastic Processes. An International Journal Devoted to Time Series Analysis and the Statistics of Continuous Time Processes and Dynamical Systems}, VOLUME = {18}, YEAR = {2015}, NUMBER = {3}, PAGES = {279--291}, ISSN = {1387-0874}, MRCLASS = {62M05 (60J60 62F12)}, MRNUMBER = {3395608}, MRREVIEWER = {Rosa Maria Mininni}, DOI = {10.1007/s11203-014-9112-7}, URL = {https://doi.org/10.1007/s11203-014-9112-7}, } @incollection{hu2013parameter, AUTHOR = {Hu, Yaozhong and Song, Jian}, TITLE = {Parameter estimation for fractional {O}rnstein-{U}hlenbeck processes with discrete observations}, BOOKTITLE = {Malliavin calculus and stochastic analysis}, SERIES = {Springer Proc. Math. Stat.}, VOLUME = {34}, PAGES = {427--442}, PUBLISHER = {Springer, New York}, YEAR = {2013}, MRCLASS = {62M05 (60G22 60H07 60J60 62F12)}, MRNUMBER = {3070455}, MRREVIEWER = {Wei-Lin Xiao}, DOI = {10.1007/978-1-4614-5906-4_19}, URL = {https://doi.org/10.1007/978-1-4614-5906-4_19}, } @article{bass1987uniqueness, AUTHOR = {Bass, R. F. and Pardoux, \'{E}.}, TITLE = {Uniqueness for diffusions with piecewise constant coefficients}, JOURNAL = {Probab. Theory Related Fields}, FJOURNAL = {Probability Theory and Related Fields}, VOLUME = {76}, YEAR = {1987}, NUMBER = {4}, PAGES = {557--572}, ISSN = {0178-8051}, MRCLASS = {60J60}, MRNUMBER = {917679}, MRREVIEWER = {Ruth J. Williams}, DOI = {10.1007/BF00960074}, URL = {https://doi.org/10.1007/BF00960074}, } @book {MR717388, AUTHOR = {Tong, Howell}, TITLE = {Threshold models in nonlinear time series analysis}, SERIES = {Lecture Notes in Statistics}, VOLUME = {21}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {1983}, PAGES = {x+323}, ISBN = {0-387-90918-4}, MRCLASS = {62M10}, MRNUMBER = {717388}, MRREVIEWER = {G. M\'{e}lard}, DOI = {10.1007/978-1-4684-7888-4}, URL = {https://doi.org/10.1007/978-1-4684-7888-4}, } @article{meyn1993stability, AUTHOR = {Meyn, Sean P. and Tweedie, R. L.}, TITLE = {Stability of {M}arkovian processes. {II}. {C}ontinuous-time processes and sampled chains}, JOURNAL = {Adv. in Appl. Probab.}, FJOURNAL = {Advances in Applied Probability}, VOLUME = {25}, YEAR = {1993}, NUMBER = {3}, PAGES = {487--517}, ISSN = {0001-8678}, MRCLASS = {60J27}, MRNUMBER = {1234294}, MRREVIEWER = {Esa Nummelin}, DOI = {10.2307/1427521}, URL = {https://doi.org/10.2307/1427521}, } @article{cheng2020generalized, AUTHOR = {Cheng, Yiying and Hu, Yaozhong and Long, Hongwei}, TITLE = {Generalized moment estimators for {$\alpha$}-stable {O}rnstein-{U}hlenbeck motions from discrete observations}, JOURNAL = {Stat. Inference Stoch. Process.}, FJOURNAL = {Statistical Inference for Stochastic Processes. 
An International Journal Devoted to Time Series Analysis and the Statistics of Continuous Time Processes and Dynamical Systems}, VOLUME = {23}, YEAR = {2020}, NUMBER = {1}, PAGES = {53--81}, ISSN = {1387-0874}, MRCLASS = {62F12 (60G52 62M05)}, MRNUMBER = {4072252}, DOI = {10.1007/s11203-019-09201-4}, URL = {https://doi.org/10.1007/s11203-019-09201-4}, } @article{brockwell1992continuous, title = "On continuous-time threshold autoregression", journal = "International Journal of Forecasting", volume = "8", number = "2", pages = "157 - 173", year = "1992", issn = "0169-2070", doi = "https://doi.org/10.1016/0169-2070(92)90116-Q", url = "http://www.sciencedirect.com/science/article/pii/016920709290116Q", author = "P.J. Brockwell and R.J. Hyndman", } @article{su2015quasi, AUTHOR = {Su, Fei and Chan, Kung-Sik}, TITLE = {Quasi-likelihood estimation of a threshold diffusion process}, JOURNAL = {J. Econometrics}, FJOURNAL = {Journal of Econometrics}, VOLUME = {189}, YEAR = {2015}, NUMBER = {2}, PAGES = {473--484}, ISSN = {0304-4076}, MRCLASS = {62M05 (60J60 91B84 91G30 91G70)}, MRNUMBER = {3414915}, DOI = {10.1016/j.jeconom.2015.03.038}, URL = {https://doi.org/10.1016/j.jeconom.2015.03.038}, } @article{su2017testing, AUTHOR = {Su, Fei and Chan, Kung-Sik}, TITLE = {Testing for threshold diffusion}, JOURNAL = {J. Bus. Econom. Statist.}, FJOURNAL = {Journal of Business \& Economic Statistics}, VOLUME = {35}, YEAR = {2017}, NUMBER = {2}, PAGES = {218--227}, ISSN = {0735-0015}, MRCLASS = {62M02 (60J60 62F03 91G30 91G70)}, MRNUMBER = {3622833}, MRREVIEWER = {Georgiy M. Shevchenko}, DOI = {10.1080/07350015.2015.1073594}, URL = {https://doi.org/10.1080/07350015.2015.1073594}, } @article{stramer1996existence, AUTHOR = {Stramer, O. and Tweedie, R. L. and Brockwell, P. J.}, TITLE = {Existence and stability of continuous time threshold {ARMA} processes}, JOURNAL = {Statist. Sinica}, FJOURNAL = {Statistica Sinica}, VOLUME = {6}, YEAR = {1996}, NUMBER = {3}, PAGES = {715--732}, ISSN = {1017-0405}, MRCLASS = {60J25 (62M05 62M10)}, MRNUMBER = {1410743}, MRREVIEWER = {B. M. P\"{o}tscher}, } @book{meyn2012markov, AUTHOR = {Meyn, Sean and Tweedie, Richard L.}, TITLE = {Markov chains and stochastic stability}, EDITION = {Second}, NOTE = {With a prologue by Peter W. Glynn}, PUBLISHER = {Cambridge University Press, Cambridge}, YEAR = {2009}, PAGES = {xxviii+594}, ISBN = {978-0-521-73182-9}, MRCLASS = {60J05}, MRNUMBER = {2509253}, MRREVIEWER = {M. Iosifescu}, DOI = {10.1017/CBO9780511626630}, URL = {https://doi.org/10.1017/CBO9780511626630}, } @article{chan1993consistency, AUTHOR = {Chan, K. S.}, TITLE = {Consistency and limiting distribution of the least squares estimator of a threshold autoregressive model}, JOURNAL = {Ann. Statist.}, FJOURNAL = {The Annals of Statistics}, VOLUME = {21}, YEAR = {1993}, NUMBER = {1}, PAGES = {520--533}, ISSN = {0090-5364}, MRCLASS = {62M10 (62J05)}, MRNUMBER = {1212191}, MRREVIEWER = {Ed McKenzie}, DOI = {10.1214/aos/1176349040}, URL = {https://doi.org/10.1214/aos/1176349040}, } @article{brockwell2007continuous, ISSN = {10170405, 19968507}, URL = {http://www.jstor.org/stable/26432511}, author = {Peter J. Brockwell and Richard A. Davis and Yu Yang}, journal = {Statistica Sinica}, number = {1}, pages = {63--80}, publisher = {Institute of Statistical Science, Academia Sinica}, title = {CONTINUOUS-TIME {G}AUSSIAN AUTOREGRESSION}, volume = {17}, year = {2007} } @article{stramer2007bayesian, AUTHOR = {Stramer, O. and Roberts, G. 
O.}, TITLE = {On {B}ayesian analysis of nonlinear continuous-time autoregression models}, JOURNAL = {J. Time Ser. Anal.}, FJOURNAL = {Journal of Time Series Analysis}, VOLUME = {28}, YEAR = {2007}, NUMBER = {5}, PAGES = {744--762}, ISSN = {0143-9782}, MRCLASS = {Expansion}, MRNUMBER = {2395912}, DOI = {10.1111/j.1467-9892.2007.00549.x}, URL = {https://doi.org/10.1111/j.1467-9892.2007.00549.x}, } } @article{brockwell1991continuous, AUTHOR = {Brockwell, Peter J. and Hyndman, Rob J. and Grunwald, Gary K.}, TITLE = {Continuous time threshold autoregressive models}, JOURNAL = {Statist. Sinica}, FJOURNAL = {Statistica Sinica}, VOLUME = {1}, YEAR = {1991}, NUMBER = {2}, PAGES = {401--410}, ISSN = {1017-0405}, MRCLASS = {62M10 (62F10 62M20)}, MRNUMBER = {1130126}, MRREVIEWER = {Tarmo Pukkila}, } @book{brooks2011handbook, TITLE = {Handbook of {M}arkov chain {M}onte {C}arlo}, SERIES = {Chapman \& Hall/CRC Handbooks of Modern Statistical Methods}, EDITOR = {Brooks, Steve and Gelman, Andrew and Jones, Galin L. and Meng, Xiao-Li}, PUBLISHER = {CRC Press, Boca Raton, FL}, YEAR = {2011}, PAGES = {xxvi+592}, ISBN = {978-1-4200-7941-8}, MRCLASS = {62-06 (60J22 62F15 65C05)}, MRNUMBER = {2742422}, DOI = {10.1201/b10905}, URL = {https://doi.org/10.1201/b10905}, } @article{kutoyants2012identification, AUTHOR = {Kutoyants, Yury A.}, TITLE = {On identification of the threshold diffusion processes}, JOURNAL = {Ann. Inst. Statist. Math.}, FJOURNAL = {Annals of the Institute of Statistical Mathematics}, VOLUME = {64}, YEAR = {2012}, NUMBER = {2}, PAGES = {383--413}, ISSN = {0020-3157}, MRCLASS = {60J60 (62F12 62F15 62M10)}, MRNUMBER = {2878912}, MRREVIEWER = {Matteo Ruggiero}, DOI = {10.1007/s10463-010-0318-1}, URL = {https://doi.org/10.1007/s10463-010-0318-1}, } @book {MR1652247, AUTHOR = {van der Vaart, A. W.}, TITLE = {Asymptotic statistics}, SERIES = {Cambridge Series in Statistical and Probabilistic Mathematics}, VOLUME = {3}, PUBLISHER = {Cambridge University Press, Cambridge}, YEAR = {1998}, PAGES = {xvi+443}, ISBN = {0-521-49603-9; 0-521-78450-6}, MRCLASS = {62-02 (62E20 62F05 62F12 62G07 62G09 62G20)}, MRNUMBER = {1652247}, MRREVIEWER = {Nancy Reid}, DOI = {10.1017/CBO9780511802256}, URL = {https://doi.org/10.1017/CBO9780511802256}, } @incollection{browne1995piecewise, AUTHOR = {Browne, Sid and Whitt, Ward}, TITLE = {Piecewise-linear diffusion processes}, BOOKTITLE = {Advances in queueing}, SERIES = {Probab. Stochastics Ser.}, PAGES = {463--480}, PUBLISHER = {CRC, Boca Raton, FL}, YEAR = {1995}, MRCLASS = {60J05 (60F05 60J60 60J80)}, MRNUMBER = {1395170}, MRREVIEWER = {V\v{e}ra L\'{a}nsk\'{a}}, } @article{linetsky2005transition, AUTHOR = {Linetsky, Vadim}, TITLE = {On the transition densities for reflected diffusions}, JOURNAL = {Adv. in Appl. Probab.}, FJOURNAL = {Advances in Applied Probability}, VOLUME = {37}, YEAR = {2005}, NUMBER = {2}, PAGES = {435--460}, ISSN = {0001-8678}, MRCLASS = {60J35 (60J60 60J70 60K25 91B28)}, MRNUMBER = {2144561}, DOI = {10.1239/aap/1118858633}, URL = {https://doi.org/10.1239/aap/1118858633}, } @article{zhuo2017simple, AUTHOR = {Zhuo, Xiaoyang and Xu, Guangli and Zhang, Haoyan}, TITLE = {A simple trinomial lattice approach for the skew-extended {CIR} models}, JOURNAL = {Math. Financ. 
Econ.}, FJOURNAL = {Mathematics and Financial Economics}, VOLUME = {11}, YEAR = {2017}, NUMBER = {4}, PAGES = {499--526}, ISSN = {1862-9679}, MRCLASS = {91G60 (65C05)}, MRNUMBER = {3709385}, MRREVIEWER = {Zhijian He}, DOI = {10.1007/s11579-017-0192-1}, URL = {https://doi.org/10.1007/s11579-017-0192-1}, } @article{zhuo2017efficient, AUTHOR = {Zhuo, Xiaoyang and Menoukeu-Pamen, Olivier}, TITLE = {Efficient piecewise trees for the generalized skew {V}asicek model with discontinuous drift}, JOURNAL = {Int. J. Theor. Appl. Finance}, FJOURNAL = {International Journal of Theoretical and Applied Finance}, VOLUME = {20}, YEAR = {2017}, NUMBER = {4}, PAGES = {1750028, 34}, ISSN = {0219-0249}, MRCLASS = {91G60 (91G20)}, MRNUMBER = {3658513}, MRREVIEWER = {Piotr Nowak}, DOI = {10.1142/S0219024917500285}, URL = {https://doi.org/10.1142/S0219024917500285}, } @article{wang2015skew, AUTHOR = {Wang, Suxin and Song, Shiyu and Wang, Yongjin}, TITLE = {Skew {O}rnstein-{U}hlenbeck processes and their financial applications}, JOURNAL = {J. Comput. Appl. Math.}, FJOURNAL = {Journal of Computational and Applied Mathematics}, VOLUME = {273}, YEAR = {2015}, PAGES = {363--382}, ISSN = {0377-0427}, MRCLASS = {60J60 (60J70 91G40 91G80)}, MRNUMBER = {3239257}, DOI = {10.1016/j.cam.2014.06.023}, URL = {https://doi.org/10.1016/j.cam.2014.06.023}, } @article{gairat2017density, AUTHOR = {Gairat, Alexander and Shcherbakov, Vadim}, TITLE = {Density of skew {B}rownian motion and its functionals with application in finance}, JOURNAL = {Math. Finance}, FJOURNAL = {Mathematical Finance. An International Journal of Mathematics, Statistics and Financial Economics}, VOLUME = {27}, YEAR = {2017}, NUMBER = {4}, PAGES = {1069--1088}, ISSN = {0960-1627}, MRCLASS = {60J65 (60J70 91G20)}, MRNUMBER = {3705163}, MRREVIEWER = {Hardy Hulley}, DOI = {10.1111/mafi.12120}, URL = {https://doi.org/10.1111/mafi.12120}, } @article{decamps2006self, AUTHOR = {Decamps, Marc and Goovaerts, Marc and Schoutens, Wim}, TITLE = {Self exciting threshold interest rates models}, JOURNAL = {Int. J. Theor. Appl. Finance}, FJOURNAL = {International Journal of Theoretical and Applied Finance}, VOLUME = {9}, YEAR = {2006}, NUMBER = {7}, PAGES = {1093--1122}, ISSN = {0219-0249}, MRCLASS = {91B28 (60H10 60H30)}, MRNUMBER = {2269906}, DOI = {10.1142/S0219024906003937}, URL = {https://doi.org/10.1142/S0219024906003937}, } @article{jiang2018pricing, AUTHOR = {Jiang, Yiming and Song, Shiyu and Wang, Yongjin}, TITLE = {Pricing {E}uropean vanilla options under a jump-to-default threshold diffusion model}, JOURNAL = {J. Comput. Appl. Math.}, FJOURNAL = {Journal of Computational and Applied Mathematics}, VOLUME = {344}, YEAR = {2018}, PAGES = {438--456}, ISSN = {0377-0427}, MRCLASS = {60H30 (91G20)}, MRNUMBER = {3825527}, DOI = {10.1016/j.cam.2018.04.039}, URL = {https://doi.org/10.1016/j.cam.2018.04.039}, } @article{siu2006option, AUTHOR = {Siu, Tak Kuen and Tong, Howell and Yang, Hailiang}, TITLE = {Option pricing under threshold autoregressive models by threshold {E}sscher transform}, JOURNAL = {J. Ind. Manag. Optim.}, FJOURNAL = {Journal of Industrial and Management Optimization}, VOLUME = {2}, YEAR = {2006}, NUMBER = {2}, PAGES = {177--197}, ISSN = {1547-5816}, MRCLASS = {91B28 (62M10 62P05 91B84)}, MRNUMBER = {2208402}, DOI = {10.3934/jimo.2006.2.177}, URL = {https://doi.org/10.3934/jimo.2006.2.177}, } @article{siu2016self, AUTHOR = {Siu, Tak Kuen}, TITLE = {A self-exciting threshold jump-diffusion model for option valuation}, JOURNAL = {Insurance Math. 
Econom.}, FJOURNAL = {Insurance: Mathematics \& Economics}, VOLUME = {69}, YEAR = {2016}, PAGES = {168--193}, ISSN = {0167-6687}, MRCLASS = {91G20 (60H30 60J75)}, MRNUMBER = {3515887}, MRREVIEWER = {Xingchun Wang}, DOI = {10.1016/j.insmatheco.2016.05.008}, URL = {https://doi.org/10.1016/j.insmatheco.2016.05.008}, } @book {MR1121940, AUTHOR = {Karatzas, Ioannis and Shreve, Steven E.}, TITLE = {Brownian motion and stochastic calculus}, SERIES = {Graduate Texts in Mathematics}, VOLUME = {113}, EDITION = {Second}, PUBLISHER = {Springer-Verlag, New York}, YEAR = {1991}, PAGES = {xxiv+470}, ISBN = {0-387-97655-8}, MRCLASS = {60J65 (35K99 35R60 60G44 60H10 60J60)}, MRNUMBER = {1121940}, DOI = {10.1007/978-1-4612-0949-2}, URL = {https://doi.org/10.1007/978-1-4612-0949-2}, } @book {MR1083357, AUTHOR = {Revuz, Daniel and Yor, Marc}, TITLE = {Continuous martingales and {B}rownian motion}, SERIES = {Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, VOLUME = {293}, PUBLISHER = {Springer-Verlag, Berlin}, YEAR = {1991}, PAGES = {x+533}, ISBN = {3-540-52167-4}, MRCLASS = {60G48 (60G44 60H05 60H10)}, MRNUMBER = {1083357}, MRREVIEWER = {F. B. Knight}, DOI = {10.1007/978-3-662-21726-9}, URL = {https://doi.org/10.1007/978-3-662-21726-9}, } @article{chi2017option, author = {Chi, Zeyu and Dong, Fangyuan and Wong, Hoi Ying}, title = {Option Pricing with Threshold Mean Reversion}, journal = {Journal of Futures Markets}, volume = {37}, number = {2}, pages = {107-131}, doi = {10.1002/fut.21795}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1002/fut.21795}, year = {2017} } @book {MR0240343, AUTHOR = {Buchholz, Herbert}, TITLE = {The confluent hypergeometric function with special emphasis on its applications}, SERIES = {Translated from the German by H. Lichtblau and K. Wetzel. Springer Tracts in Natural Philosophy, Vol. 15}, PUBLISHER = {Springer-Verlag New York Inc., New York}, YEAR = {1969}, PAGES = {xviii+238}, MRCLASS = {33.20}, MRNUMBER = {0240343}, } @book {MR0174795, AUTHOR = {Lebedev, N. N.}, TITLE = {Special functions and their applications}, SERIES = {Revised English edition. Translated and edited by Richard A. Silverman}, PUBLISHER = {Prentice-Hall, Inc., Englewood Cliffs, N.J.}, YEAR = {1965}, PAGES = {xii+308}, MRCLASS = {33.00}, MRNUMBER = {0174795}, MRREVIEWER = {A. 
Erd\'{e}lyi}, } @article{doi:10.1111/sjos.12417, author = {Lejay, Antoine and Pigato, Paolo}, title = {Maximum likelihood drift estimation for a threshold diffusion}, journal = {Scandinavian Journal of Statistics}, volume = {47}, number = {3}, pages = {609-637}, keywords = {maximum likelihood estimator, mixed normal distribution, null recurrent process, oscillating Brownian motion, threshold diffusion, transient process}, doi = {10.1111/sjos.12417}, url = {https://onlinelibrary.wiley.com/doi/abs/10.1111/sjos.12417}, year = {2020} } @article{doi:10.1080/14697688.2020.1781235, author = { Kailin Ding and Zhenyu Cui and Yongjin Wang }, title = {A Markov chain approximation scheme for option pricing under skew diffusions}, journal = {Quantitative Finance}, volume = {0}, number = {0}, pages = {1-20}, year = {2020}, publisher = {Routledge}, doi = {10.1080/14697688.2020.1781235}, URL = { https://doi.org/10.1080/14697688.2020.1781235}, } } } } \end{filecontents} \begin{document} \begin{frontmatter} \title{Parameter estimation for threshold Ornstein-Uhlenbeck processes from discrete observations} \author{Yaozhong Hu\footnote{Y.Hu is supported by an NSERC discovery grant and a startup fund of University of Alberta. }} \ead{yaozhong@ualberta.ca} \author{Yuejuan Xi\footnote{Y. Xi is supported by the National Natural Science Foundation of China under Grant No. $71532001$ and $11631004$ and China Scholarship Council.}} \ead{yjx@mail.nankai.edu.cn} \address{Department of Math and Stat Sciences, University of Alberta at Edmonton, Canada.} \address{School of Mathematical Sciences, Nankai University, Tianjin, China.} \begin{abstract} Assuming that a threshold Ornstein-Uhlenbeck process is observed at discrete time instants, we propose generalized moment estimators to estimate the parameters. Our theoretical basis is the celebrated ergodic theorem. To use this theorem we need to find the explicit form of the invariant measure. With the sampling time step $h>0$ arbitrarily fixed, we prove the strong consistency and asymptotic normality of our estimators as the sample size $N\to\infty$. \end{abstract} \begin{keyword} Threshold Ornstein-Uhlenbeck process; invariant measure; ergodic theorem; generalized moment estimators; strong consistency; asymptotic normality. \MSC[2010] 62M05\sep 62F12\end{keyword} \end{frontmatter} \setcounter{equation}{0}\Section{Introduction} Let $W=\{W(t)\}_{t\ge0}$ be a one-dimensional standard Brownian motion on a filtered probability space $ (\Omega, \mathcal F, \mathbb P, (\mathcal F_t)_{\{t\ge 0\}} )$ and let a threshold Ornstein-Uhlenbeck (hereafter abbreviated as OU) process $X$ be described by the following stochastic differential equation (SDE): \begin{equation}\label{e:MTOU_general} dX_t=\sum_{i=1}^m(\beta_i-\alpha_i X_t)I(\theta_{i-1}< X_t\le \theta_i)dt+\sigma dW_t, \end{equation} where $\theta_i, i=0, 1, \cdots, m$ with $-\infty=\theta_0<\theta_1<\theta_2<\cdots<\theta_m=\infty$ are the so-called thresholds; $\beta_i\in \mathbb{R} $ and $\alpha_i>0$ are the drift parameters; $\sigma>0$ is the diffusion parameter; $X_0\in\mathbb{R} $ is a given initial condition; and $I(\cdot)$ denotes the indicator function. The existence and uniqueness of the solution to the above equation \eqref{e:MTOU_general} have been known \citep[e.g.][]{bass1987uniqueness}. 
Assume that the parameters $\alpha_i$ and $\beta_i$ are unknown and that we can observe the state $X_t$ of the process at discrete time instants $t_k=kh$, $k=1, 2, \cdots, N$, where $h$ is an arbitrarily given fixed time step. This paper aims to estimate the unknown parameters $\Theta=({\alpha}_1, \cdots, {\alpha}_m, {\beta}_1, \cdots, {\beta}_m)$ in \eqref{e:MTOU_general} by using the obtained observations $ \left\{X_{kh}\,, k= 1, 2, \cdots, N\right\}$. Threshold models have been widely studied and applied in various fields. On the one hand, threshold autoregressive models were introduced to capture nonlinearities in time series. \citet{MR717388} found that threshold models are well suited to describe the asymmetry in the variance-generating mechanism. \citet{brockwell1991continuous}, as well as \citet{brockwell1992continuous}, investigated the problems of modelling and forecasting continuous-time threshold processes. \citet{browne1995piecewise} showed that the piecewise-linear diffusion tends to be a good approximation for some birth-and-death processes. Threshold processes also play an important role in finance; we refer to the works of \citet{chi2017option}, \citet{decamps2006self}, \citet{jiang2018pricing}, \citet{siu2006option}, \citet{siu2016self} and references therein. On the other hand, threshold diffusion processes are closely tied to skew diffusion processes, which have been widely treated in the financial literature \citep[see][]{doi:10.1080/14697688.2020.1781235,gairat2017density,wang2015skew,zhuo2017efficient,zhuo2017simple}. When threshold models are applied, an important problem is to estimate the parameters $\Theta$ through the available historical data. Several approaches have been proposed to estimate the parameters of threshold diffusion processes, such as least squares estimation, likelihood estimation, and Bayesian estimation. We refer the readers to \citet{brockwell2007continuous}, \citet{chan1993consistency}, \citet{kutoyants2012identification}, \citet{doi:10.1111/sjos.12417}, and \citet{stramer2007bayesian}. Let us also mention that in \citet{su2015quasi, su2017testing}, the authors proposed novel quasi-likelihood estimators and tests. In the above-mentioned estimation methods, the observations are assumed to be obtained continuously. Since real data are usually collected at discrete time instants, it is necessary to estimate the parameters when only discrete observations are available. To the best of our knowledge, the problem of estimating the parameters of a continuous-time threshold diffusion process from discrete observations is under-explored. One situation with discrete-time observations is that of high-frequency data, which means that in the observations $\left\{X_{kh}, k=1, 2, \cdots, N\right\}$ the step $h$ depends on $N$, with $h\rightarrow 0$ and $Nh\rightarrow \infty$. In this case it is possible to approximate the (stochastic) integrals by their ``Riemann-It\^o'' sums in order to adapt the continuous-time estimators to discrete observations. In reality, continuous or high-frequency observations are often impossible or so costly that we cannot afford to collect such a large amount of data. As a consequence, the time step $h$ must be allowed to be an arbitrarily fixed constant. Hence, we cannot borrow methods that are only valid for continuous-time observations or for high-frequency data. The present work proposes a completely different approach to address this problem.
Our approach is motivated by previous works on the construction of such estimators: the ergodic-type estimators for the OU process driven by fractional Brownian motion \citep[e.g.][]{hu2013parameter}; the ergodic-type estimators for the reflected OU process driven by standard Brownian motions \citep[e.g.][]{hu2015parameter}; and the ergodic-type estimators for the OU process driven by stable L\'evy motions \citep[e.g.][]{cheng2020generalized}. As in the above-mentioned papers, we use the ergodic theorem to obtain the generalized moment estimators for the parameters. To this end, we first need to prove the ergodic theorem for our threshold diffusion \eqref{e:MTOU_general}. Namely, we need to prove that there is a probability density function $\psi(x)$ such that \begin{equation*} \lim_{N\rightarrow \infty}\frac{1}{N} \sum_{k=1}^N f(X_{t_k})=\int_{\mathbb{R} } f(x) \psi(x) dx \end{equation*} and we also need to find the explicit form of the probability density $\psi(x)$. This is done in Section \ref{sec:pre}. After obtaining the explicit dependence of the probability density on the parameters we let \begin{equation} \frac{1}{N}\sum_{k=1}^N f_i(X_{t_k})=\int_{\mathbb{R} } f_i(x) \psi(x) dx \label{e.1.2} \end{equation} for different appropriately chosen functions $f_i$ to obtain a suitable system of algebraic equations for the parameters. In Equation \eqref{e:MTOU_general} there are $3m$ unknown parameters $ {\alpha}_1, \cdots, {\alpha}_m, {\beta}_1, \cdots, {\beta}_m, {\theta}_1, \cdots, {\theta}_{m-1}, {\sigma}$. Presumably, we can choose $3m$ different functions $f$ so that we obtain a system of $3m$ equations for the $3m$ unknowns. However, some parameters are coupled with each other and cannot be separated. For example, from Remark \ref{r.3.1}, when $m=2$, $ {\theta}_0=-\infty $, ${\theta}_1=0$, $\theta_2=\infty$, ${\beta}_1={\beta}_2=0$, we see that if $(\frac{{\alpha}_1}{{\sigma}^2}, \frac{{\alpha}_2}{{\sigma}^2})$ remains the same, then the invariant probability density $\psi_1$ remains the same function. So, even in this simplest case we cannot expect to use \eqref{e.1.2} to estimate ${\alpha}_1$, ${\alpha}_2$, and ${\sigma}$ simultaneously. To avoid this identifiability problem, in this paper we focus on the estimation of the parameters $\Theta$ assuming ${\theta}_1, \cdots, {\theta}_{m-1}, {\sigma}$ are known. Furthermore, to better convey our idea, we focus on the case $m=2$, ${\theta}_0=-\infty$, ${\theta}_2=\infty$, and the parameters ${\theta}={\theta}_1$ and ${\sigma}$ are known. This means that we shall focus on the following equation: \begin{equation}\label{e:MTOU} dX_t= (\beta_1-\alpha_1 X_t)I( X_t\le \theta )dt+ (\beta_2-\alpha_2 X_t)I( X_t> \theta )dt+ \sigma dW_t\,, \end{equation} where $\theta\in\mathbb{R} $, $\beta_1$, $\beta_2\in\mathbb{R} $, $\alpha_1$, $\alpha_2\in(0,\infty)$, and $\sigma\in(0,\infty)$. However, it should be mentioned that if ${\sigma}$ and $\theta$ are unknown, we may assume that the data are collected at high frequency. In this case, ${\sigma}$ and $\theta$ can be estimated in the manner of \citet{kutoyants2012identification} and \citet{su2015quasi}, respectively. Now that we have four parameters ${\Theta}=({\alpha}_1, {\alpha}_2, {\beta}_1, {\beta}_2)$, we only need to choose four different $f$ to obtain a system of four equations. However, since the invariant probability density $\psi$ depends on the parameters in a very complex way, it is hard to know whether the solution exists (locally and globally) uniquely.
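To fix ideas, the following Python sketch (an illustration with arbitrarily chosen parameter values) simulates Eq.~\eqref{e:MTOU} with an Euler--Maruyama scheme on a fine auxiliary grid, keeps only the discrete observations $X_{kh}$ with an arbitrarily fixed step $h$, and evaluates empirical moments of the type appearing in \eqref{e.1.2}; the particular functions used below are chosen only for illustration and are not the specific choices made in Section~\ref{sec:S}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# hypothetical parameter values for equation (e:MTOU)
alpha1, alpha2, beta1, beta2 = 2.0, 1.0, 0.5, -0.5
sigma, theta = 1.0, 0.0

def drift(x):
    return (beta1 - alpha1 * x) if x <= theta else (beta2 - alpha2 * x)

h, N = 0.5, 5000          # fixed observation step and sample size
M = 50                    # Euler sub-steps per observation interval
dt = h / M

X = np.empty(N + 1)
X[0] = 0.0
x = X[0]
for k in range(N):
    for _ in range(M):    # Euler-Maruyama on the fine auxiliary grid
        x = x + drift(x) * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    X[k + 1] = x          # keep only the coarse observations X_{kh}

# empirical moments as in (e.1.2), e.g. with f(x) = x 1{x <= theta}
# and f(x) = x 1{x > theta}
obs = X[1:]
print(np.mean(obs * (obs <= theta)), np.mean(obs * (obs > theta)))
\end{verbatim}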
One of the major contributions of this work is to appropriately use the conditional moments so that we can obtain manageable equations. This will be carried out in Section \ref{sec:S}. We briefly summarize our efforts in that section as follows. \begin{enumerate} \item[(1)] In Section \ref{sub:I}, we assume ${\beta}_1={\beta}_2 ={\theta}=0$. The conditional moments are introduced to obtain the explicit generalized moment estimators for ${\alpha}_1$ and ${\alpha}_2$. Furthermore, the strong consistency and asymptotic normality of the estimators are obtained. \item[(2)] In Section \ref{sub:II}, we assume that ${\beta}_1={\beta}_2 =0$ whereas ${\theta}$ is known but is not equal to $0$. In this case, we can obtain two uncoupled algebraic equations for the two parameters ${\alpha}_1$ and ${\alpha}_2$ by means of conditional moments. Each of these equations will be shown to have a globally unique solution, yielding the generalized moment estimators for ${\alpha}_1$ and ${\alpha}_2$, although not explicitly. The strong consistency and asymptotic normality of the estimators are obtained. \item[(3)] In Section \ref{sub:III}, we further assume that ${\theta}$ is known but is not equal to $0$ and we want to estimate all four parameters $({\alpha}_1, {\alpha}_2, {\beta}_1, {\beta}_2)$. We use the conditional moments to convert the four equations into two uncoupled systems of equations to obtain the generalized estimators for $\alpha_1$, $\alpha_2$, $\beta_1$, and $\beta_2$. The Jacobians (which are independent of data) of the two systems are computed, whose non-degeneracy implies that both systems have unique local solutions. To seek an answer for global uniqueness, we reduce the problem to a simpler one of finding the zeros of two functions, each of a single variable. If the derivatives (now involving observation data) of such functions are nonzero, then the global uniqueness holds by the mean value theorem. \end{enumerate} In cases (2) and (3) the explicit solution to the system of algebraic equations is still hard to obtain. But there are many standard numerical methods, such as the Newton-Raphson iteration. The nonlinear system can be solved in Matlab and Mathematica with the built-in functions ``fsolve'' and ``FindRoot'', respectively. In Section \ref{sec:N}, some numerical experiments are provided to show the efficiency of our estimation approach. Section \ref{sec:C} concludes this paper. \setcounter{equation}{0}\Section{Ergodicity and invariant density} \label{sec:pre} Before proceeding to construct our estimators, we need some stationarity and ergodicity properties of the threshold diffusion process described by \eqref{e:MTOU_general}. The following proposition is adapted from \citet{brockwell1991continuous}, \citet{brockwell1992continuous}, and \citet{browne1995piecewise}. \begin{Pro}\label{pro:density} Suppose that $\sigma>0$. Then the process defined by \eqref{e:MTOU_general} has a stationary distribution if and only if \begin{equation*} \lim_{x\to-\infty}(-\alpha_1 x^2+2\beta_1x)<0\,,\quad \lim_{x\to\infty}(-\alpha_m x^2+2\beta_mx)<0\;. \end{equation*} Furthermore, the stationary density is given by \begin{equation*} \psi(x)=\sum_{i=1}^mk_i\exp\left(\frac{-\alpha_ix^2+2\beta_ix}{\sigma^2}\right)I(\theta_{i-1}<x\le \theta_i), \label{e.2.1} \end{equation*} where $k_i$ are uniquely determined by the system of $m$ equations: \begin{equation*} \int_{-\infty}^\infty\psi(x)dx=1\,,\quad {\rm and}\quad \psi(\theta_i-)=\psi(\theta_i+)\,,\quad i=1,2,\ldots, m-1 \,. 
\end{equation*} \end{Pro} \begin{rmk}\label{rmk:0} The constants $k_i$, $i=1, 2, \cdots, m$, depend on the parameters in the equation \eqref{e:MTOU_general}. This is one of the main reasons why the analysis of the resulting system of algebraic equations becomes involved. \end{rmk} Although the stationary density function $\psi(\cdot)$ is not Gaussian, it is a mixture of truncated Gaussian densities and has finite moments of all orders. Moreover, if the threshold OU process $X$ is stationary, it is also geometrically ergodic \citep[see][]{stramer1996existence}. The following lemma describes the stochastic stability of threshold OU processes and plays a crucial role in our estimation approach. \begin{lemma}\label{le:ergodic} The $h$-skeleton sampled chain $\{X_{kh}:k\ge0\}$ of the process $X$ defined by \eqref{e:MTOU_general} is ergodic; namely, the following ergodic identity holds: for any $X_0\in\mathcal S:=\mathbb{R} $ and for any $f\in L_1(\mathbb{R} , \psi(x) dx)$, \begin{equation*} \lim_{N\to\infty}\frac{1}{N}\sum_{k=1}^N f(X_{kh})=\mathbb E[f(X_\infty)]=\int_{\mathbb R} f(x)\psi(x)dx, ~a.s. \end{equation*} \end{lemma} \begin{pf} It suffices to show that the process $X$ is bounded in probability on average and is a $T$-process \citep[see][Theorem 8.1]{meyn1993stability}. We note that the threshold diffusion process $X$ is a $\varphi$-irreducible $T$-process, where $\varphi$ is the Lebesgue measure \citep[see][]{stramer1996existence}. Moreover, since for $i=1,2$, \begin{equation*} \lim_{|x|\to\infty}\left(-\alpha_i x^2+2\beta_i x \right)<0, \end{equation*} we have from \citet[Theorem 5.1]{stramer1996existence} that $X$ is a positive Harris recurrent process. Finally, by virtue of \citet[Theorem 3.2(ii)]{meyn1993stability}, we conclude that $X$ is bounded in probability on average. \end{pf} Using the same definitions as those in \citet{karlin1981second}, the scale density function $s(x)$, scale measure $S(x)$, and speed density function $m(x)$ are given by \begin{equation*} s(x)=\left\{ \begin{aligned} &c_1\exp\left( -\frac{2\beta_1x}{\sigma^2}+\frac{\alpha_1x^2}{\sigma^2}\right) \;, & x\le\theta,\\ &c_2\exp\left( -\frac{2\beta_2x}{\sigma^2}+\frac{\alpha_2x^2}{\sigma^2}\right)\;, &x>\theta, \end{aligned} \right. \end{equation*} \begin{equation*} S(x)=\int_{-\infty} ^x s(y)dy, \quad m(x)=\frac{2}{s(x)\sigma^2}, \end{equation*} where $c_1=\exp\left(-\frac{2\beta_2\theta}{\sigma^2}+\frac{\alpha_2\theta^2}{\sigma^2}\right)$ and $c_2=\exp\left(-\frac{2\beta_1\theta}{\sigma^2}+\frac{\alpha_1\theta^2}{\sigma^2}\right)$. For $i=1,2$, let \begin{equation*} \widetilde z_i=\frac{\sqrt{2\alpha_i}}{\sigma}\left(\theta-\frac{\beta_i}{\alpha_i}\right), \quad b_i=\frac{\beta_i^2}{\sigma^2\alpha_i}\;. \end{equation*} Then the coefficients $k_1$ and $k_2$ of $\psi(x)$ are given by \begin{align}\label{e:k1} k_1&=\frac{1}{\sigma\sqrt \pi}\frac{\phi(\widetilde z_2)}{\phi(\widetilde z_2)e^{b_1}\Phi(\widetilde z_1)/\sqrt{\alpha_1}+{\phi(\widetilde z_1)e^{b_1}\Phi(-\widetilde z_2)/\sqrt{\alpha_2}} }\;,\\\label{e:k2} k_2&=\frac{1}{\sigma\sqrt \pi}\frac{\phi(\widetilde z_1)}{\phi(\widetilde z_2)e^{b_2}\Phi(\widetilde z_1)/\sqrt{\alpha_1}+{\phi(\widetilde z_1)e^{b_2}\Phi(-\widetilde z_2)/\sqrt{\alpha_2}} }\;, \end{align} where $\phi(x):=\frac{1}{\sqrt{2\pi}}e^{-x^2/2}$ is the standard normal density and $\Phi(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^xe^{-\frac{y^2}{2}}dy$ is the standard normal distribution function. 
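As a quick numerical sanity check of \eqref{e:k1}--\eqref{e:k2} (our own illustration, not used in the sequel), one can evaluate $k_1$ and $k_2$ with SciPy and verify that the resulting piecewise density integrates to one and is continuous at $\theta$; the parameter values below are chosen only for illustration.
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def stationary_constants(alpha1, alpha2, beta1, beta2, theta, sigma):
    """Evaluate k1, k2 of (e:k1)-(e:k2)."""
    z1 = np.sqrt(2 * alpha1) / sigma * (theta - beta1 / alpha1)
    z2 = np.sqrt(2 * alpha2) / sigma * (theta - beta2 / alpha2)
    b1 = beta1 ** 2 / (sigma ** 2 * alpha1)
    b2 = beta2 ** 2 / (sigma ** 2 * alpha2)
    denom = (norm.pdf(z2) * norm.cdf(z1) / np.sqrt(alpha1)
             + norm.pdf(z1) * norm.cdf(-z2) / np.sqrt(alpha2))
    k1 = norm.pdf(z2) / (sigma * np.sqrt(np.pi) * np.exp(b1) * denom)
    k2 = norm.pdf(z1) / (sigma * np.sqrt(np.pi) * np.exp(b2) * denom)
    return k1, k2

def psi(x, alpha1, alpha2, beta1, beta2, theta, sigma):
    """Piecewise stationary density built from k1 and k2."""
    k1, k2 = stationary_constants(alpha1, alpha2, beta1, beta2, theta, sigma)
    left = k1 * np.exp((-alpha1 * x ** 2 + 2 * beta1 * x) / sigma ** 2)
    right = k2 * np.exp((-alpha2 * x ** 2 + 2 * beta2 * x) / sigma ** 2)
    return np.where(x <= theta, left, right)

pars = dict(alpha1=0.1, alpha2=0.5, beta1=0.2, beta2=0.5, theta=0.3, sigma=1.0)
total, _ = quad(lambda x: float(psi(x, **pars)), -50, 50)
print(total)                                            # close to 1
print(psi(pars["theta"] - 1e-8, **pars),
      psi(pars["theta"] + 1e-8, **pars))                # nearly equal
\end{verbatim}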
Although the SDE \eqref{e:MTOU} has no explicit solution, we can derive the spectral expansion of its transition density, see the proof in \ref{A:spectral}. \begin{Pro}\label{Pro:spec} For $i=1,2$, set \begin{align*} z_i&=\frac{\sqrt{2\alpha_i}}{\sigma}\left(x-\frac{\beta_i}{\alpha_i}\right), \quad \nu_i=\frac{\lambda}{\alpha_i} \;,\\ \widetilde z_i&=\frac{\sqrt{2\alpha_i}}{\sigma}\left(\theta-\frac{\beta_i}{\alpha_i}\right), \quad \varrho=\frac{2\beta_1\theta+2\beta_2\theta-\alpha_1\theta^2-\alpha_2\theta^2 }{\sigma^2}. \end{align*} Let $D_v(z)$ and $H_v(z)$ denote the parabolic cylinder function and Hermite function respectively \citep[see][]{MR0240343,MR0174795}. Let $0\le\lambda_1<\lambda_2<\cdots<\lambda_n\to\infty$ as $n\to\infty$ be the simple discrete zeros of the Wronskian equation: \begin{equation}\label{e:w} \omega(\lambda)=\exp(\varrho)2^{1-\frac{\nu_1+\nu_2}{2}}\sigma^{-1}\left[\nu_2\sqrt{\alpha_2} H_{\nu_1}(-\frac{\widetilde z_1}{\sqrt2})H_{\nu_2-1} (\frac{\widetilde z_2}{\sqrt 2}) +\nu_1 \sqrt{\alpha_1} H_{\nu_2}(\frac{\widetilde z_2}{\sqrt2})H_{\nu_1-1} (-\frac{\widetilde z_1}{\sqrt 2})\right] =0\,. \end{equation} Denote \begin{equation}\label{e:varphi} \varphi_n(x)=\left\{ \begin{aligned} &\sqrt{\frac{\eta(\theta,\lambda_n)}{\omega^\prime(\lambda_n)\xi(\theta,\lambda_n)}}\xi(x,\lambda_n) \;, & x\le\theta,\\ &\sign(\xi(\theta,\lambda_n)\eta(\theta,\lambda_n))\sqrt{\frac{\xi(\theta,\lambda_n)}{\omega^\prime(\lambda)\eta(\theta,\lambda_n)}}\eta(x,\lambda_n) \;, &x>\theta, \end{aligned} \right. \end{equation} with \begin{equation*} \xi(x,\lambda)=\exp\left( z_1^2/4\right)D_{\nu_1}(-z_1), \quad \eta(x,\lambda)=\exp\left( z_2^2/4\right)D_{\nu_2}(z_2)\,. \end{equation*} Then, the spectral expansion of the transition density of $X$ (defined from $ \mathbb P(X_t\in A|X_0=x)=\int_A p_t(x,y)dy$ for any Borel set $A$ of $\mathbb{R} $) is given by \begin{equation*}\label{density} p_t(x,y)=m(y)\sum_{n=1}^{\infty}\exp(-\lambda_n t)\varphi_n(x)\varphi_n(y)\,. \end{equation*} \end{Pro} \setcounter{equation}{0}\Section{Estimate $\alpha_i$ and $\beta_i$} \label{sec:S} In this section we attempt to construct generalized moment estimators for the parameters $\alpha=(\alpha_1,\alpha_2)^T$ and $\beta=(\beta_1,\beta_2)^T$, where $T$ denotes the transpose of a vector, and to study their strong consistency and asymptotic normality. We classify our study into several cases according to the drift parameters. \subsection{Case I: Estimate $\alpha_i$ for known $\beta_i=0$ and $\theta=0$ }\label{sub:I} Here we consider the case $\beta_i=0$, $i=1,2$ and ${\theta}=0$. In this case the equation becomes \begin{equation}\label{e:0OU} dX_t =-\alpha_1 X_t I( X_t\le 0)dt -\alpha_2 X_t I( X_t> 0)dt+\sigma dW_t\,. \end{equation} Then the stationary density of $X$ is given by \begin{equation} \psi_1(x)=\frac{2 \sqrt{\alpha_1\alpha_2}}{\sqrt{\pi}(\sqrt{\alpha_1}+\sqrt{\alpha_2}) \sigma }\left[ \exp\left(-\frac{\alpha_1x^2}{\sigma^2}\right) I(x\le0)+ \exp\left(-\frac{\alpha_2x^2}{\sigma^2}\right) I(x>0) \right].\label{e.3.6} \end{equation} \begin{remark}\label{r.3.1} It is easily observed that $\psi_1(x)$ depends only on $\frac{{\alpha}_1}{{\sigma}^2}$ and $\frac{{\alpha}_2}{{\sigma}^2}$. \end{remark} From this identity we have \begin{Pro}\label{Prop:Xestimate} Let $X_\infty=\lim_{t\to\infty}X_t$ and define \begin{equation*}\label{e:LR} L_n=\mathbb E\left[(-X_\infty)^{n} I(X_\infty\le0)\right], \quad R_n=\mathbb E\left[X_\infty^{n} I(X_\infty>0)\right]. 
\end{equation*} Then for any real number $n>0$, \begin{align} L_n&= \frac{\sigma^{n}\sqrt{\alpha_1\alpha_2}}{\alpha_1^{(n+1)/2}\sqrt{\pi}(\sqrt{\alpha_1}+\sqrt{\alpha_2})}\Gamma\left(\frac{n+1}{2}\right)\;, \label{e.3.8} \\ R_n&=\frac{\sigma^{n}\sqrt{\alpha_1\alpha_2}}{\alpha_2^{(n+1)/2}\sqrt{\pi}(\sqrt{\alpha_1}+\sqrt{\alpha_2})}\Gamma\left(\frac{n+1}{2}\right)\;,\label{e.3.9} \end{align} where $\Gamma(\cdot)$ denotes the Gamma function $\Gamma(\alpha)=\int_0^\infty x^{\alpha-1}e^{-x}dx $. \end{Pro} From the above expressions \eqref{e.3.8}-\eqref{e.3.9} and by some elementary calculations, we can represent the parameters $\alpha_1$ and $ {\alpha}_2$ in terms of $L_n$ and $R_n$ as \begin{align}\label{e:Ealpha} \alpha_1&=\left\{\frac{ \sigma^{n}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}L_n\left[ \left( \frac{R_n}{L_n}\right)^{\frac{1}{n+1}}+1\right]} \right\}^{\frac{2}{n}},\\\label{e:Ebeta} \alpha_2&=\left\{\frac{\sigma^{n}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}R_n\left[ \left( \frac{L_n}{R_n}\right)^{\frac{1}{n+1}}+1\right]} \right\}^{\frac{2}{n}}. \end{align} Since $ L_n>0$ and $R_n>0$, ${\alpha}_1$ and ${\alpha}_2$ are well-defined by \eqref{e:Ealpha} and \eqref{e:Ebeta}. Setting \begin{equation*} \widehat L_{n,N}=\frac{1}{N}\sum_{k=1}^N(-X_{kh})^{n}I(X_{kh}\le0),\quad \widehat R_{n,N}=\frac{1}{N}\sum_{k=1}^N(X_{kh})^{n}I(X_{kh}>0), \end{equation*} we naturally construct the generalized moment estimators for ${\alpha}_1, {\alpha}_2$ as follows: \begin{align}\label{e:alpha} \widehat\alpha_{1,n,N}&=\left\{\frac{ \sigma^{n}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}\widehat L_{n,N}\left[ \left( \frac{\widehat R_{n,N}}{\widehat L_{n,N}}\right)^{\frac{1}{n+1}}+1\right]} \right\}^{\frac{2}{n}},\\\label{e:beta} \widehat\alpha_{2,n,N}&=\left\{\frac{\sigma^{n}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}\widehat R_{n,N}\left[ \left( \frac{\widehat L_{n,N}}{\widehat R_{n,N}}\right)^{\frac{1}{n+1}}+1\right]} \right\}^{\frac{2}{n}}. \end{align} We will show the strong consistency and asymptotic normality of the estimators $\widehat\alpha_{1,n,N}$ and $\widehat\alpha_{2,n,N}$ of $\alpha_{1}$ and $\alpha_{2}$ in the following theorems. \begin{rmk} Although the expectation of $(-X_\infty)^n I(X_\infty\le \theta)$ (or $X_\infty^n I(X_\infty>\theta)$) is not the $n$-th order moment in the conventional sense, it captures sufficient information about the parameters and the motivation of the estimation scheme in this paper stems from the generalized moment estimation. For this reason, we still use the term of ``generalized moment estimators". \end{rmk} \begin{thm}\label{thm:1} Fix any real number $n>0$ and fix any time step size $h>0$. Then $\widehat\alpha_{1,n,N}\to\alpha_1$ and $\widehat\alpha_{2,n,N}\to\alpha_2$ almost surely as $N\to\infty$, where $\widehat\alpha_{1,n,N}$, $\widehat\alpha_{2,n,N}$ are defined by \eqref{e:alpha} and \eqref{e:beta} respectively. \end{thm} \begin{pf} The straightforward applications of Lemma \ref{le:ergodic} to $f_1(x)=(-x)^nI(x\le 0)$ and $f_2(x)=x^nI(x>0)$ yield \begin{equation*} \lim_{N\to\infty} \widehat L_{n,N}= L_n> 0,\quad \lim_{N\to\infty}\widehat R_{n,N}=R_n>0,~a.s. \end{equation*} which imply the theorem by \eqref{e:Ealpha}-\eqref{e:beta}. \end{pf} Next, we study the central limit theorem (CLT) for the estimators. In comparison to Theorem 2 in \citet{hu2015parameter}, we shall discuss the joint asymptotic normality of the estimators. Before stating our theorem we need the following notations. 
Denote \begin{equation*} g_{1n}(x)=(-x)^nI(x\le0), \quad g_{2n} (x)=x^nI(x>0) \,. \end{equation*} Let $\widetilde X_0$ be a random variable with probability density function $\psi_1$ given by \eqref{e.3.6}, independent of the Brownian motion, and let $\widetilde X_t$ be the solution to \eqref{e:0OU} with initial condition $\widetilde X_0$. From \citet[Theorem 17.0.1]{meyn2012markov}, we get that the quantities \begin{equation}\label{e:sigma} \sigma_{ij}^n:={\rm Cov} ( g_{in} (\widetilde X_0),g_{jn}(\widetilde X_0))+\sum_{k=1}^\infty\left[ {\rm Cov} ( g_{in} (\widetilde X_0),g_{jn} ( \widetilde X_{kh}))+{\rm Cov} (g_{jn} (\widetilde X_0), g_{in} (\widetilde X_{kh})) \right]\,, \end{equation} $i,j=1,2$, are well defined; they are given by \eqref{e.B.2} with ${\theta}=0$. Let $G_{i,n}$, $i=1,2$, be defined on $\mathbb{R} ^2$ by \begin{equation*} G_{1,n}(x,y)=\left\{\frac{ \sigma^{n}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}x\left[ \left( \frac{y}{x}\right)^{\frac{1}{n+1}}+1\right]} \right\}^{\frac{2}{n}},\quad G_{2,n}(x,y)=\left\{\frac{\sigma^{n}\Gamma\left(\frac{n+1}{2}\right)}{\sqrt{\pi}y\left[ \left( \frac{x}{y}\right)^{\frac{1}{n+1}}+1\right]} \right\}^{\frac{2}{n}} \end{equation*} which are the functions corresponding to \eqref{e:alpha} and \eqref{e:beta}. Denote $G_n=(G_{1,n},G_{2,n}): \mathbb{R} ^2\to \mathbb{R} ^2$. Now we can state the main result of this subsection. \begin{thm}\label{asy:1} Fix an arbitrary $h>0$. Denote $\alpha=(\alpha_1,\alpha_2)^T$ and $\widehat\alpha_{n, N}=(\widehat\alpha_{1,n,N},\widehat\alpha_{2,n,N})^T$, where $\widehat\alpha_{1,n,N}$, $\widehat\alpha_{2,n,N}$ are defined by \eqref{e:alpha} and \eqref{e:beta} respectively. Then as $N\to\infty$, \begin{align*} \sqrt N\left(\widehat\alpha_{n, N} -\alpha \right)&\Rightarrow \mathbf N\left(0, \nabla G_n(L_n, R_n) \cdot\Sigma_n\cdot \nabla G_n(L_n,R_n)^T\right)\,, \end{align*} where the symbol ``$\Rightarrow$'' denotes convergence in distribution, $\mathbf N(\mu,\Sigma)$ stands for the normal random vector with mean $\mu$ and covariance matrix $\Sigma$, and $ \Sigma_n:=(\sigma_{ij}^n)_{1\le i,j\le 2}$ with $\sigma_{ij}^n$ being defined by \eqref{e:sigma} or equivalently by \eqref{e.B.2} with ${\theta}=0$\,. \end{thm} \begin{pf} The proof is carried out in two steps. First, we establish the bivariate CLT for $(\widehat L_{n,N}, \widehat R_{n,N})^T$, and then we employ the bivariate delta method. Recall that $\{X_{kh}\}$ is a positive Harris chain with invariant probability $\psi_1$ (Lemma \ref{le:ergodic}) and is $V$-uniformly ergodic with a function $V(x)=x^{2m}+1$ or $V(x)=e^{x^{2m}}+1$ \citep[see][Theorem 5.1]{stramer1996existence}. That is to say, there exist $R\in(0,\infty)$ and $\rho\in(0,1)$ such that for all $x\in \mathbb R$, \begin{equation*} ||P^n(x,\cdot)-\psi_1 ||_V\le RV(x)\rho^n, \end{equation*} where the $V$-norm of a signed measure $\nu$ is $||\nu||_V:=\sup_{g:|g|\le V}|\nu(g)|$ \citep[see][Page 334]{meyn2012markov}, and $P^n(x, B):=P_{nh}(x, B):=\mathbb P(X_{nh}\in B\,|\,X_0=x)$ is the $n$-step transition probability function of the sampled chain $\{X_{kh}\}_{k\ge0}$ from the initial point $x$ to the set $B$. Then from \citet[Theorem 17.0.1]{meyn2012markov}, for any $(a_1, a_2)\in \mathbb R^2$, letting $A(x)=a_1(-x)^nI(x\le0)+a_2x^nI(x>0)$, we know that \begin{equation*} \sqrt N (a_1 \widehat L_{n,N} +a_2 \widehat R_{n,N})=\frac{1}{\sqrt N} \sum_{k=1}^N A(X_{kh})=:\frac{1}{\sqrt N}S_n(A) \end{equation*} converges, after centering by its expectation, to some normal random variable $Z_{a_1, a_2}$ in distribution. 
By the Cram\'er-Wold device we know that $\sqrt N\left((\widehat L_{n,N} , \widehat R_{n,N})^T-(L_n,R_n)^T\right)$ converges to a (two-dimensional) normal vector. Moreover, in view of the multivariable Markov chain CLT \citep[Section 1.8.1]{brooks2011handbook}, we have \begin{equation*} \sqrt N((\widehat L_{n,N} , \widehat R_{n,N})^T-( L_{n} , R_{n})^T )\Rightarrow \mathbf N(0, \Sigma_n)\,, \end{equation*} where $\sigma_{ij}^n$ is defined by \eqref{e:sigma}. Let us recall the sufficient conditions of the multivariate delta method \citep[see][]{MR1652247}: all partial derivatives $\partial G_{j,n}(x,y)/\partial x$ and $\partial G_{j,n}(x,y)/\partial y$ exist for $(x,y)$ in a neighborhood of $(L_n, R_n)$ (notice that $L_n>0$ and $R_n>0$) and are continuous at $(L_n,R_n)$. It is clear that these conditions are satisfied. From the multivariate delta method, the desired result follows: \begin{equation*} \sqrt N \left( G_n(\widehat L_{n,N}, \widehat R_{n,N})-G_n(L_n,R_n)\right)\Rightarrow \mathbf{N}\left(0, \nabla G_n(L_n, R_n) \cdot\Sigma_n\cdot \nabla G_n(L_n,R_n)^T\right). \end{equation*} This completes the proof. \end{pf} \begin{rmk} The asymptotic covariance matrix is given by $ \nabla G_n(L_n, R_n) \cdot\Sigma_n\cdot \nabla G_n(L_n,R_n)^T$. Our numerical experiments show that, in terms of mean squared error (MSE), the estimators perform better when $\alpha_1$ and $\alpha_2$ are smaller; see Figure \ref{fig:MSE}, where we set $\sigma=1$, $h=0.5$, $N=100,000$. From Figure \ref{fig:MSE}, we also see that the best performance is obtained by choosing the moment order $n$ between $2$ and $4$. \end{rmk} \begin{figure} [t] \centering \subfigure[$\widehat\alpha_{1,n,N}$ against $n$ (${\alpha}_1=0.02$)]{\label{fig:MSEa} \includegraphics[width=0.4\textwidth]{MSE1.eps}} \subfigure[ $\widehat\alpha_{2,n,N}$ against $n$ (${\alpha}_1=0.05$)]{\label{fig:MSEb} \includegraphics[width=0.4\textwidth]{MSE2.eps}} \subfigure[$\widehat\alpha_{1,n,N}$ against $n$ (${\alpha}_1=0.1$)]{\label{fig:MSEc} \includegraphics[width=0.4\textwidth]{MSE3.eps}} \subfigure[ $\widehat\alpha_{2,n,N}$ against $n$ (${\alpha}_1=0.5$)]{\label{fig:MSEd} \includegraphics[width=0.4\textwidth]{MSE4.eps}} \caption{MSE of $\widehat\alpha_{1,n,N}$ and $\widehat\alpha_{2,n,N}$. } \label{fig:MSE} \end{figure} \subsection{Case II: Estimate $\alpha_i$ for known $\beta_i=0$ and $\theta\neq0$} \label{sub:II} Now we consider the case $\theta\neq0$, $\beta_i=0$, $i=1,2$. Recall the explicit expression for the stationary density obtained in Section \ref{sec:pre}: \begin{equation*} \psi_2(x)=k_1\exp\left(-\frac{\alpha_1x^2}{\sigma^2}\right)I(x\le \theta)+k_2\exp\left(-\frac{\alpha_2x^2}{\sigma^2}\right)I(x> \theta), \end{equation*} where $k_1$ and $k_2$ are determined by $\psi_2(\theta-)=\psi_2(\theta+)$ and $\int_{-\infty}^\infty\psi_2(x)dx=1$. The constants $k_1$ and $k_2$ are complicated functions of the unknown parameters ${\alpha}_1$ and ${\alpha}_2$. We shall use the technique of conditional moments to get rid of them. Since, conditioned to stay in the interval $(-\infty,\theta]$ or in the interval $(\theta,\infty)$, the stationary distribution of $X$ is a truncated Gaussian distribution, we shall focus on the conditional moments of $X_\infty$. 
Some elementary calculations give \begin{equation*} \left\{ \begin{aligned} &\mathbb{E} [X_\infty|X_\infty\le\theta]=\frac{\mathbb{E} [X_\infty I(X_\infty\le\theta)]}{\mathbb{E} [I(X_\infty\le\theta)]}=-\frac{\sigma}{\sqrt{2\alpha_1}}\frac{\phi( -\sqrt{2\alpha_1}\theta/\sigma)}{1-\Phi(-\sqrt{2\alpha_1}\theta/\sigma)},\\ &\mathbb{E} [X_\infty|X_\infty>\theta]=\frac{\mathbb{E} [X_\infty I(X_\infty>\theta)]}{\mathbb{E} [I(X_\infty>\theta)]}=\frac{\sigma}{\sqrt{2\alpha_2}}\frac{\phi( \sqrt{2\alpha_2}\theta/\sigma)}{1-\Phi(\sqrt{2\alpha_2}\theta/\sigma)}\,. \end{aligned} \right.\label{e.3.12} \end{equation*} For simplicity of notations, we set \begin{equation}\label{LR} \widehat L_{n,N}^\theta=\frac{1}{N}\sum_{k=1}^NX_{kh}^{n}I(X_{kh}\le\theta),\quad \widehat R_{n,N}^\theta=\frac{1}{N}\sum_{k=1}^NX_{kh}^{n}I(X_{kh}>\theta), \end{equation} \begin{equation} L_{n}^\theta=\mathbb{E} [X_\infty^n I(X_\infty\le\theta)],\quad R_{n}^\theta=\mathbb{E} [X_\infty^n I(X_\infty>\theta)].\label{e.3.14} \end{equation} Motivated from the approximations $\widehat L_{n,N}^\theta\approx L_n^\theta$ and $\widehat R_{n,N}^\theta\approx R_n^\theta$, we use the following equations to construct our estimators for the parameters ${\alpha}_1, {\alpha}_2$: \begin{equation} \left\{ \begin{aligned} &\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} =-\frac{\sigma}{\sqrt{2\alpha_1}}\frac{\phi( -\sqrt{2\alpha_1}\theta/\sigma)}{1-\Phi(-\sqrt{2\alpha_1}\theta/\sigma)},\\ &\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta} =\frac{\sigma}{\sqrt{2\alpha_2}}\frac{\phi( \sqrt{2\alpha_2}\theta/\sigma)}{1-\Phi(\sqrt{2\alpha_2}\theta/\sigma)}. \end{aligned} \right.\label{e.3.15} \end{equation} Let \begin{equation} x=\frac{\sqrt{2\alpha_1}\theta}{\sigma}\,,\qquad y=\frac{\sqrt{2\alpha_2}\theta}{\sigma}\,, \label{e.3.16} \end{equation} and \begin{equation*} A(x)=\frac{\phi(-x)}{1-\Phi(-x)}, \quad B(y)=\frac{\phi(y)}{1-\Phi(y)}. \end{equation*} Equivalently, the system of equations \eqref{e.3.15} becomes \begin{equation}\label{e:L/L} \left\{ \begin{aligned} &\frac{\widehat L_{1,N}^\theta}{\theta \widehat L_{0,N}^\theta} =-\frac{ A(x)}{x}=:K_1(x),\\ &\frac{\widehat R_{1,N}^\theta}{\theta \widehat R_{0,N}^\theta}=\frac{ B(y)}{y}=:K_2(y). \end{aligned} \right. \end{equation} These are two uncoupled equations, so we can solve them separately. To see if there is a unique solution to each of the above equations or not, we use the simple mean value theorem: if a differentiable function $f$ has nonzero derivatives on an interval $I$, then it is injective. Using the fact that $A'(x)=-xA(x)-A^2(x)$ and $B'(y)=-yB(y)+B^2(y)$, we can compute the derivatives of $K_1$ and $K_2$ as follows: \begin{equation*} \left\{ \begin{aligned} &\frac{dK_1}{d x}= A(x)\left(\frac{1}{x^2}+1+\frac{A(x)}{x}\right),\\ &\frac{dK_2}{d y}=- B(y)\left (\frac{1}{y^2}+1-\frac{B(y)}{y}\right). \end{aligned} \right. \end{equation*} To investigate the monotonicity of $K_i$, $i=1,2$, it is equivalent to show the positivity or negativity of $F_1(x)=\frac{1}{x^2}+1+\frac{A}{x}$ and $F_2(y):=\frac{1}{y^2}+1-\frac{B}{y}$. Since $F_1(-y)=F_2(y)$, to show each of the equation in \eqref{e:L/L} has a unique solution in $\mathbb{R} $, we only need to show $F_1(x)>0$ for all $x\neq0$. Denote $\widetilde F(x):=1-\Phi(-x)+x^2(1-\Phi(-x))+x\phi(-x)$. Then $F_1(x)=\widetilde F(x)/[x^2(1-\Phi(-x))]$. Note that \begin{equation*} \widetilde F^\prime(x)=2\phi(x)+2x\Phi(x),\quad \widetilde F^{\prime\prime}(x)=2\Phi(x)>0. 
\end{equation*} Since $\lim_{x\to-\infty}\widetilde F^\prime(x)=0$, we see $\widetilde F^\prime(x)>0$. Now we can conclude that $\widetilde F(x)>0$ from $\lim_{x\to-\infty}\widetilde F(x)=0$. Therefore, there exists a continuous inverse function $H=(H_1, H_2)$ of $(K_1, K_2)$ such that \begin{equation*} \widehat x_N:=H_1\left(\frac{\widehat L_{1,N}^\theta}{ {\theta}\widehat L_{0,N}^\theta} \right), \quad \widehat y_N:=H_2\left(\frac{\widehat R_{1,N}^\theta}{{\theta} \widehat R_{0,N}^\theta} \right). \end{equation*} From the ergodic theorem we know that $\widehat L_{n,N}^\theta$ and $\widehat R_{n,N}^\theta$ converge almost surely to $ L_{n }^\theta$ and $ R_{n }^\theta$ defined by \eqref{e.3.14}. Thus, the estimators $\widehat x_N$ and $\widehat y_N $ converge almost surely to the parameters \begin{equation}\label{e:xy} x=H_1(K_1(x))=\frac{\sqrt{2\alpha_1}\theta}{\sigma}, \quad y=H_2(K_2(y))=\frac{\sqrt{2\alpha_2}\theta}{\sigma} \end{equation} respectively, as $N\to\infty$. Now the relationship \eqref{e.3.16} between $(x,y)$ and $({\alpha}_1,{\alpha}_2)$ yields the following theorem. \begin{thm}\label{thm:3.3} For any sample size $N$ the system of equations \eqref{e:L/L} has a unique solution $(\widehat x_N,\widehat y_N)$. The generalized moment estimators defined by \begin{equation*} \widehat \alpha_{1,N}=\frac{1}{2}\left(\frac{\sigma\widehat x_N}{\theta} \right)^2, \quad \widehat \alpha_{2,N}=\frac{1}{2}\left(\frac{\sigma\widehat y_N}{\theta} \right)^2 \end{equation*} are strongly consistent, namely, $(\widehat \alpha_{1,N} , \widehat \alpha_{2,N})$ converges to $(\alpha_1,\alpha_2)$ almost surely. \end{thm} Compared with the case I, the estimators only have implicit expressions in terms of the inverse functions $H_1$ and $H_2$. Nevertheless, it is clear that $H_1$ and $H_2$ are continuously differentiable. Hence, we can exhibit the following CLT for the estimators $\widehat\alpha_{i,N}$, $i=1,2$. \begin{thm}\label{t.3.7} As $N\to\infty$, \begin{equation*} \sqrt N \left((\widehat \alpha_{1,N},\widehat \alpha_{2,N})^T-(\alpha_1,\alpha_2)^T\right)\Rightarrow \mathbf N(0,\widehat\Sigma), \end{equation*} where $\widehat\Sigma$ is given by \eqref{e.3.22} below. \end{thm} \begin{pf} The proof is similar to that of Theorem \ref{asy:1}, so we only provide a sketch of the proof. Set \begin{align*} &F_1(x)=I(x\le\theta), \quad F_2(x)=xI(x\le\theta) \,, \\ &F_3(x)=I(x>\theta), \quad F_4(x)=xI(x>\theta) \,. \end{align*} From \citet[Theorem 17.0.1]{meyn2012markov}, we get that for $i,j=1,2,3,4,$ \begin{equation*} \widetilde\sigma_{ij}:={\rm Cov} (F_{i}(\widetilde X_0),F_{j}(\widetilde X_0))+\sum_{k=1}^\infty\left[ {\rm Cov} (F_{i}(\widetilde X_0),F_{j}( \widetilde X_{kh}))+{\rm Cov} (F_{j}(\widetilde X_0),F_{i}(\widetilde X_{kh})) \right], \end{equation*} are well defined and non-negative. They can be computed by using \eqref{e.B.2} as follows: \begin{equation*} \tilde {\sigma}_{ij}=\sigma(F_i, F_j)\,, \quad i,j=1, 2, 3, 4\,. \end{equation*} Denote $\tilde\Sigma_2:=(\tilde\sigma _{ij})_{1\le i,j\le 4}$, then we have \begin{equation*} \sqrt N\left(( \widehat L_{0,N}^\theta, \widehat L_{1,N}^\theta, \widehat R_{0,N}^\theta, \widehat R_{1,N}^\theta)^T-( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta) ^T\right)\Rightarrow \mathbf N(0,\tilde\Sigma_2). 
\end{equation*} Define two functions by $h_1(x_1,x_2):= H_1(\frac{x_2}{\theta x_1} )$ and $h_2(x_3,x_4):=H_2(\frac{x_4}{\theta x_3})$ and set two maps \begin{align*} &h:(x_1,x_2,x_3,x_4)\mapsto (h_1(x_1,x_2),h_2(x_3,x_4)) \;, \\ & l:(x_1,x_2) \mapsto \left(\frac{\sigma^2 {x_1}^2}{2\theta^2}, \frac{\sigma^2 {x_2}^2}{2\theta^2}\right)\; . \end{align*} By the multivariate delta method, we have \begin{equation*} \sqrt N\left(h(\widehat L_{0,N}^\theta, \widehat L_{1,N}^\theta, \widehat R_{0,N}^\theta, \widehat R_{1,N}^\theta)^T-h( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta)^T\right) \Rightarrow \mathbf N(0, \bar\Sigma ), \end{equation*} where $\bar\Sigma=\nabla h( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta)\tilde\Sigma_2 \nabla h( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta)^T $. Applying the multivariate delta method again, we get the desired CLT result \begin{equation*} \sqrt N\left(l(h(\widehat L_{0,N}^\theta, \widehat L_{1,N}^\theta, \widehat R_{0,N}^\theta, \widehat R_{1,N}^\theta))^T-l(h( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta))^T\right) \Rightarrow \mathbf N(0, \widehat\Sigma ), \end{equation*} where \begin{equation} \widehat\Sigma:=\nabla l(h( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta) )\ \bar\Sigma \ \nabla l(h( L_{0}^\theta, L_{1}^\theta, R_{0}^\theta, R_{1}^\theta) )^T\,. \label{e.3.22} \end{equation} The proof is then completed. \end{pf} \subsection{Case III: Estimate $\beta_i$ and $\alpha_i$ for known $\theta\neq0$}\label{sub:III} In this subsection, we extend our approach to multiple-parameter case, where $\theta\neq 0$. The stationary density is given by \begin{equation} \psi_3(x)=k_1\exp\left(\frac{-\alpha_1x^2+2\beta_1x}{\sigma^2}\right)I(x\le \theta)+k_2\exp\left(\frac{-\alpha_2x^2+2\beta_2x}{\sigma^2}\right)I(x> \theta),\label{e.general_psi} \end{equation} where $k_1$ and $k_2$ are defined by \eqref{e:k1} and \eqref{e:k2}. 
We can obtain the following stationary moments \begin{equation} \left\{ \begin{aligned} &\frac{\mathbb{E} [X_\infty I(X_\infty\le\theta)]}{\mathbb{E} [I(X_\infty\le\theta)]}=\frac{-\frac{\sigma}{\sqrt{2\alpha_1}}\phi\left(\frac{\sqrt{2\alpha_1}\theta}{\sigma}-\frac{2\beta_1}{\sqrt{2\alpha_1}\sigma}\right)}{\Phi\left(\frac{\sqrt{2\alpha_1}\theta}{\sigma}-\frac{2\beta_1}{\sqrt{2\alpha_1}\sigma}\right)}+\frac{\beta_1}{\alpha_1},\\ &\frac{\mathbb{E} [ X_\infty ^2 I(X_\infty\le\theta)]}{\mathbb{E} [I(X_\infty\le\theta)]} =\frac{\sigma^2}{2\alpha_1}+\left(\frac{\beta_1}{\alpha_1}\right)^2 +\frac{\phi\left(\frac{\sqrt{2\alpha_1}\theta}{\sigma}-\frac{2\beta_1}{\sqrt{2\alpha_1}\sigma}\right)}{ \Phi\left(\frac{\sqrt{2\alpha_1}\theta}{\sigma}-\frac{2\beta_1}{\sqrt{2\alpha_1}\sigma}\right)} \left( -\theta-\frac{\beta_1}{\alpha_1}\right) \frac{\sigma}{\sqrt{2\alpha_1}}\,, \\ &\frac{\mathbb{E} [X_\infty I(X_\infty>\theta)]}{\mathbb{E} [I(X_\infty>\theta)]}=\frac{\frac{\sigma}{\sqrt{2\alpha_2}}\phi\left(\frac{\sqrt{2\alpha_2}\theta}{\sigma}-\frac{2\beta_2}{\sqrt{2\alpha_2}\sigma}\right)}{1-\Phi\left(\frac{\sqrt{2\alpha_2}\theta}{\sigma}-\frac{2\beta_2}{\sqrt{2\alpha_2}\sigma}\right)}+\frac{\beta_2}{\alpha_2},\\ &\frac{\mathbb{E} [ X_\infty ^2 I(X_\infty>\theta)]}{\mathbb{E} [I(X_\infty>\theta)]}=\frac{\sigma^2}{2\alpha_2}+\left(\frac{\beta_2}{\alpha_2}\right)^2 +\frac{\phi\left(\frac{\sqrt{2\alpha_2}\theta}{\sigma}-\frac{2\beta_2}{\sqrt{2\alpha_2}\sigma}\right)}{1- \Phi\left(\frac{\sqrt{2\alpha_2}\theta}{\sigma}-\frac{2\beta_2}{\sqrt{2\alpha_2}\sigma}\right)} \left( \theta+\frac{\beta_2}{\alpha_2}\right) \frac{\sigma}{\sqrt{2\alpha_2}}\,. \end{aligned} \right. \label{e.3.20} \end{equation} Denote the right-hand sides of the above identities by $\bar K_i$, $i=1,2,3,4$. Let \begin{align}\label{u,v} v&=\frac{\beta_1}{\alpha_1}, \quad u=\frac{\sqrt{2\alpha_1}\theta}{\sigma}-\frac{2\beta_1}{\sqrt{2\alpha_1}\sigma}=\frac{\sqrt{2\alpha_1}(\theta-v)}{\sigma},\quad A(u)=\frac{\phi(-u)}{1-\Phi(-u)}, \\\label{w,z} z&=\frac{\beta_2}{\alpha_2}, \quad \omega=\frac{\sqrt{2\alpha_2}\theta}{\sigma}-\frac{2\beta_2}{\sqrt{2\alpha_2}\sigma}=\frac{\sqrt{2\alpha_2}(\theta-z)}{\sigma}, \quad B({\omega})=\frac{\phi(\omega)}{1-\Phi(\omega)}\,. \end{align} Then we can rewrite $\bar K_i$ as \begin{equation} \left\{ \begin{aligned} &\bar K_1(u,v) =\frac{v-\theta}{u}A(u)+v,\\ &\bar K_2(u,v)=\left( \frac{\theta-v}{u}\right)^2+v^2-A(u) \frac{\theta^2-v^2}{u}\,, \\ &\bar K_3(\omega,z)=\frac{\theta-z}{\omega}B({\omega})+z,\\ &\bar K_4(\omega,z)=\left( \frac{\theta-z}{\omega}\right)^2+z^2+B({\omega})\frac{\theta^2-z^2}{\omega}\,. \end{aligned} \right. 
\label{e.3.23} \end{equation} Similar to the previous cases, we approximate the left-hand sides of \eqref{e.3.20} by the following statistics for $i=1,2$: \begin{align*} \widehat L_{i,N}^\theta/\widehat L_{0,N}^\theta&\approx \mathbb{E}[(X_\infty)^i I(X_\infty\le\theta)]/ \mathbb{E}[ I(X_\infty\le\theta)],\\ \widehat R_{i,N}^\theta/\widehat R_{0,N}^\theta&\approx \mathbb{E}[(X_\infty)^i I(X_\infty>\theta)]/ \mathbb{E}[ I(X_\infty>\theta)]. \end{align*} Motivated by \eqref{e.3.20} and \eqref{e.3.23}, we first propose the following estimators $\widehat v_N$, $\widehat u_N$, $\widehat z_N$, and $\widehat \omega_N$ to estimate $v, u, z, {\omega}$ by solving the following system \begin{equation}\label{sys:III} \left\{ \begin{aligned} &\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} =\frac{v-\theta}{u}A(u)+v,\\ &\frac{\widehat L_{2,N}^\theta}{\widehat L_{0,N}^\theta} =\left( \frac{\theta-v}{u}\right)^2+v^2-A(u)\frac{\theta^2-v^2}{u}\,, \\ &\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta} =\frac{\theta-z}{\omega}B({\omega})+z,\\ &\frac{\widehat R_{2,N}^\theta}{\widehat R_{0,N}^\theta} =\left( \frac{\theta-z}{\omega}\right)^2+z^2+B({\omega})\frac{\theta^2-z^2}{\omega}\,. \end{aligned} \right. \end{equation} Next we need to solve this system of four equations. First, we observe that this system of four equations decouples into two systems, each consisting of two equations. Let us first study the first pair of equations in \eqref{sys:III}: \begin{equation}\label{sys:III_a} \left\{ \begin{aligned} &\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} =\frac{v-\theta}{u}A(u)+v=:\bar K_1(u,v)\,, \\ &\frac{\widehat L_{2,N}^\theta}{\widehat L_{0,N}^\theta} =\left( \frac{\theta-v}{u}\right)^2+v^2-A(u)\frac{\theta^2-v^2}{u}=:\bar K_2(u,v)\,. \end{aligned} \right. \end{equation} The partial derivatives of $\bar K_1, \bar K_2$ are given by \begin{equation*} \left\{ \begin{aligned} &\frac{\partial \bar K_1}{\partial u} = -\frac{v-\theta}{u^2}A(u)-(v-\theta)A(u)-A^2(u)\frac{v-\theta}{u} \,, \\ &\frac{\partial \bar K_1}{\partial v}= \frac{A(u)}{u}+1\,,\\ &\frac{\partial \bar K_2}{\partial u}=-\frac{2(\theta-v)^2}{u^3}- (-uA(u)-A^2(u))\frac{\theta^2-v^2}{u}+A(u) \frac{\theta^2-v^2}{u^2}\,,\\ &\frac{\partial \bar K_2}{\partial v}=-\frac{2(\theta-v)}{u^2}+2v+\frac{2A(u)v}{u}\,. \end{aligned} \right. \end{equation*} The Jacobian matrix $J_1$ of $(\bar K_1, \bar K_2)$ is given by \begin{equation*} J_1= \begin{pmatrix} \frac{\partial \bar K_1}{\partial u}&\frac{\partial \bar K_1}{\partial v}\\ \frac{\partial \bar K_2}{\partial u}&\frac{\partial \bar K_2}{\partial v}\\ \end{pmatrix}. \end{equation*} The determinant of $J_1$ is \begin{align*} \det(J_1)=-\frac{(v-\theta)^2}{u^3}(A(u)u^3+3A(u)u+A^3(u)u+2A^2(u) u^2+3A^2(u)-2) \;. \end{align*} Let $D_1(u)=A(u)u^3+3A(u)u+A^3(u)u+2A^2(u)u^2+3A^2(u)-2$. To show that $\det(J_1)\neq0$ for all $u\neq0$ and $v\neq\theta$ it suffices to show that $D_1(u)<0$. From Figure \ref{fig:D1}, we can see that $D_1(u)<0$ for all $u\in[-10,5]$. Let \begin{eqnarray*} \mathbb{D}_1&=&\left\{(u,v)\in \mathbb{R} ^2\,; v\not =\theta\,\right. \ {\rm and} \nonumber\\ &&\left. D_1(u)=A(u)u^3+3A(u)u+A^3(u)u+2A^2(u)u^2+3A^2(u)-2 \not =0 \right\}\,. \end{eqnarray*} Figure \ref{fig:D1} implies that $ \left\{ (u, v):\, u\in (-10, 5)\,, v\not =\theta\right\}\subseteq \mathbb{D}_1$. If necessary, one can enlarge the interval $(-10, 5)$. 
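The sign claim for $D_1$ rests on the plot in Figure \ref{fig:D1}; it can be double-checked numerically on any bounded interval with a few lines of Python (our own illustration, using SciPy for $\phi$ and $\Phi$). An analogous check applies to the function $D_2$ introduced below.
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def A(u):                    # A(u) = phi(-u) / (1 - Phi(-u)) = phi(u) / Phi(u)
    return norm.pdf(u) / norm.cdf(u)

def D1(u):
    a = A(u)
    return a * u**3 + 3 * a * u + a**3 * u + 2 * a**2 * u**2 + 3 * a**2 - 2

u = np.linspace(-10, 5, 100_001)
u = u[np.abs(u) > 1e-6]      # exclude u = 0
print(D1(u).max())           # negative everywhere on the grid
\end{verbatim}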
If $(u_0,v_0)\in \mathbb{D}_1$ is from the true parameters $({\alpha}_1, {\beta}_1)$, then by the ergodic Lemma \ref{le:ergodic} we know that when $N$ goes to infinity \eqref{sys:III_a} will become true identities with the $(u,v)$ on the right-hand side replaced by $(u_0, v_0)$. Thus, when $N$ is sufficiently large $\displaystyle \left(\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta}\,, \frac{\widehat L_{2,N}^\theta}{\widehat L_{0,N}^\theta} \right)$ will be in any given neighbourhood of $(\bar K_1(u_0,v_0), \bar K_2(u_0,v_0))$. On the other hand, it is obvious that $\mathbb{D}_1$ is an open set in $\mathbb{R} ^2$ and $\bar K_1, \bar K_2$ are continuous functions of $(u,v)$. Since $\det(J_1)\not=0$ on $\mathbb{D}_1$, by the inverse function theorem there is one unique solution pair $(u,v)\in \mathbb{D}_1$ in some neighbourhood of $(u_0, v_0)$ such that the system of equations \eqref{sys:III_a} are satisfied. This gives the existence and local uniqueness of the solution to the system of equations \eqref{sys:III_a}. Now we consider the second pair of equations in \eqref{sys:III}. \begin{equation}\label{sys:III_b} \left\{ \begin{aligned} &\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta} =\frac{\theta-z}{\omega}B({\omega})+z=:\bar K_3( {\omega}, z)\,,\\ &\frac{\widehat R_{2,N}^\theta}{\widehat R_{0,N}^\theta} =\left( \frac{\theta-z}{\omega}\right)^2+z^2+B({\omega})\frac{\theta^2-z^2}{\omega}=:\bar K_4( {\omega}, z) \,. \end{aligned} \right. \end{equation} The partial derivatives of $\bar K_3({\omega}, z)$, $\bar K_4({\omega}, z)$ are \begin{equation*} \left\{ \begin{aligned} &\frac{\partial \bar K_3}{\partial \omega} = -\frac{\theta-z}{\omega^2}B({\omega})-(\theta-z)B({\omega})+B^2({\omega})\frac{\theta-z}{\omega}\\ &\frac{\partial \bar K_3}{\partial z}=-\frac{B({\omega})}{\omega}+1,\\ &\frac{\partial \bar K_4}{\partial \omega}=-\frac{2(\theta-z)^2}{\omega^3}+(-\omega B({\omega})+B^2({\omega}))\frac{\theta^2-z^2}{\omega}-B({\omega}) \frac{\theta^2-z^2}{\omega^2},\\ &\frac{\partial \bar K_4}{\partial z}=-\frac{2(\theta-z)}{\omega^2}+2z-\frac{2B({\omega})z}{\omega}\\ \end{aligned} \right. \end{equation*} The determinant of the Jacobian matrix $J_2$ of $(\bar K_3, \bar K_4)$ is \begin{align*} \det(J_2)=\frac{(\theta-z)^2}{\omega^3}(B({\omega})\omega^3+3B({\omega})\omega+B^3({\omega})\omega-2B^2({\omega})\omega^2-3B^2({\omega})-2)\;. \end{align*} Let $D_2(\omega)=B({\omega})\omega^3+3B({\omega})\omega+B^3({\omega})\omega-2B^2({\omega})\omega^2-3B^2({\omega})-2$. From the Figure \ref{Fig.D2}, we see that $D_2(\omega)<0$ for all ${\omega}\in [-5, 5]$. Denote \begin{eqnarray*} \mathbb{D}_2&=&\left\{(z, {\omega} )\in \mathbb{R} ^2\,; z\not =\theta\,\right. \ {\rm and} \nonumber\\ &&\left. D_2(\omega)=B({\omega})\omega^3+3B({\omega})\omega+B^3({\omega})\omega-2B^2({\omega})\omega^2-3B^2({\omega})-2 \not =0 \right\}\,. \end{eqnarray*} Analogous to the argument for the system of equations \eqref{sys:III_a} we can prove the existence and local uniqueness of the solution to the system of equations \eqref{sys:III_b}. \begin{figure} \centering \subfigure[$D_1(u)$]{\label{fig:a1} \includegraphics[width=0.6\textwidth]{D1.eps}} \subfigure[ $D_1(u)$]{\label{fig:b1} \includegraphics[width=0.6\textwidth]{D11.eps}} \caption{The plot of $D_1(u)$. 
} \label{fig:D1} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{D2.eps} \caption{The plot of $D_2({\omega})$.} \label{Fig.D2} \end{figure} Once we have the existence and local uniqueness of the solutions to the systems of equations \eqref{sys:III_a} and \eqref{sys:III_b}, we can use the substitutions \eqref{u,v} and \eqref{w,z} to obtain the generalized moment estimators $\widehat\alpha_{i,N}$ and $\widehat\beta_{i,N}$ for ${\alpha}_i$ and ${\beta}_i$, $i=1, 2$. We summarize the above as the following theorem. \begin{thm}\label{t.3.8} Let $({\alpha}_1, {\beta}_1, {\alpha}_2, {\beta}_2)$ be the true parameters such that $(u,v)$ and $(z, {\omega})$ defined by \eqref{u,v} and \eqref{w,z} are in $\mathbb{D}_1$ and $\mathbb{D}_2$, respectively. Then, when $N$ is sufficiently large the systems of equations \eqref{sys:III_a} and \eqref{sys:III_b} have solutions $(\widehat u_N, \widehat v_N)$ and $( \widehat {\omega}_N,\widehat z_N)$, respectively. The solutions are unique in a neighbourhood of $(u,v)$ and a neighbourhood of $({\omega}, z)$. If we define \begin{empheq}[left=\empheqlbrace]{align} \widehat \alpha_{1,N}&=\frac{(\widehat u_N)^2\sigma^2}{2(\theta-\widehat v_N)^2}, \quad \widehat \alpha_{2,N}=\frac{(\widehat \omega_N)^2\sigma^2}{2(\theta-\widehat z_N)^2},\label{e.3.33}\\ \widehat \beta_{1,N}&=\widehat v_N \widehat\alpha_{1,N}, \quad \widehat \beta_{2,N}=\widehat z_N \widehat \alpha_{2,N}\,, \label{e.3.34} \end{empheq} then, as $N\rightarrow \infty$, we have \begin{equation*} (\widehat \alpha_{1,N}\,, \widehat \alpha_{2,N}\,, \widehat \beta_{1,N}\,, \widehat \beta_{2,N} )\rightarrow ({\alpha}_1, {\alpha}_2, {\beta}_1, {\beta}_2) \quad \hbox{almost surely}\,. \end{equation*} \end{thm} \begin{rmk}\label{rmk:3.10} If $u=0$ and $\omega=0$, i.e., $\frac{\beta_i}{\alpha_i}=v=z=\theta$ for $i=1,2$, then we can estimate $(\alpha_1,\alpha_2)$ by solving \begin{equation*} \left\{ \begin{aligned} &\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} = -\frac{\sigma}{\sqrt{\pi\alpha_1}}+\theta,\\ &\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta} = \frac{\sigma}{\sqrt{\pi\alpha_2}}+\theta,\\ \end{aligned} \right. \end{equation*} and then set $\widehat\beta_i=\theta\widehat\alpha_i$, $i=1,2$. \end{rmk} We also have the CLT for the above estimators. Before stating the theorem, let us describe the asymptotic variances. Let \begin{empheq}[left=\empheqlbrace]{align*} &G_1(x)=I(x\le\theta), \quad G_2(x)=xI(x\le\theta) , \quad G_3(x)=x^2I(x\le\theta)\,, \\ &G_4(x)=I(x>\theta), \quad G_5(x)=xI(x>\theta) , \quad G_6(x)=x^2I(x>\theta) \,. \end{empheq} Denote \begin{equation*} \tilde\Sigma_3=\left({\sigma}_{ij}\right)_{1\le i,j\le 6}\,, \quad \hbox{where}\quad {\sigma}_{ij}=\sigma(G_i, G_j)\,, 1\le i,j\le 6 \end{equation*} with $\sigma(G_i, G_j)$ being defined by \eqref{e.B.2}. Then, as before, we have \begin{equation*} \sqrt N\left( ( \widehat L_{0,N}^\theta, \widehat L_{1,N}^\theta,\widehat L_{2,N}^\theta, \widehat R_{0,N}^\theta, \widehat R_{1,N}^\theta, \widehat R_{2,N}^\theta)^T-( L_{0}^\theta, L_{1}^\theta,L_{2}^\theta, R_{0}^\theta, R_{1}^\theta, R_{2}^\theta) ^T\right)\Rightarrow \mathbf N(0, \tilde\Sigma_3). \end{equation*} Let $(u,v) =(\kappa_1 (x_1, x_2), \kappa_2(x_1, x_2))$ be the inverse mapping of $(\bar K_1(u,v), \bar K_2(u,v))$ defined by \eqref{sys:III_a} and let $({\omega},z ) =(\kappa_3 (x_3, x_4), \kappa_4(x_3, x_4))$ be the inverse mapping of $(\bar K_3({\omega},z), \bar K_4({\omega},z))$ defined by \eqref{sys:III_b}. 
Comparing with \eqref{e.3.33}-\eqref{e.3.34} and denoting $x=(x_1, x_2, x_3, x_4, x_5, x_6)$, we introduce \begin{empheq}[left=\empheqlbrace]{align} &\rho_1(x):= \frac{(\kappa_1 (\frac{x_2}{x_1}, \frac{x_3}{x_1}))^2\sigma^2}{2(\theta-\kappa_2 (\frac{x_2}{x_1}, \frac{x_3}{x_1}))^2}\,;\nonumber\\ &\rho_2(x):=\frac{(\kappa_3(\frac{x_5}{x_4}, \frac{x_6}{x_4}))^2\sigma^2}{2(\theta-\kappa_4(\frac{x_5}{x_4}, \frac{x_6}{x_4}))^2}\,;\nonumber\\ &\rho_3(x):=\kappa_2(\frac{x_2}{x_1}, \frac{x_3}{x_1}) \rho_1(x)\,;\nonumber\\ &\rho_4(x):=\kappa_4(\frac{x_5}{x_4}, \frac{x_6}{x_4}) \rho_2(x) \,. \nonumber \end{empheq} Define a map $\rho:\mathbb{R} ^6\ni x\mapsto(\rho_1(x),\rho_2(x),\rho_3(x),\rho_4(x))\in \mathbb{R} ^4$. Now we establish the following asymptotic normality theorem. \begin{thm}\label{t.3.10} As $N\to \infty$, we have the following asymptotic normality: \begin{equation*} \sqrt N\left( (\widehat \alpha_{1,N}\,, \widehat \alpha_{2,N}\,, \widehat \beta_{1,N}\,, \widehat \beta_{2,N} )^T - ({\alpha}_1, {\alpha}_2, {\beta}_1, {\beta}_2)^T\right)\Rightarrow \mathbf N(0, \bar \Sigma_3 )\,, \end{equation*} where \begin{equation*} \bar \Sigma_3 =\nabla \rho ( L_{0}^\theta, L_{1}^\theta,L_{2}^\theta, R_{0}^\theta, R_{1}^\theta, R_{2}^\theta)\ \tilde\Sigma_3 \nabla \rho ( L_{0}^\theta, L_{1}^\theta,L_{2}^\theta, R_{0}^\theta, R_{1}^\theta, R_{2}^\theta)^T\,. \end{equation*} \end{thm} Theorem \ref{t.3.8} gives domains $\mathbb{D}_1$ and $\mathbb{D}_2$ on which we can find the generalized moment estimators $\widehat \alpha_{1,N}\,, \widehat \alpha_{2,N}$\,, $ \widehat \beta_{1,N}\,, \widehat \beta_{2,N}$ of $ {\alpha}_1, {\alpha}_2, {\beta}_1, {\beta}_2 $. On the one hand, although the functions $D_1$ and $D_2$ are explicit, we still have difficulty determining the shapes of $\mathbb{D}_1$ and $\mathbb{D}_2$. Our numerical experiments suggest that $D_1(u) \not =0$ and $D_2(u)\not =0$ for all $u\in \mathbb{R} $; however, we cannot conclude this analytically. On the other hand, the inverse function theorem is only a local statement in dimension greater than one. This means that the solutions to \eqref{sys:III_a} and to \eqref{sys:III_b} are unique only in a neighbourhood of the true parameters. The non-degeneracy of the Jacobian determinant cannot be used to guarantee the existence of a global inverse function. For example, the mapping $(f(x,y), g(x,y))=(e^x\cos y, e^x \sin y)$ from $\mathbb{R} ^2$ to $\mathbb{R} ^2$ has a strictly positive Jacobian determinant $J(f, g)=e^{2x}$ on the whole plane $\mathbb{R} ^2$, but it is not an injection as a mapping from $\mathbb{R} ^2$ to $\mathbb{R} ^2$. Therefore, Theorem \ref{t.3.8} is most useful when we roughly know a priori the range of the true parameters. For example, in the modelling of financial markets, one roughly knows that the long-memory Hurst parameter $H$ is around $0.5$; but in some other cases researchers do not have any idea about the parameter ranges. Thus, a natural question arises: what should we do if there is more than one solution to \eqref{sys:III_a} or to \eqref{sys:III_b}? Now we are going to address this global uniqueness issue (existence is not an issue by Theorem \ref{t.3.8}). From the first equation of \eqref{sys:III_a} we have \begin{equation} v=\frac{u\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta}+\theta A(u)}{u+A(u)}\,. 
\label{e.3.32} \end{equation} Substituting it into the second equation of \eqref{sys:III_a}, we obtain \begin{eqnarray} &&u\left(\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} -\theta\right)^2+ u\left(u\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} +\theta A(u)\right)^2\nonumber\\ &&\qquad\qquad-A(u) \left[\theta^2(u+A(u))^2- \left(u\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta} +\theta A(u)\right)^2\right] - u(u+A(u))^2 \frac{\widehat L_{2,N}^\theta}{\widehat L_{0,N}^\theta} =0\,. \end{eqnarray} This is a single equation in the single unknown $u$; denote its left-hand side by $F_1(u)$. Solving $F_1(u)=0$ gives $\widehat u_N$, and substituting $\widehat u_N$ into \eqref{e.3.32} we get $\widehat v_N$. Notice that the quantities $\frac{\widehat L_{1,N}^\theta}{\widehat L_{0,N}^\theta}$ and $\frac{\widehat L_{2,N}^\theta}{\widehat L_{0,N}^\theta}$ appearing above can be computed from real data. We can proceed similarly for the system of equations \eqref{sys:III_b}. From its first equation we see \begin{equation} z=\frac{{\omega} \frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta}-{\theta} B({\omega})}{{\omega}-B({\omega})}\,. \label{e.3.27} \end{equation} Substituting it into the second equation of \eqref{sys:III_b}, we have \begin{eqnarray} &&{\omega} \left(\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta} -\theta\right)^2+{\omega} \left({\omega} \frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta} -\theta B({\omega})\right)^2\nonumber\\ &&\qquad\qquad +B({\omega}) \left[\theta^2({\omega}-B({\omega}))^2-\left({\omega}\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta}-{\theta} B({\omega})\right)^2\right] -{\omega} ({\omega}-B({\omega}))^2 \frac{\widehat R_{2,N}^\theta}{\widehat R_{0,N}^\theta}=0\,. \label{e.3.28} \end{eqnarray} Denote the left-hand side by $F_2({\omega})$. Solving $F_2({\omega})=0$ gives $\widehat {\omega}_N$, and substituting $\widehat {\omega}_N$ into \eqref{e.3.27} yields $\widehat z_N$. Notice that the quantities $\frac{\widehat R_{1,N}^\theta}{\widehat R_{0,N}^\theta}$ and $\frac{\widehat R_{2,N}^\theta}{\widehat R_{0,N}^\theta}$ appearing in \eqref{e.3.27} and \eqref{e.3.28} can also be computed from real data. We simulate a sample path of the process \eqref{e:MTOU} and plot the graphs of $F_1(u)$ and $F_2({\omega})$ in Figure \ref{fig:f1}. We take $\sigma=1$, $\alpha_1=0.1$, $\alpha_2=0.5$, $\beta_1=0.2$, $\beta_2=0.5$, $\theta=0.3$, $h=0.5$, $N=100,000$. It can be seen that, once the case $u=0$ and ${\omega}=0$ excluded in Remark \ref{rmk:3.10} is ruled out, there exists only one root of $F_1$ (and of $F_2$). \begin{figure} [t] \centering \subfigure[$F_1(u)$]{\label{fig:F1} \includegraphics[width=0.48\textwidth]{F1.eps}} \subfigure[ $F_2({\omega})$]{\label{fig:F2} \includegraphics[width=0.48\textwidth]{F2.eps}} \caption{The plots of $F_1(u)$ and $F_2({\omega})$. } \label{fig:f1} \end{figure} \setcounter{equation}{0}\Section{Numerical experiments}\label{sec:N} To validate the estimation scheme discussed in Section \ref{sec:S}, we conduct some numerical experiments in this section. Tables \ref{tab:T0Mean} and \ref{tab:T0Std} report the mean and standard deviation of the estimators $\widehat\alpha_{1,n,N}$ and $\widehat\alpha_{2,n,N}$ over 1,000 sample paths, with $\sigma=1$, $\theta=0$, $\beta_1=\beta_2=0$, and the order $n\in\{1,2,3,4,5,6,7\}$. Here we set the simulation parameters as $h=0.5$, $N=100,000$, $X_0=0$. Based on the numerical results, it can be seen that the estimators have good consistency, and the estimators corresponding to the orders $n=2,3$ are recommended. 
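For completeness, the Case I computation behind Tables \ref{tab:T0Mean} and \ref{tab:T0Std} can be reproduced with a short Python script. The sketch below is only our own illustration: it reuses the hypothetical \texttt{simulate\_threshold\_ou} routine sketched in the Introduction, and the Euler--Maruyama discretization introduces a small bias, so the output only approximately matches the tables.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def case1_estimators(x, sigma, n=2):
    """Generalized moment estimators (e:alpha)-(e:beta) for Case I
    (beta1 = beta2 = 0, theta = 0), from observations x = (X_h, ..., X_Nh)."""
    # max(-x,0)^n and max(x,0)^n equal (-x)^n I(x<=0) and x^n I(x>0) for n > 0
    L_hat = np.mean(np.maximum(-x, 0.0) ** n)
    R_hat = np.mean(np.maximum(x, 0.0) ** n)
    c = sigma ** n * gamma((n + 1) / 2) / np.sqrt(np.pi)
    a1 = (c / (L_hat * ((R_hat / L_hat) ** (1 / (n + 1)) + 1))) ** (2 / n)
    a2 = (c / (R_hat * ((L_hat / R_hat) ** (1 / (n + 1)) + 1))) ** (2 / n)
    return a1, a2

x = simulate_threshold_ou(alpha1=0.02, alpha2=0.05, beta1=0.0, beta2=0.0,
                          theta=0.0, sigma=1.0, h=0.5, N=100_000)
print(case1_estimators(x, sigma=1.0, n=2))   # roughly (0.02, 0.05)
\end{verbatim}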
Using the built-in Matlab function ``fsolve'' to solve the system \eqref{e:L/L}, with the parameters $h=0.5$, $\theta=0.1$, $\beta_1=\beta_2=0$, and $\sigma=0.6$, we report the means of the estimates of $\alpha_1$ and $\alpha_2$ in Table \ref{tab:TheMean} and the corresponding standard deviations in Table \ref{tab:TheStd}. \begin{SCtable} \caption{Mean of the estimators $\widehat\alpha$ over 1,000 sample paths. The true parameters are set as $\alpha_1=0.02$, $\alpha_2=0.05$.}\label{tab:T0Mean} \begin{tabular}{llllllll}\topline \rowcolor{gray!10}& $n$&&&&&& \\ \cline{2-8} \rowcolor{gray!10} Mean&1&2&3&4&5&6&7\\ \arrayrulecolor{tableheadcolor} \rowcolor{gray!10}$ \alpha_1$&0.0198&0.0199&0.0199&0.0200&0.0200&0.0200&0.0201\\ \rowcolor{gray!10}$\alpha_2$&0.0497&0.0497&0.0495&0.0496&0.0497&0.0499&0.0497\\ \bottomlinec \end{tabular} \end{SCtable} \begin{SCtable} \caption{Standard deviation of the estimators $\widehat\alpha$ over 1,000 sample paths. The true parameters are set as $\alpha_1=0.02$, $\alpha_2=0.05$.}\label{tab:T0Std} \begin{tabular}{llllllll}\topline \rowcolor{gray!10}& $n$&&&&&& \\ \cline{2-8} \rowcolor{gray!10} Std&1&2&3&4&5&6&7\\ \arrayrulecolor{tableheadcolor} \rowcolor{gray!10}$ \alpha_1$&0.0012&0.0011&0.0011&0.0011&0.0012&0.0013&0.0015\\ \rowcolor{gray!10} $\alpha_2$&0.0027&0.0025&0.0023&0.0023&0.0025&0.0026&0.0030\\ \bottomlinec \end{tabular} \end{SCtable} \begin{SCtable} \caption{Mean of the estimators $\widehat\alpha$ over 1,000 sample paths. The true parameters are set as $\alpha_1=0.1$, $\alpha_2=0.2$, $\theta=0.1$.}\label{tab:TheMean} \begin{tabular}{lllll}\topline \rowcolor{gray!10} & $N$ ($\times 10^4$)&&& \\ \cline{2-5} \rowcolor{gray!10} Mean&0.8&1.2&1.6&2.0 \\ \arrayrulecolor{tableheadcolor} \rowcolor{gray!10}$ \alpha_1$&0.0981&0.0979&0.0977&0.0974\\ \rowcolor{gray!10} $\alpha_2$&0.1917&0.1911&0.1910&0.1908\\ \bottomlinec \end{tabular} \end{SCtable} \begin{SCtable} \caption{Standard deviation of the estimators $\widehat\alpha$ over 1,000 sample paths. The true parameters are set as $\alpha_1=0.1$, $\alpha_2=0.2$, $\theta=0.1$.}\label{tab:TheStd} \begin{tabular}{lllll}\topline \rowcolor{gray!10} & $N$ ($\times 10^4$)&&& \\ \cline{2-5} \rowcolor{gray!10} Std&0.8&1.2&1.6&2.0 \\ \arrayrulecolor{tableheadcolor} \rowcolor{gray!10}$ \alpha_1$&0.0094&0.0074&0.0065&0.0056\\ \rowcolor{gray!10} $\alpha_2$&0.0150&0.0122&0.0103&0.0090\\ \bottomlinec \end{tabular} \end{SCtable} \setcounter{equation}{0}\Section{Conclusion}\label{sec:C} In this paper, we have proposed generalized moment estimators, based on the stationary distribution, for the two-regime threshold OU process. Our approach can be extended to other threshold diffusion processes, including the threshold square-root process, where $X$ is almost surely a positive process with the diffusion term $\sigma(x)=\sum_{i=1}^m\sigma_i\sqrt x I(\theta_{i-1}<x\le\theta_i)$, $0=\theta_0<\theta_1<\theta_2<\cdots<\theta_m=\infty$. In the multi-threshold OU case, the stationary density is given by \begin{equation*} \psi(x)=\sum_{i=1}^mk_i\exp\left(\frac{-\alpha_ix^2+2\beta_ix}{\sigma_i^2}\right)I(\theta_{i-1}<x\le \theta_i), \end{equation*} with $k_i$ determined by $\int_{-\infty}^\infty\psi(x)dx=1$ and $\sigma_i^2\psi(\theta_i-)=\sigma^2_{i+1}\psi(\theta_i+)$, $i=1,\ldots,m-1$. Notice that $\psi(x)$ may not be continuous at the points $\theta_i$. 
In addition, our estimation approach may be extended to estimate $\alpha_i$, $\beta_i$, $\theta$, and $\sigma$ simultaneously. As a related reference, we mention the recent work of \citet{cheng2020generalized}, who employed the ergodic theorem for the increments $X_{t_k}-X_{t_{k-1}}$ and derived their characteristic function under the stationary distribution. For the threshold process, this is more difficult to carry out because of the nonlinearity introduced by the threshold. The problem of estimating $\alpha_i$, $\beta_i$, $\theta$, and $\sigma$ simultaneously will be studied in a future work.
\section{Introduction} Let $C$ be a smooth projective curve of genus $g$ over an algebraically closed field~$\k$. We classify all finitely generated thick (triangulated) subcategories of $D^b(\coh C)$. Namely we prove that all such subcategories $\TT$ (if $\TT\neq 0,D^b(\coh C)$) are \emph{quiver-like}, that is there is a finite quiver $Q$ and an equivalence of categories $$D_0^b(Q)\stackrel{\sim}{\to}\TT$$ where $D_0^b(Q)\subset D^b(Q)$ is the full triangulated subcategory generated by the simple modules corresponding to vertices (Theorem \ref{main theo}). We then classify the quivers $Q$ which can be \emph{realized} on curves in this way (Theorem \ref{answer question one}). We also show that if $Q$ and $Q'$ are realizable quivers and there is an equivalence $$D^b_0(Q)\simeq D^b_0(Q')$$ then $Q\simeq Q'$ (Corollary \ref{cor_unique}). As a byproduct we obtain the following result (Propositions \ref{g=0 and g=1}, \ref{g=2}): \textit{If $g\geq 2$, there is an infinite descending binary tree of finitely generated thick subcategories of $D^b(\coh C)$. On the other hand, if $g=0,1$, no infinite descending chain of such subcategories exists (as follows easily from classical results of Grothendieck and Atiyah)}. This phenomenon should be compared with the case of $0$-dimensional schemes. Namely, let $R$ be an artinian algebra. If $R$ is a complete intersection, then there are no infinite descending chains of finitely generated thick subcategories of $D^b(R \mmod)$, see~\cite{CI}. On the other hand in \cite{EL} simple examples of a non-complete intersection $R$ are constructed, such that there exists a descending binary tree of such subcategories. The paper is organized as follows. Section 2 contains a brief reminder on triangulated categories and enhancements. In Section 3 we study quiver-like categories. In Section 4 we formulate our main observation: all thick subcategories on curves are quiver-like. In Section 5 we classify quivers which are realizable on curves (Definition \ref{def realizable}). \section{A reminder about generation and enhancements of triangulated categories} We fix a field $\k$. All our categories are $\k$-linear. References for triangulated and dg categories include \cite{BK}, \cite{BoNe}, \cite{BLL}, \cite{Dr}, \cite{ELO}, \cite{Ke}. If $\TT$ is a triangulated category and $X,Y$ are objects in $\TT$, we will freely use the equivalent notation for the corresponding space of morphisms $$\Hom (X,Y[n])=\Hom ^n(X,Y)=\Ext ^n(X,Y).$$ If $X=Y$, then we also consider the graded algebra $$\Ext ^\bullet _{\TT}(X,X)=\Ext ^\bullet (X,X)=\bigoplus _n\Ext ^n(X,X).$$ A triangulated category is \emph{$\Ext$-finite} if the space $\bigoplus _n\Hom (X,Y[n])$ is finite dimensional for all objects $X,Y$. We say that a triangulated category $\TT$ is \emph{non-split generated} by a collection of objects $X_1,\ldots ,X_n$ if $\TT$ is the smallest full triangulated subcategory of $\TT$ which contains the objects $X_1,\ldots ,X_n$. We denote this $\TT=[X_1,\ldots ,X_n]$. We say that a triangulated category $\TT$ is (split) \emph{generated} by a collection of objects $X_1,\ldots ,X_n$ if $\TT$ is the smallest full triangulated subcategory of $\TT$ which contains the objects $X_1,\ldots ,X_n$ and which is closed under direct summands in $\TT$. We denote this $\TT=\langle X_1,\ldots ,X_n\rangle$. A \emph{thick} subcategory of a triangulated category $\TT$ is a full triangulated subcategory which is closed under direct summands in $\TT$. A category is \emph{Karoubian} if it is idempotent complete, i.e. 
if every idempotent splits. Note that a thick subcategory of a Karoubian triangulated category is also Karoubian. If a triangulated category $\TT$ is closed under countable direct sums, then it is Karoubian. For an abelian category $\AA$ we denote by $D(\AA)$ its (unbounded) derived category and by $D^b(\AA)$ its bounded derived category, these categories are triangulated. If $\AA$ has countable direct sums then $D(\AA)$ is Karoubian. Therefore all thick subcategories of $D(\AA)$ are also Karoubian. If $\cS$ is a dg category we denote by $[\cS]$ its \emph{homotopy} category. If the dg category~$\cS$ is \emph{pre-triangulated}, then $[\cS]$ is triangulated. An \emph{enhancement} of a triangulated category $\TT$ is a pre-triangulated dg category $\cS$ together with an equivalence of triangulated categories $[\cS]\stackrel{\sim}{\to}\TT$. This allows us to consider an object $X \in \TT$ as an object in the dg category $\cS$. Then we denote its endomorphism dg algebra by $$\bbR \End (X):=\End _{\cS}(X).$$ This dg algebra is well defined up to a quasi-equivalence and its cohomology algebra is $$H^\bullet (\bbR \End (X))=\Ext ^\bullet _{\TT}(X,X).$$ For a dg algebra $\cE$ we consider the triangulated category $\Perf (\cE)$. This is the thick subcategory in $D(\cE)$ (= the derived category of right dg $\cE$-modules) which is (split) generated by the dg $\cE$-module $\cE$. The triangulated category $\Perf (\cE)$ is Karoubian and it has a natural enhancement. If dg algebras $\cE$ and $\cE'$ are quasi-isomorphic, then the triangulated categories $\Perf (\cE)$ and $\Perf (\cE ')$ are equivalent. A dg algebra is \emph{formal} if it is quasi-isomorphic to its cohomology graded algebra. We will often use the following standard fact. \begin{prop} \label{standard equiv} Let $\TT$ be a Karoubian triangulated category which has an enhancement. Assume that $\TT$ is generated by an object $X$, i.e. $\TT =\langle X\rangle$. Consider the dg algebra $\cE =\bbR \End (X)$. Then there exists a natural equivalence of categories $$\TT\simeq \Perf (\cE).$$ \end{prop} Finally, for a ring $R$ we denote by $\Modd R$ (resp. $\modd R$) the category of right $R$-modules (resp. finitely generated right $R$-modules). \section{Quiver-like triangulated categories} A \emph{quiver} means a \emph{finite} quiver, i.e. the set of vertices and arrows is finite. Let $Q$ be a quiver with $n$ vertices $v_1,\ldots ,v_n$. Let $\k Q$ be the corresponding (hereditary) path algebra. Denote by $D^b(Q)=D^b(\Modd \k Q)$ the bounded derived category of right $\k Q$-modules. Since the algebra $\k Q$ is hereditary, every object in $D^b(Q)$ is isomorphic to the direct sum of its cohomology. Let $\cR \subset \k Q$ denote the radical of $\k Q$, i.e. $\cR $ is the $2$-sided ideal generated by all arrows. Denote by $(\modd \k Q)_{\cR }$ the abelian category of finitely generated $\cR $-torsion $\k Q$-modules. (These modules are automatically finite dimensional.) Define $D^b_0(Q)\subset D^b(Q)$ as the subcategory of all finite complexes with cohomology in $(\modd \k Q)_{\cR }$. The triangulated categories $D^b(Q)$ and $D^b_0(Q)$ are Karoubian. Let $s_1,\ldots ,s_n$ be the simple $\k Q$-modules, corresponding to the vertices. For future reference we record the following easy fact. \begin{lemma} \label{triv lemma} In the above notation the following holds: (1) Every object in $(\modd \k Q)_{\cR }$ has a finite filtration with $s_i$'s as subquotients. 
The corresponding associated graded module is independent of the filtration and \begin{equation} \label{eq_dimvect} K_0((\modd \k Q)_{\cR })\simeq\bigoplus _i\bbZ [s_i]. \end{equation} (2) Every object in $D_0^b(Q)$ is isomorphic to the direct sum of its cohomology, and $$K_0(D_0^b(Q))\simeq\bigoplus _i\bbZ [s_i].$$ (3) $D_0^b(Q)=\langle s_1,\ldots ,s_n\rangle=[s_1,\ldots ,s_n]$, i.e. $D^b_0(Q)$ is non-split generated by the $s_i$'s. \end{lemma} \begin{proof} (1) Let $M\in (\modd \k Q)_{\cR }$, then the quotients of the (finite) filtration $M\supset M\cdot \cR \supset M\cdot \cR ^2\supset\ldots$ are directs sums of $s_i$'s. The isomorphism \eqref{eq_dimvect} is given by the dimension vector of $\k Q$-module. (2) follows from $\mathrm{gldim}(\k Q)=1$. In (3), inclusions $D_0^b(Q)\supset \langle s_1,\ldots ,s_n\rangle\supset [s_1,\ldots ,s_n]$ are obvious, while $D_0^b(Q)\subset [s_1,\ldots ,s_n]$ follows from (1) and (2). \end{proof} We will not need the following lemma but include it here for the interested reader. \begin{lemma} The natural functor $$\Psi\colon D^b((\modd \k Q)_{\cR })\to D^b_0(Q)$$ is an equivalence. \end{lemma} \begin{proof} Note that both categories are Karoubian and generated by the object $\oplus s_i$. Moreover, $\Psi(s_i)\simeq s_i$ for any $i$. Therefore to prove that $\Psi$ is an equivalence it suffices (using the standard devissage technique) to check that~$\Psi$ induces isomorphisms $$\Ext^m_{(\modd \k Q)_{\cR }}(s_i,s_j)=\Hom^m_{D^b((\modd \k Q)_{\cR })}(s_i,s_j)\to \Hom^m_{D^b_0(Q)}(s_i,s_j)= \Ext^m_{\Modd \k Q}(s_i,s_j)$$ for any $i,j$ and $m\in\bbZ$. For $m<0$ this is clear; for $m=0$ this holds since $(\modd \k Q)_{\cR }\subset \Modd \k Q$ is a full subcategory. For $m=1$ this holds by Yoneda's description of $\Ext^1$ groups since the subcategory $(\modd \k Q)_{\cR }\subset \Modd \k Q$ is extension-closed. For $m\ge 2$ we have $\Ext^m_{\Modd \k Q}(s_i,s_j)=0$ since $\k Q$ is hereditary, let us check that $\Hom^m_{D^b((\modd \k Q)_{\cR })}(s_i,s_j)=0$ for $m\ge 2$. By definition, any morphism $f\colon s_i\to s_j[m]$ in $D^b((\modd \k Q)_{\cR })$ has the form $$s_i\xleftarrow{q} C^\bul\xrightarrow{p} s_j[m],$$ where $C^\bul$ is a bounded complex over $(\modd \k Q)_{\cR }$, $p,q$ are homomorphisms of complexes and $q$ is a quasi-isomorphism. We claim that there exists a complex $P^\bul=[P^{-1}\to P^0]$ over $(\modd \k Q)_{\cR }$ and a quasi-isomorphism $s\colon P^\bul\to C^\bul$. Then $$f=pq^{-1}=pss^{-1}q^{-1}=0$$ since $ps=0$ (recall that $m\ge 2$). To prove the claim, let $\bar P^\bul=[\bar P^{-1}\xra{d} \bar P^0]$ be a resolution of $s_i$ by projective finitely generated $\k Q$-modules. There exists a quasi-isomorphism $\bar s\colon \bar P^\bul\to C^\bul$. Since $C^k$ are $\cR$-torsion modules, one can take $N$ such that $C^k\cdot \cR^{N-1}=0$ for $k=0,-1$. We let now $P^0:=\bar P^0/(\bar P^0\cdot \cR^N)$. By assumptions, $\bar s^0\colon \bar P^0\to C^0$ factors via $P^0$. Recall that $H^\bul(\bar P)\simeq s_i$, hence $d$ is injective (we will treat $\bar P^{-1}$ as a submodule in $\bar P^0$) and $$\bar P^0\cdot \cR^{N}\subset \bar P^0\cdot \cR\subset \bar P^{-1}.$$ Let $P^{-1}:=\bar P^{-1}/(\bar P^0\cdot \cR^N)$. Then $P^\bul$ is quasi-isomorphic to $\bar P^\bul$ (and to $s_i$). Clearly $P^\bul$ is a complex over $(\modd \k Q)_\cR$. 
Also $$\bar s^{-1}(\bar P^0\cdot \cR^N)\subset \bar s^{-1}(\bar P^{-1}\cdot \cR^{N-1})\subset \bar s^{-1}(\bar P^{-1})\cdot \cR^{N-1}\subset C^{-1}\cdot \cR^{N-1}=0,$$ hence $\bar s^{-1}\colon \bar P^{-1}\to C^{-1}$ factors via $P^{-1}$. Therefore, $\bar s$ factors via a quasi-isomorphism $s\colon P^\bul\to C^\bul$. This concludes the proof of the claim and the lemma. \end{proof} \begin{defi} \label{def quivet like} A triangulated category $\TT$ is called {\rm quiver-like} if there exists a finite quiver~$Q$ and an equivalence of triangulated categories $D^b_0(Q)\simeq \TT$. \end{defi} We obtain the immediate consequence of Definition \ref{def quivet like} and Lemma \ref{triv lemma}. \begin{cor} \label{triv cor} Let $\TT$ be a quiver-like triangulated category with an equivalence $\Phi \colon D^b_0(Q)\to \TT$. Put $t_i=\Phi (s_i)$. Then we have the following. (1) For any indecomposable object $B\in \TT$ there exists a sequence of objects $B_0,\ldots ,B_m$ such that $B_m=B[d]$ for some $d\in \bbZ$, $B_0=0$, and for each $i=1,\ldots ,m$ the object $B_i$ fits into an exact triangle $$B_{i-1}\to B_i\to t_{j_i}\to B_{i-1}[1]$$ for some $j_i$. (2) $K_0(\TT)\simeq\bigoplus _j\bbZ [t_j]$. (3) $\TT=[t_1,\ldots ,t_n]$, i.e. $\TT$ is non-split generated by the $t_j$'s. \end{cor} \begin{lemma} \label{lemma on formality} Let $\cA$ be an abelian category in which for every object $A$ there exists an injective resolution $$0\to A\to I^0\to I^1\to 0.$$ Let $A_1,\ldots,A_n$ be objects in $\cA$ such that $\Hom (A_i,A_j)=\delta_{ij}\cdot \k$ and $\Ext ^s(A_i,A_j)=0$ for all $i,j$ and $s\ne 0,1$. Then the dg algebra $\bbR\End (\oplus_{i=1}^n A_i)$ is formal. The same holds for projective resolutions. \end{lemma} \begin{proof} Choose injective resolutions $A_j\to I_j^\bullet$ of length $1$. Then the dg algebra $\bbR \End (\oplus_j A_j)$ is quasi-isomorphic to $$\cE :=\End (\oplus_j I_j^\bullet)=\cE ^{-1}\oplus \cE ^0\oplus \cE ^1.$$ Let $e_j\in \cE ^0$ be the idempotent of the summand $I_j^\bullet$. Then $\cE ^0=\oplus_{i,j}e_i\cE ^0 e_j$, $\cE ^1=\oplus_{i,j}e_i\cE_1e_j$ and $d$ sends $e_i\cE_0e_j$ to $e_i\cE_1e_j$. Choose a subspace $V=V^0\oplus V^1$ of $\cE $, where $V^0=\oplus_j \k e_j\subset \cE ^0$, $V^1_{i,j}\subset e_i\cE^1e_j$ is any complement of $d(e_i\cE ^0e_j)\subset e_i\cE ^1e_j$, and $V^1=\oplus_{i,j} V^1_{i,j}\subset \cE ^1$. Then $V$ is a dg subalgebra of $\cE $ and the inclusion $V\subset \cE $ is a quasi-isomorphism. In case of projective resolutions the proof is similar. \end{proof} \begin{cor} \label{cor formal}Let $Q$ be a quiver and consider the category $D^b_0(Q)$ with the standard enhancement coming from the embedding $D^b_0(Q)\subset D^b(Q)$. Then the dg algebra $$\bbR \End (\oplus s_i)$$ is formal. Hence there is an equivalence of triangulated categories $$D^b_0(Q)\simeq \Perf (\Ext ^\bullet (\oplus s_i,\oplus s_i))$$ where the graded algebra $\Ext ^\bullet (\oplus s_i,\oplus s_i)$ is considered as a dg algebra with zero differential. \end{cor} \begin{proof} The formality of the dg algebra $\bbR \End (\oplus s_i)$ follows from Lemma \ref{lemma on formality} applied to the abelian category $\cA$ of all right $\k Q $-modules and taking $A_i=s_i$. Because the category $D^b_0(Q)$ is Karoubian by Proposition \ref{standard equiv} we get the equivalence $$D^b_0(Q)\simeq \Perf (\bbR \End (\oplus s_i)).$$ The last assertion then follows from the fact that the dg algebras $\Ext ^\bullet (\oplus s_i,\oplus s_i)$ and $\bbR \End (\oplus s_i)$ are quasi-isomorphic. 
\end{proof} \begin{defi}\label{defi vertex like} Let $\TT$ be a triangulated category with an enhancement. A collection of objects $\{t_1,\ldots ,t_n\}$ is $\TT$ is called {\rm vertex-like} if the endomorphism dg algebra $$\bbR \End (\oplus t_i)$$ is formal, and in addition $\Hom (t_i,t_j)=\delta _{ij}\cdot \k$, and $\Hom ^p(t_i,t_j)=0$ for all $i,j$ and $p\neq 0,1$. \end{defi} \begin{remark} It follows from Corollary \ref{cor formal} that the collection of objects $\{ s_1,\ldots ,s_n\}$ in the category $D^b_0(Q)$ is vertex-like. Note that the dimension of the space $\Ext ^1(s_i,s_j)$ is equal to the number of arrows from $v_j$ to $v_i$. \end{remark} The next proposition gives a necessary and sufficient condition for a category to be quiver-like. \begin{prop} \label{equiv cond} Let $\TT$ be an $\Ext$-finite triangulated category. The following conditions are equivalent. (1) $\TT$ is quiver-like. (2) $\TT\simeq \Perf(E)$ where $E=E^0\oplus E^1$ is a dg algebra with zero differential and such that $E^0=\k \times \ldots \times \k$. (3) $\TT$ is Karoubian, it has an enhancement and it is generated by a collection of objects $\{t_1,\ldots ,t_n\}$ that is vertex-like. Moreover, if $\TT$ satisfies (3), then there exists a quiver $Q$ and an equivalence $\Phi \colon D^b_0(Q)\to \TT$ such that $\Phi (s_i)=t_i$. \end{prop} \begin{proof} (1)$\Rightarrow$(2) is contained in Corollary \ref{cor formal}. (2)$\Rightarrow$(3). Since $\TT\simeq \Perf(E)$, it is Karoubian and has an enhancement. Let $e_1,\ldots ,e_n\in E$ be the idempotents corresponding to the factors in $E^0=\k \times \ldots \times \k$. Then the right dg $E$-modules $e_iE$ are h-projective, they generate $\Perf (E)$, and the dg algebra $$\bbR \End (\bigoplus _ie_iE)=\bbR \End (E)=E$$ is formal. In addition $\Hom (e_iE,e_jE)=\delta _{ij}\cdot \k$, $\Hom ^p(e_iE,e_jE)=0$ for all $i,j$ and $p\neq 0,1$. So we can take $t_i=e_iE$. (3)$\Rightarrow$(1). Consider the graded algebra $\cE =\Ext ^\bullet (\oplus t_i,\oplus t_i)$ as a dg algebra with zero differential. Our assumptions imply that the category $\TT$ is equivalent to the category $\Perf (\cE)$. Now define the quiver $Q$ with vertices $v_1,\ldots ,v_n$ and the number of arrows from $v_i$ to $v_j$ equal to $\dim \Hom (t_j,t_i[1])$. Let $s_1,\ldots ,s_n\in D_0^b(Q)$ be the corresponding simple modules. By construction, we have $\Ext^\bullet(t_i,t_j)\simeq\Ext^\bullet(s_i,s_j)$ for all $i,j$. By Corollary \ref{cor formal} we get $$D_0^b(Q)\simeq \Perf(\Ext^\bullet(\oplus s_i,\oplus s_i))\simeq \Perf (\cE)\simeq \TT,$$ i.e. $\TT$ is quiver-like. This proves the implication (3)$\Rightarrow$(1) and also the last assertion of the proposition. \end{proof} \begin{prop} \label{prop_niceexist} Assume that the field $\k$ is algebraically closed. Let $\AA$ be an abelian ($\k$-linear) category such that any object $A\in \AA$ has an injective resolution of length $\le 1$. Assume that the category $D^b(\AA)$ is Karoubian. Let $\TT\subset D^b(\AA)$ be a finitely generated $\Ext$-finite thick subcategory. Assume there exists a linear function $$r\colon K_0(\TT)\to \Z,$$ such that for any nonzero $F\in \TT\cap\AA$ one has $r([F])>0$. Then the category $\TT$ satisfies the condition (3) in Proposition \ref{equiv cond}, and hence it is quiver-like. The same holds for projective resolutions. \end{prop} \begin{proof} Note that $\AA$ is hereditary and thus any object in $\TT$ is a direct sum of its cohomology. It follows that one can choose a finite set of generators in $\TT$ belonging to $\AA\subset D^b(\AA)$. 
Take any family $A_1,\ldots,A_n\in \AA$ of nonzero objects generating $\TT$ such that \begin{enumerate} \item $\sum_i r([A_i])$ is the minimal possible; \item the number $n$ is the maximal possible among all families with the fixed $\sum_i r([A_i])$. \end{enumerate} (such a family exists because $r([A])>0$ for any nonzero $A\in\TT\cap \AA$). Note that $A_i\ncong A_j$ for $i\neq j$. We claim that the family $\{ A_1,\ldots,A_n\}$ is vertex-like. First we check that there are no morphisms between $A_i$'s except for scalar multiplication. Let $f\colon A_i\to A_j$ be a morphism. Denote $K:=\ker f,I:=\im f, C:=\coker f$. Since $\AA$ is hereditary, the complex $Cone (f)$ in $\TT$ is quasi-isomorphic to $K[1]\oplus C$. Since $\TT$ is thick we get that $K,C\in \TT$. Also we get $I\in \TT$ and $$\langle A_i,A_j\rangle=\langle K,I,C\rangle.$$ If $i\ne j$ we have $$r([A_i])+r([A_j])=r([K])+r([I])+r([I])+r([C])=(r([K])+r([I])+r([C]))+r([I]).$$ Replacing $A_i,A_j$ with $K,I,C$ we get a generating family with the smaller $\sum_i r([A_i])$ unless $I=0$, it contradicts to condition (1). Hence $f=0$. If $i=j$ we get $$r([A_i])=r([K])+r([I])=r([I])+r([C]).$$ Replacing $A_i$ with $K,I$ we get a generating family with the same $\sum_i r([A_i])$ and with the bigger number of objects unless $I=0$ or $K=0$, it contradicts to condition (2). Hence $f=0$ or $K=0$. Similarly, replacing $A_i$ with $I,C$ we see that $I=0$ or $C=0$. Thus, if $f\ne 0$ then $K=C=0$ and $f$ is an isomorphism. We proved that for each $i$ the endomorphism algebra $\End A_i$ is a finite-dimensional division $\k$-algebra. Since $\k$ is algebraically closed, $\End A_i=\k$. Clearly, $\Hom^s(A_i,A_j)=0$ for $s\ne 0,1$. Finally, the dg algebra $\R\End(\oplus_i A_i)$ is formal by Lemma~\ref{lemma on formality}. Therefore $\{ A_1,\ldots,A_n\}$ is a vertex-like collection. Also $\TT$ is Karoubian, hence it satisfies the condition (3) of Proposition \ref{equiv cond}. \end{proof} \begin{cor}\label{first cor} Let $A$ be a hereditary $\k$-algebra over an algebraically closed field. Let $\TT=\langle M_1,\ldots,M_n\rangle\subset D^b(\Modd A)$ be any thick subcategory generated by finite-dimensional (over $\k$) modules. Then $\TT$ is quiver-like. \end{cor} \begin{proof} We use Proposition~\ref{prop_niceexist}. Namely, we take $\AA=\Modd A$ and $r$ to be the function induced by the dimension of a module over $\k$. \end{proof} \begin{cor} Let $\TT$ be a quiver-like triangulated category. Then any finitely generated thick subcategory $\TT '\subset \TT$ is also quiver-like. \end{cor} \begin{proof} We may assume that $\TT=D^b_0(Q)$ for a quiver $Q$ and use Corollary \ref{first cor} with $A=\k Q$. \end{proof} \medskip \begin{prop} \label{prop_Qtree} Let $Q$ be a quiver with two vertices $1,2$ such that for any $i, j\in\{1,2\}$ there is at least one arrow from $i$ to $j$. Then the category $D^b_0(Q)$ has an infinite descending binary tree of thick subcategories. Moreover one can find such a tree with the following additional property: if $\TT _1$ and $\TT _2$ are two elements of this tree which are not located one above the other, then $\TT _1\cap \TT _2=0$. \end{prop} \begin{proof} Let $a\colon 1\to 1, b\colon 2\to 1, c\colon 2\to 2$ be some arrows. Define the right $\k Q$-module $M^{(1)}$ as follows: $M^{(1)}_1=\k x,$ $M^{(1)}_2=\k y$ with $x\cdot b=y$ and all other arrows in $Q$ acting by zero. 
Define another right $\k Q$-module $M^{(2)}$ as $M^{(2)}_1=\k \alpha \oplus \k\beta,$ $M^{(2)}_2=\k\gamma \oplus \k\delta$ with the nontrivial action of the arrows given by $\alpha \cdot a=\beta$, $\beta \cdot b=\gamma$, $\gamma \cdot c=\delta$. Then one checks that $$\Hom (M^{(i)},M^{(j)})=\delta _{ij}\cdot\k\quad \text{and}\quad \Ext^1 (M^{(i)},M^{(j)})\ne 0 \quad \text{for all}\quad i,j\in\{1,2\}.$$ We conclude (using Lemma~\ref{lemma on formality}) that the modules $M^{(1)},M^{(2)}$ form a vertex-like set. It follows then from Proposition~\ref{equiv cond} and Corollary \ref{triv cor} that $\langle M^{(1)},M^{(2)}\rangle =[M^{(1)},M^{(2)}]$. Consequently, the thick subcategory $\langle M^{(1)},M^{(2)}\rangle$ is strictly smaller than $D^b_0(Q)$ (because, for example, all objects in $\langle M^{(1)},M^{(2)}\rangle$ have even-dimensional cohomology). Moreover, by Proposition~\ref{equiv cond} we get an equivalence $\langle M^{(1)},M^{(2)}\rangle\simeq D^b_0(Q')$ where the quiver $Q'$ also satisfies the assumptions of the present proposition. We can iterate the process, which then gives an infinite descending chain of thick subcategories of $D^b_0(Q)$. To construct a required descending binary tree of subcategories we can proceed as follows. For convenience let us describe the modules $M^{(1)},M^{(2)}$ constructed above by the diagrams $$M^{(1)}:\ \bullet \stackrel{b}{\to}\bullet$$ $$M^{(2)}:\ \bullet \stackrel{a}{\to}\bullet \stackrel{b}{\to}\bullet \stackrel{c}{\to}\bullet$$ Let us similarly define the right $A$ modules $$M^{(3)}:\ \bullet \stackrel{a}{\to} \bullet \stackrel{a}{\to}\bullet \stackrel{b}{\to}\bullet \stackrel{c}{\to}\bullet \stackrel{c}{\to}\bullet$$ $$M^{(4)}:\ \bullet \stackrel{a}{\to}\bullet \stackrel{a}{\to} \bullet \stackrel{a}{\to}\bullet \stackrel{b}{\to}\bullet \stackrel{c}{\to}\bullet \stackrel{c}{\to}\bullet \stackrel{c}{\to}\bullet$$ One checks that \begin{equation} \label{eq_he} \Hom (M^{(i)},M^{(j)})=\delta _{ij}\cdot\k\quad\text{and}\quad \Ext ^1(M^{(i)},M^{(j)})\ne 0 \end{equation} for all $i,j\in \{1,2,3,4\}$. Indeed, for $\Hom$ this can be done by hands, for $\Ext^1$ use $$\chi(M^{(i)},M^{(j)})=i\cdot j\cdot\chi(S_1\oplus S_2, S_1\oplus S_2)=ij(2-|\{\text{arrows in $Q$}\}|)<0.$$ Hence the thick subcategory $\langle M^{(3)},M^{(4)}\rangle \subset D^b_0(Q)$ is also quiver-like. We claim that the categories $\langle M^{(1)},M^{(2)}\rangle$ and $\langle M^{(3)},M^{(4)}\rangle$ have zero intersection. Assume the converse. Since any object in $D^b_0(Q)$ is a direct sum of its cohomology, it follows that there exists a nonzero indecomposable $\k Q$-module $L\in \langle M^{(1)},M^{(2)}\rangle\cap \langle M^{(3)},M^{(4)}\rangle$. By Corollary~\ref{triv cor}, every indecomposable $\k Q$-module in $\langle M^{(1)},M^{(2)}\rangle$ (resp. in $\langle M^{(3)},M^{(4)}\rangle$) has a filtration with subquotients $M^{(1)},M^{(2)}$ (resp. $M^{(3)},M^{(4)}$). Therefore, if $M$ (resp. $N$) is an indecomposable $\k Q$-module in $\langle M^{(1)},M^{(2)}\rangle$ (resp. in $\langle M^{(3)},M^{(4)}\rangle$), then $\Hom (M,N)=0$ by \eqref{eq_he}. In particular, $\Hom(L,L)=0$ and thus $L=0$, a contradiction. It is clear that we can now iterate the process to construct a descending binary tree of quiver-like categories with the required properties. \end{proof} \section{Thick subcategories on curves} In this section we assume that $C$ is a smooth projective connected curve over an algebraically closed field $\k$. Our goal is to classify thick subcategories in $D^b(\coh C)$. 
\begin{lemma} \label{lemma_notorsion} Let $\TT\subset D^b(\coh C)$ be a thick subcategory which contains a nonzero vector bundle and a nonzero torsion sheaf. Then $\TT=D^b(\coh C)$. \end{lemma} \begin{proof} Let $V,T\in \TT$ be a vector bundle and a torsion sheaf respectively. Let $x\in \Supp (T)$. It is easy to see that the skyscraper sheaf $\O_x$ is in $\TT$. Choose a line bundle $L$ and a surjection $V\to L$. This gives a short exact sequence of vector bundles $$0\to E\to V\to L\to 0.$$ Choose a surjection $L\to \O_x$ and denote by $V^{(1)}\subset V$ the kernel of the composition $V\to L\to \O_x$. Thus $V^{(1)}\in \TT$ and we obtain a short exact sequence $$0\to E\to V^{(1)}\to L(-x)\to 0.$$ Iterating this process we get for any $n\ge 1$ a short exact sequence $$0\to E\to V^{(n)}\to L(-nx)\to 0$$ with $V^{(n)}\in \TT$. For $n\gg 0$ this sequence splits, hence $L(-nx)\in \TT$ for some $n$. It is then easy to see that $L(nx)\in \TT$ for all $n\in \Z$. Let $F\in \coh C$. We can find an exact sequence of coherent sheaves $$0\to K\to \oplus L(nx)\to \oplus L(mx)\to F\to 0.$$ The two middle terms are in $\TT$ and the category $\coh C$ is hereditary. Hence $F\in \TT$ as a direct summand of $Cone(\oplus L(nx)\to \oplus L(mx))$. Therefore $\coh C\subset \TT$. It follows that $\TT=D^b(\coh C)$. \end{proof} We obtain the immediate corollary. \begin{cor}\label{def cor} If $\TT \subset D^b(\coh C)$ is a thick subcategory, $\TT\ne 0, D^b(\coh C)$, then exactly one of the following holds: (1) Every object in $\TT$ has torsion cohomology. (2) Every object in $\TT$ has torsion free cohomology. \end{cor} \begin{proof} Indeed, every object in $D^b(\coh C)$ is the direct sum of its cohomology. So it remains to apply Lemma \ref{lemma_notorsion}. \end{proof} \begin{defi} \label{tor nontor} We will say that a thick subcategory $\TT \subset D^b(\coh C)$ is \emph{proper} if $\TT\ne 0, D^b(\coh C)$. We call $\TT$ \emph{torsion} (resp. \emph{torsion-free}) in case (1) (resp. (2)) in Corollary \ref{def cor} holds. \end{defi} Now we can formulate our main observation. \begin{theo} \label{main theo} Every finitely generated thick proper subcategory $\TT \subset D^b(\coh C)$ is quiver-like. \end{theo} \begin{proof} We consider the two cases of Definition \ref{tor nontor}. Case 1: $\TT$ is torsion. In this case we may apply Proposition \ref{prop_niceexist} with $\AA =\qcoh C$ and the function $r\colon K_0(\TT)\to \bbZ$ induced by the dimension (over $\k$) of a torsion sheaf. Case 2: $\TT$ is torsion-free. In this case we again apply Proposition \ref{prop_niceexist} with $\AA =\qcoh C$, but take the function $r\colon K_0(\TT)\to \bbZ$ to be induced by the rank of a vector bundle. \end{proof} \begin{defi} \label{def realizable} A quiver $Q$ is called \emph{realizable} if the category $D^b_0(Q)$ is equivalent to a thick finitely generated subcategory of $D^b(\coh C)$ for a smooth projective curve over an algebraically closed field $\k$. \end{defi} In the next section we are going to classify the realizable quivers. For now let us give some examples. \begin{defi} \label{def_T0} We denote by ${\bf Q}_m$ the quiver with one vertex and $m$ loops. \end{defi} \begin{lemma} \label{first example for curves} For any $n$, the quiver which is the disjoint union of $n$ copies of the quiver ${\bf Q}_1$ is realizable on any curve.
In fact any torsion category (Definition \ref{tor nontor}) supported at $n$ distinct points is equivalent to $$D^b_0({\bf Q}_1\sqcup \ldots \sqcup {\bf Q}_1)=D^b_0({\bf Q}_1)^{\oplus n}.$$ \end{lemma} \begin{proof} Let $p$ be a point on a smooth curve $C$. Then the skyscraper sheaf $\cO _p\in D^b(\coh C)$ is a vertex-like object with $$\Hom (\cO _p,\cO _p)\simeq \Ext ^1(\cO _p,\cO _p)\simeq \k.$$ Hence by Proposition \ref{equiv cond} the thick subcategory $\langle \cO _p\rangle \subset D^b(\coh C)$ is equivalent to $D^b_0({\bf Q}_1)$. Now it is clear that the thick subcategory $$\TT =\langle \cO _{p_1},\ldots ,\cO _{p_n}\rangle $$ for $n$ different points $p_1,\ldots ,p_n\in C$ is equivalent to $$D^b_0({\bf Q}_1\sqcup \ldots \sqcup {\bf Q}_1)=D^b_0({\bf Q}_1)^{\oplus n}.$$ It remains to note that $\langle \cO_p\rangle$ contains no proper thick subcategories and thus any thick finitely generated torsion subcategory in $D^b(\coh C)$ is of the form $\langle \cO _{p_1},\ldots ,\cO _{p_n}\rangle$ for some points $p_1,\ldots ,p_n$. \end{proof} \begin{prop} \label{g=0 and g=1} Let $C$ be a curve of genus $g$ and let $\TT \subset D^b(\coh C)$ be a thick finitely generated proper subcategory. (1) If $g=0$, then $\TT \simeq D^b_0({\bf Q}_0)$ or $\TT\simeq D^b_0({\bf Q}_1\sqcup \ldots \sqcup {\bf Q}_1).$ The first case occurs if $\TT$ is torsion-free and the second one occurs when $\TT$ is torsion. (2) If $g=1$, then $\TT\simeq D^b_0({\bf Q}_1\sqcup \ldots \sqcup {\bf Q}_1)$. (3) If $g=0$ or $g=1$, the category $\TT$ contains only finitely many distinct thick subcategories. \end{prop} \begin{proof} (1) Let $g=0$. Assume that $\TT$ is torsion-free. Since every vector bundle on~$\bbP ^1$ is a direct sum of line bundles $\cO (n)$, it is easy to see that $\TT =\langle \cO (n)\rangle $ for some~$n$ and hence $$\TT \simeq D^b(\modd \k) \simeq D^b_0({\bf Q}_0).$$ If on the other hand $\TT$ is torsion, it follows from Lemma \ref{first example for curves} that $$\TT\simeq D^b_0({\bf Q}_1\sqcup \ldots \sqcup {\bf Q}_1).$$ (2) Let $g=1$ and let $F\in \TT$ be a nonzero indecomposable object. Then there exists an autoequivalence $\Psi $ of $D^b(\coh C)$ such that $\Psi (F)$ is a torsion sheaf. It follows that the category $\Psi (\TT)$ is torsion. Then again by Lemma~\ref{first example for curves} we find that $\TT\simeq \Psi (\TT)\simeq D^b_0({\bf Q}_1\sqcup \ldots \sqcup {\bf Q}_1)$. (3) This follows from (1) and (2) and the fact that the categories $D^b_0({\bf Q}_0)$ and $D^b_0({\bf Q}_1)$ have no proper thick subcategories. \end{proof} In contrast to Proposition \ref{g=0 and g=1}, thick subcategories of curves of genus $g\geq 2$ behave differently. \begin{prop} \label{g=2} (1) Let $C$ be a curve of genus $g\geq 2$. Then there exists an infinite descending binary tree of thick subcategories in $D^b(\coh C)$ with the following property: if elements $\TT _1$, $\TT _2$ of this tree are not located one above the other, then $$\TT _1\cap \TT _2=0.$$ (2) For any $n\geq 0$ the quiver ${\bf Q}_n$ is realizable on a curve of genus $g=n$. \end{prop} \begin{proof} (1) Let $\cL_1,\cL_2$ be distinct line bundles of degree $0$ on $C$. Then for all $i,j\in \{1,2\}$ we have $\Hom (\cL_i,\cL_j)=\delta _{ij}\cdot \k$ and $\Ext^s(\cL_i,\cL_j)=0$ for $s\ne 0,1$.
By Lemma~\ref{lemma on formality}, Definition \ref{defi vertex like} and Proposition \ref{equiv cond} we know that $\{\cL_1,\cL_2\}$ is a vertex-like collection, and the thick subcategory $\langle\cL_1,\cL_2\rangle\subset D^b(\coh C)$ is equivalent to $D^b_0(Q)$, where $Q$ is the quiver with vertices $v_1,v_2$ and $\dim\Ext^1(\cL_j,\cL_i)$ arrows from $v_i$ to $v_j$ for any $i,j\in\{1,2\}$. Note that for any $i,j\in\{1,2\}$ by the Riemann-Roch formula $$\dim \Ext ^1(\cL_i,\cL_j)=g(C)-1+\delta_{ij}> 0,$$ hence by Proposition~\ref{prop_Qtree} the category $D^b_0(Q)$ has an infinite descending binary tree of thick finitely generated subcategories with the required property. (2) It suffices to take any line bundle $\cL$ on $C$. The thick subcategory $\langle \cL \rangle \subset D^b(\coh C)$ is equivalent to $D_0^b({\bf Q}_g)$ where $g$ is the genus of $C$. \end{proof} \section{Realization of quivers on curves and uniqueness problems} In this section a \emph{curve} means a smooth projective connected curve over an algebraically closed field $\k$. We complete the problem of classification of proper finitely generated thick subcategories of curves which we started in the previous section. In view of Theorem \ref{main theo}, Lemma \ref{first example for curves}, and Proposition \ref{g=0 and g=1} it remains to answer the following questions: \medskip \noindent{\bf Q1}. Which quivers $Q$ are realizable by torsion-free subcategories on curves of genus $g\geq 2$? (Definitions \ref{def realizable}, \ref{tor nontor}). \medskip We can also ask the following question. \medskip \noindent{\bf Q2}. Suppose the quiver $Q$ is realizable. Is then $Q$ determined uniquely by the category $D^b_0(Q)$? \medskip We start with question {\bf Q1}. First let us summarize the relevant results from the previous sections. \begin{prop} \label{useful summary for torsion-free} Let $C$ be a curve and let $\TT \subset D^b(\coh C)$ be a proper thick subcategory which is torsion-free (Definition \ref{tor nontor}). Then (1) $\TT$ is quiver-like. (2) $\TT =\langle E_1,\ldots ,E_n\rangle$, where $\{E_1,\ldots ,E_n\}$ is a vertex-like collection of vector bundles on $C$. (3) We have $\TT =[E_1,\ldots ,E_n]$ and $K_0(\TT)=\bigoplus _i\bbZ [E_i]$. (4) Every indecomposable object in $\TT$ is of the form $F[m]$, where $F$ is a vector bundle that has a filtration with subquotients being $E_i$'s. Vice versa, a vertex-like collection of vector bundles $\{E_1,\ldots ,E_n\}$ generates a torsion-free proper thick subcategory of $D^b(\coh C)$. \end{prop} \begin{proof} (1) Follows from Theorem \ref{main theo}. Then (2),(3),(4) follow from Proposition \ref{equiv cond} and Corollary \ref{triv cor}. For the last assertion: we know that $\langle E_1,\ldots ,E_n\rangle \subset D^b(\coh C)$ is a quiver-like subcategory of $D^b(\coh C)$. Now Corollary \ref{triv cor} implies that all indecomposable objects of $\langle E_1,\ldots ,E_n\rangle$ are (shifted) vector bundles on $C$. Hence it is a proper torsion-free thick subcategory of $D^b(\coh C)$. \end{proof} Let $C$ be a curve of genus $g$. For a vector bundle $E$ on $C$ let $r(E)$ and $d(E)$ denote respectively its rank and degree. For vector bundles $E,F$ put $$\chi (E,F)=\chi (C,E,F)=\dim \Hom (E,F)-\dim \Ext ^1(E,F).$$ By a version of Riemann-Roch formula we have \begin{equation} \label{RR formula} \chi (E,F)=r(E)r(F)(1-g)+r(E)d(F)-r(F)d(E). 
\end{equation} A finite quiver $Q$ with vertices $v_1,\ldots ,v_n$ is determined by a square matrix $A=(a_{ij})\in M_{n\times n}(\bbZ)$ with nonnegative entries $a_{ij}\geq 0$, such that $a_{ij}$ is the number of arrows from $v_i$ to $v_j$. Put $Q=Q(A)$. Recall (Proposition \ref{equiv cond}) that the quiver $Q(A)$ is realized by a torsion-free category on a curve $C$ if and only if there exists a vertex-like collection of vector bundles $\{ E_1,\ldots ,E_n\}$ on $C$ such that \begin{equation}\label{aij=ext1} \dim \Ext ^1(E_j,E_i)=a_{ij}. \end{equation} By \eqref{RR formula} the equation \eqref{aij=ext1} is equivalent to the equation \begin{equation}\label{nec cond} a_{ij}=r_jr_i(g-1)-r_jd_i+r_id_j+\delta _{ij}\quad (=-\chi (E_j,E_i)+\delta _{ij}) \end{equation} where $r_i=r(E_i)$ and $d_i=d(E_i)$. This gives us a necessary condition for the quiver $Q(A)$ to be realized on a curve of genus $g$. Actually this condition is also sufficient. The following theorem answers question {\bf Q1} above. \begin{theo} \label{answer question one} Let $g\geq 2$ and let $A=(a_{ij})\in M_{n\times n}(\bbZ)$ be a matrix with nonnegative entries. Then the quiver $Q(A)$ is realized by a torsion-free category on a given curve~$C$ of genus $g$ if and only if the following holds: there exists a collection of integers $(r_1,\ldots ,r_n,d_1,\ldots ,d_n)\in \bbZ _{>0}^n\times \bbZ ^n,$ such that $$a_{ij}=r_jr_i(g-1)-r_jd_i+r_id_j+\delta _{ij}$$ for each pair $(i,j)$. \end{theo} \begin{proof} We already explained the ``only if'' direction. For the ``if'' direction we will prove the following: given a set of integers $(r_1,\ldots ,r_n,d_1,\ldots ,d_n)\in \bbZ _{>0}^n\times \bbZ ^n$ such that for each pair $(i,j)$ \begin{equation} \label{inequal} r_jr_i(g-1)-r_jd_i+r_id_j+\delta _{ij}\geq 0 \end{equation} holds, on any curve $C$ of genus $g$ there exists a vertex-like collection of vector bundles $\{E_1,\ldots ,E_n\}$ with $d(E_i)=~d_i$ and $r(E_i)=r_i$. Choose vector bundles $E_i$ with $d(E_i)=~d_i$ and $r(E_i)=r_i$. For $i\neq j$ the condition \eqref{inequal} means that $\chi (E_j,E_i)\leq 0$. Recall a theorem by Hirschowitz (see~\cite[Th. 1.2]{RTB}): if $E,F$ are generic vector bundles of given rank and degree on a curve $C$ of genus $\geq 2$ and $\chi(E,F)\leq 0$ then $\Hom(E,F)=0$. Also a generic vector bundle $E$ (of given rank and degree) on $C$ is stable (see~\cite[Prop. 2.6]{NR}) and thus $\End(E)=\k$. It follows that for a generic choice of vector bundles $E_i$ with degree $d_i$ and rank $r_i$ we have $\dim \Hom (E_j,E_i)=\delta _{ij}$ and the collection $\{E_1,\ldots,E_n\}$ is vertex-like. \end{proof} Recall that the \emph{slope} of a vector bundle $E$ is $d(E)/r(E)$. \begin{prop} Let $C$ be a curve of genus $g\geq 2$ and let $(r_1,\ldots ,r_n,d_1,\ldots ,d_n)\in \bbZ _{>0}^n\times \bbZ ^n$. Then there exists a vertex-like collection of vector bundles $\{E_1,\ldots ,E_n\}$ on $C$ with $d(E_i)=d_i$ and $r(E_i)=r_i$ if and only if for all $1\leq i,j\leq n$ we have \begin{equation} \label{reformulation} \left|\frac{d_i}{r_i}-\frac{d_j}{r_j}\right|\le g-1. \end{equation} \end{prop} \begin{proof} Indeed the inequality \eqref{inequal} for pairs $(i,j)$ and $(j,i)$ with $i\neq j$ is equivalent to the condition \eqref{reformulation}. Now the proposition follows by the same argument as in the proof of Theorem \ref{answer question one}. 
\end{proof} \begin{remark} If a quiver $Q(A)$ for a matrix $A=(a_{ij})$ is realizable by a torsion-free category on a curve of genus $g\geq 2$, then for all pairs of indices $\{i,j\}$ at least one of the numbers $a_{ij}, a_{ji}$ is positive. In particular, the quiver $Q(A)$ is connected (compare with Proposition \ref{g=0 and g=1} for $g=1$ case). Indeed, this follows from Theorem \ref{answer question one}. \end{remark} \subsection{Some uniqueness and non-uniqueness results} \begin{prop} \label{prop_unique} Let $C,C'$ be curves, $g(C),g(C')\geq 2$. Let $E_1,\ldots,E_n$ and $E'_1,\ldots,E'_{n'}$ be two vertex-like families of vector bundles on $C$ and $C'$ respectively. Assume that $$\Phi\colon \langle E_1,\ldots,E_n\rangle\to \langle E'_1,\ldots,E'_{n'}\rangle$$ is an equivalence between the corresponding quiver-like categories. Then $n=n'$ and there exist a permutation $\s\in S_n$ and $m\in\bbZ$ such that $\Phi(E_i)\simeq E'_{\s(i)}[m]$ for all $i=1,\ldots,n$. \end{prop} \begin{proof} By Proposition \ref{useful summary for torsion-free}, the Grothendieck group $K_0(\langle E_1,\ldots,E_n\rangle)$ is freely generated by the classes $[E_1],\ldots,[E_n]$ and similarly for $K_0(\langle E'_1,\ldots,E'_{n'}\rangle)$. The equivalence $\Phi$ induces an isomorphism $K_0(\langle E_1,\ldots,E_{n}\rangle)\simeq K_0(\langle E'_1,\ldots,E'_{n'}\rangle)$, which implies that $n=n'$. For any $i$ the object $\Phi (E_i)$ is indecomposable and so by Proposition \ref{useful summary for torsion-free} we have \begin{equation} \label{eq_EFm} \Phi(E_i)\simeq F_i[m_i] \end{equation} for some vector bundle $F_i$ on $C'$ and $m_i\in\bbZ$. The equation \eqref{RR formula} implies that $$r(E_i)^2(1-g(C))=\chi(C,E_i,E_i)=\chi(C',F_i[m_i],F_i[m_i])=\chi(C',F_i,F_i)=r(F_i)^2(1-g(C')).$$ It follows that the ratio $r(F_i)/r(E_i)=:r/r'$ does not depend on $i$, where we denote $$r=\sqrt{g(C)-1},\ \ r'=\sqrt{g(C')-1}$$ (recall that $g(C),g(C')\geq 2$). By Proposition~\ref{useful summary for torsion-free}, we have $$[F_i]=\sum_jb_{ij} [E'_j]\in K_0(\langle E'_1,\ldots,E'_{n'}\rangle)$$ for some $b_{ij}\in\bbZ_{\ge 0}$. Moreover, the matrix $B=(b_{ij})$ is invertible over $\bbZ$. It follows that $$r(F_i)=\sum_j b_{ij} r(E'_j),\quad r\cdot r(E_i)=\sum_j b_{ij} r'\cdot r(E'_j).$$ Denote $s_i:=r\cdot r(E_i)$ and $s'_i:=r'\cdot r(E'_i)$. We have now $$\sum_i s_i=\sum_{ij}b_{ij}s'_j=\sum_j(s'_j\sum_i b_{ij})\ge \sum_j s'_j,$$ because $\sum_i b_{ij}\ge 1$ (since $B$ is non-degenerate and $b_{ij}\in\bbZ_{\ge 0}$ for all $i,j$). Similarly we have $\sum_j s'_j\ge \sum_i s_i$. It follows that $\sum_i b_{ij}=1$ for any $j$ and thus $B$ is a permutation matrix. Hence $[F_i]=[E'_{\s(i)}]$ in $K_0(\langle E'_1,\ldots,E'_n\rangle)$ for some $\s\in S_n$ and all $i$. It follows from Proposition \ref{useful summary for torsion-free} that $\Phi(E_i)\simeq E'_{\s(i)}[m_i]$. Now Lemma~\ref{no shifts} implies that all $m_i$ in \eqref{eq_EFm} are equal. \end{proof} \begin{lemma}\label{no shifts} Let $C$ be a curve of genus $g\geq 2$. Let $E_1,\ldots ,E_n$ be vector bundles on~$C$ such that for some integers $m_i$ the objects $\{E_1[m_1],\ldots ,E_n[m_n]\}$ form a vertex-like collection. Then $m_i=m_j$ for all $i,j$. \end{lemma} \begin{proof} It suffices to prove that $m_1=m_2$. Formula \eqref{RR formula} implies that $$\chi (E_1,E_2)+ \chi (E_2,E_1)=2(1-g)r(E_1)r(E_2)<0.$$ It follows that at least one of $\chi (E_1,E_2),\chi (E_2,E_1)$ is negative. Assume $\chi (E_1,E_2)<0$, then \begin{equation} \label{condition1} \Ext^1(E_1,E_2)\ne 0. 
\end{equation} By our assumption \begin{equation} \label{condition2}\Ext ^i(E_1[m_1],E_2[m_2])\neq 0\quad \text{implies that $i=1$}. \end{equation} Equations \eqref{condition1} and \eqref{condition2} imply that $m_1=m_2$. \end{proof} \begin{remark} Lemma \ref{no shifts} also holds for $g=0$, but it fails for $g=1$: if $\cL _1,\cL _2$ are distinct line bundles of the same degree, then for any $m$ the objects $\cL _1$ and $\cL _2[m]$ are orthogonal and hence the collection $\{\cL _1,\cL _2[m]\}$ is vertex-like. \end{remark} The following is an answer to question {\bf Q2} above. \begin{cor} \label{cor_unique} Let $Q,Q'$ be quivers. Assume that $Q$ is realizable and there is an equivalence $D^b_0(Q)\simeq D^b_0(Q')$ (hence $Q'$ is also realizable). Then $Q\simeq Q'$. \end{cor} \begin{proof} Assume that the quiver $Q$ is realized on a curve of genus $g$. We consider several cases. Case 1: $g=0$ and $Q$ is realized by a torsion-free category. Then by Proposition \ref{g=0 and g=1} we know that $$D^b_0(Q)\simeq D^b_0({\bf Q}_0)\simeq D^b(\modd \k).$$ It follows that $Q={\bf Q}_0=Q'$. Case 2: $g=1$ or $Q$ is realized by a torsion category. Then by Proposition \ref{g=0 and g=1} and Lemma \ref{first example for curves} there exists an equivalence of categories $$\Psi \colon D^b_0(Q)\stackrel{\sim}{\to} D^b_0({\bf Q}_1\sqcup\ldots \sqcup {\bf Q}_1).$$ Comparing the $K$-groups of these categories we find that the two quivers have the same number of vertices, say $n$. Let $s_1,\ldots ,s_n\in D^b_0(Q)$ (resp. $s'_1,\ldots ,s'_n\in D^b_0({\bf Q}_1\sqcup\ldots \sqcup {\bf Q}_1)$) be the collection of simple modules corresponding to vertices. Note that the objects $s'_i\in D^b_0({\bf Q}_1\sqcup\ldots \sqcup {\bf Q}_1)$ are characterized (up to a shift) by the property that $\End (s'_i)=\k$. It follows that there exists a permutation $\s \in S_n$ such that for each $i$ $$\Psi (s_i)=s'_{\s (i)}[m_i]\quad \text{for some $m _i \in \bbZ$}.$$ Moreover, $D^b_0({\bf Q}_1\sqcup\ldots \sqcup {\bf Q}_1)$ is the orthogonal sum of its subcategories $\langle s_i'\rangle$. It follows that $D^b_0(Q)$ is also the orthogonal sum of its subcategories $\langle s_i\rangle$. Therefore $Q\simeq {\bf Q}_1\sqcup\ldots \sqcup {\bf Q}_1$ and similarly $Q'\simeq {\bf Q}_1\sqcup\ldots \sqcup {\bf Q}_1$. Case 3: $g\ge 2$ and $Q$ is realized by a torsion-free category. In this case an isomorphism $Q\simeq Q'$ follows from Proposition \ref{prop_unique}. \end{proof} \begin{remark} Note that Corollary~\ref{cor_unique} does not hold in general if $Q$ is not assumed to be realizable. For example, let $Q$ be any tree and let $Q'$ be the quiver obtained from~$Q$ be reversing some arrows. Then $D^b_0(Q)=D^b(\modd \k Q)$, $D^b_0(Q')=D^b(\modd \k Q')$ and it is well-known that the categories $D^b(\modd \k Q)$ and $D^b(\modd \k Q')$ are equivalent. \end{remark}
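The criterion of Theorem \ref{answer question one} is easy to check mechanically. The following short Python sketch (an illustration only; the function names are ad hoc) computes, for a genus $g\geq 2$ and prescribed ranks and degrees $(r_1,\ldots ,r_n,d_1,\ldots ,d_n)\in \bbZ _{>0}^n\times \bbZ ^n$, the matrix $a_{ij}=r_jr_i(g-1)-r_jd_i+r_id_j+\delta _{ij}$ of \eqref{nec cond}; when all entries are nonnegative, which by \eqref{reformulation} happens exactly when $|d_i/r_i-d_j/r_j|\le g-1$ for all $i,j$, the quiver $Q(A)$ is realized by a torsion-free category on a curve of genus $g$.
\begin{verbatim}
# Realizability test from Theorem `answer question one' (illustrative sketch).
def quiver_matrix(g, r, d):
    # a_ij = r_j*r_i*(g-1) - r_j*d_i + r_i*d_j + delta_ij
    n = len(r)
    return [[r[j]*r[i]*(g-1) - r[j]*d[i] + r[i]*d[j] + (1 if i == j else 0)
             for j in range(n)] for i in range(n)]

def realizable_torsion_free(g, r, d):
    # Equivalent to |d_i/r_i - d_j/r_j| <= g-1 for all i, j.
    return all(a >= 0 for row in quiver_matrix(g, r, d) for a in row)

# Example: on a genus 2 curve, two line bundles of degrees 0 and 1 give
# A = [[2, 2], [0, 2]]: two loops at each vertex, two arrows from v_1 to v_2.
print(quiver_matrix(2, [1, 1], [0, 1]), realizable_torsion_free(2, [1, 1], [0, 1]))
\end{verbatim}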
\subsection{I. Introduction} In the last two decades molecular electronics became a well established and fast developing field \cite{1,2,3,4,5}. Presently, it provides a general platform which can be used to consider diverse nanoscale electronic and energy conversion devices. The basic building block of such devices is a single-molecule junction (SMJ). This system consists of a couple of metallic/semiconducting electrodes linked with a molecular bridge. Electron transport through SMJs is controlled by electric forces, thermal gradients and electron-phonon interactions. In addition, in SMJs operating inside dielectric solvents, transport properties may be strongly affected by the solvent response to the molecule charge states \cite{6,7,8}. Overall, electrons on the molecule may interact with a collection of thermalized phonon modes associated with the solvent nuclear environment as well as with individual modes associated with molecular vibrations. Such interactions lead to the energy exchange between traveling electrons and the environment thus giving rise to inelastic effects in the electron transport. In the weak electron-phonon coupling limit inelastic contributions may be treated as perturbations of basically elastic transport \cite{9,10,11,12,13,14,15,16}. Stronger coupling of electrons to phonon modes can result in several interesting phenomena including negative differential conductance, rectification and Franck-Condon blockade \cite{17,18,19,20,21,22}. In the present work we consider a limit of very strong electron-phonon interactions when electron transport may be described as a sequence of hops between the electrodes and the bridge sites and/or among the bridge sites subjected to local thermalization. This dynamics is usually described by successive Marcus-type electron transfer processes \cite{3,23,24,25,26,27}. Indeed, Marcus theory has been repeatedly and successfully used to study charge transport through redox molecules \cite{28,29,30,31,32,33,34,35}. Nuclear motions and reorganization are at the core of this transport mechanism. The theory may be modified to include effects of temperature gradients across a SMJ \cite{36,37} as well as a finite relaxation time of the solvent environment. Further generalization of Marcus theory accounting for finite lifetime broadening of the molecule electron levels was recently suggested \cite{38,39}. Besides charge transport, electron-phonon interactions may strongly affect heat generation and transport through SMJs \cite{9,10,20}. There is an increasing interest in studies of vibrational heat transport in atomic scale systems and in their interfaces with bulk substrates \cite{40,41,42,43,44,45}. Electron transfer induced heat transport was also suggested and analyzed \cite{36,37}. Effects of structure-transport correlations on heat transport characteristics of such systems are being studied \cite{46,47,48} as well as effects originating from specific features of coupling between the molecular linker and electrodes \cite{49,50,51,52} and the heat currents rectification \cite{53,54,55}. Correlations between structure and heat transfer in SMJs and similar systems may be accompanied by heating/cooling of the molecular bridge environment \cite{9,10,56,57,58,59,60,61}. Nevertheless, the analysis of heat transfer in SMJs is far from being completed, especially in molecular junctions dominated by Marcus-type electron transfer processes. In the present work, we theoretically analyze energy balance and conversion in such systems. 
For the molecular bridge we use the standard single-level model that describes two molecular electronic states in which the level is either occupied or unoccupied. The electrodes are treated as free electron reservoirs with respective chemical potentials and temperatures. We assume that the level may be slowly driven by an external agent (such as a gate voltage) which moves it over a certain energy range. Also, we assume that the temperature of the solvent environment of the bridge may differ from the electrode temperatures. Despite its simplicity, the adopted model captures the essential physics of energy conversion in such junctions. The paper is organized as follows. In Sec.II we review the application of Marcus rates to the evaluation of steady state currents resulting from voltage and temperature bias across the junction. We study the relationship between heat currents flowing into the electrodes and into the solvent environment of the molecular bridge and demonstrate overall energy conservation. In Sec.III we analyze the energy balance in a system where a bridge electronic level is driven by an external force. We discuss the irreversible work thus done on the system and the corresponding dissipated power and the entropy change. In Sec.IV, we describe a simple model for a Marcus junction engine and estimate its efficiency. Our conclusions are given in Sec.V. \subsection{II. Steady state currents} \subsection{A. Electron transfer rates and electronic currents} We consider a molecular junction where electrons move between electrodes through a molecular bridge (or dot) that can be occupied (state $a$) or unoccupied (state $b$). Adopting the Marcus formalism we assume that each state corresponds to a free energy surface which is assumed to be parabolic in the collective solvent coordinate $x$. Here and below we take ``solvent'' to include also the intramolecular nuclear motion which contributes to the electronic charge accumulation. We use the simplest shifted surfaces model: in terms of $x$, the energy surfaces associated with the two electronic states are assumed to take the forms of identical harmonic surfaces that are shifted relative to each other: \begin{equation} E_a (x) = \frac{1}{2} k x ^2 + \epsilon_d , \label{1} \end{equation} \begin{equation} E_b (x) = \frac{1}{2} k (x - \lambda)^2 . \label{2} \end{equation} Here, $\lambda$ represents a shift in the equilibrium value of the reaction coordinate and $\epsilon_d$ is the difference between the equilibrium energies of the occupied and unoccupied electronic states. Diverse reaction geometries may be taken into account by varying $\epsilon_d$ and the force constants \cite{62,63,64}. More sophisticated models \cite{65} make the mathematics more involved but are not expected to change the essential physics. The reorganization energy $E_r$ associated with the electron transfer process, \begin{equation} E_r = \frac{1}{2} k \lambda^2 , \label{3} \end{equation} reflects the strength of interactions between electrons on the bridge and the solvent environment. For $E_r=0$ electron transport becomes elastic. The overall kinetic process is determined by the transfer rates $ k_{a\to b}^{L,R} $ and $ k_{b \to a}^{L,R} $ that correspond to transitions at the left (L) and right (R) electrodes between the occupied ($a$) and unoccupied ($b$) molecular states (namely, $a\to b$ corresponds to electron transfer from molecule to metal and $b\to a$ denotes the opposite process).
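Note that with Eqs.\ (\ref{1})-(\ref{3}) the two surfaces cross at $x^*=(E_r-\epsilon_d)/(k\lambda)$, so that the activation energies measured from the respective minima are \begin{equation*} E_a(x^*)-\epsilon_d=\frac{(E_r-\epsilon_d)^2}{4E_r},\qquad E_b(x^*)=\frac{(E_r+\epsilon_d)^2}{4E_r}, \end{equation*} the familiar Marcus barriers for the $a\to b$ and $b\to a$ processes that involve a metal electron at the energy origin. Their energy-resolved generalizations appear in the exponents of the transfer rates, Eqs.\ (\ref{5}) and (\ref{6}) below.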
Because of the timescale separation between electron and nuclear motions, these transfer processes have to satisfy electronic energy conservation: \begin{equation} g(x,\epsilon) = E_b(x) - E_a(x) + \epsilon = 0 , \label{4} \end{equation} where $\epsilon$ is the energy of the electron in the metal. This leads to the Marcus electron transfer rates given by \cite{27}: \begin{equation} k_{a \to b}^K = \sqrt{\frac{\beta_s}{4\pi E_r}} \int_{-\infty}^\infty d \epsilon\Gamma_K(\epsilon) [1 - f_K (\beta_K, \epsilon)] \exp \left[-\frac{\beta_s}{4 E_r} (\epsilon + E_r - \epsilon_{d})^2 \right] \label{5}, \end{equation} \begin{equation} k_{b \to a}^K = \sqrt{\frac{\beta_s}{4\pi E_r}} \int_{-\infty}^\infty d \epsilon \Gamma_K(\epsilon)f_K (\beta_K, \epsilon) \exp\left[- \frac{\beta_s}{4E_r} (\epsilon_{d} + E_r - \epsilon)^2 \right], \label{6} \end{equation} where $K=\{L,R\}$ stands for the left and right electrode. In these expressions, $ \Gamma_{L,R} $ are the bare electron transfer rates between the single molecule level and the electronic continuum in the metal, $ \beta_{L,R} = (kT_{L,R})^{-1} $ and $ \beta_s = (kT_s)^{-1} $ indicate the temperatures of the electrodes and the molecule environment, $k$ is the Boltzmann constant and $ f_{L,R} $ are Fermi distribution functions for the electrodes with chemical potentials $ \mu_{L,R}. $ Eqs.\ (\ref{5}), (\ref{6}) assume that electron transfer takes place from equilibrium solvent and metal configurations, namely that thermal relaxation in the metal and solvent environments is fast relative to the metal-molecule electron exchange processes. When $T_L=T_R=T_s$, Eqs.\ (\ref{5}), (\ref{6}) are reduced to the standard Marcus-Hush-Chidsey expressions for electron-electrode transfer rates \cite{27,62}. In further analysis we assume that the molecule is symmetrically coupled to the electrodes ($\Gamma_L=\Gamma_R=\Gamma$) and, unless stated otherwise, we take $\Gamma$ as a constant independent of energy. Given these rates, the probabilities that the dot is in the states $a$ or $b$ at time $t$, $ P_a$ and $ P_b$, are determined by the kinetic equations: \begin{align} \frac{d P_a}{dt} = P_b k_{b\to a} - P_a k_{a\to b}; \qquad \frac{d P_b}{dt} = P_a k_{a \to b} - P_b k_{b\to a} \label{7} \end{align} where $ k_{a \to b} = k_{a\to b}^L + k_{a\to b}^R; \ k_{b \to a} = k_{b\to a}^L + k_{b\to a}^R. $ The steady state probabilities $P_a^0$ and $ P_b^0$ and the steady state electron current $I_{ss}$ (positive when electrons go from left to right) are given by: \begin{equation} P_a^0 = \frac{k_{b\to a}}{k_{a\to b} + k_{b\to a}}; \qquad P_b^0 = \frac{k_{a\to b}}{k_{a\to b} + k_{b\to a}}; \label{8} \end{equation} \begin{align} I_{ss} = & k^L_{b\to a}P_b^0-k^L_{a\to b}P^0_a=-(k^R_{b\to a}P_b^0-k^R_{a\to b}P^0_a ) \nonumber\\ = & \frac{k_{a\to b}^R k_{b \to a}^L - k_{b \to a}^R k_{a \to b}^L}{(k_{a \to b} + k_{b \to a}) }. \label{9} \end{align} \begin{figure}[t] \begin{center} \includegraphics[width=8cm,height=6cm]{na_1a.eps} \includegraphics[width=8cm,height=6cm]{na_3b.eps} \caption{ Left panel: Current-voltage characteristics computed using Eqs.(\ref{10})-(\ref{12}) (solid lines). The electrodes' Fermi energy in the unbiased junction is set to 0, the bias is applied symmetrically ($\mu_{L,R}=\pm|e|V/2$) relative to this origin and $T_L=T_R=T_s$. The Landauer-Buttiker limit is represented by the red line ($E_r=0$).
The difference between the results obtained using Eqs.(\ref{10})-(\ref{12}) and the Marcus limit is demonstrated by comparison of the solid black line with the dashed line plotted using the Marcus equations for the electron transfer rates at the same value of $E_r$ ($E_r=0.4 eV$). Right panel: Electron current as a function of the temperature difference symmetrically distributed between the electrodes ($\displaystyle (T_L+T_R)/2=T_s$) in an unbiased SMJ in the Marcus limit. Curves are plotted assuming that $ kT_s = 0.026 eV, \hbar\Gamma=0.01 eV$, $\epsilon_{d}=0.1 eV$ (left panel) and $\epsilon_{d}=-0.02 eV$ (right panel). } \label{rateI} \end{center}\end{figure} The coupling $\Gamma$ of the molecular bridge to electrodes affects the electron transfer rates (and, consequently, the SMJ transport properties) in two ways. First, as indicated above, it controls the transfer rates between the electrodes and the molecule. This effect is accounted for within the standard Marcus theory. Secondly, it is manifested in the lifetime broadening of molecular levels, an effect disregarded by this theory. It has been suggested in the recent work \cite{38} that the Marcus expressions for the transfer rates may be generalized to include the broadening effect. For a symmetrically coupled system the transfer rates may be approximated as follows \cite{38,39}: \begin{equation} k_{a \to b}^{ L,R}= \frac{ \Gamma}{\pi} \int_{-\infty}^\infty d \epsilon [1 - f_{L,R} (\beta_{L,R}, \epsilon)] K_{-}(\epsilon) \label{10}, \end{equation} \begin{equation} k_{b \to a}^{L,R} = \frac{ \Gamma}{\pi} \int_{-\infty}^\infty d \epsilon f_{L,R} (\beta_{L,R}, \epsilon)K_{+}(\epsilon), \label{11} \end{equation} where \begin{equation} K_{\pm}(\epsilon)=Re\bigg[\sqrt\frac{\pi\beta_s}{4E_r}\exp\left[- \frac{\beta_s}{4E_r} (\hbar\Gamma\mp i(\epsilon_{d} \pm E_r - \epsilon))^2 \right]\times\mbox {erfc}(\sqrt\frac{\beta_s}{4E_r} (\hbar\Gamma\mp i(\epsilon_{d} \pm E_r - \epsilon)))\bigg] \label{12} \end{equation} and $\mbox{erfc}(x)$ is the complementary error function. Although the derivation of Eqs.(\ref{10})-(\ref{12}) involves some fairly strong assumptions \cite{38,39}, the result is attractive for its ability to yield the Landauer cotunneling expression in the strong molecule-electrode coupling limit $\sqrt{E_rkT_s}\ll\hbar\Gamma$ and the Marcus expression in the opposite limit. This is shown in the left panel of Fig.1 where three current-voltage curves are plotted at the same value of $\Gamma$ and several values of $E_r$. The Landauer-Buttiker behavior is demonstrated for $E_r=0$. As $E_r$ increases, the behavior of the current-voltage curves becomes more similar to the Marcus behavior represented by the dashed line. In the Marcus limit one observes a well pronounced plateau in the $I-V$ profile around $V=0$, as seen in Fig.1 (dashed line). This plateau develops gradually as the electron-phonon coupling increases, and is a manifestation of a Franck-Condon blockade similar to that resulting from interactions between electrons and individual molecular vibrational modes \cite{19,20}. When the two electrodes in an unbiased SMJ ($\mu_L=\mu_R=\mu$) are kept at different temperatures, a thermally induced charge current emerges, as shown in Fig.1 (right panel). The current does not appear if $\epsilon_d=\mu=0$ for in this case the electron current is completely counterbalanced by the hole current. However, when $\epsilon_{d} \neq \mu $ the current emerges. The current changes its direction at $ T_L = T_R $.
Its magnitude strongly depends on the reorganization energy. Indeed, the thermally induced current takes on noticeable values only provided that the effects of nuclear reorganization are weak, and becomes suppressed when the interaction with the solvent environment increases. \subsection{B. Heat currents and energy conservation} The results summarized above were mostly obtained before in works that investigate the implication of Marcus kinetics for the steady state conduction properties of molecular junctions in the limit of hopping conduction. Here we focus on the energy balance associated with such processes, and the implication of Marcus kinetics for heat transfer. Each electron hopping event between the molecule and an electrode is accompanied by solvent and metal relaxation, therefore by heat production in these environments. We denote these heats $Q_s$ and $Q_e$ for the solvent and the electrode, respectively. Specifically, $Q_{s,a\to b}^{L,R}$ denotes the heat change in the solvent when an electron hops from the molecule into the left (L) or right (R) electrode, and similarly $Q_{s,b\to a}^{L,R}$ is the heat change in the solvent in the opposite process of an electron moving from the electrode to the molecule. For symmetrically coupled electrodes ($\Gamma_L=\Gamma_R=\Gamma$) considered in the Marcus limit these terms have the form: \begin{align} Q_{s,a\to b}^{K} = & \frac{\Gamma}{k_{a\to b}^{K}} \sqrt{\frac{\beta_s}{4 \pi E_r}} \int d \epsilon \big[1 - f_{K} (\beta_{K},\epsilon) \big] (\epsilon_{d}-\epsilon ) \nonumber\\ & \times \exp \left[- \frac{\beta_s}{4E_r} (E_{r} - \epsilon_{d}+ \epsilon)^2 \right]. \label{13} \end{align} and \begin{align} Q_{s,b \to a}^{K} = & \frac{\Gamma}{k_{b\to a}^{K}} \sqrt{\frac{\beta_s}{4 \pi E_r}} \int d \epsilon f_{K} (\beta_{K},\epsilon) ( \epsilon - \epsilon_{d}) \nonumber \\ & \times \exp \left[- \frac{\beta_s}{4E_r} ( \epsilon_{d} + E_r - \epsilon)^2 \right]. \label{14} \end{align} where $K=\{L,R\}$. Similarly, $Q_{e,a\to b}^{K}$ and $Q_{e,b\to a}^{K}$ are the heats generated in electrode $K$ when an electron leaves (enters) the molecule into (from) that electrode: \begin{align} Q_{e,a\to b}^{K} = & \frac{\Gamma}{k_{a\to b}^{K}} \sqrt{\frac{\beta_s}{4 \pi E_r}} \int d \epsilon \big[1 - f_{K} (\beta_{K},\epsilon) \big] (\epsilon -\mu_{K}) \nonumber\\ & \times \exp \left[- \frac{\beta_s}{4 E_r} (E_{r} - \epsilon_{d} + \epsilon)^2 \right]. \label{15} \end{align} and \begin{align} Q_{e,b \to a}^{K} = & \frac{\Gamma}{k_{b\to a}^{K}} \sqrt{\frac{\beta_s}{4 \pi E_r}} \int d \epsilon f_{K} (\beta_{K},\epsilon) (\mu_{K}- \epsilon ) \nonumber\\ & \times \exp \left[- \frac{\beta_s}{4E_r} ( \epsilon_{d} + E_r - \epsilon)^2 \right]. \label{16} \end{align} Eqs.(\ref{15}) and (\ref{16}) are analogs of the corresponding results reported in Ref.\cite{37}. Eqs.(\ref{13})-(\ref{16}) are expressions for the heat changes per specific hopping event. The corresponding heat change rates (heat per unit time) in the solvent and the electrodes are obtained from: \begin{equation} J_s\equiv\dot{Q}_s=P^0_{a}(k_{a\to b}^{L}Q_{s,a\to b}^{L}+k_{a\to b}^{R}Q_{s,a\to b}^{R})+P^0_{b}(k_{b\to a}^{L}Q_{s,b\to a}^{L}+k_{b\to a}^{R}Q_{s,b\to a}^{R}) \label{17} \end{equation} and: \begin{equation} J^{K}_{e}\equiv\dot{Q}_{e}^{K}=k_{a\to b}^{K}P^0_{a}Q_{e,a\to b}^{K} +k_{b\to a}^{K}P^0_{b}Q_{e,b\to a}^{K}.
\label{18} \end{equation} Using Eqs.(\ref{5}), (\ref{6}) and Eqs.(\ref{13})-(\ref{16}), it can be easily established that Eqs.(\ref{17}), (\ref{18}) imply: \begin{equation} J^{L}_{e}+J^{R}_{e}+J_{s}=(\mu_L-\mu_R)I_{ss}, \label{19} \end{equation} showing the balance between heat change rates in the solvent and the electrodes and the heat generated by the current flow across the voltage bias. In the absence of solvent reorganization $J_s=0$ and Eq.(\ref{19}) is reduced to the standard junction energy balance relation $J^{L}_e+J^{R}_e=(\mu_L-\mu_R)I_{ss}$. From Eqs.(\ref{13})-(\ref{18}) we obtain after some algebra (see Appendix A): \begin{equation} J_{e}^{L}=(\mu_{L}-\epsilon_{d})I_{ss}-(P_{a}^{0}k_{a\to b}^{L}+P_{b}^{0}k_{b\to a}^{L})E_{r}-Y_{L} \label{20} \end{equation} \begin{equation} J_{e}^{R}=(\epsilon_{d}-\mu_{R})I_{ss}-(P_{a}^{0}k_{a\to b}^{R}+P_{b}^{0}k_{b\to a}^{R})E_{r}-Y_{R} \label{21} \end{equation} \begin{equation} J_{s}=2\frac{k_{a\to b}k_{b\to a}}{k_{a\to b}+k_{b\to a}}E_{r}+Y_{L}+Y_{R} \label{22} \end{equation} where: \begin{align} Y_{K} =& \Gamma\sqrt{\frac{E_{r}}{\pi\beta_s}} \int d \epsilon \frac{\partial f_{K}}{\partial\epsilon} \nonumber\\ & \times \bigg(P_{b}^{0}\exp \left[- \frac{\beta_s}{4E_r} ( \epsilon_{d} + E_r - \epsilon)^2 \right]+P_{a}^{0}\exp \left[- \frac{\beta_s}{4E_r} (E_{r}- \epsilon_{d} +\epsilon)^2 \right]\bigg) \label{23} \end{align} \begin{figure}[t] \begin{center} \includegraphics[width=8cm,height=6cm]{tri_4ca.eps} \includegraphics[width=8cm,height=6cm]{tri_2da.eps} \caption{ Heat currents $J_{s}$ (left panel), $J_{e}^{L}$ (solid lines, right panel) and $J_{e}^{R}$ (dashed lines, right panel) shown as functions of the bias voltage $V$ for several values of the reorganization energy. Curves are plotted omitting the effect of molecular level broadening and assuming $kT_s=kT_L=kT_R=0.026 eV$, $\hbar\Gamma=0.01 eV$, $\epsilon_{d}=0.1 eV$. The inset in the left panel focuses on a segment of the main plot that emphasizes the local cooling of the solvent in the corresponding voltage range. } \label{rateI} \end{center}\end{figure} The analysis presented in this section remains valid regardless of the specific form of the expressions for the relevant heat flows. These may be defined within Marcus theory by Eqs.(\ref{13})-(\ref{16}) or by the generalized expressions derived using the approximation of Ref.\cite{38}: \begin{equation} Q_{s,a\to b}^{L,R} =\frac{1}{\pi} \frac{\Gamma}{k_{a\to b}^{L,R}} \int d \epsilon \big[1 - f_{L,R} (\beta_{L,R},\epsilon) \big] (\epsilon_d-\epsilon )K_{-}(\epsilon) \label{24} \end{equation} \begin{equation} Q_{s,b \to a}^{L,R} = \frac{1}{\pi} \frac{\Gamma}{k_{b\to a}^{L,R}} \int d \epsilon f_{L,R} (\beta_{L,R},\epsilon) ( \epsilon - \epsilon_d) K_{+}(\epsilon) \label{25} \end{equation} \begin{equation} Q_{e,a\to b}^{L,R} = \frac{1}{\pi} \frac{\Gamma}{k_{a\to b}^{L,R}}\int d \epsilon \big[1 - f_{L,R} (\beta_{L,R},\epsilon) \big] (\epsilon -\mu_{L,R}) K_{-}(\epsilon) \label{26} \end{equation} \begin{equation} Q_{e,b \to a}^{L,R} = \frac{1}{\pi} \frac{\Gamma}{k_{b\to a}^{L,R}} \int d \epsilon f_{L,R} (\beta_{L,R},\epsilon) (\mu_{L,R}- \epsilon ) K_{+}(\epsilon) \label{27} \end{equation} where $k_{a\to b}^{L,R}$, $k_{b\to a}^{L,R}$ and $K_{\pm}(\epsilon)$ are given by Eqs.(\ref{10})-(\ref{12}). It should be noted that the procedure leading to Eq.(\ref{19}) that demonstrates the energy conservation remains the same when these expressions are used.
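It is instructive to verify the balance expressed by Eq.\ (\ref{19}) numerically. The following short Python sketch (an illustrative check only; the parameter values are arbitrary and the overall scale of $\Gamma$ is immaterial) evaluates the Marcus rates, Eqs.\ (\ref{5}), (\ref{6}), and the heats per hop, Eqs.\ (\ref{13})-(\ref{16}), on a common energy grid and confirms that $J^{L}_{e}+J^{R}_{e}+J_{s}$ coincides with $(\mu_L-\mu_R)I_{ss}$.
\begin{verbatim}
import numpy as np

# Illustrative parameters (eV); the values are assumptions, not taken from the figures.
kT_s = kT_L = kT_R = 0.026        # solvent and electrode temperatures
Gamma = 0.01                      # symmetric coupling; its scale cancels in Eq. (19)
E_r, eps_d = 0.2, 0.1             # reorganization energy and molecular level
mu_L, mu_R = 0.05, -0.05          # symmetrically applied bias

beta_s = 1.0 / kT_s
eps = np.linspace(-3.0, 3.0, 120001)   # energy grid for the quadratures
de = eps[1] - eps[0]

def fermi(e, mu, kT):
    return 1.0 / (1.0 + np.exp((e - mu) / kT))

pref = np.sqrt(beta_s / (4.0 * np.pi * E_r))
g_ab = pref * np.exp(-beta_s * (eps + E_r - eps_d)**2 / (4.0 * E_r))  # Gaussian of Eq. (5)
g_ba = pref * np.exp(-beta_s * (eps_d + E_r - eps)**2 / (4.0 * E_r))  # Gaussian of Eq. (6)

k, Qs, Qe = {}, {}, {}
for K, mu, kT in (("L", mu_L, kT_L), ("R", mu_R, kT_R)):
    f = fermi(eps, mu, kT)
    w_ab = Gamma * (1.0 - f) * g_ab            # integrand of k_{a->b}^K, Eq. (5)
    w_ba = Gamma * f * g_ba                    # integrand of k_{b->a}^K, Eq. (6)
    k[K] = (np.sum(w_ab) * de, np.sum(w_ba) * de)
    Qs[K] = (np.sum(w_ab * (eps_d - eps)) * de / k[K][0],   # Eq. (13)
             np.sum(w_ba * (eps - eps_d)) * de / k[K][1])   # Eq. (14)
    Qe[K] = (np.sum(w_ab * (eps - mu)) * de / k[K][0],      # Eq. (15)
             np.sum(w_ba * (mu - eps)) * de / k[K][1])      # Eq. (16)

k_ab = k["L"][0] + k["R"][0]
k_ba = k["L"][1] + k["R"][1]
Pa0, Pb0 = k_ba / (k_ab + k_ba), k_ab / (k_ab + k_ba)                  # Eq. (8)
Iss = (k["R"][0] * k["L"][1] - k["R"][1] * k["L"][0]) / (k_ab + k_ba)  # Eq. (9)

Js = sum(Pa0 * k[K][0] * Qs[K][0] + Pb0 * k[K][1] * Qs[K][1] for K in "LR")  # Eq. (17)
Je = {K: Pa0 * k[K][0] * Qe[K][0] + Pb0 * k[K][1] * Qe[K][1] for K in "LR"}  # Eq. (18)

print(Je["L"] + Je["R"] + Js, (mu_L - mu_R) * Iss)   # both sides of Eq. (19)
\end{verbatim}
Because the heats per hop, Eqs.\ (\ref{13})-(\ref{16}), are weighted averages taken with the same Gaussian weights that define the rates, Eqs.\ (\ref{5}), (\ref{6}), the two printed numbers agree to within rounding for any choice of parameters.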
Results based on Eqs.(\ref{20})-(\ref{24}) are displayed in Figure 2. The left panel shows the heat deposited in the solvent environment plotted against the bias voltage for different values of the reorganization energy. The right panel shows similar results for the left and right electrodes. The following observations can be made: (a). Reflecting the behavior of the electronic current, energy exchange processes are very weak at low bias due to the Franck-Condon blockade that hinders electron transport. Noticeable heat currents appear when $|e|V$ exceeds the reorganization energy $E_r$ thus lifting the blockade. (b). The heat deposited into the electrodes shows an asymmetry between positive and negative biases (or equivalently between left and right electrodes). This asymmetry reflects the different positioning of the energy of the transferred electron relative to the left and right Fermi energies \cite{65}, and was observed experimentally \cite{66}. (c). The heat deposited into the solvent environment (left panel) is symmetric with respect to bias inversion because it reflects energy balance relative to both electrodes. (d). Note that the heat exchanged with the solvent (nuclear) environment can become negative, namely, heat may be pulled out of this environment at some range of bias and reorganization energy. In the present case of a symmetrically coupled SMJ with a symmetrically distributed bias voltage this happens at $|eV|\approx 2E_r$, namely when the driving force originating from the bias is nearly counterbalanced by forces originating from electron-phonon interactions. This cooling is reminiscent of similar effects discussed in the low electron-phonon coupling regime \cite{59,60,61}. \subsection{III. Driven junction} Next we consider charge and energy currents in driven biased junctions, where driving is modeled by an externally controlled time dependent parameter in the system Hamiltonian. In the present study we limit our consideration to time dependence of the single electron ``level'' $\epsilon_d$ of the molecular bridge that may in principle be achieved by varying the gate potential. Similar studies in the absence of electron-phonon interactions, focusing on a consistent quantum thermodynamic description of such systems, were recently published \cite{67,68,69,70,71,72,73,74,75,76,77,78,79,80,81}. The model considered below includes strong coupling to the phonon environment at the cost of treating this coupling semiclassically and assuming weak coupling between molecule and electrodes. This model is similar to that used to analyze cyclic voltammetry observations when extended to consider two metal interfaces \cite{82}. In further analysis we assume that the electron level $ \epsilon_d$ is slowly varying and construct an expansion of the solution in powers of $\dot\epsilon_d$ \cite{83}. To this end we start by separating the time dependent populations into their steady state components (which implicitly depend on time through $\epsilon_d(t)$) and corrections defined by \cite{29}: \begin{equation} P_a(t) = P_a^0 (\epsilon_d) - G (\epsilon_d, t); \qquad P_b(t) = P_b^0 (\epsilon_d) + G(\epsilon_d, t). \label{28} \end{equation} where the steady state populations $P^0_{a,b}$ satisfy $P_bk_{b\to a}-P_ak_{a\to b}=0$; $P_a+P_b=1$ and are given by Eq.(\ref{8}). Then: \begin{equation} \frac{dP_a}{dt} = \dot{\epsilon_d} \frac{\partial P_a^0}{\partial \epsilon_d} - \frac{d G}{d t}; \qquad \frac{d P_b}{dt} = \dot{\epsilon_d} \frac{\partial P_b^0}{\partial \epsilon_d} + \frac{d G}{d t}.
\label{29} \end{equation} From Eqs.(\ref{28}) it follows that \begin{equation} \frac{dP_a}{dt} = -\frac{dP_b}{dt}=G(k_{a\to b}+k_{b\to a}). \label{30} \end{equation} Comparing (\ref{29}) and (\ref{30}) we obtain: \begin{equation} \frac{dG}{dt} = \dot{\epsilon_d} \frac{\partial P_a^0}{\partial \epsilon_d} - G (k_{a \to b} + k_{b \to a}). \label{31} \end{equation} The electronic currents can be written in terms of $G$ in the form: \begin{align} I_L=k^L_{b\to a} P_b-k^L_{a\to b} P_a=I_{ss}+I_L^{excess}; \qquad I_L^{excess}=(k^L_{b\to a}+k^L_{a\to b})G \nonumber\\ I_R=k^R_{b\to a} P_b-k^R_{a\to b} P_a=I_{ss}+I_R^{excess}; \qquad I_R^{excess}=(k^R_{b\to a}+k^R_{a\to b})G \label{32} \end{align} expressing the fact that the left and right excess particle (electron) current due to driving are generally not the same. Eqs.(\ref{32}) together with analogs of (\ref{17}) and (\ref{18}) in which $P^0_{a,b}$ are replaced by $P_{a,b}$ can be used to obtain the excess heat currents caused by driving \begin{equation} J^{excess}_{s}=\dot{Q}^{excess}_{s}=G(k^L_{b\to a}Q^L_{s,b\to a}-k^L_{a\to b}Q^L_{s,a\to b}+k^R_{b\to a}Q^R_{s,b\to a}-k^R_{a\to b}Q^R_{s,a\to b}) \label{33} \end{equation} \begin{equation} J^{K,excess}_{e}=\dot{Q}^{K,excess}_{e}=G(k^K_{b\to a}Q^K_{e,b\to a}-k^K_{a\to b}Q^K_{e,a\to b}) \label{34} \end{equation} Using these expressions with Eqs.(\ref{13})-(\ref{16}) for the heat currents and Eqs.(\ref{5}),(\ref{6}) (or (\ref{10}), (\ref{11})) we find: \begin{equation} J^{excess}_{tot}\equiv J^{excess}_{s}+J^{L,excess}_{e}+J^{R,excess}_{e}=G[(\mu_L-\epsilon_d)(k^L_{a\to b}+k^L_{b\to a})+(\mu_R-\epsilon_d)(k^R_{a\to b}+k^R_{b\to a})] \label{35} \end{equation} Eqs.(\ref{31})-(\ref{35}) are exact relations. In particular, Eq.(\ref{31}) can be used as a basis for expansions in powers of $\dot\epsilon_d$. We start by writing $G$ as such a power series: $G=G^{(1)}+G^{(2)}+...$ with $G^{(n)}$ representing order $n$ in $\dot\epsilon_d$ and use this expansion in Eq.(\ref{31}) while further assuming that $G$ depends on time only through its dependence on $\epsilon_d$ implying that $\displaystyle dG_{n}/dt$ is of order $n+1$. We note that our results are consistent with this assumption. \subsection{A The quasistatic limit: First order corrections}. To first order in$\dot\epsilon_d$ the left hand side of Eq.(\ref{31}) vanishes, leading to \begin{equation} G^{(1)} = \dot{\epsilon_d}\frac{\partial P_a^0}{\partial \epsilon_d} \frac{1}{(k_{a \to b} + k_{b \to a})}. \label{36} \end{equation} where $k_{a\to b}$ and $k_{b\to a}$ depend on time through their dependence on $\epsilon_d$. Note that Eqs.(\ref{30}) and (\ref{36}) imply that $dP^{(1)}_a/dt=-dP^{(1)}_b/dt=\dot\epsilon_d\partial P^0_a/\partial\epsilon_d$, namely this order of the calculation corresponds to the quasistatic limit where all dynamics is derived from the time dependence of $\epsilon_d$. At the same time it should be emphasized that this limit is not a reflection of the instantaneous steady state, as is evident from Eqs.(\ref{32}). Consider first the electronic current. 
Using Eqs.(\ref{32}) and (\ref{36}) the first order correction to the electron exchange rates with the left and right electrodes is obtained in the form: \begin{equation} I^{(1)}_K=(k^K_{b\to a}+k^K_{a\to b})G^{(1)}=\dot\epsilon_d\frac{\partial P^0_a}{\partial\epsilon_d}\nu_K \label{37} \end{equation} with \begin{equation} \nu_K=\frac{k^K_{a\to b}+k^K_{b\to a}}{k_{a\to b}+k_{b\to a}} \label{38} \end{equation} namely, a product of the (first order) change in the electronic population on the molecule $dP^{(1)}_a/dt$ and the fraction $\nu_K$ of this change associated with the electrode $K$. Next consider the heat currents. Using Eqs.(\ref{36}) for $G$ in Eq.(\ref{35}) leads to: \begin{equation} J_{tot}^{(1)}=-\epsilon_d\dot\epsilon_d\frac{\partial P_a^0}{\partial\epsilon_d}+\dot\epsilon_d\frac{\partial P_a^0}{\partial\epsilon_d}\Bigg(\mu_L\frac{k_{a\to b}^{L}+k_{b\to a}^{L}}{k_{a\to b}+k_{b\to a}}+ \mu_R\frac{k_{a\to b}^{R}+k_{b\to a}^{R}}{k_{a\to b}+k_{b\to a}}\Bigg). \label{39} \end{equation} To better elucidate the physical meaning of this result we rearrange the first term on the right according to $-\displaystyle\epsilon_d\dot\epsilon_d\partial P^0_a/\partial\epsilon_d=\dot\epsilon_d P^0_a-d(\epsilon_d P^0_a)/dt$ and use Eq.(\ref{37}) to cast Eq.(\ref{39}) in the form: \begin{equation} \frac{d(\epsilon_d P^0_a)}{dt}\equiv \dot{E}^{(1)}_M =\dot\epsilon_d P^0_a-J^{(1)}_{tot}+\mu_L\dot{n}_L+\mu_R\dot{n}_R \label{40} \end{equation} This equation is a statement of the first law of thermodynamics, where $\dot{E}^{(1)}_M$ represents to order $1$, the rate of change of energy in the molecule and the terms on the right stand for the work per unit time ($\dot\epsilon_d P^0_a$), rate of heat developing in the environment ($-J^{(1)}_{tot}\equiv -(J^{L(1)}_e+J^{R(1)}_e+J^{(1)}_s))$ and rate of chemical work ($\mu_L I^{(1)}_L+\mu_R I^{(1)}_R$) to the same order.. All terms included in Eq.(\ref{40}) are of the form $\dot\epsilon_dr(\epsilon_d)$ where $r$ is an arbitrary function and are therefore the same except of sign when $\epsilon_d$ goes up or down, as this should be within the quasistatic regime. \subsection{B. Beyond the quasistatic regime: Second order corrections} Using the expansion for $G$ in Eq.(\ref{31}) and keeping only second order terms leads to: \begin{equation} G^{(2)}=-\frac{1}{k_{a\to b}+k_{b\to a}}\frac{dG^{(1)}}{dt} \label{41} \end{equation} The second order correction to the electron current is (K=\{L,R\}): which, using Eq.(\ref{36}) gives: \begin{equation} G^{(2)}=-\frac{\dot\epsilon_d^2}{(k_{a\to b}+k_{b\to a})}\frac{\partial}{\partial\epsilon_d} \left[\frac{\partial P_a^0}{\partial\epsilon_d}\frac{1}{(k_{a \to b} + k_{b \to a}}\right] \label{42} \end{equation} The second order correction to the electron current is (K=\{L,R\}): \begin{equation} I^{(2)}_K=(k^K_{a\to b}+k^K_{b\to a})G^{(2)}=-\dot\epsilon_d^2\frac{\partial}{\partial\epsilon_d}\left[\frac{\partial P^0_a}{\partial\epsilon_d}\frac{1}{(k_{a\to b}+k_{b\to a})}\right]\nu_K. \label{43} \end{equation} The second order excess heat is obtained from Eq.(\ref{35}) by replacing $G$ with $G^{(2)}$. The sum of second order corrections to the heat currents then takes the form: \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,height=6cm]{tnita4.eps} \caption{ Friction coefficient characterizing dissipation in the system caused by driving of the energy level in a symmetrically coupled junction with a symmetrically applied bias as a function of $\epsilon_d$. 
Main figure: Dashed lines represent friction in the absence of molecule-solvent coupling ($E_r$=0); full lines are plotted at $E_r=0.05 eV$ for three values of the bias voltage (indicated by different colors). Inset: Friction at zero bias for the indicated two values of reorganization energy. In all lines, the friction is normalized by its value at zero bias and zero reorganization energy. Other parameters are: $kT_L= kT_R=kT_s = 0.026 eV,$, $\hbar\Gamma =0.01 eV$. } \label{rateI} \end{center}\end{figure} The sum of second order corrections to the heat currents then takes the form: \begin{equation} J^{(2)}_{tot}\equiv J_{e}^{L(2)}+J_{e}^{R(2)}+J_{s}^{(2)}=-\epsilon_d G^{(2)}(k_{a\to b}+k_{b\to a})+G^{(2)}\Big(\mu_L(k_{a\to b}^{L}+k_{b\to a}^{L})+ \mu_R(k_{a\to b}^{R}+k_{b\to a}^{R})\Big). \label{44} \end{equation} Using Eqs.(\ref{42}), (\ref{44}) we can present the work-energy balance equation at this order as follows: \begin{equation} \dot{E}^{(2)}_M=\dot{W}^{(2)}-J^{(2)}_{tot}+(\mu_L I^{(2)}_L+\mu_R I^{(2)}_R) \label{45} \end{equation} where \begin{equation} \dot E^{(2)}_M=-\frac{d}{dt}\big[\epsilon_d G^{(1)}\big]=-\dot\epsilon_d^2\Bigg[\frac{\partial P^0_a}{\partial\epsilon_d}\frac{1}{k_{a\to b}+k_{b\to a}}+\epsilon_d\frac{d}{d\epsilon_d}\Bigg(\frac{\partial P^0_a}{\partial\epsilon_d}\frac{1}{k_{a\to b}+k_{b\to a}}\Bigg)\Bigg] \label{46} \end{equation} is the second order change rate in the total system energy expressed as the time derivative of the first order contribution to this energy (product of $\epsilon_d$ and the first order correction to the population $G^{(1)}$), and \begin{equation} \dot{W}^{(2)}=-\dot\epsilon_d G^{(1)}\equiv-\dot\epsilon_d^{2}\frac{\partial P_{a}^0}{\partial\epsilon_d}\frac{1}{k_{a\to b}+k_{b\to a}} \label{47} \end{equation} is the second order excess work per unit time (power) which corresponds to the lowest order {\it irreversible} work expressing dissipation caused by driving the level. The last term on the right hand side of (\ref{45}) represents the second order contribution to the rate of chemical work, thus Eq.(\ref{45}) is an expression for the first law of thermodynamics at the second order of our expansion. Following Refs\cite{67,84}, the coefficient in front of $\dot\epsilon_d^2$ in Eq.(\ref{47}) \begin{equation} \gamma=-\frac{\partial P^0_a}{\partial\epsilon_d}\frac{1}{k_{a\to b}+k_{b\to a}} \label{48} \end{equation} may be identified with the friction coefficient. Similar interpretation was suggested in Refs.\cite{67,84} for different models for SMJs. Dependencies of $\gamma$ on $\epsilon_d$ are shown in Fig.3. In an unbiased junction the friction coefficient reaches its maximum at $\epsilon_d=\mu=0$ and falls down approaching zero as $\epsilon_d$ moves away from this position. In this case $\gamma$ appears to increase with the increasing voltage. This results from the fact that at low bias the Franck-Condon blockade discussed above makes molecule-electrodes coupling small, and friction increases upon removing this blockade at higher bias voltage. Also, at higher voltage the peak splits -two peaks appear due to electron-electrodes exchange near the two Fermi energies characterizing the biased junction. Note that coupling to the solvent shifts the positions of these peaks, in correspondence with the Eqs.(\ref{5}), (\ref{6}) for transfer rates. \subsection{C. 
Evolution of the system (dot) entropy} Define the system entropy by the Gibbs formula for our binary system \begin{equation} S=-k(P_a\ln P_a+(1-P_a)\ln (1-P_a)) \label{49} \end{equation} Using Eqs.(\ref{28}) we find \begin{equation} S=-k((P_a^0-G)\ln( P_a^0-G)+(1-P_a^0+G)\ln (1-P_a^0+G)) \label{50} \end{equation} which can be used to find again an expansion in powers of $\dot\epsilon_d$: $S=S^{(0)}+S^{(1)}+...$. In what follows we limit ourselves to the case of an unbiased junction in the wide band limit for which $P^0_a/P^0_b=\displaystyle\exp(-\beta\epsilon_d)$. In the absence of driving \begin{equation} S^{(0)}=-k(P_a^{(0)}\ln P_a^{(0)}+(1-P_a^{(0)})\ln (1-P_a^{(0)})) \label{51} \end{equation} and (assuming that $T_L=T_R=T_s\equiv T$) \begin{equation} S^{(1)}=-k\beta\epsilon_dG^{(1)}. \label{52} \end{equation} The first and second order variations in the dot's entropy due to driving are obtained as (recall that the sign of $J_{tot}$ was chosen so that the heat current into the environment is positive): \begin{equation} \dot{S}^{(1)}=\dot\epsilon_d\frac{\partial S^{(0)}}{\partial\epsilon_d}=\frac{1}{T}\epsilon_d\dot\epsilon_d\frac{\partial P^{(0)}_a}{\partial\epsilon_d}=-\frac{J^{(1)}_{tot}}{T} \label{53} \end{equation} and \begin{equation} \dot{S}^{(2)}=\dot\epsilon_d\frac{\partial S^{(1)}}{\partial\epsilon_d}=-k\beta\dot\epsilon_dG^{(1)}-k\beta\epsilon_d\dot\epsilon_d\frac{\partial G^{(1)}}{\partial\epsilon_d}=\frac{\dot{W}^{(2)}}{T}-\frac{J^{(2)}}{T}. \label{54} \end{equation} Eq.(\ref{54}) may be rewritten as: \begin{equation} \dot{S}^{(2)}+\frac{J^{(2)}}{T}=\frac{\dot{W}^{(2)}}{T}. \label{55} \end{equation} The left side of Eq.(\ref{55}) is the sum of the rate of total entropy change in the system (dot/molecule) $\dot{S}^{(2)}$ and the entropy flux into the electrodes and solvent environment. Together these terms give the total entropy production due to the irreversible nature of the process at this order. This result is identical to that obtained in fully quantum mechanical treatments of similar processes evaluated in the absence of coupling to solvent \cite{67,78,80}, except of a sign difference in the heat definition. Here, the heat which is going {\it out} of the system is defined as positive. \subsection{IV. Marcus junction engine} In this Section, we extend the above analysis to discuss a simple model that simulates an atomic scale engine. This can be achieved by imposing asymmetry on the coupling of the molecular bridge to the electrodes that enables to convert the motion of $\epsilon_d$ to electron current between the electrodes. A simple choice is: \begin{equation} \Gamma_{L,R}(\epsilon)=\Gamma\frac{\delta^2}{(\epsilon \pm \epsilon_0)^2+\delta^2}. \label{56} \end{equation} where, for definiteness, we assign the ($+$) sign to the left electrode. This represents a situation where the moving level is coupled to wide-band electrodes via single level gateway sites with energies $\pm\epsilon_0$ attached to the left/right electrode. The electron transfer rates are calculated from Eqs.(\ref{5}), (\ref{6}). In further calculations we assume that $\epsilon_d$ varies according to \begin{equation} \epsilon_d(t)= E_0-E_1\cos(2\pi t/\tau) \label{57} \end{equation} It is intuitively obvious that fast enough driving (small $\tau$) with a choice of origin $E_0$ and amplitude $E_1$ that encompass the interval ($-\epsilon_0,\epsilon_0$) will produce current from the left to the right electrode which may be appreciable if $\epsilon_0$ is sufficiently larger than $T_L=T_R=T_s\equiv T$. 
This current is given by the average over a period: \begin{equation} {<I>}_{\tau}=\frac{1}{\tau}\int_{0}^{\tau}dtI_L(t)=\frac{1}{\tau}\int_{0}^{\tau}dtI_R(t). \label{58} \end{equation} where $I_K(t)$ are given by Eq.(\ref{32}). Further analytical progress can be made by using the expansion in powers of $\dot\epsilon_d$. However, using this expansion implies that $\delta$ in Eq.(\ref{56}) is large enough for the inequality $kT\Gamma(\epsilon)\gg\dot\epsilon_d$ to be satisfied for all $\epsilon$. The lowest non-vanishing contribution to Eq.(\ref{58}) is then: \begin{equation} {<I>}^{(2)}_{\tau}=\frac{1}{\tau}\int_{0}^{\tau}dtI^{(2)}_L(t)=\frac{1}{\tau}\int_{0}^{\tau}dtI^{(2)}_R(t). \label{59} \end{equation} where the second order contributions to the currents are given by Eq.(\ref{43}). Note that this is the excess current produced by driving which persists also in the absence of imposed bias. \begin{figure}[t] \begin{center} \includegraphics[width=8.5cm,height=6cm]{init_4g.eps} \caption{ The averaged over the period thermodynamic efficiency $\eta$ (blue line) and power $\Pi$ (red line)produced in the junction by periodically driving the bridge level. Curves are plotted at $kT=0.026 eV$, $\hbar\Gamma=0.01 eV$, $E_0=0$, $E_1=0.2 eV$, $E_r=0.05 eV$, $\tau=10 ps$. } \label{rateI} \end{center}\end{figure} When a voltage bias is imposed so as to drive a current in the opposite direction to ${<I>}^{(2)}_{\tau}$, the total current \begin{equation} {<I>}_{\tau}(V)=I_{ss}(V)+{<I>}^{(2)}_{\tau}(V) \label{60} \end{equation} can be used to define the power produced by the engine: \begin{equation} \Pi(V)=V(I_{ss}(V)+{<I>}^{(2)}_{\tau}(V)). \label{61} \end{equation} The device efficiency is defined as the ratio: \begin{equation} \eta(V)=\frac{\Pi(V)}{\displaystyle\frac{1}{\tau}\int_{0}^{\tau}dt \dot{W}^{(2)}(V,t)} \label{62} \end{equation} Figure 4. shows the voltage dependence of these engine characteristics. Obviously both vanish in the absence of load ($V=0$) as well as at the stopping voltage when the current vanishes, and go through their maxima at different 'optimal' voltages (which in turns depend on the choices of $E_0$ and $E_1$). Note that because of the intrinsic friction in this model, the efficiency vanishes rather than maximizes at the stopping voltage point. \subsection {V. Conclusions} In the present work we have analyzed energy balance in single-molecule junctions characterized by strong electron-phonon interactions, modeled by a single level molecule (dot) connecting free electron metal electrodes, where charge transfer kinetics is described by Marcus electron transfer theory. The standard steady state transport theory was extended to include also slow driving of the molecular level that may be achieved by employing a time dependent gate potential. A consistent description of the energetics of this process was developed leading to the following observations: (a) Accounting for the total energy and its heat, work and chemical components shows that energy conservation (first law of thermodynamics) is satisfied by this model at all examined order of driving. (b) Heat is obviously produced by moving charge across potential bias. In addition, when charge transfer involves solvent reorganization, the current flowing in a biased junction can bring about heat transfer between the metal and the solvent environments, and may even produce solvent cooling in some voltage range. 
(c) In the presence of solvent reorganization the friction experienced by the driven coordinate $\epsilon_d$ which expresses energy loss (heat production) due to the molecule-metal electron exchange is strongly affected by the presence of solvent reorganization. (d) Beyond the reversible (driving at vanishingly small rate) limit, entropy is produced and is determined, at least to the second order in the driving speed, by the excess work associated with the friction affected by the molecule coupling to the electrodes and solvent environments. We have also used this model to study a molecular junction with a periodically modulated dot energy. We have considered a model engine in which such periodic driving with a properly chosen energy depended molecule-electrode coupling can move charge against a voltage bias and calculated the power and efficiency of such a device. In the parameter range consistent with our mathematical modeling useful work can be produced only in the irreversible regime, and we could determine the points of optimal performance of such engine with respect to power and efficiency. While our calculations are based on Marcus electron transfer kinetics in which level broadening due to molecule-metal coupling is disregarded, we have shown that extension to the more general kinetics suggested in Ref.\cite{38}, which (approximately) bridges between Marcus sequential hopping and Landauer cotunneling limit is possible. Energy conversion on the nanoscale continues to be focus of intense interest. The present calculation provides a first simple step in evaluating such phenomena in a system involving electron transport, electron-solvent interaction and mechanical driving. \subsection{Acknowledgments} The present work was supported by the U.S National Science Foundation (DMR-PREM 1523463). Also, the research of AN is supported by the Israel-U.S Binational Science Foundation, the German Research Foundation (DFG TH 820/11-1), the U.S National Science foundation (Grant No.CHE1665291) and the University of Pennsylvania. NZ acknowledges support of the Sackler Visiting Professor Chair at Tel Aviv University, Israel. \subsection{Appendix A} Here, we derive Eq.(\ref{20}) for $\dot{Q}^L_e$. Starting from Eq.(\ref{15}) we present $Q^L_{e,a\to b}$ in the form: \begin{equation} Q^L_{e,a\to b}=\epsilon_d-E_r-\mu_L+\frac{\Gamma}{k_{a\to b}^{L}} \sqrt{\frac{\beta_s}{4 \pi E_r}} \int d \epsilon \big[1 - f_{L} (\beta_{L},\epsilon) \big] (\epsilon +E_r-\epsilon_d)\exp \left[- \frac{\beta_s}{4 E_r} (E_{r} - \epsilon_{d} + \epsilon)^2 \right]. \label{63} \end{equation} Similarly: \begin{equation} Q^L_{e,b\to a}=\mu_L-\epsilon_d-E_r+\frac{\Gamma}{k_{b\to a}^{L}} \sqrt{\frac{\beta_s}{4 \pi E_r}} \int d \epsilon f_{L} (\beta_{L},\epsilon) (\epsilon_d +E_r-\epsilon)\exp \left[- \frac{\beta_s}{4 E_r} (E_{r} +\epsilon_{d} - \epsilon)^2 \right]. \label{64} \end{equation} Integrating by parts we obtain: \begin{equation} Q^L_{e,a\to b}=\epsilon_d-E_r-\mu_L-\frac{\Gamma}{k_{a\to b}^{L}}\sqrt{\frac{E_r}{\pi\beta_s}}\int d\epsilon\frac{\partial f_L(\beta_L,\epsilon)}{\partial\epsilon} \exp \left[- \frac{\beta_s}{4 E_r} (E_{r} -\epsilon_{d} +\epsilon)^2 \right]. \label{65} \end{equation} and \begin{equation} Q^L_{e,b\to a}=\mu_L-\epsilon_d-E_r-\frac{\Gamma}{k_{b\to a}^L}\sqrt{\frac{E_r}{\pi\beta_s}}\int d\epsilon\frac{\partial f_L(\beta_L,\epsilon)}{\partial\epsilon} \exp \left[- \frac{\beta_s}{4 E_r} (E_{r} +\epsilon_{d} - \epsilon)^2 \right]. 
\label{66} \end{equation} Substituting these expressions into Eq.(\ref{18}) we get the expression for $\dot{Q}^L_e$ given by Eq.(\ref{20}). Expressions for $\dot{Q}^R_e$ and $\dot{Q}^R_e$ and $\dot{Q}_s$ may be derived in the same way
d2715ac8c4e5a95bf67450d3eacd1bf0f10b9be8
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction}\label{intro} In this paper we continue our study of nonequilibrium stationary states (NESS) maintained by a Gaussian thermostat \cite{BDL,CELS,BL,Ch}. Theoretical analysis and computer simulations show that the NESS obtained from such artificial model dynamics can give useful information on real systems maintained in NESS by coupling with heat baths \cite{Therm}. Here we focus on the Moran-Hoover (MH) model of a single particle in a periodic billiard moving under the influence of an electric field $\mathbf{E}$ and a Gaussian thermostat that keeps the kinetic energy constant \cite{MH}. The equations of motion are: \begin{equation}\label{motion} \left\{ \begin{array}{l} \dot \mathbf{x}= \mathbf{p}\crcr \dot \mathbf{p}= \mathbf{E} -\alpha(\mathbf{p})\mathbf{p}+\mathbf{F}_{\rm obs}(\mathbf{x})\crcr \alpha(\mathbf{p})=\frac{(\mathbf{p}\cdot \mathbf{E})}{(\mathbf{p}\cdot \mathbf{p})} \end{array}\right. \end{equation} where $\mathbf{x}$ is the position, $\mathbf{p}$ the momentum of the particle with unit mass, and $\mathbf{F}_{\rm obs}(\mathbf{x})$ represents elastic collision with the obstacles. It is clear from eq.(\ref{motion}) and the fact that collisions with the obstacles do not change $|\mathbf{p}|$ that $\frac{d}{dt}(\mathbf{p}\cdot \mathbf{p})=0$. We shall therefore set $|\mathbf{p}|=1$ from now on. The particle moves on a 2-dimensional torus whose side can be chosen to be 1. An arrangement of the obstacles used for all the simulations presented in this paper is shown in Figure \ref{figure1}. The two obstacles have radii $r_1=0.2$ and $r_2=0.4$. This is also the arrangement used in our previous works \cite{BDL,BDLR} and is chosen to have a finite horizon, {\sl i.e.} there is an upper bound for the time between successive collisions of the particle with the obstacles. Moreover we take $\mathbf{E}$ to be along the horizontal $x$ axis, {\sl i.e.} $\mathbf{E}=(E,0)$. The analytical results apply to general geometries with finite horizons. \begin{figure}[ht] \centering \epsfig{file=fig1.eps,width=0.4\linewidth} \centering \caption{Typical obstacles placement.} \label{figure1} \end{figure} The set of states where the particle collides with a given obstacle can be parametrized by two angles: $\vartheta\in[0,2\pi]$ the angle on the obstacle between the collision point and the positive $x$ direction, and $\psi$ the angle between the particle velocity and the outgoing normal to the obstacle at the collision point. To obtain a complete coordinate system for the collision states we define the coordinate $\theta=\vartheta$ for obstacle 1 (see Figure \ref{figure1}) and $\theta=\vartheta+2\pi$ for obstacle 2 so that $\theta\in[0,4\pi]$. In these coordinates, the elastic collision is simply represented by the map $C(\theta,\psi)=(\theta,\pi-\psi)$, where $\psi\in[\pi/2,3\pi/2]$ before collision and $\psi\in[-\pi/2,\pi/2]$ after collision. We will call ${\cal M}=[0,4\pi]\times [-\pi/2,\pi/2]$ the set of possible pairs $(\theta,\psi)$ representing the position of the particle just after a collision. ${\cal M}$ corresponds to a Poincar\'e section of the flow. See Figure \ref{figure2} for a depiction of $\theta$ and $\psi$. 
\begin{figure} \centering \epsfig{file=fig2.eps,width=0.4\linewidth} \put(-110,210){\makebox(0,0)[r]{$\psi$}} \put(-145,170){\makebox(0,0)[r]{$\pi-\psi$}} \put(-160,17){\makebox(0,0)[r]{$\theta$}} \centering \caption{Elastic collision.} \label{figure2} \end{figure} Since $|\mathbf{p}|$ is constant, the trajectory of the particle can be represented by its position $\mathbf{x}(t)$ and the angle of its momentum with the horizontal axis $\phi(t)$. The motion of the particle between two collisions can be exactly integrated. Moreover one can construct the map $S_E(\theta,\psi)$ mapping the position and momentum of the particle just after a collision to its position and momentum just before the next collision, which may be with the same or a different obstacle. In this way we can represent the dynamics in discrete time as the iteration of the map $T_E:{\cal M}\to{\cal M}$ between successive collisions given by $T_E=C\circ S_E$. Observe that this map is not continuous (grazing collisions) and that, for $E$ small, the dynamics is a perturbation of the free billiard dynamics \cite{CELS}. In our previous works we were primarily concerned with the SRB distribution associated with $T_E$. Let $\mu_0$ be the measure on ${\cal M}$ given by \begin{equation}\label{mu0} \mu_0=\cos(\psi)\chi(\theta)d\theta\,d\psi/Z \end{equation} where $Z=4\pi(r_1+r_2)$ is a normalization constant and $\chi(\theta)=r_1$ for $\theta\in[0,2\pi]$ and $\chi(\theta)=r_2$ otherwise. Observe that $\mu_0$ is invariant under $T_0$. The SRB distribution $\mu_E(d\theta,d\psi)$ is defined as the weak limit of $\mu_0$ under the dynamics $T_E$, {\sl i.e.} \begin{equation}\label{wlim} \mu_{E}=\operatorname{w-lim}\, T_{E}^n \mu_0. \end{equation} The measure $\mu_E$, when it exists and is unique, represents the natural non equilibrium steady state (NESS) for the system \cite{Ru}. Clearly $\mu_0$ is the SRB measure of $T_0$. From the SRB measure $\mu_{E}(d\theta,d\psi)$ on ${\cal M}$ for the collision map one can build the SRB measure $m_{E}(d\mathbf{x},d\phi)$ on $M=Q\times [0,2\pi]$ for the flow generated by eq.(\ref{motion}). Here $Q$ is $\mathbb{T}\backslash$~obstacles. This can be represented as: \begin{equation}\label{mmu} m_{E}(A)= \frac{1}{\bar{\tau}_E}\,\int_{\cal M}\int_0^{\tau_E(\theta,\psi)} I_A(\mathbf{X}^E_t(\theta,\psi),\Phi^E_t(\theta,\psi))\,dt\, \mu_E(d\theta,d\psi) \end{equation} where $I_A$ is the indicator function of the set $A\subset M$, $(\mathbf{X}^E_t,\Phi^E_t)$ is the flow generated by eq.(\ref{motion}) and $\tau_E(\theta,\psi)$ is the time till the next collision when starting at $(\theta,\psi)\in{\cal M}$ with $\bar{\tau}_E = \int_{{\cal M}} \tau_E(\theta,\psi)\,\mu_E(d\theta,d\psi)$ denoting the mean free time. In \cite{CELS} it was proved that, for small fields $E$, $|E|<E_0$, the above model has a unique NESS described by an SRB measure $\mu_E$ which is singular with respect to the Liouville measure with Hausdorff dimension given by the standard Kaplan-Yorke formula \cite{FKYY}. The current $\mathbf{j}(E)$ in this NESS is given by \[ \mathbf{j}(E)=m_E(\mathbf{v}) \] where $\mathbf{v}=(\cos(\phi),\sin(\phi))$ is the velocity of the particle. This current was shown in \cite{CELS} to be given by the Kawasaki formula cf. \cite{KE}. In the limit $E\to 0$ the Kawasaki formula reduces to the Green-Kubo formula for the conductivity $\kappa$ which satisfies the Einstein relation \cite{CELS,KE}. An investigation of the current as a function of the field was carried out in \cite{BDL}. 
It was argued there that the current is not a $C^1$ function of the field $E$ even close to $E=0$. The results of \cite{CELS} were generalized in \cite{Ch,Z} to systems where the collision rule or the free flow dynamics is perturbed. In none of the above works was the spatial dependence of the singular (with respect to Lebesgue) measure $m_E(d\mathbf{x},d\phi)$ studied. This is what we do in this note. We will describe analytical results and numerical studies of the spacial and angular dependence of the NESS $m_E(d\mathbf{x},d\phi)$ when projected on $\phi\in[0,2\pi]$ or on $\mathbf{x}\in Q$ and related quantities like the local average velocity. The outline of the rest of the paper is as follows. In section \ref{s:avcu} we introduce the local density, local average velocity and angular distribution derived from $m_E$. We find their dependence on position and field strength. We also show there computer generated pictures of the flow and compare them with the predictions of Green-Kubo formulas at small $E$. In section \ref{s:stocha} we introduce and analyze two simple models in which the deterministic collisions with fixed obstacles are replaced by random collisions whose times form a Poisson process and compare their properties with those of the deterministic model. The appendixes are devoted to analytical justification of the claims in section \ref{s:avcu}. A paper describing results for the case where the system consists not just of one but of a large number of particles is in preparation. \section{Local Structure of the SRB measure}\label{s:avcu} We define and study the several projections of the SRB measure $m_E$ introduced in the previous section. For clarity of exposition we delay derivations and justifications to the Appendices. \subsection{Local Density and Average Velocity}\label{ss:2.1} Two interesting quantities to study are {\sl the local density} and {\sl local average velocity}. More precisely, we define the projected measures on the position $\mathbf{x}$ as: \begin{equation}\label{dd} \delta_E(A)=\int_{A\times[0,2\pi)} m_{E}(d\mathbf{x},d\phi) \end{equation} for any $A\subset Q$. This clearly defines a probability measure $\delta_E(d\mathbf{x})$ on $Q$. Using eq.(\ref{mmu}), and defining \begin{equation}\label{JA} J_A^E(\theta,\psi)=\frac{1}{\bar\tau_E} \int_0^{\tau(\theta,\psi)} I_{A}\left(\mathbf{X}^E_t(\theta , \psi)\right)\,dt \end{equation} where $I_A$ is the indicator function of the set $A$, we can represent $\delta_E(A)$ as the integral of a piecewise smooth function with respect to $\mu_{E}(d\theta,d\phi)$: \begin{equation} \label{deltaA} \delta_E(A)=\int_{\cal M} J_A^E(\theta,\psi)\, \mu_{E}(d\theta,d\psi) \end{equation} Observe that $J_A^E(\theta,\psi)$ is the relative amount of time the trajectory starting from $(\theta,\psi)$ spends in the set $A$, that is the amount of time divided by the mean free time $\bar\tau_E$. We also define the vector measure for the local average velocity \begin{equation}\label{nunu} \nu_E(A)=\int_{A\times[0,2\pi)}(\cos(\phi),\sin(\phi))\, m_E(d\mathbf{x},d\phi) \end{equation} Also this measure can be written as the integral of a piecewise smooth function with respect to $\mu_{E}(d\theta,d\phi)$, see eq.(\ref{HH}) below for details. In Appendix \ref{a:regu} we show that, for $|E|<E_0$, the integrals in eq.(\ref{dd}) and eq.(\ref{nunu}) define absolutely continuous measures. That is $\delta_E(d\mathbf{x})=n_E(\mathbf{x})\,d\mathbf{x}$ and $\nu_E(d\mathbf{x})=n_E(\mathbf{x})\mathbf{v}_E(\mathbf{x})\,d\mathbf{x}$. 
We call $n_E(\mathbf{x})$ the {\sl local density} and $\mathbf{v}_E(\mathbf{x})$ the {\sl local average velocity} at $\mathbf{x}$. We show that both are continuous functions of both $\mathbf{x}$ and $E$ with $n_0(\mathbf{x})=$const$=\bigl({\rm Area}(Q)\bigr)^{-1}$ and $\mathbf{v}_0(\mathbf{x})=0$. To visualize the above numerically, we divided the torus of Figure~\ref{figure1} in a grid of $50\times 50$ cells and computed the time average of the velocity of the particle when it crosses a cell. The results are shown in Figure \ref{Figure_Velo}. We also computed the local density on the same grid. \begin{figure}[ht] \centering \epsfig{file=velocity-field-clean.eps,width=0.7\linewidth,clip=} \caption{Average local velocity $\mathbf{v}_E(\mathbf{x})$ for $E=0.1$.} \label{Figure_Velo} \end{figure} We now show that the local density $n_E(x)$ and the local average velocity $\mathbf{v}_E(\mathbf{x})$ are linear in the field $E$ when $E\to0$, that is \begin{eqnarray} n_E(\mathbf{x})&=&n_0(\mathbf{x})+d(\mathbf{x})E+o(E)\label{dx}\\ \mathbf{v}_E(\mathbf{x})&=&\mathbf{k}(\mathbf{x})E+ o(E)\label{nx} \end{eqnarray} where $d(\mathbf{x})$ and $\mathbf{k}(\mathbf{x})$ can be computed via Green-Kubo-type formulas as follows. Consider the family of all velocity vectors originating at the point $\mathbf{x}$ (at which we are computing the density or average velocity); they make a one parameter family of phase states $W_{\mathbf{x}} = \{(\mathbf{x},\phi)\ |\ 0< \phi<2\pi\}$. Let $\rho^{\mathbf{x}}$ be the probability measure on $W_{\mathbf{x}}$ that has a uniform distribution over $\phi\in[0,2\pi]$. We can map $W_{\mathbf{x}}$ to the collision space ${\cal M}$ by taking every point $(\mathbf{x},\phi)\in W_{\mathbf{x}}$ to its first collision with $\partial Q$ in \emph{the past}, under the field-free dynamics. The image of $W_{\mathbf{x}}$ will then be a collection $W_0$ of curves in ${\cal M}$ on which we get an induced probability measure $\rho_0$. Pulling this measure further back (into the past) we get a sequence of probability measures $\rho_n = T_0^{-n}(\rho_0)$, each sitting on a collection $W_n=T_0^{-n}(W_0)$ of curves in ${\cal M}$. With this definition we get that \begin{equation}\label{d0} d(\mathbf{x})= c\biggl[\rho_0(\Delta_{0,\mathbf{x}})+\sum_{n=1}^{\infty} \rho_{n}(\Delta_0)\biggr] \end{equation} where $c={\rm Area}(Q)^{-1} = n_0(\mathbf{x})$, $\Delta_0=\tau_0(\theta,\psi)\cos(\theta+\psi)$ is the $x$-distance form the collision point $(\theta,\psi)\in{\cal M}$ to the next collision point and $\Delta_{0,\mathbf{x}}$ is the $x$-distance from the collision point $(\theta,\psi)\in{\cal M}$ to the point $\mathbf{x}$. Observe that $\rho_0$ is supported on points whose trajectory passes through $\mathbf{x}$ before colliding again. The above series converges exponentially, because the measures $\rho_n$ converge exponentially fast to the measure $\mu_0$ on ${\cal M}$ (see Theorem 7.31 in \cite{CM}) and $\mu_0(\Delta_0)=0$. Consider now the two {\it signed} measures $\rho^{c,\mathbf{x}}$ and $\rho^{s,\mathbf{x}}$ on $W_{\mathbf{x}}$ that have densities $\cos\phi$ and $\sin \phi$, respectively, with respect to $\rho^{\mathbf{x}}$. As before, we can map $\rho^{c,\mathbf{x}}$ and $\rho^{s,\mathbf{x}}$ on the collision space ${\cal M}$ and obtain signed measures $\rho^c_0$ and $\rho^s_0$ on $W_0$, respectively. We also denote their images by $\rho^{c,s}_n = T_0^{-n}(\rho^{c,s}_0)$ on $W_n$ for $n\in {\mathbb Z}$. 
\begin{equation} \label{kk} \mathbf{k}(\mathbf{x}) = \tfrac{1}{2} \, \sum_{n=-\infty}^{\infty} \bigl(\rho^c_n(\Delta_0),\rho^s_n(\Delta_0)\bigr). \end{equation} The terms in the series in eq.\eqref{kk} converge to zero as $n\to\pm\infty$ exponentially fast, because the measures $\rho^{c,s}_n$ converge to the zero measure; this again follows from Theorem 7.31 in \cite{CM}. We note that the perturbation of the density $n_E(\mathbf{x})$ and of the local average velocity $\mathbf{v}_E(\mathbf{x})$ are linear in $E$, to the leading order, and the factor of $E$ is given, in both cases, by an infinite sum of correlations, {\sl i.e.} the right hand sides of eq.(\ref{d0}) and eq.(\ref{kk}). We computed numerically the coefficients $d$ and $\mathbf{k}$ in eq.(\ref{dx}) and eq.(\ref{nx}) to compare their predictions with the simulation results shown in Figure \ref{Figure_Velo}. We truncated the infinite sums in eqs.(\ref{d0},\ref{kk}) to $|n|<15$ since we saw no visible difference arise from taking more terms into consideration. Let $l^+_x=\{(x,y)\in Q\}$ be the vertical cross section placed at horizontal coordinate $x$ and $l^-_y=\{(x,y)\in Q\}$ be the horizontal cross section placed at vertical coordinate $y$. Finally let $\mathbf{e}_x=(1,0)$ and $\mathbf{e}_y=(0,1)$ be the unit vectors in the horizontal and vertical direction respectively. Figure \ref{Figure_vx} shows a comparison of the horizontal component $(\mathbf{v}_E(\mathbf{x})\cdot \mathbf{e}_x)$ of $\mathbf{v}_E(\mathbf{x})$ along $l^+_{0.41}$ with the prediction of eq.(\ref{nx}). In the same way, Figure \ref{Figure_vy} shows a comparison of the vertical component $(\mathbf{v}_E(\mathbf{x})\cdot \mathbf{e}_y)$ of $\mathbf{v}_E(\mathbf{x})$ along $l^-_{0.41}$ again with the prediction of eq.(\ref{nx}). In both figures the pluses represent the results of direct simulation while the crosses are obtained using the Green-Kubo formula eq.(\ref{kk}). \begin{figure}[ht] \centering \epsfig{file=vxgraph.eps,width=0.85\linewidth,clip=} \put(-200,-10){\makebox(0,0)[r]{$y$}} \put(-410,130){\makebox(0,0)[r]{$(\mathbf{v}_E\cdot \mathbf{e}_x)$}} \caption{The $x$ component of the average local velocity $\mathbf{v}_E(\mathbf{x})$ for $E=0.1$ and $x=0.41$.} \label{Figure_vx} \end{figure} \begin{figure}[ht] \centering \epsfig{file=vygraph.eps,width=0.85\linewidth,clip=} \put(-200,-10){\makebox(0,0)[r]{$x$}} \put(-410,130){\makebox(0,0)[r]{$(\mathbf{v}_E\cdot \mathbf{e}_y)$}} \caption{The $y$ component of the average local velocity $\mathbf{v}_E(\mathbf{x})$ for $E=0.1$ and $y=0.41$.} \label{Figure_vy} \end{figure} The comparison of $n_E(\mathbf{x})$ with eq.(\ref{dx}) is more difficult. Calling $n_E^o(\mathbf{x})=(n_E(\mathbf{x})-n_{-E}(\mathbf{x}))/2$ and $n_E^e(\mathbf{x})=(n_E(\mathbf{x})+n_{-E}(\mathbf{x}))/2-n_0(\mathbf{x})$, we have that $n_E^o(\mathbf{x})$ satisfies the same linear response formula eq.(\ref{dx}) of $n_E(\mathbf{x})$ with the same coefficient $d(\mathbf{x})$ but we expect the remainder to be smaller. This is relevant in the present case since $n_E^e(\mathbf{x})$ and $n_E^o(\mathbf{x})$ appear to be of comparable magnitude. We observe that, due to the symmetry of the problem, $n_E(1-x,y)=n_{-E}(x,y)$ so that $n^o_E(x,y)=(n_E(x,y)-n_E(1-x,y))/2$. Figure \ref{Figure_d} compares $n^o_E(\mathbf{x})$ along $l^+_{0.41}$ with eq.(\ref{dx}). Again the pluses represents direct simulation while the crosses are obtained using the Green-Kubo formula eq.(\ref{d0}). 
\begin{figure}[ht] \centering \epsfig{file=dxgraph.eps,width=0.85\linewidth,clip=} \put(-200,-10){\makebox(0,0)[r]{$y$}} \put(-415,130){\makebox(0,0)[r]{$n^o_E$}} \caption{The symmetrized local density $n^o_E(\mathbf{x})$ for $E=0.1$ and $x=0.41$.} \label{Figure_d} \end{figure} More generally, given a probability measure $\rho(d\mathbf{x},d\phi)=l(\mathbf{x},\phi)\,d\mathbf{x}\, d\phi$ absolutely continuous with respect to the Lebesgue measure on $Q$ let $\rho^E_t(d\mathbf{x},d\phi)=l^E_t(\mathbf{x},\phi)\,d\mathbf{x}\, d\phi$ be its time evolution with respect to the dynamics generated by eq.(\ref{motion}). In a similar way as above, we can then define: \begin{eqnarray} n^E_t(\mathbf{x})&=&\int l^E_t(\mathbf{x},\phi)\,d\phi\crcr n^E_t(\mathbf{x})\mathbf{v}^E_t(\mathbf{x}) &=&\int (\cos(\phi),\sin(\phi)) l^E_t(\mathbf{x},\phi)\,d\phi \end{eqnarray} The density $n^E_t(\mathbf{x})$ clearly satisfies a conservation law: \begin{equation} \label{nn} \frac{d}{dt}\int_A n^E_t(x)\,dx=-\int_{\partial A}n^E_t(\mathbf{x})\bigl(\mathbf{v}^E_t(\mathbf{x})\cdot \hat {\bf n}(\mathbf{x})\bigr)\,d\sigma(\mathbf{x}) \end{equation} where $A$ is a subset of $Q$ with smooth enough boundary, $\hat{\bf n}(\mathbf{x})$ is the unit outward normal to $\partial A$ at $\mathbf{x}$ and $\sigma(\mathbf{x})$ is the length element on $\partial A$. Taking the limit $t\to\infty$ and assuming that $\lim_{t\to\infty}n^E_t(\mathbf{x})=n_E(\mathbf{x})$ and $\lim_{t\to\infty}\mathbf{v}^E_t(x)=\mathbf{v}_E(\mathbf{x})$ we obtain \begin{equation}\label{nn1} \int_{\partial A}n_E(\mathbf{x})\bigl(\mathbf{v}_E(\mathbf{x})\cdot \hat{\bf n}(\mathbf{x})\bigr)d\sigma(\mathbf{x})=0 \end{equation} The above assumption is not trivial. It is easy to show that, if $\lim_{t\to\infty}n^E_t(\mathbf{x})$ exists, it has to equal $n_E(\mathbf{x})$. On the other hand, we do not have a proof for the existence of such a limit. A similar argument holds for $\mathbf{v}^E_t(\mathbf{x})$. A complete justification of eq.(\ref{nn1}) will thus require further work but we certainly expect it to be true. Nonetheless we can test the validity of eq.(\ref{nn1}) numerically. Due to the symmetry of $Q$ we have that the average current $\mathbf{j}(E)=(j(E),0)$. Moreover, since the collision are elastic, $\mathbf{v}_E(\mathbf{x})$ is tangent to $\partial Q$ for $\mathbf{x}\in\partial Q$. It follows from this that \begin{eqnarray*} \int_{l^+_x}n_E(\mathbf{x})\bigl(\mathbf{v}_E(\mathbf{x})\cdot \mathbf{e}_x\bigr)\,dy&\equiv& j(E)\crcr \int_{l^+_y}n_E(\mathbf{x})\bigl(\mathbf{v}_E(\mathbf{x})\cdot \mathbf{e}_y\bigr)\,dx&\equiv& 0 \end{eqnarray*} independently on the value of $x$ or $y$. Both these equations are very well verified. \subsection{Angular Distribution}\label{s:angular} We now look at the projection of $m_E$ on the angle $\phi$. We can define the projected measure $\eta(d\phi)$ by setting, for a measurable set $A\subset[0,2\pi]$, \[ \eta_E(A)=\int_M I_{A}(\mathbf{x},\phi)\, m_E(d\mathbf{x},d\phi) \] where $I_{A}$ is the indicator function of the set $A$. Again we can write $\eta_E(A)$ as an integral on the SRB measure $\mu_E(d\theta,d\psi)$ as follows. 
Define the function: \begin{equation}\label{tau} J^E_A(\theta,\psi)=\frac{1}{\bar\tau_E} \int_0^{\tau_E(\theta,\psi)} I_{A}\left(\Phi^E_t(\theta , \psi)\right)\,dt \end{equation} Then we have that \[ \eta_E(A)=\int_{\cal M} J^E_A(\theta,\psi)\, \mu_E(d\theta,d\phi)=\mu_E(J^E_A) \] Using the argument in Appendix \ref{a:regu} we can show that, for $|E|<E_0$, $\eta_E$ is absolutely continuous with respect to $d\phi$, {\sl i.e.} that $\eta_E(d\phi)=h_E(\phi)\,d\phi$ where $h_E(\phi)$ is a continuous function of both $\phi$ and $E$ with $h_0(\phi)=$const$=1/2\pi$, since the invariant measure $m_0$ is uniform on $Q\times[0,2\pi]$. \begin{figure}[ht] \centering \epsfig{file=dip_new.eps, width=0.85\linewidth} \put(-200,-10){\makebox(0,0)[r]{$\phi$}} \put(-410,130){\makebox(0,0)[r]{$h_E$}} \centering \caption{Angular distribution for $E=0.1$.} \label{Figure_angle} \end{figure} We computed $h_E(\phi)$ numerically for $E=0.1$. The result is shown in Figure \ref{Figure_angle}. A striking characteristic of this distribution is the dip around $\phi=0$. This is somewhat unexpected since the effect of the field $E$ is to push the velocity of the particle to align with the positive $x$ direction so that one would expect a maximum at $\phi=0$ rather than a local minimum (see also section \ref{s:stocha} for a comparison with the stochastic models). To understand this better we consider, for a given $\phi$, all points $(\theta,\psi)\in{\cal M}$ that produce the outgoing velocity vector $(\cos\phi, \sin\phi)$, i.e., we consider $$ V_{\phi} = \{(\theta,\psi)\in{\cal M} \colon \psi + \theta = \phi \ \ ({\rm mod}\ 2\pi)\}. $$ Now ${\cal M}$ is foliated by the lines $\{V_{\phi}\}$, $0\leq \phi <2\pi$. Let $\mu_{0}^{\phi}$ denote the conditional measure induced by $\mu_0$ on the line $V_{\phi}$. If we use $\theta$ as the (only) coordinate on $V_{\phi}$, then $$ d\mu_0^{\phi} = Z_{\phi}^{-1} \cos(\phi-\theta) \chi(\theta)\, d\theta $$ where $Z_{\phi}$ is the normalizing factor \begin{equation} \label{Zphi} Z_{\phi} = \int_{\cos(\phi-\theta)>0} \cos(\phi-\theta) \chi(\theta)\, d\theta = 2(r_1+r_2). \end{equation} We remind the reader that $\chi(\theta)=r_1$ and $0\leq \theta< 2\pi$ on the first obstacle and $\chi(\theta)=r_2$ and $2\pi\leq \theta< 4\pi$ on the second. Now we consider the conditional distribution of the free flight time function $\tau_0$ on each line $V_{\phi}$. It turns out that its first moment is constant, i.e., $\mu_0^{\phi}(\tau_0) = \bar{\tau}_0$ for all $\phi$'s, where $\bar{\tau}_0 = \mu_0(\tau)$ is the total (unconditional) mean free time. In other words, the deterministic collision process is isotropic, on average. This seems to be a novel result in the studies of billiards and we prove it in Appendix \ref{a:tau}. We now argue that the observed dip near $\phi=0$, for small $E$ can be traced to the second moment, $\mu_0^{\phi}(\tau_0^2)$, which is \emph{not} constant and which for our obstacles indeed has a local minimum at $\phi=0$. We will show that the density $h_E(\phi)$ satisfies \begin{equation}\label{ax} h_E(\phi)=\frac{1}{2\pi}+a(\phi)E+o(E) \end{equation} where $a(\phi)$ is given by a Green-Kubo formula \begin{equation}\label{kax} a(\phi)=\frac{Z_\phi}{Z}\frac{1}{2\bar{\tau}_0}\sum_{n=-\infty}^{\infty} \mu_0^\phi\bigl(\tau_0\cdot (\Delta_0\circ T_0^n)\bigr). \end{equation} Recalling that $Z=4\pi(r_1+r_2)$ is the normalization of $\mu_0$, see text after eq.\eqref{mu0}, and $Z_{\phi} =2(r_1+r_2)$ is independent of $\phi$, see eq.\eqref{Zphi}, we have that $Z_\phi/Z=1/2\pi$. 
Again we see that the fluctuations of the density $h_E$ are linear in $E$, to the leading order, and the factor of $E$ is given by an infinite sum of correlations. The latter converges exponentially fast according to general results (Theorem 7.31 in \cite{CM}). Usually its central term ($n=0$) is the most significant, and it is given by \begin{equation} \label{scnd} \frac{Z_{\phi}}{Z}\frac{1}{2\bar\tau_0}\,\mu^\phi_0\bigl(\tau_0 \Delta_0\bigr) =\frac{\cos\phi}{4\pi\bar\tau_0}\,\mu_0^\phi\bigl(\tau_0^2\bigr). \end{equation} The central term explicitly involves the second moment of $\tau_0$ restricted to $V_\phi$. Even though $\cos\phi$ has a \emph{maximum} at $\phi=0$, it may be more than counterbalanced by a dip that the second moment $\mu_0^\phi(\tau_0^2)$ has near $\phi=0$. This is exactly what happens in our model shown in Figure~\ref{figure1}. To check numerically the above results we proceed like in the case of $n_E(\mathbf{x})$ in Figure \ref{Figure_angle}. We introduce the odd part of the angular distribution $h^o_E(\phi)=(h_E(\phi)-h_{-E}(\phi))/2$ and observe that it satisfies the linear response equation \begin{equation}\label{aax} h^o_E(\phi)=a(\phi)+o(E) \end{equation} with $a(\phi)$ still given by eq.(\ref{kax}). Again we expect the reminder to be smaller. Finally, due to the symmetry of our system, we have that $h^o_E(\phi)=(h_E(\phi)-h_E(\phi+\pi))/2$. Figure~\ref{Figure_anglec} presents the plot of eq.\eqref{aax} and the numerically computed plot of $h^o_E(\phi)$ for $E=0.1$. The crosses represent the numerically computed value of $h^0_E(\phi)$. The pluses come from the central term of eq.\eqref{kax}. Already at this level the dip is clearly visible and the agreement is pretty good. Finally the boxes represent eq.\eqref{kax} truncated at $|n|=20$. We have computed eq.\eqref{kax} truncating the sum up to $|n|=100$ but no significant difference from $|n|\leq 20$ can be observed. This is clearly consistent with a fast convergence in the sum in eq.\eqref{kax}. \begin{figure}[ht] \centering \epsfig{file=dipc_new.eps, width=0.85\linewidth} \put(-200,-10){\makebox(0,0)[r]{$\phi$}} \put(-410,130){\makebox(0,0)[r]{$h^o_E$}} \centering \caption{Comparison between the angular distribution for $E=0.1$ and eq.(\ref{Kawah}); see details after eq.\eqref{scnd}.} \label{Figure_anglec} \end{figure} Our analysis indicates that the dip at $\phi=0$ appears to be an artifact of the geometry of the scatterers chosen for our deterministic model. \section{Random Collision Models}\label{s:stocha} In \cite{BDLR}, we introduced a simplified version of the MH model by replacing the collisions with the fixed obstacles with a Poisson random collision process. More precisely we assume that, in every time interval $dt$ the particle has a probability equal to $\lambda|\mathbf{p}|dt$ (with $|\mathbf{p}|$ in this case fixed to be 1) to undergo a collision. Between collisions the particle moves according to eq.(\ref{motion}) without $F_\mathrm{obs}$. When a collision happens we consider two collision rules: \begin{itemize} \item[I] the velocity of the particle after the collision is in direction $\phi\in[0,2\pi]$ with probability density $d\phi/2\pi$; or \item[II] an angle $\eta\in[-\pi/2,\pi/2]$ is chosen at random with probability proportional to $\cos(\eta)d\eta$ and the direction of the velocity is changed according to an elastic collision rules for a particle colliding with an obstacle with outgoing velocity forming an angle $\eta$ with the normal to the obstacle. 
\end{itemize} We call the models with the above collision rules Model I and Model II. We can think of Model II as representing a situation in which we have $N$ scatterers with diameter $\epsilon$ randomly placed in $\mathbb{T}$ and we consider the (Boltzmann-Grad) limit in which $N\to\infty$, $\epsilon\to 0$, such that $N\epsilon^2\to 0$ while $N\epsilon \to\lambda^{-1}$, the mean free path \cite{Sp}. Let $f_\alpha(E,\mathbf{x},\phi,t)$ be the probability density at time $t$ of finding the particle at $x$ with momentum $p=(\cos\phi,\sin\phi)$. Here $\alpha=$I,II indicates Model I or Model II respectively. This density satisfies the equation: \begin{eqnarray}\label{sst} &&\partial_t f_\alpha(E,\mathbf{x},\phi,t)-\mathbf{p}\partial_\mathbf{x} f_\alpha(E,\mathbf{x},\phi,t) - E \partial_\theta\left(\sin\theta f_\alpha(E,\mathbf{x},\phi,t)\right)=\crcr &&\qquad\qquad\lambda\left(\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} p_\alpha(\eta) f_\alpha(E,\mathbf{x},\phi+2\eta+\pi,t)\,d\eta-f_\alpha(E,\mathbf{x},\phi,t)\right) \end{eqnarray} where, $t\in\mathbb{R}^+$, $\mathbf{x}\in\mathbb{T}$, the unit torus, $\mathbf{E}=(E,0)$ is in the horizontal direction and $\lambda$ is the collision rate. Moreover we have $p_\mathrm{I}(\eta)=\pi^{-1}$ for Model I and $p_\mathrm{II}(\eta)=\cos\eta/2$ for Model II. It follows from eq.\eqref{sst} that, when the distribution at time 0, $f_\alpha(E,x,\phi,0)$ does not depend on $\mathbf{x}$, the density $f_\alpha(E,\mathbf{x},\phi,t)$ will also not depend on $\mathbf{x}$ for every $t>0$. Even if the initial state does depend on $x$, it is easy to show \cite{BL} that as $t\to\infty$ the system will approach a stationary density $f_\alpha(E,\phi)$ which will satisfy the equation: \begin{equation}\label{steady} -\frac{E}{\lambda} \partial_\phi\left(\sin\phi f_\alpha(E,\phi)\right)=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} p_\alpha(\eta) f_\alpha(E,\phi+2\eta+\pi)\,d\eta-f_\alpha(E,\phi). \end{equation} From now on we will set $\lambda=1$ since the stationary $f_\alpha$ depends only on $E/\lambda$. We can try to solve this equation as a power series in $E$. Since $E$ is a singular perturbation the series will not be convergent for any non zero value of $E$. However, we expect that it will be an asymptotic series and accurate for small $|E|$. Writing \begin{equation}\label{powexp} f_\alpha(E,\phi)=\sum_{i=0}^{\infty}E^if_\alpha^{(i)}(\phi) \end{equation} yields a hierarchy of equations for $i=0,1,2\dots$: \begin{equation}\label{pert} - \partial_\phi\left(\sin\phi f_\alpha^{(i-1)}(\phi)\right)=\int_{-\frac{\pi}{2}}^{\frac{\pi}{2}} p_\alpha(\eta) f_\alpha^{(i)}(\phi+2\eta+\pi)d\eta-f_\alpha^{(i)}(\phi) \end{equation} with $f_\alpha^{(-1)}\equiv 0$. The equation for $i=0$ is easily solved and gives, as the unique solution, $f_\alpha^{(0)}\equiv 1$, since we require $\int f_\alpha(E,\phi)\,d\phi=2\pi$. To solve the higher order equations we write \[ f_\alpha^{(i)}(\phi)=\sum_{n=-\infty}^{\infty} \hat f_\alpha^{(i)}(n)\cos(n\phi) \] where we used the symmetry with respect to the direction orthogonal to the field to eliminate the terms in $\sin(n\theta)$ and clearly $f_\alpha^{(i)}(n)=f_\alpha^{(i)}(-n)$. In this way, for $n \neq 0$, eq.(\ref{pert}) becomes \begin{equation}\label{gerarchia} \hat f^{(i)}_\alpha(n)=\frac{n}{2}\left(1-\hat p_\alpha(n)\right)\left(\hat f^{(i-1)}_\alpha(n-1)-\hat f^{(i-1)}_\alpha(n+1)\right) \end{equation} with $\hat p_\mathrm{I}(n)=0$ for Model I and $\hat p_\mathrm{II}(n)=1/4n^2$ for Model II. 
Finally $f^{(0)}_\alpha(n)=\delta_{n,0}$, again due to the normalization condition. This yields \begin{eqnarray}\label{power} f_\mathrm{I}(E,\phi)&=&1+E\cos(\phi)+E^2\cos(2\phi)+\ldots\crcr f_\mathrm{II}(E,\phi)&=&1+\frac{3}{4}E\cos(\phi)+\frac{45}{64} E^2\cos(2\phi)+\ldots \end{eqnarray} We can compare the above results with numerical simulation of the stochastic processes generating eq.(\ref{steady}). We set $E=0.2$ and run both processes for $10^8$ collisions. The results are plotted in Figure \ref{Figure_stocha}. The crosses refer to Model I while the pluses refer to Model II. Superimposed are the graph obtained from eq.(\ref{power}). As one can see, the fit is very good. This is in agreement with our expectation that the series in $E$ is an asymptotic one. In the case of model I this can be rigorously justified, see eq.\eqref{devuni} below. \begin{figure} \centering \epsfig{file=rand.eps, width=0.95\linewidth} \put(-200,-5){\makebox(0,0)[r]{$\phi$}} \put(-460,130){\makebox(0,0)[r]{$f_\alpha$}} \centering \caption{Comparison between numerical simulations of the stochastic process and the power series expansion eq.(\ref{power}) for $E=0.2$. See explanation after eq.\eqref{power}. } \label{Figure_stocha} \end{figure} We note that, for both Models, the power series for $f_\alpha(0.2,\phi)$ has a global maximum for $\phi=0$. Since this is not true for the angular distribution of the deterministic MH model, see section \ref{s:angular}, we will investigate the behavior of $f(E,\psi)$ near $\phi=0$ more closely. \subsection{Model I} \newcommand{f_\mathrm{I}}{f_\mathrm{I}} Eq.(\ref{steady}) can be written as: \begin{equation}\label{steady1} -E \partial_\phi\left(\sin\phi f_\mathrm{I}(E,\phi)\right)= 1-f_\mathrm{I}(E,\phi) \end{equation} where we normalize $f_\mathrm{I}$ as $\int f_\mathrm{I}(E,\phi)\,d\phi=2\pi$. Eq.(\ref{steady1}) can be solved by introducing the function \[ h(E,\phi)=\left(\frac{1-\cos\phi}{1+\cos\phi}\right)^{\frac{1}{2E}} \] which is a solution of the differential equation $\partial_\phi h(E,\phi)=\frac{h(E,\phi)}{E\sin\phi} $, and defining \[ f_\mathrm{I}(E,\phi)=\frac{h(E,\phi)}{\sin\phi}g(E,\phi) \] Substituting in eq.(\ref{steady1}) we obtain \[ \partial_\phi g(E,\phi)=-h(E,\phi+\pi) \] Observe that $h(E,\phi)$ has a non integrable singularity at $\phi=\pi$ so that, for $f_\mathrm{I}(E,\phi)$ to be integrable we need $g(E,\pi)=0$. We can thus represent the solution as: \begin{equation}\label{soluni} f_\mathrm{I}(E,\phi)=\frac{h(E,\phi)}{E\sin(\phi)}\int_\phi^\pi h(E,\eta+\pi)\,d\eta \end{equation} We list below some properties of $f_\mathrm{I}(E,\phi)$ that will be useful in the following. We have two possible situations: \begin{itemize} \item[$E<1$]: In this case $f_\mathrm{I}(E,\phi)$ is continuous in $\phi$ for every $\phi$. Moreover it is easy to see that $f_\mathrm{I}(E,\phi)$ is $C^\infty$ for $\phi\not =0,\pi$. For $\phi=0,\pi$, if $E<1/n$, $f_\mathrm{I}(E,\phi)$ is $C^{n-1}$ and $\partial^n_\phi f_\mathrm{I}(E,\phi)$ is H\"older continuous of exponent $\alpha$ for $0<\alpha<1/E-n$. \item[$E>1$] In this case $f_\mathrm{I}(E,\phi)$ is still $C^\infty$ everywhere but for $\phi=0,\pi$. At $\phi=0$ we have a singularity and $f_\mathrm{I}(E,\phi)\simeq \phi^{1/E-1}$. More precisely the function $\phi^{1-1/E'}f_\mathrm{I}(E,\phi)$ is H\"older continuous of exponent $0<\alpha<1/E-1/E'$, for every $E'>E$. 
\end{itemize} Starting form eq.(\ref{soluni}) and integrating by part we obtain: \begin{eqnarray}\label{devuni} f_\mathrm{I}(E,\phi)&=&1-\frac{h(E,\phi)}{\sin(\phi)}\int_\phi^\pi\cos\eta\, h(E,\eta+\pi)\,d\eta=\crcr &=&1+E\cos\phi+E\frac{h(E,\phi)}{\sin(\phi)}\int_\phi^\pi\cos2\eta\, h(E,\eta+\pi)\,d\eta=\crcr &=&1+E\cos\phi+E^2\cos(2\phi)+E^2\frac{h(E,\phi)}{\sin(\phi)} \int_\phi^\pi\partial_\phi[\cos2\eta\sin\eta]h(E,\eta+\pi)\,d\eta=\crcr &=&\sum_{i=0}^{N}E^nf_\mathrm{I}^{(i)}(\phi)+E^N R_N(E,\phi) \end{eqnarray} The above expansion coincides with the one obtained in eqs.(\ref{powexp}-\ref{gerarchia}). It is not difficult to see that $|R_N(E,\phi)|\leq KC^N N!$. Since it is clear from eq.\eqref{soluni} that $f_I(E,\phi)$ is not analytic in $E$ for small $E$, this inequality means that, as we discussed previously, the perturbative series for $f_\mathrm{I}(E,\phi)$ is at least asymptotic. Notwithstanding this, eq(\ref{devuni}) and the regularity properties of $f_\mathrm{I}(E,\phi)$ tell us that, for $E$ small, $f_\mathrm{I}(E,\phi)$ has a unique maximum at $\phi=0$ and a unique minimum at $\phi=\pi$. \subsection{Model II} \newcommand{f_\mathrm{II}}{f_\mathrm{II}} We can use the solution of Model I to get more analytical information on Model II. Proceeding as in eq.(\ref{steady1}) we write the solution of eq.(\ref{steady}) as \[ f_\mathrm{II}(E,\phi)=\frac{h(E,\phi)}{E\sin\phi}g(E,\phi) \] and obtain the representation for $g(E,\phi)$: \begin{equation} \partial_\phi g(E,\phi)=-\frac{1}{4E}h(E,\phi+\pi) \int_{-\pi}^{\pi} \left|\cos\left(\frac { \omega-\phi } {2}\right)\right|\frac{h(E,\omega+\pi)}{\sin(\omega+\pi)}g(E,\omega+\pi)\, d\omega \end{equation} from which, reasoning as in Model I, we get \begin{equation}\label{Perron} f_\mathrm{II}(E,\phi)=\frac{1}{4E\sin(\phi)}h(E,\phi)\int_\phi^\pi h(E,\eta+\pi)\int_{-\pi}^{\pi}\left|\cos\left(\frac{ \omega-\eta } {2}\right)\right|f(E,\omega+\pi)\,d\omega\, d\eta \end{equation} for $0<\phi<\pi$. We can then set $f_\mathrm{II}(E,-\phi)=f_\mathrm{II}(E,\phi)$. Observe that the above equation can be written has as \begin{equation}\label{PF} f_\mathrm{II}(E,\phi)=\int_{-\pi}^\pi Q(\phi,\omega)f_\mathrm{II}(E,\omega)\,d\omega \end{equation} where \[ Q(\phi,\omega)=\frac{1}{4E\sin(\phi)}h(E,\phi)\int_\phi^\pi h(E,\eta+\pi)\left|\sin\left(\frac{\omega-\eta } {2}\right)\right|\,d\eta \] for $0<\phi<\pi$ and $Q(-\phi,\omega)=Q(\phi,\omega)$. It is easy to see that $Q(\phi,\omega)>0$ for every $\phi,\omega$. Moreover we have \begin{eqnarray} \int_0^\pi Q(\phi,\omega)\,d\phi&=&\lim_{\epsilon\to 0}\int_\epsilon^{\pi-\epsilon} Q(\phi,\omega)\,d\phi=\crcr &=&\frac{1}{4}\int_0^\pi h(E,\phi)h(E,\phi+\pi) \left|\sin\left(\frac{\omega-\phi } {2}\right)\right|\,d\phi+\crcr &-&\frac{1}{4}\lim_{\epsilon\to 0}h(E,\epsilon)\int_\epsilon^\pi h(E,\eta+\pi)\left|\sin\left(\frac{\omega-\eta } {2}\right)\right|\,d\eta\crcr &+&\frac{1}{4}\lim_{\epsilon\to 0}h(E,\pi-\epsilon)\int_{\pi-\epsilon}^\pi h(E,\eta+\pi)\left|\sin\left(\frac{\omega-\eta } {2}\right)\right|\,d\eta=\crcr &=&\frac{1}{4}\int_0^\pi \left|\sin\left(\frac{\omega-\phi } {2}\right)\right|\,d\phi \end{eqnarray} where we have used that $h(E,\phi)=h(E,\phi+\pi)^{-1}$ and that, for $\epsilon$ small, $h(E,\epsilon)\simeq \epsilon^{1/E}$ while $h(E,\epsilon+\pi)\simeq \epsilon^{-1/E}$. Proceeding in the same way for $-\pi<\phi<0$, we get \[ \int_{-\pi}^\pi Q(\phi,\omega)\,d\phi=\frac{1}{4}\int_{-\pi}^\pi \left|\sin\left(\frac{\omega-\phi } {2}\right)\right|\,d\phi= 1 \] for every $\omega$. 
Finally, in the same way we got the regularity properties of $f_\mathrm{I}(E,\phi)$, we can see that, if $E<1$ then $Qf_\mathrm{II}$ is a H\"older continuous function with H\"older norm bounded by the $L^\infty$ norm of $f_\mathrm{II}$. This immediately implies, by the Ascoli-Arzel\`a theorem, that $Q$ is a compact linear operator on $C^0$. In this situation we can apply the Krein-Rutman theorem, see \cite{KR,Di}, and obtain that there is a unique function $f_\mathrm{II}(E,\phi)$ that satisfies eq.(\ref{PF}). Moreover $f_\mathrm{II}(E,\phi)>0$ for every $\phi$. A similar argument tells us that, for $E>1$, there is a unique solution of eq.(\ref{PF}) and it can be written as \[ f_\mathrm{II}(E,\phi)=|\sin(\phi)|^{\frac{1}{E}-1}l(E,\phi) \] with $l(E,\phi)$ continuous in $\phi$ and strictly positive. Observe that, for any integrable function $f(\phi)$, we have \begin{eqnarray} \partial_\phi \int_{-\pi}^{\pi}\left|\cos\left(\frac{\omega-\phi } {2}\right)\right|f(\omega+\pi)d\omega&=& \frac{1}{2}\int_{-\pi}^{\pi} {\rm sgn}(\omega-\phi)\left|\sin\left(\frac{\omega-\phi} {2}\right)\right|f(\omega+\pi)d\omega\crcr \partial_\phi^2 \int_{-\pi}^{\pi}\left|\cos\left(\frac{ \omega-\phi}{2}\right)\right|f(\omega+\pi)d\omega&=& -\frac{1}{4}\int_{-\pi}^{\pi}\left|\cos\left(\frac{ \omega-\phi } {2}\right)\right|f(\omega+\pi)d\omega+f(\phi)\nonumber \end{eqnarray} Thus the above integral is always at least $C^1$, while if $f(\phi)$ is $C^n$ it is $C^{n+2}$. This implies that $f_\mathrm{II}(E,\phi)$ has the same regularity properties as a function of $\phi$ as $f_\mathrm{I}(E,\phi)$. In particular if $E\leq 1/3$, $f_\mathrm{II}(E,\phi)$ is $C^2$ and we can try to compute $f_\mathrm{II}''(E,0)$ explicitly. Observe that eq.(\ref{steady}) tells us that \begin{equation} E\sin\phi f_\mathrm{II}'(E,\phi)+E\cos\phi f_\mathrm{II}(E,\phi)= -\frac{1}{4}\int_{-\pi}^{\pi}\left|\cos\left(\frac{ \omega-\phi}{2}\right)\right|f_\mathrm{II}(E,\omega+\pi)d\omega+f_\mathrm{II}(E,\phi) \end{equation} Evaluating at $\phi=0$ we get \begin{equation}\label{at0} f_\mathrm{II}(E,0)=\frac{1}{4(1-E)}\int_{-\pi}^{\pi}\cos\left(\frac{ \omega}{2}\right)f_\mathrm{II}(E,\omega+\pi)d\omega \end{equation} As expected, this equation loses its meaning when $E\geq 1$, since $f_\mathrm{II}(E,\phi)$ is no longer continuous at $\phi=0$. We can now differentiate both sides of eq.(\ref{steady}) and obtain, after evaluating at $\phi=0$, \[ (1-2E)f_\mathrm{II}'(E,0)=\frac{1}{8}\int_{-\pi}^{\pi}\sin\left(\frac{ \omega}{2}\right)f_\mathrm{II}(E,\omega+\pi)d\omega=0 \] for symmetry reasons. Again, this equation makes sense only if $E<1/2$. Finally, differentiating once more, we get \[ 3Ef_\mathrm{II}''(E,0)-Ef_\mathrm{II}(E,0)= \frac{1}{16} \int_{-\pi}^{\pi}\cos\left(\frac{ \omega}{2}\right)f_\mathrm{II}(E,\omega+\pi)d\omega-\frac{1}{4}f_\mathrm{II}(E,0)+f_\mathrm{II}''(E,0) \] Using eq.(\ref{at0}) we get \begin{equation}\label{at02} f_\mathrm{II}''(E,0)=-\frac{3}{4}\frac{E}{1-3E}f_\mathrm{II}(E,0) \end{equation} which is clearly negative for $E<1/3$, so that $f_\mathrm{II}(E,\phi)$ has a local maximum at $\phi=0$. Observe that expanding this formula in powers of $E$ we get a result in agreement with the expansion in eq.(\ref{power}): to second order, eq.(\ref{power}) gives $f_\mathrm{II}''(E,0)=-\frac{3}{4}E-\frac{45}{16}E^2+\ldots$, which coincides with the expansion of $-\frac{3}{4}\frac{E}{1-3E}f_\mathrm{II}(E,0)$ to the same order. \section*{Acknowledgment} The authors thank Eric Carlen for many insightful comments and discussions. The work of FB was supported in part by NSF grant 0604518. The work of NC was supported in part by NSF grant DMS-0969187. The work of JLL was supported in part by NSF grant DMR08021220 and by AFOSR grant AF-FA9550-07.
The authors are also grateful to the Alabama supercomputer administration for computational resources.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Rheological and geometric scaling of purely elastic flow instabilities} In the introduction, we discussed the TC instability of the purely Newtonian fluid and the purely elastic fluid, which are both idealizations that facilitate our analysis, but which capture only the behaviour of very specific fluids. In general, however, non-Newtonian fluids can exhibit other attributes such as a Newtonian solvent contribution to the stress, a spectrum of relaxation times instead of a single relaxation time $\tau$, and/or `shear thinning', \textit{i.e.} a decreasing viscosity with increasing shear rate~\cite{Larson99}. Experiments conducted on such non-Newtonian fluids have documented the effects of such fluid rheology on the elastic TC instability~\cite{Larson94}. To rationalize these observations as well as to generalize the elastic instability criterion to different kinds of flows with curved streamlines, McKinley and coworkers established a general criterion for elastic instabilities~\cite{Mckinley96,Mckinley96b}. If $Re\simeq 0$, then, viscoelastic fluids are unstable if $\frac{N_1}{T_{xy}}\frac{\ell}{\mathcal{R}}> m^2$, where $N_1\equiv T_{xx} -T_{yy}$ is the first normal stress difference~\cite{Larson99}, $T_{xy}$ is the shear stress, $\ell$ is the characteristic distance over which perturbations relax along a streamline~\cite{Mckinley96}, and $\mathcal{R}$ is the characteristic radius of curvature of the streamlines. For a purely viscoelastic fluid, $\ell \equiv U \tau \sim \Omega_i R_i \tau = Wi~d$, $\mathcal{R} \sim R_i$ and $\frac{N_1}{T_{xy}}=\frac{N_1}{T_{\theta r}} \sim Wi$~\cite{Mckinley96} and we recover the criterion of Larson \textit{et al.} for the purely elastic instability: $\Lambda Wi^2 > m^2 \Leftrightarrow \sqrt{\Lambda} Wi > m$ ~\cite{Larson90}. \\ In turn, framed with respect to the general criterion derived by McKinley \textit{et al.}~\cite{Mckinley96,Mckinley96b}, our goal is to determine the functional form of the dimensionless ratio $\frac{N_1}{T_{xy}}\frac{\ell}{\mathcal{R}}$ in terms of measurable quantities in the case of shear-banding flows. By analogy with polymer solutions, we would expect that this ratio can be expressed in terms of a relevant geometric ratio and an appropriately-defined Weissenberg number. \section{Effective gap} \begin{figure} \centering \includegraphics[trim = 0mm 0mm 0mm 0mm,width=6.5cm,clip]{./fig1.pdf} \caption{Effective gap scaling. (a) Overlay of two visualization techniques showing the secondary vortex flow in the high $Wi$ band for $\alpha\simeq 0.4$~\cite{Fardin09}. (b) Wavelength scaling, following $\lambda=n \alpha d$, with $n=3.8\pm 0.1$. For $\alpha>0.6$, the spatio-temporal dynamics of the vortex flow do not allow us to extract a single wavelength~\cite{Lerouge08,Fardin09}. For $\alpha<0.05$, the size of the band is smaller than our spatial resolution. \label{Fig1}} \end{figure} The relevant geometric ratio can indeed be inferred from experiments through the notion of an \textit{effective gap}. In our recent experiments~\cite{Fardin09,Fardin10}, we recognized that the vortices were mainly localized in the high $Wi$ band, and that each interfacial wavelength between the bands corresponded to a pair of counter-rotating vortices~\cite{Fardin09}, as illustrated in Fig. \ref{Fig1}a. In our previous publications, we had noticed that the wavelength increases upon increase of the global shear rate, so one could infer the scaling $\lambda/d \sim Wi$~\cite{Lerouge08,Fardin09}. 
Then, by combining this scaling and the lever rule we can establish that $\lambda = n \alpha d$ instead of $\lambda=n d$, where $n$ is a number of order unity, whose precise value depends on the boundary conditions. The extent of the high $Wi$ band acts as the effective gap. Increasing the global $Wi$ increases $\alpha$ and so increases $\lambda$. The validity of this scaling is shown in Fig.~\ref{Fig1}b by re-plotting $2\lambda/d$, \textit{i.e.} twice the wavelength of vortices, against $\alpha$ instead of $Wi$~\cite{Lerouge08}. \section{Local Weissenberg number} As explained in the introduction, in a shear-banding flow, the global value of $Wi$ is not a good measure of the local Weissenberg number in the parts of the flow that are unstable. Instead, the dimensionless group relevant to the flow instability is the local value of $Wi_h$ in the high shear rate band. In the instability criterion, one must replace $Wi$ by $Wi_h$. Accordingly, the criterion for elastic instabilities in shear-banding flows should involve the term \begin{equation} \Sigma^* = \sqrt{\alpha \Lambda} Wi_h \label{effgap} \end{equation} It has been observed in experiments that increasing the concentration ($c$) of surfactant or decreasing the temperature ($\theta$) tends to increase the value of $Wi_h$. This fact is illustrated in Fig.~\ref{Fig2}a in the flow curves of two different surfactant systems~\cite{Berret97,Cappelaere97}. Note that as $c$ increases, the dimensionless stress plateau decreases and its range of Weissenberg numbers increases. In particular, $Wi_h$ shifts to higher values. For the most concentrated solutions, viscometric measurements had to be aborted because the sample was ejected from the rheometer. We believe that this phenomenon is due to an instability of the free surface of the system, driven by the underlying bulk viscoelastic instability. However, we also note that the instability of the free surface could be triggered by second normal stress differences~\cite{Skorski11}. From Eq.~(\ref{effgap}), we note that solutions at high $c$ or low $\theta$ are more likely to be unstable, owing to the larger values of $Wi_h$.\\ \section{The case of dJS} \begin{figure} \centering \includegraphics[trim = 7mm 5mm 0mm 0mm,width=6cm,clip]{./Fig2a.pdf} \includegraphics[trim = 2mm 15mm 5mm 0mm,width=6.5cm,clip]{./Fig2b.pdf} \caption{Experimental and theoretical ``flow-phase diagrams''~\cite{Berret97}. (a) Open symbols: Measured dimensionless flow curves for varying [CTAB]=3,7,10,12,15,17,18,22wt.$\%$ at fixed [NaNO$_3$]=0.3M (replotted data adapted from Cappelaere \textit{et al.}~\cite{Cappelaere97}, permission from Springer). Closed symbols: Measured dimensionless flow curves for varying [CPCl+0.2NaSal]=2,4,6,8,10,12,21wt.$\%$ (courtesy of Berret \textit{et al.}~\cite{Berret97}). The arrow points in the direction of higher $c$ or lower $\theta$~\cite{Berret97}. In both cases, measurements were done using a cone-and-plate device. The flow curves of the two systems do not overlap, even when the stress and the shear-rate are scaled with $G_0$ and $\tau$ respectively, \textit{i.e.} in the framework of the dJS model, the two systems have a different value of the coefficient `\textit{a}'. (b) Analytical dimensionless flow curves obtained for the dJS model in simple shear~\cite{Sato10}. The different flow curves are obtained for varying $\eta$. The color map gives the value of the scaled dimensionless criterion $\widetilde{\Sigma_{dJS}}\equiv \Sigma_{dJS} \sqrt{\frac{1-a^2}{\Lambda}}$.
The arrow points in the direction of lower $\eta$. \label{Fig2}} \end{figure} So far, we have suggested a new relevant dimensionless group for elastic instabilities in shear-banding flows, without appealing to any particular rheological model. To reinforce our argument, we can investigate the form of the instability criterion for the diffusive Johnson-Segalman (dJS) model, which has been widely used to study shear-banding flows~\cite{Fielding07}. Recently, it has even been used in numerical simulations confirming the presence of a secondary vortex flow triggered by a bulk viscoelastic instability in the high $Wi$ band~\cite{Fielding10}. In this model, the stress is taken as the sum of a `polymeric' part $\bf{T^p}$ and a `solvent' part $\bf{T^s}$. The total viscosity of the fluid is the sum of a polymeric and solvent part $\eta_0=\eta_p + \eta_s$, with the zero shear rate value of the polymeric viscosity given by $\eta^0_p\equiv \tau G_0$. The polymeric stress varies non-monotonically with imposed shear rate and goes to zero at large $Wi$, such that $\eta_s$ is the asymptotic value of viscosity for $Wi\to\infty$. \\ To evaluate the expression $\frac{N_1}{T_{xy}}\frac{\ell}{\mathcal{R}}$, we need an analytic expression for the stress ratio in the high $Wi$ band. Let us symbolize this ratio by $[\frac{N_1}{T_{xy}}]_h$. In the small gap limit, we can assume that the stress profile across the gap is close to the profile in a plane Couette geometry. We can then use the inhomogeneous plane Couette solution recently derived by Sato \textit{et al.}~\cite{Sato10}.\\ In plane Couette flow of the dJS model, it is common to express the total shear stress as $T_{xy}= \frac{G_0}{\sqrt{1-a^2}} \sigma $, and the first normal stress difference as $N_1= \frac{2G_0}{1-a^2} N$~\cite{Sato10}. The parameter $a$ is the `slip parameter' of the dJS equation. Shear-banding happens if $|a|\neq 1$ and $\eta\equiv \frac{\eta_s}{\eta^0_p}<\frac{1}{8}$~\cite{Fielding07,Sato10}. $N$ is a dimensionless normal stress difference and $\sigma$ is a dimensionless total shear stress. In plane Couette flow, the momentum balance imposes that $\sigma$ is a constant across the gap, but $N(y)$ is a function of the position $y$ in the gap~\cite{Sato10}. For steady flow in the shear-banding regime, Sato \textit{et al.} established that \begin{align} \sigma &= 3\frac{\sqrt{\eta -\eta^2}}{\sqrt{2}}\\ N & =KS=K(\sigma-\eta K) \label{Neq} \end{align} where $K(y)\equiv \sqrt{1-a^2} Wi(y)$ and $S(y)\equiv \sqrt{1-a^2} \frac{T^p_{xy}(y)}{G_0}$ are respectively a dimensionless shear rate and a polymeric shear stress, both functions of the position in the gap. In dimensionless form, the addition of the polymeric and solvent shear stress is expressed by $\sigma\equiv S(y)+\eta K(y)$~\cite{Sato10}. In the shear-banding regime, Sato \textit{et al.} have found an analytic solution for the dimensionless shear rate profile $K(y)$ that follows a hyperbolic tangent profile between $K_l$ and $K_h$~\cite{Sato10}, with $K_l<K_h$ given by \begin{align} & K_{l} = \frac{\sqrt{1/\eta - 2} - \sqrt{1/\eta -8}}{\sqrt{2}} \\ & K_{h} = \frac{\sqrt{1/\eta - 2} + \sqrt{1/\eta -8}}{\sqrt{2}} \label{Kheq} \end{align} In the high shear rate band, $K\simeq K_h=\sqrt{1-a^2} Wi_h$. 
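As a side illustration (added here; not part of the original derivation), the expressions above are straightforward to evaluate numerically. The values of $\eta$ and $a$ in the sketch below are arbitrary sample choices:
\begin{verbatim}
# Illustration of the dJS plateau stress and band shear rates of Sato et al.
# eta and a are arbitrary sample values chosen only for illustration.
import numpy as np

eta = 0.05   # solvent/polymer viscosity ratio; shear banding requires eta < 1/8
a = 0.8      # slip parameter, |a| != 1

sigma = 3.0 * np.sqrt(eta - eta**2) / np.sqrt(2.0)                     # plateau stress
K_l = (np.sqrt(1.0/eta - 2.0) - np.sqrt(1.0/eta - 8.0)) / np.sqrt(2.0) # low-shear band
K_h = (np.sqrt(1.0/eta - 2.0) + np.sqrt(1.0/eta - 8.0)) / np.sqrt(2.0) # high-shear band

Wi_h = K_h / np.sqrt(1.0 - a**2)   # local Weissenberg number in the high-shear band
print(f"sigma = {sigma:.3f}, K_l = {K_l:.3f}, K_h = {K_h:.3f}, Wi_h = {Wi_h:.2f}")
\end{verbatim}
Lowering $\eta$ widens the gap between $K_l$ and $K_h$ and hence raises $Wi_h$, in line with the trend shown in Fig.~\ref{Fig2}b.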
Thus from eqs.~(2), (\ref{Neq}) and (\ref{Kheq}) we can obtain the following expressions \begin{align} \Big[\frac{N_1}{T_{xy}}\Big]_h &= \Big[\frac{N}{\sigma}\Big]_h ~\frac{2}{\sqrt{1-a^2}} \\ &= \frac{K_h(\sigma-\eta K_h)}{\sigma} \frac{2}{\sqrt{1-a^2}}\\ &= \frac{2}{3} Wi_h \Big(2 - \sqrt{\frac{1-8\eta}{1-2\eta}}\Big) \label{stressratioWih} \end{align} Then, overall, if we set $\ell\sim Wi_h \alpha d$ and $\mathcal{R}\sim R_i$, we get \begin{equation} \Sigma_{dJS} = \sqrt{\alpha \Lambda} Wi_h f(\eta) =\Sigma^* f(\eta) \label{dJSscaling} \end{equation} Therefore, the result we obtain using dJS is slightly more complex than the naive criterion $\Sigma^*$ since it also depends on the viscosity ratio. For shear-banding we require $\eta<1/8$, so we have $0.7 \lesssim f(\eta) \lesssim 1.3$. This result is indeed not surprising, since we had obtained $\Sigma^*$ in analogy with the purely elastic case derived using the Upper Convected Maxwell model, where $\eta=0$~\cite{Larson90}. In the homogeneous and non-shear-banding elastic case, adding a Newtonian solvent also modifies the dimensionless group by the addition of a function $f^\star(\eta)\simeq \sqrt{\frac{2}{1+\eta}}$~\cite{Mckinley96}. \\ Note that the expression for $\Sigma_{dJS}$ can also be expressed in terms of the two dimensionless variables $K$ and $\sigma$. Indeed, from the lever rule, $\alpha=\frac{K-K_l}{K_h-K_l}$, and from eqs. (4) and (5), $K_l$ and $K_h$ can be expressed in terms of $\eta$, which can be subsequently expressed in terms of $\sigma$ using eq. (2). Ultimately, one can reach the following equivalent alternative expression for $\Sigma_{dJS}$: \begin{equation} \Sigma_{dJS} = \sqrt{\frac{\Lambda}{1-a^2}} \widetilde{\Sigma_{dJS}}(K, \sigma) \label{dJSscaling2} \end{equation} where $\widetilde{\Sigma_{dJS}}(K, \sigma)= (2\sqrt\frac{K}{3\sigma}-\sqrt\frac{\sigma}{3K})+\mathcal{O}[\sigma^{3/2}]$ is a function of $K$ and $\sigma$ only, whose precise functional form is a little too cumbersome to be written explicitly. Figure~\ref{Fig2}b plots the flow curves computed from eqs. (2), (4) and (5)~\cite{Sato10}, together with the magnitude of $\widetilde{\Sigma_{dJS}}$. We can see that as the shear rate is increased, the proportion of the high $Wi$ band increases, the magnitude of the scaled criterion $\widetilde{\Sigma_{dJS}}$ increases and the flow is increasingly prone to instability. By comparing the experimental flow curves in Fig.~\ref{Fig2}a and the flow curves derived in the case of dJS in Fig.~\ref{Fig2}b one can see that the effect of decreasing the Newtonian solvent contribution $\eta$ to the total stress is similar to the effect of increasing the concentration of surfactant, or decreasing the temperature. \section{Boundary conditions and classes of unstable shear-banding flows} \begin{figure} \centering \includegraphics[width=7.5cm,clip]{./Fig3.pdf} \caption{Schematic instability diagram in the plane (Max[$Wi_h$,$Wi$],$\alpha\Lambda$). The black lines represent the stability limits for soft and hard boundaries, $\Sigma_{sb}=m_s \Leftrightarrow \alpha\Lambda = (m_s/Wi_h)^2$ and $\Sigma_{sb}=m_h \Leftrightarrow \alpha\Lambda = (m_h/Wi_h)^2$, where we have arbitrarily chosen $m_s=1$ and $m_h=3$. The dashed black line represents the value of $1\times \Lambda=1.13/13.33$, the maximum curvature corresponding to our recent experiments~\cite{Lerouge06,Lerouge08,Fardin09,Fardin10}. Above this line, the shaded region is inaccessible.
The three paths \textit{1.}, \textit{2.} and \textit{3.} illustrate the three possible types of shear-banding. The direction of the arrows represent the path followed by the state of the flow as the global Weissenberg $Wi$ is increased. $\alpha_c$, $\alpha_{c1}$ and $\alpha_{c2}$ are the critical proportions of the high $Wi$ band at which the flow state crosses a stability limit. $Wi_c$ is the threshold at which the type \textit{1.} trajectory becomes unstable for the first time, and at which the type \textit{2.} trajectory becomes unstable after a short relaminarization. \label{Fig3}} \end{figure} Generally, we expect the relevant dimensionless group for elastic instability in shear-banding flows to be $\Sigma_{sb} \equiv \Sigma^* f^*(\eta)$, where $f^*$ is a function of the ratio between the zero shear and infinite shear viscosities. We expect the specific form of $f^*$ to depend on the constitutive model used to study shear-banding~\cite{Cates06}. Elastic instabilities will generate a secondary vortex flow with wavelength $\lambda=n \alpha d$ for $\Sigma_{sb}>m$. As mentioned already, the precise values of $n$ and $m$ depend on the boundary conditions. Of prime importance are the values of $m$ obtained for `soft' ($m_s$) or for `hard' ($m_h$) boundary conditions~\cite{Fardin10}. Essentially, the `hard' case usually corresponds to a no-slip Dirichlet boundary condition, while the `soft' case usually corresponds to imposing only continuity of the stress, \textit{i.e.} a Neumann boundary condition. In both the purely inertial case~\cite{Chandrasekhar81} and the purely elastic case~\cite{Khayat99}, it is known that $m_s<m_h$. For a banded flow with $Wi\in[Wi_l,Wi_h]$, the interface with the low $Wi$ band acts as a soft boundary for the high $Wi$ band. But for $Wi \geqslant Wi_h$, $\alpha=1$, the flow becomes homogeneous again and the boundary switches from soft to hard. \\ Therefore, for a given geometry, \textit{i.e.} a given value of $\Lambda$, we can use basic Boolean logic to classify shear-banding flows into three possible categories depending only on the value of $Wi_h$: \\ \textit{1.} For sufficiently low $Wi_h$--\textit{i.e.} high $\theta$ and low $c$--the shear-banding flow is stable for any $\alpha$, since $\Sigma_{sb}< m_s$ even for $\alpha=1$. The flow can then become unstable for Weissenberg numbers above a critical value $Wi_c>Wi_h$ as in the case of a regular viscoelastic fluid, \textit{i.e.} following the scaling $\Sigma_i =\sqrt{\Lambda}Wi$.\\ \textit{2.} For intermediate values of $Wi_h$--\textit{i.e.} intermediate $\theta$ and $c$--the shear-banding flow is unstable above a critical value $\alpha_c$ when $\Sigma_{sb} > m_s$ for $\alpha>\alpha_c$. Then as the imposed shear rate is increased and $\alpha\to1$ the boundary conditions change and the flow is stabilized, because the flow is below the threshold $m_h$. Eventually for $Wi>Wi_c>Wi_h$ the flow becomes unstable again. This case was the one we observed in our recent experiments~\cite{Fardin10}. \\ \textit{3.} Finally, if $Wi_h$ is high enough--\textit{i.e.} for low $\theta$ and high $c$--we have two critical band widths $\alpha_{c1}$ and $\alpha_{c2}$. For $\alpha>\alpha_{c1}$, $\Sigma_{sb} > m_s$. And for $\alpha>\alpha_{c2}$, $\Sigma_{sb} > m_h$. In this case, there is no stabilization for $Wi>Wi_h$. 
The flow remains unstable, although the spatiotemporal characteristics may change \\ The three possible shear-banding scenarios can be illustrated on a stability diagram in the plane (Max[$Wi_h$,$Wi$],$\alpha\Lambda$), as presented in Fig.~\ref{Fig3}. When the global Weissenberg number $Wi$ is increased above $Wi_l$, the flow state is given by a constant abscissa depending on the value of $Wi_h$ (which is a function of the concentration and temperature of the solution). As $Wi$ increases, the thickness of the high shear rate band $\alpha$ increases and so the state of the flow moves vertically to larger ordinates. Once the entire gap is filled, $\alpha\Lambda$ reaches its maximum depending on the geometry of the chosen TC system. Then, since $Wi>Wi_h$, the state of the flow is given by a constant ordinate $\Lambda$ and moves horizontally as $Wi$ increases. Any flow state with $\alpha\Lambda<\Lambda$ will be stable if below the stability limit $\Sigma_{sb}=m_s$, and unstable if above $\Sigma_{sb}=m_s$ and \textit{a fortiori} if above $\Sigma_{sb}=m_h$. Any flow state with $\alpha\Lambda=\Lambda$ will be stable if on the left of the stability limit $\Sigma_{sb}=m_h$, and unstable otherwise. \section{Interaction with interface modes} So far, we have only considered elastic instabilities arising in the bulk of the high $Wi$ band. But there exist other elastic instability mechanisms~\cite{Larson92}. In particular, Fielding has shown that the jump in normal stresses between the bands could generate interfacial modes, even in plane Couette flow~\cite{Fielding07}. In her recent study in TC flow, Fielding suggested that the interfacial and bulk elastic modes lie in two separate regions of the space ($\Lambda$,$N_1|_h$), \textit{i.e.} of the space ($\Lambda$,$Wi_h$)~\cite{Fielding10}. The bulk mode prevails at high $Wi_h$ and high curvature $\Lambda$. The interfacial mode prevails at low $Wi_h$ and low $\Lambda$. Fielding's study would suggest the existence of another unstable region in the lower left corner of the stability diagram sketched in Fig.~\ref{Fig3}. Nonetheless, only axisymmetric perturbations were considered in Fielding's study~\cite{Fielding10}, and the stability analysis was performed for a single value of $\alpha$ and $\eta$. Interfacial and bulk modes may actually interact through non-axisymmetric mechanisms~\cite{Morozov11}. \section{Wall slip and non-local effects} We believe that the instability criterion we have derived for shear-banding flows can be a powerful guide to interpret experiments on wormlike micelles. Nonetheless, the criterion is fallible. In particular, we think that two additional phenomena can strongly compromise the validity of our scaling, since both have been shown to be relevant in some experimental situations. In both phenomena the local Weissenberg value in the high shear-rate band may not be equal to the upper boundary of the shear-banding regime on the flow curve. The first phenomenon is wall slip, which has been reported recently and may actually be a common feature of many shear-banding flows~\cite{Lettinga09}. The second phenomenon is geometric confinement. The present scaling may be inadequate if `non-local effects' become dominant~\cite{Masselon08}. `Non-local effects' are apparent in confined geometries when the size $d$ becomes comparable to the typical interfacial width $\xi\sim \mu m$, linked to the stress diffusion coefficient~\cite{Fielding07,Sato10}. 
Even in a macroscopic geometry with $d\gg \xi$, non-local effects can be important when the lateral extent of one of the bands is very small, \textit{i.e.} $\alpha\simeq 0$ or $\alpha\simeq 1$. Those effects were ignored in the analytic solution for dJS proposed by Sato \textit{et al.} but can actually be derived directly from the dJS equations~\cite{Fardin11}.\\ \indent In summary, we have derived a useful dimensionless criterion to rationalize the onset of secondary flows in the base shear-banding flow of wormlike micelles. The validity of the criterion for the case of dJS could be checked by numerical simulations for various values of the solvent ratio $\eta$, and for a range of gap spacings ($\Lambda$) and Weissenberg numbers. On the experimental side, we are currently undertaking a large study of the stability of shear-banding flows for many different surfactant types, concentrations and temperatures. Preliminary results confirm the existence of the three distinct scenarios that we derived here. Ultimately, the criterion could be extended to other flows with curved streamlines, if the localization and number of bands are known. \acknowledgments The authors thank S. Asnacios, O. Cardoso, S.M. Fielding, A.N. Morozov, S.J. Muller and C. Wagner for fruitful discussions. M.A.F. thanks the Fulbright Commission for its support. T.J.O. acknowledges the NSF-GRF for funding.
{ "file_path": "/home/ubuntu/dolma-v1_7/arxiv-0016.json.gz" }
\section{Introduction}\label{intro} Let $n\geq1$ be an integer and let $f \in L^p(\textbf{R}^{n+1})$. If $p=1$, then $\widehat{f}$ is a continuous function, and hence can be restricted to any hypersurface; whereas if $p=2$, $\widehat{f}$ can be an arbitrary square integrable function, meaning that it cannot be restricted to any measure zero set. The restriction problem concerns what happens if $1<p<2$. It is easy to concoct examples showing that we cannot meaningfully restrict $\widehat{f}$ to a hyperplane if $p>1$. So it was a surprising discovery when Stein in 1967 observed that for curved hypersurfaces the situation is different, and one can, for certain values of $p$ depending on the surface chosen, restrict $\widehat{f}$. The restriction problem, essentially, is to determine the range of $p$. The rigorous formulation of this problem is as follows. Let $S$ be a smooth compact hypersurface with boundary in $\textbf{R}^{n+1}$. We say that the linear restriction estimate $R_S(p \rightarrow q)$ holds if \begin{equation} \|\widehat{f} \|_{L^q(S,d\sigma)} \leq C_{p,q,S}\|f\|_{L^{p}(\textbf{R}^{n+1})} \end{equation} for all Schwartz functions $f$ on $\textbf{R}^{n+1}$. Equivalently one can use the following formulation. We say that the linear adjoint restriction estimate $R^*_S(p \rightarrow q)$ holds if \begin{equation}\label{eqi2} \|\widehat{fd\sigma} \|_{L^q(\textbf{R}^{n+1})} \leq C_{p,q,S}\|f\|_{L^{p}(S,d\sigma)} \end{equation} for all $C^{\infty}$ functions $f$ on $S$. This problem was posed by Stein in \cite{S1}. It is well understood for $n=1$ but wide open for $n\geq2$. It is known to be connected to other central problems in harmonic analysis such as the Kakeya conjecture and the Bochner-Riesz conjecture; for the exact nature of these connections see e.g. \cite{W2}, \cite{La}, \cite{T3}, \cite{JB2}. Consider the following variant of this problem: Let $S_1, S_2$ be two smooth compact hypersurfaces in $\textbf{R}^{n+1}$ with Lebesgue measure $d\sigma_1$ and $d\sigma_2$ respectively. We say that the bilinear adjoint restriction estimate $R^*_{S_1,S_2}(2\times2 \rightarrow q)$ holds if one has \begin{equation}\label{be} \|\widehat{f_1d\sigma_1}\widehat{f_2d\sigma_2}\|_{L^q(\textbf{R}^{n+1})}\leq C_{q,S_1,S_2}\|f_1\|_{L^2(S_1,d\sigma_1)}\|f_2\|_{L^2(S_2,d\sigma_2)} \end{equation} for all smooth functions $f_1, f_2$ supported respectively on $S_1,S_2$. Historically the first incentive to study this problem was to attack (\ref{eqi2}) in the special case $q=4$ by squaring both sides and studying the resulting bilinear estimate; see e.g. \cite{F}, \cite{Sj}. Later this idea was extended to other values of $q$. In \cite{JB1}, \cite{TVV} it was observed that if $S_1, S_2$ satisfy certain transversality conditions, further estimates, which are not available for arbitrary $S_1,S_2$, become available. What is more, these estimates can then be used to obtain new linear restriction estimates; see \cite{TVV}, \cite{TV1}, \cite{W1}. These advantages motivate the study of this type of restriction estimate, which at first look seems more complicated and less hopeful. Let $S_1, S_2$ be compact, transverse subsets of the light cone \[\{(x,t)\in \textbf{R}^{n+1}: |x|=|t| \}\] or compact, transverse subsets of the paraboloid \[\{(x,t)\in \textbf{R}^{n+1}: t=-\frac{1}{2}|x|^2\}.\] The study of bilinear extension estimates for such $S_1, S_2$ dates back to the Carleson-Sj\"{o}lin theorem, which states that $R^*_{S_1,S_2}(2\times2 \rightarrow q)$ holds for $q=2$ when $n=1$; see \cite{CS}.
This theorem is known to be optimal, that is, for $n=1$ going below $q=2$ is not possible. For the cone case, Bourgain proved that going below the exponent $2$ is possible when $n=2$, and that $q\geq 2-13/2048$ is enough; see \cite{JB1}. Then in 1997 Klainerman and Machedon observed that for $n\geq 2$ the condition $q \geq \frac{n+3}{n+1}$ is necessary, and conjectured that this condition suffices. For the cone case further progress came from Tao and Vargas, who proved that when $n=2$, it suffices to have $q \geq 2-8/121$ in \cite{TV1}. Then Wolff made a great breakthrough and settled the conjecture of Klainerman and Machedon except for the endpoint in \cite{W1}, with the endpoint being attained shortly afterwards by Tao in \cite{T1}. In the paraboloid case, the first progress came when Tao, Vargas and Vega in \cite{TVV} proved that for the special case $n=2$, $q\geq 2-5/69$ suffices. Tao and Vargas furthered this to $q \geq 2-2/17$ in \cite{TV1}. Finally in \cite{T2} Tao proved the conjecture except for the endpoint. The endpoint is known only for the cylindrically symmetric case; see \cite{SS}. Resolution of the Klainerman-Machedon conjecture is also important for its applications to various problems. One important application is to PDE. The cone and the paraboloid are related to solutions of the wave equation and the Schr\"{o}dinger equation respectively. One can thus reformulate these results in terms of solutions of the wave equation and the Schr\"{o}dinger equation, and then apply these to obtain null form estimates, which, in turn, are important for the study of nonlinear PDE; see \cite{FK}, \cite{T1}, \cite{DT}, \cite{LV1}, \cite{LVR}. Actually we will also use this PDE formulation in this paper. It is this connection that motivated the study of these conjectures for mixed norms, and progress has been made in \cite{LV1} in this direction. A second application is to the Bochner-Riesz conjecture: the progress in the paraboloid case was used to obtain the best known exponent for the Bochner-Riesz problem; see \cite{LBR}. Another application is to the Falconer distance set problem. By proving a weighted bilinear restriction estimate for the paraboloid, Erdo\u{g}an improved the known bound for the distance set problem; see \cite{BE1},\cite{BE2}. This idea was also applied to obtain bounds for distance sets defined with respect to non-Euclidean distance functions; see \cite{La} and references therein. There has also been some effort to extend the known results for the cone and the paraboloid to more general curved surfaces; see \cite{Lee}, \cite{V1}. We now restrict our attention to the cone case, which our result concerns. We reformulate the problem as mentioned above. Let a function $\phi :\textbf{R}^{n+1}\rightarrow H$ be a red wave if $H$ is a finite dimensional complex Hilbert space, and if its spacetime Fourier transform $\widehat{\phi}$ is an $L^2$ measure on the set \[ \Sigma^{R}:=\left\{ (\xi,|\xi|) \in \textbf{R}^{n+1}: \angle(\xi,e_1) \leq \pi/8,\ 1 \leq |\xi| \leq 2 \right\}\] where $e_1$ is a fixed basis vector.
Similarly let a function $\psi :\textbf{R}^{n+1}\rightarrow H'$ be a blue wave if $H'$ is a finite dimensional complex Hilbert space , and if $\widehat{\psi}$ is an $L^2$ measure on the set \[ \Sigma^{B}:=\left\{ (\xi,-|\xi|) \in \textbf{R}^{n+1}: \angle(\xi,e_1) \leq \pi/8 ,\ 1 \leq |\xi| \leq 2 \right\}.\] Let \textit{energy} for red and blue waves be defined as \begin{equation}\label{eq1.1} E(\phi):=\left\| \phi(t) \right\|^2_2,\ E(\psi):=\left\| \psi(t) \right\|^2_2 \end{equation} where $\phi(t)$, $\psi(t)$ are given by $\phi(t)(x):=\phi(x,t)$, $\psi(t)(x):=\psi(x,t)$. This definition is independent of time $t$. Then by the results of Wolff and Tao \begin{equation}\label{e1.2} \left\|\phi \psi \right\|_p \lesssim E(\phi)^{1/2}E(\psi)^{1/2} \end{equation} holds for $p \geq \frac{n+3}{n+1}$. Here and in what follows implicit constants do not depend on $H, H'$, and $\phi\psi:\textbf{R}^{n+1} \rightarrow H\otimes H'$ where $\otimes$ denotes the tensor product. The following is our main theorem. \begin{theorem}\label{t1.2} Let $\phi,\psi$ be respectively red and blue waves. Let $1 < p,q \leq 2$ be such that $1/q \leq (\frac{n+1}{2})(1-1/p)$ and $1/q <\min(1,\frac{n+1}{4})$. Then we have \begin{equation}\label{eqi.3} \left\|\phi \psi \right\|_{L_t^qL_x^p} \lesssim E(\phi)^{1/2}E(\psi)^{1/2} . \end{equation} \end{theorem} Lee and Vargas proved this theorem with $1/q < (\frac{n+1}{2})(1-1/p)$ in \cite{LV1}, so we extend that result to the endline. We now describe examples showing that the conditions $1/q \leq (\frac{n+1}{2})(1-1/p)$, $1/q <\min(1,\frac{n+1}{4})$ are necessary. These examples are similar to the ones given in \cite{FK}. \vspace{3mm} \begin{flushleft}\textbf{Example 1.} Let's denote an element $\xi \in \textbf{R}^{n}$ as $\xi =(\xi_1, \xi')$. Then let \end{flushleft} \[S:= \{ \xi \in \textbf{R}^{n}: |\xi_1-3/2|<\epsilon^2, |\xi'|<\epsilon \}.\] Then one has $|S|\approx \epsilon^{n+1}$. Define two functions $R,B$ on respectively on $\Sigma^{R}$ and $\Sigma^{B}$ as follows \[R(\xi,|\xi|):=\chi_S(\xi), \ \ \ \ B(\xi,-|\xi|):=\chi_S(\xi).\] Thus, these are the characteristic functions of projections of the set $S$ to $\Sigma^{R}$ and $\Sigma^{B}$ respectively. Let $d\sigma$ denote the surface measure of the cone. Define red and blue waves $\phi, \psi$ by \[\widehat{\phi}:=Rd\sigma, \ \ \ \ \widehat{\psi}:=Bd\sigma.\] So we have \[\|\phi\|_2=\|\widehat{\phi}\|_2 \approx \epsilon^{\frac{n+1}{2}}\] and similarly \[\|\psi\|_2=\|\widehat{\psi}\|_2 \approx \epsilon^{\frac{n+1}{2}}.\] On the other hand by the uncertainty principle both $|\phi|,|\psi|$ are comparable to $\epsilon^{n+1}$ on a rectangular box that has a spatial area $\approx \epsilon^{-(n+1)}$ for $|t| \lesssim \epsilon^{-2} $. Then one obtains \[ \|\phi\psi\|_{L_t^qL_x^p} \gtrsim \epsilon^{2n+2}\epsilon^{-2/q}\epsilon^{-(n+1)/p}.\] Thus we need \[\epsilon^{2n+2}\epsilon^{-2/q}\epsilon^{-(n+1)/p} \lesssim \epsilon^{n+1}\] which implies \[\frac{1}{q}\leq (\frac{n+1}{2})(1-\frac{1}{p}).\] \vspace{3mm} \begin{flushleft}\textbf{Example 2.} Let \end{flushleft} \[ S_1:=\{\xi \in \textbf{R}^{n}: |\xi_1-3/2|<1/4, |\xi'|<\epsilon \}. \] Let $S_2$ be the set formed by intersection of a space-time slab of thickness $\epsilon^2$ whose normal is parallel to that of $S_1$ with $\Sigma^B$. Then we have $|S_1|\approx \epsilon^{n-1}$ and $|S_2|\approx \epsilon^2$. 
Define two functions $R,B$ respectively on $\Sigma^{R}$ and $\Sigma^{B}$ as follows \[R(\xi,|\xi|):=\chi_{S_1}(\xi), \ \ \ \ B:=\chi_{S_2}.\] Thus, $R$ is the characteristic function of the projection of the set $S_1$ to $\Sigma^{R}$, and $B$ is the characteristic function of $S_2$. Let $d\sigma$ denote the surface measure of the cone. Define red and blue waves $\phi, \psi$ by \[\widehat{\phi}:=Rd\sigma, \ \ \ \ \widehat{\psi}:=Bd\sigma.\] So we have \[\|\phi\|_2=\|\widehat{\phi}\|_2 \approx \epsilon^{\frac{n-1}{2}}\] and similarly \[\|\psi\|_2=\|\widehat{\psi}\|_2 \approx \epsilon.\] On the other hand by the uncertainty principle $|\phi|$ is comparable to $\epsilon^{n-1}$, and $|\psi|$ is comparable to $\epsilon^2$ on a rectangular box that has spatial area comparable to $1$ for all $t \lesssim \epsilon^{-2}$. Thus we obtain \[\|\phi\psi\|_{L_t^qL_x^p} \gtrsim \epsilon^{n+1}\epsilon^{-2/q}.\] Hence we need \[\epsilon^{n+1}\epsilon^{-2/q} \leq \epsilon^{(n+1)/2}.\] This implies \[\frac{1}{q} \leq \frac{n+1}{4}.\] \vspace{2mm} \begin{flushleft} \textbf{\textit{List of Notation}}\hfill $D=D(x_D,t_D;r_D):=\{ (x,t_D): |x-x_D|\leq r_D \}.$ \hfill $D^{ext}=D^{ext}(x_D,t_D;r_D):= \{(x,t_D): |x-x_D|>r_D\}.$\hfill $Q(x_Q,t_Q;r_Q):$ $(n+1)$-dimensional cube in $\textbf{R}^{n+1}$ centered at $(x_Q,t_Q)$ with side-length $r_Q$ and sides parallel to the axes. The life-span of such a cube is defined to be the interval $[t_Q-\frac{1}{2}r_Q, t_Q+\frac{1}{2}r_Q]$. $cQ:=Q(x_Q,t_Q;cr_Q)$ $Q^{ann}(x_Q,t_Q;r_1,r_2):=Q(x_Q,t_Q;r_2)\setminus Q(x_Q,t_Q;r_1)$ $\underline{\Sigma}:= \{ \xi \in \textbf{R}^{n} : 1/2\leq |\xi| \leq 4, \angle(\xi,e_1)\leq \pi/4 \}.$ $C^{R}(x_0,t_0):=\{(x_0+r\omega,t_0-r) \in \textbf{R}^{n+1}:r\in R, \omega \in S^{n-1} \cap \underline{\Sigma} \}.$ $C^{B}(x_0,t_0):=\{(x_0+r\omega,t_0+r) \in \textbf{R}^{n+1}:r\in R, \omega \in S^{n-1} \cap \underline{\Sigma} \}.$ $C^{R}(x_0,t_0;r),C^{B}(x_0,t_0;r): r$ neighborhoods of $C^{R}(x_0,t_0), C^{B}(x_0,t_0)$ respectively. $C^P(x_0,t_0;r):=C^{R}(x_0,t_0;r)\cup C^{B}(x_0,t_0;r)$ \textbf{T}: The time reversal operator given by $\textbf{T}\phi(x,t)=\phi(x,-t).$ \end{flushleft} \section{Preliminaries}\label{s2} Our proof will mainly follow Tao's proof in \cite{T1}. Here is a sketch of the proof, in which we will gloss over technical details and try to convey the main ideas of this complicated proof. First we will localize the estimate (\ref{eqi.3}) to cubes of side-length $R$. Then, by the monotone convergence theorem, finding a bound independent of $R$ suffices. We observe that once localized, using the definition of energy and the H\"{o}lder inequality we can obtain a trivial $L^1$ estimate. So if we can prove a favorable $L^2$ estimate, using the H\"{o}lder inequality and interpolation we can control the localized forms of (\ref{eqi.3}). We are not able to prove such an estimate directly for our waves, but we can still apply this strategy partially as follows. We localize our waves to sub-cubes using the standard tool of wave packet decomposition. Then, for waves localized to sub-cubes, we can obtain favorable $L^2$ estimates on the other sub-cubes, which we do by using the wave packet decomposition, and applying our strategy above yields a constant term. But we still need to estimate the localized waves on the cubes to which they are localized. In this case we do not estimate them at all, and just use the fact that we are estimating them at a lower scale. Thus we are able to bound an estimate at a scale with an estimate at a lower scale plus a constant.
This is the induction on scales method of Wolff; using this technique he settled the Klainerman-Machedon conjecture for the cone, except for the endpoint; see \cite{W1}. Yet due to the endpoint/endline nature of the problem, the error term is a constant instead of a negative power of $R$ (which is the case for the non-endpoint/non-endline problem), so we are not able to run an induction on scales. Arguments used up to this point are enough to prove non-endpoint or non-endline results, and are used in \cite{W1}, \cite{LV1} to obtain these results. At this stage one needs the observation that it is the concentration of energy of both $\phi$ and $\psi$ in a disk of small radius compared to the side-length of our cubes that troubles us. In the absence of this problem, one can improve the $L^2$ estimate slightly to obtain the endpoint/endline results. But it is also certain that one cannot escape concentration, for concentration as defined here depends on the side-length of the cube to which we localize: as the scale gets larger, previously non-concentrated waves become concentrated. So concentration must also be dealt with. To do this one needs a second observation: if both $\phi$ and $\psi$ concentrate on a disk then $\phi\psi$ concentrates on the double light cone generated by that disk. This phenomenon is called Huygens' principle. Restricting to this set one can get a better $L^1$ estimate due to the transversality of the Fourier supports of $\phi$ and $\psi$. This allows one to obtain a better error term when controlling the estimate at scale $R$ with the estimate at a lower scale, and hence do induction on scales without the problem one encounters in the process described above. This means that if we can pass back and forth between estimates for cubes and estimates for cone neighborhoods, we can exploit this second observation to deal with the concentration. Having dealt with both concentrated and non-concentrated cases, we combine them to obtain a uniform bound on localized estimates for the cubes. The two observations above are due to Tao, and using these he proved the endpoint case of the Klainerman-Machedon conjecture for the cone; see \cite{T1}. Now we perform some reductions. Firstly, it is clear that it suffices to prove Theorem \ref{t1.2} for waves satisfying the energy normalization \[E(\phi), E(\psi)=1.\] We will exploit this in some of our propositions. Secondly, it suffices to prove our theorem only for the endline, since for the non-endline cases it is already known from \cite{LV1}. Finally, observe that \begin{equation}\label{eq2.1} \begin{aligned} \|\phi \psi\|_{L^{\infty}_tL^{1}_x} =\sup_t \int | \phi(t) \psi(t)|dx &\leq \sup_t \|\phi(t)\|_{L^2}\|\psi(t)\|_{L^2}\\ &\leq E(\phi)^{1/2}E(\psi)^{1/2} . \end{aligned} \end{equation} So if (\ref{eqi.3}) is correct for $1< p,q < \infty$ such that $1/q <\min (1,(n+1)/4), \ 1/q=\frac{n+1}{2}(1-\frac{1}{p})$, then interpolating with (\ref{eq2.1}) it is correct for each point on the line between $(1,\infty)$ and $(p,q)$. So it is enough to prove that (\ref{eqi.3}) holds for $(p,q)$ with $1/q$ arbitrarily close to $\min (1,(n+1)/4)$. Hence for any $n \geq 2$ fix $0< \epsilon < \frac{1}{10n}$ and let $1 \leq p,q \leq \infty$ be such that $1/q=\min (1,(n+1)/4)-\epsilon$, $1/q=\frac{n+1}{2}(1-\frac{1}{p})$. The requirement on $\epsilon$ ensures $p>q$, so we will be able to use Lemma \ref{l2.1} below. Now we introduce the constants that will be used throughout the proof. Let $N$ denote the large integer $N=2^{n^{10}}$; thus $N$ depends only on $n$.
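As an added worked example (not part of the original argument), it may help to record what this choice of exponents looks like in the lowest dimension $n=2$: there $\min(1,(n+1)/4)=3/4$, so \[ \frac{1}{q}=\frac{3}{4}-\epsilon, \qquad \frac{1}{p}=1-\frac{2}{3}\Big(\frac{3}{4}-\epsilon\Big)=\frac{1}{2}+\frac{2\epsilon}{3}, \] and $p>q$ amounts to $\frac{1}{2}+\frac{2\epsilon}{3}<\frac{3}{4}-\epsilon$, i.e.\ $\epsilon<\frac{3}{20}$, which indeed holds under the standing assumption $\epsilon<\frac{1}{10n}=\frac{1}{20}$.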
Let $C_0=2^{\left\lfloor N/{\epsilon}\right\rfloor^{10}}$. So $C_0$ is much larger than $N$ and $1/\epsilon$. We will also use the following much larger constant: $C_1=2^{{C_0}^{10}}$. Throughout the proof $C$ will denote various large numbers that vary from line to line and that may depend on $N, \ \epsilon$ but not on $C_0$ and the dimension of $H,H'$. That $C$ may depend on $\epsilon$ is not a problem since the implicit constant of (\ref{eqi.3}) also depends on $(p,q)$. So we have $C<N<C_0<C_1$ and each of these quantities dominates any reasonable quantity arising from quantities smaller than it. We shall use $A \lesssim B$ to mean $A \leq CB$, and $A \approx B$ for $A \lesssim B$ and $B \lesssim A$. Let's examine red and blue waves more closely. Clearly blue waves are time reversals of red waves. Both red and blue waves solve the free wave equation, but propagate along different sets of characteristics. Due to their compact Fourier supports, waves are smooth and bounded. The following machinery will help us understand the propagation of waves. Since the wave equation is a constant coefficient linear PDE, using the Fourier transform in the space variables we can obtain an evolution operator, defined on the Fourier side, that, given the value of a wave at one time, allows one to calculate it at any other time. For our red waves this operator takes the following form. Let $a(\xi)$ be a fixed bump function supported on $\underline{\Sigma}$ which is equal to 1 on the spatial projection of $\Sigma^{R}$ and $\Sigma^{B}$. Then the evolution operator is $U(t)$ defined by \[ \widehat{U(t)f}(\xi):= a(\xi)e^{2\pi it|\xi|}\widehat{f}(\xi). \] As this evolution operator is defined by multiplication in frequency space, it translates to a convolution with a kernel $K_t$ when we take the inverse Fourier transform of both sides. This convolution kernel is given by \[K_t(x)=\int a(\xi)e^{2\pi i(x\cdot \xi +t|\xi| )}d\xi.\] Thus we have the following equality \begin{equation}\label{eq.fi}\phi(t)=U(t)\phi(0)=\phi(0)\ast K_t \end{equation} for all red waves and all times $t$. We want to have a decay estimate for our convolution kernel. Fixing a direction in space-time, and using non-stationary phase, we can obtain decay estimates that depend on the distance of the point to $C^{R}(0,0)$. More precisely \begin{equation}\label{eq.k}|K_t(x) | \lesssim (1+dist((x,t),C^{R}(0,0)))^{-N^{10}}. \end{equation} Using Young's inequality and the estimate above, we have $\| \phi(0) \|_{\infty} \lesssim E(\phi)^{1/2}$. Then by time translation invariance and time reversal symmetry we have \begin{equation}\label{infty} \| \phi \|_{\infty} \lesssim E(\phi)^{1/2}, \ \|\psi\|_{\infty} \lesssim E(\psi)^{1/2} \end{equation} and hence \begin{equation}\label{infty2} \| \phi \psi \|_{\infty} \lesssim E(\phi)^{1/2}E(\psi)^{1/2}. \end{equation} We shall decompose our waves into smaller waves. To ensure that these are still waves of the same color, we need to define the margin of a wave, and work with waves that obey a margin requirement, since our decompositions slightly enlarge the Fourier support of a wave. So let $margin(\phi)$ denote the quantity \[ margin(\phi):=dist(supp(\hat{\phi}), \partial \Sigma^{R}).\] We define the margin of a blue wave analogously. We are ready to localize to cubes of side-length $R$.
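Before doing so, we pause for a crude one-dimensional caricature of the decay estimate (\ref{eq.k}) (an added illustration, not part of the original argument, and not the paper's actual kernel, which lives on $\textbf{R}^{n}$ with the angular cutoff of $\underline{\Sigma}$): with a single positive frequency variable and a smooth bump $a$ supported on $(1,2)$, the analogue of $K_t$ is large only near $x=-t$, i.e.\ on the one-dimensional backward light cone, and decays rapidly with the distance to it.
\begin{verbatim}
# One-dimensional caricature of the kernel decay (eq.k): with a(xi) a smooth
# bump on (1,2) and xi > 0, |K_t(x)| concentrates near x = -t and decays
# rapidly away from it.  (Added illustration only.)
import numpy as np

xi = np.linspace(1.0, 2.0, 4001)
u = np.clip(xi - 1.0, 1e-12, 1.0 - 1e-12)
a = np.exp(-1.0 / (u * (1.0 - u)))        # smooth bump supported on (1,2)
a[0] = a[-1] = 0.0
t = 50.0
dxi = xi[1] - xi[0]

def K(x):
    phase = 2.0 * np.pi * (x * xi + t * xi)   # |xi| = xi since xi > 0
    return np.sum(a * np.exp(1j * phase)) * dxi  # simple Riemann sum

for x in (-t, -t + 1.0, -t + 5.0, -t + 20.0):
    print(f"dist to cone = {abs(x + t):5.1f}   |K_t(x)| = {abs(K(x)):.2e}")
\end{verbatim}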
\begin{definition}\label{d2.1} For any $R\geq C_02^{C_1 /2}$, let A(R) be the best constant for which the inequality \[\left\| \phi \psi\right\|_{L^q_t L^p_x (Q)} \leq A(R)E(\phi)^{1/2}E(\psi)^{1/2} \] holds for all spacetime cubes Q of side-length R, red waves $\phi$ and blue waves $\psi$ of margin \begin{equation}\label{smr} margin(\phi), margin (\psi) \geq 1/100- (1/R)^{1/N}. \end{equation} \end{definition} It is clear that $A(R)$ is finite for each $R$, e.g using (\ref{infty}) we have the following crude bound \begin{equation}\label{pb} \|\phi\psi\|_{L^q_tL^p_x(Q_R)} \lesssim R^C E(\phi)^{1/2}E(\psi)^{1/2} \end{equation} which in particular shows \begin{equation}\label{pba} A(R) \lesssim R^C. \end{equation} Moreover via a finite decomposition of space and frequency, and some Lorentz transforms we see that \begin{equation}\label{eq.r'} \|\phi\psi\|_{L^q_tL^p_x(Q_R)} \lesssim A(R')E(\phi)^{1/2}E(\psi)^{1/2} \end{equation} for any cube $Q_R$ of side-length $R$, any $R \approx R'$, and any red and blue waves $\phi, \psi.$ Hence it is enough to prove that \begin{equation}\label{eq2.2} A(R) \lesssim 2^{CC_1} \end{equation} uniformly for all $R\geq C_02^{C_1}$. As $R$ gets larger the margin requirement becomes more strict, thus we will also need the following variant of $A(R)$ \[\overline{A}(R):= \sup_{C_02^{C_1/2}\leq r\leq R} A(r).\] Now let's see some easy estimates that we get when we localize to a cube. Let $Q_R$ be a cube of side-length R, and $\phi,\psi$ arbitrary red and blue waves. \begin{equation}\label{ce} \|\phi\|_{L^{2}(Q_R)} \lesssim R^{1/2}E(\phi)^{1/2}, \\ \|\psi\|_{L^2(Q_R)}\lesssim R^{1/2}E(\psi)^{1/2} . \end{equation} To obtain these just integrate energy along the life-span of the cube $Q_R$. Using H\"{o}lder these two gives \begin{equation}\label{eq3.12} \|\phi\psi \|_{L^1(Q_R)} \lesssim R E(\phi)^{1/2}E(\psi)^{1/2}. \end{equation} Rest of the paper is organized as follows: in section \ref{s3} we will give the definitions and estimates necessary for the rest of the paper. Some of the material employed for this purpose is proved in \cite{T1}, and works without any important change in our case. Then in section \ref{s4} we will localize waves to subcubes, and prove our key proposition which is our main tool for the rest of the proof. In section \ref{s5} the concept of energy concentration will be made precise, machinery to deal with the concentrated and the non-concentrated cases developed, and Theorem \ref{t1.2} proved. \section{ Necessary definitions and estimates}\label{s3} We first give definitions that will make precise localization to sub-cubes. Let $Q$ be a cube of side-length $R$. Let $K_j(Q)$ denote collection of all sub-cubes we obtain when we partition $Q$ into cubes of sidelength $2^{-j}R$. Of course, there are $2^{(n+1)j}$ sub-cubes in this collection. We define a $red \ wave \ table \ \phi \ of \ depth \ j$ on $Q$ to be any red wave with the vector form \[\phi:=(\phi^{(q)})_{q\in K_j(Q), }\] where components $\phi$ are also red waves. Note that by the definition of energy \begin{equation}\label{eq3.1} E(\phi)= \sum_{q \in K_j(Q)} E(\phi^{(q)}). \end{equation} For a red wave table $\phi$ of depth $j$ on $Q$ we define the $j-quilt \ [\phi]_{j}$ of $\phi$ to be the function \[[\phi]_{j}:= \sum_{q\in K_{j}(Q)} |\phi^{(q)}| \chi_q.\] Hence we have the pointwise estimates \begin{equation}\label{eq3.2} |\phi^{(q)}| \chi_q \leq [\phi]_{j}\leq |\phi| \chi_{Q} \end{equation} for all $q \in K_{j}(Q)$. 
We define $(c,k) \ interior \ I^{c,k}(Q)$ of $Q$ for $k$ a nonnegative integer and $0<c\ll 1$ by \[I^{c,k}(Q):= \bigcup_{q\in K_k(Q)} (1-c)q. \] Next we give some definitions concerning localization of energy to a disk. Let $\eta_0$ denote a fixed non-negative Schwarz function on $\textbf{R}^n$ with total mass 1 whose Fourier transform is supported on the unit disk. For any $r>0$ let $\eta_r(x):=r^{-n}\eta_0(x/r)$. Let $D=D(x_D,t_D;r)$ be any disk. Then define the operator $P_D$ as follows: for any red wave $\phi$ let \[ P_D\phi(t_D):=(\chi_D \ast \eta_{r^{1-1/N}})\phi(t_D). \] when $t=t_D$ and \[P_D\phi(t)=U(t-t_D)P_D\phi(t_D)\] at other times $t$. It is easy to see that $P_D\phi$ is a red wave. To extend this definition to blue waves we use time reversal operator $\textbf{T}$: \[ P_D\textbf{T}\phi:=\textbf{T}P_D\phi, \] where \[\textbf{T}\phi(x,t):=\phi(x,-t).\] Next lemma shows that $P_D$ localize a wave to the disk $D$, and $(1-P_D)$ localizes to $D^{ext}$. \begin{lemma}[Lemma 10.2 in \cite{T1}]\label{l3.1} Let $r\geq C_0$, and $D=D(x_D,t_D;r)$ be a disk. Let $\phi$ be a red wave such that $margin(\phi)\geq C_0r^{-1+1/N}.$ Then $P_D\phi$ is a red wave which satisfies the following margin and energy estimates: \begin{equation}\label{eq3.3} margin(P_D\phi)\geq margin(\phi)-Cr^{-1+1/N} \end{equation} \begin{equation}\label{eq3.4} \| \widetilde{\chi}_D^{-N} P_D \phi \|_{L^2(D^{ext}_+)}\lesssim r^{-N^2}E(\phi)^{1/2} \end{equation} \begin{equation}\label{eq3.5} \|(1-P_D)\phi \|_{L^2(D_-)} \lesssim r^{-N}E(\phi)^{1/2} \end{equation} \begin{equation}\label{eq3.6} E(P_D\phi) \leq \|\phi \|^2_{L^2(D_+)}+Cr^{-N}E(\phi) \end{equation} \begin{equation}\label{eq3.7} E((1-P_D)\phi) \leq \|\phi \|^2_{L^2(D_-^{ext})}+Cr^{-N}E(\phi) \end{equation} \begin{equation}\label{eq3.8} E(P_D\phi),E((1-P_D)\phi) \leq E(\phi) \end{equation} where $D_-,D_+$ are the disks $D_{\pm}:=D(x_D,t_D;r(1 \pm r^{-1/2N} ))$. \end{lemma} \begin{proof} Margin estimate is clear. Notice that \[0 \leq \chi_D \ast \eta_{r^{1-1/N}}(x) \leq 1 \] for all $x \in \textbf{R}^n$, thus we have (\ref{eq3.8}). For $x \in D^{ext}_+$ \[\widetilde{\chi}^{-N}_D(x)(\chi_D \ast \eta_{r^{1-1/N}})(x) \lesssim r^{-N^2} \] thus follows (\ref{eq3.4}),(\ref{eq3.6}). For $x \in D_-$ \[\chi_D \ast \eta_{r^{1-1/N}}(x) \geq 1-Cr^{-N}\] and hence we have (\ref{eq3.5}),(\ref{eq3.7}). \end{proof} Analogue of this for blue waves is of course legitimate by time reversal. After looking at localization properties of $P_D$ at time $t_D$ we now explore localization of it in space-time. \begin{lemma}[See Lemma 10.3 in \cite{T1}]\label{l3.2} Let D be a disk of radius $r\geq 2^{C_0}$, and $\phi$ be a red wave with $margin(\phi) \geq C_0r^{-1+1/N}$. Let $1\leq q<p\leq2$ and $r \lesssim R$ . Then if $\psi$ is an arbitrary blue wave, we have the finite speed of propagation law \begin{equation}\label{eq3.9} \|((1-P_D)\phi)\psi\|_{L_t^qL_x^p(Q(x_D,t_D,C^{-1}r))} \lesssim r^{C-N}E(\phi)^{1/2}E(\psi)^{1/2} \end{equation} \textit{and the Huygens' principle} \begin{equation}\label{eq3.10} \| (P_D\phi)\psi \|_{L_t^q L^p_x(Q(x_0,t_0;R)\setminus C^{R}(x_D,t_D;Cr+R^{1/N}))} \lesssim R^{C-N}E(\phi)^{1/2}E(\psi)^{1/2}. \end{equation} \textit{where $x_0\in \textbf{R}^{n}$ is arbitrary and $|t_0-t_D| \leq C_0R$.} \end{lemma} We will also need the analogue of this for blue waves. \begin{proof} Using H\"{o}lder inequality, it is enough to prove this for $p=q=2$. 
To see (\ref{eq3.9}), observe that \[\|(1-P_D)\phi\|_{L^{\infty}(Q(x_D,t_D;C^{-1}r))} \lesssim r^{C-N}E(\phi)^{1/2}\] by (\ref{eq.fi}), (\ref{eq.k}), (\ref{eq3.5}). Then by (\ref{ce}) we get the desired result. To prove (\ref{eq3.10}) we observe that \[ \| P_D\phi \|_{L^{\infty}(Q(x_0,t_0;R) \setminus C^R(x_D,t_D;Cr+R^{1/N}))} \lesssim R^{C-N}E(\phi)^{1/2} \] by (\ref{eq.fi}), (\ref{eq.k}), (\ref{eq3.4}). Then by (\ref{ce}) we obtain our result. Analogues for blue waves are obtained similarly without any loss. \end{proof} Using transversality we can prove better $L^2$ estimates than (\ref{ce}). Next lemma which is proven in \cite{T1} shows this. \begin{lemma}[Lemma 13.1 in \cite{T1}]\label{l3.3} Let $\phi$ be a red wave. Then for any $(x_0,t_0)\in \textbf{R}^{n+1}$ and $R \gtrsim 1$ we have \[\|\phi\|_{L^2(C^{B}(x_0,t_0;R))} \lesssim R^{1/2}E(\phi)^{1/2}.\] \end{lemma} By time reversal we of course have an analogue of this for blue waves. Hence we have the following corollary. \begin{corollary}[Corollary 13.2 in \cite{T1}]\label{c3.1} Let $\phi$ be a red wave and $\psi $ a blue wave. Let $R > r \gg 1$, $(x_0,t_0)\in \textbf{R}^{n+1}$, and $Q_R$ be any cube of side-length R. Then \[\|\phi \psi\|_{L^{1}(C^{P}(x_0,t_0;r)\cap Q_R)} \lesssim r^{1/2}R^{1/2}E(\phi)^{1/2}E(\psi)^{1/2}.\] \end{corollary} Finally we give a lemma that will be used in section \ref{s5}. \begin{lemma}\label{lx} Let $R>0$, $0<c \leq 2^{-C}$, and $Q_R$ be a cube of side-length $R$. Let $F$ be any function essentially bounded on $C_0Q_R$. Then there exists a cube $Q$ of side-length $CR$ contained in $C^2Q_R$ such that \[\|F\|_{L^q_tL^p_x(Q_R)}\leq (1+Cc)\|F\|_{L^q_tL^p_x(I^{c,C_0}(Q))}\] \end{lemma} \begin{proof} We first prove this for $L^1$ then by duality arguments extend it to $L^q_tL^p_x$. Let $G $ be integrable on $C_0Q_R$. By pigeonhole principle it suffices to prove \[\|G\|_{L^1(Q_R)} \leq \frac{1}{|Q_R|}\int_{Q_R}(1+Cc)\|G\|_{L^1(I^{c,C_0}(Q(x_0,t_0;CR))\cap Q_R)} dx_0dt_0.\] Then applying Fubini's theorem we have \[ \int_{Q_R}\|G\|_{L^1(I^{c,C_0}(Q)\cap Q_R)} dx_0dt_0 = \int_{Q_R} |G(x,t)||I^{c,C_0}(Q)\cap Q_R|dxdt. \] But we have \[ |Q(x_0,t_0;CR)\setminus I^{c,C_0}(Q(x_0,t_0;CR))| \lesssim c|Q(x_0,t_0;CR)| \] hence \[|Q_R|\leq (1+Cc)|I^{c,C_0}(Q(x_0,t_0;CR)) \cap Q_R|.\] from which the result for $L^1$ follows. Now observe that it suffices to prove our lemma for $\|F\|_{L^q_tL^p_x(Q_R)}=1$. We have, by duality, a function $A$ such that $\|A\|_{L^{q'}_tL^{p'}_x(Q_R)}=1$ and \[ \int_{Q_R}|F(x,t)|A(x,t)dxdt=1.\] But notice that by our result for $L^1$ functions we have \begin{align*} 1=\big|\int_{Q_R}|F(x,t)|A(x,t)dxdt \big| &\leq \|FA\|_{L^1(Q_R)} \\ & \leq (1+Cc)\|FA\|_{L^1(I^{c,C_0}(Q))}. \end{align*} Then our result follows from the H\"{o}lder inequality. \end{proof} \section{The key proposition}\label{s4} In this section we will state and prove the key proposition that will be used in the next section. First we import a proposition from \cite{T1} on which we do not need to make any change. As stated there, we have an analogue of this for blue waves. \begin{proposition}[Proposition 15.1 in \cite{T1}]\label{p4.1} Let $R \geq C_02^{C_1}$, $0<c \leq 2^{-C_0}$. Let $Q$ be a spacetime cube of side-length R. Let $\phi$ be a red wave such that $margin(\phi) \gtrsim R^{-1/2}$, and let $\psi$ be a blue wave. Then there exists a red wave table $\Phi=\Phi_c(\phi,\psi;Q)$ of depth $C_0$ on Q such that the following properties hold. \begin{equation}\label{eq4.1} margin(\Phi) \geq margin(\phi)-CR^{-1/2}. 
\end{equation} $ [\Phi]_{C_0} \ approximates \ \phi: $ \begin{equation}\label{eq4.2} \left\| (|\phi|-[\Phi]_{C_0}) \psi \right\|_{L^2(I^{c,C_0}(Q))} \lesssim c^{-C}R^{(1-n)/4}E(\phi)^{1/2}E(\psi)^{1/2} . \end{equation} $Bessel \ inequality:$ \begin{equation}\label{eq4.3} E(\Phi) \leq (1+Cc)E(\phi) . \end{equation} $Persistence \ of \ non-concentration: For \ any \ r \gtrsim R^{(1/2+1/N)} \ we \ have $ \begin{equation}\label{eq4.4} E_{r(1-C_0r^{-1/2N}),C_0Q}(\Phi,\psi) \leq (1+Cc)E_{r,C_0Q}(\phi,\psi). \end{equation} \end{proposition} Now we state and prove our key proposition. \begin{proposition}\label{p4.2} Let $R \geq C_0 2^{C_1},0<c \leq 2^{-C_0} $, and let $\phi, \psi$ be respectively red and blue waves which obey the energy normalization and the relaxed margin requirement \begin{equation}\label{rm} margin(\phi), margin(\psi) \geq 1/100-2(1/R)^{1/N}. \end{equation} Then for any cube $Q$ of side-length $CR$, we can find on $Q$ a red wave table $\Phi$ of depth $C_0$ and a blue wave table $\Psi$ of depth $C_0$ such that the following properties hold. \\ We have the margin estimate \begin{equation}\label{eq4.5} margin(\Phi),margin(\Psi) \geq 1/100- 3(1/R)^{1/N}. \end{equation} We have the energy estimate \begin{equation}\label{eq4.6} E(\Phi),E(\Psi)\leq 1+Cc. \end{equation} The following inequality holds \begin{equation}\label{eq4.7} \left\|\phi \psi \right\|_{L^q_t L^p_x(I^{c,C_0}(Q))} \leq \left\| [\Phi]_{C_0}[\Psi]_{C_0}\right\|_{L^q_tL^p_x(I^{c,C_0}(Q))} + c^{-C} . \end{equation} If $r>1$ then for any cone $C^{purple}(x_0,t_0;r)$ we have \begin{equation}\label{eq4.8} \begin{aligned} \left\|\phi \psi \right\|_{L^q_tL^p_x(I^{c,C_0}(Q)\cap C^{P}(x_0,t_0;r))} &\leq \left\| [\Phi]_{C_0}[\Psi]_{C_0}\right\|_{L^q_tL^p_x(I^{c,C_0}(Q)\cap C^{P}(x_0,t_0;r))}\\ &+ c^{-C}(1+R/r)^{-\epsilon/4} . \end{aligned} \end{equation} We have the persistence of non-concentration: for all $r \gtrsim R^{1/2+3/N}$ \begin{equation}\label{eq4.9} E_{r(1-C_0(r)^{-1/3N}), C_0Q}(\Phi,\Psi) \leq E_{r,C_0Q}(\phi,\psi)+Cc. \end{equation} \end{proposition} \begin{proof} Define $\Phi:=\Phi_c(\phi,\psi,c)$ as in the Proposition 4.1. Then \begin{equation} \begin{aligned} margin(\Phi)\geq margin(\phi)-CR^{-1/2} &\geq 1/100-2R^{-1/N}-CR^{-1/2} \\ &\geq 1/100-3(1/R)^{1/N}. \end{aligned} \end{equation} Hence we have the margin requirement on $\Phi$. Energy estimate directly follows from the definition of $\Phi$. Let $\Psi:=\Psi_c(\Phi,\psi,c)$. Energy and margin requirements follow from time reversal. We now prove (\ref{eq4.7}). By (\ref{eq4.2}) we have \begin{equation}\label{eq4.10} \|(|\phi|-[\Phi]_{C_0})\psi\|_{L^2(I^{c,C_0}(Q))} \lesssim c^{-C}R^{\frac{1-n}{4}}. \end{equation} On the other hand by (\ref{eq3.12}), (\ref{eq3.2}) and (\ref{eq4.6}) \[ \|\phi \psi\|_{L^1(I^{c,C_0}(Q))}, \\ \|[\Phi]_{C_0} \psi\|_{L^1(I^{c,C_0}(Q))} \lesssim R. \] So by triangle inequality we have \begin{equation}\label{eq4.11} \|(|\phi|-[\Phi]_{C_0})\psi\|_{L^1(I^{c,C_0}(Q))} \lesssim R. \end{equation} By H\"{o}lder (\ref{eq4.10}) gives \begin{equation}\label{n1} \|(|\phi|-[\Phi]_{C_0})\psi\|_{L^q_tL^2_x(I^{c,C_0}(Q))} \lesssim c^{-C}R^{(\frac{1-n}{4}+\frac{2-q}{2q})}. \end{equation} To handle $L^1$ case, observe that using (\ref{eq2.1}) together with (\ref{eq3.2}), (\ref{eq4.6}) and the triangle inequality one obtains \[ \|(|\psi|-[\Phi]_{C_0})\psi\|_{L_t^{\infty}L_x^{1}(I^{c,C_0}(Q))} \lesssim 1 . 
\] We interpolate this last inequality with (\ref{eq4.11}) to get \[\|(|\phi|-[\Phi]_{C_0})\psi\|_{L^q_tL_x^1(I^{c,C_0}(Q))} \lesssim R^{1/q} .\] Then by interpolating the last one with (\ref{n1}) we obtain \begin{equation}\label{eq4.12} \|(|\phi|-[\Phi]_{C_0})\psi\|_{L^q_tL^p_x(I^{c,C_0}(Q))} \lesssim c^{-C}R^{(\frac{1-n}{4}+\frac{2-q}{2q})(2-\frac{2}{p})+\frac{1}{q}(\frac{2}{p}-1)} =c^{-C}. \end{equation} By the analogue of (\ref{eq4.2}) for blue waves, (\ref{eq3.2}) and (\ref{eq4.6}) we have \begin{equation}\label{eq4.13} \begin{aligned} \|(|\psi|-[\Psi]_{C_0})[\Phi]_{C_0}\|_{L^2(I^{c,C_0}(Q))} &\leq \|(|\psi|-[\Psi]_{C_0})\Phi\|_{L^2(I^{c,C_0}(Q))}\\ &\lesssim c^{-C}R^{\frac{1-n}{4}}. \end{aligned} \end{equation} For the $L^1$ case, by (\ref{eq3.12}), (\ref{eq3.2}) and (\ref{eq4.6}) we have \[ \| \psi [\Phi]_{C_0} \|_{L^1(I^{c,C_0}(Q))},\ \|[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^1(I^{c,C_0}(Q))} \lesssim R \] so by the triangle inequality \begin{equation}\label{eq4.14} \|(|\psi|-[\Psi]_{C_0})[\Phi]_{C_0}\|_{L^1(I^{c,C_0}(Q))} \lesssim R. \end{equation} Then we apply H\"{o}lder and interpolation to (\ref{eq4.13}) and (\ref{eq4.14}) exactly as we did to (\ref{eq4.10}) and (\ref{eq4.11}) to get \begin{equation}\label{eq4.15} \|(|\psi|-[\Psi]_{C_0})[\Phi]_{C_0}\|_{L^q_tL^p_x(I^{c,C_0}(Q))} \lesssim c^{-C}. \end{equation} The triangle inequality together with (\ref{eq4.15}) and (\ref{eq4.12}) gives (\ref{eq4.7}). We will apply the same process to prove (\ref{eq4.8}). Let \[ \Omega:=I^{c,C_0}(Q) \cap C^{P}(x_0,t_0;r).\] We shall assume $R>r$, since otherwise (\ref{eq4.12}) and (\ref{eq4.15}) combined with the triangle inequality and the fact that $\Omega \subseteq I^{c,C_0}(Q)$ give (\ref{eq4.8}). In the $L^2$ case, using the triangle inequality, (\ref{eq4.10}), (\ref{eq4.13}), and the fact that $\Omega \subseteq I^{c,C_0}(Q)$ we have \begin{equation}\label{eq4.16}\| \phi\psi-[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^2(\Omega)} \lesssim c^{-C} R^{\frac{1-n}{4}}. \end{equation} In the $L^1$ case, using Corollary \ref{c3.1}, (\ref{eq3.2}) and (\ref{eq4.6}) we obtain \[\|\phi\psi\|_{L^1(\Omega)}, \|[\Phi]_{C_0}[\Psi]_{C_0} \|_{L^1(\Omega)} \lesssim (r/R)^{1/2} R.\] Hence by the triangle inequality \begin{equation}\label{eq4.17} \| \phi\psi-[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^1(\Omega)} \lesssim (r/R)^{1/2}R . \end{equation} Apply H\"{o}lder to (\ref{eq4.16}) as above to get \begin{equation} \| \phi\psi-[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^2_x(\Omega)} \lesssim c^{-C} R^{\frac{1-n}{4}+\frac{1}{q}- \frac{1}{2}} . \end{equation} On the other hand, (\ref{eq2.1}) together with (\ref{eq3.2}), (\ref{eq4.6}) and the triangle inequality yields \[ \| \phi\psi-[\Phi]_{C_0}[\Psi]_{C_0}\|_{L_t^{\infty}L_x^{1}(\Omega)} \lesssim 1 . \] Interpolating this with (\ref{eq4.17}) we obtain \begin{equation}\label{eq4.18} \| \phi\psi-[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^1_x(\Omega)} \lesssim (r/R)^{1/2q}R^{1/q} . \end{equation} Then interpolating (\ref{eq4.18}) with the $L^q_tL^2_x$ estimate above gives \[ \| \phi\psi-[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^p_x(\Omega)} \lesssim c^{-C}(r/R)^{\epsilon/4}. \] By the triangle inequality we get (\ref{eq4.8}). Now it remains to prove (\ref{eq4.9}). Fix $r \gtrsim R^{1/2+3/N}$, and pick $\rho$ such that $\rho(1-C_0\rho^{-1/2N})=r(1-C_0r^{-1/3N})$. Clearly such a $\rho$ exists; furthermore, it satisfies $\rho \gtrsim R^{1/2+1/N}$ and $\rho \leq r(1-C_0r^{-1/2N})$.
Then using monotonicity of the energy concentration, (\ref{eq4.4}) and its analogue for blue waves we have \begin{align*} E_{r(1-C_0r^{-1/3N}),C_0Q}(\Phi,\Psi) &=E_{\rho(1-C_0\rho^{-1/2N}),C_0Q}(\Phi,\Psi)\\ &\leq (1+Cc)E_{\rho,C_0Q}(\Phi,\psi)\\ &\leq (1+Cc)E_{r(1-r^{-1/2N}),C_0Q}(\Phi,\psi) \\ &\leq(1+Cc)E_{r,C_0Q}(\phi,\psi), \end{align*} from which our result follows by the energy normalization. \end{proof} \section{Proof of Theorem 1.1}\label{s5} At the end of section \ref{s2} we localized to cubes, and then in section \ref{s4} to sub-cubes. The following proposition completes the first paragraph of the sketch of the proof given in section \ref{s2}. \begin{proposition}\label{p5.2} Suppose $R \geq 2 C_0 2^{C_1}$ and $0<c \leq 2^{-C_0}$, and let $\phi, \psi$ be respectively red and blue waves satisfying the energy normalization and the relaxed margin requirement (\ref{rm}). Then for any cube $Q_R$ of side-length $R$ one has \begin{equation}\label{it} \left\| \phi \psi\right\|_{L^q_t L^p_x (Q_R)} \leq (1+Cc)\overline{A}(R/2)E(\phi)^{1/2}E(\psi)^{1/2} + c^{-C} . \end{equation} \end{proposition} \begin{proof} Using Lemma \ref{lx} with $F:=\phi\psi$ we can find a cube $Q$ of side-length $CR$ inside $C^2Q_R$ such that \[\|\phi\psi\|_{L^q_tL^p_x(Q_R)} \leq (1+Cc)\|\phi\psi\|_{L^q_tL^p_x(I^{c,C_0}(Q))}.\] Let $\Phi,\Psi$ be as in Proposition \ref{p4.2}. Then by (\ref{eq4.7}), we have \begin{equation}\label{eq5.1} \|\phi\psi\|_{L^q_tL^p_x(Q_R)} \leq (1+Cc)\|[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^p_x(I^{c,C_0}(Q))}+ c^{-C}. \end{equation} Applying the triangle inequality we get \[ \|[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^p_x(Q)} \leq \sum_{q \in K_{C_0}(Q)} \|\Phi^{(q)}\Psi^{(q)}\|_{L^q_tL^p_x(q)}.\] Then (\ref{eq4.5}) combined with Definition \ref{d2.1} gives \[\|[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^p_x(Q)} \leq A(2^{-C_0}R) \sum_{q \in K_{C_0}(Q)} E(\Phi^{(q)})^{1/2}E(\Psi^{(q)})^{1/2}.\] By Cauchy-Schwarz combined with (\ref{eq3.1}) and (\ref{eq4.6}) we obtain \[ \|[\Phi]_{C_0}[\Psi]_{C_0}\|_{L^q_tL^p_x(Q)} \leq (1+Cc)\overline{A}(R/2),\] which, inserted into (\ref{eq5.1}), gives the desired result. \end{proof} Iterating (\ref{it}) and using a globalization lemma gives non-endpoint/non-endline results; see \cite{LV1}, \cite{T1}. For our purposes, set $c=(1/R)^{1/N}$, iterate (\ref{it}) and use (\ref{pb}) when $R \approx C_02^{C_1}$ to obtain \begin{equation}\label{cn} A(R)\lesssim 2^{CC_1}R^{C/N}. \end{equation} This last inequality proves (\ref{eq2.2}) for all $2C_02^{C_1} \leq R \leq C_02^{NC_1}$. For larger $R$ we shall introduce the notion of energy concentration. \begin{definition}\label{d5.1} Let $r>0$. Let $Q$ be a space-time cube of side-length $R$, let $\phi$ be a red wave, and $\psi$ a blue wave. Then the energy concentration $E_{r,Q}$ is defined to be \[E_{r,Q}(\phi,\psi):= \max \left\{ \frac{1}{2} E(\phi)^{1/2} E(\psi) ^{1/2},\sup_D \left\| \phi \right\|_{L^2(D)} \left\| \psi \right\|_{L^2(D)} \right\} \] where the supremum is taken over all disks of radius $r$ whose time coordinate is inside the lifespan of $Q$. \end{definition} The next definition gives a variant of $A(R)$ which is sensitive to energy concentration, allows one to do induction on scales in cone neighborhoods successfully, and can be related to $A(R)$. With this variant at hand one first bounds $A(R)$ by this variant with some gain, then handles the concentrated and non-concentrated cases separately. \begin{definition}\label{d5.2} Let $R \geq 2^{NC_1/2}$ and $r,r'>0$.
Then $A(R,r,r')$ is defined to be the best constant for which the inequality \[ \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R \cap C^{P}(x_0,t_0;r'))} \leq A(R,r,r') (E(\phi)^{1/2} E(\psi) ^{1/2})^{1/q} E_{r,C_0Q_R}(\phi,\psi)^{1/q'} \] holds for all spacetime cubes $Q_R$ of side-length $R$, all $(x_0,t_0) \in \textbf{R}^{n+1}$, and all red waves $\phi$ and blue waves $\psi$ that obey the strict margin requirement (\ref{smr}). \end{definition} When using this definition to do induction on scales in cone neighborhoods, and to bound $A(R)$ by $A(R,r,r')$, one needs to use the following lemma instead of the triangle inequality in order to make some exponential gain. \begin{lemma}\label{l2.1} Let $f_1,f_2,\ldots,f_k$ be a finite collection of functions such that $f_j:\textbf{R}^{n+1}\rightarrow H$, and $f_j \in L^q_tL^p_x(\textbf{R}^{n+1}), \ 1\leq j \leq k$, where $H$ is a finite-dimensional complex Hilbert space. If $q<p$ and the supports of these functions are mutually disjoint then \[ \big\| \sum_{j=1}^k f_j \big \|_{L^q_tL^p_x}^q \leq \sum_{j=1}^k \left\|f_j \right\|_{L^q_tL^p_x}^q . \] \end{lemma} \begin{proof} We first exploit the disjointness of the supports, and then the concavity of $s\mapsto s^{q/p}$ (which holds since $q<p$): \begin{align*} \int (\int |\sum_{j=1}^k f_j(x,t)|^p dx )^{q/p}dt &= \int( \sum_{j=1}^k \int |f_j(x,t)|^p dx )^{q/p} dt\\ &\leq \sum_{j=1}^k \int (\int |f_j(x,t)|^pdx )^{q/p}dt. \end{align*} \end{proof} This lemma shows that the following fact about $L^p$ norms extends partially to mixed norms. Let $f_1,f_2,\ldots,f_k$ be a finite collection of functions such that $f_j:\textbf{R}^{n+1}\rightarrow H$, and $f_j \in L^p(\textbf{R}^{n+1}),\ 1\leq j \leq k$, where $H$ is a finite-dimensional complex Hilbert space. If the supports of these functions are mutually disjoint then \[\big\| \sum_{j=1}^k f_j \big\|_p^p = \sum_{j=1}^k \left\|f_j \right\|_p^p .\] Our lemma is, of course, not as strong as the property of $L^p$ norms given above, but it will do in our case. We now exploit non-concentration and relate $A(R)$ to $A(R,r,r')$ with some gain. \begin{proposition}\label{p5.3} Let $R\geq 2^{NC_1}$. Then we have \[ A(R) \leq (1-C_0^{-C}) \sup_{\stackrel{2^{NC_1} \leq \widetilde{R} \leq R}{\widetilde{R}^{1/2+4/N} \leq r}} A(\widetilde{R},r,C_0(1+r)) +2^{CC_1}. \] \end{proposition} We shall need the following lemma in the proof. \begin{lemma}\label{l5.1} Let $R \geq 2^{NC_1}$ and $2^{NC_1/2} \leq r \leq R^{1/2+4/N}$. Let $D=D(x_D,t_D;C_0^{1/2}r)$ be a disk. Let $\phi,\psi$ be respectively red and blue waves with $margin(\phi),margin(\psi)\geq 1/200$. Then we have \[ \left\| (P_{D}\phi) \psi \right \|_{L^q_tL^p_x(Q^{ann}(x_0,t_0;R,2R))} \lesssim R^{-1/C}E(\phi)^{1/2}E(\psi)^{1/2} \] \[ \left\| \phi P_{D} \psi \right\|_{L^q_tL^p_x(Q^{ann}(x_0,t_0;R,2R) )} \lesssim R^{-1/C}E(\phi)^{1/2}E(\psi)^{1/2}. \] \end{lemma} We first prove the lemma, then the proposition. \begin{proof} By translation invariance we can take $(x_0,t_0)=(0,0)$. First we consider $\left\| (P_{D}\phi) \psi \right\|_{L^q_tL^p_x(Q^{ann}(x_0,t_0;R,2R) )}$. By using H\"{o}lder and interpolation as in the proof of Proposition \ref{p4.2} it suffices to prove \[ \left\| (P_{D}\phi) \psi \right\|_{L^1(Q^{ann}(x_0,t_0;R,2R))} \lesssim R^{C/N}R^{3/4}, \] \[\left\| (P_{D}\phi) \psi \right\|_{L^2(Q^{ann}(x_0,t_0;R,2R))} \lesssim R^{C/N}R^{\frac{1-n}{4}}.\] But the frequency of $\psi$ plays no role in the proof given in \cite{T1}, and so the same proof works.
For $\left\| \phi (P_{D} \psi )\right\|_{L^q_tL^p_x(Q^{ann}(x_0,t_0;R,2R))}$, since there is no difference between the frequencies of $\phi$ and $\psi$, by time reversal we get the same result without any loss. \end{proof} \begin{proof} Let $Q_R$ be a spacetime cube of side-length $R$. Let $\phi,\psi$ be respectively red and blue waves satisfying the strict margin requirement (\ref{smr}) and the energy normalization. Clearly it suffices to prove \[ \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R)} \leq (1-C_0^{-C}) \sup_{\stackrel{2^{NC_1} \leq \widetilde{R} \leq R}{\widetilde{R}^{1/2+4/N} \leq r} } A(\widetilde{R}, r,C_0(1+r)) + 2^{CC_1} . \] We may of course assume that $\left\| \phi\psi\right\|_{L^q_tL^p_x(Q_R)}\approx A(R)$ and that $A(R) \geq 2^{CC_1}$. Let $0< \delta < 1/4$ be a small number to be specified later, and let $r$ be the supremum of all radii $r \geq 2^{NC_1(1/2+4/N)}$ such that $E_{r,C_0Q_R} (\phi,\psi) \leq 1-\delta$, or $r=2^{NC_1(1/2+4/N)}$ if no such radius exists. Let $D:=D(x_0,t_0;r)$ be a disk with $t_0$ in the lifespan of $C_0Q_R$, and \begin{equation}\label{eq5.2} \min(\left\|\phi \right\|_{L^2(D)}, \left\|\psi \right\|_{L^2(D)}) \geq 1-2\delta. \end{equation} Such a disk clearly exists by the definition of $r$. Let $D'=C_0^{1/2}D$ and $ \Omega = Q_R \cap C^{P}(x_0,t_0;C_0(1+r))$. Decompose $\phi=(1-P_{D'})\phi+ P_{D'}\phi$ and $\psi=(1-P_{D'})\psi+ P_{D'}\psi$. We have two cases: $r>R^{1/2+4/N}$ or $r\leq R^{1/2+4/N}$. So first assume $r>R^{1/2+4/N}$. Then by (\ref{eq3.7}), (\ref{eq3.8}) and (\ref{eq5.2}) we have \[E((1-P_{D'})\phi),E((1-P_{D'})\psi) \lesssim \delta +C_0^{-C} .\] Thus by (\ref{eq.r'}) one has \begin{equation}\label{eq5.3} \left\|(1-P_{D'})\phi(1-P_{D'})\psi \right\|_{L^q_tL^p_x(Q_R)} \lesssim (\delta+C_0^{-C})A(R) . \end{equation} By (\ref{eq3.10}) and its analogue for blue waves we have \[ \left\| (P_{D'}\phi) \psi \right\|_{L^q_tL^p_x(Q_R\setminus \Omega)}, \left\| (1-P_{D'})\phi P_{D'}\psi \right\|_{L^q_tL^p_x(Q_R\setminus \Omega)} \lesssim C_0^{-C}.\] Then by the triangle inequality and our assumptions on $A(R)$ at the beginning of the proof we have \[ \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R\setminus \Omega)} \lesssim (\delta+C_0^{-C})A(R) \lesssim (\delta+C_0^{-C})\left\| \phi \psi \right\|_{L^q_t L^p_x(Q_R)} . \] Here we will use Lemma \ref{l2.1} instead of directly applying the triangle inequality. This is where we cede the uppermost endpoint $(\frac{n+1}{n-1},1)$ when $n \geq 3$. \begin{align*} \| \phi \psi \|^q_{L^q_tL^p_x(Q_R)} &\leq \| \phi \psi \|^q_{L^q_tL^p_x(Q_R \setminus \Omega)} + \| \phi \psi \|^q_{L^q_tL^p_x( \Omega)} \\ &\leq C(\delta+C_0^{-C})^q \| \phi \psi \|^q_{L^q_tL^p_x(Q_R)} + \| \phi \psi \|^q_{L^q_tL^p_x( \Omega)} . \end{align*} Hence, \[ \left\| \phi \psi \right\|_{L^q_tL^p_x( \Omega)} \geq (1-C(\delta+C_0^{-C})^q)^{1/q} \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R)}. \] On the other hand, by our assumption $r>R^{1/2+4/N}$ and by the definition of $r$ we have \[\left\| \phi \psi \right\|_{L^q_tL^p_x( \Omega)} \leq A(R,r,C_0(1+r))(1-\delta)^{1/q'} . \] But then setting $\delta=C_0^{-C}$ one obtains the desired estimate. Now we handle the second case. Define $\widetilde{R}:= r^{\frac{1}{1/2+4/N}}$. Thus $2^{NC_1}\leq \widetilde{R} \leq R$ and $r\geq \widetilde{R}^{1/2+4/N}$. If $\widetilde{R} > 2^{NC_1}$ then by Definition \ref{d5.2} one has \[ \| \phi \psi \|_{L^q_tL^p_x(Q(x_0,t_0;\widetilde{R}) \cap \Omega)} \leq A(\widetilde{R},r,C_0(1+r))(1-\delta)^{1/q'}.
\] If $\widetilde{R}=2^{NC_1}$ then we have by (\ref{cn}) \[\| \phi \psi \|_{L^q_tL^p_x(Q(x_0,t_0;\widetilde{R}) \cap \Omega)} \leq 2^{CC_1}.\] Note that with this definition of $\widetilde{R}$, we can obtain \[ \| \phi \psi \|_{L^q_tL^p_x(Q(x_0,t_0;\widetilde{R}) \setminus \Omega)} \lesssim (\delta+C_0^{-C})\| \phi \psi \|_{L^q_tL^p_x(Q_R)} \] by the same arguments as above. Hence if we can show that \[ \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R \setminus Q(x_0,t_0;\widetilde{R}) )} \lesssim (\delta+C_0^{-C})A(R) \] then we can apply Lemma \ref{l2.1} as we did above and obtain the desired result. To show this together with (\ref{eq5.3}) we need the estimates \begin{align*} \left\| (P_{D'}\phi) \psi \right\|_{L^q_tL^p_x(Q_R \setminus Q(x_0,t_0;\widetilde{R})) } &\lesssim (\delta+C_0^{-C})A(R), \\ \left\| ((1-P_{D'})\phi) P_{D'} \psi \right\|_{L^q_tL^p_x(Q_R \setminus Q(x_0,t_0;\widetilde{R}) )} &\lesssim (\delta+C_0^{-C})A(R). \end{align*} But by a dyadic decomposition these would follow from Lemma \ref{l5.1}. \end{proof} Now it remains to bound $A(R,r,r')$ by $A(R)$. This we will do in two steps: the non-concentrated case and the concentrated case. First we deal with the non-concentrated case. \begin{proposition}\label{p5.5} Let $R\geq 2^{NC_1/2}$, $r \geq C_0^C R$, $r'>0$ and $0<c \leq 2^{-C_0}$. Then we have \[ A(R,r,r')\leq (1+Cc)\overline{A}(R) + c^{-C}.\] \end{proposition} \begin{proof} Let $\phi,\psi$ be respectively red and blue waves that satisfy the strict margin requirement (\ref{smr}) and the energy normalization. Then it is enough to prove that \[ \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R)} \leq E_{r,C_0Q_R}(\phi,\psi)^{1/q'}(1+Cc)\overline{A}(R) +c^{-C}\] where $Q_R$ is an arbitrary cube of side-length $R$. Let $D:=D(x_{Q_R},t_{Q_R};r/2)$ where $(x_{Q_R},t_{Q_R})$ is the center of $Q_R$. We will decompose our waves: $\phi= (1-P_D)\phi+P_D\phi$, $\psi=(1-P_D)\psi+P_D\psi$. By Lemma \ref{l3.1} $P_D\phi$, $P_D\psi$ satisfy the relaxed margin requirement (\ref{rm}) and the energy estimate \[E(P_D\phi)^{1/2}E(P_D\psi)^{1/2} \leq E_{r,C_0Q_R}(\phi,\psi)+CR^{C-N/2}.\] So we can apply Proposition \ref{p5.2} to get \[\left\| (P_D\phi) (P_D\psi )\right\|_{L^q_tL^p_x(Q_R)} \leq (1+Cc)(E_{r,C_0Q_R}(\phi,\psi)+CR^{C-N/2})\overline{A}(R) + c^{-C}.\] Using a trivial polynomial bound on $\overline{A}(R)$ we absorb $CR^{C-N/2}$ into $c^{-C}$. Hence we will be done if we can show that \[ \left\| ((1-P_D)\phi) \psi \right\|_{L^q_tL^p_x(Q_R)}, \left\| (P_D\phi) (1-P_D)\psi \right\|_{L^q_tL^p_x(Q_R)} \leq c^{-C}.\] Both of these follow from (\ref{eq3.9}) and its analogue for blue waves. \end{proof} We now turn to the concentrated case. \begin{proposition}\label{p5.6} Let $R\geq C_02^{NC_1/2}$ and $C_0^C R \geq r > R^{1/2+3/N}$. Then we have \[A(R,r,r')\leq (1+Cc)A(R/C_0,r(1-Cr^{-1/3N}),r') + c^{-C}(1+\frac{R}{r'})^{-\epsilon/4} \] for any $0< c \leq 2^{-C_0}$. \end{proposition} \begin{proof} Let $Q_R$ be a spacetime cube of side-length $R$, let $(x_0,t_0)$ be an element of $\textbf{R}^{n+1}$, and let $\phi$ and $\psi$ be respectively red and blue waves that obey the strict margin requirement (\ref{smr}). Then it is enough to prove that \begin{align*} \left\| \phi \psi \right\|_{L^q_tL^p_x(Q_R \cap C^{P}(x_0,t_0;r'))} &\leq (1+Cc)A(R/C_0,\widetilde{r},r')E_{r,C_0Q_R}(\phi,\psi)^{1/q'}\\ &+ c^{-C}(1+\frac{R}{r'})^{-\epsilon/4} \end{align*} where $\widetilde{r} = r(1-Cr^{-1/3N})$, since $E_{r,C_0Q_R}(\phi,\psi) \approx 1$. We will perform some reductions.
By Lemma \ref{lx} applied to $\phi \psi \chi_{C^{P}(x_0,t_0;r')}$ there is a cube $Q$ of side-length $CR$ contained in $C^2Q_R$ such that \[\| \phi \psi \|_{L^q_tL^p_x(Q_R \cap C^P(x_0,t_0;r'))} \leq (1+Cc)\left\| \phi \psi \right\|_{L^q_tL^p_x(I^{c,C_0}(Q) \cap C^P(x_0,t_0;r'))}.\] Applying Proposition \ref{p4.2} we reduce to showing \begin{align*} \| [\Phi]_{C_0}[\Psi]_{C_0} \|_{L^q_tL^p_x(I^{c,C_0}(Q) \cap C^{P}(x_0,t_0;r'))} &\leq (1+Cc)A(R/C_0,\widetilde{r},r')E_{r,C_0Q_R}(\phi,\psi)^{1/q'}\\ &+ c^{-C}R^{C-N/2}. \end{align*} Using (\ref{eq4.9}) it suffices to prove that \[ \| [\Phi]_{C_0}[\Psi]_{C_0} \|_{L^q_tL^p_x(I^{c,C_0}(Q) \cap C^{P}(x_0,t_0;r'))} \leq (1+Cc)A(R/C_0,\widetilde{r},r')E_{\widetilde{r},C_0Q}(\Phi,\Psi)^{1/q'}.\] Using Lemma \ref{l2.1} it is enough to prove that \[ \sum_{q \in \textit{K}_{C_0}(Q)} \| \Phi^{(q)} \Psi^{(q)} \|^q_{L^q_tL^p_x(q\cap C^P(x_0,t_0;r'))} \leq (1+Cc)A(R/C_0,\widetilde{r},r')^q E_{\widetilde{r},C_0Q}(\Phi,\Psi)^{q/q'}. \] The observation $E_{\widetilde{r},C_0Q}(\Phi^{(q)},\Psi^{(q)}) \leq E_{\widetilde{r},C_0Q}(\Phi,\Psi)$ together with Definition \ref{d5.2} yields \[ \| \Phi^{(q)} \Psi^{(q)} \|^q_{L^q_tL^p_x(q\cap C^P(x_0,t_0;r'))} \leq A(R/C_0,\widetilde{r}, r')^{q} E(\Phi^{(q)})^{1/2}E(\Psi^{(q)})^{1/2} E_{\widetilde{r},C_0Q}(\Phi,\Psi)^{q/q'} . \] But then summing up followed by Cauchy-Schwarz and (\ref{eq4.6}) will yield the desired result. \end{proof} We combine these two propositions to obtain the following corollary. \begin{corollary} Let $R\geq 2^{NC_1}$ and $r\geq R^{1/2+4/N}$. Then we have \[A(R,r,C_0(1+r))\leq (1+Cc)\overline{A}(R)+c^{-C}\] for any $0< c\leq 2^{-C_0}$. \end{corollary} \begin{proof} We can assume that $r < C_0^CR$ since the claim otherwise follows from Proposition \ref{p5.5}. Let $J$ be the least integer such that $r\geq C_0^{-J}C_0^CR$. Since $r\geq R^{1/2+4/N}$, this implies $J \lesssim \log r$. Define $r_0:=r>r_1>\ldots>r_J$ recursively by $r_{j+1}=r_j(1-Cr_j^{-1/3N})$. The sequence $\{r_j\}_{j=0}^J$ decreases slowly, and has only about $\log r$ terms, thus $r_J \approx r$. For $0 < j \leq J$ define $c_j:=C_0^{-1}cC_0^{(j-J)\epsilon/4C}$. Using these values of $c_j$ and $r_j$ we iterate Proposition \ref{p5.6} to obtain \[A(R,r,C_0(1+r)) \leq (1+Cc)A(R/C_0^J,r_J,C_0(1+r))+c^{-C}.\] Now we can use Proposition \ref{p5.5}, which, applied to the right-hand side, yields the desired result. \end{proof} Now we are in a position to show (\ref{eq2.2}). Combining the last result with Proposition \ref{p5.3} and setting $c=2^{-C_1}$ one sees that \[A(R)\leq (1-C_0^{-C}) \overline{A}(R) +2^{CC_1}\] holds if $R \geq 2^{NC_1}$. Combining with (\ref{cn}) this extends to $R \geq 2C_02^{C_1}$. Using (\ref{pba}) we further extend it to $R \geq 2^{C_1/2}$, and hence we can take the supremum on the left-hand side to get \[\overline{A}(R)\leq (1-C_0^{-C}) \overline{A}(R) +2^{CC_1}\] for all $R\geq 2^{C_1/2}$. From this clearly (\ref{eq2.2}) follows.
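For concreteness, the final absorption step can be spelled out as follows (a brief sketch, using only the last displayed inequality and the fact that $\overline{A}(R)$ is finite, e.g. by the trivial polynomial bound already used above): rearranging $\overline{A}(R)\leq (1-C_0^{-C}) \overline{A}(R) +2^{CC_1}$ gives \[ C_0^{-C}\,\overline{A}(R)\leq 2^{CC_1}, \qquad \text{that is} \qquad \overline{A}(R)\leq C_0^{C}2^{CC_1}, \] uniformly for all $R\geq 2^{C_1/2}$, from which (\ref{eq2.2}) follows as claimed.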
\section*{Introduction} When back in 1888 Gregorio Ricci-Curbastro and his pupil Tullio Levi-Civita had to found the geometry of absolute differential calculus, having no torsion was helpful because what will later be called Levi-Civita connection was symmetric and entirely written in terms of the Riemannian metric: the principle for which relativistic symmetries imposed at a differential level require the introduction of a connection could be easily implemented upon the definition of the symmetric metric Levi-Civita connection given in terms of the Riemannian metric itself. However, some time later, the absence of torsion was recognized more as an assumption to simplify calculations than as a necessary constraint, and arguments of generality insisted for having the simplest Levi-Civita connection completed by the presence of the Cartan torsion tensor: if the principle for which relativistic symmetries imposed at a differential level require the introduction of a connection is taken in its utmost generality then the connection is not symmetric and thus torsion does not vanish, the extension of Riemann metric geometry as to include Cartan torsion being the Riemann-Cartan geometry. The fact that there is no a priori reason to neglect torsion does not imply that there cannot be posteriori reasons implying zero torsion, and in fact when in 1916 Albert Einstein had to construct the theory of gravitation forcing torsion to vanish was insightful, because what will later be known as Einstein tensor was symmetric and divergenceless; such a physical quantity was essential since at that time physics described only systems with energy densities symmetric and divergenceless: the link between spacetime and matter could be realized by the identification of the symmetric divergenceless Einstein tensor and the symmetric divergenceless energy density known as Einstein field equations. However, few decades after that, the vanishing torsion should not have been welcomed any longer, as there arose reasons to consider Einstein tensor no longer symmetric; the reason to need a more general geometry was that for the first time physics had a system with non-symmetric energy density and a spin density: if the demand of a link between spacetime and matter had to be extended to this more general instance then there should have been an identification of a non-symmetric Einstein tensor and the non-symmetric energy density in addition to another relationship linking torsion to the spin density in an extended system of complete Einstein-Sciama--Kibble field equations \cite{sa-si}. In his time, Planck commented that a new scientific idea does not triumph by convincing its opponents but rather because its opponents eventually die, and a new generation grows familiar with the idea; still, torsion has not found its place beside curvature in the geometric theory of gravitation: the reason for this fact is that, if on the one hand, torsion completes the Einstein gravitation without spoiling its predictions, on the other hand, this is so because the Einstein-Sciama--Kibble gravitation as it is considered now has effects that are negligible except at the Planck scale. This makes torsion a great gift of little value. 
Although we clearly maintain that we cannot have torsional effects relevant at scales at which we know that torsional contributions have to be negligible, nevertheless if torsional effects were relevant soon beyond these scales it would be interesting to see what would follow: a torsion relevant soon after the Fermi scale means that the torsion coupling should not occur with the strength we have always been assuming, which ultimately means that in the history of torsion somewhere, somehow something about the torsion coupling constant has been overlooked; the simplest intuition may start from the remark that, because within the Einstein-Sciama--Kibble gravitational field equations there is not the identification but only the proportionality between curvature and energy and between torsion and spin, and since the curvature and torsion are independent fields, then curvature and torsion could couple to energy and spin with two different coupling constants. So far as we are concerned, the first who discussed the possibility of having two different coupling constants was Kaempffer, but although he clearly explained that such a modification could and should be achieved formally, he did not discuss how to accomplish this concretely \cite{k}. In the present paper, our purpose is to consider the Einstein-Sciama--Kibble gravitational field theory as modified by Kaempffer to implement the coupling between geometry and physics with two coupling constants, giving the explicit form of the field equations with the two different coupling constants. \section{Kinematical Background Symmetries} In this first section we simply recall the foundation of the Riemannian metric geometry extended so as to include the Cartan torsion tensor, the RC geometry. First, we recall the purely metric case: the Riemann geometry is based on the Riemannian metric, defining the symmetric Levi-Civita connection $\Lambda^{\alpha}_{\mu\nu}$ for the Levi-Civita metric covariant derivative $\nabla_{\mu}$ and the Riemann metric curvature tensor $R^{\alpha}_{\phantom{\alpha}\rho\nu\sigma}$ with one independent contraction $R^{\alpha}_{\phantom{\alpha}\rho\alpha\sigma}\!=\!R_{\rho\sigma}$ called the Ricci metric curvature tensor, itself having a single contraction $R_{\rho\sigma}g^{\rho\sigma}\!=\!R$ called the Ricci metric curvature scalar; these will be obtained as the torsionless limit of more general quantities which we will next introduce. In fact if one wishes to pursue the same path Riemann followed but in the most general case in which there is the Cartan torsion tensor, then one is taken to the Riemann-Cartan geometry, where a general connection $\Gamma^{\alpha}_{\mu\nu}$ allows one to define the covariant derivative $D_{\mu}$ containing differential information, and as the most general connection is not symmetric \begin{eqnarray} &Q^{\alpha}_{\phantom{\alpha}\mu\nu}=\Gamma^{\alpha}_{\mu\nu}-\Gamma^{\alpha}_{\nu\mu} \label{Cartan} \end{eqnarray} is in general not zero and is known as the Cartan torsion tensor; we assume the complete antisymmetry of the Cartan torsion tensor $Q_{[\alpha\mu\rho]}\!
=\!6Q_{\alpha\mu\rho}$ and the covariant constancy of the metric tensor $Dg\!=\!0$ known under the name of metricity condition, the first constraint ensuring the existence of a unique symmetric part of the connection that can be made to vanish at a point of a given coordinate system, while the second condition ensures that this symmetric part of the connection can be made to vanish and the metric flattened in the same neighborhood of the same coordinate system. The relationship of this decomposition for the most general connection with the principle of equivalence and causality has been investigated in sets of related works, as it has been discussed in references \cite{ha,xy,a-l,m-l,f/1a,f/1b}. From the most general connection $\Gamma^{\alpha}_{\mu\nu}$ and its derivatives we can also define \begin{eqnarray} &G^{\mu}_{\phantom{\mu}\rho\sigma\pi}=\partial_{\sigma}\Gamma^{\mu}_{\rho\pi} -\partial_{\pi}\Gamma^{\mu}_{\rho\sigma} +\Gamma^{\mu}_{\lambda\sigma}\Gamma^{\lambda}_{\rho\pi} -\Gamma^{\mu}_{\lambda\pi}\Gamma^{\lambda}_{\rho\sigma} \label{Riemann} \end{eqnarray} as the Riemann curvature tensor, with one contraction $G^{\alpha}_{\phantom{\alpha}\rho\alpha\sigma} \!=\!G_{\rho\sigma}$ that is called the Ricci curvature tensor, whose only contraction $G_{\rho\sigma}g^{\rho\sigma}\!=\!G$ is called the Ricci curvature scalar: both Cartan torsion and Riemann curvature tensors vanish if and only if a global coordinate system exists in which the connection vanishes. The RC geometry can be written in the world formalism by defining the dual bases of orthonormal tetrads $\xi^{a}_{\sigma}$ and $\xi_{a}^{\sigma}$ together with the introduction of the spin-connection $\Gamma^{i}_{j\mu}$ defining the covariant derivative $D_{\mu}$ that extends the differential properties to this formalism; what corresponds to the metric tensor are the Minkowskian matrices $\eta_{aq}$ and $\eta^{aq}$ as is known: the previously introduced formalism of coordinate indices and the presently defined formalism of world indices are made equivalent upon the requirement of the covariant constancy of the tetrads and the Minkowskian matrices $D\xi=0$ and $D\eta=0$, which we will call formalism-compatibility conditions. Although in this formalism it is not possible to define torsion, the torsion tensor (\ref{Cartan}) can be written as \begin{eqnarray} &-Q^{a}_{\phantom{a}\mu\nu}=\partial_{\mu}\xi^{a}_{\nu}-\partial_{\nu}\xi^{a}_{\mu} +\Gamma^{a}_{j\mu}\xi^{j}_{\nu}-\Gamma^{a}_{j\nu}\xi^{j}_{\mu} \label{Cartangauge} \end{eqnarray} identically, as it is easy to check with a straightforward computation. Now considering the spin-connection it is possible to define \begin{eqnarray} &G^{a}_{\phantom{a}b\sigma\pi} =\partial_{\sigma}\Gamma^{a}_{b\pi}-\partial_{\pi}\Gamma^{a}_{b\sigma} +\Gamma^{a}_{j\sigma}\Gamma^{j}_{b\pi}-\Gamma^{a}_{j\pi}\Gamma^{j}_{b\sigma} \label{Riemanngauge} \end{eqnarray} which is the Riemann curvature tensor written in this formalism. We also recall that it is possible to define a geometry of complex fields, with gauge-connection $A_{\mu}$ defining gauge-covariant derivatives $D_{\mu}$ that extend the differential properties to complex fields. Here no analogue of torsion is defined. The analogue of the curvature is given in terms of the gauge-connection as \begin{eqnarray} &F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu} \label{Maxwellgauge} \end{eqnarray} called the Maxwell tensor; the constant $q$ appearing in the gauge-covariant derivative is called the charge of the complex field.
Writing the RC geometry in the world indices formalism has the advantage that the transformation laws are now Lorentz transformations of explicit structure that can also be written in the complex representation, in which the inclusion of the Maxwell geometry would fit perfectly: this can be done after the introduction of the matrices $\boldsymbol{\gamma}_{a}$ satisfying $\{\boldsymbol{\gamma}_{i},\boldsymbol{\gamma}_{j}\}=2\boldsymbol{\mathbb{I}}\eta_{ij}$ by defining the set of matrices $\boldsymbol{\sigma}_{ab}$ given by $[\boldsymbol{\gamma}_{i},\boldsymbol{\gamma}_{j}]=4\boldsymbol{\sigma}_{ij}$ and satisfying $\{\boldsymbol{\gamma}_{i},\boldsymbol{\sigma}_{jk}\}=i\varepsilon_{ijkq}\boldsymbol{\gamma}\boldsymbol{\gamma}^{q}$, which can be proven to be a set of complex generators of the infinitesimal Lorentz complex transformation we need, called the spinorial transformation, and then the spinor-connection $\boldsymbol{A}_{\mu}$ defines the spinor-covariant derivative $\boldsymbol{D}_{\mu}$ that contains the information about the dynamics of the spinor fields, of which we are going to consider the simplest $\frac{1}{2}$-spin spinor field alone; the spinorial constancy of the matrices $\boldsymbol{\gamma}_{j}$ is implemented automatically. No analogue of torsion is defined. The analogue of the Riemann curvature is given in terms of the spinor-connection as \begin{eqnarray} &\boldsymbol{F}_{\sigma\pi} =\partial_{\sigma}\boldsymbol{A}_{\pi}-\partial_{\pi}\boldsymbol{A}_{\sigma} +[\boldsymbol{A}_{\sigma},\boldsymbol{A}_{\pi}] \label{RiemannMaxwellgauge} \end{eqnarray} which is a tensorial spinor antisymmetric in the tensorial indices. As a final comment, we remark that this kinematic background has been constructed by requiring only the implementation of general symmetry principles for the underlying geometry, where once the rototranslations are gauged then the tetrad-basis and spin-connection are the potentials while the Cartan and Riemann tensors are the strengths of the gravitational field according to (\ref{Cartangauge}-\ref{Riemanngauge}), as it has been demonstrated in references \cite{h,h-h-k-n}, analogously to the fact that once the phase transformation is gauged then the gauge field is the potential while the Maxwell tensor is the strength of the electrodynamic field according to (\ref{Maxwellgauge}). \section{Dynamical Field Equations} Having settled the RC kinematics, next we are going to consider the Einsteinian purely metric gravity completing it so as to include the Sciama--Kibble torsional sector, obtaining the known ESK theory; then we consider the suggestion of Kaempffer to enlarge the ESK model so as to have both torsion and metric entering with their own coupling constants, developing what we call the $\mathrm{ESK}^{2}$ theory. First of all, we recall the historical path: when Einstein required torsion to vanish he was motivated by the fact that in doing so the Bianchi identities in their contracted form $\nabla_{\mu}\!\!\left(R^{\mu\nu}\!-\!\frac{1}{2}g^{\mu\nu}R\!-\!\lambda g^{\mu\nu}\right)\!\equiv\!0$ suggested that the so-called Einstein tensor $R_{\mu\nu}\!-\!\frac{1}{2}g_{\mu\nu}R$ would be symmetric and divergenceless like all energy density tensors $T_{\mu\nu}$ known at that time; in searching for field equations linking geometrical quantities on the one hand and material fields on the other, he was led to set $R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}R-\lambda g_{\mu\nu}=8\pi kT_{\mu\nu}$ in terms of some proportionality constant $k$ later acknowledged to be the Newton constant.
These field equations could also have been obtained through the variation of the lagrangian density that is given in terms of the Ricci metric curvature scalar alone, which is the only possible torsionless least-order derivative lagrangian density. Now if one wishes to pursue the same path that Einstein followed but in the most general case in which the torsion tensor is present, then the torsional Bianchi identities, which we call Jacobi-Bianchi identities to distinguish them, can be obtained, and when they are written in their fully contracted form it is straightforward to see that they can be worked out to give the geometrical identities \begin{eqnarray} &D_{\rho}Q^{\rho\mu\nu}\!-\!\left(G^{\mu\nu}-\frac{1}{2}g^{\mu\nu}G-\lambda g^{\mu\nu}\right) \!+\!\left(G^{\nu\mu}-\frac{1}{2}g^{\nu\mu}G-\lambda g^{\nu\mu}\right)\equiv0\\ &D_{\mu}\!\!\left(G^{\mu\nu}\!-\!\frac{1}{2}g^{\mu\nu}G\!-\!\lambda g^{\mu\nu}\right) \!\!-\!\!\left(G_{\rho\beta}\!-\!\frac{1}{2}g_{\rho\beta}G\! -\!\lambda g_{\rho\beta}\right)\!Q^{\beta\rho\nu} \!\!+\!\frac{1}{2}G^{\mu\rho\beta\nu}Q_{\beta\mu\rho}\!\equiv\!0 \end{eqnarray} and one should also take into account the fact that an additional spin density tensor $S_{\lambda\mu\nu}$ is present beside an energy density tensor $T_{\mu\nu}$ that is not symmetric any longer; in searching for field equations linking geometrical quantities on the one hand and material fields on the other, we may still define the Einstein tensor in the same way and eventually set the field equations given by \begin{eqnarray} &Q^{\rho\mu\nu}=-16\pi kS^{\rho\mu\nu}\\ &G^{\mu}_{\phantom{\mu}\nu}-\frac{1}{2}\delta^{\mu}_{\nu}G -\lambda\delta^{\mu}_{\nu}=8\pi kT^{\mu}_{\phantom{\mu}\nu} \end{eqnarray} so as to convert the above identities into the conservation laws \begin{eqnarray} &D_{\rho}S^{\rho\mu\nu}+\frac{1}{2}\left(T^{\mu\nu}-T^{\nu\mu}\right)\equiv0\\ &D_{\mu}T^{\mu\nu}+T_{\rho\beta}Q^{\rho\beta\nu}-S_{\mu\rho\beta}G^{\mu\rho\beta\nu}\equiv0 \end{eqnarray} which are to be valid once the matter field equations are assigned. These field equations may be obtained through the variation of the lagrangian density given in terms of the Ricci scalar $\mathscr{L}\!=\!\frac{1}{16\pi k}(G\!+\!2\lambda)$, which is the simplest torsional completion of the least-order derivative lagrangian density.
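As a quick consistency check of the first conservation law (a sketch of the substitution, using only the equations displayed above): since the metric and cosmological terms cancel between the two brackets, the first fully contracted Jacobi-Bianchi identity reduces to $D_{\rho}Q^{\rho\mu\nu}-G^{\mu\nu}+G^{\nu\mu}\equiv0$; inserting the field equations $Q^{\rho\mu\nu}=-16\pi kS^{\rho\mu\nu}$ and $G^{\mu\nu}-\frac{1}{2}g^{\mu\nu}G-\lambda g^{\mu\nu}=8\pi kT^{\mu\nu}$ then gives \begin{eqnarray} \nonumber &-16\pi k\,D_{\rho}S^{\rho\mu\nu}-8\pi k\left(T^{\mu\nu}-T^{\nu\mu}\right)=0 \end{eqnarray} which, divided by $-16\pi k$, is precisely the first conservation law above; the second conservation law follows in an analogous way from the second identity.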
However, although this system of field equations has the Einstein equations completed to include torsion, nevertheless this is only the most straightforward but not yet the most general of all the possible enlargements in which torsion is present, with a spin density tensor $S_{\lambda\mu\nu}$ and energy density tensor $T_{\mu\nu}$ not symmetric; in searching for field equations linking geometrical quantities on the one hand and material fields on the other, we have that the field equations given by the following \begin{eqnarray} &Q^{\rho\mu\nu}=-aS^{\rho\mu\nu}\\ &\frac{b}{2a}\!\left(\frac{1}{4}\delta^{\mu}_{\nu}Q^{2} \!-\!\frac{1}{2}Q^{\mu\alpha\sigma}Q_{\nu\alpha\sigma} \!+\!D_{\rho}Q^{\rho\mu}_{\phantom{\rho\mu}\nu}\right) \!+\!\left(G^{\mu}_{\phantom{\mu}\nu}\!-\!\frac{1}{2}\delta^{\mu}_{\nu}G -\lambda\delta^{\mu}_{\nu}\right)\!=\!\left(\frac{b+a}{2}\right)T^{\mu}_{\phantom{\mu}\nu} \end{eqnarray} are in fact the most general for which the above Jacobi-Bianchi identities can be used to obtain the validity of the above conservation laws \begin{eqnarray} &D_{\rho}S^{\rho\mu\nu}+\frac{1}{2}\left(T^{\mu\nu}-T^{\nu\mu}\right)\equiv0\\ &D_{\mu}T^{\mu\nu}+T_{\rho\beta}Q^{\rho\beta\nu}-S_{\mu\rho\beta}G^{\mu\rho\beta\nu}\equiv0 \end{eqnarray} holding when the matter field equations are assigned. To see that these are the most general field equations we just have to notice that they may be obtained through the variation of the lagrangian density given in terms of the Ricci scalar plus torsional squared contributions, so that torsion is both implicitly and explicitly present beside the curvature scalar, in a lagrangian density given by the form $\mathscr{L}\!=\!-\frac{1}{4a}Q^{2}\!+\!\frac{1}{(a+b)}(R\!+\!2\lambda)\!\equiv\! -\frac{b}{4a(a+b)}Q^{2}\!+\!\frac{1}{(a+b)}(G\!+\!2\lambda)$ which is the most general completely antisymmetric torsion completion of the least-order derivative lagrangian density as it can be proven quite straightforwardly. 
The next step would then consist in the inclusion of the Maxwell field, for which we may follow a path that is analogous to the previous one, by letting some geometrical identities in the case of gauge fields suggest the form that the gauge field equation should have, and in order to do so we can use the commutator of the gauge curvature taken in its fully contracted form as \begin{eqnarray} &D_{\rho}\left(D_{\sigma}F^{\sigma\rho}+\frac{1}{2}F_{\alpha\mu}Q^{\alpha\mu\rho}\right)\equiv0 \end{eqnarray} and considering that the current vector $J_{\mu}$ is also to be accounted for; in searching for field equations linking geometrical quantities and material fields, we have the field equations for the completely antisymmetric torsion and curvature tensors as \begin{eqnarray} &Q^{\rho\mu\nu}=-aS^{\rho\mu\nu} \label{torsion-spin}\\ \nonumber &\frac{b}{2a}\left(\frac{1}{4}\delta^{\mu}_{\nu}Q^{2} -\frac{1}{2}Q^{\mu\alpha\sigma}Q_{\nu\alpha\sigma} +D_{\rho}Q^{\rho\mu}_{\phantom{\rho\mu}\nu}\right) +\left(G^{\mu}_{\phantom{\mu}\nu}-\frac{1}{2}\delta^{\mu}_{\nu}G -\lambda\delta^{\mu}_{\nu}\right)+\\ &+\left(\frac{b+a}{2}\right)\left(F^{\rho\mu}F_{\rho\nu}-\frac{1}{4}\delta^{\mu}_{\nu}F^{2}\right) =\left(\frac{b+a}{2}\right)T^{\mu}_{\phantom{\mu}\nu} \label{curvature-energy} \end{eqnarray} together with the field equations for the gauge fields as \begin{eqnarray} &\frac{1}{2}F_{\alpha\mu}Q^{\alpha\mu\rho}+D_{\sigma}F^{\sigma\rho}=J^{\rho} \label{gauge-current} \end{eqnarray} so as to convert the above geometrical identities into the conservation laws for the completely antisymmetric spin and energy densities given by \begin{eqnarray} &D_{\rho}S^{\rho\mu\nu}+\frac{1}{2}\left(T^{\mu\nu}-T^{\nu\mu}\right)\equiv0 \label{conservationspin}\\ &D_{\mu}T^{\mu\nu} +T_{\rho\beta}Q^{\rho\beta\nu}-S_{\mu\rho\beta}G^{\mu\rho\beta\nu}+J_{\rho}F^{\rho\nu}\equiv0 \label{conservationenergy} \end{eqnarray} and also for the current given by the expression \begin{eqnarray} &D_{\rho}J^{\rho}=0 \label{conservationcurrent} \end{eqnarray} which are valid once the matter field equations are assigned. These field equations can also be obtained by varying the previous gravitational lagrangian density plus the electrodynamic lagrangian density $\mathscr{L}\!=\!-\frac{1}{4}F^{2}$, giving the most comprehensive geometric least-order derivative lagrangian density, as is rather clear. This is the geometrical system of field equations defining the structure of the torsional-gravitational gauge interactions which has to be coupled to a given material content, and the completely antisymmetric spin and energy densities \begin{eqnarray} &S^{\rho\mu\nu} =\frac{i\hbar}{4}\overline{\psi}\{\boldsymbol{\gamma}^{\rho},\boldsymbol{\sigma}^{\mu\nu}\}\psi \label{spin}\\ &T^{\mu}_{\phantom{\mu}\nu} =\frac{i\hbar}{2}\left(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\nu}\psi -\boldsymbol{D}_{\nu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi\right) \label{energy} \end{eqnarray} together with the current given by the expression \begin{eqnarray} &J^{\rho}=q\hbar\overline{\psi}\boldsymbol{\gamma}^{\rho}\psi \label{current} \end{eqnarray} are precisely the conserved quantities we need once the spinor field has \begin{eqnarray} &i\hbar\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\psi-m\psi=0 \label{matterequations} \end{eqnarray} as the matter field equations themselves. These field equations can also be obtained by varying the previous geometric lagrangian density plus the Dirac lagrangian density $\mathscr{L}\!=\!
\frac{i\hbar}{2}(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\psi\!-\!\boldsymbol{D}_{\mu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi)\!-\!m\overline{\psi}\psi$ as the entire least-order derivative lagrangian density that can be written for this theory as a whole. Taken all together, we have that the most general coupling involving the completely antisymmetric torsion, present both implicitly through the connection and explicitly with quadratic terms beside the curvature scalar, plus gauge fields, for the Dirac matter content, is given by the completely antisymmetric torsion-spin coupling (\ref{torsion-spin}-\ref{spin}), the curvature-energy coupling (\ref{curvature-energy}-\ref{energy}) and the gauge-current coupling (\ref{gauge-current}-\ref{current}), that is, by the geometric field equations for the completely antisymmetric torsion and curvature tensors with the gauge field \begin{eqnarray} &Q^{\rho\mu\nu} =-a\frac{i\hbar}{4}\overline{\psi}\{\boldsymbol{\gamma}^{\rho},\boldsymbol{\sigma}^{\mu\nu}\}\psi \label{Sciama--Kibble}\\ \nonumber &\frac{b}{2a}\left(\frac{1}{4}\delta^{\mu}_{\nu}Q^{2} -\frac{1}{2}Q^{\mu\alpha\sigma}Q_{\nu\alpha\sigma} +D_{\rho}Q^{\rho\mu}_{\phantom{\rho\mu}\nu}\right) +\left(G^{\mu}_{\phantom{\mu}\nu}-\frac{1}{2}\delta^{\mu}_{\nu}G -\lambda\delta^{\mu}_{\nu}\right)+\\ &+\left(\frac{b+a}{2}\right)\left(F^{\rho\mu}F_{\rho\nu}-\frac{1}{4}\delta^{\mu}_{\nu}F^{2}\right) =\left(\frac{b+a}{2}\right) \frac{i\hbar}{2}\left(\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\nu}\psi -\boldsymbol{D}_{\nu}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi\right) \label{Einstein}\\ &\frac{1}{2}F_{\mu\nu}Q^{\mu\nu\rho}+D_{\sigma}F^{\sigma\rho} =q\hbar\overline{\psi}\boldsymbol{\gamma}^{\rho}\psi \label{Maxwell} \end{eqnarray} verified once the matter field equations (\ref{matterequations}) given by \begin{eqnarray} &i\hbar\boldsymbol{\gamma}^{\mu}\boldsymbol{D}_{\mu}\psi-m\psi=0 \label{Dirac} \end{eqnarray} are satisfied, as a direct calculation shows: notice that a completely antisymmetric torsion restricts the description to a completely antisymmetric spin, allowing only the simplest spinor field to be defined without constraints, or, conversely, requiring complete antisymmetry for torsion does not constitute any loss of generality for the complete antisymmetry of the spin since we want to study only the simplest spinor field; then the matter field equation (\ref{Dirac}) has the characteristic equation given simply by $n^{2}\!=\!0$, so that $n^{\mu}$ is light-like, the characteristic surfaces lie on the light-cone and causality is preserved. The fact that all degrees of freedom are accounted for by a causal matter field equation shows that the matter field equation is well defined, as it has been discussed in \cite{f/2a,f/2b}.
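For completeness, the characteristic equation just quoted can be sketched as follows (a standard computation, using only the Clifford relation given earlier, which for world indices reads $\{\boldsymbol{\gamma}^{\mu},\boldsymbol{\gamma}^{\nu}\}=2\boldsymbol{\mathbb{I}}g^{\mu\nu}$): replacing the derivative in the principal part of (\ref{Dirac}) by the normal covector $n_{\mu}$ gives $\boldsymbol{\gamma}^{\mu}n_{\mu}\psi=0$, and acting once more with $\boldsymbol{\gamma}^{\nu}n_{\nu}$ yields $n^{2}\psi=0$, so that non-trivial characteristics require $n^{2}=0$ and the characteristic surfaces lie on the light-cone.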
As a final remark, we notice that these dynamical fields have been endowed with equations for the completely antisymmetric torsion-spin and curvature-energy coupling with gauge-current coupling for a geometry filled with Dirac matter fields, where the completely antisymmetric torsion and gravitational constants $a$ and $b$ are accompanied by the electric charge $q$ and the Planck constant $\hbar$ altogether accounting for as many free coupling constants as independent fields, since the cosmological constant $\lambda$ and the mass $m$ of the spinor have to be seen as parameters; again we stress that if we wish to develop a geometry of Dirac matter fields whose completely antisymmetric spin turns into a completely antisymmetric torsion for least-order derivative field equations then this theory is the only one possible, but if one releases the assumption of working with Dirac matter fields then there no longer is a complete antisymmetry of the spin that is to be reflected into the complete antisymmetry of torsion and if this is accompanied by a further releasing of the hypothesis of having only the least-order derivative field equations, then more terms quadratic in the curvature and quartic in torsion can be added in enlarged actions such as those for instance of reference \cite{Baekler:2011jt}, or with even more curvature and torsion in yet even more enlarged actions, including more and more coupling constants. However, in order to see what happens to the Dirac field with completely antisymmetric spin coupled to the completely antisymmetric torsion in least-order derivative field equations, our restrictions will work just fine. \subsection{The Self-Interactions for Matter Fields} In the system of field equations, torsional quantities can be decomposed in terms of torsionless quantities and torsional contributions that can be converted through the torsion-spin coupling field equation into spinorial potentials in all field equations starting from the curvature-energy coupling field equations as \begin{eqnarray} \nonumber &\left(R_{\mu\nu}+\lambda g_{\mu\nu}\right) +\left(\frac{a+b}{2}\right)\left(F^{\rho}_{\phantom{\rho}\mu}F_{\rho\nu}-\frac{1}{4}g_{\mu\nu}F^{2}\right) =-\left(\frac{a+b}{2}\right)\frac{m}{2}\overline{\psi}\psi g_{\mu\nu}+\\ &+\left(\frac{a+b}{2}\right) \frac{i\hbar}{4}\left(\overline{\psi}\boldsymbol{\gamma}_{\mu}\boldsymbol{\nabla}_{\nu}\psi +\overline{\psi}\boldsymbol{\gamma}_{\nu}\boldsymbol{\nabla}_{\mu}\psi -\boldsymbol{\nabla}_{\nu}\overline{\psi}\boldsymbol{\gamma}_{\mu}\psi -\boldsymbol{\nabla}_{\mu}\overline{\psi}\boldsymbol{\gamma}_{\nu}\psi\right) \label{gravitation} \end{eqnarray} which are exactly the field equations we would have had without torsion and for the gauge-current coupling field equations given by \begin{eqnarray} &\nabla_{\sigma}F^{\sigma\rho}=q\hbar\overline{\psi}\boldsymbol{\gamma}^{\rho}\psi \label{electrodynamics} \end{eqnarray} again as those we would have had with no torsion, with matter field equations \begin{eqnarray} \nonumber &i\hbar\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi+\frac{3a}{16} \hbar^{2}\overline{\psi}\boldsymbol{\gamma}^{\mu}\boldsymbol{\gamma}\psi \boldsymbol{\gamma}_{\mu}\boldsymbol{\gamma}\psi-m\psi\equiv\\ \nonumber &\equiv i\hbar\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi-\frac{3a}{16} \hbar^{2}\overline{\psi}\boldsymbol{\gamma}^{\mu}\psi\boldsymbol{\gamma}_{\mu}\psi-m\psi\equiv\\ &\equiv i\hbar\boldsymbol{\gamma}^{\mu}\boldsymbol{\nabla}_{\mu}\psi 
-\frac{3a}{16}\hbar^{2}\left(\overline{\psi}\psi\mathbb{I} -\overline{\psi}\boldsymbol{\gamma}\psi\boldsymbol{\gamma}\right)\psi-m\psi=0 \label{matter} \end{eqnarray} as in the torsionless case but complemented with self-interactions for spinors having the Nambu-Jona--Lasinio structure, and therefore the torsionally-induced self-interactions for a given spinor are actually chiral interactions between the projections of each spinor, within the Dirac field equations. In the non-gravitational non-relativistic limit the temporal and spatial components are split, so that, for the electric and magnetic fields, and taking the standard representation where the spinor field has only the large component $\phi$, we have \begin{eqnarray} &\mathrm{div}\vec{E}=q\hbar\phi^{\dagger}\phi\\ &\mathrm{rot}\vec{B}-\frac{\partial\vec{E}}{\partial t}=0 \end{eqnarray} with matter field equations given by \begin{eqnarray} \nonumber &i\hbar\frac{\partial\phi}{\partial t} +\frac{\hbar^{2}}{2m}\boldsymbol{\nabla}^{2}\phi -\frac{q\hbar^{2}}{2m}\vec{B}\cdot\vec{\boldsymbol{\sigma}}\phi -\frac{3a}{16}\hbar^{2}\phi^{\dagger}\vec{\boldsymbol{\sigma}}\phi \cdot \vec{\boldsymbol{\sigma}}\phi-m\phi\equiv\\ &\equiv i\hbar\frac{\partial\phi}{\partial t} +\frac{\hbar^{2}}{2m}\boldsymbol{\nabla}^{2}\phi -\frac{q\hbar^{2}}{2m}\vec{B}\cdot\vec{\boldsymbol{\sigma}}\phi -\frac{3a}{16}\hbar^{2}\phi^{\dagger}\phi\phi-m\phi=0 \label{matterapproximated} \end{eqnarray} where the presence of torsion is manifested as semispinorial self-interactions of the Ginzburg-Landau form, and these are known as the Pauli field equations. As a final remark, we have to notice that when we consider a static configuration of total energy $E$ it is possible to have situations where the semispinor has a single component $\phi^{\dagger}\!=\!(u^{\ast},0)$ or $\phi^{\dagger}\!=\!(0,u^{\ast})$ with the matter field equation \begin{eqnarray} &\frac{\hbar^{2}}{2m}\boldsymbol{\nabla}^{2}u-\frac{3a}{16}\hbar^{2}u^{\ast}u u+Eu=0 \label{matterapproximatedlimit} \end{eqnarray} in which the torsion is eventually expressed in the guise of semispinorial self-interactions of the Gross-Pitaevskii form, in the Schr\"{o}dinger equation.
We have to notice that the combination $a\!+\!b\!=\!16\pi G_{N}$ is to be interpreted in terms of the gravitational Newton constant, the constant $q$ is of course the electric charge, and the constant $\frac{3a\hbar^{2}}{16}$ is still totally undetermined: in the case in which the torsional constant is taken to be positive, therefore giving rise to a repulsion in the non-linear potentials, since for antialigned-spin matter distributions the overall non-linear potential vanishes, the non-linear potential that in general keeps two matter distributions apart fails to do so, allowing linear superposition, precisely in the case of opposite helicities, consequently entailing a dynamical form of the exclusion principle, as it has been discussed in \cite{f/3}. In particular, since for single-handed massless fields the non-linear potential vanishes, neutrinos would not obey the exclusion principle, as suggested in \cite{d-s}. The fact that the torsional constant is positive also implies that at high densities the dominant forces are repulsive, whereas the fact that the total energy can also be negative tells us that at low densities the dominant effect is attraction, and hence the whole potential is capable of giving rise to a dynamical symmetry breaking down to a stable equilibrium with a positive energy: the condition $3a\hbar^{2}\phi^{2}\!=\!16E$ is the non-trivial solution we have for bosonization and the eventual condensation in the theory of superconductivity, as is well known. Finally, notice that the discrete charge-conjugation given by $\psi\!\rightarrow\!\boldsymbol{\gamma}^{2}\psi^{\ast}$ is not a symmetry for the non-linear Dirac field equations, as is also clear from the fact that this discrete transformation in the non-relativistic limit would swap the small and large components, failing to transform a slow-speed field into another slow-speed field, contrary to what it should do, and therefore rendering it meaningless to apply such a transformation in the slow-speed case given by the non-linear Pauli field equations, and even more so for the slow-speed single-helicity non-linear Schr\"{o}dinger field equation. With what we have done so far, we have presented a theory that allows for the possibility of having the torsional interactions and coupling constant turned into spinorial non-linear potentials with a totally undetermined strength, a theory in which the non-linearities would become more relevant as the strength gets larger, within the Dirac matter field equation: in this theory the non-linear effects may be amplified by a larger strength, rendering them more likely to be detected, in the sense that in the restricted theory such non-linearities are tuned to the gravitational constant and remain beyond observation, although the fact that non-linear effects may be amplified by a larger strength does not imply that this will necessarily be the case for the dynamics of the Dirac matter fields. Because having torsion with a constant that is left undetermined would always allow for the possibility of setting it to any specific value experimentally, while fixing its constant immediately would forbid any eventual tuning, the wisest choice is to take torsion with its constant undetermined. And therefore we will have to let observation tell.
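As a minimal illustration of the condition quoted above (a sketch, not an additional result): for a spatially homogeneous static configuration the Laplacian term of (\ref{matterapproximatedlimit}) drops out and the equation reduces to $\left(E-\frac{3a}{16}\hbar^{2}u^{\ast}u\right)u=0$, whose non-trivial solution is precisely $3a\hbar^{2}u^{\ast}u\!=\!16E$, consistently requiring a positive energy when the torsional constant $a$ is positive.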
\section*{Conclusion} In the present paper, we have started from the ESK theory considering Kaempffer's speculation that it should be possible to incorporate two different coupling constants for the independent metric and torsion fields, giving to this $\mathrm{ESK}^{2}$ theory a concrete realization; we have obtained all the field equations, which have eventually been decomposed and rearranged in order to show that the presence of torsion and its coupling constant are converted into spinorial self-interactions displaying a free universal coupling constant within the Dirac matter field equations: we have shown that the non-linear potentials have the structure of the Nambu-Jona--Lasinio potential in the Dirac spinorial equation, approximated to the structure of the Ginzburg-Landau potential in the Pauli non-relativistic semi-spinorial equation, and then to the structure of the Gross-Pitaevskii potential in the Schr\"{o}dinger non-relativistic scalar-like equation. The main result of the present work is that the torsional spinorial self-interactions may give rise to the potentials related to a constant $a$ that in the ESK theory is the Newton constant, whose smallness suppresses all sorts of effects except at the Planck scale, whereas in the present $\mathrm{ESK}^{2}$ theory the constant is still undetermined and, if it is chosen properly, those effects may actually be relevant much beyond the Planck scales; this means that the single remaining argument against torsion, namely its alleged smallness, does not apply any longer, and torsionally-induced interactions potentially relevant at larger scales now have the right to be studied and their consequences investigated. Whether torsion is indeed relevant because its constant is found to be large, or whether despite everything torsion is in fact negligible because its constant happens to be small, nobody will know until detection.
\section{Current value for $V_{ud}$ } Superallowed $0^+ \rightarrow 0^+$ beta decay between isospin $T = 1$ nuclear analog states currently provides the most accurate determination of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element, $V_{ud}$. There are two reasons for this: First, there are many nuclear beta decays that could be chosen for study. By limiting the study to just those decays between $0^+$ analog states, only the vector component of the weak interaction is operative. Second, by limiting the study in this way, the Conserved Vector Current (CVC) hypothesis becomes useful. This hypothesis states that the strength of the vector component of the weak interaction, $G_V$, is a `true' constant and independent of the nucleus under study. This result provides a consistency check among the different nuclear decays studied. The hypothesis, however, is only operative in the isospin-symmetry limit. So one disadvantage is that a nuclear-structure dependent calculation of isospin-symmetry breaking is required, and the uncertainty associated with this is the subject of the second half of this report. For the moment, we note that this correction is small and is testable via the CVC hypothesis. The analysis proceeds as follows. The experimentally measured quantities, the $Q$-value, lifetime and branching ratio, are combined into an $ft$ value. To this, radiative and isospin-symmetry breaking corrections are added in defining a `corrected' ${\cal F}t$ value \begin{equation} {\cal F}t \equiv ft(1+\delta_R^{\prime})(1+\delta_{NS}-\delta_C) = \frac{K}{2 G_V^2 (1+\Delta_R^V)} . \label{Ftdef} \end{equation} Here the radiative correction is separated into three terms, $\delta_R^{\prime}$, $\delta_{NS}$ and $\Delta_R^V$, where $\delta_R^{\prime}$ and $\delta_{NS}$ depend on the nucleus under study, while $\Delta_R^V$ does not. Further, $\delta_R^{\prime}$ depends only trivially on the nucleus, through its total charge, $Z$, and the emitted electron's energy. But $\delta_{NS}$, like the isospin-symmetry breaking correction $\delta_C$, depends in its evaluation on the details of nuclear structure. Lastly, $K$ is a constant, $K/(\hbar c)^6 = 2 \pi^3 \hbar \ln 2/(m_e c^2)^5$, with $m_e$ the electron mass. Immediately one sees from the CVC hypothesis that a nucleus-independent $G_V$ value leads to the requirement that the ${\cal F}t$ value be nucleus-independent as well. This provides a demanding consistency check on the experimental measurements and the theoretical corrections. If the ${\cal F}t$ values are found to be statistically consistent with each other, then one is justified in taking an average value, $\overline{{\cal F}t}$, from which $G_V$ can be determined. In addition, via the relationship $V_{ud} = G_V / G_F$, where $G_F$ is the well-known weak-interaction strength constant for a purely leptonic decay, a value for $V_{ud}$ is obtained as well. From the 2009 survey of experimental data, Hardy and Towner \cite{HT09} determine the value of $\overline{{\cal F}t}$ to be \begin{equation} \overline{{\cal F}t} = 3071.81 \pm 0.83~{\rm s} \label{AvgFt} \end{equation} leading to \begin{equation} |V_{ud}| = 0.97425 \pm 0.00022 .~~~~~~~~~~[0 \rightarrow 0] \label{Vud00} \end{equation} Other methods of obtaining $V_{ud}$ -- see survey in \cite{TH10} -- are currently less accurate.
They are: \begin{itemize} \item {\it neutron decay}, for which \begin{equation} |V_{ud}| = 0.9743 \pm 0.0015 .~~~~~~~~~~~~[{\rm neutron}] \label{Vudneut} \end{equation} This value was presented at the CKM2010 Workshop by M\"{a}rkisch \cite{Ma10}. It is based on the 2010 Particle Data Group's analysis \cite{PDG10}, but updated for new lifetime measurements from Serebrov {\it et al}. \cite{Se05} and Pichlmaier {\it et al}. \cite{Pi10} and for preliminary beta-asymmetry measurements from PERKEO II \cite{Ab08} and UCNA \cite{Li10}. \item {\it $T = 1/2$ mirror transitions}, for which \begin{equation} |V_{ud}| = 0.9719 \pm 0.0017 ~~~~~~~~~~~~[{\rm mirror~transitions}] \label{Vudmirror} \end{equation} from Naviliat-Cuncic and Severijns \cite{NS09}. \item {\it pion beta decay}, for which \begin{equation} |V_{ud}| = 0.9742 \pm 0.0026 ~~~~~~~~~~~~[{\rm pion}] \label{Vudpion} \end{equation} using the branching ratio measured by the PIBETA group \cite{Po04}. \end{itemize} The CKM matrix is posited to be unitary. To date, the most demanding test of this comes from the sum of squares of the top-row elements, $|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2$, which should sum to one. Taking $|V_{us}|$ from the recent FlaviaNet report \cite{An10}, $|V_{us}| = 0.2253(9)$, and $|V_{ub}|$ from the Particle Data Group \cite{PDG10}, $|V_{ub}| = 3.39(44) \times 10^{-3}$, the unitarity sum becomes \begin{equation} |V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2 = 0.99990 \pm 0.00060 . \label{unit} \end{equation} This result shows unitarity to be fully satisfied to a precision of $0.06 \%$. Only $V_{us}$ and $V_{ud}$ contribute perceptibly to the uncertainty, and their contributions to the error budget are almost equal to one another. \section {Test of isospin-symmetry breaking correction} Let us return to the isospin-symmetry breaking correction, $\delta_C$. Its evaluation requires a nuclear-structure calculation. Although the role played by nuclear structure is relatively small, the precision currently reached by experiment is such that the theoretical uncertainties introduced with $\delta_C$ now dominate over the experimental uncertainties. Consequently, this correction has attracted a lot of attention recently. We offer a recommended set of values for this correction \cite{TH08,HT09}, but there are a growing number of alternative choices \cite{OB85,OB95,Sa96,Li09,Au09}. There has also been a claim, albeit unsupported by any detailed computations, that our calculations neglect a radial excitation term, which is purported to be important \cite{MS08}. To counterbalance that, however, there are two recent papers that confirm our result: one \cite{Gr10} does so based on a semi-empirical analysis of the data, while the other \cite{Sa10} quotes results from a Skyrme-density-functional-theory calculation in which simultaneous isospin and angular-momentum projection has been incorporated. Clearly it would be valuable if the various sets of $\delta_C$ corrections could be tested against the experimental data. Towner and Hardy \cite{TH10a} have suggested such a test, which is based on the acceptance of the CVC hypothesis. We start by rearranging Eq.~(\ref{Ftdef}) to read \begin{equation} \delta_C = 1 + \delta_{NS} - \frac{\overline{{\cal F}t}}{ft (1 + \delta_R^{\prime})} \label{dctest} \end{equation} where ${\cal F}t$ has been replaced by its average value. For any set of $\delta_C$ values to be acceptable, this equation must be satisfied. 
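As a concrete illustration of Eq.~(\ref{dctest}), the following minimal sketch evaluates its right-hand side, i.e.\ the $\delta_C$ value implied by the data for a single transition. The $ft$ value and the corrections used below are illustrative placeholders rather than survey values; only the trial $\overline{{\cal F}t}$ is set to the average quoted above.
\begin{verbatim}
# Minimal sketch of Eq. (dctest): the delta_C implied by experiment for a
# single transition, given a trial average corrected Ft value.  The ft value
# and corrections below are illustrative placeholders, not survey values.

def implied_delta_C(ft, delta_R_prime, delta_NS, Ft_bar):
    """Right-hand side of Eq. (dctest)."""
    return 1.0 + delta_NS - Ft_bar / (ft * (1.0 + delta_R_prime))

ft = 3040.0            # hypothetical ft value in seconds
delta_R_prime = 0.014  # hypothetical nucleus-dependent radiative correction
delta_NS = -0.0003     # hypothetical nuclear-structure radiative correction
Ft_bar = 3071.81       # trial average corrected Ft value in seconds

print(implied_delta_C(ft, delta_R_prime, delta_NS, Ft_bar))
\end{verbatim}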
For a series of $n$ superallowed transitions, one treats $\overline{{\cal F}t}$ as a single adjustable parameter and uses it to bring the $n$ results from the right-hand side of Eq.~(\ref{dctest}), which is based predominantly on the experimental $ft$ values ($\delta_{NS}$ is small, and $\delta_R^{\prime}$ unambiguous), into the best possible agreement with the corresponding $n$ calculated values for $\delta_C$. The normalized $\chi^2$, minimized by this process, then provides a figure of merit for that set of calculations. The recent $\delta_C$ calculations are described in \cite{TH10a} and are: \begin{itemize} \item Shell model with Saxon-Woods radial functions, SM-SW \cite{TH08}. \item Shell model with Hartree-Fock radial functions, SM-HF \cite{HT09}. \item Relativistic Hartree-Fock with the random phase approximation (RPA) and an effective interaction labelled PKO1, RHF-RPA \cite{Li09}. \item Relativistic Hartree with RPA and a density-dependent effective interaction, labelled DD-ME2, RH-RPA \cite{Li09}. \item Isovector monopole resonance model, IVMR \cite{Au09}. \end{itemize} We have applied the test to these five sets of model calculations. The resulting normalized $\chi^2$ for each least-squares fit -- expressed as $\chi^2/n_d$, where $n_d$ is the number of degrees of freedom -- is given in Table~\ref{t:chi2}. \begin{table}[t] \begin{center} \begin{tabular}{lccccc} & SM-SW & SM-HF & RHF-RPA & RH-RPA & IVMR \\ \hline & & & & & \\[-3mm] $\chi^2/n_d$ (Row 1 -- see text) & 1.2 & 8.3 & 7.2 & 6.0 & 48.0 \\ Confidence Level (\%) & 26 & 0 & 0 & 0 & 0 \\ $\chi^2/n_d$ (Row 3 -- see text) & 0.4 & 2.2 & 2.7 & 2.1 & 11.0 \\ $\chi^2/n_d$ (Row 4 -- see text) & 0.3 & 1.1 & 1.6 & 1.3 & 4.5 \\[2mm] \hline \end{tabular} \caption{Normalized $\chi^2/n_d$ obtained in the test described in the text for five sets of model calculations of $\delta_C$. From Ref. \cite{TH10a}.} \label{t:chi2} \end{center} \end{table} We give three sets of normalized $\chi^2$; they differ from one another in how the uncertainties are handled. Strictly speaking, the $\chi^2$ test only has an unambiguous interpretation if the errors considered are solely statistical. Thus, in the first row of Table~\ref{t:chi2}, we keep only statistical errors on the experimental $ft$ values and assign no errors to the theoretical quantities $\delta_R^{\prime}$, $\delta_{NS}$ and $\delta_C$. For this case, we can define a confidence level as \cite{PDG10} \begin{equation} CL = \int_{\chi_0^2}^{\infty} P_{n_d}(\chi^2) d \chi^2 \label{CL} \end{equation} where $P_{n_d}(\chi^2)$ is the $\chi^2$ probability distribution function for $n_d$ degrees of freedom, and $\chi_0^2$ is the minimum value of $\chi^2$ obtained in the fit for the particular model set of values for $\delta_C$. Loosely speaking, the larger the value of $CL$, the more acceptable are the values of $\delta_C$ in satisfying the CVC hypothesis. The $CL$ values are given in the second row of the Table. In the third row, we have added non-statistical errors to the radiative correction, while in the fourth row non-statistical errors are included for both the radiative and isospin-symmetry breaking corrections. The inclusion of non-statistical errors generally reduces the normalized $\chi^2$ of the fit, but the ranking of the models remains unaltered. The most obvious outcome of these analyses is that only one model, SM-SW, produces satisfactory agreement with CVC. All the others have confidence levels below $0.5 \%$. 
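The confidence level of Eq.~(\ref{CL}) is just the upper tail of the $\chi^2$ distribution, so it can be evaluated directly. The sketch below uses the Row-1 values of $\chi^2/n_d$ for SM-SW and SM-HF from Table~\ref{t:chi2}, but the number of degrees of freedom is an assumed placeholder rather than the value used in the actual fits.
\begin{verbatim}
# Sketch of Eq. (CL): the probability of obtaining a chi^2 at least as
# large as the fitted minimum chi^2_0, for n_d degrees of freedom.
# n_d below is an assumed placeholder, not the value from the actual fits.
from scipy.stats import chi2

def confidence_level(chi2_min, n_d):
    """Upper-tail integral of the chi^2 distribution (survival function)."""
    return chi2.sf(chi2_min, n_d)

n_d = 12                              # hypothetical degrees of freedom
for label, chi2_per_dof in (("SM-SW", 1.2), ("SM-HF", 8.3)):
    cl = confidence_level(chi2_per_dof * n_d, n_d)
    print(label, round(100 * cl, 1), "%")
\end{verbatim}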
It is somewhat surprising that SM-HF, with Hartree-Fock radial functions, does not do as well as SM-SW with Saxon-Woods radial functions. The problem is that the SM-HF calculations fail to give large enough $\delta_C$ values for the high-$Z$ cases of $^{62}$Ga and $^{74}$Rb. This has been noted before by Ormand and Brown \cite{OB95}. We have tried varying the Skyrme interaction used in the Hartree-Fock calculation -- to date we have sampled 12 interactions -- but they all fail in the high-$Z$ cases. The discrepancy appears to be inherent in the SM-HF model.
\section{Introduction}\label{sec:intro} A fundamental open question in knot theory is the question of when a crossing change on an oriented knot changes the isotopy class of the knot. A crossing disc for an oriented knot $K\subset S^3$ is an embedded disc $D\subset S^3$ such that $K$ intersects ${\rm int}(D)$ twice with zero algebraic intersection number. A crossing change on $K$ can be achieved by twisting $D$ or equivalently by performing appropriate Dehn surgery of $S^3$ along the crossing circle $\partial D$. The crossing is called nugatory if and only if $\partial D$ bounds an embedded disc in the complement of $K$. A non-nugatory crossing on a knot $K$ is called cosmetic if the oriented knot $K'$ obtained from $K$ by changing the crossing is isotopic to $K$. Clearly, changing a nugatory crossing does not change the isotopy class of a knot. The nugatory crossing conjecture (Problem 1.58 of Kirby's list \cite{Kirbylist}) asserts that the converse is true: if a crossing change on a knot $K$ yields a knot isotopic to $K$ then the crossing is nugatory. In other words, there are not any knots in $S^3$ that admit cosmetic crossings. In the case that $K$ is the trivial knot an affirmative answer follows from a result of Gabai \cite{gabai} and work of Scharlemann and Thompson \cite{st}. The conjecture is also known to hold for 2-bridge knots by work of Torisu \cite{torisu}, and for fibered knots by work of Kalfagianni \cite{kalfagianni}. For knots of braid index three a weaker form of the conjecture, requiring that the crossing change happens on a closed 3-braid diagram, is discussed by Wiley in \cite{3braids}. In this paper we study cosmetic crossings on genus one knots and we show that the Alexander polynomial and the homology of the double cover branching over the knot provide obstructions to cosmetic crossings. \begin{theorem} \label{general} Given an oriented genus one knot $K$ let $\Delta_K(t)$ denote the Alexander polynomial of $K$ and let $Y_K$ denote the double cover of $S^3$ branching over $K$. Suppose that $K$ admits a cosmetic crossing. Then \begin{enumerate} \item $K$ is algebraically slice. In particular, $\Delta_K(t) \doteq f(t) f(t^{-1})$, where $f(t)\in {\mathbb{Z}}[t]$ is a linear polynomial. \item The homology group $H_1(Y_K):=H_1(Y_K, {{\mathbb{Z}}})$ is a finite cyclic group. \end{enumerate} \end{theorem} For knots that admit unique (up to isotopy) minimal genus Seifert surfaces we have the following stronger result. \begin{named}{Theorem \ref{Thm:unique_minimal_genus}} Let $K$ be an oriented genus one knot with a unique minimal genus Seifert surface, which admits a cosmetic crossing. Then $\Delta_K(t) \doteq 1$. \end{named} Given a knot $K$ let $D_{+}(K, n)$ denote the $n$-twisted, positive-clasped Whitehead double of $K$ and let $D_{-}(K, n)$ denote the $n$-twisted, negative-clasped Whitehead double of $K$. Theorems \ref{general} and \ref{Thm:unique_minimal_genus} can be used to prove the nugatory crossing conjecture for several classes of Whitehead doubles. For example, Theorem \ref{Thm:unique_minimal_genus}, combined with results of Lyon and Whitten \cite{Lyons, whitten}, gives the following. (See Section \ref{sec:examples} for more results in this vein.) \begin{corollary} If $K$ is a non-cable knot, then, for every $n\neq 0$, $D_{\pm }(K, n)$ admits no cosmetic crossings. 
\end{corollary} Combining Theorem \ref{general} with a result of Trotter \cite{trotter}, we prove the nugatory crossing conjecture for all the genus one knots with up to twelve crossings (Theorem \ref{12crossings}) and for several families of pretzel knots (Corollary \ref{pretzels}). \smallskip The paper is organized as follows: In Section \ref{sec:mimimum} we use a result of Gabai \cite{gabai} to prove that a cosmetic crossing change on a knot $K$ can be realized by twisting along an essential arc on a minimal genus Seifert surface of $K$ (Proposition \ref{prop:minimum}). For genus one knots such an arc will be non-separating on the surface. In subsequent sections this will be our starting point for establishing connections between cosmetic crossings and knot invariants determined by Seifert matrices. Sections \ref{sec:aslice} and \ref{sec:doublecover} are devoted to the proof of Theorem \ref{general}. The proof of this theorem shows that the $S$-equivalence class of the Seifert matrix for a genus one knot provides more refined obstructions to cosmetic crossings: \begin{corollary}\label{sequivalent} Let $K$ be a genus one knot. If $K$ admits a cosmetic crossing, then $K$ has a Seifert matrix $V$ of the form $\begin{pmatrix}a & b \\ b+1 & 0\end{pmatrix}$ which is $S$--equivalent to $\begin{pmatrix}a +\epsilon & b \\ b+1 & 0\end{pmatrix}$ for some $\epsilon\in \{-1,1\}$. \end{corollary} In Section \ref{Section:S_equivalence} we study the question of whether the $S$--equivalence class of the Seifert matrix of a genus one knot contains enough information to resolve the nugatory crossing conjecture (Question \ref{qu:sequivalent}). Using Corollary \ref{sequivalent} we prove Theorem \ref{Thm:unique_minimal_genus}, which implies the nugatory crossing conjecture for genus one knots with non--trivial Alexander polynomial that have a \emph{unique} minimal genus Seifert surface. We also construct examples showing that Corollary \ref{sequivalent} is not enough to prove the nugatory crossing conjecture for \emph{all} genus one knots with non--trivial Alexander polynomial (Proposition \ref{prop:sequivalent}). In Sections \ref{sec:lowcrossings} and \ref{sec:examples} we provide examples of knots for which Theorems \ref{general} and \ref{Thm:unique_minimal_genus} settle the nugatory crossing question. In Section \ref{sec:lowcrossings} we combine Theorem \ref{general} and Corollary \ref{sequivalent} with a result of Trotter to settle the conjecture for all the 23 genus one knots with up to 12 crossings. The examples we discuss in Section \ref{sec:examples} are twisted Whitehead doubles and pretzel knots. \vskip 0.04in Throughout the paper we will discuss oriented knots in an oriented $S^3$ and we work in the smooth category. \subsection*{Acknowledgement} CB and EK thank Matt Hedden and Matt Rathbun for helpful discussions. Part of this work was completed while MP was a visitor at WWU M\"{u}nster, which he thanks for its hospitality. \smallskip \section{Crossing changes and arcs on surfaces} \label{sec:mimimum} In this section we use a result of Gabai \cite{gabai} to prove that a cosmetic crossing change on a knot $K$ can be realized by twisting along an essential arc on a minimal genus Seifert surface of $K$ (Proposition \ref{prop:minimum}). For genus one knots such an arc will be non-separating on the surface. In the next sections this will be our starting point for establishing connections between cosmetic crossings and knot invariants determined by Seifert matrices. 
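Since the invariants just mentioned are all computed from a Seifert matrix, we include here a small computational sketch (not part of the original argument) of the two quantities used repeatedly below: the Alexander polynomial $\Delta_K(t)\doteq\det(V-tV^T)$ and the knot determinant $\det(K)=\abs{\Delta_K(-1)}$. The example matrix is the Seifert matrix of the pretzel knot $P(3,3,-3)$ quoted in the proof of Theorem \ref{12crossings} below.
\begin{verbatim}
# Sketch: invariants determined by a Seifert matrix V, as used below.
# Alexander polynomial Delta_K(t) = det(V - t V^T) (up to units) and
# knot determinant det(K) = |Delta_K(-1)|.
import sympy as sp

t = sp.symbols('t')

def alexander_poly(V):
    V = sp.Matrix(V)
    return sp.expand((V - t * V.T).det())

def knot_determinant(V):
    return abs(alexander_poly(V).subs(t, -1))

# Seifert matrix of the pretzel knot P(3,3,-3), quoted later in the paper:
V = [[3, 2], [1, 0]]
print(alexander_poly(V))    # -2*t**2 + 5*t - 2, i.e. 2t^2 - 5t + 2 up to units
print(knot_determinant(V))  # 9
\end{verbatim}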
\vskip 0.1in Let $K$ be an oriented knot in $S^3$ and $C$ be a crossing of sign $\epsilon$, where $\epsilon=1$ or $-1$ according to whether $C$ is a positive or negative crossing (see Figure \ref{cr_signs}). A \emph{crossing disc} of $K$ corresponding to $C$ is an embedded disc $D\subset S^3$ such that $K$ intersects ${\rm int}(D)$ twice, once for each branch of $C$, with zero algebraic intersection number. The boundary $L = \partial D$ is called a \emph{crossing circle}. Performing $(-\epsilon)$-surgery on $L$ changes $K$ to another knot $K^{'}\subset S^3$ that is obtained from $K$ by changing the crossing $C$. \begin{define} \label{nugat} A crossing supported on a crossing circle $L$ of an oriented knot $K$ is called \emph{nugatory} if $L = \partial D$ also bounds an embedded disc in the complement of $K$. This disc and $D$ form an embedded 2-sphere that decomposes $K$ into a connected sum where some of the summands may be trivial. A non-nugatory crossing on a knot $K$ is called \emph{cosmetic} if the oriented knot $K'$ obtained from $K$ by changing $C$ is isotopic to $K$; that is, there exists an orientation-preserving diffeomorphism $f: S^3\longrightarrow S^3$ with $f(K)=K'$. \qed\end{define} \vskip .1in \begin{figure} \includegraphics[width=1.3in]{crossings.eps} \caption{Left: a positive crossing. Right: a negative crossing.}\label{cr_signs} \end{figure} For a link $J$ in $S^3$ we will use $\eta(J)$ to denote a regular neighborhood of $J$ in $S^3$ and we will use $M_J \colonequals \overline {S^3\setminus \eta(J)}$ to denote the closure of the complement of $\eta(J)$ in $S^3$. \begin{lemma} \label{lem:irreducible} Let $K$ be an oriented knot and $L$ a crossing circle supporting a crossing $C$ of $K$. Suppose that $M_{K \cup L}$ is reducible. Then $C$ is nugatory. \end{lemma} \begin{proof} An essential 2-sphere in $M_{K \cup L}$ must separate $\eta(K)$ and $\eta(L)$. Thus in $S^3$, $L$ lies in a 3-ball disjoint from $K$. Since $L$ is unknotted, it bounds a disc in the complement of $K$. \end{proof} Let $K$ be an oriented knot and $L = \partial D$ a crossing circle supporting a crossing $C$. Let $K'$ denote the knot obtained from $K$ by changing $C$. Since the linking number of $L$ and $K$ is zero, $K$ bounds a Seifert surface in the complement of $L$. Let $S$ be a Seifert surface that is of minimal genus among all such Seifert surfaces in the complement of $L$. Since $S$ is incompressible, after an isotopy we can arrange that the closed components of $S\cap D$ are homotopically essential in $D\setminus K$. But then each such component is parallel to $\partial D$ on $D$, and by further modification we can arrange that $S\cap D$ is a single arc $\alpha$ that is properly embedded on $S$ as illustrated in Figure \ref{alpha}. The surface $S$ gives rise to Seifert surfaces $S$ and $S'$ of $K$ and $K'$, respectively. \begin{figure} \input{crossingarc.eps_tex} \caption{The crossing arc $\alpha = S \cap D$.}\label{alpha} \end{figure} \begin{prop} \label{prop:minimum} Suppose that $K$ is isotopic to $K'$. Then $S$ and $S'$ are Seifert surfaces of minimal genus for $K$ and $K'$, respectively. \end{prop} \begin{proof} If the crossing is nugatory then $L$ bounds a disc in the complement of $S$ and the conclusion is clear. Suppose the crossing is cosmetic; by Lemma \ref{lem:irreducible}, $M_{K \cup L}$ is irreducible. We can consider the surface $S$ properly embedded in $M_{K \cup L}$ so that it is disjoint from $\partial \eta(L) \subset \partial M_{K \cup L}$. 
The assumptions on irreducibility of $M_{K \cup L}$ and on the genus of $S$ imply that the foliation machinery of Gabai \cite{gabai} applies. In particular, $S$ is taut in the Thurston norm of $M_{K\cup L}$. The manifolds $M_K$ and $M_{K'}$ are obtained by Dehn filling of $M_{K \cup L}$ along $\partial \eta(L)$. By \cite[Corollary 2.4] {gabai}, $S$ can fail to remain taut in the Thurston norm (i.e. genus minimizing) in at most one of $M_K$ and $M_{K'}$. Since we have assumed that $C$ is a cosmetic crossing, $M_K$ and $M_{K'}$ are homeomorphic (by an orientation-preserving homeomorphism). Thus $S$ remains taut in both of $M_K$ and $M_{K'}$. This implies that $S$ and $S'$ are Seifert surfaces of minimal genus for $K$ and $K'$, respectively. \end{proof} By Proposition \ref{prop:minimum}, a crossing change of a knot $K$ that produces an isotopic knot corresponds to a properly embedded arc $\alpha$ on a minimal genus Seifert surface $S$ of $K$. We observe the following. \begin{lemma} \label{lem:essential} If $\alpha$ is inessential on $S$, then the crossing is nugatory. \end{lemma} \begin{proof} Recall that $\alpha$ is the intersection of a crossing disc $D$ with $S$. Since $\alpha$ is inessential, it separates $S$ into two pieces, one of which is a disc $E$. Consider $D$ as properly embedded in a regular neighborhood $\eta(S)$ of the surface $S$. The boundary of a regular neighborhood of $E$ in $\eta (S)$ is a 2-sphere that contains the crossing disc $D$. The complement of the interior of $D$ in that 2-sphere gives a disc bounded by the crossing circle $L=\partial D$ with its interior disjoint from the knot $K=\partial S$. \end{proof} \section{Obstructing cosmetic crossings in genus one knots} \label{sec:aslice} A knot $K$ is called \emph{algebraically slice} if it admits a Seifert surface $S$ such that the Seifert form $\theta: H_1(S)\times H_1(S)\longrightarrow {\mathbb{Z}}$ vanishes on a half-dimensional summand of $H_1(S)$; such a summand is called a \emph{metabolizer} of $H_1(S)$. If $S$ has genus one, then the existence of a metabolizer for $H_1(S)$ is equivalent to the existence of an essential oriented simple closed curve on $S$ that has zero self-linking number. If $K$ is algebraically slice, then the Alexander polynomial $\Delta_K(t)$ is of the form $\Delta_K(t) \doteq f(t) f(t^{-1})$, where $f(t)\in {\mathbb{Z}}[t]$ is a linear polynomial with integer coefficients and $\doteq$ denotes equality up to multiplication by a unit in the ring of Laurent polynomials ${{\mathbb{Z}}}[t, t^{-1}]$. For more details on these and other classical knot theory concepts we will use in this and the next section, the reader is referred to \cite{burde-zieschang:knots} or \cite{lickorish:book}. \begin{theorem}\label{aslice} Let $K$ be an oriented genus one knot. If $K$ admits a cosmetic crossing, then it is algebraically slice. In particular, there is a linear polynomial $f(t)\in {\mathbb{Z}}[t]$ such that $\Delta_K(t)\doteq f(t) f(t^{-1})$. \end{theorem} \begin{proof} Let $K'$ be a knot that is obtained from $K$ by a cosmetic crossing change $C$. By Proposition \ref{prop:minimum}, there is a genus one Seifert surface $S$ such that a crossing disc supporting $C$ intersects $S$ in a properly embedded arc $\alpha \subset S$. Let $S'$ denote the result of $S$ after the crossing change. 
\begin{figure} \input{figure1.eps_tex} \caption{A genus one surface $S$ with generators $a_1$ and $a_2$ of $H_1 (S)$ and a non-separating arc $\alpha$.}\label{fig1} \end{figure} Since $C$ is a cosmetic crossing, by Lemma \ref{lem:essential}, $\alpha$ is essential. Further, since the genus of $S$ is one, $\alpha$ is non-separating. We can find a simple closed curve $a_1$ on $S$ that intersects $\alpha$ exactly once. Let $a_2$ be another simple closed curve so that $a_1$ and $a_2$ intersect exactly once and the homology classes of $a_1$ and $a_2$ form a symplectic basis for $H_1 (S) \cong {{\mathbb{Z}}} \oplus {{\mathbb{Z}}}$. Note that $\{ a_1, a_2 \}$ form a corresponding basis of $H_1(S')$. See Figure \ref{fig1}. The Seifert matrices of $S$ and $S'$ with respect to these bases are $$V = \begin{pmatrix}a & b \\ c & d\end{pmatrix} \ \ \ \rm{and} \ \ \ V'= \begin{pmatrix}a-\epsilon & b \\ c & d\end{pmatrix}$$ respectively, where $a,b,c, d\in {{\mathbb{Z}}}$ and $\epsilon=1$ or $-1$ according to whether $C$ is a positive or a negative crossing. The Alexander polynomials of $K$, $K'$ are given by $$\Delta_K(t)\doteq \det( V-tV^T)=ad(1-t)^2-(b-ct)(c-tb),$$ $$\Delta_{K'}(t)\doteq(a-\epsilon)d(1-t)^2-(b-ct)(c-tb).$$ Since $K$ and $K'$ are isotopic we must have $\Delta_K(t)\doteq\Delta_{K'}(t)$ which easily implies that $d={\rm lk}(a_2, \ a_2)=0$. Hence $K$ is algebraically slice and $$\Delta_K(t)\doteq(b-ct)(c-tb)=(-t)(b-ct)(b-ct^{-1})\doteq (b-ct)(b-ct^{-1}).$$ Setting $f(t)=b-ct$ we obtain $\Delta_K(t)\doteq f(t) f(t^{-1})$ as desired. Note that since $\abs{b-c}$ is the intersection number between $a_1$ and $a_2$, by suitable orientation choices, we may assume that $c=b+1$. \end{proof} Recall that the determinant of a knot $K$ is defined by $\det(K)=\abs{\Delta_K(-1)}$. As a corollary of Theorem \ref{aslice} we have the following. \begin{corollary}\label{determinant} Let $K$ be a genus one knot. If $\det(K)$ is not a perfect square then $K$ admits no cosmetic crossings. \end{corollary} \begin{proof} Suppose that $K$ admits a cosmetic crossing. By Theorem \ref{aslice} $\Delta_K(t)\doteq f(t) f(t^{-1})$, where $f(t)\in {\mathbb{Z}}[t]$ is a linear polynomial. Thus, if $K$ admits cosmetic crossings we have $\det(K)=\abs{\Delta_K(-1)}=[f(-1)]^{2}$. \end{proof} \section{Further obstructions: homology of double covers} \label{sec:doublecover} In this section we derive further obstructions to cosmetic crossings in terms of the homology of the double branched cover of the knot. More specifically, we will prove the following. \begin{theorem}\label{doublecover} Let $K$ be an oriented genus one knot and let $Y_K$ denote the double cover of $S^3$ branching over $K$. If $K$ admits a cosmetic crossing, then the homology group $H_1(Y_K)$ is a finite cyclic group. \end{theorem} To prove Theorem \ref{doublecover} we need the following elementary lemma. (Here, given $m\in {{\mathbb{Z}}}$ we denote by ${{\mathbb{Z}}}_m={{\mathbb{Z}}}/{m{{\mathbb{Z}}}}$ the cyclic abelian group of order $\abs{m}$.) \begin{lemma} \label{abelian} If $H$ denotes the abelian group given by the presentation \begin{equation*} H\cong \left< \begin{array}{l|l} c_1, c_2 & 2x c_1+ (2y+1)c_2= 0\\ & (2y+1)c_1= 0 \end{array} \right> , \end{equation*} then we have \begin{enumerate} \item $H\cong 0$, if $y=0$ or $y=-1$. \item $H \cong \mathbb{Z}_{d} \oplus \mathbb{Z}_{\frac{{(2y+1)}^2}{d}}$, if $y \neq 0,\ -1$ and {\rm gcd}$(2x,\ 2y+1) = d$ where $1 \leq d \leq 2y+1$. 
\end{enumerate} \end{lemma} \begin{proof} If $y=0$ or $y=-1$, clearly we have $H\cong 0$. Suppose now that $y \neq 0,\ -1$ and {\rm gcd}$(2x,\ 2y+1) = d$ where $1 \leq d \leq 2y+1$. Then there are integers $A$ and $B$ such that $2x = dA,\ 2y+1 = dB$, and $\textrm{gcd}(A,B) = 1$. Let $\alpha$ and $\beta$ be such that $\alpha A + \beta B = 1$. Since $ \begin{pmatrix}dA & dB \\ dB & 0\end{pmatrix}$ is a presentation matrix of $H$ and $\begin{pmatrix}\alpha & \beta \\ -B & A\end{pmatrix}$ is invertible over $\mathbb{Z}$, we get that $\begin{pmatrix}\alpha & \beta \\ -B & A\end{pmatrix} \begin{pmatrix}dA & dB \\ dB & 0\end{pmatrix} = \begin{pmatrix}d & d \alpha B \\ 0 & -dB^2\end{pmatrix}$ is also a presentation matrix for $H$. So \begin{equation*} H\cong \left< \begin{array}{l|l} c_1,c_2 & dc_1 + d \alpha B c_2 = 0\\ & dB^2 c_2 = 0 \end{array} \right> \end{equation*} Now letting $c_3 = c_1+ \alpha Bc_2$, we have \begin{equation*} H \cong \left< \begin{array}{l|l} c_2, c_3 & dc_3 = 0\\ & dB^2 c_2= 0 \end{array} \right> \end{equation*} Hence $H \cong \mathbb{Z}_{d} \oplus \mathbb{Z}_{dB^2}=\mathbb{Z}_{d} \oplus \mathbb{Z}_{\frac{{(2y+1)}^2}{d}}$. \end{proof} \vskip 0.05in \begin{proof} [Proof of Theorem \ref{doublecover}] Suppose that a genus one knot $K$ admits a cosmetic crossing $C$ yielding an isotopic knot $K'$. The proof of Theorem \ref{aslice} shows that $K$ and $K'$ admit Seifert matrices of the form \begin{equation}\label{Eqn:matrix1}V= \left(\begin{array}{cc} a & b \\ b+1 & 0 \end{array}\right)\mbox{and} \ V'=\left(\begin{array}{cc} a+\epsilon & b \\ b+1 & 0 \end{array}\right)\end{equation} respectively, where $a,b \in {{\mathbb{Z}}}$ and $\epsilon=1$ or $-1$ according to whether $C$ is a negative or a positive crossing. In particular we have \begin{equation}\label{Eqn:matrix2}\Delta_K(t)\doteq \Delta_{K'}(t)\doteq b(b+1)(t^2+1)-(b^2+(b+1)^2).\end{equation} Presentation matrices for $H_1(Y_K)$ and $H_1(Y_{K'})$ are given by \begin{equation}\label{Eqn:matrix3}V+V^T = \begin{pmatrix}2a & 2b+1\\ 2b+1 & 0\end{pmatrix} \mbox{and} \ V'+ (V')^{T}= \begin{pmatrix} 2a+2\epsilon &2b+1 \\ 2b+1 & 0\end{pmatrix},\end{equation} respectively. It follows that Lemma \ref{abelian} applies to both $H_1(Y_K)$ and $H_1(Y_{K'})$. By that lemma, $H_1(Y_K)$ is either cyclic or $H_1(Y_K)\cong\mathbb{Z}_{d} \oplus \mathbb{Z}_{\frac{{(2b+1)}^2}{d}}$, with $b \neq 0,\ -1$ and $ {\rm gcd}(2a,\ 2b+1) = d$ where $1 < d \leq 2b+1$. Similarly, $H_1(Y_{K'})$ is either cyclic or $H_1(Y_{K'})\cong\mathbb{Z}_{d'} \oplus \mathbb{Z}_{\frac{{(2b+1)}^2}{d'}}$, with $ {\rm gcd}(2a+2\epsilon,\ 2b+1) = d'$ where $1 < d' \leq 2b+1$. Since $K$ and $K'$ are isotopic, we have $H_1(Y_K)\cong H_1(Y_{K'})$. One can easily verify that this can only happen when $\textrm{gcd}(2a,\ 2b+1) = \textrm{gcd}(2a+2\epsilon,\ 2b+1) = 1$, in which case $H_1 (Y_K)$ is cyclic. \end{proof} It is known that for an algebraically slice knot of genus one every minimal genus surface $S$ contains a metabolizer (compare \cite[Theorem 4.2]{livingston}). After completing the metabolizer to a basis of $H_1(S)$ we have a Seifert matrix $V$ as in (\ref{Eqn:matrix1}) above. \begin{corollary} \label{matrix} Let $K$ be an oriented, algebraically slice knot of genus one. Suppose that a genus one Seifert surface of $K$ contains a metabolizer leading to a Seifert matrix $V$ as in (\ref{Eqn:matrix1}) so that $b\neq 0,-1$ and $\emph{gcd}(2a,\ 2b+1) \neq 1$. Then $K$ cannot admit a cosmetic crossing. \end{corollary} \begin{proof} Let $d=\textrm{gcd}(2a,\ 2b+1) $. 
As in the proof of Theorem \ref{doublecover}, we use Lemma \ref{abelian} to conclude that $H_1(Y_K)\cong\mathbb{Z}_{d} \oplus \mathbb{Z}_{\frac{{(2b+1)}^2}{d}}$ and hence is non-cyclic unless $d=1$. Now the conclusion follows by Theorem \ref{doublecover}. \end{proof} Theorems \ref{aslice} and \ref{doublecover} immediately yield Theorem \ref{general} stated in the introduction. \section{$S$--equivalence of Seifert matrices}\label{Section:S_equivalence} We begin by recalling the notion of $S$-equivalence. \begin{define} \label{defi:sequivalent} We say that an integral square matrix $V$ is a \emph{Seifert matrix} if $\det(V-V^T)=1$. We say that two Seifert matrices are \emph{$S$--equivalent} if they are related by a finite sequence of the following moves or their inverses: \begin{enumerate} \item replacing $V$ by $PVP^T$, where $P$ is an integral unimodular matrix, \item column expansion, where we replace an $n \times n$ Seifert matrix $V$ with an $(n+2) \times (n+2)$ matrix of the form: \[\left(\begin{array}{ccc|cc} & & & 0 & 0\\ & V & & \vdots & \vdots\\ & & & 0 & 0\\ \hline u_1 & \cdots & u_n & 0 & 0\\ 0 & \cdots & 0 & 1 & 0 \end{array}\right),\] where $u_1,\dots,u_n \in {\mathbb{Z}}$, \item a row expansion, which is defined analogously to the column expansion, with the r\^{o}les of rows and columns reversed. \end{enumerate} \qed\end{define} Note that $W$ is a row expansion of $V$ if and only if $W^T$ is a column expansion of $V^T$. In the following, given two Seifert matrices $V$ and $W$ we write $V\sim W$ if they are $S$--equivalent, and we write $V \approx W$ if they are congruent. The proof of Theorem \ref{aslice} immediately gives Corollary \ref{sequivalent} stated in the introduction. This in turn leads to the following question. \begin{question}\label{qu:sequivalent} Let $a, b$ and $d$ be integers and $\epsilon\in \{-1,1\}$. Are the matrices \[ \begin{pmatrix} a&b \\ b+1&d\end{pmatrix}\mbox{ and } \begin{pmatrix} a+\epsilon&b \\ b+1&d\end{pmatrix} \] $S$--equivalent? \end{question} Now we focus on Question \ref{qu:sequivalent}. A first trivial observation is that if $d=0$ and $b=0$, then the two given matrices are congruent and, in particular, $S$--equivalent. We therefore restrict ourselves to matrices with non--zero determinant, or equivalently, to knots of genus one such that the Alexander polynomial $\Delta_K(t)=\det(V-tV^T)$ is non--trivial. \subsection{Knots with a unique minimal genus Seifert surface} In this subsection we prove an auxiliary algebraic result about congruences of Seifert matrices. As a first application of it we prove the nugatory crossing conjecture for genus one knots with non--trivial Alexander polynomial and with a minimal genus Seifert surface which, up to isotopy, is \emph{unique}. \begin{prop}\label{prop:general_congruences} Suppose that the matrices \[ \begin{pmatrix} a&b \\ b+1&0\end{pmatrix}\mbox{ and } \begin{pmatrix} c&b \\ b+1&0\end{pmatrix}, \] where $a,b,c \in {\mathbb{Z}}$, are congruent over ${\mathbb{Z}}$. Then there is an integer $n$ such that $a + n(2b+1) = c$. \end{prop} Before we prove Proposition \ref{prop:general_congruences} we explain one of its consequences. If $K$ is a knot with, up to isotopy, a unique minimal genus Seifert surface, then the Seifert matrix corresponding to that surface only depends on the choice of basis for the first homology. Put differently, the integral congruence class of the Seifert matrix corresponding to the unique minimal genus Seifert surface is an invariant of the knot $K$. 
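As a quick computational sanity check of Proposition \ref{prop:general_congruences} (a sketch only, not part of the argument), one can enumerate unimodular matrices $P$ with small entries and record which top-left entries $c$ arise from congruences $PVP^T$ that preserve the rest of the matrix; every value found should differ from $a$ by a multiple of $2b+1$. The entry bound and the sample values of $a$ and $b$ below are arbitrary.
\begin{verbatim}
# Brute-force check of Proposition prop:general_congruences on small entries:
# if P V P^T = [[c, b], [b+1, 0]] with P integral and unimodular, then
# c - a should be a multiple of 2b+1.  Entry bound and (a, b) are arbitrary.
import itertools

def congruent_values(a, b, bound=6):
    hits = set()
    for x, y, z, t in itertools.product(range(-bound, bound + 1), repeat=4):
        if x * t - y * z not in (1, -1):   # P = [[x, y], [z, t]] unimodular
            continue
        # entries of P V P^T for V = [[a, b], [b+1, 0]]
        c = x * x * a + x * y * (2 * b + 1)
        top_right = x * z * a + y * z * (b + 1) + x * t * b
        bottom_left = x * z * a + x * t * (b + 1) + z * y * b
        bottom_right = z * z * a + z * t * (2 * b + 1)
        if top_right == b and bottom_left == b + 1 and bottom_right == 0:
            hits.add(c)
    return sorted(hits)

a, b = 2, 3
values = congruent_values(a, b)
assert all((c - a) % (2 * b + 1) == 0 for c in values)  # as the proposition predicts
print(values)
\end{verbatim}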
Assuming Proposition \ref{prop:general_congruences}, we have the following theorem. \begin{theorem}\label{Thm:unique_minimal_genus} Let $K$ be an oriented genus one knot with a unique minimal genus Seifert surface, which admits a cosmetic crossing. Then $\Delta_K(t) \doteq 1$. \end{theorem} \begin{proof} Let $K$ be a genus one knot with a unique minimal genus Seifert surface, which admits a cosmetic crossing. It follows from Corollary \ref{sequivalent} that $K$ admits a Seifert matrix $\begin{pmatrix}a & b \\ b+1 & 0\end{pmatrix}$ which is $S$--equivalent to $\begin{pmatrix}a +\epsilon & b \\ b+1 & 0\end{pmatrix}$ for some $\epsilon\in \{-1,1\}$; by the discussion preceding the statement of this theorem, these two matrices must in fact be congruent over ${\mathbb{Z}}$. For $b\neq 0,-1$, Proposition \ref{prop:general_congruences} precludes such a congruence. If $b=0$ or $b=-1$, then the Alexander polynomial is $1$. \end{proof} We now proceed with the proof of Proposition \ref{prop:general_congruences}. \begin{proof}[Proof of Proposition \ref{prop:general_congruences}] To begin, we suppose that an integral unimodular congruence exists as hypothesized. That is, suppose that there exist integers $x,y,z,t$ such that \[\left(\begin{array}{cc} x & y \\ z & t \end{array} \right)\left(\begin{array}{cc} a & b \\ b+1 & 0 \end{array} \right)\left(\begin{array}{cc} x & z \\ y & t \end{array} \right) = \left(\begin{array}{cc} c & b \\ b+1 & 0 \end{array} \right).\] The left hand side multiplies out to give \begin{equation}\label{Eqn:matrix4} \left(\begin{array}{cc} x^2a+xy(2b+1) & xza+yz(b+1)+xtb \\ xza + xt(b+1) +zyb & z^2a+zt(2b+1) \end{array}\right).\end{equation} Setting the bottom right entry equal to zero implies that either $z=0$, or (for $a \neq 0$) $z = -t(2b+1)/a$. We require $z$ to be an integer, so it must also be the case that $t$ is such that $a$ divides $t(2b+1)$, but we shall not need this. In the case that $a = 0$ and $z \neq 0$, we then have $t=0$. First, if $z=0$, then (\ref{Eqn:matrix4}) becomes \[\left(\begin{array}{cc}x^2a +(2b+1)xy & xtb \\ xt(b+1) & 0 \end{array}\right).\] We require that $x=t=1$ or $x=t=-1$ for the top right and bottom left entries to be correct. But then setting $n=xy$ proves the proposition in this case. Next, suppose $z\ne 0$ and $a=0$. Then $t=0$, and (\ref{Eqn:matrix4}) becomes \[\left(\begin{array}{cc} (2b+1)xy & yz(b+1) \\ zyb & 0 \end{array}\right).\] The equations $zyb = b+1$ and $zy(b+1) = b$ imply that $b^2 = (b+1)^2$, which has no integral solutions. Now in the general case, i.e. $z\ne 0$ and $a\ne 0$, we substitute $z = -t(2b+1)/a$ into (\ref{Eqn:matrix4}), to yield \[\left(\begin{array}{cc} xk & -t(b+1)k/a \\ -tbk/a & 0 \end{array}\right),\] where $k := ax+y(2b+1)$. Setting this equal to $$\left(\begin{array}{cc} c & b \\ b+1 & 0 \end{array} \right),$$ the equations \[-t(b+1)k/a = b\] and \[-tbk/a = b+1\] imply again that $(b+1)^2 = b^2$. Since this does not have integral solutions, we also rule out this case. The only congruences possible are therefore those claimed, which occur when $z=0$ and $x=t= \pm 1$. This completes the proof of Proposition \ref{prop:general_congruences}. \end{proof} \subsection{Other algebraically slice genus one knots} In this subsection we will show that, in general, the answer to Question \ref{qu:sequivalent} can be affirmative, even for matrices with non--zero determinant. 
This implies that the $S$--equivalence class of the Seifert matrix of a genus one knot with non--trivial Alexander polynomial does not in general contain enough information to resolve the nugatory crossing conjecture. In fact, we will prove the following proposition: \begin{prop}\label{prop:sequivalent} For any $b>4$ such that $b \equiv 0 \text{ or } 2\mod3$, there exists an $a\in {\mathbb{Z}}$ such that $$V = \begin{pmatrix}a & b \\ b+1 & 0\end{pmatrix} \ \ \ \rm{and} \ \ \ V'= \begin{pmatrix}a+1& b \\ b+1 & 0\end{pmatrix}$$are $S$--equivalent. \end{prop} Since any Seifert matrix $V$ can be realized as the Seifert matrix of a knot, it follows that the $S$--equivalence class of Seifert matrices cannot resolve the nugatory crossing conjecture for genus one knots with non--trivial Alexander polynomial. We will need the following elementary lemma to prove Proposition \ref{prop:sequivalent}. \begin{lemma} Let $a,b,k\in {\mathbb{Z}}$; then the matrices \[ \begin{pmatrix} a&b \\ b+1&0\end{pmatrix}, \begin{pmatrix} a+k(2b+1)&b \\ b+1&0\end{pmatrix} \mbox{ and } \begin{pmatrix} ab^2&b \\ b+1&0\end{pmatrix} \] are $S$--equivalent. \end{lemma} \begin{proof} It is obvious that the first two matrices are congruent. It remains to show that the first and the third matrix are $S$--equivalent. This follows immediately from the following sequence of $S$--equivalences: \[ \begin{array}{rll} & \begin{pmatrix} a&b \\ b+1&0\end{pmatrix} \Rightarrow \begin{pmatrix} a&b& 1&0 \\ b+1&0&0&0 \\ 0&0&0&1 \\ 0&0&0&0 \end{pmatrix} \Rightarrow \begin{pmatrix} a&0&1&0 \\ b+1&0&0&-b \\ 0&0&0&1 \\ 0&0&0&0 \end{pmatrix} \\ & \\ \Rightarrow& \begin{pmatrix} a&0&1 &0 \\ 0&0&0&0 \\ 0&1&0&0 \\ b+1&-b&0&0 \end{pmatrix} \Rightarrow \begin{pmatrix} a&0&1&0 \\ 0&0&0&0 \\ 1&1&0&0 \\ 1&-b&0&0 \end{pmatrix} \Rightarrow \begin{pmatrix} a&ab&1&0 \\ ab&ab^2&b&0 \\ 0&1+b&0&0 \\ 1&0&0&0 \end{pmatrix} \\ & \\ \Rightarrow& \begin{pmatrix} 0&1+b&0&0 \\ b&ab^2&ba&0 \\ 1&ab&a&0 \\ 0&0&1&0\end{pmatrix} \Rightarrow \begin{pmatrix} 0&1+b \\ b&ab^2 \end{pmatrix} \Rightarrow \begin{pmatrix} ab^2 & b\\ b+1&0\end{pmatrix}.\end{array} \] \end{proof} Using this lemma we can now prove the proposition: \begin{proof}[Proof of Proposition \ref{prop:sequivalent}] Let $b>4$ be such that $b \equiv 0 \text{ or } 2\mod3$. It is then straightforward to see that $1+b$ is coprime to $2(b+1)-1=2b+1$ and that $b-1$ is coprime to $2(b-1)+3$. In particular $1-b^2=(1-b)(1+b)$ is coprime to $2b+1$. We can therefore find an $a\in {\mathbb{Z}}$ such that \[ a(1-b^2)\equiv -1\mod (2b+1),\] by the Chinese Remainder Theorem. Put differently, we can find a $k\in {\mathbb{Z}}$ such that \[ a+1=ab^2+k(2b+1).\] It follows from the above lemma that \[ \begin{pmatrix} a&b \\ b+1&0\end{pmatrix}\mbox{ and } \begin{pmatrix} a+1&b \\ b+1&0\end{pmatrix} \] are $S$--equivalent. \end{proof} \section{Low crossing knots} \label{sec:lowcrossings} In this section we combine Theorem \ref{general} and Corollary \ref{sequivalent} with the following result of Trotter \cite{trotter} to prove the nugatory crossing conjecture for all genus one knots with up to 12 crossings. \begin{theorem} \cite[Corollary~4.7]{trotter}\label{thm:trotter_congruence} Let $V$ be a Seifert matrix with $|\det(V)|$ a prime or $1$. Then any matrix which is $S$-equivalent to $V$ is congruent to $V$ over ${\mathbb{Z}}$. \end{theorem} \begin{theorem}\label{12crossings}Let $K$ be a genus one knot that has a diagram with at most 12 crossings. Then $K$ admits no cosmetic crossings. 
\end{theorem} \begin{proof} Table 1, obtained from KnotInfo \cite{knotinfo}, gives the 23 knots of genus one with at most 12 crossings, together with the values of their determinants. We observe that there are four knots with square determinant. These are $6_1$, $9_{46}$, $10_3$ and $11\textrm{n}_{139}$, which are all known to be algebraically slice. Thus Corollary \ref{determinant} excludes cosmetic crossings for all but these four knots. Now $6_1$ and $10_3$ are 2-bridge knots; by \cite{torisu} they do not admit cosmetic crossings. The knot $K=9_{46}$ is isotopic to the pretzel knot $P(3,3,-3)$ of Figure \ref{pretzelknots}, which has Seifert matrix $ \begin{pmatrix}3 & 2\\ 1 & 0\end{pmatrix}$ since the pretzel knot $P(p,q,r)$ has a Seifert matrix given by $\frac{1}{2}\begin{pmatrix}{p+q} & {q+1} \\ {q-1} & q+r\end{pmatrix}$; see \cite[Example 6.9]{lickorish:book}. The homology $H_1(Y_K)$ is represented by $ \begin{pmatrix}6 & 3\\ 3 & 0\end{pmatrix}$ (compare Corollary \ref{pretzels} below). Thus by Lemma \ref{abelian}, $H_1(Y_K)\cong {{\mathbb{Z}}}_3\oplus{{\mathbb{Z}}}_3$, and by Theorem \ref{doublecover}, $K$ cannot have cosmetic crossings. \begin{table}[ht] \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline $K$& $\det (K)$ & $K$ & $\det (K)$& $K$ & $\det (K)$ \\ \hline $3_1$ &3& $9_2$ &15& ${11\textrm{a}}_{362}$ & 39 \\ \hline $4_1$ & 5& $9_5$ & 23&${11\textrm{a}}_{363}$&35 \\ \hline $5_2$ &7&$9_{35}$ &27&${\bf {11\textrm{\bf n}}_{139}}$ &{\bf 9}\\ \hline ${\bf 6_1}$ &{\bf 9}&${ \bf 9_{46}}$ &{\bf 9}&${11\textrm{n}}_{141}$& 21\\ \hline $7_2 $& 11&$10_1$&17&${12\textrm{a}}_{803}$ & 21\\ \hline $7_4$&15&${\bf 10_3}$ & {\bf 25}&${12\textrm{a}}_{1287}$ & 37\\ \hline $8_1$ &13&${11\textrm{a}}_{247}$ &19&${12\textrm{a}}_{1166}$& 33\\ \hline $8_3$&17&${11\textrm{a}}_{343}$ & 31&-&- \\ \hline \end{tabular} \end{center} \vskip.13in \caption{Genus one knots with at most 12 crossings.} \label{twelve} \end{table} The only remaining knot from Table 1 is the knot $K=11\textrm{n}_{139}$. This knot is isotopic to the pretzel knot $P(-5,3,-3)$. There is therefore a genus one surface for which a Seifert matrix is \[V = \left(\begin{array}{cc} -1 & 2 \\ 1 & 0 \end{array} \right),\] again by \cite[Example 6.9]{lickorish:book}. Using this Seifert matrix we calculate $H_1(Y_K)\cong {{\mathbb{Z}}}_9$. Thus Theorem \ref{general} does not apply to the knot $11\textrm{n}_{139}$. Next we turn to Corollary \ref{sequivalent}. Since $|\det(V)| = 2$ is prime, by Theorem \ref{thm:trotter_congruence} it suffices to show that $V$ is neither integrally congruent to \[\left(\begin{array}{cc} 0 & 2 \\ 1 & 0 \end{array} \right)\mbox{ nor to } \left(\begin{array}{cc} -2 & 2 \\ 1 & 0 \end{array} \right).\] But this follows from Proposition \ref{prop:general_congruences}, with $a=-1$ and $b=1$, noting that two matrices are congruent if and only if their transposes are. \end{proof} \begin{remark} The method applied for $11\textrm{n}_{139}$ in the proof of Theorem \ref{12crossings} can also be used to show that the knots $6_1$ and $9_{46}$ do not admit cosmetic crossings. \end{remark} \section{More examples} \label{sec:examples} In this section we discuss some families of examples for which Theorems \ref{general} and \ref{Thm:unique_minimal_genus} imply the nugatory crossing conjecture. \subsection{Twisted Whitehead doubles} Given a knot $K$ let $D_{+}(K, n)$ denote the $n$-twisted Whitehead double of $K$ with a positive clasp and let $D_{-}(K, n)$ denote the $n$-twisted Whitehead double of $K$ with a negative clasp. 
\begin{corollary}\label{whitehead}\ (a)\ Given a knot $K$, the Whitehead double $D_{+}(K, n)$ admits no cosmetic crossing if either $n<0$ or $\abs{n}$ is odd. Similarly $D_{-}(K, n)$ admits no cosmetic crossing if either $n>0$ or $\abs{n}$ is odd. \vskip 0.03in (b)\ If $K$ is not a cable knot then $D_{\pm}(K, n)$ admits no cosmetic crossings for every $n\neq 0$. \end{corollary} \begin{proof} (a)\ A Seifert surface of $D_{+}(K, n)$ obtained by plumbing an $n$-twisted annulus with core $K$ and a Hopf band gives rise to a Seifert matrix $V_n = \begin{pmatrix}-1& 0 \\ -1 & n\end{pmatrix}$ \cite[Example 6.8]{lickorish:book}. Thus the Alexander polynomial is of the form \begin{equation}\label{Eqn:matrix5}\Delta_{D_{+}(K, n)}(t)\doteq -n(t^2+1)+(1+2n)t=:\Delta_n.\end{equation} Suppose now that $D_{+}(K, n)$ admits a cosmetic crossing. Then $\Delta_n$ should be of the form shown in equation (\ref{Eqn:matrix2}). Comparing the leading coefficients in the expressions (\ref{Eqn:matrix2}) and (\ref{Eqn:matrix5}), we obtain $\abs{n}=\abs{b(b+1)}$, which implies that $\abs{n}$ should be even. We have shown that if $\abs{n}$ is odd then $D_{+}(K, n)$ admits no cosmetic crossing changes. Suppose now that $n<0$. Since the Seifert matrix $V_n$ depends only on $n$ and not on $K$, the knot $D_{+}(K, n)$ is $S$-equivalent to the $n$-twisted, positive-clasped double of the unknot. This is a positive knot (all the crossings in the standard diagram of $D_{+}(O, n)$ are positive) and it has non-zero signature \cite{positive}. Hence $D_{+}(K, n)$ is not algebraically slice, and by Theorem \ref{aslice} it cannot admit cosmetic crossings. A similar argument holds for $D_{-}(K, n)$. \vskip 0.05in (b)\ Suppose that $K$ is not a cable knot. By results of Lyon and Whitten \cite{Lyons, whitten}, for every $n \neq 0$ the Whitehead doubles $D_{\pm}(K, n)$ have unique Seifert surfaces of minimal genus. By (\ref{Eqn:matrix5}), $\Delta_n\neq 1$, and the conclusion follows by Theorem \ref{Thm:unique_minimal_genus}. \end{proof} \begin{figure} \includegraphics[width=1.5in]{dfr1.eps} \caption{The $(-4)$-twisted negative-clasped double of the unknot, $D_{-}(O, -4)$.} \label{fig:twistknot} \end{figure} \begin{figure} \input{pretzels.eps_tex} \caption{$P(p,q,r)$ with $p,q$ and $r$ positive and $ P(3,3,-3)$.} \label{pretzelknots} \end{figure} \subsection{Pretzel knots} Let $K$ be a three-string pretzel knot $P(p,q,r)$ with $p,q$ and $r$ odd (see Figure \ref{pretzelknots}). The knot determinant is given by $\det (K) = |pq+qr+pr|$ and if $K$ is non-trivial then it has genus one. It is known that $K$ is algebraically slice if and only if $pq+qr+pr=-m^2$, for some odd $m\in {{\mathbb{Z}}}$ \cite{levine}. \begin{corollary} \label{pretzels}The knot $P(p,q,r)$ with $p,q$ and $r$ odd does not admit cosmetic crossings if one of the following is true: \vskip 0.05in (a)\ $pq+qr+pr\neq -m^2$, for every odd $m\in {{\mathbb{Z}}}$. \vskip 0.05in (b)\ $q+r=0$ and $\textrm{gcd}(p,\ q)\neq 1$. \vskip 0.05in (c)\ $p+q=0$ and $\textrm{gcd}(q,\ r)\neq 1$. \end{corollary} \begin{proof} In case (a) the result follows from Theorem \ref{aslice} and the discussion above. For case (b) recall that there is a genus one surface for $P(p,q,r)$ for which a Seifert matrix is $V_{(p,q,r)} = \frac{1}{2}\begin{pmatrix}{p+q} & {q+1} \\ {q-1} & q+r\end{pmatrix}$ \cite[Example 6.9]{lickorish:book}. Suppose that $q+r=0$. If $\textrm{gcd}(p,\ q)\neq 1$, then $\textrm{gcd}(p+q,\ q)\neq 1$ and the conclusion in case (b) follows by Corollary \ref{matrix}. 
Case (c) is similar. \end{proof} \bibliographystyle{annotate}